no-problem/9911/hep-th9911005.html
# Entropy of extreme three-dimensional charged black holes

Bin Wang<sup>a,b</sup> (e-mail: binwang@fma.if.usp.br), Elcio Abdalla<sup>a</sup> (e-mail: eabdalla@fma.if.usp.br)

<sup>a</sup> Instituto de Física, Universidade de São Paulo, C.P. 66.318, CEP 05315-970, São Paulo, Brazil

<sup>b</sup> Department of Physics, Shanghai Teachers’ University, P. R. China

## Abstract

It is shown that three-dimensional charged black holes can approach the extreme state at nonzero temperature. Unlike in even-dimensional cases, the entropy of the extreme three-dimensional charged black hole is uniquely described by the Bekenstein-Hawking formula, regardless of the treatment used to prepare the extreme black hole, namely Hawking’s treatment or Zaslavskii’s treatment.

PACS number(s): 04.70.Dy, 04.20.Gz, 04.62.+v.

There has recently been a great deal of interest in the study of extreme black hole (EBH) entropy. The interest was first heated up by the findings that the four-dimensional (4D) Reissner-Nordström (RN) EBH and the non-extreme black hole (NEBH) are different objects due to their drastically different topological properties, and that the RN EBH has zero entropy regardless of its nonzero horizon area . However, using the grand canonical ensemble, Zaslavskii argued that a 4D RN black hole in a finite-size cavity can approach the extreme state as closely as one likes, and that the Bekenstein-Hawking formula is still expected to hold for the RN EBH . The geometrical and topological properties were also claimed to be those of the nonextreme sector . Support for this view is also provided by state-counting calculations of certain extreme and near-extreme black holes in string theory; see for a review. These different results indicate that EBHs play a special and controversial role in black hole thermodynamics and topology. Comparing and [3-5], it seems that the clash comes from two different treatments: one, Hawking’s treatment, starts with the original EBH, while the other, Zaslavskii’s treatment, first takes the boundary limit and then the extreme limit to obtain the EBH from its nonextreme counterpart [3-5]. Recently, by using these two treatments, the geometry and intrinsic thermodynamics have been investigated in detail for a wide class of EBHs, including 4D and two-dimensional (2D) cases [7-10]. It was found that these different treatments lead to two different topological objects, represented by different Euler characteristics, which show drastically different intrinsic thermodynamical properties both classically and quantum-mechanically. Based upon these results it was suggested that there may be two kinds of EBHs in nature: the first kind, suggested by Hawking et al, has the extreme topology and zero entropy, and can only be formed by pair creation in the early universe; the second kind, suggested by Zaslavskii, has the topology of the nonextreme sector and an entropy still described by the Bekenstein-Hawking formula, and can develop from its nonextreme counterpart through a second-order phase transition [11-13]. This speculation has recently been further confirmed in a Hamiltonian framework and in grand canonical as well as canonical ensemble formulations for the RN anti-de Sitter black hole. All these results on EBH entropy are limited to even dimensions. Whether they can be extended to odd dimensions is unclear.
This paper evolves from an attempt to study this problem by using the (2+1)-dimensional (3D) charged black hole as an example. The metric of the 3D charged black hole reads

$$\mathrm{d}s^2=-N^2\,\mathrm{d}t^2+N^{-2}\,\mathrm{d}r^2+r^2\,\mathrm{d}\varphi^2 \qquad (1)$$

where

$$N^2=-M+\frac{r^2}{l^2}-\frac{\epsilon^2}{2}\ln\frac{r}{r_0} \qquad (2)$$

with $-\infty<t<+\infty$, $0<r<\infty$ and $0\le\varphi\le 2\pi$. Here $M$ and $\epsilon$ in the above metric are associated respectively with the mass and the charge of the black hole, $-1/l^2$ is the negative cosmological constant and $r_0$ is a constant. When $r>r_0$, the 3D charged black hole is described by the usual Penrose diagram. The electric potential of the charge is

$$A_0(r)=-\epsilon\ln\frac{r}{r_0}. \qquad (3)$$

This black hole has two, one, or no horizons, depending on whether

$$M-\left(\frac{\epsilon^2}{4}-\frac{\epsilon^2}{4}\ln\frac{\epsilon^2 l^2}{4r_0^2}\right) \qquad (4)$$

is greater than, equal to, or less than zero, respectively.

Now we can directly make use of the approach of to study the black hole thermodynamics in a grand canonical ensemble, where we consider the black hole in a cavity of radius $r_B$. The temperature on the boundary of the cavity is $T_W=T_H/N(r_B)$, where $T_H=k/2\pi$ is the Hawking temperature and $k$ is the surface gravity. For our metric (1), the local temperature has the form

$$T_W=\frac{T_H}{\sqrt{-M+r_B^2/l^2-(\epsilon^2/2)\ln(r_B/r_0)}} \qquad (5)$$

$$T_H=\frac{2r_+/l^2-\epsilon^2/(2r_+)}{4\pi} \qquad (6)$$

When a black hole approaches the extreme state ($M=\frac{\epsilon^2}{4}-\frac{\epsilon^2}{4}\ln\frac{\epsilon^2 l^2}{4r_0^2}$, $\epsilon^2=\frac{4r_+^2}{l^2}$), according to (6) $T_H\to 0$. The simplest expectation would then be that $T_W\to 0$ as well. One might refer to the third law of thermodynamics to argue that the EBH cannot be achieved because the absolute zero of temperature is unattainable. However, it is interesting to point out that although $T_H\to 0$, the square root in (5) tends to zero as well if we take $r_+\to r_B$; thus the extreme state with nonzero local temperature does exist. Indeed, taking $r_+$ and $r_-$ as the event and Cauchy horizons, we have

$$\epsilon^2=\frac{2(r_+^2-r_-^2)}{l^2\ln(r_+/r_-)}, \qquad (7)$$

and we can readily see that although $T_H$ has a simple zero in $r_+-r_-$, the expression under the square root in the denominator of $T_W$ has a double zero, i.e.,

$$-M+\frac{r_B^2}{l^2}-\frac{\epsilon^2}{2}\ln\frac{r_B}{r_0}=\frac{r_B^2-r_+^2}{l^2}\left(1-\frac{r_+^2-r_-^2}{r_B^2-r_+^2}\,\frac{\ln r_B-\ln r_+}{\ln r_+-\ln r_-}\right), \qquad (8)$$

therefore $T_W$ tends to a constant value in the EBH case. Recall that in the grand canonical ensemble only the temperature on the boundary has physical meaning, whereas $T_H$ can always be rescaled without changing observable quantities . Therefore, analogously to the 4D RN case , there exists a well defined extreme state of the 3D charged black hole in the grand canonical ensemble, and no contradiction with the third law arises. Now it is of interest to investigate whether the two different treatments applied in even dimensions lead to similarly different entropy results for the 3D charged EBH.
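A quick numerical illustration of this point (our own sketch, not from the paper; we set $l=r_0=1$ and approach the boundary and extreme limits at a common rate $d$):

```python
import numpy as np

# Toy check of Eqs. (5)-(8): T_H -> 0 while T_W = T_H/N(r_B) stays finite
# when r_- -> r_+ -> r_B together. Units: l = r0 = 1 (our choice).
l, r0, rB = 1.0, 1.0, 1.0

def temperatures(rp, rm):
    eps2 = 2.0 * (rp**2 - rm**2) / (l**2 * np.log(rp / rm))      # Eq. (7)
    M = rp**2 / l**2 - 0.5 * eps2 * np.log(rp / r0)              # N^2(r_+) = 0
    TH = (2.0 * rp / l**2 - eps2 / (2.0 * rp)) / (4.0 * np.pi)   # Eq. (6)
    N2B = -M + rB**2 / l**2 - 0.5 * eps2 * np.log(rB / r0)       # radicand of (5)
    return TH, TH / np.sqrt(N2B)

for d in (1e-2, 1e-3, 1e-4):
    TH, TW = temperatures(rB * (1.0 - d), rB * (1.0 - d)**2)
    print(f"d={d:.0e}:  T_H = {TH:.3e},  T_W = {TW:.4f}")
# T_H shrinks linearly with d, while T_W approaches 1/(4*pi*l) ~ 0.0796.
```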
The action for the Euclidean version of the 3D charged black hole on a 3D manifold $M$ with a boundary is given by

$$I=\frac{1}{2\pi}\int_M d^3x\,\sqrt{g}\left(R+\frac{2}{l^2}+\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right)+\frac{1}{\pi}\int_{\partial M}d^2x\,\sqrt{\gamma}\,(K-K_0) \qquad (9)$$

Here $\gamma$ is the induced metric on the boundary $\partial M$ and $K$ is the extrinsic curvature of the boundary. $K_0$ is a constant, independent of the metric of the 3D spacetime, and we choose it to be zero to normalize the thermodynamic energy in flat spacetime. Introducing Gaussian normal coordinates near every point on the surface of the cavity, the timelike coordinate of this system is the proper time $\tau$ for an observer on the surface, and the coordinates on the surface are $(\tau,\varphi)$. Defining $\vec{N}$ as the unit spacelike vector orthogonal to the surface and $\vec{U}$ as the velocity of a mass element of this surface, the orthogonality condition becomes

$$\vec{N}\cdot\vec{U}=0 \qquad (10)$$

The velocity is $\vec{U}=\dot{t}\,\partial_t+\dot{r}\,\partial_r$, where the overdot denotes differentiation with respect to $\tau$. From Eq. (10) we obtain $\vec{N}=(|g_{tt}|)^{-1}\dot{r}\,\partial_t+|g_{tt}|\,\dot{t}\,\partial_r$. The normalization conditions are $\vec{N}\cdot\vec{N}=1$, $\vec{U}\cdot\vec{U}=-1$. The extrinsic curvatures relative to the Gaussian normal coordinates are

$$K_{\tau\tau}=N_{\tau;\tau}=U^\mu U^\nu N_{\mu;\nu} \qquad (11)$$

$$K_{\varphi\varphi}=N_{\varphi;\varphi} \qquad (12)$$

The action for the black hole with the cavity at $r=r_B$ is

$$\frac{\beta}{N_{r_B}}\left[\frac{4}{l^2}(r_B^2-r_+^2)-\frac{\epsilon^2}{4}\ln\frac{r_B}{r_+}\right]+2\beta\left[\frac{r_B}{2N_{r_B}}\left(\frac{dN^2}{dr}\right)_{r_B}+N_{r_B}\right], \qquad (13)$$

where the relation $\beta=T_W^{-1}=\int_0^{2\pi}N(r_B)\,d\tau$ has been used. The free energy is given by the expression

$$F=\frac{I}{\beta}=\frac{1}{N_{r_B}}\left[\frac{4}{l^2}(r_B^2-r_+^2)-\frac{\epsilon^2}{4}\ln\frac{r_B}{r_+}\right]+2\frac{r_B}{2N_{r_B}}\left(\frac{dN^2}{dr}\right)_{r_B}+2N_{r_B}, \qquad (14)$$

while the entropy can be calculated by means of the formula

$$S=-\left(\frac{\partial F}{\partial T_W}\right)_D=-\left(\frac{\partial F}{\partial r_+}\right)_D\left(\frac{dT_W}{dr_+}\right)_D^{-1}. \qquad (15)$$

We have

$$S=4\pi\,\frac{\left(8r_+/l^2-\epsilon^2/4r_+\right)N_{r_B}+\frac{dN_{r_B}}{dr_+}\left[4(r_B^2-r_+^2)/l^2-\frac{\epsilon^2}{4}\ln(r_B/r_+)\right]}{\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}-N_{r_B}\frac{d}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}}-8\pi N_{r_B}^2\,\frac{-\frac{r_B}{2N_{r_B}^2}\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_B}+\frac{r_B}{2N_{r_B}}\frac{d}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_B}+\frac{dN_{r_B}}{dr_+}}{\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}-N_{r_B}\frac{d}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}}. \qquad (16)$$

Taking the boundary limit $r_+\to r_B$ ($N_{r_B}\to 0$), we find

$$S=4\pi r_+ \qquad (17)$$

This is just the entropy of the 3D charged NEBH . We note that the first term in (16) does not contribute to the entropy, which is similar to the even-dimensional cases, where the entropy is attributed solely to the surface term of the Euclidean action. We are now in a position to extend the above calculations to the EBH. We face two limits, namely the boundary limit $r_+\to r_B$ and the extreme limit $M=\frac{\epsilon^2}{4}-\frac{\epsilon^2}{4}\ln\frac{\epsilon^2 l^2}{4r_0^2}$, $\epsilon^2=4r_+^2/l^2$.
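As a consistency check (our own, for the simplest case of a static cavity wall at $r=r_B$, with outward unit normal $\vec{N}=N\partial_r$; $\beta_t$ is an auxiliary symbol of ours for the period of the coordinate time $t$, so that $\beta=\beta_t N_{r_B}$), the extrinsic curvature data entering the boundary term are

$$K_{\varphi\varphi}=rN\big|_{r_B},\qquad K=\frac{1}{2N_{r_B}}\left(\frac{dN^2}{dr}\right)_{r_B}+\frac{N_{r_B}}{r_B},$$

so that, with $\sqrt{\gamma}=N_{r_B}\,r_B$,

$$\frac{1}{\pi}\int_{\partial M}d^2x\,\sqrt{\gamma}\,K=2\beta\left[\frac{r_B}{2N_{r_B}}\left(\frac{dN^2}{dr}\right)_{r_B}+N_{r_B}\right],$$

which reproduces exactly the second bracket of (13).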
We follow two different treatments while taking these two limits: (A) first take the boundary limit and then the extreme limit, which corresponds to the treatment adopted in [3-5]; and (B) first take the extreme limit and then the boundary limit, which corresponds to starting with the original EBH as in . From Eq. (14) it is easy to see that both the first and the third terms of the free energy vanish in either treatment (A) or (B), due to the limit $r_+\to r_B$. Therefore only the second term of the free energy contributes to the entropy in these two treatments. Using (15), we have

$$S(A)=\left[\frac{4\pi r_B\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_B}}{\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}-N_{r_B}\frac{d}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}}\right]_{r_+\to r_B}\Bigg|_{extr}=4\pi r_+ \qquad (18)$$

$$S(B)=\left[\frac{4\pi r_B\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_B}}{\frac{dN_{r_B}}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}-N_{r_B}\frac{d}{dr_+}\left(\frac{dN^2}{dr}\right)_{r_+}}\right]_{extr}\Bigg|_{r_+\to r_B}=\lim_{r_+\to r_B}4\pi r_B\,\frac{r_+}{r_B}\,\frac{\ln\frac{r_B^2}{r_+^2}\left(\frac{r_B^2}{l^2 r_+^2}-\frac{1}{l^2}\right)}{\frac{2}{l^2}\left(\frac{r_B^2}{r_+^2}-1\right)-\frac{2}{l^2}\ln\frac{r_B^2}{r_+^2}}=4\pi r_+$$

These two different orders of taking the limits lead to the same entropy for the 3D charged EBH, and the entropy never vanishes. This result can also be extended to the 3D rotating black hole. Thus we have shown that in the grand canonical ensemble the 3D charged black hole can approach the extreme state at nonzero temperature. Unlike in even-dimensional cases, the entropy of the 3D charged EBH is uniquely described by the Bekenstein-Hawking formula, regardless of the order in which the limits are taken.

As a matter of fact, in even dimensions it is usual to classify the topology of the manifold in terms of the Chern class, and in dimensions that are multiples of four also by the Pontryagin number. The problem of black hole entropy has generally been related to the Euler characteristic of the manifold, which is rather useful in the context of general relativity due to the Gauss-Bonnet theorem. Thus, for even-dimensional NEBHs and EBHs there are direct relations between the black hole entropy and the topological properties represented by the Euler characteristics obtained from the Gauss-Bonnet theorem [22,23,24,25,2,8-11]. In odd-dimensional space-time the situation is much more difficult, since most of the traditional topological invariants do not exist, in spite of the fact that the topology may be far from trivial. In three dimensions, in particular, there is no Gauss-Bonnet theorem, and the relations between entropy and topology valid for NEBHs and EBHs in even dimensions do not apply. As far as the traditional invariants are concerned, our result is compatible with the extreme and nonextreme black holes having the same topology in three dimensions, and thus we find no contradiction when the extreme limit is taken. However, for more general configurations we certainly need a finer analysis. We can say that the relation between the entropy and the topological properties in odd dimensions is still unclear and needs further study.
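As a unit check on this result (ours), note that the bulk normalization $\frac{1}{2\pi}$ in the action (9) corresponds to $\frac{1}{16\pi G}$ with $G=1/8$; the entropy (17)-(18) is then exactly the Bekenstein-Hawking value for a horizon of circumference $A_H=2\pi r_+$:

$$S=\frac{A_H}{4G}=\frac{2\pi r_+}{4\cdot\tfrac{1}{8}}=4\pi r_+ .$$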
ACKNOWLEDGEMENTS: This work was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). B. Wang would also like to acknowledge the support given by the Shanghai Science and Technology Commission. We would like to thank Dr. A. Saa for discussions.
no-problem/9911/astro-ph9911073.html
# A Fast Search Technique for Binary Pulsars

### Acknowledgments.

I would like to thank both Steve Eikenberry and Jim Cordes for useful ideas and discussions related to this work.
no-problem/9911/astro-ph9911354.html
# Starfinder: a code for crowded stellar fields analysis

## 1. Introduction

If the PSF is constant across the frame, the observed stellar field may be considered a superposition of shifted, scaled replicas of the PSF itself, lying on a smooth background generated by faint unresolved stars and other possible sources. In practice the PSF is not always constant: in wide-field AO imaging, for instance, off-axis stars appear blurred and radially elongated with respect to the guide star; this anisoplanatic effect is mainly due to the partial correction of the wavefront tip-tilt. A further complication in the analysis of nearly diffraction-limited images is represented by the detailed structure of the PSF, which is generally difficult to model analytically and may produce false detections. Starfinder (see also Diolaiti et al. 1998) seeks to take all these aspects into account.

## 2. Analysis procedure

### 2.1. PSF and background determination

If it is not known, the PSF for the analysis must be extracted from the image. In our code the user selects a set of stars, which are cleaned of the most contaminating sources, background-subtracted, centered with sub-pixel accuracy, normalized and superposed with a median operation. The halo of the retrieved PSF is then smoothed, applying a median filtering technique with variable box size. The PSF estimate represents a template for the analysis of the field stars; sub-pixel positioning is accomplished by interpolating the PSF array, which must be well sampled. A similar approach has been described in Véran et al. (1998). To overcome anisoplanatic effects in AO imaging we use an approximation of the local PSF given by the convolution of the reference source, commonly referred to as the guide star, with a radially elongated elliptical Gaussian. The parameters of this blurring kernel are derived from a polynomial fit. To do this, first a set of stars at various distances from the reference source is selected; then the parameters (elongation and width) of the convolving elliptical Gaussian which gives the best match to the observation are determined for each one. This set of measurements is fitted with a polynomial, which is used to determine the local PSF for the analysis of each presumed star in the field. The image background is estimated by interpolating a set of local measurements relative to sub-regions arranged in a regular grid (see Bertin et al. 1996). If the brightest stars in the field can be removed, a very similar estimate may be obtained by applying a median smoothing technique to the input frame.

### 2.2. Star detection, astrometry and photometry

The starting point is a list of presumed stars whose observed intensity in the background-removed image is greater than a prefixed detection threshold. A preliminary smoothing reduces the incidence of noise spikes. The objects are listed by decreasing intensity and analyzed one by one by the following sequence of steps: 1. re-identification after subtraction of the known stars, in order to reject spurious detections due to PSF features of bright sources; 2. cross-correlation with the PSF, as a measure of similarity with the template; 3. astrometric and photometric analysis by local fitting. Each newly accepted star is added to a synthetic stellar field, updated at every step. When all the objects in the list have been analyzed, a final re-fitting is performed to improve their astrometry and photometry; then they are temporarily removed to upgrade the background estimate.
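A minimal, self-contained sketch (ours; Starfinder itself is written in IDL, and this toy omits the sub-pixel centering, the final re-fitting and the background update) of the detection and analysis loop described above:

```python
import numpy as np

# Detect stars brightest-first, test PSF similarity by normalized
# cross-correlation, estimate the flux by least-squares scaling of the
# PSF template, and subtract each accepted star from a running residual.
def analyze(image, psf, threshold, min_corr=0.7, max_stars=10000):
    h = psf.shape[0] // 2                 # PSF assumed square with odd side
    residual = image.astype(float).copy()
    stars = []                            # accepted (y, x, flux)
    for _ in range(max_stars):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        if residual[y, x] < threshold:
            break                         # nothing left above threshold
        patch = residual[y - h:y + h + 1, x - h:x + h + 1]
        if patch.shape != psf.shape:      # too close to the frame edge
            residual[y, x] = 0.0
            continue
        corr = np.corrcoef(patch.ravel(), psf.ravel())[0, 1]
        if corr < min_corr:               # not PSF-like: noise spike or artifact
            residual[y, x] = 0.0
            continue
        flux = (patch * psf).sum() / (psf ** 2).sum()   # least-squares scaling
        residual[y - h:y + h + 1, x - h:x + h + 1] -= flux * psf
        stars.append((y, x, flux))
    return stars
```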
The basic step described above (detection and analysis) may be repeated: a new list of presumed stars is formed after subtracting the previously detected ones, and the analysis is started again on the original image. This iteration is very useful for detecting stars in crowded groups, down to separations comparable to the Rayleigh limit for the detection of close binaries. An optional deblending mode is available. All the objects somewhat more extended than the PSF are considered blends. The deblending strategy consists of an iterative search for residuals around the object and subsequent fitting; the iteration stops when no further residual is found or the fit of the last residual is not successful.

## 3. Applications to high and low Strehl images

The algorithm has been run on a K-band PUEO image of the Galactic Center, as an example of a well-sampled high-Strehl AO observation of a stellar field. We have evaluated the astrometric and photometric accuracy of the algorithm by adding to the image, for each magnitude bin in the retrieved luminosity function (fig. 1), a total of 10% synthetic stars located at random positions, with the only constraint that the minimum distance of each simulated star from all the previously detected ones must be greater than 1 PSF FWHM (about 4 pixels). The plot of the photometric errors shows accurate and unbiased photometry (see figs. 2a, 2b). Our method has also been applied to two well-sampled low-Strehl images of the globular cluster 47 Tuc, observed at the ESO 3.6m telescope with the ADONIS AO system. The PSF FWHM is about 6 pixels. The two frames, which have a large overlap area, have been analyzed with the same procedure. The results (figs. 2c and 2d) show good internal astrometric and photometric accuracy.

## 4. Conclusions and future developments

Starfinder seems to be able to analyze well-sampled images of very crowded fields observed, for instance, with ground-based AO systems. According to our experience it may also be successfully applied to adequately sampled HST images, like those obtained with dithering strategies. It is reasonably fast (only a few minutes on a Pentium II PC for the analysis of the Galactic Center) and a widget interface makes it accessible to users unfamiliar with IDL. In the near future the tools for space-variant analysis will be improved. A forthcoming paper will give a complete description of the code.

#### Acknowledgments.

Francois Rigaut is acknowledged for kindly providing the PUEO image of the Galactic Center and for supporting the initial development of this method.

## References

Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393

Diolaiti, E., Bendinelli, O., Bonaccini, D., Parmeggiani, G., & Rigaut, F. 1998, in ESO/OSA Topical Meeting on Astronomy with Adaptive Optics, ed. D. Bonaccini (Garching bei München: ESO), p. 175

Véran, J.-P., & Rigaut, F. 1998, Proc. SPIE, 3353, 426
no-problem/9911/cond-mat9911415.html
# Magnetic circular dichroism in X-ray fluorescence of Heusler alloys at threshold excitation

## Abstract

The results of fluorescence measurements of magnetic circular dichroism (MCD) in Mn L<sub>2,3</sub> X-ray emission and absorption for the Heusler alloys NiMnSb and Co<sub>2</sub>MnSb are presented. A very intense resonance Mn L<sub>3</sub> emission is found at the Mn 2p<sub>3/2</sub> threshold and is attributed to a peculiarity of the threshold excitation in materials with a half-metallic electronic structure. A theoretical model for the description of the resonance scattering of polarized X-rays is suggested.

Heusler alloys with chemical composition X<sub>2</sub>MnZ or XMnZ (X=Cu, Ni, Co, Fe, Pd, Pt; Z=Al, Ga, In, Si, Sn, Sb) and L2<sub>1</sub> or C1<sub>b</sub> crystal structure have been of great interest since their discovery in 1903, because they are ternary intermetallic compounds with magnetic properties that can be altered by changing the degree or type of chemical order . First-principles spin-polarized calculations showed that the electronic structure of Heusler alloys has a metallic character for majority-spin electrons but an insulating character for minority-spin electrons, which can induce the half-metallic ferromagnetic state with a band gap near the Fermi level. It is supposed that X-ray fluorescence spectroscopy, performed with circularly polarized photons, could measure the spin polarization of the occupied density of states (DOS). This kind of dichroism in X-ray fluorescence is closely related to magnetic circular dichroism (MCD) in absorption and is complementary to it. It is shown that a spin-polarized core hole may be used as a local site-specific spin detector for the valence states via appropriate valence-to-core dipole transitions. According to these expectations, the MCD effect can be observed without determining the polarization of the emitted photons if the core holes are excited alternately by right- and left-handed circularly polarized radiation, using different sample magnetization directions. In this case the difference in the emitted radiation will closely reflect the spin-resolved local DOS. MCD in X-ray emission was predicted theoretically in Ref. 5, based on fully relativistic spin-polarized band structure calculations for iron. Experimentally, this effect was confirmed for pure Fe in Ref. and then used for the study of the electronic structure of other magnetic systems such as Co and Ni , Rh , Rh<sub>25</sub>Fe<sub>75</sub> and Ni<sub>45</sub>Fe<sub>55</sub> , and Fe-Co alloys . In this letter we present the results of a study of magnetic dichroism in Mn 2p X-ray emission and absorption for the Heusler alloys NiMnSb and Co<sub>2</sub>MnSb, using energy-selective monochromatic excitation with circularly polarized X-rays. The measurements were performed on beamline ID12B at the European Synchrotron Radiation Facility (ESRF). This beamline consists of a Dragon-like spherical grating monochromator producing some 83% circularly polarized X-rays. The extraordinarily low emittance of the 6 GeV stored electron beam allowed us to refocus the X-ray beam passing the monochromator exit slit into a spot of about 40 μm × 1 mm without excessive loss of intensity. Permanent magnet (NdFeB) devices were used to magnetize the sample along a fixed magnetization direction with H=0.2 T. The X-ray emission spectrometer consisted of a 20 μm entrance slit, three spherical diffraction gratings, and a two-dimensional position-sensitive multichannel detector .
It was oriented with its optical axis perpendicular to the incident X-rays, in a vertical dispersive geometry, and was operated with a 1200 lines/mm grating in second order of diffraction at a spectral resolution of 1.1 eV. A single crystal of NiMnSb and a polycrystalline sample of Co<sub>2</sub>MnSb were taken for the X-ray fluorescence measurements. The samples were scraped in a vacuum of 10<sup>-6</sup> Torr before the measurements. Fig. 1 shows the MCD effect in the X-ray absorption and emission of NiMnSb excited by right- and left-polarized radiation. The Mn 2p X-ray absorption spectrum (XAS) measured in the total electron yield (TEY) mode shows dichroism (Fig. 1a) at both the L<sub>3</sub> (E<sub>exc</sub>=640.5 eV) and L<sub>2</sub> (E<sub>exc</sub>=652.0 eV) thresholds. The MCD effect is stronger at the L<sub>3</sub> threshold than at the L<sub>2</sub> threshold; the reason for this difference will be discussed later. The Mn 3d DOS is also presented. The Mn 2p X-ray emission spectra (XES) measured at the L<sub>3</sub> threshold (E<sub>exc</sub>=640.5 eV), at the L<sub>2</sub> threshold (E<sub>exc</sub>=652 eV) and far above threshold (E<sub>exc</sub>=680 eV) show quite different fine structures. The Mn L<sub>3</sub> XES measured at E<sub>exc</sub>=640.5 eV (Fig. 1b), which corresponds to the 3d4s→2p<sub>3/2</sub> transition, has two subbands, A and B, located at 637.0 and 640.5 eV, respectively. Both subbands show dichroism with the same sign as is found for the Mn L<sub>3</sub> XAS. The MCD effect reaches its maximum at the emission energy E=640.5 eV, which corresponds exactly to the Mn L<sub>3</sub> threshold. The double-peak structure revealed in the Mn 2p XES can be a result of the superposition of spectra of normal emission (A) and elastic X-ray scattering, known as re-emission (B). The intensity of the B subband is found to be about 1.5 times higher than that of the A subband. The Fermi level (estimated from Mn 2p<sub>3/2</sub> core-level photoemission ) lies at the intensity minimum between these two subbands, which means that the B subband corresponds to re-emission from unoccupied 3d states which are populated during near-threshold excitation of the Mn 2p<sub>3/2</sub> electron into the conduction band. X-ray re-emission is usually observed in spectra of insulators (see, for example, reference ), but in the case of a Heusler alloy it has never been seen with such a high intensity. In the Mn L<sub>3</sub> XES measured at the L<sub>2</sub> threshold excitation (E<sub>exc</sub>=652 eV) we find large changes in the intensity distribution. The intensity of peak A, at an emission energy E=637.5 eV, decreases with respect to the spectrum measured at E<sub>exc</sub>=640.5 eV, and this peak merges with peak B, forming a rather wide emission band centered at the emission energy E=638.3 eV. The main intensity is due to the Mn L<sub>2</sub> XES (3d4s→2p<sub>1/2</sub> transition), which again shows two peaks (A’ and B’) located at emission energies of 648.2 and 652.0 eV (with the Fermi level in between these peaks), corresponding to normal emission and re-emission, respectively. The relative intensity ratio I(B)/I(A) in the Mn L<sub>2</sub> XES is found to be opposite to that of the Mn L<sub>3</sub> XES. The dichroism of the Mn L<sub>3</sub> XES (A) at the L<sub>2</sub> threshold is found to be higher than that of the Mn L<sub>3</sub> XES (A) at the L<sub>3</sub> threshold, whereas the dichroism of the Mn L<sub>3</sub> (B) at the L<sub>2</sub> threshold is absent.
The Mn L<sub>2,3</sub> XES measured far above the thresholds, at E<sub>exc</sub>=680 eV (Fig. 1d), shows dichroism with different signs for L<sub>3</sub> and L<sub>2</sub> and a relatively high ratio I(L<sub>2</sub>)/I(L<sub>3</sub>)=0.92, which is higher than that for pure Mn (0.27). We have also found an anomalously high ratio, I(L<sub>2</sub>)/I(L<sub>3</sub>)=0.5-0.6, for the Mn L<sub>2,3</sub> XES of Heusler alloys measured with electron excitation . Fig. 2 shows the MCD in Mn 2p X-ray emission for polycrystalline Co<sub>2</sub>MnSb excited by right- and left-polarized radiation at E<sub>exc</sub>=640.5, 644, 652 and 680 eV. As seen, the MCD behavior for Co<sub>2</sub>MnSb is found to be almost the same as that for NiMnSb. Therefore, one can conclude that the experimentally observed MCD effects in near-threshold-excited Mn 2p XES are the same for Heusler alloys with a half-metallic electronic structure, and they can be discussed together. To interpret the obtained results we have developed a theory for the description of X-ray transitions under threshold excitation. Here we discuss only the main aspects of this theory; the details are given in . The model is based on the calculated partial Mn 3d density of states for NiMnSb, which has the following main features: (i) a strong exchange splitting (about 2 eV) which affects the valence-state distribution; (ii) the spin-down states have a strong peak above the Fermi level (spin-up states are absent in this region, being located below the Fermi level). We will show that this peak can act as a spin-selective trap for the excited core electron and is responsible for the highly intense re-emission peak in the XES. Let us discuss the MCD effects in the XES of Heusler alloys. The core states are split by the spin-orbit interaction into 2p<sub>3/2</sub> and 2p<sub>1/2</sub> levels, with the orbital moment of the electron mostly parallel and antiparallel, respectively, to its spin moment. The spin direction is fixed by the spin polarization of the unoccupied states into which the core electron is excited upon absorption of an X-ray quantum. In this way, the subset of states with orbital moment projections parallel (or antiparallel) to the spin moment of the atom is selected. Photons with right (photon spin +1) and left (photon spin -1) polarization interact differently with electrons having a fixed projection of the orbital moment. This leads to dichroism. X-ray emission is the reverse process of X-ray absorption and therefore also reveals dichroism. Using second-order perturbation theory in the electron-photon interaction, we have given a quantitative description of emission, re-emission and absorption within the one-particle approach . In the case of resonance 2p<sub>1/2</sub> and 2p<sub>3/2</sub> excitations with right-handed photons (q=+1), when the photon spin is parallel to the spin moment of the solid, the emission intensities are given by the equations of . This change of the dichroism sign for different threshold excitations can be explained within the framework of a two-step picture of the X-ray emission. First, the core-level electron is excited to the conduction band, a step characterized by the dichroism of absorption; then normal emission occurs (an X-ray transition from the valence band to the core-level vacancy), characterized by the dichroism of emission.
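Schematically (our paraphrase, not the quantitative formulas of the paper), for photon helicity $q=\pm 1$ the near-threshold emission intensity factorizes into the two steps,

$$I_{\mathrm{em}}^{(q)}(E_{\mathrm{exc}})\propto A^{(q)}(E_{\mathrm{exc}})\,W^{(q)},$$

where $A^{(q)}$ is the probability of absorption into the spin-polarized unoccupied states and $W^{(q)}$ that of the subsequent valence-to-core emission; the sign of the measured dichroism $I^{(+)}-I^{(-)}$ is then set by whichever factor dominates, which is why it can differ between near-threshold and far-above-threshold excitation.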
In the case of near-threshold excitation, these two constituents of the dichroism have different signs, because the spin polarizations of the unoccupied and occupied states are different. In the given case, the dichroism of the first process (absorption) prevails. Under excitation far away from the threshold the unoccupied states are unpolarized, $P^e=0$, and the result is determined by the dichroism of the second step, i.e., emission. Going back to the re-emission peaks at the L<sub>3</sub> and L<sub>2</sub> thresholds, we note that the relative intensity of the first is higher than that of the second. Though the calculations predict a decrease in the ratio $I^r/I^e$ from 1.0 to 0.45 on going from the L<sub>3</sub> to the L<sub>2</sub> threshold, the observed re-emission peak at the L<sub>2</sub> threshold is even lower than predicted. This can be due to the shorter lifetime of an L<sub>2</sub> core hole (compared to that of an L<sub>3</sub> core hole) as a result of the L<sub>2</sub>→L<sub>3</sub> Auger transition. One can also expect that the field of an L<sub>2</sub> core hole, with its higher binding energy, is screened more strongly than that of an L<sub>3</sub> core hole. The magnitude of this field directly determines the lifetime of the excited electron on the central atom and the probability of re-emission. In conclusion, we have found MCD effects in the X-ray emission and absorption of Heusler alloys which evidence a strong exchange splitting of the Mn 3d states with different spin projections. The observed giant re-emission peak in the Mn XES at the L<sub>3</sub> threshold is due to the half-metallic character of the electronic structure of Heusler alloys. The long lifetime of the excited states (which is necessary for intense re-emission) is provided by the suppression of the radiationless relaxation of excited electrons to the Fermi level and by their retention on the atom by the field of the core hole. In Heusler alloys with a large local magnetic moment such a relaxation-suppression mechanism, based on the strong exchange splitting typical of half-metallic systems, can reveal itself. Both the existence of the energy gap in the spin-down projection of the Mn 3d states and the weak hybridization of the Mn 3d electrons with the nearest neighbors contribute to the suppression of the excited-electron relaxation. A theoretical model for the description of the resonance scattering of polarized X-rays is suggested, and a quantitative theory of X-ray emission MCD in terms of the spin-polarized electron density of states and the spin-orbit splitting of core levels is developed. This work was supported by the Russian Foundation for Fundamental Research (Projects 96-15-96598 and 99-02-16266), a NATO Linkage Grant (HTECH.LG 971222), the Swedish Natural Science Research Council (NFR), the Göran Gustafsson Foundation in Natural Sciences and Medicine, the Deutsche Forschungsgemeinschaft (DFG project Br 1184/4) and the Bundesministerium für Bildung und Forschung (BMBF 05SB8MPB8). The technical assistance of the ESRF staff is gratefully acknowledged. The single crystal of NiMnSb was supplied by Ch. Hordequin (CNRS) and the polycrystalline Co<sub>2</sub>MnSb was supplied by Elena I. Shreder (Institute of Metal Physics, Russian Academy of Sciences, Ural Division).

Figure captions:

Fig. 1. Excitation energy dependence of the Mn L<sub>2,3</sub> XES and Mn 2p XAS of NiMnSb. Mn 2p TEY spectrum (a) and XES spectra (b), (c), (d), excited with parallel alignment of photon spin and spin moment (filled squares) and antiparallel alignment (open circles).
Mn 2p<sub>3/2</sub> and Mn 2p<sub>1/2</sub> binding energies relative to the Fermi level are indicated as E<sub>f</sub>(L<sub>3</sub>) and E<sub>f</sub>(L<sub>2</sub>), respectively. The energy resolution is 1.1 eV. The Mn 3d DOS is shown in the inset of Fig. 1b.

Fig. 2. Excitation energy dependence of the Mn L<sub>2,3</sub> XES of Co<sub>2</sub>MnSb. Spectra (a), (b), (c), (d) are excited with parallel alignment of photon spin and spin moment (filled squares) and antiparallel alignment (open circles). The corresponding excitation energies are shown on the right side of each frame. Mn 2p<sub>3/2</sub> and Mn 2p<sub>1/2</sub> binding energies relative to the Fermi level are indicated as E<sub>f</sub>(L<sub>3</sub>) and E<sub>f</sub>(L<sub>2</sub>), respectively. The energy resolution is 1.1 eV.
no-problem/9911/physics9911026.html
# Multipositronic systems

## Abstract

The stability and structure of systems comprising a negative ion and positrons are investigated by the stochastic variational method. It is shown that the H<sup>-</sup> and Li<sup>-</sup> ions can bind not only one but two positrons. The binding energies of these double-positronic atoms, E(H<sup>-</sup>,e<sup>+</sup>,e<sup>+</sup>)=0.57 eV and E(Li<sup>-</sup>,e<sup>+</sup>,e<sup>+</sup>)=0.15 eV, are somewhat smaller than those of their single-positronic counterparts (E(HPs)=1.06 eV and E(LiPs)=0.32 eV). We have also found that two Ps<sup>-</sup> ions and a proton form a bound system.

36.10.Dr, 31.15.Pf, 71.35.-y, 73.20.Dx

The many-body problem is conceptually simple and well defined in atomic physics: indistinguishable fermions (electrons) interact via the Coulomb potential in the external Coulomb field of the atomic nuclei. The solution of this many-body problem is very difficult because, in addition to the direct interaction between the electrons, their indistinguishability brings an exchange correlation into effect. Despite the complexity, enormous progress has been made in this field, which has been developing rapidly ever since the birth of quantum mechanics. The calculations have mainly focused on systems (atoms and molecules) where fast electronic motion takes place in the field of slowly moving heavy positive charges. Much less is known about systems which contain positively and negatively charged particles of equal or nearly equal masses. The simplest examples of such systems are the positronium ion (e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>) (predicted by Wheeler in 1946, experimentally observed by Mills in 1981), the Ps<sub>2</sub> molecule (e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>) (predicted by Hylleraas and Ore in 1947, not yet observed in nature), and the HPs molecule (indirectly observed, see ). These systems have been extensively studied by various theoretical methods in the last few years . The existence of these small systems makes theorists curious as to whether (similarly to molecules) larger stable systems containing positrons can also be formed. One can ask whether a system of m electrons and n positrons (for example a (3e<sup>-</sup>,3e<sup>+</sup>) system) is bound, or whether a positron, a positronium, a Ps<sup>-</sup> ion or a Ps<sub>2</sub> molecule can attach itself to an atom or molecule. The theoretical description of such systems (let alone the prediction of their stability against autodissociation) is obviously very difficult. The difficulty can largely be attributed to the fact that the electron-electron and the electron-positron correlations are quite different, due to the attraction and to the absence of the Pauli-principle constraint in the latter case. The tiny binding energies of these loosely bound, extended systems require highly accurate calculations. Recent calculations have given the very surprising result that a positron can cling to a neutral atom . The simplest such positronic atom is Lie<sup>+</sup>. The complexity of the calculation of its small binding energy is best illustrated by the fact that many otherwise successful methods failed to predict the existence of a bound state of this system . These calculations showed that the energy of Lie<sup>+</sup> is lower than that of the Li atom, but the energy was not below the Li<sup>+</sup>+Ps dissociation threshold. The first rigorous proof that the positron can attach itself to a Li atom was given by Ryzhikh and Mitroy by using the stochastic variational method (SVM) .
This finding has later been confirmed by different theoretical approaches . Other atoms (e.g. Be, Na, Mg, Cu, Zn and Ag) have also been found to be capable of binding a positron . There is another family of positronic atoms, formed when a positronium is attached to an atom. The possibility of the existence of such systems is more obvious: removing the positron leaves behind a negatively charged ion, so one can understand how the positron becomes bound. The simplest example of such a system is the HPs molecule, which has been the subject of numerous theoretical investigations and has been experimentally observed as well . Other examples are the LiPs, NaPs and KPs atoms. LiPs has been described by several microscopic methods , while the other two bound systems have been predicted by a semi-microscopic model . In this paper we explore the possibility of the formation of stable atoms/ions containing two or more positrons. The simplest known example of such a system is the Ps<sub>2</sub> molecule. The study is inspired by the speculation that if a neutral atom can bind a positron, then it may even be able to bind a positively charged Ps<sup>+</sup>=(e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>) ion. This motivation can also be phrased in another way: if a positronium can bind itself to a neutral atom “A”, forming a neutral system “APs”, can we attach a positron to APs? The stochastic variational method systematically improves the correlation functions between the particles and is especially suitable for solving Coulombic few-body problems. The method has been tested on a number of problems in different fields of physics and has proved to be highly accurate and reliable . The present study is restricted to states with total orbital angular momentum L=0, and the following trial function is assumed:

$$\Psi=\mathcal{A}\left\{\mathrm{e}^{-\frac{1}{2}\mathbf{x}^T A\mathbf{x}}\chi_{SM_S}\right\},\quad(1)$$

where $\mathbf{x}=(\mathbf{x}_1,\ldots,\mathbf{x}_{N-1})$ is a set of relative coordinates, $\chi_{SM_S}$ is the spin function, and $A$ is a matrix of nonlinear variational parameters. The nonlinear parameters are optimized by the stochastic variational method through a trial-and-error procedure. The details can be found in Ref. . This trial function includes explicit $\exp(-\alpha r_{ij}^2)$ correlation factors between the particles and gives very accurate solutions, provided that the nonlinear parameters (in the exponents) are properly optimized. As the number of parameters for a typical system is at least a few thousand, a direct search for the optimal values is out of the question. The stochastic variational method sets up a basis by successively enlarging the model space with the optimal trial functions. This basis is then systematically improved by a refining procedure: basis states are replaced by randomly chosen states which lower the energy. The energy found in this variational procedure converges to an upper bound of the exact ground state energy of the system. The correlated Gaussians offer computational advantages: fast analytical evaluation of the matrix elements and good approximation of various wave functions. They also have well-known drawbacks, such as their slow convergence (compared to exponential functions) and the fact that they do not satisfy the cusp condition. The simplest (A,Ps<sup>+</sup>) system is (H,Ps<sup>+</sup>)=(p,e<sup>-</sup>,e<sup>-</sup>,e<sup>+</sup>,e<sup>+</sup>).
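To make the strategy concrete, here is a toy illustration (our own, not the authors’ code) on the simplest Coulomb problem, the hydrogen ground state, expanded in s-wave Gaussians $\exp(-ar^2)$, for which the overlap, kinetic-energy and potential matrix elements are analytic; candidate exponents are kept only if they lower the energy:

```python
import numpy as np
from scipy.linalg import eigh, LinAlgError

# Analytic matrix elements in atomic units for basis functions exp(-a r^2):
#   S_ij = (pi/(a_i+a_j))^(3/2)
#   T_ij = 3 a_i a_j / (a_i+a_j) * S_ij
#   V_ij = -2 pi / (a_i+a_j)
rng = np.random.default_rng(1)

def energy(exponents):
    a = np.asarray(exponents)
    s = a[:, None] + a[None, :]
    S = (np.pi / s) ** 1.5
    H = 3.0 * np.outer(a, a) / s * S - 2.0 * np.pi / s
    return eigh(H, S, eigvals_only=True)[0]   # lowest generalized eigenvalue

basis = [1.0]
for _ in range(300):
    candidate = basis + [10.0 ** rng.uniform(-2.0, 3.0)]  # random new exponent
    try:
        if energy(candidate) < energy(basis):
            basis = candidate                  # keep it only if E goes down
    except LinAlgError:
        pass                                   # nearly dependent basis; reject

print(len(basis), energy(basis))  # energy approaches the exact -0.5 hartree
```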
The (H,Ps<sup>+</sup>) five-body system can dissociate into H+Ps<sup>+</sup>, p+Ps<sub>2</sub> or HPs+e<sup>+</sup>; the lowest dissociation thresholds are shown in Fig. 1. To validate the method we have calculated the energies of the Ps<sub>2</sub> and HPs molecules (see Table I). The SVM significantly improved the theoretical values of the binding energies of these systems. Our calculation shows that the energy of (H,Ps<sup>+</sup>) is below the dissociation threshold, so it forms an electronically stable system. The H<sup>-</sup> ion can thus bind not only one but two positrons. The binding energy of (H,Ps<sup>+</sup>)=HPse<sup>+</sup> (0.021 a.u.) is comparable to that of HPs (0.039 a.u.). The convergence of the energy as a function of the basis dimension is shown in Table I. The HPse<sup>+</sup> system can also be viewed as a bound system of a proton and a Ps<sub>2</sub> molecule. The Ps<sub>2</sub> molecule cannot bind an extra electron or positron because of the Pauli principle. Our calculations show that Ps<sub>2</sub> can bind a charged particle if it is distinguishable from the electron and the positron. The five-body system Ps<sub>2</sub>+x<sup>+</sup>=(e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>,x<sup>+</sup>) containing a hypothetical “x” particle is bound for any mass ratio $0\le m_e/m_x\le 1$. This has been checked by calculating the binding energy of the system for several different masses $m_x$ ($m_x=10^{50},100,10,8,6,4,2,1$ in units of $m_e$). So while (e<sup>+</sup>,e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>) is unbound, Ps<sub>2</sub> can bind any charged particle, e.g. $\mu^+$ or p<sup>+</sup>, because the Pauli principle does not restrict the motion of the fifth particle in that case. Some of the properties of these systems are shown in Table II. It is intriguing to compare the relative distances between the particles in HPs and HPse<sup>+</sup>. The electron-nucleus and electron-electron relative distances are almost the same in the two systems. The average nucleus-positron distance, however, is substantially larger in HPse<sup>+</sup>. Another interesting property is that the relative distance between the positrons is about twice that between the electrons. All these facts suggest a geometrical picture of HPse<sup>+</sup> as an isosceles triangle formed by the two positrons and the proton, with the two electrons moving between the positive charges. The two positrons are placed at the vertices of the baseline of the triangle, and this baseline is so long that the system almost looks like a linear chain. HPse<sup>+</sup> is somewhat related to H<sub>3</sub><sup>+</sup>. In H<sub>3</sub><sup>+</sup>, three protons and two electrons form a very stable system, where the three protons are at the vertices of an equilateral triangle. By changing the mass of two of the positive charges this equilateral triangle is changed into an isosceles one, and in the positronic limit it looks like a linear chain. The Li atom can bind a positron or a positronium, forming an electronically stable Lie<sup>+</sup> or LiPs . The binding energy of Lie<sup>+</sup> is very small, and it is best viewed as a positronium orbiting around a Li<sup>+</sup> core. In our calculation we replace the positron with a Ps<sup>+</sup> ion and try to determine the binding energy. In this case we have six active particles, four electrons and two positrons. This system has various dissociation channels (see Fig. 2).
The calculated energies of the relevant subsystems are listed in Table III. Our calculation shows that Li can bind a Ps<sup>+</sup> ion to form an electronically stable LiPse<sup>+</sup>. The calculated binding energy might not be very accurate due to the complexity of the system, but it is definitely below the lowest threshold (see Fig. 2). The convergence of the binding energy is shown in Table I. A further increase of the basis size would improve the ground state energy. This system, again, can be viewed in different ways. One can say that a Li atom can bind a Ps<sup>+</sup> ion, that a Li<sup>-</sup> ion is able to bind two positrons, or that the Ps<sub>2</sub> molecule can attach itself to a Li<sup>+</sup> ion. The relative distances between the particles in LiPse<sup>+</sup> are shown in Table III. The average distance between the nucleus and the positron, or between a positron and an electron, is larger than in LiPs but smaller than in Lie<sup>+</sup>. This suggests a picture of LiPse<sup>+</sup> as a Li<sup>+</sup> core with an orbiting Ps<sub>2</sub> molecule. These systems are electronically stable, but a positron-electron pair can annihilate by emitting two photons. The annihilation rate is proportional to the probability of finding an electron and a positron at the same position in a spin-singlet state (see Eq. (21) in ). The expectation values of the positron-electron delta functions ($\delta_{e^+e^-}=\langle\Psi|\delta(\mathbf{r}_{e^-}-\mathbf{r}_{e^+})|\Psi\rangle$) are $1.4\times10^{-2}$, $1.1\times10^{-2}$ and $1.1\times10^{-2}$ for Lie<sup>+</sup>, LiPs and LiPse<sup>+</sup>, respectively. Due to the possible inaccuracy of the energy and wave function of the LiPse<sup>+</sup> system, the annihilation rate should be considered a qualitative estimate; it is about $\Gamma_{2\gamma}=4.4\times10^9$ s<sup>-1</sup>. The (H<sup>-</sup>,e<sup>+</sup>,e<sup>+</sup>) is a positively charged system, so one may try to add one more electron to see whether it remains stable. The convergence of the energy is shown in Fig. 3. The energy of the system slowly converges to the lowest (HPs+Ps) threshold and the size of the system continuously increases, showing that this system is unlikely to be bound. Surprisingly, however, by adding two electrons to (H<sup>-</sup>,e<sup>+</sup>,e<sup>+</sup>) one gets a bound system, as shown in Fig. 3. This system, “H<sup>-</sup>Ps<sub>2</sub>”, contains a proton, two positrons and four electrons, and can also be considered a three-body system of a proton and two Ps<sup>-</sup> ions, in analogy with the H<sup>-</sup> ion (with the electrons replaced by the composite Ps<sup>-</sup> ions). The convergence of the energy is slow, and the calculation of a more accurate binding energy would require a considerably larger basis dimension (see Table I). We have shown, for the first time, that neutral atoms can bind not only a single positron but also a more complex positive charge, the Ps<sup>+</sup> ion. Besides the two cases (HPse<sup>+</sup> and LiPse<sup>+</sup>) it is quite possible that other such systems are also bound. Although the investigation of larger systems is beyond the scope of the present method, other approaches (like QMC or the fixed-core SVM ) might be used to study the possible bound states of Ps<sup>+</sup> (or two positrons) with larger atoms/ions.
Examples are (1) the recent QMC study of positronic water and (2) a new study with the fixed-core SVM which confirms the existence of LiPse<sup>+</sup> and shows that a larger ion (Na<sup>+</sup>) can also bind a Ps<sub>2</sub> molecule. The investigation of these exotic systems is very important from a theoretical point of view. These systems serve as test grounds for new methods: they provide a special environment where not only the electron-electron but also other interleptonic correlations are important. While the experimental observation of these systems is even more challenging than that of the positronic atoms , some of the properties of positronic systems can be affected by these bound states, and the theoretical prediction of their existence might be very useful. Systems similar to (p<sup>+</sup>,e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>) might exist in semiconductors. Both the charged exciton (a system of two electrons and a hole, akin to Ps<sup>-</sup>) and the biexciton (two electrons and two holes, similar to Ps<sub>2</sub>) have been experimentally observed . Larger systems of “multiexcitons” (systems of several electron-hole pairs) have also been observed . These systems are of course different from the electron-positron systems, because the electron-hole mass ratio ($\sigma=m_e/m_h$) differs from unity and also because there is no annihilation, so their observation might be easier. The stability for electrons and positrons indicates stability for systems with slightly different mass ratios. The present study might thus give a hint of the existence of similar systems in semiconductors as well. In GaAs, for example, there are heavy holes ($\sigma=0.196$) and light holes ($\sigma=0.707$). A system similar to (p<sup>+</sup>,e<sup>+</sup>,e<sup>+</sup>,e<sup>-</sup>,e<sup>-</sup>) would comprise two electrons, a heavy hole, and two light holes. This work was supported by the U.S. Department of Energy, Nuclear Physics Division, under contract No. W-31-109-ENG-39, and by OTKA grant No. T029003 (Hungary).
no-problem/9911/quant-ph9911110.html
# Comment on the “Maxwell Equations as the One-Photon Quantum Equation” by A. Gersten [Found. Phys. Lett. 12, pp. 291-298 (1999)]

Submitted to “Foundations of Physics Letters”

## Abstract

We show that the Gersten derivation of the Maxwell equations can be generalized. It actually leads to additional solutions of the ‘$S=1$ equations’. They follow directly from previous considerations by Majorana, Oppenheimer, Weinberg, and Ogievetskiĭ and Polubarinov. Therefore, generalized Maxwell equations should be used as a guideline for proper interpretations of quantum theories.

preprint: EFUAZ FT-99-78

In the paper the author studied the matrix representation of the Maxwell equations, both the Faraday and Ampère laws and the Gauss law. His consideration is based on the equation (9)

$$\left(\frac{E^2}{c^2}-\mathbf{p}^{\,2}\right)\boldsymbol{\Psi}=\left(\frac{E}{c}I^{(3)}-\mathbf{S}\cdot\mathbf{p}\right)\left(\frac{E}{c}I^{(3)}+\mathbf{S}\cdot\mathbf{p}\right)\boldsymbol{\Psi}-\begin{pmatrix}p_x\\ p_y\\ p_z\end{pmatrix}\left(\mathbf{p}\cdot\boldsymbol{\Psi}\right)=0\qquad\text{Eq. (9) of ref. [1]}\quad(1)$$

Furthermore, he claimed that the solutions to this equation should be found from the set

$$\left(\frac{E}{c}I^{(3)}+\mathbf{S}\cdot\mathbf{p}\right)\boldsymbol{\Psi}=0\qquad\text{Eq. (10) of ref. [1]}\quad(2)$$

$$\left(\mathbf{p}\cdot\boldsymbol{\Psi}\right)=0\qquad\text{Eq. (11) of ref. [1]}\quad(3)$$

Thus, Gersten concluded that the equation (9) is equivalent to the Maxwell equations (10,11). As he also correctly indicated, such a formalism for describing $S=1$ fields has been considered by several authors before. See, for instance, ; those authors mainly considered the dynamical Maxwell equations in the matrix form. However, we straightforwardly note that the equation (9) of is satisfied also under the choice (we leave the analysis of a possible functional, in general non-linear, dependence of $\chi$ and $\partial_\mu\chi$ on the higher-rank tensor fields for future publications)

$$\left(\frac{E}{c}I^{(3)}+\mathbf{S}\cdot\mathbf{p}\right)\boldsymbol{\Psi}=\mathbf{p}\chi\quad(4)$$

$$\left(\mathbf{p}\cdot\boldsymbol{\Psi}\right)=\frac{E}{c}\chi,\quad(5)$$

with some scalar function $\chi$, arbitrary at this stage. This is due to the fact that (see the explicit form of the angular momentum matrices in Eq. (6) of the Gersten paper)

$$(\mathbf{S}\cdot\mathbf{p})^{jk}\,\mathbf{p}^{k}=i\epsilon^{jik}p^{i}p^{k}\equiv 0$$

(or, after the quantum operator substitutions, $\mathrm{rot}\,\mathrm{grad}\,\chi=0$). Thus, the generalized coordinate-space Maxwell equations follow after a similar procedure as in :

$$\mathbf{\nabla}\times\mathbf{E}=-\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}+\mathbf{\nabla}\,\mathrm{Im}\,\chi\quad(6)$$

$$\mathbf{\nabla}\times\mathbf{B}=\frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}+\mathbf{\nabla}\,\mathrm{Re}\,\chi\quad(7)$$

$$\mathbf{\nabla}\cdot\mathbf{E}=-\frac{1}{c}\frac{\partial}{\partial t}\mathrm{Re}\,\chi\quad(8)$$

$$\mathbf{\nabla}\cdot\mathbf{B}=\frac{1}{c}\frac{\partial}{\partial t}\mathrm{Im}\,\chi.\quad(9)$$

If one assumes that there are no monopoles, one may suggest that $\chi(x)$ is a real field and that its derivatives play the role of charge and current densities. Thus, surprisingly, on using the Dirac-like procedure (that is to say, one based on the relativistic dispersion relation $\left(E^2-c^2\mathbf{p}^{\,2}-m^2c^4\right)\Psi=0$, Eq. (1) of ref. [2])
of derivation of “free-space” relativistic quantum field equations, Gersten might in fact have come to the inhomogeneous Maxwell equations! One can also substitute $(4\pi i\hbar/c)\,\mathbf{j}$ and $(4\pi i\hbar)\,\rho$ in the right-hand sides of (2,3) of the present paper and obtain equations for the current and the charge density,

$$\frac{1}{c}\mathbf{\nabla}\times\mathbf{j}=0,\quad(10)$$

$$\frac{1}{c^2}\frac{\partial\mathbf{j}}{\partial t}+\mathbf{\nabla}\rho=0,\quad(11)$$

which coincide with equations (13,17) of [9b]. The interesting question is whether $\mathbf{j}$ and $\rho$ so defined may be related to $\partial_\mu\chi$. Furthermore, I am not aware of any proof that the scalar field $\chi(x)$ should be firmly connected with the charge and current densities, so there is sufficient room for interpretation. For instance, its time derivative and gradient may also be interpreted as leading to the 4-vector potential. In this case we need some mass/length parameter, as in [11a,d]. Both these interpretations were present in the literature (cf. also ).

Below we discuss only one aspect of the above-mentioned problem with an additional scalar field and its derivatives in generalizations of the Maxwell formalism. It is connected with the concept of the notoph of Ogievetskiĭ and Polubarinov (in the US journal literature it is known as the Kalb-Ramond field). In my opinion, Prof. S. Weinberg [15, p. 208] confirmed this idea when considering a spin-0 4-vector field in his famous book. (The related problem of misunderstandings of the Weinberg theorem $B-A=\lambda$ is slightly discussed below as well; $A$ and $B$ are the eigenvalues of the angular momenta corresponding to a certain representation of the Lorentz group, and $\lambda$ is the helicity [5, p. B885].) Actually, after performing the Bargmann-Wigner procedure for the description of higher-spin massive particles by a totally symmetric spinor of higher rank, we derive the following equations for spin 1:

$$\partial_\alpha F^{\alpha\mu}+\frac{m}{2}A^\mu=0,\quad(12)$$

$$2mF^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu.\quad(13)$$

In the meantime, in the textbooks the latter set is usually written as

$$\partial_\alpha F^{\alpha\mu}+m^2A^\mu=0,\quad(14)$$

$$F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu.\quad(15)$$

The set (14,15) is obtained from (12,13) after the normalization change $A_\mu\to 2mA_\mu$ or $F_{\mu\nu}\to\frac{1}{2m}F_{\mu\nu}$. Of course, one can investigate other sets of equations with different normalizations of the $F_{\mu\nu}$ and $A_\mu$ fields. Are all these sets of equations equivalent? This is what I asked in a recent series of my papers. Ogievetskiĭ and Polubarinov argued that in the massless limit “the system of $2s+1$ states is no longer irreducible; it decomposes and describes a set of different particles with zero mass and helicities $\pm s$, $\pm(s-1)$, …, $\pm 1$, $0$ (for integer spin and if parity is conserved; the situation is analogous for half-integer spins).” Thus, they did in fact contradict the Weinberg theorem. But in I presented explicit forms of the 4-vector potentials and of the parts of the antisymmetric tensor (AST) field, and concluded that the question should be solved on the basis of the analysis of normalization issues.
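Before quoting those explicit forms, here is a short numerical check (our own, in Python) of the two matrix facts used at the beginning of this Comment: the operator identity (1) and the relation $(\mathbf{S}\cdot\mathbf{p})\,\mathbf{p}=0$ that makes the generalized solutions (4)-(5) possible:

```python
import numpy as np

# Spin-1 matrices (S_k)_{jm} = -i eps_{kjm}
S = np.array([[[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
              [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]],
              [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]])

rng = np.random.default_rng(0)
p = rng.normal(size=3)
E = 1.7                                   # arbitrary; units with c = 1
Sp = np.einsum('kjm,k->jm', S, p)         # the matrix S.p

print(np.allclose(Sp @ p, 0))             # True: (S.p) p chi = 0, "rot grad = 0"
Psi = rng.normal(size=3) + 1j * rng.normal(size=3)
lhs = (E**2 - p @ p) * Psi
rhs = (E*np.eye(3) - Sp) @ (E*np.eye(3) + Sp) @ Psi - p * (p @ Psi)
print(np.allclose(lhs, rhs))               # True: the identity (1) holds
```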
Here they are in the momentum representation: $$u^\mu (𝐩,+1)=\frac{N}{\sqrt{2}m}\left(\begin{array}{c}p_r\\ m+\frac{p_1p_r}{E_p+m}\\ im+\frac{p_2p_r}{E_p+m}\\ \frac{p_3p_r}{E_p+m}\end{array}\right),u^\mu (𝐩,1)=\frac{N}{\sqrt{2}m}\left(\begin{array}{c}p_l\\ m+\frac{p_1p_l}{E_p+m}\\ im+\frac{p_2p_l}{E_p+m}\\ \frac{p_3p_l}{E_p+m}\end{array}\right),$$ (16) $$u^\mu (𝐩,0)=\frac{N}{m}\left(\begin{array}{c}p_3\\ \frac{p_1p_3}{E_p+m}\\ \frac{p_2p_3}{E_p+m}\\ m+\frac{p_3^2}{E_p+m}\end{array}\right),u^\mu (𝐩,0_t)=\frac{N}{m}\left(\begin{array}{c}E_p\\ p_1\\ p_2\\ p_3\end{array}\right)$$ (17) and $`𝐁^{(+)}(𝐩,+1)`$ $`=`$ $`{\displaystyle \frac{iN}{2\sqrt{2}m}}\left(\begin{array}{c}ip_3\\ p_3\\ ip_r\end{array}\right)=+e^{i\alpha _1}𝐁^{()}(𝐩,1),`$ (18) $`𝐁^{(+)}(𝐩,0)`$ $`=`$ $`{\displaystyle \frac{iN}{2m}}\left(\begin{array}{c}p_2\\ p_1\\ 0\end{array}\right)=e^{i\alpha _0}𝐁^{()}(𝐩,0),`$ (19) $`𝐁^{(+)}(𝐩,1)`$ $`=`$ $`{\displaystyle \frac{iN}{2\sqrt{2}m}}\left(\begin{array}{c}ip_3\\ p_3\\ ip_l\end{array}\right)=+e^{i\alpha _{+1}}𝐁^{()}(𝐩,+1),`$ (20) and $`𝐄^{(+)}(𝐩,+1)`$ $`=`$ $`{\displaystyle \frac{iN}{2\sqrt{2}m}}\left(\begin{array}{c}E_p\frac{p_1p_r}{E_p+m}\\ iE_p\frac{p_2p_r}{E_p+m}\\ \frac{p_3p_r}{E+m}\end{array}\right)=+e^{i\alpha _1^{}}𝐄^{()}(𝐩,1),`$ (21) $`𝐄^{(+)}(𝐩,0)`$ $`=`$ $`{\displaystyle \frac{iN}{2m}}\left(\begin{array}{c}\frac{p_1p_3}{E_p+m}\\ \frac{p_2p_3}{E_p+m}\\ E_p\frac{p_3^2}{E_p+m}\end{array}\right)=e^{i\alpha _0^{}}𝐄^{()}(𝐩,0),`$ (22) $`𝐄^{(+)}(𝐩,1)`$ $`=`$ $`{\displaystyle \frac{iN}{2\sqrt{2}m}}\left(\begin{array}{c}E_p\frac{p_1p_l}{E_p+m}\\ iE_p\frac{p_2p_l}{E_p+m}\\ \frac{p_3p_l}{E_p+m}\end{array}\right)=+e^{i\alpha _{+1}^{}}𝐄^{()}(𝐩,+1),`$ (23) where we denoted a normalization factor appearing in the definitions of the potentials (and/or in the definitions of the physical fields through potentials) as $`N`$ (which can, of course, be chosen in arbitrary way, not necessarily as to be proportional to $`m`$)The possibility of appearance of additional mass factors in commutation relations was also analyzed by us in the recent series of papers. and $`p_{r,l}=p_1\pm ip_2`$. Thus, we find that in the massless limit we may have in general divergent parts of 4-potentials and AST field, thus prohibiting to set $`m=0`$ in the equations (12-15). They are usually removed by “electrodynamic” gauge transformations. But, it was shown that the Lagrangian constructed from the $`(1,0)(0,1)`$ (or antisymmetric tensor) fields admits another kind of “gauge” transformations, namely $`F_{\mu \nu }F_{\mu \nu }+_\nu \mathrm{\Lambda }_\mu _\mu \mathrm{\Lambda }_\nu `$), with some “gauge” vector functions $`\mathrm{\Lambda }_\mu `$. This becomes the origin of the possibility of obtaining the quantum states (particles?) of different helicities in both the $`(1/2,1/2)`$ and $`(1,0)(0,1)`$ representations. In our formulation of generalized Maxwell equations these in-general divergent terms can be taken into account explicitly, thus giving additional terms in (6-9). As suggested in they may be applied to explanations of several cosmological puzzles. The detailed analysis of contradictions between the Weinberg theorem and the Ogievetskiĭ-Polubarinov-Kalb-Ramond conclusion (and also discussions of ) will be given in a separate publication. Here, I would only like to mention few assumptions, under which Weinberg derived his famous theorem: * The derivation is based on the analysis of the proper Lorentz transformations only. 
The discrete symmetry operations of the full Poincare Group (which, for instance, may lead to the change of the sign of the energy) have not been considered there. * The derivation assumes the particular choice of the coordinate frame, namely $`p_{1,2}=0`$ and $`p_3=|𝐩|`$.<sup>\**</sup><sup>\**</sup>\**As one can see, unpolarized classical $`𝐄`$ and $`𝐁`$ depend indeed on the choice of the coordinate system, Eq. (19,22). * The derivation does not assume that the antisymmetric tensor field is related to 4-vector fields by certain derivative operator. Finally, the intrinsic angular momentum operator of the electromagnetic field (which can be found on the basis of the Noether theorem) contain the coefficient functions which belong to different representations of the Lorentz Group, $`\stackrel{}{𝐒}\stackrel{}{𝐄}\times \stackrel{}{𝐀}`$ and it acts in the Fock space . Furthermore, the condition (35) $`W^\mu =kp^\mu `$ is not the only condition which can be imposed for massless particles. Namely, as stressed in \[15c\] the Pauli-Lubanski vector may be a space-like vector in this case, what would correspond to “infinite spin” representation. Finally, we would like to add some words to the Dirac derivation of equations (30-33) of the Gersten paper and their analysis. We derived the formula (for spin 1) $$\left[\stackrel{}{𝐒}^i(\stackrel{}{𝐒}\stackrel{}{𝐩})\right]^{jm}=\left[\stackrel{}{𝐩}^iI^{jm}i[\stackrel{}{𝐒}\times \stackrel{}{𝐩}]^{i,jm}p^m\delta ^{ij}\right],$$ (24) with $`i`$ being the vector index and $`j,m`$ being the matrix indices. Hence, from the equation ($`k=1`$) $$\left\{kp_t+S_xp_x+S_yp_y+S_zp_z\right\}𝝍=0,$$ (25) multiplying subsequently by $`S_x`$, $`S_y`$ and $`S_z`$ one can obtain in the case $`S=1`$ $$\left\{p_x+S_xp_tiS_yp_z+iS_zp_y\right\}^{jm}𝝍^m(\stackrel{}{𝐩}\stackrel{}{𝝍})\delta ^{xj}=0,$$ (26) $$\left\{kp_y+S_yp_tiS_zp_x+iS_xp_z\right\}^{jm}𝝍^m(\stackrel{}{𝐩}\stackrel{}{𝝍})\delta ^{yj}=0,$$ (27) $$\left\{kp_z+S_zp_tiS_xp_y+iS_yp_x\right\}^{jm}𝝍^m(\stackrel{}{𝐩}\stackrel{}{𝝍})\delta ^{zj}=0.$$ (28) One can see from the above that the equations (31-33) of ref. can be considered as the consequence of the equation (30) and additional “transversality condition” $`\stackrel{}{𝐩}\stackrel{}{𝝍}=0`$ in the case of the spin-1 consideration. So, it is not surprising that they are equivalent to the complete set of Maxwell’s equations. They are obtained after multiplications by corresponding $`𝐒`$ matrices. But, the crucial mathematical problem with such a multiplication is that the $`𝐒`$ matrices for boson spins are singular, $`\text{det}S_x=\text{det}S_y=\text{det}S_z0`$, which makes the above procedure doubtful<sup>††</sup><sup>††</sup>††After the analysis of the literature I learnt that, unfortunately, a similar procedure has been applied in the derivation of many higher-spin equations without proper explanation and precaution. and leaves room for possible generalizations. Moreover, the right hand side of the equation (30) of may also be different from zero according to our analysis above. The conclusion of my paper is: unfortunately, the possible consequences following from Gersten’s equation (9) have not been explored in full; on this basis we would like to correct his conclusion and his claim in the abstract of . – It is the generalized Maxwell equations (many versions of which have been proposed during the last 100 years, see, for instance, ) that should be used as a guideline for proper interpretations of quantum theories. ###### Acknowledgements. 
I greatly appreciate discussions with Profs. A. Chubykalo, L. Horwitz and A. Gersten, and the useful information from Profs. D. Ahluwalia, A. F. Pashkov, E. Recami and M. Sachs. Zacatecas University, México, is thanked for awarding the professorship. This work has been partly supported by the Mexican Sistema Nacional de Investigadores and the Programa de Apoyo a la Carrera Docente.
no-problem/9911/quant-ph9911017.html
ar5iv
text
# Creating a low-dimensional quantum gas using dark states in an inelastic evanescent-wave mirror ## I Introduction The only route to quantum degeneracy in a dilute atomic gas which has been experimentally successful so far, is evaporative cooling . Other routes to quantum degeneracy, in particular all-optical methods, have been elusive until now. Nevertheless it is interesting as well as important to keep exploring alternative methods which do not rely on atomic collisions. Such systems may be held away from thermal equilibrium and may therefore constitute a closer matter-wave analogy to the optical laser, as compared to atom lasers based on Bose-Einstein condensation . In addition, the physics will be quite different because a different physical, viz. optical, interaction would be used to populate the macroscopic quantum state: the amplification of a coherent matter wave, while emitting photons. Several proposals for an optically-driven atom laser have previously been published. They have in common that a macroscopic quantum state is populated using an optical Raman transition . One problem that has been anticipated from the beginning, is heating and trap loss caused by reabsorption of the emitted photons. Therefore later proposals and current experiments have aimed at a reduced dimensionality, based on optical pumping close to a surface . At the same time, there is also increasing interest in the low-dimensional equivalents of Bose-Einstein condensation in ultracold gases . Here we argue that an evanescent-wave mirror is particularly promising for loading a low-dimensional trap close to a surface. We extend previous work so that it can be applied to the alkali atoms. Being the favorite atoms for laser-cooling, the application to the alkalis will make this kind of experiments more easily accessible. In comparison to previous experiments with metastable rare gas atoms , the alkalis have the advantage that they do not suffer from Penning ionization. Furthermore, several alkali species have been cooled to the Bose-Einstein condensation, which makes them good candidates to create also low-dimensional quantum degeneracy. The extension to the alkalis is nontrivial because the splitting between the hyperfine ground states is not large enough to address them separately with far detuned lasers. The resulting “cross-talk” would lead to large photon scattering rates in the trap, as will be explained below. We propose to use circularly-polarized evanescent waves and to trap alkali atoms in “dark states”. This allows the detuning to be increased and the photon scattering rate to be reduced by several orders of magnitude. Finally, we note that a trap for ultracold atoms close to a surface is very interesting from the viewpoint of cavity QED. The proximity of a dielectric surface can change the radiative properties of atoms. In particular, for circularly-polarized evanescent waves it has been predicted that the radiation pressure is not parallel to the Poynting vector . However, this is beyond the scope of the present paper. ## II Generic scheme ### A Optical trap loaded by a spontaneous Raman transition We start by briefly reviewing the generic idea of loading an optical atom trap by an optical (Raman) transition. The original proposal described in Ref. is based on a $`\mathrm{\Lambda }`$-type configuration of three atomic levels, which we will indicate here by $`|t`$, $`|b`$ and $`|e`$, as shown in Fig. 1. 
The levels $`|t`$ and $`|b`$ (for “trapping” and “bouncing” state) are electronic ground (or metastable) states, $`|e`$ is an electronically excited state. An optical trap is created for atoms in level $`|t`$ using the optical dipole (“light shift”) potential induced by a far off-resonance laser. Level $`|b`$ serves as a reservoir of ultracold atoms, prepared by laser cooling. The ultracold atoms are transferred from the reservoir to the trap by a spontaneous Raman transition $`|b|e|t`$. Our goal is to load a large number of atoms into a single bound state $`|t,v`$ of the trapping potential, where $`v`$ is the vibrational quantum number. If the atoms are bosons, the transition probability into state $`|t,v`$ should be enhanced by a factor $`1+N_v`$, where $`N_v`$ is the occupation of the final state $`|t,v`$. If the rate at which atoms are pumped from $`|b`$ to $`|t`$ exceeds a threshold value, the buildup of atoms in $`|t,v`$ should rapidly increase. The Raman filling process can thus be stimulated by the matter wave in the trapped final state, leading to matter-wave amplification. The associated threshold is reached when, for some bound state $`|t,v`$, the unenhanced filling rate exceeds the unavoidable loss rate. The threshold can be lowered either by decreasing the loss rate or by increasing the overlap of wavefunctions (“Franck-Condon factor”). Ideally, the energy separation between states $`|t`$ and $`|b`$ should be so large that they can be addressed separately by different lasers. Examples are alkaline earth atoms or metastable rare gas atoms. The loading scheme has been applied successfully to load metastable argon atoms into a far off-resonance lattice and into a quasi-2D planar matter waveguide . The two metastable states of Ar\* are separated by 42 THz. In this paper we concentrate on <sup>87</sup>Rb atoms, which we use in our experiments. Here the separation between the two hyperfine ground states $`F=1,2`$ is only 6.8 GHz. This requires a modification as will be discussed in Sec. III. ### B The problem of photon reabsorption It has been recognized early on that the photon emitted during the Raman process can be reabsorbed and thus remove another atom from the trap. This will obviously counteract the gain process and may even render the threshold unreachable . This conclusion may be mitigated in certain situations, such as in highly anisotropic traps , in small traps with a size of the order of the optical wavelength and in the so-called festina lente regime . Our approach is to aim for a low-dimensional geometry, with at least one strongly confining direction $`z`$, so that the Lamb-Dicke parameter $`kz_0=\sqrt{\omega _\mathrm{R}/\omega }1`$ in that direction . Here $`k`$ is the optical wavevector, $`z_0=\sqrt{\mathrm{}/2m\omega }`$ is the r.m.s. width of the ground state of the trap with frequency $`\omega `$ for an atomic mass $`m`$, and $`\omega _\mathrm{R}=\mathrm{}k^2/2m`$ the recoil frequency. A low-dimensional geometry should reduce the reabsorption problem because the emitted photon has a large solid angle available to escape without encountering trapped atoms. Furthermore, we expect to compress the phase-space density by loading the low-dimensional optical trap by an evanescent-wave mirror (“atomic trampoline”), using optical pumping. ## III Loading a low dimensional trap ### A Inelastic evanescent-wave mirror We now discuss the specific way in which the generic scheme discussed above is being implemented in our experiment. 
Our implementation is based on an evanescent-wave mirror, using explicitly the level scheme of <sup>87</sup>Rb atoms. The role of the states $`|t`$ and $`|b`$ is played by the two hyperfine sublevels of the ground state $`5s^2S_{1/2}(F=1,2)`$, which are separated by 6.8 GHz. We take the lower level, $`F=1`$ as the “bouncing state” $`|b`$ and the upper level, $`F=2`$, as the “trapping state” $`|t`$. We consider a configuration of laser beams as sketched in Fig. 2(a). An evanescent wave is generated by total internal reflection of a “bouncer” beam inside a prism. This bouncer is blue detuned with respect to a transition starting from the $`F=1`$ ground state, with detuning $`\delta _1`$. A second laser beam, the “trapper” beam, is incident on the prism surface from the vacuum side and is partially reflected from the surface. The reflected wave interferes with the incident wave to produce a set of planar fringes, parallel to the prism surface. Note that even with 4% reflectivity of an uncoated glass surface the fringe visibility will be 0.38. The trapper beam can be either red or blue detuned, the former having the advantage that it automatically provides transverse confinement. In Fig. 2 we sketched the situation for blue detuning, confining the atoms vertically in the intensity minima, but allowing them to move freely in the transverse direction. We assume that the loss rate due to moving out of the beam is slow compared to other loss rates, such as that due to photon scattering. Alternatively one can obtain transverse confinement by using multiple trapper beams from different directions, which interfere to yield a lattice potential. Similarly, one can create an optical lattice using multiple bouncer beams, see e.g. Fig. 4. Ultracold atoms, in the bouncing state $`F=1`$, are dropped onto the prism and are slowed down by the repulsive light-shift potential induced by the bouncer beam, see Fig. 2(b). If the potential is strong enough, the atoms turn around before they hit the prism and bounce back up. This is called an “evanescent-wave mirror”, or “atomic trampoline” and has been demonstrated by several groups . We are here interested in interrupting the bouncing atoms halfway during the bounce, near the classical turning point. The interruption can occur when the atom scatters an evanescent-wave photon and makes a Raman transition to the other hyperfine ground state, $`F=2`$. This Raman transition yields a sudden change of the optical potential, because for an atom in $`F=2`$ the detuning is larger by approximately the ground state hyperfine splitting. This mechanism has been used for evanescent-wave cooling . In our case, we tailor the potentials so that the bouncer potential dominates for $`F=1`$ and the trapper for $`F=2`$. The atom is thus slowed down by the bouncer and then transferred into the trapping potential. As long as the probability for undergoing a Raman transition during the bounce is not too large, $`p1e^2`$, the transition will take place predominantly near the turning point, for two reasons. Firstly, the atoms spend a relatively long time near the turning point. Secondly, the intensity of the optical pump (the evanescent wave) is highest in the turning point. The probability that the atoms end up in the lowest bound state of the trapping potential has been estimated to be on the order of $`1020\%`$, albeit for somewhat different geometries . The resulting compression of a three-dimensional cloud into two dimensions is in fact dissipative, i.e. 
it can increase the phase-space density. ### B Phase space compression As an illustration we give the result of a classical trajectory simulation. We start from the dimensionless phase-space distribution $`\mathrm{\Phi }(z,v)`$ for the vertical motion of a single atom cooled in optical molasses, shown in Fig. 3(a). Note that $`v`$ denotes here the vertical velocity component and that we drop the subscript $`z`$ in $`v_z`$ throughout this paper. The phase-space density has been made dimensionless by dividing it by the phase-space density of quantum states. The latter is given by $`m/h`$ (states per unit area in the $`(z,v)`$ space), where $`h`$ is Planck’s constant. The distribution $`\mathrm{\Phi }(z,v)`$ can be interpreted as the probability that the atom is in an arbitrary quantum state localized around $`(z,v)`$. The atom, described by the classical distribution $`\mathrm{\Phi }(z,v)`$, is assumed to enter the evanescent wave at a velocity $`v_i=p_i/m`$, determined by its velocity in the molasses $`v`$ and the height $`z`$ from which it falls. Inside the evanescent wave the atom moves as a point particle along a phase-space trajectory $`(z(t),v(t))`$, governed by the evanescent-wave potential. Assuming that the saturation parameter is small, the potential is given by $`U(z)=U_0\mathrm{exp}(2\kappa z)`$, where $`\kappa =k_L\sqrt{n^2\mathrm{sin}^2\theta _i1}`$ is the decay constant of the evanescent wave, with $`k_L`$ the free-space laser wavevector, $`\theta _i`$ the angle of incidence and $`n`$ the index of refraction. Similarly, the photon scattering rate is given by $`\mathrm{\Gamma }^{}(z)=\mathrm{\Gamma }_0^{}\mathrm{exp}(2\kappa z)`$, with $`\mathrm{\Gamma }_0^{}/U_0=\mathrm{\Gamma }/\mathrm{}\delta `$, where $`\mathrm{\Gamma }`$ is the natural linewidth and $`\delta `$ is the laser detuning. Finally, the Raman transition rate is given by $`R(z)=R_0\mathrm{exp}(2\kappa z)`$, with $`R_0=b\mathrm{\Gamma }_0^{}`$, where $`b`$ is the branching ratio, i.e. the probability that photon scattering leads to a Raman transition. The Raman rate gives the local probability per unit time that the trajectory is interrupted. Due to the stochastic nature of the spontaneous Raman transition, we obtain a probability distribution over pumping coordinates where the trajectory through phase space is interrupted due to the transition, $`\rho (z_p,v_p)`$. Not surprisingly, we see in Fig. 3(b) that this distribution has the shape of a “mountain ridge” following the phase-space trajectory. Our goal is to load the pumped atoms into a bound state of a trap near the surface. Therefore the number of interest is the peak value of $`\rho (z_p,0)`$, which occurs for a value of $`z`$ near the turning point. In Fig. 3(b) we see that the peak value of $`\rho (z_p,0)`$ is about $`1000`$ times higher than the initial peak value of $`\mathrm{\Phi }(z,0)`$ in optical molasses (Fig. 3(a)). The peak value of 0.11 can be interpreted as the trapping probability in the ground state of the trap that collects the atoms. This value is quite comparable to previous calculations by different methods . The position of the turning point should be adjusted to coincide with the center of the trap, for example by adjusting $`U_0`$ or $`\kappa `$. The trapping probability can be maximized by changing the value of $`\kappa `$ and/or the ratio $`U_0/R_0`$, in such a way that $`U_0/R_0=mv_i/2\kappa `$. This corresponds to a situation where the probability for reaching the turning point without being optically pumped is $`e^1`$. 
If the pumping rate is very high, too many atoms are pumped before they reach the turning point. If the pumping rate is very low, too many atoms bounce without being pumped at all. If the optical pumping is done by the same laser that induces the bouncing potential, we have $`U_0/R_0=\mathrm{}\delta /\mathrm{\Gamma }b`$, so that we obtain an optimum value for the detuning: $`\delta /\mathrm{\Gamma }=bp_i/2\mathrm{}\kappa `$. Experimentally it may be advantageous to use separate lasers for the mirror potential and for pumping so that this restriction on the detuning does not apply. Obviously we should be somewhat careful in assigning quantitative meaning to the result of our classical simulation. In particular we should verify that the distribution $`\rho (z_p,0)`$ is broad on the characteristic length scale of the atomic wavefunction near the turning point. The latter is determined by the slope of the bouncing potential near the turning point and is given by $`\kappa ^1(\mathrm{}\kappa /p_i)^{2/3}.`$ For the same parameters as used in Fig. 3(b) this characteristic width is $`22`$ nm, indeed smaller than the width of $`\rho (z_p,0)`$, which is $`50`$ nm. ## IV Photon scattering ### A Metastables versus alkalis The level scheme used in the proposal of Ref. was inspired by metastable rare gas or alkaline earth atoms. In those cases two (meta)stable states can usually be found with a large energy separation. This makes it relatively straightforward to separate the bouncing and trapping processes. We extend those ideas here, applying them to the typical level scheme of the alkalis. In this case the separation between two stable states is limited to the ground state hyperfine splitting. We therefore turn to the issue of photon scattering by atoms after they have been transferred into the trap. More specifically, our main concern is scattering of bouncer light. The rate of scattering light from the trapping laser can in principle be made negligibly small by choosing a large enough detuning. This can be done because the trapping potential can be much shallower than the bouncing potential and therefore need not be $`F`$-state specific. For example, if the atoms are dropped from 6 mm above the prism, their kinetic energy will be 0.6 mK, corresponding to a minimum bouncing potential of 12 MHz. For the trapping potential, on the other hand, a depth of less than 1 MHz should be sufficient, since most of the external energy of the atom has been used for climbing the bouncing potential. For the bouncing state, $`F=1`$, the trapping potential then appears as a small ripple superimposed on the bouncing potential. The scattering of bouncer light is more difficult to avoid. Ideally, the interaction of the atoms with the bouncer should vanish completely as soon as they are transferred into the $`F=2`$ state. In reality, however, the bouncer connects both ground states, $`F=1`$ and $`F=2`$ to the excited state through a dipole-allowed transition. We can approach the ideal situation by a proper choice of the bouncer detuning. For the simplified three-level scheme of Fig. 1, a limitation is imposed by the ground state hyperfine splitting $`\delta _{\mathrm{GHF}}=2\pi \times 6.8`$ GHz. A good distinction between the $`F=1`$ and $`F=2`$ states is only obtained if the bouncer detuning is small, $`\delta _1\delta _{\mathrm{GHF}}`$. However, a very small detuning is undesirable because it leads to an increased photon scattering rate and thus heating during the bounce. 
The number of photons scattered during the bounce is approximately given by $`\overline{n}_{\mathrm{sc}}(\mathrm{\Gamma }/\delta _1)p_i/\mathrm{}\kappa ,`$ where $`p_i60\mathrm{}k_L`$ is the momentum of a Rb atom falling from a height of about 6 mm and $`\kappa 0.15`$ $`k_L`$ (for an angle of incidence $`\theta _i=\theta _c+0.01`$). If we operate in the regime $`b\overline{n}_{\mathrm{sc}}2`$ (i.e. until the turning point we have $`b\overline{n}_{\mathrm{sc}}1`$) and set $`b=0.5`$, this requires a detuning $`\delta _1100\mathrm{\Gamma }2\pi \times 0.6`$ GHz. After the atom has been transferred into the trapping potential for $`F=2`$, the detuning of the bouncer will be $`\delta _2=\delta _1+\delta _{\mathrm{GHF}}2\pi \times 7.4`$ GHz $`1200\mathrm{\Gamma }`$. The trapped atoms will then scatter bouncer light at an unacceptably high rate of typically $`5\times 10^3`$ s<sup>-1</sup>. ### B Dark states The limitation imposed by the hyperfine splitting, $`\delta _1\delta _{\mathrm{GHF}}`$, can be overcome by making use of dark states. This requires a more detailed look at the Zeeman sublevels of the hyperfine ground states. We consider the state $`|F=m_F=2`$ and tune the bouncer laser to the D1 resonance line (795 nm, $`5s^2S_{1/2}5p^2P_{1/2}`$). If this light is $`\sigma ^+`$ polarized, the selection rules require an excited state $`|F^{}=m_F^{}=3`$, which is not available in the $`5p^2P_{1/2}`$ manifold and so $`|F=m_F=2`$ is a dark state with respect to the entire D1 line. The state selectivity of the interaction with bouncer light now no longer depends on the detuning, but rather on a selection rule. Therefore the bouncer detuning can now be chosen large compared to $`\delta _{\mathrm{GHF}}`$. The new limitation on the detuning is the fine structure splitting of the D-lines, 7.2 THz for Rb. This reduces the photon scattering rate by 3 orders of magnitude. Note that the heavier alkalis are more favorable in this respect because of the larger fine structure splitting. The price to be paid is the restriction to two specific Zeeman sublevels $`|F=\pm m_F=2`$ and the need for a circularly-polarized evanescent wave. ## V Circularly-polarized evanescent waves In this section we briefly describe two methods for the generation of evanescent waves with circular polarization, using either a single bouncer beam or a combination of two. We also calculate the resulting photon scattering rates. There are several other ways to generate circularly-polarized evanescent waves using multiple beams. The two methods described here serve as examples. ### A Single beam A circularly-polarized evanescent wave can be obtained using a single incident laser beam if it has the proper elliptical polarization, i.e. the proper superposition of $`s`$ and $`p`$ polarization. The $`s`$, or TE, mode yields an evanescent electric field parallel to the surface and perpendicular to the plane of incidence. The evanescent field of the $`p`$, or TM, mode is elliptically polarized in the plane of incidence, with the long axis of the ellipse along the surface normal. It is straightforward to calculate the input polarization that yields circular polarization in the evanescent wave. One finds that the required ellipticity of the input polarization is the inverse of the refractive index, $`n^1`$. Here the ellipticity is defined as the ratio of the minor and major axes of the ellipse traced out by the electric field vector. 
The required orientation $`\phi `$ of the ellipse depends on the angle of incidence, $`\mathrm{tan}\phi =\sqrt{n^2\mathrm{sin}^2\theta _i1}/\mathrm{cos}\theta _i`$. Close to the critical angle, $`\phi 0`$, and the ellipse has its major axis perpendicular to the plane of incidence. Following this prescription, the resulting evanescent wave will be circularly polarized, with the plane of polarization perpendicular to the surface. However, the plane of polarization is not perpendicular to the in-plane component of the $`k`$-vector. Here the evanescent wave differs from a propagating wave, which has its plane of polarization always perpendicular to the $`k`$-vector (and Poynting vector). For the evanescent wave the plane of circular polarization is also perpendicular to the Poynting vector. However, the Poynting vector is not parallel to the in-plane $`k`$-vector, but tilted sideways by an angle $`\pm \chi `$ given by $`\mathrm{tan}\chi =\sqrt{n^2\mathrm{sin}^2\theta _i1}=\kappa \lambda _0/2\pi `$, with $`\lambda _0`$ the vacuum wavelength of the light. Close to the critical angle, $`\chi 0`$ and the plane of polarization becomes perpendicular to the in-plane wave vector, as for propagating waves. We can estimate the photon scattering rate of an atom in the dark state $`|F=m_F=2`$, residing in the circularly polarized evanescent wave of the bouncer beam. Ideally, this scattering rate is due only to off-resonant excitation to the $`5p^2P_{3/2}`$ manifold (D2 line; 780 nm.) Choosing the bouncer detuning at 100 GHz (with respect to the D1 line) yields a scattering rate $`\mathrm{\Gamma }_{D2}^{}=3.5`$ s<sup>-1</sup>. In practice there will also be scattering due to polarization impurity. For example, assuming this impurity to be 10<sup>-3</sup>, we obtain a scattering rate of $`\mathrm{\Gamma }_{D1,\sigma ^{}}^{}=10.6`$s$`^1.`$ ### B Two crossing $`s`$-waves Alternatively, evanescent waves of circular polarization can also be produced using two (or more) bouncer beams. If we cross two TE-polarized evanescent waves at 90, we will produce a polarization gradient as sketched in Fig. 4. Lines of circular polarization are now produced with the plane of polarization parallel to the surface. Lines of opposite circular polarizations alternate, with a distance of approximately $`\lambda _0/2\sqrt{2}`$ between neighbouring $`\sigma ^+`$ and $`\sigma ^{}`$ lines. This configuration offers interesting opportunities. The light field can be decomposed into two interleaved standing wave patterns, one for $`\sigma ^+`$ and one for $`\sigma ^{}`$ polarization. An atom in the state $`|F=m_F=2`$ is dark with respect to the $`\sigma ^+`$ standing wave only. However it does interact with the $`\sigma ^{}`$ standing wave and therefore can be trapped in its nodes. The bouncer light will thus play a double role. First it slows the atoms on their way down to the surface. Then, after the atoms have been optically pumped, the bouncer light will transversely confine the atoms. We thus expect a one-dimensional lattice of atomic quantum wires with alternating spin states, very much like a surface version of previously demonstrated optical lattices . The transverse lattice structure may also allow the use of transverse Sisyphus cooling or Raman sideband cooling . It is not strictly necessary to cross the evanescent waves at a right angle. It does have the advantage that the total intensity is constant across the polarization pattern. 
The same could also be achieved by using counterpropagating evanescent waves with orthogonal polarizations. For any other angle, the intensity varies spatially so that the atoms bounce on a corrugated optical potential. However, even with a uniform intensity, most atoms will see a corrugated potential. The potential depends on the local polarization and on the atom’s magnetic sublevel through the Clebsch-Gordan coefficients. Only for the state $`|F=1,m_F=0`$ the dipole potential is independent of the polarization. One could of course prepare the falling atoms in $`|F=1,m_F=0`$ using optical pumping. The local circular polarization $`\sigma ^\pm `$ will tend to pump the atom into the local dark state $`|F=\pm m_F=2`$. However the optical pumping transition has a branching ratio of only 1/6 (using a dedicated resonant pumping beam). By contrast, for an atom starting in $`|F=m_F=1`$, the branching ratio is 1/2. Therefore starting in $`|F=1,m_F=0`$ is conceptually simple, but probably not optimal. A disadvantage of creating circularly polarized evanescent waves on a lattice is that an additional source of photon scattering appears. We approximate the transverse potential near the minimum as a harmonic oscillator. Choosing again the bouncer detuning at 100 GHz, the harmonic oscillator frequency will be about $`\omega =2\pi \times 480`$ kHz. An atom in state $`|2,2`$, in the ground state of the harmonic oscillator associated with the $`\sigma ^{}`$ node has a gaussian wavefunction with wings extending into the region with $`\sigma ^{}`$ light. The resulting scattering rate can be estimated as $`\mathrm{\Gamma }_{\mathrm{HO}}^{}\frac{1}{4}\omega \mathrm{\Gamma }/(\delta _1+\delta _{\mathrm{GHF}})52`$ s<sup>-1</sup>, where the bouncer detuning was again chosen at 100 GHz. The scattering rate can be further suppressed to $`\mathrm{\Gamma }_{\mathrm{HO}}^{}18`$ s<sup>-1</sup> by raising the bouncer detuning to 300 GHz. For an even larger detuning the off-resonant scattering by the D2 line, $`\mathrm{\Gamma }_{D2}^{}`$, starts to dominate. ### C Feasibility We should point out that our two examples to produce circularly-polarized evanescent waves are not meant to be exhaustive. Several other methods can be devised, some being more experimentally challenging than others. For the single-beam method the incident beam must be prepared with the correct ellipticity as well as the correct orientation. It will probably be difficult to measure the polarization of the evanescent wave directly. One should therefore prepare the incident polarization using well-calibrated optical retarders and using calculated initial settings. The fine-tuning could then be done, e.g. by optimizing the lifetime of the trapped atoms in the dark state. For the two-beam method of Fig. 4 we have assumed for simplicity that the two interfering evanescent waves have the same decay length (i.e. the same angle of incidence) and the same amplitude. Equal decay lengths for the two waves can be enforced by making use of a dielectric waveguide . Alternatively, one may deliberately give the two beams a slightly unequal decay length, and at the same time give the wave with the shorter decay length a larger amplitude. In this case there will always be one particular height above the surface where the two beams have equal amplitude, as required for interfering to circular polarization. Therefore this procedure would make the circular polarization somewhat self-adjusting. 
The height where circular polarization occurs is tunable by changing the relative intensity of the two beams. Obviously, the final word on the feasibility can only be given experimentally. In our experiment we are presently pursuing a variation on Fig. 4, including the just mentioned self-adjusting properties. ## VI Conclusion In conclusion, we have shown that inelastic bouncing on an evanescent-wave mirror (“atomic trampoline”) is a promising method for achieving high phase-space density in low-dimensional optical traps. The phase space compression is achieved by means of a spontaneous Raman transition, which is highly spatially selective for atoms near the turning point of the evanescent-wave mirror potential. We have extended previous work based on the level schemes of metastable rare gas atoms for application to alkali atoms. This requires suppression of the high photon scattering rate, resulting from the relatively small ground hyperfine splitting in the alkalis. We have shown how the photon scattering rate can be reduced by several orders of magnitude, by trapping the atoms in dark states. This requires the use of circularly-polarized evanescent waves, which can be generated by several methods as discussed. If built up from multiple beams, the evanescent field may play a double role, generating a bouncing as well as a trapping potential. This could lead to an array of quantum wires for atoms. ## VII Acknowledgments This work is part of the research program of the “Stichting voor Fundamenteel Onderzoek van de Materie” (Foundation for the Fundamental Research on Matter) and was made possible by financial support from the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek” (Netherlands Organization for the Advancement of Research). The research of R.S. has been made possible by financial support from the Royal Netherlands Academy of Arts and Sciences.
no-problem/9911/astro-ph9911289.html
ar5iv
text
# TWO DIFFERENT ACCRETION CLASSES IN SEYFERT 1 GALAXIES AND QSOs ## 1. Introduction The QSO and active galaxy phenomena can be successfully explained by the accretion of gas onto central massive black holes (MBHs) (Lynden-Bell (1969); Rees (1984)). Recent progress in both theory and observation strongly supports this explanation. First, compact massive dark objects—possibly massive black holes—have been detected in the cores of some nearby galaxies based on high spatial resolution observations of stellar dynamics (Kormendy & Richstone (1995); Magorrian et al. (1998)). Second, the radiation spectra from the nuclei of Sgr A and NGC 4258 can be modelled well by the advection-dominated accretion flows (ADAF) onto MBHs ( Narayan, Yi & Mahadevan (1995); Lasota et al. (1996)). In addition, the iron K$`\alpha `$ line profile suggests the existence of a MBH and cold disk material in the center of Seyfert 1 galaxies (Iwasawa et al. (1996)). Currently, four solutions of the accretion process around MBHs are known (Chen et al. (1995)). The most famous two are the thin disk model (Shakura & Sunyaev (1973)) and the ADAF model (Ichimaru (1977); Rees et al. (1982) Narayan & Yi 1994, 1995a, 1995b; Abramowicz et al. (1995)), which generally result from different accretion rate. The radiation spectra output of these models are distinctly different (Narayan, Mahadevan & Quataert (1998)). It is therefore reasonably easy to tell from observations whether a system has a thin or ADAF disk. The ADAF model provides a good fit to the spectra of Galactic Black Hole X-ray Binary sources (BHXBs), which should be a low mass accretion version of Seyfert 1 galaxies and QSOs (Esin, McClintock & Narayan (1997); Esin et al. (1998)). This implies that at least some Seyfert galaxies and/or QSOs, which were believed to have a thin disk accretion, may instead accrete material via the ADAF mode. Thus the question of whether a QSO or an active galaxy has an ADAF or a thin accretion disk is of great current interest. To date, the most reliable estimate of masses of central MBHs in Seyfert 1 galaxies uses reverberation data (Peterson & Wandel (1999)). With known MBH masses, the accretion rates of active galaxies and QSOs can be estimated from their bolometric or ionizing luminosity (Wandel, Peterson & Malkan (1999), hereafter WPM). Furthermore, a remarkable correlation between the soft X-ray spectral indices and the full width at half maximum (FWHM) of H$`\beta `$ has been found by Boller, Brandt & Fink (1996) and Wang, Brinkmann & Fink (1996). A possible interpretation is that the correlation is caused by the variation of accretion rate in different objects(Wandel & Yahil (1985); Pounds et al. (1994)). The change in soft X-ray spectral slope from flat to steep may reflect the change in accretion mode from a low state to a high state. It is reasonable to constrain theoretical models by investigating the statistical relation between the soft X-ray spectra and the estimated accretion rates for a sample of objects. In this letter, we show that a sample of QSOs and Seyfert 1 galaxies with available estimated MBH masses can be classified into two distinct classes with different accretion rates. These two classes should correspond to different accretion disks, i.e. ADAF and thin disk. In each class, the soft X-ray spectral slope strongly correlates with the estimated accretion rate. We describe the available data in §2. 
The statistical analysis is presented in §3, and an explanation of the correlation between the X-ray slope and the accretion rate is discussed in §4. ## 2. The Data Assuming that the line-emitting matter is gravitationally bound and hence has a near-Keplerian velocity dispersion (indicated by the line width), WPM estimated the mass of the central black holes for 17 Seyfert 1 galaxies and 2 QSOs using reverberation data. The central MBH masses were also measured by the same method for three additional Seyfert 1 galaxies (Mrk279, NGC3516 and NGC4593) which were not included in WPM’s sample (Ho (1998)). Laor (1998) estimated the MBH masses for 19 QSOs using the H$`\beta `$ width and the empirical relation $`r_{\mathrm{BLR}}(\mathrm{H}\beta )=15L_{44}^{1/2}`$ (Kaspi et al. (1996)), where $`r_{\mathrm{BLR}}`$ is the size of H$`\beta `$ emitting region in the broad line region and $`L_{44}=L(0.11\mu \mathrm{m})`$ in units of $`10^{44}\mathrm{ergs}\mathrm{s}^1`$. Laor’s method may overestimate the black hole mass, as WPM pointed out that the slope of the empirical relation may be flatter than 1/2. Fortunately, there is a common object to Laor and WPM’s sample, the QSO PG0953+414, whose mass was estimated to be $`3\times 10^8\mathrm{M}_{}`$ by Laor, and $`1.5\times 10^8\mathrm{M}_{}`$ by WPM. We will thus use a calibration factor of 0.5 for the central black hole masses of the QSOs in Laor’s sample. Generally, we can estimate the accretion rates for QSOs and Seyfert 1 galaxies from their bolometric luminosity. In WPM’s sample, they list the ionizing luminosity of individual objects, derived from the reverberation and photoionization method. Following that method, we estimate the ionizing luminosities for Mrk279, NGC3516 and NGC4593. Laor (1998) estimated the bolometric luminosities for the QSOs in his sample using the empirical relation $`L_{\mathrm{bol}}=8.3\nu L_\nu (3000\mathrm{\AA })`$ (Laor & Draine (1993)) for the objects of the sample. For the common object, PG0953+414, we compare the estimated $`\mathrm{L}_{\mathrm{bol}}`$ in Laor’s table 1 with $`\mathrm{L}_{\mathrm{ion}}`$ in WPM, and find that $`L_{\mathrm{bol}}=3.8L_{\mathrm{ion}}`$ and $`L_{\mathrm{ion}}/L_{\mathrm{Edd}}0.53L_{\mathrm{bol}}/L_{\mathrm{Edd}}`$. We take this relation as an estimation of the Eddington ratio of ionizing luminosity for the objects in Laor’s sample. We list in table 1 the central black hole masses and the Eddington ratios of ionizing luminosity for all the objects studied. Since the ionizing and bolometric luminosities have been estimated, and the black hole mass determination is the most reliable one, these values give a direct measurement of the Eddington ratio, $`L/L_{\mathrm{Edd}}L_{\mathrm{bol}}/M_{\mathrm{BH}}`$, and hence the accretion rate $`\dot{m}=\dot{M}/\dot{M}_{\mathrm{Edd}}`$. In general, the X-ray emission spectra of QSOs and Seyfert 1 galaxies can be well fitted by a power-law ($`f_\nu \nu ^\alpha `$, where $`\alpha `$ is spectral index and $`\mathrm{\Gamma }=1+\alpha `$ is photon index). The soft X-ray spectra for most objects in table 1 are available in the literature. We list the spectral indices in table 1 both for the soft X-ray range (0.1-2.4keV) observed by ROSAT and for the hard X-ray range (3-10keV) observed by ASCA. For the soft X-ray spectra, we adopt the spectral indices of the fitting by using a power-law with the column density absorption as a free parameter. 
For several objects without such fitting, we use the spectral indices determined assuming a fixed column density absorption as given by the Galactic column density. There is little difference in the ASCA spectra fitted by different Fe K$`\alpha `$ line models. We adopt the photon indices given by Nandra et al. (1997) in their table 4 by fitting 3-10keV spectra with a power-law plus a Schwarzschild geometry disk line. The ASCA photon indices of several objects, which are not included in Nandra et al. (1997), are also listed in table 1. ## 3. Statistical analysis Figure 1 shows the spectral indices as a function of the ratio $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$, and hence the accretion rate. We notice that several objects in our sample, such as 3C273, 3C120, 3C390.3, 3C323.1, PKS1302-102 and PKS2135-147, are radio-loud galaxies or QSOs. The X-ray spectra of radio-loud galaxies and QSOs are generally flatter than those of radio-quiet ones (Laor et al. (1997); Wang, Lu & Zhou (1998)). It may be caused by the X-ray emission from the relativistic jet. Indeed, all the radio-loud objects exhibit flat soft X-ray spectra in our sample. They will be excluded in the following statistical analysis. In addition, the soft X-ray spectrum of NGC3227 is very flat, which may be caused by a dusty warm absorber (Komassa & Fink (1998)). It will also be excluded in the statistical analysis. We apply the Pearson linear correlation test (Press et al. (1992)) to the $`\alpha _{\mathrm{ROSAT}}`$ versus $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ relation for radio-quiet objects. A significant correlation is found, with a linear correlation coefficient $`R=0.80`$ ($`P_\mathrm{r}=1.4\times 10^7`$). The soft X-ray spectra become steeper when the Eddington ratio of the ionizing luminosity increases. It is evident that there is a discontinuity near the critical point of $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})1.4`$ and the data in figure 1 are distributed in two separate classes. One is the lower $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$, representing a lower accretion rate class, the other is the higher $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$, representing higher accretion rate class. The two distinct classes may correspond to the accretion of ADAF and thin disk since the break point corresponds to the critical threshold between the two models (see following section). The Pearson linear correlation coefficients for the two different classes are $`R=0.88`$ ($`P_\mathrm{r}=1.4\times 10^4`$) and $`R=0.89`$ ($`P_\mathrm{r}=2.0\times 10^6`$), respectively. They can be fitted by straight lines: $`\alpha _{\mathrm{ROSAT}}=(2.69\pm 0.24)+(0.69\pm 0.12)\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ for the ADAF class, and $`\alpha _{\mathrm{ROSAT}}=(2.00\pm 0.06)+(0.60\pm 0.08)\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ for the thin disk class. Most Seyfert 1 galaxies in the WPM sample belong to the ADAF class and all QSOs in Laor’s sample belong to the thin disk class. It is interesting to note that five Seyfert 1 galaxies (Mrk110, Mrk335, Mrk509, Mrk590 and NGC4051) join in the thin disk class. Those Seyfert 1s have small central black holes but normal Seyfert luminosities. Those support the existence of the two-class distribution. The relation between $`\mathrm{\Gamma }_{\mathrm{ASCA}}`$ and $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ is presented in figure 2. There are only 16 objects in figure 2 because the others have no ASCA data. 
We find that there is a linear correlation between $`\mathrm{\Gamma }_{\mathrm{ASCA}}`$ and $`L_{\mathrm{ion}}/L_{\mathrm{Edd}}`$ with a correlation coefficient $`R=0.74`$ ($`P_\mathrm{r}=0.006`$) (after we exclude the radio-loud objects and NGC3227). The hard X-ray spectra become steeper when $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ increases. As figure 2 shows, the photon indice $`\mathrm{\Gamma }_{\mathrm{ASCA}}`$ increases with increasing accretion rate for ADAF class. It is hard to determine the behavior of the hard X-ray spectra vs. $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ for the thin disk class because there are only 4 objects. Further observations are needed to confirm the classification. ## 4. Discussion We find two distinct classes in the $`\alpha _{\mathrm{ROSAT}}`$ versus $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ plane for a sample of Seyfert 1 galaxies and QSOs. This classification is striking because it reveals the existence of two accretion modes in these objects. The critical threshold between the two classes is $`\mathrm{Log}(L_{\mathrm{ion}}/L_{\mathrm{Edd}})1.4`$ (see in figures 1 and figures 2). The corresponding $`\mathrm{Log}(L_{\mathrm{bol}}/L_{\mathrm{Edd}})`$ is about -1.1, which is just the critical accretion rate $`\mathrm{Log}(\dot{m}_{crit})1.1`$ (see the figure 7 in Narayan, Mahadevan & Quataert 1998). Since $`\dot{m}_{crit}\alpha ^2`$ (Esin, McClintock & Narayan (1997)), where $`\alpha `$ is the viscosity parameter, the observed discontinuity in figures 1 suggests $`\alpha 0.3`$. Below $`\dot{m}_{\mathrm{crit}}`$, the accretion is via an ADAF at small radii and a thin disk at large radii. Because much of the viscously generated energy is advected to the black hole and $`L_{\mathrm{bol}}/L_{\mathrm{Edd}}30\dot{m}^2`$ (for the viscosity parameter $`\alpha =0.3`$, Esin, McClintock & Narayan (1997)), we can derive that $`\dot{m}`$ of the ADAF class in our sample is distributed in the range (0.01–0.08), which corresponds to the low and intermediate states of accretion in BHXBs. The existence of the ADAF class in Seyfert 1s and QSOs is also supported by the fact that NGC 4151 has almost an identical spectrum to the BHXB GX339-4 (Zdziarski et al. (1998)). Above $`\dot{m}_{\mathrm{crit}}`$, the accretion is via a thin disk and the accretion rate $`\dot{m}`$ is approximately $`L_{\mathrm{bol}}/L_{\mathrm{Edd}}`$. It may correspond to the high and/or very high states in BHXBs (Esin, McClintock & Narayan (1997)). According to the best fitting straight lines for the two classes, we derive that $`\alpha _{\mathrm{ROSAT}}\dot{m}^{1.38}`$ for the ADAF class, and $`\alpha _{\mathrm{ROSAT}}\dot{m}^{0.60}`$ for the thin disk class. For objects in the thin disk class, it is easy to understand that the soft X-ray slope steepens with the increase in accretion rate and the soft X-ray spectra are steeper than the hard X-ray spectra. The temperature of the inner thin disk goes up while the accretion rate increases, therefore the big blue bump moves into EUV and/or soft X-ray range and the soft X-ray spectrum becomes steeper. As $`\dot{m}`$ increases, the disk flux to irradiate the corona increases. This will cause the corona to cool more efficiently because of Compton cooling, and so the hard X-ray slope will steepen with the increasing $`\dot{m}`$. This cannot be confirmed in our sample because of the lack of ASCA observations for the objects in the thin disk class. 
For the ADAF class objects, the X-ray spectra in the ROSAT and ASCA bands both steepen when $`\dot{m}`$ increases. This contradicts the ADAF radiation model of the low accretion state with $`\dot{m}(0.010.08)`$, where the calculated X-ray spectra should be harder and smoother because the optical depth goes up and causes a corresponding increase in Compton y-parameter when $`\dot{m}`$ increases (Esin, McClintock & Narayan (1997)). However, it can be understood if we adopt a model where the accretion disk consists of two zones: an outer standard thin disk extending from a large radius down to $`r_{\mathrm{tr}}`$ and an inner ADAF from $`r_{\mathrm{tr}}`$ down to $`r=3`$ (Narayan (1996)). If $`r_{\mathrm{tr}}`$ decreases with increasing $`\dot{m}`$, and if $`r_{\mathrm{tr}}`$ decreases below $`10^{1.5}`$, the ADAF X-ray emission spectra become dramatically softer because of the cooling effect when the radiation from the disk is Compton–scattered by the hot gas in the ADAF. It is similar to the intermediate state in BHXBs (Esin, McClintock & Narayan (1997)). The reflection component would be more important for thin disk and less important for an ADAF since the solid angle is larger than the former case. This would cause the X-ray spectra of the objects in thin disk class softer than those of the objects the ADAF class (Zdziarski, Lubinski, & Smith 1999). We can see in figures 1 and figures 2 that the spectral indices of the objects in the thin disk class tends to be softer than those of the objects in the ADAF class, statistically. The accretion flows may have other forms, e.g. ADIOS (advection dominated inflow and outflow solution, Blandford & Begelman (1999)), which may be a possible alternative explanation for the relation. Detailed models should be explored to reveal more information on the two accretion classification. Recent observations show that 1H0419-577 is a “two-state” soft X-ray Seyfert 1 galaxy (Guainazzi et al. (1998)). In its hard state, the entire 1.8–40keV spectrum can be well-described by a simple flat ($`\mathrm{\Gamma }1.55`$) featureless power-law, while in its soft state, the soft X-ray spectrum is quite steep ($`\mathrm{\Gamma }2.5`$). The 0.5-2.5keV flux changes by a factor of $`6`$ from its soft state to its hard state. Assuming that $`L_{\mathrm{ion}}`$ is proportional to the flux at 0.5-2.5keV, $`L_{\mathrm{ion}}/L_{\mathrm{Edd}}`$ also changes by a factor of 6. Based on its optical observation, the Eddington ratio of ionizing luminosity is estimated to be less than 0.08. So, 1H0419-577 is an object in the ADAF class (Guainazzi et al. (1998)). The Log$`(L_{\mathrm{ion}}/L_{\mathrm{Edd}})`$ of this object is predicted to be $`1.72`$ in its soft state by the fitted straight line for the ADAF class in section 3. Therefore, its Log($`L_{\mathrm{ion}}/L_{\mathrm{Edd}}`$) is about -2.4 in the hard state. The inferred hard X-ray spectral index from figure 2 is consistent with the observations. We summarize our main result as the existence of two different accretion classes: the ADAF and the thin disk accretion class in Seyfert 1 galaxies and QSOs. The soft X-ray spectrum becomes steeper when the accretion rate increases in both classes. The detailed fitting of Seyfert 1 galaxy and QSO spectra with ADAF and/or thin disk models should be explored to reveal the underlying physics, as has been done for BHXBs and low-luminosity active nuclei. We thank an anonymous referee for useful comments which improve this paper. 
We thank Neta Bahcall, Scott Tremaine and Joseph Weingartner for a critical reading.
no-problem/9911/chao-dyn9911014.html
ar5iv
text
# On the Turbulent Dynamics of Polymer Solutions ## Abstract We study properties of dilute polymer solutions which are known to depend strongly on polymer elongation. The probability density function (PDF) of polymer end-to-end extensions $`R`$ in turbulent flows is examined. We demonstrate that if the value of the Lyapunov exponent $`\lambda `$ is smaller than the inverse molecular relaxation time $`1/\tau `$ then the PDF has a strong peak at the equilibrium size $`R_0`$ and a power tail at $`RR_0`$. This confirms and extends the results of . There is no essential influence of polymers on the flow in the regime $`\lambda \tau <1`$. At $`\lambda >1/\tau `$ the majority of molecules is stretched to the linear size $`R_{\mathrm{op}}R_0`$. The value of $`R_{\mathrm{op}}`$ can be much smaller than the maximal length of the molecules because of back reaction of the polymers on the flow, which suppresses velocity gradients thus preventing the polymers from maximal possible stretching. Dynamics of dilute polymer solutions is an important subject both from theoretical and practical points of view. Possible applications rely mainly on the fact that low concentrations of polymer molecules can lead to substantial changes in hydrodynamics. The most striking effect related to polymers is probably the so-called drag reduction in turbulent flows. A consistent explanation of this effect is a long-standing question . One believes that the drag reduction is related to the effective increase of the viscosity due to the presence of polymers . Here we address some aspects of this phenomenon. An important underlying property of polymers is their flexibility . At equilibrium, a polymer molecule coils up into a spongy ball with a radius $`R_0`$. For dilute solutions with concentrations $`n`$ satisfying $`nR_0^31`$, the influence of equilibrium size molecules on hydrodynamic properties can be neglected. When placed in a flow, the molecule is deformed into an elongated structure of ellipsoidal form which can be characterized by its end-to-end extension $`R`$. Since the number $`N`$ of monomers in a long-chain polymer molecule is large, $`R`$ can be much larger than $`R_0`$. It explains why minute amounts of polymers can produce an essential effect. It was shown in that in sufficiently intensive flows polymer molecules get strongly extended due to stretching. This is the key mechanism providing an essential back reaction of the polymer molecules on the flow. Here we consider turbulent dynamics of polymer solutions. We assume that $`R`$ is always much smaller than the viscous length of the turbulent flow, $`r_v`$. Therefore, molecules can be treated as immersed into a spatially smooth external velocity field . In this case the dynamics of polymer stretching is determined only by the gradients of the velocity. Since the gradients in turbulent flows are correlated at the viscous length, all the molecules inside regions with size of the order of $`r_v`$ are subject to the same gradient, and therefore are stretched coherently. As long as one can neglect the hydrodynamic interactions between molecules, the problem is reduced to dynamics of a single molecule. We investigate the behavior of polymer molecules with the extensions $`R`$ satisfying $`R_0RR_{\mathrm{max}}`$, where $`R_{\mathrm{max}}`$ is the maximal size of the polymer. Random walk arguments show that the entropy of such molecules is quadratic in $`R`$ in this interval, which leads to Hooke’s law (see e.g. Ref. ). 
That is why one can expect a linear dynamics of the molecules. Even though hydrodynamic interactions of monomers make the polymer’s dynamics inherently nonlinear, these interactions can be neglected for elongated molecules. This expectation is confirmed by recent experiments with DNA molecules where an exponential relaxation of a single molecule was observed. Numerics and theoretical arguments presented in Ref. also show the linear character of the molecule dynamics for $`R_0\ll R\ll R_{\mathrm{max}}`$. In experiments a number of the molecule’s eigenmodes have been seen. We will take into account only the mode which has the largest relaxation time $`\tau `$, because the other modes are harder to excite in turbulent flows.

A starting point of our theory is the dynamic equation for the vector $`𝑹`$ which can be defined, say, via the inertia tensor (per mass of a monomer) $`R_\alpha R_\beta `$ of the elongated molecule. Then $`𝑹`$ determines the orientation and the largest size of the molecule. We assume the following dynamic equation for the vector (cf. Refs. )

$`\frac{d}{dt}R_\alpha =R_\beta \partial _\beta v_\alpha -\frac{R_\alpha }{\tau },`$ (1)

where $`\tau `$ is the relaxation time. The velocity gradient must be taken at the molecule position. The role of non-linearity in the extended equation for $`𝑹`$ (and in the system of equations for $`N`$ coupled beads) is examined in Ref. . For our purposes this non-linearity, as well as the thermal noise, is irrelevant (see the discussion below). To get rid of the inessential degrees of freedom responsible for the orientation of the molecule we write $`𝑹=R𝒏`$, passing to the absolute value $`R`$ of the vector $`𝑹`$. Then we obtain from Eq. (1) (cf. )

$`\frac{d\rho }{dt}=\zeta -\frac{1}{\tau },\frac{dn_\alpha }{dt}=n_\beta \partial _\beta v_\alpha -\zeta n_\alpha ,`$ (2)

$`R=R_0\mathrm{exp}(\rho ),\zeta =n_\alpha n_\beta \partial _\beta v_\alpha .`$ (3)

We see that the evolution of $`\rho `$ is determined by the scalar function $`\zeta `$, which is a functional of the velocity field. For turbulent flows, where the velocity randomly varies in time, one should use a statistical approach. A natural first step is to take the polymers to be passively embedded in the fluid, disregarding their back reaction on the flow. We will demonstrate that there is a wide region of applicability of this approximation. Neglecting the back reaction we can treat the velocity dynamics as independent of the polymers. Then $`\zeta `$, defined by Eq. (3), is independent of $`\rho `$. We will not specify the velocity statistics. Irrespective of its character one can use large deviation theory (see e.g. Ref. devoted to different aspects of Lagrangian dynamics in turbulent flows). The scheme presented below is valid for any random flow. Integrating Eq. (2) we get

$`\rho (t)=\rho _0+z-\frac{t}{\tau },z=\int _0^t𝑑t^{}\zeta (t^{}),`$ (4)

where $`\rho _0`$ is the value of $`\rho `$ at $`t=0`$. One should keep in mind that the expression (4) for $`\rho `$ is correct only if one can neglect the presence of the boundaries $`R_0`$ and $`R_{\mathrm{max}}`$, where Eq. (1) is violated. The integral $`z`$ in Eq. (4) possesses some universal properties for times much larger than the correlation time $`\tau _\zeta `$ of the random process $`\zeta `$.
For turbulent flows $`\tau _\zeta `$ can be estimated as the characteristic time of the Lagrangian motion on the viscous scale, which coincides with the characteristic inverse strain on this scale. For $`t\gg \tau _\zeta `$ the variable $`z`$ can be considered as a sum of a large number of independent variables. Then, in order to establish the statistics of $`z`$ for fluctuations near its mean value, one can use the central limit theorem. If we are interested in large deviations from the mean, a more general formulation is needed (see e.g. ). Namely, the PDF of $`z`$ can be written as the homogeneous function

$`𝒢(t,z)\simeq \frac{1}{\sqrt{2\pi \mathrm{\Delta }t}}\mathrm{exp}\left[-tS\left(\frac{z-\lambda t}{t}\right)\right],`$ (5)

$`\lambda =\langle \zeta \rangle ,\mathrm{\Delta }=\int 𝑑t^{}\left(\langle \zeta (t)\zeta (t^{})\rangle -\lambda ^2\right).`$ (6)

“The entropy density” $`S`$ is a functional of the velocity statistics. It is impossible to calculate $`S`$ without knowing the statistics explicitly. Fortunately, only general properties of $`S`$ (such as positivity and convexity) are needed for us. The central limit theorem is reproduced by Eq. (5) if one considers the vicinity of the entropy maximum, where $`S(x)\approx x^2/(2\mathrm{\Delta })`$. The constant $`\lambda `$ defined in (6) is the principal Lyapunov exponent of the turbulent flow, which is the average logarithmic rate of growth of the distance between two initially close Lagrangian trajectories.

As follows from Eq. (4), $`𝒢(t,z)`$ determines the conditional probability that $`\rho (t)`$ has the value $`\rho _0+z-t/\tau `$ provided $`\rho (0)=\rho _0`$. Therefore one can write the equation

$`𝒫(t,\rho )=\int 𝑑\rho _0𝒢(t,\rho -\rho _0+t/\tau )𝒫(0,\rho _0)`$ (7)

for the PDF $`𝒫(t,\rho )`$. In the stationary case $`𝒫`$ is $`t`$-independent and Eq. (7) can be treated as a relation determining the PDF. Writing $`𝒫`$ as the Laplace integral $`𝒫(\rho )=\int 𝑑\alpha \mathrm{exp}(-\alpha \rho )\stackrel{~}{𝒫}(\alpha )`$ we observe that the convolution in Eq. (7) becomes a product and the equation can be easily resolved. The value of $`\stackrel{~}{𝒫}(\alpha )`$ is non-zero if

$`\int \frac{dx}{\sqrt{2\pi \mathrm{\Delta }t}}\mathrm{exp}\left[\alpha x-tS\left(\frac{x}{t}+\frac{1}{\tau }-\lambda \right)\right]=1.`$ (8)

Apart from the trivial solution $`\alpha =0`$ this equation defines $`\alpha `$ uniquely. Since $`t\gg \tau _\zeta `$, one can use the saddle-point approximation in calculating the integral (8). It gives the condition

$`\alpha =S^{}(\beta +1/\tau -\lambda ),`$ (9)

where $`\beta `$ is the saddle-point value of the ratio $`(\rho -\rho _0)/t`$. Equating the integral on the left-hand side of Eq. (8) (calculated in the same approximation) to unity we get the equation for $`\beta `$

$$S\left(\beta +\frac{1}{\tau }-\lambda \right)-\beta S^{}\left(\beta +\frac{1}{\tau }-\lambda \right)=0.$$ (10)

It is important that $`\beta `$ is independent of $`t`$ and $`\rho `$. Solving Eq. (10) and substituting the result into Eq. (9) we find the exponent $`\alpha `$. The trivial solution $`\beta =\lambda -1/\tau `$ of Eq. (10), corresponding to $`\alpha =0`$, should be discarded. We conclude that a single component $`\stackrel{~}{𝒫}(\alpha )`$ is non-zero and therefore $`𝒫(\rho )\propto \mathrm{exp}(-\alpha \rho )`$.
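The exponential form of $`𝒫(\rho )`$ is easy to check numerically. The sketch below is our own toy illustration, not taken from the paper: $`\zeta `$ is modeled as Gaussian white noise with mean $`\lambda `$ and intensity $`\mathrm{\Delta }`$, Eq. (2) for $`\rho `$ is integrated for an ensemble of molecules with a crude reflecting wall at $`\rho =0`$ standing in for the equilibrium size, and the slope of the stationary tail is compared with the quadratic-entropy prediction $`\alpha =2(1/\tau -\lambda )/\mathrm{\Delta }`$, i.e. Eq. (12) below.

```python
import numpy as np

# Toy parameters (ours): lambda*tau < 1, so the drift pushes rho toward 0.
lam, tau, Delta = 0.6, 1.0, 0.5
alpha_pred = 2.0*(1.0/tau - lam)/Delta      # cf. Eq. (12) below

dt, nsteps, nmol = 0.01, 5000, 10000
rng = np.random.default_rng(1)
rho = np.zeros(nmol)

for _ in range(nsteps):
    # d rho = (zeta - 1/tau) dt, zeta = lam + white noise of intensity Delta
    rho += (lam - 1.0/tau)*dt + np.sqrt(Delta*dt)*rng.standard_normal(nmol)
    np.maximum(rho, 0.0, out=rho)           # crude reflecting wall at rho = 0

hist, edges = np.histogram(rho, bins=60, density=True)
mids = 0.5*(edges[1:] + edges[:-1])
ok = (hist > 0) & (mids > 0.5)              # fit only the tail of log P(rho)
slope = np.polyfit(mids[ok], np.log(hist[ok]), 1)[0]
print(f"measured alpha = {-slope:.2f}, predicted = {alpha_pred:.2f}")
```

For this Gaussian choice the measured slope reproduces the prediction within statistical errors, illustrating why a single exponential component dominates the stationary PDF.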
Recalculating this distribution of $`\rho `$ into that of $`R`$ we obtain the power tail of the PDF of the molecule size $`R`$

$$𝒫(R)\propto R_0^\alpha R^{-\alpha -1}.$$ (11)

For positive $`\alpha `$ the normalization integral $`\int 𝑑R𝒫(R)`$ is determined by small $`R`$, which means that the majority of molecules has nearly the equilibrium size. On the contrary, the normalization integral diverges at large $`R`$ if $`\alpha <0`$. Then the majority of molecules is strongly stretched.

Another way to obtain the result (11) is to consider the typical fluctuation making the largest contribution to the tail of the PDF. Starting from a nearly equilibrium shape, that is from $`\rho _0\lesssim 1`$, the velocity stretches the molecule up to $`\rho \gg 1`$. The contribution of fluctuations with stretching period $`t`$ is equal to $`𝒢(t,\rho +t/\tau )`$. It has a sharp maximum at a time $`t_{}`$ determined from $`d𝒢(t,\rho +t/\tau )/dt=0`$. This condition gives $`t_{}=\rho /\beta `$. The probability density is thus dominated by fluctuations with stretching period $`t_{}`$. It is proportional to $`𝒢(t_{},\rho +t_{}/\tau )`$, which reproduces Eq. (11) with $`\alpha `$ given by Eq. (9). Note that the characteristic value of the velocity gradient for the relevant fluctuations is given by $`\zeta \sim \rho /t_{}+1/\tau `$ and is of the order of $`1/\tau `$.

Let us establish the dependence of $`\alpha `$ on the control parameter, which is the strength of velocity fluctuations at the viscous length, measured by the Lyapunov exponent $`\lambda `$. As $`\lambda `$ tends to zero, the function $`S(x)`$ contracts to $`x=0`$ and therefore $`\alpha `$ tends to infinity, which implies a strong suppression of the tail. This is quite natural since in a weak flow the molecules are only weakly stretched. Note that even for intense flows the Lyapunov exponent $`\lambda `$ is suppressed in the regions where the rotation rate dominates the strain rate. As $`\lambda `$ increases, the exponent $`\alpha `$ decreases and at a certain level of fluctuations approaches zero. If $`\lambda `$ is close to $`1/\tau `$ then one can use the quadratic approximation for $`S`$, which leads to the law

$$\beta =\frac{1}{\tau }-\lambda ,\alpha =\frac{2}{\mathrm{\Delta }}\left(\frac{1}{\tau }-\lambda \right).$$ (12)

We see that $`\alpha `$ changes its sign at $`\lambda =1/\tau `$. Thus the majority of molecules becomes stretched when $`\lambda >1/\tau `$. This can be interpreted as the criterion for the coil-stretch transition in turbulent flows discussed in .

We can use Eq. (7) only if $`\rho `$ and $`\rho _0`$ belong to the asymptotic region between zero and $`\rho _{\mathrm{max}}`$, where Eq. (1) is valid. The saddle-point approximation used above gives $`\rho _0=\rho -\beta t`$. Thus the above scheme works only if $`t<t_{}=\rho /\beta `$ (here we assume $`\beta >0`$, i.e. $`\alpha >0`$). Then the polymer molecules spend most of the time fluctuating near the equilibrium shape, occasionally getting stretched by strain fluctuations which overcome the elastic reaction. The fluctuations leading to a given $`R`$ have the duration $`t_{}\sim \rho /\beta `$. Since $`\beta `$ tends to zero when $`\lambda \to 1/\tau `$, one should observe a critical behavior $`t_{}\propto |\lambda -1/\tau |^{-1}`$ in accordance with Eq. (12). We see that in the vicinity of $`\lambda =1/\tau `$ the time $`t_{}`$ is much larger than $`\tau _\zeta `$, which justifies our scheme. Similar considerations are valid for $`\alpha <0`$. One can generalize the scheme by taking into account a number of molecular eigenmodes.
Since the critical value of $`\lambda `$ is determined by the inverse relaxation time, in the vicinity of the critical value corresponding to the principal mode the other modes are at most weakly excited. However, they can be important at larger $`\lambda `$.

The rest of the paper is devoted to the discussion of the back influence of the polymers on the flow. A consistent investigation should be based on the complete system of equations coupling the turbulence to the polymers. One of these equations is the modified Navier-Stokes equation

$`(\partial _t+𝒗\cdot \nabla )v_\alpha =-\nabla _\alpha p+\nu \nabla ^2v_\alpha +\nabla _\beta \mathrm{\Pi }_{\alpha \beta },`$ (13)

where $`\mathrm{\Pi }_{\alpha \beta }`$ is the polymer contribution to the stress tensor. Equation (13) should be supplemented with an equation describing the dynamics of $`\mathrm{\Pi }_{\alpha \beta }`$. In the considered case $`\mathrm{\Pi }_{\alpha \beta }`$ can be defined as the sum of the stresses of the polymer molecules in a volume divided by the mass of the fluid inside the volume . We are interested in the situation when the molecules are strongly elongated. Then, due to Hooke’s law, the stress of such a molecule is proportional to $`R_\alpha R_\beta `$. Next, taking the volume smaller than the viscous length we deal with coherently elongated molecules. Therefore the stress tensor can be written as

$`\mathrm{\Pi }_{\alpha \beta }=\mathrm{\Pi }_0\mathrm{exp}(2\rho )n_\alpha n_\beta ,`$ (14)

where $`n_\alpha `$ is a unit vector, $`\mathrm{\Pi }_0\mathrm{exp}(2\rho )`$ is the principal eigenvalue of $`\mathrm{\Pi }_{\alpha \beta }`$ and the elongated molecules correspond to $`\rho >0`$. Then from Eq. (1) we get the same Eqs. (2) for $`\rho `$ and $`𝒏`$, where $`d/dt`$ should be understood as the material derivative $`\partial _t+𝒗\cdot \nabla `$. Thus the velocity $`𝒗`$ is now coupled to $`\rho `$ and $`𝒏`$ via Eqs. (13,14). Note that the constant $`\mathrm{\Pi }_0`$ in Eq. (14) is proportional to the concentration of the polymer molecules.

Let us consider the PDF of $`R`$ without assuming that the flow is unperturbed by the polymers. We start from the case $`\lambda \tau <1`$. One recovers Eq. (11) if the back influence is small, i.e. $`\mathrm{\Pi }\ll \nu \nabla v`$ for the relevant fluctuations, characterized by $`\nabla v\sim 1/\tau `$. Since $`\mathrm{\Pi }\propto R^2`$, the polymer contribution to the stress tensor grows with $`R`$ and the inequality $`\mathrm{\Pi }\ll \nu \nabla v`$ is violated for the molecules with $`R\gtrsim R_{\mathrm{back}}`$. The value of $`R_{\mathrm{back}}`$ can be found from the estimate $`\nu /\tau \sim \mathrm{\Pi }_0R_{\mathrm{back}}^2/R_0^2`$. For $`R\gtrsim R_{\mathrm{back}}`$ the back reaction switches on and suppresses the velocity fluctuation. Hence, the probability of fluctuations producing $`R>R_{\mathrm{back}}`$ is small, and at $`R\gtrsim R_{\mathrm{back}}`$ the PDF decays much faster than prescribed by Eq. (11).

Now we study the case $`\lambda >1/\tau `$. For $`R\ll R_{\mathrm{back}}`$ the polymer stress tensor is small and, as explained above, the velocity is decoupled from the elastic degrees of freedom. Since the stretching is stronger than the elastic force, $`R`$ grows for any typical velocity realization. On the other hand, at $`R\gtrsim R_{\mathrm{back}}`$ the polymer stress influences the velocity, suppressing it strongly for sufficiently large $`R`$. This leads to a decrease in $`R`$. Therefore, the majority of molecules has sizes near an optimal size $`R_{\mathrm{op}}>R_{\mathrm{back}}`$. The PDF is an increasing function of $`R`$ at $`R<R_{\mathrm{op}}`$ and decays fast at $`R>R_{\mathrm{op}}`$.
In this state the velocity gradients can be estimated as $`1/\tau `$ . This can be proven e.g. by averaging Eq. (2). It means that the Lyapunov exponent of the solution is smaller than that of the solvent at the same energy input. The energy dissipation is related mainly to the polymer stress tensor and hence $`R_{\mathrm{op}}`$ grows as the input of energy increases . The effective viscosity, defined as the proportionality coefficient between $`\mathrm{\Pi }`$ and $`\nabla v`$, also grows. Note that there exists an interval where $`R_{\mathrm{op}}\ll R_{\mathrm{max}}`$. This contradicts the widely accepted view that at some level of turbulent fluctuations there is a sharp transition between the state where most of the molecules have $`R\sim R_0`$ and the state where all the molecules are stretched up to $`R_{\mathrm{max}}`$.

We conclude that for $`\alpha >0`$ (i.e. $`\lambda <1/\tau `$) the end-to-end extensions of the majority of molecules are of the order of the equilibrium size $`R_0`$, and there is no essential contribution to the stress tensor. For $`\alpha <0`$ (i.e. $`\lambda >1/\tau `$) the extensions of most of the molecules are of the order of $`R_{\mathrm{op}}\gg R_0`$. Then the polymer stress tensor $`\mathrm{\Pi }`$ is estimated as $`\mathrm{\Pi }_0R_{\mathrm{op}}^2/R_0^2`$. Its value can be much larger than the viscous contribution $`\nu \lambda `$ . The above analysis implies that $`R_{\mathrm{back}}\ll R_{\mathrm{max}}`$, since at $`R\sim R_{\mathrm{max}}`$ one must consider non-linear corrections to Hooke’s law and hence to Eq. (1). The condition $`R_{\mathrm{back}}\ll R_{\mathrm{max}}`$ is realized for sufficiently high concentrations of polymers. Then the fluid displays non-Newtonian properties. When most of the molecules are stretched up to $`R_{\mathrm{max}}`$ but the back reaction is not switched on, one has a Newtonian fluid whose properties do not differ significantly from the properties of the solvent. This is the case for very dilute solutions, where $`\mathrm{\Pi }\ll \nu \nabla v`$.

Since a turbulent flow is multiscale, the real picture is more complicated. We have shown that the main characteristic of the flow that determines the behavior of the polymer molecules is the Lyapunov exponent $`\lambda `$, which is defined at the viscous scale. Hence, the dynamics of a molecule is sensitive to the fluid motion at the viscous scale, whereas the velocity varies over a wide interval of scales. Therefore the Lyapunov exponent varies in time and space over scales from the inertial interval. We thus have an “intermittent picture”: in the regions where $`\lambda <1/\tau `$ one deals with a Newtonian fluid with the viscosity $`\nu `$ of the solvent, whereas in the regions where $`\lambda >1/\tau `$ the polymers are strongly stretched and the effective viscosity can be much larger than $`\nu `$. As the Reynolds number $`\mathrm{Re}`$ increases, the relative volume of the regions with $`\lambda >1/\tau `$ increases and the (space-averaged) viscosity grows. The average value of $`R_{\mathrm{op}}`$ also grows. After it has reached a value of the order of $`R_{\mathrm{max}}`$, the back influence cannot grow anymore. This means that the effective viscosity first grows and then decreases back to the solvent value $`\nu `$. Note that the effective viscosity varies smoothly, without a sharp onset. As a consequence, the drag reduction also varies smoothly with $`\mathrm{Re}`$, having a maximum at some intermediate value. Experiments seem to confirm our picture (see, e.g., ).
To avoid misunderstanding let us stress that we consider conventional turbulent flows which have an inertial interval of scales. In the inertial interval the polymer back reaction is small compared to the non-linear term in Eq. (13). In principle, in some region of parameters the back reaction can be stronger than this non-linearity everywhere, and the properties of the fluid are then drastically different . This case requires a separate analysis.

We thank A. Groisman for numerous talks that initiated this work. We are indebted to G. Falkovich for important remarks. Helpful discussions with V. Steinberg are gratefully acknowledged. We thank M. Chertkov for the possibility to read his work before its publication. The work was supported by the grants of the Minerva Foundation, by the Edward and Anna Mitchell Research Fund at the Weizmann Institute and by the Landau-Weizmann Prize program. E.B. acknowledges support by NEC Research Institute, Inc.
no-problem/9911/hep-ph9911529.html
ar5iv
text
# QCD corrections to $`e^+e^{}\to 4`$ jets

## 1 Motivation: LEP physics

QCD four-jet production in $`e^+e^{}`$ annihilation can be measured at LEP and can be studied in its own right. First of all, $`e^+e^{}\to \text{4 jets}`$ is the lowest-order process which contains the non-abelian three-gluon vertex at tree level and thus allows for a measurement of the colour factors $`C_F`$, $`C_A`$ and $`T_R`$ of QCD. This in turn may be used to put exclusion limits on light gluinos. Furthermore the QCD process is a background to W-pair production, when both W’s decay hadronically, and to certain channels in the search for the Higgs boson like $`e^+e^{}\to Z^{}\to ZH\to \text{4 jets}`$. The one-loop matrix elements required for an NLO study of four-jet production are also an essential input for an NNLO calculation of three-jet production. The latter would be needed to reduce theoretical uncertainties in the extraction of the strong coupling at the Z-pole.

In general, leading-order calculations in QCD give a rough description of the process under consideration, but they suffer from large uncertainties. The arbitrary choice of the renormalization scale gives rise to an ambiguity, which is reduced only in a next-to-leading order calculation. Furthermore the internal structure of a jet and the sensitivity to the merging procedure of the jet algorithm are modelled only in an NLO analysis. Both uncertainties are related to the appearance of logarithms, ultraviolet in nature in the first case, infrared in the latter, which are calculated explicitly only in an NLO calculation.

An NLO calculation proceeds in two steps: First, one needs the relevant amplitudes, in our case $`e^+e^{}\to 4\text{ partons}`$ and $`e^+e^{}\to 5\text{ partons}`$ at tree level and $`e^+e^{}\to 4\text{ partons}`$ at one loop. Among these, the one-loop amplitudes are the most complicated ones and we will comment on their calculation in the next section. The second step requires setting up a numerical Monte Carlo program which has to deal with infrared divergences. We will focus on this point in the third section. In the last section we will give some numerical results.

## 2 One-loop amplitudes

We used a variety of modern techniques in order to calculate the one-loop amplitudes efficiently. These include colour decomposition, where amplitudes are decomposed into simpler gauge-invariant partial amplitudes with definite colour structure, and the spinor helicity method, which consists in expressing all Lorentz four-vectors and Dirac spinors in terms of massless two-component Weyl spinors. Their use divides the task into smaller, more manageable pieces. Also a decomposition inspired by supersymmetry proved to be useful, where the particles running around the loop are reexpressed in terms of supermultiplets. In a second step the cut technique and factorization in collinear limits are used to constrain the analytic form of the partial amplitudes. As an example we explain in more detail the cut technique , which is based on unitarity. To obtain the coefficients of the basic box, triangle or bubble integrals one considers the cuts in all possible channels. Each phase-space integral is rewritten with the help of the Cutkosky rules as the imaginary part of a loop amplitude. The power of this method lies in the fact that on each side of the cut one has a full tree amplitude and not just a single Feynman diagram. This method allows one to reconstruct the one-loop amplitude up to terms without an imaginary part.
The remaining terms were obtained by examining the collinear limits. For the reduction of tensor pentagon integrals we used a new reduction algorithm , based on the Schouten identity and Weyl spinors, which does not introduce artificial Gram determinants in the denominator. The one-loop amplitudes for the first subprocess $`e^+e^{}\to q\overline{q}Q\overline{Q}`$ were calculated in refs. and the amplitudes for the second subprocess $`e^+e^{}\to q\overline{q}gg`$ in refs. . The calculations of the two groups agree with each other.

## 3 Numerical implementation: Mercutio

The second major part of a general-purpose NLO program for four jets is coding the one-loop amplitudes and the five-parton tree-level amplitudes in a numerical Monte Carlo program. At leading order the task is relatively simple: one parton corresponds to one jet. At NLO, however, a jet can be modeled by two partons. At NLO the cross section receives contributions from the virtual corrections and the real emission part. Only the sum of them is infrared finite, whereas when taken separately, each part gives a divergent contribution. Several methods to handle this problem exist, such as the phase-space slicing method , the subtraction method and the dipole formalism . We have chosen the dipole formalism. Within the dipole formalism one subtracts and adds again a suitably chosen term:

$`\sigma ^{NLO}=\underset{n+1}{\int }\left(d\sigma ^R-d\sigma ^A\right)+\underset{n}{\int }\left(d\sigma ^V+\underset{1}{\int }𝑑\sigma ^A\right)`$ (1)

The approximation term $`d\sigma ^A`$ has to fulfill the following two requirements: First, $`d\sigma ^A`$ must be a proper approximation to $`d\sigma ^R`$, with the same pointlike singular behaviour in $`D`$ dimensions as $`d\sigma ^R`$. Secondly, $`d\sigma ^A`$ must be analytically integrable in $`D`$ dimensions over the one-parton subspace leading to the soft and collinear divergences.

Let me now turn to the details of the Monte Carlo integration. The heart of any Monte Carlo integration is the random number generator. Among other things, it should have a long period and should not introduce artificial correlations. As the default random number generator we use

$`s_i=\left(s_{i-24}+s_{i-55}\right)\mathrm{mod}\mathrm{\hspace{0.33em}2}^{32}.`$ (2)

It was proposed by Mitchell and Moore and has a period of $`2^f(2^{55}-1)`$, where $`0\le f\le 32`$. Massless four-momenta are generated with the help of the RAMBO algorithm . This algorithm generates events with a uniform weight. Adaptive importance sampling is implemented using the VEGAS algorithm .

A naive implementation of the dipole formalism will give large statistical errors when performing a Monte Carlo integration over the real corrections with the dipole factors subtracted. In order to improve the efficiency of the Monte Carlo integration we remap the phase space to make the integrand more flat. A simplified model for the term $`d\sigma ^R-d\sigma ^A`$ would be

$`F=\underset{0}{\overset{1}{\int }}𝑑x\left(\frac{f(x)}{x}-\frac{g(x)}{x}\right)`$ (3)

where $`f(0)=g(0)`$ is assumed. $`f(x)/x`$ corresponds to the original real emission part with a soft or collinear singularity at $`x=0`$, while $`g(x)/x`$ corresponds to the subtraction term of the dipole formalism.
Eq. (3) can be rewritten as

$`F=\underset{0}{\overset{y_{\mathrm{min}}}{\int }}𝑑x\frac{f(x)-g(x)}{x}+\underset{\mathrm{ln}y_{\mathrm{min}}}{\overset{0}{\int }}𝑑y\left(f(e^y)-g(e^y)\right),`$ (4)

where $`y_{\mathrm{min}}`$ is an artificial parameter separating a numerically dangerous region from a stable region. Using the Taylor expansion for $`f(x)-g(x)`$, one sees that the first term gives a contribution of order $`O(y_{\mathrm{min}})`$. In the second term the $`1/x`$ behaviour has been absorbed into the integration measure by a change of variables $`y=\mathrm{ln}x`$, and the integrand tends to be more flat. It should be noted that there is no approximation involved.

## 4 Numerical results

Numerical programs for $`e^+e^{}\to 4\text{ jets}`$ have been provided by four groups: MENLO PARC , DEBRECEN , EERAD2 and MERCUTIO . Various cross-checks have been performed among these programs and they agree within statistical errors. Here we report on the numerical program “Mercutio”, which was written in C++. The four-jet fraction is defined as

$`R_4=\frac{\sigma _{4jet}}{\sigma _{tot}}.`$ (5)

The values obtained for the four-jet fraction for the DURHAM algorithm with $`y_{cut}=0.01`$ for various energies are given in table 1. The decrease with energy is mainly due to the running of the strong coupling.

With the numerical program for $`e^+e^{}\to \text{4 jets}`$ one may also study the internal structure of three-jet events. One example is the jet broadening variable defined as

$`B_{jet}=\frac{1}{n_{jets}}\underset{jets}{\sum }\frac{\underset{a}{\sum }|p_a^{\perp }|}{\underset{a}{\sum }|\vec{p}_a|}`$ (6)

Here $`p_a^{\perp }`$ is the momentum of particle $`a`$ transverse to the jet axis of jet $`J`$, and the sum over $`a`$ extends over all particles in the jet $`J`$. The jet broadening variable is calculated for three-jet events defined by the DURHAM algorithm and $`y_{cut}=0.1`$. This choice is motivated by a recent analysis of the Aleph collaboration . Figure 1 shows the distribution of the jet broadening variable.
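As an illustration of two ingredients described in Sec. 3, the Mitchell-Moore generator (2) and the remapping (4), here is a minimal self-contained sketch; the choice of $`f`$ and $`g`$, the seeding of the generator and all parameter values are ours, purely for demonstration, and are not taken from the actual Mercutio code.

```python
import math

class MitchellMoore:
    """Additive lagged-Fibonacci generator of Eq. (2):
    s_i = (s_{i-24} + s_{i-55}) mod 2^32.
    The LCG used for seeding is our own choice, not part of the recipe."""
    def __init__(self, seed=987654321):
        self.s, x = [], seed
        for _ in range(55):
            x = (69069*x + 1) % 2**32
            self.s.append(x)
    def uniform(self):
        v = (self.s[-24] + self.s[-55]) % 2**32
        self.s.append(v)
        self.s.pop(0)
        return (v + 0.5) / 2**32        # strictly inside (0,1)

# Toy version of Eq. (3): f(x) = 1 + sqrt(x), g(x) = 1, so f(0) = g(0);
# (f-g)/x = 1/sqrt(x) is integrable but has infinite variance; F = 2 exactly.
f = lambda x: 1.0 + math.sqrt(x)
g = lambda x: 1.0

rng, N, ymin = MitchellMoore(), 200000, 1e-6

# Direct sampling of Eq. (3).
direct = [(f(x) - g(x))/x for x in (rng.uniform() for _ in range(N))]

# Remapped sampling of the second term of Eq. (4); the region below ymin
# contributes only 2*sqrt(ymin) = 0.002 here and is dropped.
L = -math.log(ymin)
remapped = []
for _ in range(N):
    x = math.exp(-L*rng.uniform())      # y = ln x uniform in (ln ymin, 0)
    remapped.append(L*(f(x) - g(x)))    # the 1/x went into the measure

for name, vals in (("direct", direct), ("remapped", remapped)):
    mean = sum(vals)/N
    err = math.sqrt(sum((v - mean)**2 for v in vals)/(N - 1)/N)
    print(f"{name:9s} F = {mean:.3f} +- {err:.3f}   (exact: 2)")
```

With these toy functions the direct estimate fluctuates wildly from run to run, since the variance of $`x^{1/2}`$ diverges, while the remapped estimate is stable at the permille level; this variance reduction is the whole point of Eq. (4).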
no-problem/9911/cond-mat9911231.html
ar5iv
text
# Anisotropy in the Compressible Quantum Hall State

## 1 Introduction

As summarized in the table below, the physics at the half-filled $`l`$-th Landau level changes drastically from one Landau level to another. In particular, the nature of the anisotropic states discovered in the third and higher Landau levels has been unknown until now. At the half-filled lowest and second Landau levels, a transition to an anisotropic state was also observed in the presence of a periodic potential or an in-plane magnetic field, respectively. The charge density wave (CDW) state is a candidate for the anisotropic state. We study the compressible charge density wave (CCDW) states, which have no energy gap, in the Hartree-Fock approximation using the von Neumann lattice formalism. There are two types of CCDW states: one is the unidirectional charge density wave (UCDW) state and the other is the compressible Wigner crystal (CWC) state. The UCDW state has a charge density which is uniform in one direction and oscillates in the other direction. The charge density of the CWC state has the same periodicity as the von Neumann lattice. The classification of the CDW is given in the next table.

## 2 Mean Field Theory on the von Neumann Lattice

Let us consider the two-dimensional electron system in a perpendicular magnetic field $`B`$, which is described by a Hamiltonian $`H=H_0+H_{\mathrm{int}}`$,

$`H_0=\int d^2r\psi ^{}(𝐫)\frac{(𝐩+e𝐀)^2}{2m}\psi (𝐫),`$ (1)

$`H_{\mathrm{int}}=\frac{1}{2}\int d^2rd^2r^{}:\rho (𝐫)V(𝐫-𝐫^{})\rho (𝐫^{}):,`$ (2)

where $`\rho (𝐫)=\psi ^{}(𝐫)\psi (𝐫)`$, $`\nabla \times 𝐀=B`$, and $`V(𝐫)=q^2/r`$.

In the von Neumann lattice formalism, the electron field is expanded as $`\psi (𝐫)=\underset{l,𝐗}{\sum }b_l(𝐗)W_{l,𝐗}(𝐫)`$, where $`b`$ is an anti-commuting annihilation operator and $`𝐗`$ is an integer-valued two-dimensional coordinate. The Wannier basis functions $`W_{l,𝐗}(𝐫)`$ form an orthonormal complete basis in the $`l`$-th Landau level, $`(W_{l,𝐗}(𝐫),W_{l^{},𝐗^{}}(𝐫))=\delta _{ll^{}}\delta _{𝐗,𝐗^{}}`$. $`W_{l,𝐗}(𝐫)`$ is localized at the two-dimensional lattice site $`m𝐞_1+n𝐞_2`$ for $`𝐗=(m,n)`$, where $`𝐞_1=(ra,0)`$, $`𝐞_2=(a/r\mathrm{tan}\theta ,a/r)`$, and $`a=\sqrt{2\pi /eB}`$. The area of the unit cell is $`𝐞_1\times 𝐞_2=a^2`$. We set $`a=1`$ in the following calculation.

The Bloch wave basis $`u_{l,𝐩}(𝐫)=\underset{𝐗}{\sum }W_{l,𝐗}(𝐫)e^{i𝐩𝐗}`$ is another useful basis. We can obtain the charge density profile of the CDW state using this basis. The lattice momentum $`𝐩`$ is defined in the Brillouin zone (BZ), $`|p_i|\le \pi `$. Using this basis, we obtain another expansion of the electron field, $`\psi (𝐫)=\underset{l}{\sum }\int _{\mathrm{BZ}}\frac{d^2p}{(2\pi )^2}a_l(𝐩)u_{l,𝐩}(𝐫)`$. The Fourier-transformed density operator $`\stackrel{~}{\rho }(𝐤)=\int d^2r\rho (𝐫)e^{i𝐤𝐫}`$ is written as

$$\stackrel{~}{\rho }(\stackrel{~}{𝐤})=\underset{l,l^{}}{\sum }\int _{\mathrm{BZ}}\frac{d^2p}{(2\pi )^2}a_l^{}(𝐩+𝐤)M_{ll^{}}(𝐤)e^{if(𝐩+𝐤,𝐩)}a_{l^{}}(𝐩),$$ (3)

where $`\stackrel{~}{𝐤}=(k_x/r,rk_y-k_x/r\mathrm{tan}\theta )`$. The phase function $`f`$ is given by $`f(𝐩^{},𝐩)=\int _𝐩^{𝐩^{}}(\stackrel{~}{𝐀}(𝐩)+\nabla \lambda (𝐩))𝑑𝐩`$, where $`\stackrel{~}{𝐀}(𝐩)=(p_y/2\pi ,0)`$, which represents a uniform magnetic field in the momentum space, $`\nabla _p\times \stackrel{~}{𝐀}(𝐩)=1/2\pi `$. The following boundary condition is required: $`e^{i\lambda (𝐩+2\pi 𝐍)-i\lambda (𝐩)}=(-1)^{N_x+N_y}e^{iN_yp_x}`$, with $`N_x,N_y`$ integers.
The matrix $`M_{ll^{}}`$ is given by $`\left(\frac{l!}{l^{}!}\right)^{\frac{1}{2}}\left(\frac{k_x+ik_y}{\sqrt{4\pi }}\right)^{l^{}-l}L_l^{(l^{}-l)}(\frac{k^2}{4\pi })e^{-\frac{k^2}{8\pi }}`$ for $`l\le l^{}`$, and $`M_{l^{}l}(𝐤)=M_{ll^{}}^{\ast }(-𝐤)`$. The free Hamiltonian $`H_0`$ and the interaction Hamiltonian $`H_{\mathrm{int}}`$ become

$`H_0=\underset{l,𝐗}{\sum }\omega _c(l+\frac{1}{2})b_l^{}(𝐗)b_l(𝐗),`$ (4)

$`H_{\mathrm{int}}=\frac{1}{2}\underset{𝐗_i,l_i,l_i^{}}{\sum }:b_{l_1}^{}(𝐗_1)b_{l_1^{}}(𝐗_1^{})V_{l_1l_1^{}l_2l_2^{}}(𝐗,𝐘,𝐙)b_{l_2}^{}(𝐗_2)b_{l_2^{}}(𝐗_2^{}):`$

where $`𝐗=𝐗_1-𝐗_1^{}`$, $`𝐘=𝐗_2-𝐗_2^{}`$, and $`𝐙=𝐗_1-𝐗_2^{}`$. Thus the system is translationally invariant on the lattice. $`\stackrel{~}{V}(𝐤)=2\pi q^2/k`$ for $`𝐤\ne 0`$ and $`\stackrel{~}{V}(0)=0`$ due to the charge neutrality condition. The Hamiltonian $`H`$ is invariant under the transformation $`b_l(𝐗)\to e^{i𝐊𝐗}b_l(𝐗)`$, $`\lambda (𝐩)\to \lambda (𝐩+𝐊)+K_xp_y`$. This transformation is the magnetic translation in momentum space, $`𝐩\to 𝐩+𝐊`$. This invariance is referred to as the K-invariance in the composite fermion model.

## 3 Hartree-Fock energy for the CCDW states

We consider only the intra-Landau-level energy of the $`l`$-th Landau level. The filling factor $`\nu `$ is written as $`\nu =l+\overline{\nu }`$. The mean field $`U_l(𝐗^{},𝐗)=\langle b_l^{}(𝐗)b_l(𝐗^{})\rangle `$ for the CCDW has the translational invariance on the von Neumann lattice, that is, $`U_l(𝐗-𝐗^{})=U_l(𝐗,𝐗^{})`$, $`U_l(0)=\overline{\nu }`$. The Hartree-Fock Hamiltonian in the $`l`$-th Landau level then becomes

$$H_{\mathrm{HF}}^{(l)}=\underset{𝐗,𝐗^{}}{\sum }U_l(𝐗-𝐗^{})\{\stackrel{~}{v}_l(2\pi (\widehat{𝐗}-\widehat{𝐗}^{}))-v_l(\widehat{𝐗}-\widehat{𝐗}^{})\}\{b_l^{}(𝐗)b_l(𝐗^{})-\frac{1}{2}U_l(𝐗^{}-𝐗)\},$$ (5)

where $`\stackrel{~}{v}_l(𝐤)=\{L_l(\frac{k^2}{4\pi })\}^2e^{-\frac{k^2}{4\pi }}\stackrel{~}{V}(𝐤)`$, $`v_l(𝐗)=\int \frac{d^2k}{(2\pi )^2}\stackrel{~}{v}_l(𝐤)e^{i𝐤𝐗}`$, and $`\widehat{𝐗}=(rm+n/r\mathrm{tan}\theta ,n/r)`$ for $`𝐗=(m,n)`$. Thus the continuum system with a magnetic field is transformed into a lattice system without a magnetic field!

The self-consistency equation for the kinetic energy $`\epsilon _l`$ is given by $`\epsilon _l(𝐩,\overline{\nu })=\int _{\mathrm{BZ}}\frac{d^2p^{}}{(2\pi )^2}\stackrel{~}{v}_l^{\mathrm{HF}}(𝐩^{}-𝐩)\theta (\mu _l-\epsilon _l(𝐩^{},\overline{\nu }))`$, where $`\mu _l`$ is the chemical potential and $`\stackrel{~}{v}_l^{\mathrm{HF}}`$ is defined by $`\stackrel{~}{v}_l^{\mathrm{HF}}(𝐩)=\underset{𝐗}{\sum }\{\stackrel{~}{v}_l(2\pi \widehat{𝐗})-v_l(\widehat{𝐗})\}e^{i𝐩𝐗}`$. The energy per particle in the $`l`$-th Landau level is given by $`E^{(l)}=\frac{1}{2\overline{\nu }}\underset{𝐗}{\sum }|U_l(𝐗)|^2\{\stackrel{~}{v}_l(2\pi \widehat{𝐗})-v_l(\widehat{𝐗})\}`$. $`E^{(l)}`$ is a function of $`\overline{\nu }`$, $`r`$ and $`\theta `$. The parameters $`r`$ and $`\theta `$ are determined so as to minimize the energy $`E^{(l)}`$ at fixed $`\overline{\nu }`$.

The existence of a Fermi surface inevitably breaks the K-invariance. There are two types of self-consistent Fermi seas.

(a) Belt-shaped Fermi sea (UCDW state):

$$U_l(𝐗)=\delta _{m,0}\frac{\mathrm{sin}(p_\mathrm{F}n)}{\pi n},p_\mathrm{F}=\pi \overline{\nu }.$$ (6)

(b) Diamond-shaped Fermi sea (CWC state for $`\overline{\nu }=1/2`$):

$$U_l(𝐗)=\frac{2}{\pi ^2}\frac{\mathrm{sin}\frac{\pi }{2}(m+n)\mathrm{sin}\frac{\pi }{2}(m-n)}{m^2-n^2}.$$ (7)

We calculated the Hartree-Fock energy for (a) and (b) at $`l<4`$.
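Both mean fields are simply lattice Fourier transforms of the corresponding Fermi seas, which provides a quick numerical cross-check of Eqs. (6) and (7). The following sketch is our own check (not part of the original calculation), with the BZ integral replaced by a sum over a discrete momentum grid:

```python
import numpy as np

nu_bar, Ngrid = 0.5, 512
p = 2*np.pi*(np.arange(Ngrid) - Ngrid//2)/Ngrid        # BZ grid in (-pi, pi)
px, py = np.meshgrid(p, p, indexing="ij")

def U_of_X(occupied, m, n):
    """U_l(X) = int_BZ d^2p/(2 pi)^2 e^{-i p.X} theta(mu - eps(p)),
    approximated by a sum over the discrete grid."""
    return (np.exp(-1j*(px*m + py*n))[occupied].sum() / Ngrid**2).real

belt = np.abs(py) < np.pi*nu_bar                # (a) |p_y| < p_F = pi*nu_bar
diamond = np.abs(px) + np.abs(py) < np.pi       # (b) |p_x| + |p_y| < pi

print("U(0,0):", U_of_X(belt, 0, 0), U_of_X(diamond, 0, 0), "(expect nu_bar = 0.5)")
print("belt U(0,1):", U_of_X(belt, 0, 1), "(Eq. (6): sin(pi/2)/pi =", 1/np.pi, ")")
print("diamond U(1,0):", U_of_X(diamond, 1, 0), "(Eq. (7): 2/pi^2 =", 2/np.pi**2, ")")
```

With the mean fields verified in this way, the energy comparison quoted next follows from evaluating the lattice sums for $`E^{(l)}`$, which in addition requires the numerical construction of $`v_l(\widehat{𝐗})`$ and is not reproduced here.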
As a result, we found that the UCDW state is the lowest-energy state in all cases. Therefore the UCDW state is the most plausible of the CCDW states. Using the mean field (a), the kinetic term in $`H_{\mathrm{HF}}^{(l)}`$ is written as

$$K_{\mathrm{HF}}^{(l)}=\underset{m}{\sum }\int \frac{dp_y}{2\pi }a_{l,m}^{}(p_y)\epsilon _l(p_y,\overline{\nu })a_{l,m}(p_y),$$ (8)

where $`a_{l,m}(p_y)=\underset{n}{\sum }b_l(𝐗)e^{ip_yn}`$ for $`𝐗=(m,n)`$. Therefore the UCDW state is regarded as a collection of one-dimensional lattice Fermi-gas systems which extend in the y-direction. Using the Büttiker-Landauer formula, the conductance of the UCDW is $`\sigma _{xx}=0`$, $`\sigma _{yy}=n_x\frac{e^2}{2\pi }`$, where $`n_x`$ is the number of one-dimensional channels. If we take $`\sigma _{xy}=\nu e^2/2\pi `$, the resistance becomes $`\rho _{xx}=\frac{n_x}{\nu ^2}\frac{2\pi }{e^2}`$, $`\rho _{yy}=0`$. Thus the formation of the UCDW leads to the anisotropy in the magnetoresistance.

## 4 Summary and discussion

We have studied the CCDW states, which are gapless and have an anisotropic Fermi surface. We obtained two types of CCDW state, the UCDW state and the CWC state. By calculating the Hartree-Fock energy, the UCDW is found to have the lower energy at the half-filled Landau levels. The UCDW state is regarded as a system which consists of many one-dimensional lattice Fermi-gas systems extending along the uniform direction. The formation of this structure could be the origin of the anisotropy observed in experiments. Theoretical work including fluctuations around the mean-field solution is necessary. Since there is no energy gap in the CCDW state, the fluctuation effects might be large compared with those in the gapped CDW state.
no-problem/9911/gr-qc9911088.html
ar5iv
text
# Quantized shells as a tool for studying semiclassical effects in general relativity

## 1 Introduction

A compact object which collapses gravitationally can be used as a probe to test physics at different length scales. In fact, as long as the collapse proceeds, the characteristic size of the object will subsequently cross all the relevant scales, which include the Compton wavelengths $`\ell _\varphi `$ of the particle excitations the object is made of and, eventually, the Planck length $`\ell _p`$. Starting with a large size, the dynamics can be initially described by classical general relativity, therefore avoiding the issue of the initial conditions which plagues the case of the expansion (e.g., of the universe in quantum cosmology). Then, the quantum nature of the matter in the object will become relevant presumably around $`\ell _\varphi `$ and a quantum theory of gravity is required near $`\ell _p`$.

In this picture one must also accommodate the role played by the causal horizon generated by the collapsing object. The horizon is a sphere with radius $`R_G=2M`$ determined by the total energy $`M/G`$ of the system ($`G`$ is the Newton constant). If the object does not lose energy, $`M`$ is constant in time and one may simply take the point of view of a (proper) observer comoving with some particle of the infalling matter, for whom the horizon is totally harmless. The situation is rather different for a radiating object, since one can then consider also an external observer who witnesses the collapse by detecting the emitted radiation (we conceive an observer as some material device and not as an abstract concept). For such an observer $`R_G`$ is the radius of an (apparent) horizon which shrinks in time, with an uncertainty in location related to the probabilistic (quantum mechanical) nature of the emission process (see for an analogy with the Unruh effect). Further, there is an uncertainty (of order $`\ell _\varphi `$) in the position of the particles of the collapsing body, and this all renders the issue of (viewing) an emitting source crossing its own horizon of particular relevance, both for the theoretical understanding of gravitation and for astrophysical applications.

It seems appropriate to tackle this hard topic by considering simple cases which capture the basic features of the problem and reduce the technical difficulties. Such an example is given by the spherically symmetric distributions of matter with infinitesimal thickness known in general relativity as thin shells . They can either be pulled just by their own weight or fall in the external gravitational field of a (spherical) source placed at their centre, the latter case being of interest for studying the accretion of matter onto a black hole and the Hawking effect .

## 2 Effective action

The equations of motion for thin shells are usually obtained in general relativity from the junction conditions between the embedding four-metric and the intrinsic three-metric on the surface of the shell of radius $`r=R`$ . Their dynamics can also be described by an effective action which is obtained by inserting in the general expression for the Einstein-Hilbert action the Schwarzschild solution with mass parameter $`M_0`$ ($`\ge 0`$) as the space-time inside the shell ($`r<R`$) and the Vaidya solution with mass function $`m`$ ($`>M_0`$) outside the shell ($`r>R`$).
When $`\dot{M}`$, the time derivative of $`m`$ evaluated on the outer surface of the shell, is negative, the shell emits null dust. The effective action is given by

$`S_s=\int \frac{dt}{G}\left[R\beta -R\dot{R}\mathrm{tanh}^{-1}\left(\frac{\dot{R}}{\beta }\right)\right]_{out}^{in}-\int 𝑑tNE+\int \frac{dt}{2G}\left[\frac{\dot{M}R^2}{R-2M}-\frac{MR\dot{R}}{R-2M}\right],`$ (1)

and is a functional of $`R`$, $`M`$ and $`N`$ (the lapse function of the three-metric on the shell); $`E=E(R,M)`$ is the shell energy, $`\beta ^2=(1-2m/R)N^2+\dot{R}^2`$ and $`[F]_{out}^{in}=F_{in}-F_{out}`$ denotes the jump of the function $`F`$ across the shell. The first two integrals in (1) were known from an analogous derivation for non-radiating shells, and the third integral properly accounts for the fact that a radiating shell defines an open (thermo)dynamical system (this term becomes dynamically irrelevant when the radiation ceases ).

### 2.1 Equations of motion

The Euler-Lagrange equations of motion,

$`\frac{\delta S_s}{\delta N}\propto H_G+E=0`$ (2)

$`\frac{\delta S_s}{\delta R}=0`$ (3)

$`\frac{\delta S_s}{\delta M}\propto K=0,`$ (4)

represent, respectively, the (primary) Hamiltonian constraint corresponding to the time-reparametrization invariance of the shell, an equation for the surface tension $`P=E/A`$ ($`A=4\pi R^2`$ is the area of the surface of the shell) and a second (primary) constraint which relates $`\dot{M}`$ to the luminosity of the shell. The equations (2) and (3) coincide with the standard junction conditions , but (4) only appears in this approach . Since from (2)-(4) it follows that

$`\dot{H}_G+\dot{E}\propto \dot{A}\frac{\delta S_s}{\delta A}+\dot{M}\frac{\delta S_s}{\delta M}=0`$ (5)

$`\dot{K}\propto \ddot{H}=0,`$ (6)

no new (secondary) constraint arises and the dynamical system defined by $`S_s`$ is consistent. One can therefore set $`N=N(t)`$ and $`M=M(t)`$ to be any fixed functions of time and correspondingly solve for $`R=R(t)`$.

### 2.2 Thermodynamics

The meaning of the identity (5) is that the total energy of the system is conserved, and it can also be cast in the form of the first principle of thermodynamics ,

$`dE=PdA+\text{d}\text{-}Q,`$ (7)

where $`\dot{Q}\propto \dot{M}`$ is the luminosity. A second principle can also be introduced, at least in the quasi-static limit $`\dot{R}^2\ll 1-2M/R`$, by defining an entropy $`S`$ and a temperature $`T`$ such that

$`dS=\frac{\text{d}\text{-}Q}{T}`$ (8)

is an exact differential, which yields

$`T=\frac{a}{8\pi k_B}\frac{\ell _p^{1-b}}{\left(\hbar M\right)^b}\frac{1}{\sqrt{1-2M/R}}\equiv \frac{T_{a,b}}{\sqrt{1-2M/R}},`$ (9)

where $`k_B`$ is the Boltzmann constant, $`a`$ and $`b`$ are constants and $`T/T_{a,b}`$ is the Tolman factor. We note in passing that $`T_{1,1}`$ is the Hawking temperature of a black hole of mass $`M`$ .

### 2.3 Microstructure

A microscopic description of the shell can be obtained by considering $`n`$ close microshells of Compton wavelength $`\ell _\varphi =\hbar /m_\varphi `$ . One then finds that such a (many-body) system is gravitationally confined within a thickness $`\mathrm{\Delta }`$ around the mean radius $`R`$ (this is the reason we refer to $`R`$ as a collective variable)
and one can estimate $`\mathrm{\Delta }`$ from a Hartree-Fock approximation for the wavefunction of each microshell. This yields

$`\left(\frac{\mathrm{\Delta }}{R}\right)^{3/2}\sim Gnm_\varphi \frac{\ell _\varphi }{R^2}\sim \frac{\ell _\varphi R_G}{R^2},`$ (10)

which, for $`R\gtrsim R_G`$, is negligibly small provided

$`R\gg \ell _\varphi ,`$ (11)

in agreement with the naive argument that the location of an object cannot be quantum mechanically defined with an accuracy higher than its Compton wavelength. In the limit (11) one can second quantize the shell by introducing a (scalar) field $`\varphi `$ with support within a width $`\mathrm{\Delta }`$ around $`R`$ and Compton wavelength $`\ell _\varphi `$, and one obtains (neglecting terms of order $`\mathrm{\Delta }`$ and higher)

$`E\simeq \frac{1}{2\ell _\varphi }\left[\frac{\pi _\varphi ^2}{R^2}+R^2\varphi ^2\right]+H_{int}(\varphi ,M,\dot{M},R),`$ (12)

where $`\pi _\varphi `$ is the momentum conjugate to $`\varphi `$ and $`H_{int}`$ describes the local interaction between the matter in the shell and the emitted radiation. When $`H_{int}\ne 0`$, one expects that $`M=M(t)`$ becomes a dynamical variable which cannot be freely fixed, and the luminosity should then be determined by the corresponding Euler-Lagrange equation (4) from purely initial conditions for $`R`$ and $`M`$ (in any gauge $`N=N(t)`$) .

## 3 Semiclassical description

Lifting the time-reparametrization invariance of the shell to a quantum symmetry yields the Wheeler-DeWitt equation corresponding to the classical Hamiltonian constraint (2). For the non-radiating case ($`\dot{M}=H_{int}=0`$) and in the proper time gauge ($`N=1`$) it is given by

$`\left[\widehat{H}_G(P_R,R)+\widehat{E}(P_\varphi ,\varphi ;R)\right]\mathrm{\Psi }=0.`$ (13)

One can study (13) in the Born-Oppenheimer approach by writing

$`\mathrm{\Psi }(R,\varphi )=\psi (R)\chi (\varphi ;R)\simeq \psi _{WKB}(R)\chi (\varphi ;R),`$ (14)

where $`\psi _{WKB}`$ is the semiclassical (WKB) wavefunction for the radius of the shell. This allows one to retrieve the semiclassical limit, in which $`R`$ is a collective (semi)classical variable driven by the expectation value of the scalar field Hamiltonian operator $`\widehat{E}`$ over the quantum state $`\chi `$,

$`H_G+\langle \widehat{E}\rangle =0,`$ (15)

while $`\chi `$ evolves in time according to the Schrödinger equation

$`i\hbar \frac{\partial \chi }{\partial t}=\widehat{E}\chi .`$ (16)

In general, in order to obtain (15) and (16) from (13), one needs to assume that certain fluctuations (corresponding to quantum transitions between different trajectories of the collective variable $`R`$) are negligible . For the present case one can check a posteriori that this is true if the condition (11) holds (that is, if $`n`$ is sufficiently large, see (10)) .

### 3.1 Particle production and backreaction

The equation (16) can be solved beyond the adiabatic approximation by making use of invariant operators . In particular, one can choose $`\chi `$ as the state with initial (proper) energy $`E_0=m_0/G`$ and radius $`R_0`$ and expand in the parameter of non-adiabaticity $`\delta ^2=\ell _\varphi ^2M/R_0^3`$ to obtain (to first order in $`\delta ^2`$)

$`m\equiv G\langle \widehat{E}\rangle \simeq m_0\left(1+\frac{R_0^3\dot{R}^2}{R_GR^2}\delta ^2\right),`$ (17)

which is an increasing function for decreasing $`R`$. This signals a (non-adiabatic) production of matter particles in the shell along the collapse.
Since the total energy $`M/G`$ is conserved, such a production can be viewed as a transfer of energy from the collective degree of freedom $`R`$ to the microscopic degree of freedom $`\varphi `$, and one therefore expects a slower approach towards the horizon. In fact, the equation of motion (15) for $`R`$,

$`\dot{R}^2=\frac{R_G}{2R}+2\left(1-\frac{2m}{R_G}\right)\frac{\ell _\varphi ^2m^2}{R_G^2R^2},`$ (18)

can be integrated numerically along with (17) to compute $`m=m(t)`$, and the corresponding backreaction on the trajectory $`R=R(t)`$ confirms the above qualitative argument .

### 3.2 Gravitational fluctuations

One can also study the effects due to higher WKB-order terms in the gravitational wavefunction by defining

$`\psi (R)=f(R)\psi _{WKB}(R),`$ (19)

where $`f`$ must then satisfy

$`i\hbar \frac{\partial f}{\partial t}\simeq \hbar \frac{\ell _p^2}{2R}\frac{\partial ^2f}{\partial R^2}|_{R_c},`$ (20)

in which $`R_c=R_c(t)`$ is the (semi)classical trajectory on which $`\psi _{WKB}`$ is peaked, and we have neglected terms of order $`\dot{R}/R`$ and higher. Acceptable solutions to (20) are given by plane waves with wavelengths $`\lambda \gg \ell _p`$ and negative “energy”

$`E_\lambda =-\frac{\hbar }{2R_c}\frac{\ell _p^2}{\lambda ^2},`$ (21)

which agrees with the fact that the gravitational contribution to the total (super)Hamiltonian has the opposite sign with respect to matter. Another basic feature of (21) is that $`E_\lambda `$ is proportional to $`R_c^{-1}`$, which makes it negligible with respect to $`\langle \widehat{E}\rangle `$ for large radius, but one then expects appreciable corrections as the collapse proceeds. One can indeed take $`E_\lambda `$ into account in an improved semiclassical equation for the trajectory of the shell,

$`H_G+\langle \widehat{E}\rangle +E_\lambda =0,`$ (22)

which predicts a breakdown of the semiclassical approximation for values of $`R`$ larger than the limit (11) and possibly larger than $`R_G`$. This would imply that the whole shell becomes a quantum object far before reaching the (quantum mechanically unacceptable) singularity $`R=0`$.
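A minimal sketch of the numerical integration mentioned in Sec. 3.1 is given below. It takes Eqs. (17)-(18) at face value, in units $`G=\hbar =1`$, with toy initial data of our own choosing and a deliberately exaggerated $`\ell _\varphi `$ so that the particle-production effect is visible; it is meant only to illustrate the procedure, not to reproduce the original results.

```python
import math

# Toy setup (ours): units G = hbar = 1, masses measured as lengths; R_G = 2M.
M, m0, R0 = 1.0, 0.8, 20.0
ell_phi = 2.0                          # exaggerated Compton wavelength
RG = 2.0*M
delta2 = ell_phi**2*M/R0**3            # non-adiabaticity parameter of Sec. 3.1

def state_at(R):
    """Solve Eqs. (17)-(18) self-consistently for (m, Rdot^2) at radius R
    by a short fixed-point iteration."""
    m, Rdot2 = m0, RG/(2.0*R)
    for _ in range(50):
        m = m0*(1.0 + (R0**3*Rdot2/(RG*R**2))*delta2)                 # Eq. (17)
        Rdot2 = RG/(2.0*R) + 2.0*(1.0 - 2.0*m/RG)*ell_phi**2*m**2/(RG**2*R**2)  # Eq. (18)
    return m, Rdot2

t, R, dt = 0.0, R0, 1e-3
while R > 1.05*RG:                     # stop just outside the horizon
    m, Rdot2 = state_at(R)
    R -= math.sqrt(max(Rdot2, 0.0))*dt     # collapsing branch, dR/dt < 0
    t += dt

print(f"R = {R:.3f} (R_G = {RG}), t = {t:.1f}, m = {m:.3f} > m_0 = {m0}")
```

The run shows $`m`$ growing monotonically as $`R`$ decreases, i.e. energy flowing from the collective motion into the matter field, which is the backreaction effect described above.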
no-problem/9911/astro-ph9911181.html
ar5iv
text
# 1997 October Event(s) in the Crab Pulsar

## 1. Introduction

In 1975 the dispersion measure, rotation measure and scattering of the Crab pulsar displayed an extreme level of activity (Lyne & Thorne 1975; Isaacman & Rankin 1977). These disturbances in the propagation of radiation from the pulsar were ascribed to thermal plasma associated with the Crab nebula. In general the variations of the Crab pulsar’s dispersion measure are larger than those expected for the interstellar medium, and are thus likely the result of nebular material (Backer et al. 1993). In recent years the dispersion measure and scattering have been undergoing a new series of large variations (Backer & Wong 1996). Column density variations of 0.1 pc cm⁻³ over time scales of months are seen. If one places these variations in the filamentary web surrounding the optical synchrotron nebula, then the likely transverse velocity of the pulsar-Earth line of sight relative to the plasma is $`\sim 200`$ km s⁻¹. In this case one can estimate that the characteristic density of the perturbations is $`\sim 2500`$ e cm⁻³ and the typical transverse length is $`\sim 10^{14}`$ cm. This density is reasonably consistent with density estimates of the filaments from optical line measurements (Hester et al. 1996; Sankrit et al. 1998) that have a linear resolution which is several orders of magnitude larger, $`\sim 10^{16}`$ cm. There has also been a unique cluster of glitches in the rotation of the star during the past few years (Wong, Backer & Lyne 1999).

In October 1997, amidst this era of large variations of dispersion measure and other plasma propagation parameters as well as internal “seismic” events, a dispersion measure “glitch” (a sudden change in less than one week) of 0.12 cm⁻³ pc was noticed in measurements both at the NRAO Green Bank site and at the University of Manchester Jodrell Bank site (see Smith & Lyne in this volume). These observatories have small dedicated telescopes for pulsar monitoring. In fact, during the glitch event pulses were simultaneously detected at both dispersions. Subsequently Smith & Lyne found that for about two months prior to the dispersion event a faint replica (“ghost”) of the pulsed emission following the main pulse and interpulse components was detected in the Jodrell Bank 610-MHz data. Receding ghost components are observed a few months after the dispersion glitch. The Green Bank observations confirmed the 610-MHz ghost components and further provided detections at 327 MHz. The phase of the ghost components slowly converged to that of the regular emission a week before the dispersion glitch.

While many of the phenomena can be explained by an imperfect plasma lens passing through the line of sight at the distance of the Crab nebula filaments, the occurrence of an unusual spindown of the neutron star phase during the dispersion glitch, and a small but more conventional spinup glitch of the star two months after the dispersion glitch, provide arguments for consideration of plasma propagation in the vicinity of the star. The ghost of the main pulse and that of the interpulse are not always identical, which adds further confusion to the interpretation. In this brief report, the 327-MHz sequence of events is described. This is followed by a discussion of the optics that might give rise to these events. A full report is being submitted for journal publication.

## 2. 327-MHz event

I will refer to the pulse profiles in Figure 1 before and after the dispersion glitch as “old” and “new” respectively.
The old pulse peak amplitude is reasonably steady at the start, then has two peaks centered on 1997 day 265 and day 275, and then fades away by day 300. The new pulse is shifted in phase by 0.15 (5 ms) and has amplitudes which rise steadily from day 290 to day 365. The new pulses appear faint owing to a large pulse broadening that I will attribute to severe multipath propagation in the extra dispersion medium. The ghost component of the main pulse appears at phase 0.4 around day 250 and moves along a quadratic path toward the main pulse at phase 0.28 around day 275. The ghost of the interpulse follows a similar path. The peak intensities are typically 1-2% of those of the corresponding pulse components. Note that during days 285 to 299 there is emission present simultaneously from old and new pulses. I interpret this in terms of two propagation paths from the pulsar in the next section.

Extrapolation of the old pulse phase to day 320 leads to an offset between new and old pulses of 5.1 ms at 327 MHz. The multipath propagation delays from both the new and old pulses have been removed in this estimate. The corresponding number at 610 MHz is 2.1 ms. These two delays are not consistent with the cold plasma, quadratic dispersion law. There appears to be a 0.9-ms achromatic delay. The source of this delay is actually just detectable in the fading old pulse emission: in Figure 1 the peak of the main and interpulse is rapidly drifting to the right during days 290 to 299. Most of this drift is achromatic; i.e., it is seen at both 327 MHz and 610 MHz. Achromatic variations of pulse arrival time are most naturally interpreted as seismic events internal to the neutron star. Thus just at the time of this dispersion glitch, with emission seen along two lines of sight simultaneously, there appears to be a spindown of the pulsar of rather large amplitude, large with respect to the timing noise in this pulsar, which is characterized by a random walk in frequency.

I will proceed with discussing the ghost component and dispersion glitch in terms of propagation through a plasma prism in the filamentary interface of the Crab nebula, and leave the “coincidence” of an internal effect aside. A final remark here, in admittedly weak support of ignoring the nominally internal event, is that the pulsar has been extremely active, with many spin glitches over the 1996-1998 interval.

## 3. Plasma Prism Model

The dispersion glitch and the simultaneous appearance of radiation along nominal dispersion (old) and extra dispersion (new) paths can readily be explained by a wedge of plasma moving into the line of sight. The extra dispersion largely disappears after 250 days. The start of the steady decline can be seen in the drifting location of the new pulse in Figure 1 between days 300 and 365. Thus the dispersion changes can be explained as a plasma prism passing through the line of sight, oriented so that one observes a fairly abrupt change of dispersion. The refractive property of a simple uniform density plasma prism, with its cold plasma index of refraction less than unity, is such as to bend the new pulse into the line of sight and allow simultaneous observation of the old and new pulse. From the observed gradient in dispersion I estimate a refraction angle of 1 $`\mu `$as at 327 MHz. Refraction of this magnitude at a point 2 pc along the line of sight from the pulsar to Earth would lead to a small time delay of 0.2 ms at 327 MHz. Of course this geometrically delayed path would also have the new dispersion delay.
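Before turning to the geometry, note that the split of the day-320 offsets quoted in Sec. 2 into dispersive and achromatic parts is a simple two-parameter fit. A quick consistency check (assuming the standard cold-plasma dispersion constant, roughly 4.149×10³ s MHz² per pc cm⁻³) recovers both the 0.9-ms achromatic delay and a dispersion step close to the measured 0.12 cm⁻³ pc glitch:

```python
# Model the day-320 offsets as t(f) = a/f**2 + c (t in ms, f in MHz):
# 'a' is the cold-plasma dispersive part, 'c' the achromatic residual.
f1, t1 = 327.0, 5.1
f2, t2 = 610.0, 2.1

a = (t1 - t2)/(1.0/f1**2 - 1.0/f2**2)     # dispersive coefficient, ms MHz^2
c = t1 - a/f1**2                          # achromatic residual, ms

K = 4.149e3                               # s MHz^2 per pc cm^-3
dDM = (a/1000.0)/K                        # implied dispersion-measure step

print(f"achromatic delay  c = {c:.2f} ms")          # ~0.9 ms
print(f"dispersive delay at 327 MHz = {a/f1**2:.2f} ms")
print(f"implied Delta DM = {dDM:.3f} pc cm^-3")     # ~0.11
```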
Figure 2 presents the optics of this plasma prism model more explicitly by taking a cut of the total wave phase as the wave exits the plasma prism. The wave phase consists of a geometric part, a cut through a quadratic ‘Fresnel bowl’, and the plasma prism part shown to the right of the line of sight. What one observes at any instant is signal from the stationary phase points. With a plasma prism, as with a gravitational lens, one sees an odd number of signal paths, in this case 3. Path 1 is the unperturbed geometric line of sight. Path 2 is on the leading edge of the prism and path 3 passes within the prism. Thus path 2 would have approximately the same dispersion as path 1, while 3 would have the excess dispersion of the prism. The flux one observes from each stationary point depends on the size of the stationary phase patch, or more explicitly on the second derivative of the phase (e.g. Clegg et al. 1998). For the unperturbed line of sight, the size is that of the first Fresnel zone. As the line of sight moves toward the prism’s edge, the locations of 2 and 3 change roughly linearly, and the excess delay along this path will change approximately quadratically. Further, this geometric path delay effect is approximately achromatic. Thus we can identify point 2 with the ghost component and point 3 with the simultaneous appearance of the new, dispersed pulse. The signal from point 3 comes through the body of the prism which, as stated above, has the further property of excess multipath propagation. This effect can explain the difficulty of seeing the third pulse during the days when the ghost pulse is seen (245-275).

## 4. Conclusion

I have presented the 327-MHz Crab pulsar data from late 1997. These show a number of events: ghost components, a dispersion glitch and a spindown. The principal features can be explained by propagation through a plasma prism that takes several months to drift through the line of sight. The physical parameters of the prism, a size of $`3\times 10^{14}`$ cm and a density of 1200 e cm⁻³, are reasonably consistent with those determined by forbidden-line observations of the filamentary gas surrounding the optical synchrotron nebula. The density is possibly high given the small length scale in relation to the filamentary structure discussed by Hester et al. 1996 and Sankrit et al. 1998. The density can be reduced by choosing a sheet or pencil geometry that extends into the plane of the sky on a larger scale than what is detected in the transverse direction (see the similar argument for tiny interstellar HI structures in Heiles 1997).

Acknowledgements. I thank the NRAO staff at the Green Bank site who have helped to maintain the pulsar monitoring telescope program there, Graham Smith and Andrew Lyne for many discussions on these still puzzling events, and Berkeley students Tony Wong and Jay Valanju who assisted in data analysis.

## References

Backer, D., et al. 1993, ApJ, 404, 636

Backer, D., & Wong, T. 1996, ASP CS, 105, 87

Clegg, A., Fey, A., & Lazio, J. 1998, ApJ, 496, 253

Heiles, C. 1997, ApJ, 491, 193

Hester, J., et al. 1996, ApJ, 456, 225

Isaacman, R., & Rankin, J. 1977, ApJ, 214, 214

Lyne, A., & Thorne, D. 1975, MNRAS, 172, 97

Sankrit, R., & Hester, J. 1997, ApJ, 491, 796

Wong, T., Backer, D., & Lyne, A. 1999, in preparation
## 1 Introduction The study of neutrino masses and lepton mixing may provide crucial clues towards the solution of the general problem of fermion masses and mixing. One of the striking features of lepton mixing is the fact that an explanation of the atmospheric neutrino anomaly through neutrino oscillations requires a large mixing angle for $`\nu _\mu \to \nu _\tau `$ or $`\nu _\mu \to \nu _{sterile}`$. This is to be contrasted with the quark sector, where the mixing is small. Neutrinos have a distinguishing feature of being the only known fermions which are neutral with respect to all conserved charges, namely electric charge and colour. As a result, they can have Majorana masses, which in particular can arise through the seesaw mechanism . The seesaw mechanism provides a very natural and attractive explanation of the smallness of the neutrino masses compared to the masses of the charged fermions of the same generation through the existence of heavy singlet neutrinos $`\nu _R`$ <sup>1</sup><sup>1</sup>1For recent studies of the seesaw mechanism see, e.g., . However, this mechanism does not fix completely the overall scale of the light neutrino masses since the mass scale of $`\nu _R`$, though naturally large, is not precisely known. Moreover, the ratios of the light neutrino masses as well as the lepton mixing angles remain arbitrary: one can easily obtain any desired values of these parameters by properly choosing the neutrino Dirac mass matrix and the Majorana mass matrix of singlet neutrinos. Therefore by itself, without any additional assumptions, this mechanism has limited predictive power. To gain more predictivity one has to invoke additional assumptions. In this letter, we study how phenomenologically viable neutrino masses and mixings can be generated within the framework of the seesaw mechanism, together with a reasonable set of assumptions, which can be summarized as follows: (i) We work in the framework of a three-generation $`SU(2)_L\times U(1)`$ model, with the addition of three right-handed neutrino fields, which are singlets under $`SU(2)_L\times U(1)`$. No Higgs triplets are introduced and thus the effective mass matrix for the left-handed Majorana neutrinos is entirely generated by the seesaw mechanism, being given by $$m_L=m_DM_R^{-1}m_D^T,$$ (1) where $`m_D`$ denotes the neutrino Dirac mass matrix and $`M_R`$ stands for the Majorana mass matrix of right-handed neutrinos. (ii) We assume that the neutrino Dirac mass matrix $`m_D`$ has a hierarchical eigenvalue structure, analogous to the one for the up-type quarks. This is a GUT-motivated assumption. However, for our arguments, the only important point is that the eigenvalues of $`m_D`$ be hierarchical; their exact values do not play an essential rôle. (iii) We assume that the charged lepton and neutrino Dirac mass matrices, $`m_l`$ and $`m_D`$, are “aligned” in the sense that in the absence of the right-handed mass $`M_R`$, the leptonic mixing would be small, as it is in the quark sector. In other words, we assume that the left-handed rotations that diagonalize $`m_l`$ and $`m_D`$ are the same or nearly the same. Again, this assumption is motivated by GUTs. We therefore consider that the large lepton mixing results from the fact that neutrinos acquire their mass through the seesaw mechanism. (iv) We assume that the Dirac and Majorana neutrino mass matrices are unrelated. The exact meaning of this assumption will be explained in the next section.
We shall investigate whether the seesaw mechanism, constrained by our set of assumptions, can lead to a phenomenologically viable neutrino mass matrix and, if so, whether it can help us to understand some of the salient features of the leptonic mixing. In particular, it would be interesting to understand why the mixing angle $`\theta _{23}`$ responsible for the atmospheric $`\nu _\mu \to \nu _\tau `$ oscillations is large, while the mixing angle $`\theta _{13}`$ which governs the subdominant $`\nu _e\to \nu _{\mu (\tau )}`$ oscillations of atmospheric neutrinos and long baseline $`\nu _e\to \nu _{\mu (\tau )}`$ oscillations is small. Another interesting question is whether this constrained seesaw mechanism can help us to discriminate among possible neutrino mass hierarchies – normal hierarchy, inverted hierarchy and quasi-degeneracy. Furthermore, it would be useful if the seesaw mechanism could provide some guidance as to the possible solutions to the solar neutrino problem – the large mixing angle MSW (LMA), small mixing angle MSW (SMA) and vacuum oscillations (VO) solutions. We shall address these issues within the constrained seesaw mechanism described above. ## 2 General framework As previously mentioned, we work in the context of the standard three-generation $`SU(2)_L\times U(1)`$ model, where the only additional fields are the three right-handed neutrinos $`\nu _{iR}`$ <sup>2</sup><sup>2</sup>2Only two of the three known experimental indications of nonzero neutrino mass (solar neutrino problem , atmospheric neutrino data and the accelerator LSND results ) can be explained through neutrino oscillations with just three light neutrino species. As the LSND result is the only one that has not yet been independently confirmed, we choose not to consider it here. The most general charged lepton and neutrino mass terms can be written as $$-\mathcal{L}_{mass}=(m_l)_{ij}\overline{l}_{iL}l_{jR}+(m_D)_{ij}\overline{\nu }_{iL}\nu _{jR}+\frac{1}{2}(M_R)_{ij}\nu _{iR}^TC\nu _{jR}+h.c.,$$ (2) where $`m_l`$ and $`m_D`$ stand for the charged lepton and neutrino Dirac mass matrices arising from Yukawa coupling with the Higgs doublet, while $`M_R`$ denotes the Majorana mass matrix of right-handed neutrinos. Since the right-handed Majorana mass terms are $`SU(2)_L\times U(1)`$ invariant, $`M_R`$ is naturally large, not being protected by the low energy gauge symmetry. The matrices $`m_l`$ and $`m_D`$ are in general arbitrary complex matrices, while $`M_R`$ is a symmetric complex matrix. Without loss of generality we may choose a weak basis (WB) where the charged lepton mass matrix is diagonal, with real positive eigenvalues. The lepton mass matrices can be written as $`m_l`$ $`\equiv `$ $`d_l=diag(m_e,m_\mu ,m_\tau ),`$ $`m_D`$ $`=`$ $`V_Ld_\nu V_R^{\dagger},`$ $`M_R`$ $`=`$ $`U_RDU_R^T,`$ (3) where $`d_l`$, $`d_\nu `$ and $`D`$ are diagonal, real positive matrices while $`V_L`$, $`V_R`$ and $`U_R`$ are unitary matrices. In the absence of $`M_R`$, the leptonic mixing matrix $`V`$ entering in the charged-current weak interactions would be given by $`V=V_L`$. Our assumption (iii) that $`m_l`$ and $`m_D`$ are “aligned” in their left-handed rotations means that $`V_L`$ is assumed to be close to the unit matrix, thus implying that in the absence of $`M_R`$ leptonic mixing would be small, in analogy with the quark sector. The mass terms in Eq. (3) are written in a WB; therefore the gauge currents are still diagonal.
It should be emphasized that one has a large freedom to make WB transformations which leave the gauge currents diagonal but alter the mass terms. One can use this freedom to choose, e.g., a WB where both $`m_l`$ and $`M_R`$ are diagonal. However, for our arguments, it will be more convenient to choose a different $`\nu _R`$ basis, to be specified below. The effective mass matrix of the light left-handed neutrinos resulting from the seesaw mechanism can then be written as $$m_L=V_Ld_\nu W_RD^{-1}W_R^Td_\nu V_L^T,$$ (4) where $`W_R=V_R^{\dagger}U_R^{*}`$. The physical leptonic mixing among the light neutrinos which enters in the probabilities of neutrino oscillations is given by the matrix $`V`$ that diagonalizes $`m_L`$: $$V^Tm_LV=diag(m_1,m_2,m_3).$$ (5) Here $`m_i`$ ($`i=1,2,3`$) are the masses of the light neutrinos. We shall disregard possible CP violation effects in the leptonic sector and assume the neutrino mass matrix to be real. Its eigenvalues $`m_i`$ can be of either sign, depending on the relative CP parities of neutrinos. The physical neutrino masses are $`|m_i|`$. One of the challenges is how to obtain large mixing in $`V`$ without resorting to fine tuning. We shall show that this is possible in the framework of the seesaw mechanism, together with the assumptions listed in sec. 1. Following our assumption (iii), we shall consider that $`V_L\simeq 1`$ in the WB where $`m_l`$ is diagonal. One can then write $$m_L=d_\nu (M_R^{\prime})^{-1}d_\nu ,$$ (6) where $$(M_R^{\prime})^{-1}=W_RD^{-1}W_R^T,$$ (7) thus fixing the $`\nu _R`$ basis. It is useful to write the explicit form of $`m_L`$ as $$m_L=\left(\begin{array}{ccc}m_u^2M_{11}^{-1}& m_um_cM_{12}^{-1}& m_um_tM_{13}^{-1}\\ m_um_cM_{12}^{-1}& m_c^2M_{22}^{-1}& m_cm_tM_{23}^{-1}\\ m_um_tM_{13}^{-1}& m_cm_tM_{23}^{-1}& m_t^2M_{33}^{-1}\end{array}\right),$$ (8) where $`M_{ij}^{-1}\equiv (M_R^{\prime})_{ij}^{-1}`$, and $`m_u`$, $`m_c`$ and $`m_t`$ denote the eigenvalues of $`m_D`$. For our numerical estimates we shall take them to be equal to the masses of the corresponding up-type quarks, but for our general arguments their precise values are unimportant. We shall adopt the parametrization of the leptonic mixing matrix $`V`$ which coincides with the standard parametrization of the quark mixing matrix and identify the mixing angle responsible for the dominant channel of the atmospheric neutrino oscillations with $`\theta _{23}`$, the one governing the solar neutrino oscillations with $`\theta _{12}`$ and the mixing angle which governs the subdominant $`\nu _e\to \nu _{\mu (\tau )}`$ oscillations of atmospheric neutrinos and long baseline $`\nu _e\to \nu _{\mu (\tau )}`$ oscillations with $`\theta _{13}`$. Assuming that $`m_1,m_2\ll m_3`$ and $`\theta _{23}\simeq 45^{\circ}`$ (which is the best fit value of the Super-Kamiokande data ) and taking into account that the CHOOZ experiment indicates that $`\theta _{13}\ll 1`$ , it can be shown that $`m_L`$ must have the approximate form $$m_L=m_0\left(\begin{array}{ccc}\kappa & \epsilon & \epsilon ^{\prime}\\ \epsilon & 1+\delta -\delta ^{\prime}& 1-\delta \\ \epsilon ^{\prime}& 1-\delta & 1+\delta +\delta ^{\prime}\end{array}\right),$$ (9) where $`\kappa `$, $`\epsilon `$, $`\epsilon ^{\prime}`$, $`\delta `$ and $`\delta ^{\prime}`$ are small dimensionless parameters. Comparing Eqs. (8) and (9), one concludes that the following relations should hold, in leading order: $$m_c^2M_{22}^{-1}=m_t^2M_{33}^{-1}=m_cm_tM_{23}^{-1}.$$ (10) These relations seem to indicate that in order to obtain the form of Eq.
(9), strong correlations are required between the entries of $`d_\nu `$ and those of $`(M_R^{\prime})^{-1}`$, in apparent contradiction with our assumption (iv). However, it can be readily seen that in fact there is no conflict. Obviously, the form of $`(M_R^{\prime})^{-1}`$ depends on the $`\nu _R`$ basis one chooses. In the definition of $`(M_R^{\prime})^{-1}`$ given by Eq. (7), we have included part of the right-handed rotation arising from the diagonalization of $`m_D`$, namely $`V_R`$ enters in $`W_R`$ defined as $`W_R=V_R^{\dagger}U_R^{*}`$. Therefore $`(M_R^{\prime})^{-1}`$ contains information about the Dirac mass sector, and Eq. (10) is not necessarily in conflict with our assumption (iv). This assumption has to be formulated in terms of weak-basis invariants. What should be required is that the ratios of the eigenvalues of $`(M_R^{\prime})^{-1}`$ should not be related to the ratios of the eigenvalues of $`m_D`$. In order to see how the phenomenologically favoured form of $`m_L`$ can be achieved without contrived fine tuning between the parameters of the Dirac and Majorana sectors, let us first consider the two-dimensional sector of $`m_L`$ in the 2-3 subspace, which is responsible for a large $`\theta _{23}`$. We shall write the diagonalized Dirac mass matrix $`d_\nu `$ using the dimensionless parameters $`p`$ and $`q`$: $$d_\nu =m_t\,diag(p^2q,\,p,\,1),\qquad p=m_c/m_t\simeq 10^{-2},\qquad q=m_um_t/m_c^2\simeq 0.4.$$ (11) It follows from Eq. (6) that the 2-3 sector of $`(M_R^{\prime})^{-1}`$, in order to lead to the 2-3 structure of Eq. (9) (with all elements approximately equal to unity up to a common factor), should have the following form: $$M_R^{-1}\propto \left(\begin{array}{cc}1& p\\ p& p^2\end{array}\right).$$ (12) The eigenvalues of the matrix in Eq. (12) are 0 and $`1+p^2`$, and thus by choosing the pre-factor to be $`const/(1+p^2)`$ one arrives at the matrix $`M_R^{-1}`$ of the desired form with $`p`$- and $`q`$-independent eigenvalues. The question is now whether it is possible to find a $`3\times 3`$ matrix whose 2-3 sector corresponds to Eq. (12) while its eigenvalues (or invariants) are independent of $`p`$ and $`q`$. This turns out to be possible, and the simplest form of $`(M_R^{\prime})^{-1}`$ fulfilling this requirement is $$(M_R^{\prime})^{-1}\propto \frac{1}{1+p^2}\left(\begin{array}{ccc}(1+p^2)\gamma & \sqrt{1+p^2}(\beta -\alpha p)& \sqrt{1+p^2}(\alpha +\beta p)\\ \sqrt{1+p^2}(\beta -\alpha p)& 1& p\\ \sqrt{1+p^2}(\alpha +\beta p)& p& p^2\end{array}\right),$$ (13) where the dimensionless parameters $`\alpha `$, $`\beta `$ and $`\gamma `$ do not depend on $`p`$ and $`q`$. It is straightforward to check that the eigenvalues of the matrix in Eq. (13) are $`p`$- and $`q`$-independent. The easiest way to do that is by noting that $`(M_R^{\prime})^{-1}`$ can be written as $$(M_R^{\prime})^{-1}=S_R^T(M_R^0)^{-1}S_R$$ (14) with $$S_R=\left(\begin{array}{ccc}1& 0& 0\\ 0& c_\varphi & s_\varphi \\ 0& -s_\varphi & c_\varphi \end{array}\right),\qquad (M_R^0)^{-1}=\frac{1}{2M}\left(\begin{array}{ccc}\gamma & \beta & \alpha \\ \beta & 1& 0\\ \alpha & 0& 0\end{array}\right),$$ (15) and $$c_\varphi =\mathrm{cos}\varphi ,\qquad s_\varphi =\mathrm{sin}\varphi ,\qquad \varphi =\mathrm{arctan}\,p.$$ (16) From Eqs. (6), (14), (15) and (16) one obtains $$m_L=\frac{m_t^2}{2M}\frac{p^2}{1+p^2}\left(\begin{array}{ccc}q^{\prime 2}p^2\gamma & q^{\prime}p(\beta -\alpha p)& q^{\prime}(\alpha +\beta p)\\ q^{\prime}p(\beta -\alpha p)& 1& 1\\ q^{\prime}(\alpha +\beta p)& 1& 1\end{array}\right),$$ (17) where $$q^{\prime}\equiv q\sqrt{1+p^2}\simeq q.$$ (18) It is worth emphasizing that we have obtained $`m_L`$ of the desired form, while abiding by our assumptions. Comparison of Eqs.
(9) and (17) leads to the following identification for the parameters of the phenomenological mass matrix of light neutrinos: $$\kappa =q^{\prime 2}p^2\gamma ,\qquad \epsilon =q^{\prime}p(\beta -\alpha p),\qquad \epsilon ^{\prime}=q^{\prime}(\alpha +\beta p),\qquad \delta =\delta ^{\prime}=0.$$ (19) The largest eigenvalue of the matrix $`m_L`$ in Eq. (17), i.e. the mass of the heaviest of the three light neutrinos, is $$m_3\simeq \frac{m_t^2}{M}\frac{p^2}{1+p^2}\simeq \frac{m_c^2}{M}.$$ (20) It scales as $`m_c^2`$ rather than as the usually expected $`m_t^2`$. It has to be identified with $`\sqrt{\mathrm{\Delta }m_{atm}^2}`$, where $`\mathrm{\Delta }m_{atm}^2\simeq (2-6)\times 10^{-3}`$ eV<sup>2</sup>, which gives $$M\simeq (10^{10}-10^{11})\,\mathrm{GeV},$$ (21) i.e. an intermediate mass scale rather than the GUT scale. It has been shown in that the MSW effect can only occur for neutrinos, and in particular the LMA and SMA solutions of the solar neutrino problem are only possible, if the parameters of the mass matrix $`m_L`$ in Eq. (9) satisfy $$|4\delta -\delta ^{\prime 2}|>|2\kappa -(\epsilon ^2+\epsilon ^{\prime 2})|.$$ (22) Since $`\delta =\delta ^{\prime}=0`$ in Eq. (17), it is clear that the only solution to the solar neutrino problem that is not automatically ruled out in the case under consideration is the VO solution. It is interesting to ask whether our scheme can be modified so that nonzero values for $`\delta `$ and $`\delta ^{\prime}`$ be obtained. One simple possibility would be to assume an incomplete alignment between the mass matrix of charged leptons and the Dirac mass matrix of neutrinos: $`V_L\ne 1`$ instead of $`V_L=1`$. Then the effective mass matrix of light neutrinos $`m_L`$ would be obtained from Eq. (17) by the additional rotation by $`V_L`$. Taking for simplicity this rotation to be in the 2-3 subspace, one can readily make sure that it indeed yields nonzero $`\delta `$ and $`\delta ^{\prime}`$. However, in this case they are related by $`\delta =\delta ^{\prime 2}/4`$. Therefore the left-hand side (l.h.s.) of (22) vanishes, i.e. this inequality is not satisfied and the MSW solutions of the solar neutrino problem are still not possible. The fact that an additional rotation in the 2-3 subspace does not change the l.h.s. of (22) becomes obvious by noticing that $`4\delta -\delta ^{\prime 2}`$ coincides with the determinant of the $`2\times 2`$ submatrix of Eq. (9) in the 2-3 subspace. Thus, to accommodate the LMA or SMA solutions of the solar neutrino problem through the $`V_L`$ rotation one should consider a matrix $`V_L`$ of a more general form. There is, however, another way to achieve the same goal. One can arrive at $`\delta ,\delta ^{\prime}\ne 0`$ even with $`V_L=1`$ if one considers the following simple modification of the matrix $`(M_R^0)^{-1}`$ in Eq. (15): $$(M_R^0)^{-1}=\frac{1}{2M}\left(\begin{array}{ccc}\gamma & \beta & \alpha \\ \beta & 1& 0\\ \alpha & 0& r\end{array}\right),$$ (23) i.e. the 33-element of the matrix now is nonzero. The requirement $`|\delta |,|\delta ^{\prime}|\ll 1`$ translates into $`|r|\ll p^2`$. This yields the following effective mass matrix for the light neutrinos: $$m_L\simeq \frac{m_t^2}{2M}\frac{p^2}{1+p^2}\left(\begin{array}{ccc}q^{\prime 2}p^2\gamma & q^{\prime}p(\beta -\alpha p)& q^{\prime}(\alpha +\beta p)\\ q^{\prime}p(\beta -\alpha p)& 1-r/4p^2& 1-r/4p^2\\ q^{\prime}(\alpha +\beta p)& 1-r/4p^2& 1+3r/4p^2\end{array}\right).$$ (24) This means that now $$\delta \simeq r/4p^2,\qquad \delta ^{\prime}\simeq r/2p^2,$$ (25) and so the l.h.s. of (22) is nonzero, i.e. SMA and LMA solutions of the solar neutrino problem are possible. The parameters $`\kappa `$, $`\epsilon `$ and $`\epsilon ^{\prime}`$ in this case are the same as in the case $`r=0`$, i.e. are given by Eq. (19)<sup>3</sup><sup>3</sup>3The particular case of the neutrino mass matrix of the form (24) with $`\beta =\gamma =r=0`$ (which allows only the VO solution of the solar neutrino problem) was obtained in .
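As a cross-check of this construction, the following minimal numerical sketch (not from the paper; the quark-mass inputs, the scale $`M`$ and the parameter values are illustrative assumptions) builds $`m_L`$ from Eqs. (6), (11), (14)-(16) and (23) and extracts the light masses and mixing angles:

```python
import numpy as np

# Illustrative inputs (assumptions, not fixed by the text):
m_u, m_c, m_t = 0.003, 1.3, 175.0     # up-type quark masses in GeV
M = 3.0e10                            # heavy Majorana scale in GeV
alpha, beta, gamma, r = 1.1e-2, 1.1e-2, 0.0, 8.0e-6   # SMA-like values

p = m_c / m_t
q = m_u * m_t / m_c**2
d_nu = m_t * np.diag([p**2 * q, p, 1.0])              # Eq. (11)

# (M_R^0)^{-1} of Eq. (23) and the rotation S_R of Eqs. (15)-(16)
MR0_inv = (1.0 / (2.0 * M)) * np.array([[gamma, beta, alpha],
                                        [beta,  1.0,  0.0],
                                        [alpha, 0.0,  r]])
phi = np.arctan(p)
c, s = np.cos(phi), np.sin(phi)
S_R = np.array([[1.0, 0.0, 0.0],
                [0.0,   c,   s],
                [0.0,  -s,   c]])
MR_inv = S_R.T @ MR0_inv @ S_R                        # Eq. (14)

m_L = d_nu @ MR_inv @ d_nu                            # Eq. (6), with V_L = 1

# Diagonalize (real symmetric matrix); convert GeV to eV for readability
w, V = np.linalg.eigh(m_L * 1.0e9)
idx = np.argsort(np.abs(w))
w, V = w[idx], V[:, idx]
theta13 = np.arcsin(abs(V[0, 2]))
theta23 = np.arctan2(abs(V[1, 2]), abs(V[2, 2]))
theta12 = np.arctan2(abs(V[0, 1]), abs(V[0, 0]))
print("light masses [eV]:", np.abs(w))
print("theta23 [deg], sin(theta13), sin^2(2 theta12):",
      np.degrees(theta23), np.sin(theta13), np.sin(2 * theta12)**2)
```

With the SMA-like inputs above one finds $`\theta _{23}`$ close to $`45^{\circ}`$, a small $`\mathrm{sin}\theta _{13}`$ and hierarchical light masses, in line with the examples of the next section.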
## 3 Numerical examples We shall now give some illustrative examples of the values of the parameters for which all three types of the solutions – SMA, LMA and VO – are possible. We concentrate here on the case of normal mass hierarchy ($`m_1,m_2\ll m_3`$); the cases of inverted mass hierarchy and quasi-degeneracy will be discussed in sec. 4. The neutrino mass squared differences which enter in the probabilities of the solar and atmospheric neutrino oscillations are related to the eigenvalues of the effective mass matrix of light neutrinos $`m_L`$ via $`\mathrm{\Delta }m_{\odot}^2\equiv \mathrm{\Delta }m_{21}^2`$, $`\mathrm{\Delta }m_{atm}^2\equiv \mathrm{\Delta }m_{31}^2\simeq m_3^2`$. Consider first the following choice of the parameters of the matrix $`(M_R^0)^{-1}`$ in Eq. (23): $`\alpha =1.1\times 10^{-2}`$, $`\beta \simeq \alpha `$, $`\gamma \lesssim \beta ^2`$, $`r=8\times 10^{-6}`$. This gives the following values of the parameters of the mass matrix of light neutrinos $`m_L`$ in (9): $`\kappa \lesssim 10^{-9}`$, $`\epsilon \simeq 3\times 10^{-5}`$, $`\epsilon ^{\prime}\simeq 4.5\times 10^{-3}`$, $`\delta \simeq 0.04`$, $`\delta ^{\prime}\simeq 0.08`$. Diagonalization of $`m_L`$ can then be easily performed either by making use of the approximate analytic expressions derived in or numerically. It gives the following values of the masses and mixings of light neutrinos: $$m_1\simeq 3.6\times 10^{-6}\ \mathrm{eV},\qquad m_2\simeq 2.3\times 10^{-3}\ \mathrm{eV},\qquad m_3\simeq 2m_0\simeq 0.06\ \mathrm{eV},$$ $$\mathrm{sin}^22\theta _{12}\simeq 6.2\times 10^{-3},\qquad \mathrm{sin}\theta _{13}\simeq 1.7\times 10^{-3},$$ (26) where we have taken $`m_0=\sqrt{\mathrm{\Delta }m_{atm}^2}/2\simeq 0.03`$ eV. In all the cases we consider, the value of the mixing angle $`\theta _{23}`$ is very close to $`45^{\circ}`$ by construction of our mass matrix $`m_L`$. From Eq. (26) one finds $`\mathrm{\Delta }m_{\odot}^2\simeq 5.3\times 10^{-6}`$ eV<sup>2</sup>, i.e. this choice of the parameters leads to the SMA solution of the solar neutrino problem <sup>4</sup><sup>4</sup>4For recent analyses of the solar neutrino data and allowed ranges of the parameters see . The corresponding mass eigenvalues of heavy Majorana neutrinos are $`M_1\simeq 6\times 10^{10}`$ GeV, $`M_2\simeq M_3\simeq 5.5\times 10^{11}`$ GeV. Let us now choose $`\alpha =0.75`$, $`\beta \simeq \alpha `$, $`\gamma \lesssim \beta ^2`$, $`r=1.5\times 10^{-5}`$. This yields $`\kappa \lesssim 4\times 10^{-6}`$, $`\epsilon \simeq 2\times 10^{-3}`$, $`\epsilon ^{\prime}\simeq 0.3`$, $`\delta \simeq 0.073`$, $`\delta ^{\prime}\simeq 0.146`$. Diagonalization of $`m_L`$ then gives $$m_1\simeq 4.62\times 10^{-3}\ \mathrm{eV},\qquad m_2\simeq 7.86\times 10^{-3}\ \mathrm{eV},\qquad m_3\simeq 2m_0\simeq 0.06\ \mathrm{eV},$$ $$\mathrm{sin}^22\theta _{12}\simeq 0.83,\qquad \mathrm{sin}\theta _{13}\simeq 0.11,$$ (27) with $`\mathrm{\Delta }m_{\odot}^2\simeq 4\times 10^{-5}`$ eV<sup>2</sup>, i.e. this choice of the parameters leads to the LMA solution of the solar neutrino problem. The corresponding mass eigenvalues of heavy Majorana neutrinos are $`M_1\simeq 4.6\times 10^{10}`$ GeV, $`M_2\simeq 7.4\times 10^{10}`$ GeV, $`M_3\simeq 1.1\times 10^{11}`$ GeV. Finally, let us choose $`\alpha =1.13\times 10^{-3}`$, $`\beta \lesssim \alpha `$, $`\gamma \lesssim \beta ^2`$, $`r=2\times 10^{-8}`$. This yields $`\kappa \lesssim 10^{-11}`$, $`\epsilon \lesssim 3\times 10^{-6}`$, $`\epsilon ^{\prime}\simeq 4.5\times 10^{-4}`$, $`\delta \simeq 10^{-4}`$, $`\delta ^{\prime}\simeq 2\times 10^{-4}`$. Diagonalization of $`m_L`$ gives $$m_1\simeq 6.95\times 10^{-6}\ \mathrm{eV},\qquad m_2\simeq 1.29\times 10^{-5}\ \mathrm{eV},\qquad m_3\simeq 2m_0\simeq 0.06\ \mathrm{eV},$$ $$\mathrm{sin}^22\theta _{12}\simeq 0.91,\qquad \mathrm{sin}\theta _{13}\simeq 1.6\times 10^{-4},$$ (28) with $`\mathrm{\Delta }m_{\odot}^2\simeq 1.2\times 10^{-10}`$ eV<sup>2</sup>, i.e.
this choice of the parameters leads to the VO solution of the solar neutrino problem. The corresponding mass eigenvalues of heavy Majorana neutrinos are $`M_1\simeq 6\times 10^{10}`$ GeV, $`M_2\simeq M_3\simeq 5.3\times 10^{13}`$ GeV. Alternatively, one could choose $`\alpha =1.75\times 10^{-2}`$, $`\beta \lesssim \alpha `$, $`\gamma \lesssim \beta ^2`$, $`r=0`$. This gives $`\kappa \lesssim 2.5\times 10^{-9}`$, $`\epsilon \lesssim 5\times 10^{-5}`$, $`\epsilon ^{\prime}\simeq 7\times 10^{-3}`$, $`\delta =\delta ^{\prime}=0`$. One then obtains $`\mathrm{\Delta }m_{\odot}^2\simeq 1.1\times 10^{-10}`$ eV<sup>2</sup>, $`\mathrm{sin}\theta _{13}\simeq 2.5\times 10^{-3}`$ and $`\theta _{12}\simeq \theta _{23}\simeq 45^{\circ}`$, i.e. this choice of the parameters leads to the VO solution of the solar neutrino problem with bi-maximal neutrino mixing . The mass eigenvalues of heavy Majorana neutrinos in this case are $`M_1\simeq 6\times 10^{10}`$ GeV, $`M_2\simeq M_3\simeq 3.4\times 10^{12}`$ GeV. The examples given here demonstrate that with the inverse mass matrix of right-handed neutrinos $`(M_R^{\prime})^{-1}`$ of the form (23), depending on the values of its parameters, all three main neutrino oscillation solutions to the solar neutrino problem – SMA, LMA and VO – can be realized in the framework of the constrained seesaw mechanism. ## 4 Inverted mass hierarchy and quasi-degeneracy We shall now briefly discuss the other possible neutrino mass hierarchies – the inverted mass hierarchy $`|m_3|\ll |m_1|\simeq |m_2|`$ and the quasi-degenerate case with $`|m_1|\simeq |m_2|\simeq |m_3|`$. For the case of the inverted mass hierarchy there are three main zeroth order textures corresponding to the limit $`|m_1|=|m_2|`$, $`\theta _{13}=0`$, $`\theta _{23}=45^{\circ}`$ (see, e.g., ): $$m_L\propto \left(\begin{array}{ccc}\pm 2& 0& 0\\ 0& 1& 1\\ 0& 1& 1\end{array}\right);\qquad \left(\begin{array}{ccc}0& 1& 1\\ 1& 0& 0\\ 1& 0& 0\end{array}\right).$$ (29) The first two textures differ only by the sign of the 11-term. One can then invert Eq. (6) to find the inverse mass matrices of right-handed neutrinos $`(M_R^{\prime})^{-1}`$ which lead to these textures. To be consistent with our assumption (iv), these matrices must satisfy the following requirement: it should be possible to obtain each of them by a $`p`$- and $`q`$-dependent rotation of a $`p`$- and $`q`$-independent matrix. In other words, their eigenvalues $`\mathrm{\Lambda }_i`$ ($`i=1,2,3`$) must be $`p`$- and $`q`$-independent. Since all mass matrices in Eq. (29) have one zero eigenvalue, so do the corresponding matrices $`(M_R^{\prime})^{-1}`$, and their determinants vanish. Therefore it is sufficient to check if their traces ($`\mathrm{\Lambda }_1+\mathrm{\Lambda }_2+\mathrm{\Lambda }_3`$) and second invariants ($`\mathrm{\Lambda }_1\mathrm{\Lambda }_2+\mathrm{\Lambda }_1\mathrm{\Lambda }_3+\mathrm{\Lambda }_2\mathrm{\Lambda }_3`$) can be made $`p`$- and $`q`$-independent by a proper choice of the pre-factors in Eq. (29). One can readily make sure that for the first two textures (first matrix in Eq. (29) with both signs of the 11 element) this is impossible: if the trace of the corresponding matrix $`(M_R^{\prime})^{-1}`$ is made $`p`$- and $`q`$-independent, then the second invariant depends on $`p`$ and $`q`$, and vice versa. Therefore these zeroth-order textures do not satisfy our conditions (i)–(iv). Perturbing these textures by adding small terms to each of their elements will not change this conclusion. The situation is different for the second matrix in Eq. (29). The corresponding matrix $`(M_R^{\prime})^{-1}`$ has zero determinant and trace, and the second invariant can always be made $`p`$- and $`q`$-independent by a proper choice of the pre-factor.
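The invariant test just described is mechanical; a small symbolic sketch (my own illustration, using Eq. (11) and the inversion of Eq. (6), $`(M_R^{\prime})^{-1}=d_\nu ^{-1}m_Ld_\nu ^{-1}`$, with the overall pre-factor of each texture left out) makes the $`p`$- and $`q`$-dependence of the trace and second invariant explicit:

```python
import sympy as sp

p, q, mt = sp.symbols('p q m_t', positive=True)
d_nu = sp.diag(mt * p**2 * q, mt * p, mt)                # Eq. (11)

def invariants(texture):
    # Invert Eq. (6): (M_R')^{-1} = d_nu^{-1} m_L d_nu^{-1},
    # with m_L taken proportional to the zeroth-order texture.
    MRinv = sp.simplify(d_nu.inv() * texture * d_nu.inv())
    tr = sp.simplify(MRinv.trace())
    # second invariant = sum of principal 2x2 minors
    sec = sp.simplify((tr**2 - (MRinv**2).trace()) / 2)
    return tr, sec

t_first = sp.Matrix([[2, 0, 0], [0, 1, 1], [0, 1, 1]])   # first texture of Eq. (29)
t_last = sp.Matrix([[0, 1, 1], [1, 0, 0], [1, 0, 0]])    # last texture of Eq. (29)
for name, t in (('first texture', t_first), ('last texture', t_last)):
    tr, sec = invariants(t)
    print(name, '-> trace:', tr, '  second invariant:', sec)
```

Rescaling a texture by a constant $`C`$ rescales the trace by $`C`$ and the second invariant by $`C^2`$; for the first two textures no single choice of $`C`$ removes the $`p`$, $`q`$ dependence of both, whereas the last texture has zero trace and determinant, so its only nontrivial invariant can be normalized away, confirming that only the last texture passes this invariant test.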
However, this texture is ruled out on different grounds. It describes the bi-maximal mixing with the inverted hierarchy, which means $`\theta _{12}=45^{\circ}`$. In this case only the VO solution of the solar neutrino problem is possible . This implies $`\mathrm{\Delta }m_{21}^2=\mathrm{\Delta }m_{\odot}^2\sim 10^{-10}`$ eV<sup>2</sup>. Small nonvanishing values of $`\mathrm{\Delta }m_{21}^2`$ are achieved when the zeros in the texture matrix are filled in with small terms, and for the VO solution these small terms should be $`\lesssim 10^{-8}`$. The diagonalization of the corresponding matrix $`(M_R^{\prime})^{-1}`$ then gives the following values of the masses of the heavy singlet neutrinos $`M_i=\mathrm{\Lambda }_i^{-1}`$: two singlet neutrinos are almost degenerate with $`M_1\simeq M_2\simeq 10^8`$ GeV; the third mass eigenvalue turns out to be well above the Planck scale: $`M_3\sim 10^{22}`$ GeV, clearly not a physical value. Thus, this case of the inverted mass hierarchy is ruled out as well. It is interesting to note that in our argument we have not used the condition (iv), i.e. the mass matrices obtained from the last texture in Eq. (29) are excluded on the basis of our assumptions (i)-(iii) only. The mass matrices leading to the quasi-degenerate neutrino mass spectrum can be considered and ruled out using arguments analogous to those applied to the cases of the first two textures in Eq. (29). ## 5 Discussion We have shown that the seesaw mechanism, supplemented by the set of assumptions listed in the Introduction, leads to phenomenologically viable mass matrices of light active neutrinos. The mixing angle $`\theta _{23}`$ responsible for the dominant channel of the atmospheric neutrino oscillations can be naturally large without any fine tuning. The fact that the seesaw mechanism can lead to large lepton mixing even if the mixing in both the Dirac and Majorana sectors is small has been known for some time . Our approach gives a simple explanation to this fact: if the eigenvalues of the Dirac mass matrix of neutrinos $`m_D`$ are hierarchical, and the entries $`M_{ij}^{-1}`$ of the inverse Majorana mass matrix $`(M_R^{\prime})^{-1}`$ have the hierarchy $`M_{22}^{-1}\gg M_{23}^{-1}\gg M_{33}^{-1}`$ (which implies small mixing in the 2-3 sector of right-handed Majorana neutrinos), then the multiplication of $`(M_R^{\prime})^{-1}`$ by $`m_D`$ from the left and from the right suppresses the 22 element of the resulting effective mass matrix $`m_L`$ to a larger extent than it suppresses the 23 element, which in turn is more suppressed than the 33 element. This can lead to all the elements of the 2-3 sector of the resulting matrix $`m_L`$ being of the same order, yielding a large mixing angle $`\theta _{23}`$. Although the constrained seesaw mechanism allows one to obtain a large mixing angle $`\theta _{23}`$ in a very natural way, it does not explain why $`\theta _{23}`$ is large: the largeness of this mixing angle is merely related to the choice of the inverse mass matrix of heavy singlet neutrinos, Eq. (15) or (23). However, once this choice has been made, the smallness of the mixing angle $`\theta _{13}`$ which determines the element $`V_{e3}`$ of the lepton mixing matrix can be readily understood. For the case of the normal mass hierarchy $`m_1,m_2\ll m_3`$ the value of $`\theta _{13}`$ can be expressed in terms of the entries of the effective mass matrix $`m_L`$ in Eq. (9) as $`\mathrm{sin}\theta _{13}\simeq (\epsilon +\epsilon ^{\prime})/2\sqrt{2}`$ . From Eq.
(19) one then finds $`\mathrm{sin}\theta _{13}\simeq q(\alpha +2\beta p)/2\sqrt{2}\simeq q\alpha /2\sqrt{2}\simeq 0.14\,\alpha `$, assuming $`\beta \ll \alpha p^{-1}\simeq 100\,\alpha `$. Since all the solutions of the solar neutrino problem require $`|\alpha |<1`$ in order to have small enough $`\mathrm{\Delta }m_{\odot}^2`$ (see sec. 3), the smallness of $`\theta _{13}`$ follows. We have shown that all three main neutrino oscillation solutions to the solar neutrino problem – small mixing angle MSW, large mixing angle MSW and vacuum oscillations – are possible within the constrained seesaw. The mechanism does not favour any of these solutions over the others. The seesaw mechanism we have studied naturally leads to the normal neutrino mass hierarchy while disfavouring the inverted mass hierarchy and quasi-degenerate neutrinos. For the LMA and SMA solutions of the solar neutrino problem, the masses of the heavy singlet neutrinos are of the order $`10^{10}-10^{11}`$ GeV. For the VO solution, the lightest of the singlet neutrinos has a mass of the same order of magnitude, whereas the masses of the other two are $`\sim 10^{12}-10^{13}`$ GeV. The authors are grateful to K.S. Babu, M. Lindner, P. Lipari, M. Lusignoli, R.N. Mohapatra, G. Senjanović and A.Yu. Smirnov for useful discussions. This work was supported in part by the TMR network grant ERBFMRX-CT960090 of the European Union. The work of E.A. was supported by Fundação para a Ciência e a Tecnologia through the grant PRAXIS XXI/BCC/16414/98.
# Optimization Algorithms Based on Renormalization Group ## 1 Introduction In the last twenty years or so, we have realized that the interaction between physics and information sciences, such as operations research, optimization, pattern recognition and theories of learning, can be useful in studies of all of these fields. For example, the problem of spin glasses has been providing us with important insights not only for solid state physics but also for artificial intelligence. In addition, Kirkpatrick, Gelatt and Vecci demonstrated that the knowledge of statistical mechanics could be used in optimization problems by introducing fictitious temperature into those problems, which opened up possibilities of applying or generalizing various physical ideas to other fields. Study of physics has also benefited from the information sciences. One of the most important unanswered questions in physics of disordered systems is the one concerning the nature of the low temperature phase in spin glasses in three dimensions. While few analytical means is available for answering this question, the numerical approach also has a serious difficulty. Namely, when one uses the Monte Carlo method, which is the most commonly used numerical technique for disordered systems, one always encounters a severe slowing down which makes it practically impossible to study a sizable system below the critical temperature. This difficulty seems to be closely related to the fact that the problem of finding the ground states of, say, the Edwards-Anderson (EA) spin glass model in three dimension, is NP-hard. In general, therefore, if one develops an efficient heuristic algorithm for NP-hard optimization problems, the above-mentioned difficulty in study of spin glasses would be also removed. In fact, such an attempt was made by Grötschel et al, who tried to apply the linear programming technique to the general spin glass optimization problems. It is suggestive to compare the case of two dimensions with that of three dimensions. In two dimensions, the ground state problem is not NP-hard and some polynomial time algorithms were proposed. However, in this case, the system does not have a phase transition at any finite temperature. In addition, many heuristic algorithms have been shown to be very efficient in two dimensions while in three dimensions few of them has turned out to be powerful for large scale problems. In this paper, we present another example of application of a physical idea to optimization problems. While the method discussed below works more efficiently than some other naive optimization methods such as the simulated annealing, in three dimensions it works not as efficiently as in two dimensions, similar to other heuristic methods. It is, however, illuminating to take a close look at this method because for this method we can clearly see the reason for relatively poor performance in higher dimensions and it casts a new light on the relationship between physical nature of a system and computational complexity of its ground state problem. Renormalization group is one of most important ideas that the physics of this century produced. We attempted to construct a heuristic optimization algorithm exploiting this idea. There, we proposed a renormalization transformation in which multiple initial solutions are used to decompose the whole system into small pieces, i.e., “block spins”. Then, neglecting all internal degrees of freedom inside each piece, we obtain a new “renormalized” problem. 
The method was successfully applied to the spin glass problem in two dimensions. For models in higher dimensions, however, it was not clear if the algorithm was as efficient or not, because we could not clearly see the asymptotic behavior of the required computational time as a function of the problem size. Later, Usami and Kano used the renormalization group idea for optimization in a different approach. They claimed that the method works fine for the traveling salesman problem. Recently, Houdayer and Martin combined the above mentioned renormalization transformation with the genetic algorithm. They demonstrated that the resulting algorithm is as efficient as other “state-of-art” heuristic optimization algorithms for a wide class of problems including spin glasses in three dimensions and the traveling salesman problem. In this paper, we concentrate on the EA spin glass model described by the following Hamiltonian for concreteness: $$=\underset{ij}{}J_{ij}S_iS_j$$ where $`S_i`$ being an Ising variable and $`J_{ij}`$ a quenched random variable. ## 2 Cross Breeding Operation Our first attempt to combine the renormalization group idea with heuristic optimization algorithms was inspired by Swendsen and Wang’s replica Monte Carlo method for simulating spin glass systems at finite temperature. Swendsen and Wang considered an ensemble of identical systems, which they called “replicas”, each at different temperature. By comparing two replicas at slightly different temperatures, they decomposed into clusters of spins. To be more specific, they considered the exclusive-or of two Ising spins, each locating at the corresponding site on a different replica. Regarding the resulting 0 and 1 as two “colors”, a two-toned map of the system was obtained. (See the “$`n=2`$” case in Fig.1 .) A cluster of spins, then, was defined as a group of spins with the same color connected by couplings $`J_{ij}`$. We can choose the inter-cluster coupling constants so that the detailed balance condition in terms of the original spins may be satisfied even in the updating process where clusters, rather than the individual original spins, are used as updating units. In this way, Swendsen and Wang succeeded in reducing the autocorrelation time by orders of magnitude in two dimensions. The success was, however, not as complete for three dimensions as in the two dimensional case. One of the obvious reasons why their algorithm does not work in higher dimensions is that the clusters grow too big. In the system at a certain temperature, regions of the size of the correlation length are randomly activated. It would be desirable to identify each of such activated clusters. If we compare only two systems at the same temperature, however, two adjacent regions are often “accidentally” activated in both systems. In such cases, these two clusters are not recognized as separate objects. Since we have only two colors in Swendsen and Wang’s algorithm, it is increasingly probable that the same color is assigned to two sizable regions adjacent to each other in higher dimensions. With a small number of colors, therefore, structures of a size comparable to the correlation length are not effectively updated. In order to overcome this difficulty we considered the following cross-breeding operation. The inputs to this operation are multiple “parents”, i.e., spin configurations, and the outcome is an “offspring”, a configuration which is better than, or at least as good as, the best among the parents. 
The number of parents, $`n`$, is adjusted to maximize the efficiency of the whole algorithm. A cluster, or a “block spin”, is defined as the maximal connected set of spins in which an arbitrary two spins have the same relative orientation for all the parents. Technically speaking, we assign a “color” defined by $$c_i\underset{\mu =2}{\overset{n}{}}2^{\mu 2}\left(\frac{S_i^{(1)}S_i^{(\mu )}+1}{2}\right)$$ to each site $`i`$, where $`S_i^{(\mu )}`$ is the $`i`$-th spin in the $`\mu `$-th parent. This is a natural generalization of the exclusive-or. We, then, define a cluster as a connected set of sites with the same color. Once clusters are identified, we regard each cluster as a single renormalized spin and assign a one-bit degree of freedom to it. The next step is quenching all $`n`$ configurations with these renormalized spins as updating units. Relative orientation of original “bare” spins inside each renormalized spin does not change hereafter throughout the whole cross-breeding operation. After this quench procedure, we compare resulting configurations and eliminate duplication, i.e., we discard a configuration if there is another one identical to it. Next, we again compare the resulting configurations and redefine block spins in the same fashion as described above. New block spins consist of previous block-spins and therefore larger than those. We “heat up” all the remaining parents but the one with the lowest energy. As a result, these configurations become random in terms of block spins. We then quench all remaining configurations and eliminate duplication. These procedures are repeated until the only one configuration remains. Finally we take the last configuration as the “offspring”. Since we do not heat up the configuration with the lowest energy at any stage, it is guaranteed that the energy of the offspring is not larger than the lowest energy among parents. ## 3 Various Structures of Algorithms Besides the difficulty due to too large clusters, there is yet another point that we have to consider in order to cast Swendsen and Wang’s idea into a form of an optimization algorithm. Namely, in their algorithm the block spin transformation is repeatedly applied only to the configurations at a certain fixed temperature. While this was sufficient for a finite temperature Monte Carlo simulation, it would be certainly insufficient for optimization because the cluster size will never grow large. The fixed cluster size is as problematic as too big clusters, because such fixed-size clusters can not update the system effectively at low temperatures where the correlation length is much larger than the cluster size. ### 3.1 Chain Structure In the replica optimization method, we applied the cross-breeding operation described above repeatedly to the outcome of the previous cross-breeding operation, expecting that as a configuration goes through more cross-breeding operations the size of block spins becomes larger until finally it becomes as large as the whole system. There, we proposed two options for implementing this idea. One is repeated application of the cross-breeding along an “assembly line” ((a) in Fig.2 ) and the other is the cross-breeding operations organized as a hierarchical tree ((b) in Fig.2 ) as we discuss below. In the “assembly-line” structure, at each cross-breeding, one of the inputs is the outcome of the last cross-breeding whereas the other inputs are spin configurations of the first generation. 
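As a concrete illustration of the block-spin identification described above, here is a minimal sketch (the 2D periodic lattice and all names are my own illustrative choices, not the paper's code) that computes the colors $`c_i`$ for $`n`$ parent configurations and labels the connected clusters by flood fill:

```python
import numpy as np
from collections import deque

def block_spins(parents):
    """parents: (n, L, L) integer array of Ising spins +/-1.
    Returns an (L, L) array of cluster ("block spin") labels."""
    n, L, _ = parents.shape
    # Color of site i: c_i = sum_{mu=2}^{n} 2^{mu-2} (S_i^(1) S_i^(mu) + 1)/2
    color = np.zeros((L, L), dtype=np.int64)
    for mu in range(2, n + 1):
        agree = ((parents[0] * parents[mu - 1] + 1) // 2).astype(np.int64)
        color += (1 << (mu - 2)) * agree          # 1 where the two spins agree
    # A cluster is a maximal connected set of equal-color sites
    # (nearest neighbours, periodic boundaries).
    label = -np.ones((L, L), dtype=int)
    nclusters = 0
    for x in range(L):
        for y in range(L):
            if label[x, y] >= 0:
                continue
            label[x, y] = nclusters
            queue = deque([(x, y)])
            while queue:
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = (a + da) % L, (b + db) % L
                    if label[u, v] < 0 and color[u, v] == color[a, b]:
                        label[u, v] = nclusters
                        queue.append((u, v))
            nclusters += 1
    return label
```

Each label then serves as one renormalized spin in the subsequent quench, with the relative orientations of the original spins inside a cluster frozen.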
The spin configurations of the first generation are created from random spin configurations through quenching. Here quenching is nothing but the poor technique of optimization often called the “greedy algorithm”. Therefore, in general, there is a large difference in the fitness of the best input and those of the others. As a result in most cases the offspring resembles the best input much more than the others. The other parents work as perturbations to the currently-best solution. The advantage of this structure is the easiness of implementation. In addition, the computational cost for moving to the next generation does not increase as we go to higher generations. On the other hand, we cannot expect to gain from one application of the cross-breeding in this structure as much as we can get from going to the next generation in the other structures discussed below. However, for small scale applications, we found that this structure was advantageous. ### 3.2 Hierarchical Tree Structure While the assembly-line structure is simple to implement, we expect that it does not find ground states efficiently in larger scale applications. As the first parent becomes better, it is increasingly difficult to find large clusters by which the first parent can be transformed into better states, only with perturbations produced by the first generation configurations. This is because usually the block spins found in cross breeding operation with the first generation configurations are small. Therefore it is desirable to choose equally good $`n`$ spin configurations as the inputs of the cross-breeding operations. This naturally leads to the tree structure. Namely, we consider a hierarchical tree with a leaf attached to each end point at the tree top, i.e., the leftmost column in (b) of Fig.2 . The procedure goes rightward from the tree top. At each branching point, represented by an open circle in Fig.2 , we apply the cross-breeding operation to the parents. This results in a new spin configuration which is, then, placed at the position next to the branching point. We repeat this procedure for all branching points until we get the last leaf for the root, the right-most one. As we go rightwards on the tree, the typical size of renormalized spins becomes larger. In two dimensions, the exponent, $`\theta `$, which characterizes the dependence of the excitation energy upon the size of spin clusters is negative. Therefore as we proceed in the algorithm, the typical excitation energy we handle becomes smaller. This is why the “smaller-scale-first-larger-scale-later” strategy works fine at least in two dimensions. This may mean that the present strategy works only for systems that does not have a phase transition at a finite temperature, since the negative value of $`\theta `$ signifies the absence of a phase transition at a finite temperature. For general cases where larger scale structures may correspond to larger energy, a more sophisticated consideration will be required. Using this method, we have studied two-dimensional spin glass systems to find that the thermal exponent does not agree with the stiffness exponent in contrast to what had been generally taken as granted. We also found recently that the elementary excitations in this model are fractal clusters of spins and that the scaling exponent characterizing their excitation energy agrees with the thermal exponent whereas it certainly differs from the stiffness exponent. 
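The tree organisation itself is just a reduction loop; a toy driver (a sketch assuming pairwise cross-breeding, with `cross_breed` standing for the operation of sec. 2 and the first-generation configurations prepared by quenching random states) could read:

```python
def tree_optimize(leaves, cross_breed):
    """leaves: list of first-generation configurations (the tree top).
    Cross-breed adjacent pairs level by level until one offspring remains."""
    level = list(leaves)
    while len(level) > 1:
        nxt = [cross_breed([level[i], level[i + 1]])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # an unpaired configuration is promoted
            nxt.append(level[-1])
        level = nxt
    return level[0]                 # the root: the final optimized configuration
```

Because equally good configurations meet at each branching point, the typical block-spin size grows as one moves toward the root, matching the smaller-scale-first strategy described above.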
### 3.3 Genetic-Pool Structure — Genetic Algorithm Houdayer and Martin proposed yet another structure for the algorithm. Namely, they combined the cross-breeding operation with the genetic algorithm. The genetic pool of the first generation is an ensemble of the first-generation spin configurations. (See (c) in Fig.2 .) In order to generate a configuration of the $`(g+1)`$-th generation, they choose $`n`$ configurations randomly from the pool of the $`g`$-th generation, cross-breed them, and put the offspring into the $`(g+1)`$-th generation pool. They claimed that the resulting algorithm is efficient not only for the spin glass problems in two dimensions but also for a wider class of optimization problems, including the three-dimensional EA model and the traveling salesman problem, although the systematic study of the asymptotic efficiency of the algorithm has not been carried out. ## 4 A Complementary Algorithm Usami and Kano proposed another method for optimization inspired by the renormalization group idea. The method is specialized for the traveling salesman problem. They considered decompositions of systems into cells of the same size and shape. They first consider such a decomposition on a large scale and obtain a coarse-grained problem. Then, using the obtained global approximate solution, they consider another decomposition on a smaller scale. Their approach is complementary to ours in that their method deals with large scales first and small scales later. They claimed that the method is very efficient for the traveling salesman problem. In light of the relationship between the length scales and the energy scales discussed above, this “larger-scale-first” strategy may be appropriate because in a typical instance of the traveling salesman problem, larger length scale correspond to larger energy, i.e., $`\theta >0`$. ## 5 Discussions and Summary We have reviewed a few heuristic optimization algorithms based on the idea of the renormalization group. Since their asymptotic efficiency has not been thoroughly investigated, it is not clear if these approaches are useful for larger scale problems. However, at least, they show very clear advantage over other heuristic algorithms such as the simulated annealing for various optimization problems of interest. We have also discussed the relationship among phase transitions, the sign of the excitation-energy exponent $`\theta `$, and computational complexity of problems. Our discussion suggests that the simultaneous optimization on different scales may be required for a wider class of problems. ## Acknowledgements This work is supported by Grant-in-Aid for Scientific Research Program (No.11740232) from Mombusho, Japan.
# Quintessence inhomogeneous cosmology ## 1 Introduction Recently, there have been claims in the literature that the Universe, besides its content in normal matter and radiation, must possess a not yet identified component (usually called quintessence matter, Q-matter for short) , , , characterized by a negative pressure, and possibly a cosmological term. These claims were prompted at the realization that the clustered matter component can account at most for one third of the critical density. Therefore, an additional “soft” (i.e. non-clustered) component is needed if the critical density predicted by many inflationary models is to be achieved. Very often the geometry of the proposed models is very simple, just Friedmann-Lemaître-Robertson-Walker (FLRW). In constrast to FLRW models, inhomogeneous spaces are in general compatible with heat fluxes, and these might imply important consequences such as inflation or the avoidance of the initial singularity . Here we focus on an isotropic but inhomogeneous spherically symmetric universe which besides a material fluid contains a self-interacting scalar field (which can be interpreted as Q-matter), and a cosmological term, $`\mathrm{\Lambda }`$ which, in general, may vary with time. Density inhomogenities triggered by gravitational instability must be present at any stage of evolution. We only mention that the negative pressure associated to Q-matter and $`\mathrm{\Lambda }`$ will tend to slow down the growing modes (see e.g. , ), and shift the epoch of matter-radiation equality toward more recent times. ## 2 Einstein-Klein-Gordon field equations Let us consider a shear–free spherically–symmetric spacetime with metric $$ds^2=\frac{1}{F(t,r)^2}\left[v(t,r)^2dt^2+dr^2+r^2\text{d}\mathrm{\Omega }^2\right].$$ (1) where as usual $`\text{d}\mathrm{\Omega }^2\text{d}\theta ^2+\mathrm{sin}^2\theta \text{d}\varphi ^2`$. Units have been chosen so that $`c=G=1`$. As sources of the gravitational field we take: a fluid of material energy density $`\rho _f=\rho _f(r,t)`$, hydrostatic pressure $`p_f=p_f(r,t)`$, with a radial heat flow ($`q_r=q_r(r,t)`$ and $`q_t=q_\theta =q_\varphi =0`$), plus a cosmological term, related to the energy density of vacuum by $`\mathrm{\Lambda }=8\pi \rho _{vac}`$, that depends only on time $`\mathrm{\Lambda }=\mathrm{\Lambda }(t)`$, and a self-interacting scalar field $`\varphi `$ driven by the potential $`V(\varphi )`$ whose equation of state is $`p_\varphi =\left(\gamma _\varphi 1\right)\rho _\varphi `$. Hence the scalar field can be interpreted as Q-matter -see e.g. . The stress energy-tensor of the normal matter, with a heat flow, plus Q-matter (scalar field) and the cosmological term is $$T_k^i=(\rho _f+\rho _\varphi +p_f+p_\varphi )u^iu_k+(\mathrm{\Lambda }p_fp_\varphi )\delta _k^i+q^iu_k+q_ku^i,$$ (2) As equation of state for the fluid we choose $`p_f=\left(\gamma _f1\right)\rho _f`$ where $`\gamma _f`$ is a function of $`t`$ and $`r`$. Taking into account the additivity of the stress-energy tensor it makes sense to consider an effective perfect fluid description with equation of state $`p=\left(\gamma 1\right)\rho `$ where $`p=p_f+p_\varphi `$, $`\rho =\rho _f+\rho _\varphi `$ and $$\gamma =\frac{\gamma _f\rho _f+\gamma _\varphi \rho _\varphi }{\rho _f+\rho _\varphi },$$ (3) is the overall (i.e. effective) adiabatic index. 
The requirement that the cosmological term $`\mathrm{\Lambda }`$ is just a function of $`t`$ leads to the restriction that $`\gamma `$ also depends only on $`t`$ to render the system of Einstein equations integrable. The nice result we are seeking is a solution that has an asymptotic FLRW stage, with $`\mathrm{\Lambda }`$ evolving towards a constant, and the heat flow vanishing in that limit . To write the Einstein equations we use the ansatz $`F=a(t)+b(t)x`$ and $`v=c(t)+d(t)x`$ with the constraints $`a(t)d(t)=b(t)c(t)`$. This set of metrics contains those of Modak ($`b=0`$) , Bergmann ($`c=a,d=b`$) and Maiti ($`b=d=ka/4,`$ with $`k=0,\pm 1`$) . Another possibility arises when $`d=0`$, then re-defining the time by $`vdtdt`$, the Einstein equations are $$\rho +\mathrm{\Lambda }=12ab+3\dot{a}^2+6\dot{a}\dot{b}x+3\dot{b}^2x^2,$$ (4) $$p\mathrm{\Lambda }=\left(2b\ddot{b}3\dot{b}^2\right)x^2+2\left(2b^23\dot{a}\dot{b}+a\ddot{b}+\ddot{a}b\right)x8ab3\dot{a}^2+2a\ddot{a},$$ (5) $$q_r=4\sqrt{x}\dot{b}\left(a+bx\right)^2,$$ (6) where $`x=r^2`$ and the overdot indicates $`/t`$. Imposing that $`\mathrm{\Lambda }=\mathrm{\Lambda }(t)`$, the general solution to these equations has the form $$a=2\mathrm{exp}\left(𝑑tw\right)𝑑tw^2\frac{dt}{w^2},b=\mathrm{exp}\left(𝑑tw\right),$$ (7) where $`w=2/𝑑t(23\gamma )`$, provided $`\gamma 2/3`$. Inserting (7) in (4), (5) and (6) we easily compute the cosmological constant and the heat flow. Finally the FLRW metric is $$ds^2=\frac{1}{\left(1+Mr^2\right)^2}\left[d\tau ^2+R^2\left(dr^2+r^2d\mathrm{\Omega }^2\right)\right],$$ (8) where we have introduced the time coordinate $`d\tau =dt/a`$, $`M=b/a`$ and $`R=1/|a|`$. This metric is conformal to FLRW, and the conformal factor approaches unity when $`M0`$. ## 3 Constant adiabatic index When $`\gamma `$ is a constant different from $`2/3`$, the general solution of (4) and (5) becomes $$a(t)=C_1\mathrm{\Delta }t^{\frac{2}{3\gamma 2}}+C_2\mathrm{\Delta }t^{\frac{3\gamma }{3\gamma 2}}\frac{1}{3}K\mathrm{\Delta }t^{6\frac{\gamma 1}{3\gamma 2}},$$ (9) $$M(t)=K\left(C_1+C_2\mathrm{\Delta }t^1\frac{1}{3}K\mathrm{\Delta }t^2\right)^1.$$ (10) Two alternatives of asymptotically expanding universes appear depending on the map between $`t`$ and $`\tau `$. Case $`\mathrm{\Delta }t0`$ In this limit we obtain $$R(\tau )\frac{1}{C_2}\left[\frac{2C_2\left(13\gamma \right)}{23\gamma }\mathrm{\Delta }\tau \right]^{\frac{3\gamma }{2\left(3\gamma 1\right)}},$$ (11) $$\mathrm{\Lambda }(\tau )\frac{3\left(23\gamma \right)^2}{4\left(13\gamma \right)^2\mathrm{\Delta }\tau ^2},$$ (12) $$q_r(r,\tau )\frac{8KC_2}{3\gamma 2}r\left[\frac{2C_2\left(13\gamma \right)}{23\gamma }\mathrm{\Delta }\tau \right]^{\frac{9\gamma }{2\left(13\gamma \right)}},$$ (13) for $`\gamma 1/3`$. When $`1/3<\gamma <2/3`$ we have, for large cosmological time $`\tau `$, an accelerating universe that homogenizes with vanishing cosmological term and heat flow. In this stage we have a final power-law expansion era. For $`\gamma =1/3`$ we have asymptotically a de Sitter universe with finite limit cosmological term. For the remaining values of $`\gamma `$ the universe begins at a homogeneous singularity with a divergent cosmological term. When $`\gamma <1/3`$, the heat flux asymptotically vanishes near the singularity, while for $`\gamma >2/3`$ it diverges. 
Case $`\mathrm{\Delta }t\mathrm{}`$ In this limit we obtain $$R(\tau )\frac{3}{K}\left[\frac{K\left(43\gamma \right)}{3\left(23\gamma \right)}\mathrm{\Delta }\tau \right]^{\frac{6\left(1\gamma \right)}{43\gamma }},$$ (14) $$\mathrm{\Lambda }(\tau )\frac{24\left(23\gamma \right)^2}{\left(43\gamma \right)^2\mathrm{\Delta }\tau ^2},$$ (15) $$q_r(r,\tau )24\frac{\left(23\gamma \right)^2}{\left(43\gamma \right)^3}\frac{r}{\mathrm{\Delta }\tau ^3},$$ (16) for $`\gamma 2/3`$. When $`2/3\gamma 1`$ the universe homogenizes for large cosmological time with vanishing cosmological term and heat flow. When $`\gamma =1`$, the late time evolution changes to an asymptotically Minkowski stage. For $`1<\gamma <4/3`$ the universe starts homogeneously in the remote past with a vanishing scale factor, cosmological term and heat flow. For the remaining values of $`\gamma `$ the universe begins at a homogeneous singularity with a divergent cosmological term. An exact solution with explicit dependence on the asymptotic cosmological time $`\tau `$ can be found when the integration constants $`C_1`$ and $`C_2`$ vanish. In such a case the metric is $$ds^2=\frac{1}{\left(1+m\mathrm{\Delta }\tau ^{2\frac{23\gamma }{43\gamma }}r^2\right)^2}\left[d\tau ^2+\mathrm{\Delta }\tau ^{12\frac{1\gamma }{43\gamma }}\left(dr^2+r^2d\mathrm{\Omega }^2\right)\right],$$ (17) where $`m`$ is a redefinition of the old integration constant $`K`$, the adiabatic index $`\gamma `$ and $`r_0`$. The last constant was introduced by scaling the radial coordinate $`rr_0r`$. ## 4 Asymptotic evolution to a quintessence–dominated era As a first stage of towards more general scenarios with a slowly time–varying $`\gamma `$, we will explore a model that evolves towards an asymptotic FLRW regime dominated by Q–matter (i.e. the scalar field). We will show that this system approaches to the constant $`\gamma `$ solutions for large times found above. In this regime the equations of Einstein-Klein-Gordon become $$3H^2\rho _f+\frac{1}{2}\dot{\varphi }^2+V\left(\varphi \right)+\mathrm{\Lambda },$$ (18) $$\ddot{\varphi }+3H\dot{\varphi }+\frac{dV\left(\varphi \right)}{d\varphi }0,$$ (19) where $`H=\dot{R}/R`$ and a dot means $`d/d\tau `$ in this section. In last section we found that the general asymptotic solution for the scale factor $`R(\tau )\mathrm{\Delta }\tau ^\alpha `$ has the power–law behaviors (11), (14) for any value of the effective adiabatic index $`\gamma `$. Then, using these expressions and (12) and (15) together with (18) and (19), we can investigate the asymptotic limit in which the energy of the scalar field dominates over the contribution of the perfect fluid. In the regime that $`3\alpha \gamma _f>2`$ the adiabatic scalar field index can be approximated by $$\gamma _\varphi \frac{2}{3\alpha }\left[1+\left(1\frac{3\gamma _f}{2}\sigma \right)\right],$$ (20) where $`\sigma =\rho _f/\rho _\varphi 1`$. Inserting these equations in (3) we obtain the first correction to the effective adiabatic index $$\gamma \frac{2}{3}\left[1\pm \sqrt{\frac{11}{12}\left(3\gamma _f2\right)\sigma }\right]$$ (21) The negative branch of (21) yields a consistent asymptotic solution for the range $`\frac{1}{3}<\gamma <\frac{2}{3}`$. We note that this solution describes a deflationary stage with a limiting exponent $`\alpha =1`$. Oftenly power-law evolution of the scale factor is associated with logarithmic dependence of the scalar field on proper time . 
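For orientation, the expansion exponents of the two branches, $`R(\tau )\propto \mathrm{\Delta }\tau ^\alpha `$, can be tabulated directly (my own illustration of Eqs. (11) and (14); the early branch assumes $`\gamma \ne 1/3`$ and the late branch $`\gamma \ne 4/3`$):

```python
def alpha_early(g):
    # Delta t -> 0 branch, Eq. (11): R ~ (Delta tau)^(3g / (2(3g - 1)))
    return 3 * g / (2 * (3 * g - 1))

def alpha_late(g):
    # Delta t -> infinity branch, Eq. (14): R ~ (Delta tau)^(6(1 - g)/(4 - 3g))
    return 6 * (1 - g) / (4 - 3 * g)

for g in (0.40, 0.50, 0.60, 0.70, 0.90, 1.00):
    print(f"gamma = {g:.2f}: early alpha = {alpha_early(g):+.2f} "
          f"(accelerating: {alpha_early(g) > 1}), late alpha = {alpha_late(g):+.2f}")
```

For $`1/3<\gamma <2/3`$ the early branch indeed gives $`\alpha >1`$, i.e. power-law accelerated expansion, while at $`\gamma =1`$ the late branch degenerates to $`\alpha =0`$, the asymptotically Minkowski case noted above.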
Thus, assuming that $`\varphi (\tau )C\mathrm{ln}\tau `$ with the constant $`C`$ to be determined by the system of equations (18) and (19), and using these expressions together with (12) and (15) in (18) and (19) it follows that the leading term of $`V(\varphi )`$ for large $`\varphi `$ is $$V\left(\varphi \right)V_0e^{A\varphi }$$ (22) Using the dominant value of the effective adiabatic index we find $`A^2=2`$, $`V_0=2`$ and $`C=1/\sqrt{2}`$. The models considered in this section are based on the notion of “late time dominating field” (LTDF), a form of quintessence in which the field $`\varphi `$ rolls down a potential $`V(\varphi )`$ according to an attractor solution to the equations of motion. The ratio $`\sigma `$ of the background fluid to the field energy changes steadily as $`\varphi `$ proceeds down its path. This is desirable because in that way the Q-matter ultimately dominates the energy density and drives the universe toward an accelerated expansion , . ## 5 Concluding remarks We have investigated a class of solutions of the Einstein field equations with a variable cosmological term, heat flow and a fluid with variable adiabatic index that includes those of Modak, Bergmann and Maiti and contains a new exact conformally flat solution. We have found that asymptotically expanding universes occur when $`1/3<\gamma <1`$ that homogenizes for large cosmological time with vanishing cosmological term and heat flow. For $`1/3<\gamma <2/3`$ the evolution is given by (11) and corresponds to a power-law accelerated expansion for large cosmological time $`\tau `$. On the other hand, when $`2/3\gamma <1`$ even though an asymptotic negative cosmological term occurs, the universe evolves toward a decelerated expansion. The particular case $`\gamma =1/3`$ leads asymptotically to a de Sitter universe with a finite limit for $`\mathrm{\Lambda }`$. We have shown that homogeneization occurs also for a time dependent adiabatic index provided it has a constant limit, for $`t0`$ and $`t\mathrm{}`$, repectively, and is analytic about these points. We have carried out a detailed analysis of a model in which Q–matter dominates over cold dark matter. This LTDF solution is an attractor because, even for large initial inhomogeneities and a wide range of initial values for $`\varphi `$ and $`\dot{\varphi }`$, the evolution approaches a common path. It was shown that this model can be realized for a wide range of potentials provided they have an exponential tail. Our LTDF model only requires that the potential has an asymptotic exponential shape for large $`\varphi `$. So, the interesting and significant features of our model are: (a) a wide range of initial conditions are drawn towards a common evolution; (b) the LTDF maintain some finite difference in the equation of state such that the field energy eventually dominates and the universe enters a period of acceleration. ## Acknowledgments This work was partially supported by the Spanish Ministry of Education under Grant PB94-0718, and the University of Buenos Aires under Grant TX-93.
no-problem/9911/astro-ph9911025.html
ar5iv
text
# TreePM: A code for Cosmological N-Body Simulations ## 1 Introduction Observations suggest that the present universe is populated by very large structures like galaxies, clusters of galaxies etc. Current models for the formation of these structures are based on the assumption that gravitational amplification of density perturbations resulted in the formation of large scale structures. In the absence of analytical methods for computing quantities of interest, numerical simulations are the only tool available for the study of clustering in the non-linear regime. The last two decades have seen a rapid development of techniques and computing power for cosmological simulations, and the results of these simulations have provided valuable insight into the study of structure formation. The simplest N-Body method that has been used for studying clustering of large scale structure is the Particle Mesh method (PM hereafter). The genesis of this method is in the realisation that the Poisson equation is an algebraic equation in Fourier space, hence if we have a tool for switching to Fourier space and back, we can calculate the gravitational potential and the force with very little effort. It has two elegant features in that it provides periodic boundary conditions by default, and the force is softened naturally so as to ensure collisionless evolution of the particle distribution. However, softening of the force at grid scale implies that the force resolution is very poor. This limits the dynamic range over which we can trust the results of the code to between a few grid cells and about a quarter of the simulation box (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). Many efforts have been made to get around this problem, mainly in the form of P<sup>3</sup>M (Particle-Particle Particle Mesh) codes (Efstathiou et al, 1985; Couchman 1991). In these codes, the force computed by the particle mesh part of the code is supplemented by adding the short range contribution of nearby particles, to improve force resolution. The main problem with this approach is that the particle-particle summation of the short range force takes a lot of time in highly clustered situations. Another, more subtle problem is that the force computed using the PM method has anisotropies and errors in force at grid scale – these errors are still present in the force calculated by combining the PM force with short range corrections (Bouchet and Kandrup, 1985). A completely different approach to the problem of computing the force is taken by codes based on the tree method. In this approach we consider groups of particles at a large distance to be a single entity and compute the force due to the group rather than sum over individual particles. There are different ways of defining a group, but by far the most popular method is that due to Barnes and Hut (1986). Applications of this method to cosmological simulations require the inclusion of periodic boundary conditions. This has been done using Ewald’s method (Ewald, 1921; Rybicki, 1986; Hernquist, Bouchet and Suto, 1991; Springel, Yoshida and White, 2001). Ewald’s method is used to tabulate the correction to the force due to periodic boundary conditions. This correction term is stored on a grid (in relative separation of a pair of particles) and the interpolated value is added to the pairwise force.
Some attempts have been made to combine the high resolution of a tree code with the natural inclusion of periodic boundary conditions in a PM code by simply extending the P<sup>3</sup>M method and replacing the particle-particle part for the short range correction with a local tree (Xu, 1995). In this paper we present a hybrid N-Body method that attempts to combine the good features of the PM and the tree method, while avoiding the problems of the P<sup>3</sup>M and the TPM methods. Our approach is to divide the force into long and short range components using a partitioning of unity, instead of taking the PM force as given. This allows us greater control over errors, as we shall see below. The plan of the paper is as follows: §2 introduces the basic formalism of both the tree and PM codes. §2.3 gives the mathematical model for the TreePM code. We analyse errors in force for the TreePM code in §3. Computational requirements of our implementation of the TreePM code are discussed in §4. A discussion of the relative merits of the TreePM method with respect to other N-Body methods follows in §5. ## 2 The TreePM Method ### 2.1 Tree Code We use the approach followed by Barnes and Hut (1986). In this, the simulation volume is taken to be a cube. The tree structure is built out of cells and particles. Cells may contain smaller cells (subcells) within them. Subcells can have even smaller cells within them, or they can contain a particle. We start with the simulation volume and add particles to it. If two particles end up in the same subcell, the subcell is geometrically divided into smaller subcells until each subcell contains either subcells or at most one particle. The cubic simulation volume is the root cell. In three dimensions, each cubic cell is divided into eight cubic subcells. Cells, as structures, have attributes like total mass, location of centre of mass and pointers to subcells. Particles, on the other hand, have the traditional attributes like position, velocity and mass. More details can be found in the original paper (Barnes and Hut, 1986). Force on a particle is computed by adding the contributions of other particles or of cells. A cell that is sufficiently far away can be considered as a single entity and we can just add the force due to the total mass contained in the cell from its centre of mass. If the cell is not sufficiently far away then we must consider its constituents, subcells and particles. Whether a cell can be accepted as a single entity for force calculation is decided by the cell acceptance criterion (CAC). We compute the ratio of the size of the cell $`d`$ and the distance $`r`$ from the particle in question to its centre of mass and compare it with a threshold value $$\theta =\frac{d}{r}\le \theta _c$$ (1) The error in force increases with $`\theta _c`$. There are some potentially serious problems associated with using $`\theta _c\ge 1/\sqrt{3}`$; a discussion of these is given in Salmon and Warren (1994). One can also work with completely different definitions of the CAC (Salmon and Warren, 1994; Springel, Yoshida and White, 2001). Irrespective of the criterion used, the number of terms that contribute to the force on a particle is much smaller than the total number of particles, and this is where a tree code gains in terms of speed over direct summation. We will use the Barnes and Hut tree code, and we include periodic boundary conditions for computing the short range force of particles near the boundaries of the simulation cube.
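For concreteness, the construction just described can be sketched in a few dozen lines. The following Python fragment is a minimal, non-periodic illustration under our own simplifications — monopole cells only, a softened inverse-square force, and none of the optimisations discussed below — and is not the code used in this paper:

```python
# A monopole-only Barnes-Hut sketch; assumes distinct particle positions.
import numpy as np

class Cell:
    def __init__(self, center, size):
        self.center, self.size = center, size   # geometric centre, edge length
        self.mass, self.com = 0.0, np.zeros(3)  # total mass, centre of mass
        self.children = None                    # eight subcells once subdivided
        self.particle = None                    # (pos, m) while holding one particle

    def _child_for(self, pos):
        if self.children is None:               # subdivide on first need
            q = self.size / 4.0
            self.children = [Cell(self.center + q * (2 * np.array(b) - 1),
                                  self.size / 2.0) for b in np.ndindex(2, 2, 2)]
        i = (pos > self.center).astype(int)
        return self.children[4 * i[0] + 2 * i[1] + i[2]]

    def insert(self, pos, m):
        if self.mass == 0.0:                    # empty leaf: keep the particle here
            self.particle = (pos, m)
        else:
            if self.particle is not None:       # occupied leaf: push resident down
                p, pm = self.particle
                self.particle = None
                self._child_for(p).insert(p, pm)
            self._child_for(pos).insert(pos, m)
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m

def accel(cell, pos, theta_c=0.5, eps=1e-2, G=1.0):
    """Acceleration at pos; a cell is accepted as one entity if d/r <= theta_c, eqn.(1)."""
    if cell.mass == 0.0:
        return np.zeros(3)
    src, m = cell.particle if cell.particle is not None else (cell.com, cell.mass)
    dr = src - pos
    r = np.sqrt(dr @ dr + eps ** 2)             # softened separation
    if cell.children is None or cell.size / r <= theta_c:
        return G * m * dr / r ** 3              # single particle or accepted cell
    return sum(accel(c, pos, theta_c, eps, G) for c in cell.children)

rng = np.random.default_rng(1)
pts = rng.random((1000, 3))
root = Cell(np.full(3, 0.5), 1.0)
for p in pts:
    root.insert(p, 1.0 / len(pts))
print(accel(root, pts[0]))
```

The walk visits far fewer nodes than there are particles, which is the source of the speed advantage over direct summation mentioned above.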
Another change to the standard tree walk is that we do not consider cells that do not have any spatial overlap with the region within which the short range force is calculated. We also use an optimisation technique to speed up force calculation (Barnes, 1990). ### 2.2 Particle Mesh Code A PM code is the obvious choice for computing long range interactions. Much has been written about the use of these in cosmological simulations (e.g., see Hockney and Eastwood, 1988) so we will not go into details here. PM codes solve for the gravitational potential in Fourier space. These use Fast Fourier Transforms (FFT) to compute Fourier transforms, and as the FFT requires data to be defined on a regular grid, the concept of a mesh is introduced. The density field represented by particles is interpolated onto the mesh. The Poisson equation is solved in Fourier space and an inverse transform gives the potential (or force) on the grid. This is then differentiated and interpolated to the position of each particle in order to calculate the displacements. Use of a grid implies that forces are not accurate at scales smaller than the grid cells. A discussion of errors in force in a PM code can be found in Efstathiou et al (1985) and elsewhere (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). The error in force can be very large at small scales but it drops to an acceptable number beyond a few grid cells, and is negligible at large scales. We use the Cloud-in-Cell weight function for interpolation. We solve the Poisson equation using the natural kernel, $`1/k^2`$; this is called the poor man’s Poisson solver (Hockney and Eastwood, 1988). We compute the gradient of the potential in Fourier space. ### 2.3 TreePM Code We now turn to the question of combining the tree and the PM code. We wish to split the inverse square force into a long range force and a short range force. The gravitational potential can be split into two parts in Fourier space (Ewald, 1921). $`\phi _k`$ $`=`$ $`-\frac{4\pi G\varrho _k}{k^2}`$ $`=`$ $`-\frac{4\pi G\varrho _k}{k^2}\mathrm{exp}\left(-k^2r_s^2\right)-\frac{4\pi G\varrho _k}{k^2}\left(1-\mathrm{exp}\left(-k^2r_s^2\right)\right)`$ $`=`$ $`\phi _k^l+\phi _k^s`$ (2) $`\phi _k^l`$ $`=`$ $`-\frac{4\pi G\varrho _k}{k^2}\mathrm{exp}\left(-k^2r_s^2\right)`$ (3) $`\phi _k^s`$ $`=`$ $`-\frac{4\pi G\varrho _k}{k^2}\left(1-\mathrm{exp}\left(-k^2r_s^2\right)\right)`$ (4) where $`\phi ^l`$ and $`\phi ^s`$ are the long range and the short range potentials, respectively. The splitting is done at the scale $`r_s`$. $`G`$ is the gravitational coupling constant and $`\varrho `$ is density. The expression for the short range force in real space is: $$𝐟^s(𝐫)=-\frac{Gm𝐫}{r^3}\left(\mathrm{erfc}\left(\frac{r}{2r_s}\right)+\frac{r}{r_s\sqrt{\pi }}\mathrm{exp}\left(-\frac{r^2}{4r_s^2}\right)\right)$$ (5) Here, $`\mathrm{erfc}`$ is the complementary error function. These equations describe the mathematical model for force in the TreePM code. The long range potential is computed in Fourier space, just as in a PM code, but using eqn.(3) instead of eqn.(2). This potential is then used to compute the long range force. The short range force is computed directly in real space using eqn.(5). In the TreePM method this is computed using the tree approximation. The short range force falls rapidly at scales $`r\gg r_s`$, and hence we need to take this into account only in a small region around each particle.
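As a concrete illustration of this split, here is a short Python sketch (ours, not the paper's implementation; it assumes unit values $`G=m=1`$ and $`r_s=1`$) that evaluates the short range force of eqn.(5) and the Gaussian-cutoff long range kernel of eqn.(3):

```python
# Sketch of the TreePM force split; assumes G = m = 1 and r_s = 1.
import numpy as np
from scipy.special import erfc

def short_range_force(r, r_s=1.0):
    """Magnitude of the short range force of eqn.(5) between unit masses."""
    return (1.0 / r**2) * (erfc(r / (2.0 * r_s))
                           + (r / (r_s * np.sqrt(np.pi))) * np.exp(-r**2 / (4.0 * r_s**2)))

def long_range_kernel(k, r_s=1.0):
    """Gaussian cutoff applied to the 1/k^2 kernel, as in eqn.(3)."""
    return np.exp(-(k * r_s)**2) / k**2

r = np.linspace(0.1, 8.0, 80)
ratio = short_range_force(r) * r**2        # f_s / f_total, since f_total = 1/r^2
for x in (1.0, 2.0, 5.0):
    i = np.argmin(np.abs(r - x))
    print(f"r = {r[i]:3.1f} r_s : short-range fraction of total force = {ratio[i]:.4f}")
```

The printed fractions fall below one per cent near $`5r_s`$, which anticipates the choice $`r_{cut}=5r_s`$ made below.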
We have plotted the long range and the short range force (eqn.(5)) as a function of $`r`$ in fig.1 to show their dependence on scale. We have chosen $`r_s=1`$ here. The short range force closely follows the total force up to about $`2r_s`$ and then falls rapidly; its magnitude falls below $`1\%`$ of the total force by $`5r_s`$. The long range force reaches a peak around $`2r_s`$. It makes up most of the total force beyond $`3.5r_s`$. It falls with scale below $`2r_s`$, becoming negligible below $`r_s/2`$. Evaluation of special functions for calculating the short range force can be time consuming. To save time, we compute an array containing the magnitude of the short range force. The force between any two objects, particle-cell or particle-particle, is computed by linearly interpolating between the nearby array elements multiplied by the unit vector $`\widehat{𝐫}`$. It is necessary for the array to sample the force at sufficiently closely spaced values of $`r`$ in order to keep the error in interpolation small. ## 3 Error Estimation In this section we will study errors in force introduced by various components of the TreePM code. We will only list salient points here and the reader is referred to a more comprehensive study for details (Bagla and Ray, 2002). We start by estimating the error in force due to one particle. The long range force of a particle is calculated using the PM method, but using eqn.(3) instead of eqn.(2). The cutoff at high wave numbers largely removes the effect of the grid and we find that the dispersion in the long range force is very small, e.g. for $`r_s\ge 1`$ grid length the dispersion is smaller than $`1\%`$ of the total force at all scales. There is a systematic offset in the long range force that is larger than the dispersion. This offset is induced by the interpolating function, and can be corrected (White, 2000; Bagla and Ray, 2002) by de-convolving the square of the interpolating function (we need to interpolate twice). This deconvolution does not affect the dispersion in any significant manner. There are no errors in computing the short range force for one particle, hence the only source of errors is in the calculation of the long range force in this case. All the errors arise due to anisotropies in the long range force. The errors in the long range force increase as we approach small scales, but the contribution of the long range force to the total force falls sharply below $`2r_s`$ and hence the errors also drop rapidly. There is a peak in errors around $`2r_s`$–$`3r_s`$, and for $`r_s=1`$ the maximum rms error in the force of one particle is $`1\%`$ of the total force. In calculating the total force, we added the short range force to the long range force at all scales. However, this is not necessary, as beyond some scale the contribution of the small scale force to the total force drops to a negligible fraction of the total force. We will call the scale up to which we add the small scale force $`r_{cut}`$. The short range force is just below $`1\%`$ of the total force at $`r_{cut}=5r_s`$. We choose this value of $`r_{cut}`$ for the TreePM code. The other source of error is the tree approximation that we use for computing the short range force. The first correction term is due to the quadrupole moment of the particle distribution in the cell; however, the magnitude of this error is larger than in the inverse square force due to a more rapid variation in force with distance.
In the worst case, this error can be more than twice the error in the corresponding case of the inverse square force (Bagla and Ray, 2002). In more generic cases, errors due to this effect tend to cancel out and the net error is small. Apart from this effect, there is also a dispersion introduced by the tree approximation. The magnitude of this dispersion varies monotonically with $`\theta _c`$. One factor that we have to weigh in is that the execution time is small for large $`\theta _c`$ and small $`r_{cut}`$. Given these considerations, the obvious solution is to choose the smallest $`r_s`$ and the largest $`\theta _c`$ that give us a sufficiently accurate force field. It is important to estimate the errors in a realistic situation, even though we do not expect errors to add up coherently in most situations. We test errors for two distributions of particles: a homogeneous distribution and a clumpy distribution. For the homogeneous distribution, we use randomly distributed particles in a box. We use $`262144`$ particles in a $`64^3`$ box for this distribution. We compute the force using a reference setup ($`r_s=4`$, $`r_{cut}=6r_s`$, $`\theta _c=0`$) and the setup we wish to test ($`r_s=1`$, $`r_{cut}=5r_s`$, $`\theta _c=0.5`$). It can be shown that the errors in the reference setup are well below $`0.5\%`$ for the entire range of scales (Bagla and Ray, 2002). We compute the fractional error in the force acting on each particle, defined as $$ϵ=\frac{\left|𝐟-𝐟_{ref}\right|}{\left|𝐟_{ref}\right|}.$$ (6) Fig.2 shows the cumulative distribution of fractional errors. The curves show the fraction of particles with error greater than $`ϵ`$. The thick line shows this for the homogeneous distribution. The error $`ϵ`$ for $`99\%`$ of particles is less than $`3.5\%`$. Results for the clumpy distribution of particles are shown by the dashed line. We used the output of a CDM simulation (fig.3a) run with the TreePM code. Errors in this case are much smaller, as compared to the homogeneous distribution, as in the case of the tree code (Hernquist, Bouchet and Suto, 1991). The error $`ϵ`$ for $`99\%`$ of particles is around $`2\%`$, as compared to $`3.5\%`$ for the homogeneous distribution. There are two noteworthy features of this figure. One is that the error for the homogeneous distribution is higher. The main reason for this is similar to that in tree codes, though the effect is much smaller here. When we are dealing with a homogeneous distribution, the total force on each particle is very small because forces due to nearly identical mass distributions on opposite sides cancel out. This near cancellation of large numbers gives rise to errors that decrease as the net result of these cancellations grows. In a tree code, we calculate the force due to all the particles in the simulation box, whereas in the TreePM method we add up the contribution of only those within a sphere of radius $`r_{cut}`$. This is the reason why the difference between these two curves is much less pronounced than for the corresponding curves for the tree code (Hernquist, Bouchet and Suto, 1991). The other feature is that the shape of the curves for the homogeneous distribution and the clumpy distribution is different. This is because we begin to see the effect of the error due to the tree approximation in the case of the clumpy distribution. In the case of the homogeneous distribution, the distribution of particles is close to isotropic around any given particle and hence the error cancels out. This error can be controlled by reducing $`\theta _c`$.
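The error statistic of eqn.(6) is straightforward to script. The following sketch uses mock force arrays standing in for actual code output — the array shapes mirror the 262144-particle test above, but the numbers are synthetic — and computes the cumulative fraction of particles above a given $`ϵ`$, i.e. the quantity plotted in fig.2:

```python
# Sketch: cumulative distribution of the fractional force error of eqn.(6).
import numpy as np

def fraction_above(f, f_ref, eps_values):
    """Fraction of particles with |f - f_ref| / |f_ref| > eps."""
    err = np.linalg.norm(f - f_ref, axis=1) / np.linalg.norm(f_ref, axis=1)
    return [(err > e).mean() for e in eps_values]

rng = np.random.default_rng(0)
f_ref = rng.normal(size=(262144, 3))                    # reference forces (mock)
f = f_ref + rng.normal(scale=0.01, size=(262144, 3))    # test forces (mock)
for e, frac in zip((0.01, 0.02, 0.035),
                   fraction_above(f, f_ref, (0.01, 0.02, 0.035))):
    print(f"fraction of particles with error > {e:5.3f}: {frac:.3f}")
```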
We end this section with a brief comparison of the TreePM code with a PM code. We ran a simulation of the sCDM model ($`262144`$ particles, $`64`$h<sup>-1</sup>Mpc box) with a PM code (Bagla and Padmanabhan, 1997) and with the TreePM code discussed here. Fig.3 shows a slice from these simulations; fig.3a shows the simulation with the TreePM code and fig.3b shows the same for a PM code. The large scale structures are the same in the two but there are significant differences at small scales. The halos are much more compact in the TreePM simulation, and large halos show more substructure. These differences are also clear in the two point correlation function $`\overline{\xi }(r)`$ plotted in fig.4. The thick line shows the correlation from the TreePM simulation and the dashed line shows the same for the PM simulation. As expected from fig.3 and from general considerations, the correlation function in the TreePM simulation matches that from the PM simulation at large scales, but at small scales the TreePM simulation has a higher correlation function. We have checked the accuracy of evolution by checking the rate of growth of the correlation function in the linear regime and also by looking for scale invariance of the correlation function for power law models. For more details please see Bagla and Ray (2002). ## 4 Computational Resources In this section, we describe the computational resources required for the present implementation of the TreePM code. Given that we have combined the tree and the PM code, the memory requirement is obviously greater than that of either code alone. We need four arrays for the PM part: the potential and the three force components. The rest is exactly the same as in a standard Barnes and Hut tree code. With efficient memory management, we need less than $`160`$MB of RAM for a simulation with $`128^3`$ particles in a $`128^3`$ mesh for the most part. In the absence of memory management, this requirement can go up to 250MB. These are the numbers for floating point numbers; if we use double precision variables, this requirement goes up by a factor of two. Table 1 lists the time required per time step per particle for three values of the number of particles. These were run on a 533MHz Alpha workstation (EV5) compiled with the native F90 compiler, and on a $`1`$GHz Pentium III desktop and a $`1.6`$GHz P-4 compiled with the Intel F90 compiler. Column 1 lists the number of particles, and columns 2, 3 and 4 list the time per step per particle for an unclustered distribution. This number increases much more slowly than the total number of particles, as expected from the theoretical scaling of $`O(N\mathrm{ln}N)`$. Column 5 of the table gives the same number for a highly clustered particle distribution, similar in clustering strength to that shown in fig.3. Column 6 lists the time per step per particle taken by the tree code for the particle distribution used in col.4. It is clear that the TreePM code is faster than the tree code by a factor of about $`4.5`$. It is also clear that this code performs well even on inexpensive hardware. The performance of this code can be improved further by including features like individual time steps for particles. It is expected that adding individual time steps will improve the performance by a factor of two or more. ## 5 Comparison with other Methods Amongst other codes that try to augment the performance of PM codes are the P<sup>3</sup>M (Efstathiou et al, 1985; Couchman, 1991) codes and the TPM code (Xu, 1995). The following subsections compare TreePM with these codes.
### 5.1 P<sup>3</sup>M and AP<sup>3</sup>M There are two main differences between P<sup>3</sup>M codes (Efstathiou et al, 1985; Couchman, 1991) and the TreePM code presented here. One is that most P<sup>3</sup>M codes use the natural cutoff provided by the grid for the long range force, i.e. they take the PM force to be the long range force. Hence errors in the PM force are present in the P<sup>3</sup>M force. In contrast, the TreePM code uses an explicit cutoff that allows us to limit errors near the grid scale. The second difference is in terms of the time taken for adding the short range correction as a function of clustering. In both instances, the short range force is added for particles within a fixed radius $`r_{cut}`$. This process is of order $`O(Nnr_{cut}^3(1+\overline{\xi }(r_{cut})))`$ for the P<sup>3</sup>M method, where $`N`$ is the number of particles in the simulation, $`n`$ is the number density of particles and $`\overline{\xi }(r_{cut})`$ is the average number of excess particles around a particle, where the excess is measured relative to a homogeneous distribution of particles with the same number density. At early times this reduces to $`O(Nnr_{cut}^3)`$, but at late times, when the density field has become highly non-linear ($`\overline{\xi }(r_{cut})\gg 1`$), it becomes $`O(Nnr_{cut}^3\overline{\xi }(r_{cut}))`$. As the density field becomes more and more clumpy, the number of operations required for computing the short range force increases rapidly. This is to be compared with the number of operations required for adding the short range correction in the TreePM code: $`O(N\mathrm{log}(nr_{cut}^3(1+\overline{\xi }(r_{cut}))))`$. The linear and the non-linear limits of this expression are $`O(N\mathrm{log}(nr_{cut}^3))`$ and $`O(N\mathrm{log}(nr_{cut}^3\overline{\xi }(r_{cut})))`$, respectively. Thus the variation in the number of operations with increasing clustering is much smaller for the TreePM code than for a P<sup>3</sup>M code. The problem is not as severe as outlined above for the Adaptive P<sup>3</sup>M code (Couchman, 1991), but it still persists. Therefore the TreePM code has a clear advantage over the P<sup>3</sup>M and AP<sup>3</sup>M codes for simulations of models where $`\overline{\xi }(r_{cut})`$ is very large. In turn, P<sup>3</sup>M codes have one significant advantage over TreePM: they require much less memory. This gives P<sup>3</sup>M codes an advantage on small machines and for simulations of models where $`\overline{\xi }(r_{cut})`$ is not much larger than unity. ### 5.2 TPM Before we go into the differences between the TreePM and TPM methods, we would like to summarise the TPM method (Xu, 1995) here. The TPM method is an extension of the P<sup>3</sup>M method in that the PM force is taken to be the long range force and a short range force is added to it. The tree method is used for adding the short range correction instead of the particle-particle method. There are some further differences, e.g. the correction is added only for particles in high density regions, implying that the resolution is non-uniform. At each time step, high density regions are identified and a local tree is constructed in each of these regions for computing the short range correction. Thus, there are two clear differences between the TreePM and the TPM method: * The TPM code uses the usual PM force to describe the long range component. In contrast, the TreePM code uses an explicit cutoff ($`r_s`$).
* TreePM treats all the particles on an equal footing: we compute the short range (eqn.(5)) and the long range force for each particle. In the TPM code, the short range force is computed only for particles in the high density regions. ## 6 Discussion The preceding sections show that we have developed a new method for doing cosmological N-Body simulations with a clean mathematical model. The model splits the force into long and short range forces using a parameter $`r_s`$. By choosing this parameter judiciously, in conjunction with two other parameters that arise in the implementation of this model ($`r_{cut}`$ and $`\theta _c`$), we can obtain a configuration that matches our requirements for the error budget. It is possible to devise a more complex scheme for splitting the force into two parts, but the one we have chosen seems to be the optimal scheme from the point of view of errors in force calculation as well as CPU time (Bagla and Ray, 2002). Apart from improving control over errors, the TreePM code also leads to a significant gain in speed over the traditional tree code. The TreePM code is also amenable to parallelisation along the lines of Dubinski (1996), and is likely to scale well because the communication overhead is much more limited. Work in this direction is in progress and will be reported elsewhere (Bagla, 2002). ## Acknowledgement I would like to thank Rupert Croft, Lars Hernquist, Suryadeep Ray, Volker Springel and Martin White for insightful comments and discussions. Part of the work reported in this paper was done while the author was at the Harvard-Smithsonian Center for Astrophysics.
no-problem/9911/astro-ph9911518.html
ar5iv
text
# REFERENCES Comment on “The Bright Side of Dark Matter” In a significant recent paper A. Edery undertakes a new study of light deflection in generalizations of general relativity (GR). He claims to prove that any metric-based gravitational theory that proposes to explain the flat rotation curves of disk galaxies without postulating dark matter halos must conflict with observations of gravitational lensing by galaxies and clusters of galaxies, because any such theory inevitably makes a negative contribution to light deflection. Here we show that some of the basic steps of Edery’s argument are invalid, and no such general result obtains. Edery considers the metric of a spherical mass $`M`$ to be given by $`ds^2=B(M,r)dt^2-A(M,r)dr^2-r^2d\mathrm{\Omega }^2`$, with which he calculates the deflection of a light ray approaching from infinity and then receding to infinity. Agreement with the solar-system tests of general relativity requires that $`AB\approx 1`$ to high accuracy in the vicinity of the sun. Unjustifiably, Edery extends this requirement to galactic scale, where in conjunction with the requirement of flat rotation curves it leads to suppressed light deflection. (It is easy to see that without the constraint $`AB=1`$, flat rotation curves and enhanced light bending are consistent.) But solar–system tests do not constrain the form of $`A`$ or $`B`$ on galactic scale (except in the context of a specific gravitation theory). Consider, for example, a modified-gravity theory with a mass scale, $`M_0`$, below which it merges with GR \[i.e. $`A^{-1}(M,r)=B(M,r)=1-2GM/r`$ for $`M<M_0`$\]. If $`M_0`$ is between a solar mass and galactic masses, all solar-system results agree with those of GR, while the form of $`A`$ or $`B`$ on galactic scale depends on the exact theory. Similarly, if the departure from GR occurs only above a certain length scale, intermediate between interstellar and galactic scale, $`AB=1`$ in the solar–system; there is clearly no anomalous contribution to light bending within the solar system, while again nothing can be generically deduced about $`A`$ or $`B`$ on galactic scale. In this connection we note another deficiency in Edery’s arguments: he calculates the deflection angle accumulated along the ray’s path all the way from infinity, and much of the undesirable negative contribution comes from the asymptotic region. However, Edery’s assumed form of the metric is only valid out to limited radii: for galaxies only to a few megaparsecs, where the growth of the gravitational potential saturates as the galaxy’s field merges with the cosmological one; and near the sun only to a tenth of a parsec, where the mean field of the galaxy takes over. Alternative gravity theories are generically nonlinear, so one cannot consider the contribution of the sun separately from its galactic environment. Edery has also failed to realize that in solar light–deflection experiments, only the difference of deflection angles for two light paths, one grazing the sun and one passing about one earth–sun distance away, is actually measured. In such a difference the contribution from large distances, so crucial to his point about negative light deflection, tends to cancel out. Therefore, it is easy to devise phenomenologically valid theories in which solar–system and galactic predictions are unconnected.
Edery supposes such a connection to be unavoidable because he fails to realize that (i) the field of a totally isolated mass is phenomenologically relevant only up to a limited distance, and (ii) the sun and galaxies are sufficiently different in mass, size, etc., so as to permit theories that describe the two cases with totally different metric coefficients. This occurs, for example, in theories in which, in the spirit of MOND, the departure from standard gravitation sets in only below a certain acceleration scale (which is of the order of the sun’s acceleration at a fraction of the interstellar distance). If an alternative theory exhibits $`AB\ne 1`$ on galactic scale, no contradiction between flat rotation curves and enhanced light–bending need appear. Both desired features coexist in Sanders’ stratified theory, which also predicts the PPN parameters for solar system tests in the measured ranges. We also note that the dark-matter plus GR standard doctrine eludes Edery’s argument for suppressed deflection by having $`AB\ne 1`$ around galaxies (since the matter density even outside the visible galaxy does not vanish), but $`AB\approx 1`$ near the sun where no dark matter is needed. But Edery does not tell us why we cannot have a modified–gravity theory that gives for the metric of the visible matter in the whole universe exactly what GR gives with dark matter? Indeed, his statements are not really about theories, but about metrics coming from unspecified equations. By claiming that he only assumes a metric theory (one that obeys the equivalence principle), he is driven to the conclusion that there is no metric for the world that gives at the same time flat rotation curves, enhanced light bending by galaxies, and consistency with the solar–system tests. This would seem to exclude the metric calculated from GR with dark matter - a symptom of the untenability of Edery’s sweeping claim. We acknowledge support of the Israel Science Foundation to JDB and MM and of MINERVA to MM. J. D. Bekenstein Racah Institute of Physics Givat Ram, Jerusalem 91904 Israel M. Milgrom Weizmann Institute of Science Rehovot, 76100 Israel R. H. Sanders Kapteyn Astronomical Institute Groningen, the Netherlands
no-problem/9911/hep-th9911250.html
ar5iv
text
# Coupling of vector fields at high energies ## 1 Introduction Classical field theory describes the two long-ranging fields of electromagnetism and gravitation. Some investigations were made to find a common formalism for the unification of these fields and interactions , and even to find a connection between gravitation and quantum theory , but no convincing conception was found . (For a survey see also ). Extended theories of gravitation based on supersymmetry predict new particles. These quantum theories of gravitation contain not only the spin-2 graviton, but also include spin-0 and spin-1 gravitons (graviscalar and graviphoton) as well as the spin-3/2 gravitino for the description of gravitational interactions . Composite models of particles provide an alternative approach for gravitational interactions. The interaction particles (bosons) are described as composite states of the fermions . Thus the spin-2 graviton can be considered as a bound state of two spin-1 gravitons . This bound state should result from the coupling of the field to itself and should also generate the self-interactions, which are described by the nonlinear equations of general relativity. This non-linearity results from the fact that the spin-2 graviton couples to everything and thereby also to itself. Within the covariant formalism of general relativity, the gravitational field can be described as a pure vector field in an approximation for weak fields, where self-interactions must be neglected. In the case of a non-approximate description of vector fields the self-interactions must be removed completely. Thus, unlike the spin-2 graviton, the spin-1 graviton, like the photon, is not a source of itself, so that no self-interaction results. This means that the spin-1 graviton couples to everything but not to itself, which requires a certain symmetry for the gravitational vector field. This is also shown by linear field equations and the appearance of local conservation laws. Therefore the gravitational vector field has the property of a source-free field and cannot occur in a spherically symmetric configuration. The coupling to other (non-gravitational) interactions, with the breaking of the spherical symmetry, should generate spin-1 gravitons. Therefore, it should be possible to discuss the spin-1 graviton (graviphoton) and the classical vector field of gravitation together with the photon and the classical electromagnetic field within a common formalism of the linear Maxwell equations. Although the spin-2 gravitons cannot be described within the linear formalism , this represents a further step towards the unification of electromagnetic and gravitational interactions. The gravitational fields are described by nonlinear equations in general relativity. Their nonlinear structure is caused by the self-interaction. If we assume that mass and all forms of energy are gravitational charges, only one kind of gravitational charge can exist. So there is no mechanism to generate dipole fields as in electrodynamics, where two kinds of charge are possible. We conclude that gravitational dipole fields can exist, but they have to be source-free (carrying no kind of charge like mass or electric charge). Linear field equations like the homogeneous Maxwell equations describe exactly those source-free gravitational fields. From a counterpart view between sources and fields (p. 368 E. of ) the priority of the fields should preferably be considered. We can have fields without any source but no source without any field.
The source results from a spherical configuration of the field. Thus, charge or mass and also all forms of energy as sources are “built” from fields, and the conservation of the sources is the consequence of this construction. Based on these assumptions we present two applications, where secondary fields can be generated from primary fields. The source-freedom of these secondary fields is an important condition, which prohibits a spherical configuration. Thus, spherically symmetric solutions for the secondary fields (like monopoles) are excluded. In consideration of source-freedom we show that this scalar coupling factor depends on the value of the primary field energy. At the point of maximum coupling strength, the strengths of the interactions caused by the primary and the secondary fields must be equal. So the secondary fields can only have energy less than or equal to that of the primary fields. Thus, we have to calculate the values of field energy for the point of maximum coupling strength. This paper is organized as follows: In section 2 we see that the consideration of source-free gravitational fields yields a description by linear field equations. In sec.3 we show that the condition of source-freedom leads to scalar coupling coefficients, which depend on the square of the field energy of the primary field. In sec.4 we present the couplings between primary and secondary fields within two applications. The interactions of primary and secondary fields are discussed in sec.5. We determine the value of the maximum coupling strength and the corresponding values of field energy in sec.6. In sec.7 we discuss the breaking of the discrete symmetries of parity, time reversal and charge conjugation. Section 8 contains the conclusions. ## 2 Source-free gravitational fields The mathematical description of gravitational fields is given by Einstein’s field equations, which contain two contracted forms of the Riemann-Christoffel curvature tensor, the Ricci tensor and the curvature scalar. Gravitational fields can be compensated locally by the transition to an accelerated coordinate system, but the global field remains, because the Christoffel symbols are transformed in a non-linear way. The local compensation of the gravitational fields is the result of the equivalence principle. For weak fields we can use a linear approximation by neglecting the self-interaction. Some components of the Christoffel symbols vanish and the covariant derivatives are reduced to partial ones, so the remaining equations receive a linear structure like the Maxwell equations in electrodynamics. Further, we can derive the Newtonian approximation and equations which describe gravitomagnetic fields (p. 197-205 of ) as a result of the Thirring-Lense effect . These linear equations are valid only as an approximation for weak gravitational fields and constant velocities, but not for accelerated coordinate systems. To acquire attractive forces among the same kind of charge, we have to set negative signs at the sources in the linear equations of the vector fields. Therefore, we obtain negative densities of energy, and the transition to accelerated coordinate systems becomes impossible. This difficulty with negative signs only occurs for a vector theory of gravitation . The transition to the covariant formalism also has its difficulties, especially concerning the localization of the gravitational field energy (p. 139 eq.(21.6) of and p. 466-468 of ).
For a vector theory of gravitation we can avoid these difficulties only by homogeneous field equations, describing fields without any sources. In this case the linear field equations are exactly valid, not only approximately. The remaining specific components $`\widehat{\mathrm{\Gamma }}`$ describing the gravitational vector field are transformed by linear transformations like a tensor, because the second term vanishes: $$\widehat{\mathrm{\Gamma }}_{\mu \nu }^\lambda =\frac{\partial x^\lambda }{\partial x^{\prime \rho }}\frac{\partial x^{\prime \sigma }}{\partial x^\nu }\frac{\partial x^{\prime \tau }}{\partial x^\mu }\widehat{\mathrm{\Gamma }}_{\tau \sigma }^{\prime \rho }$$ (1) The fact that these components representing the gravitational vector field obey linear transformations causes the vanishing of the following term: $$\widehat{\mathrm{\Gamma }}_{\mu \nu }^\sigma \widehat{\mathrm{\Gamma }}_{\sigma \lambda }^\rho -\widehat{\mathrm{\Gamma }}_{\mu \lambda }^\sigma \widehat{\mathrm{\Gamma }}_{\sigma \nu }^\rho =0$$ (2) Thus, the Riemann-Christoffel curvature tensor of these components is reduced to the linear form: $$\widehat{R}_{\mu \lambda \nu }^\rho =\frac{\partial \widehat{\mathrm{\Gamma }}_{\mu \nu }^\rho }{\partial x^\lambda }-\frac{\partial \widehat{\mathrm{\Gamma }}_{\mu \lambda }^\rho }{\partial x^\nu }$$ (3) Here, the vanishing Riemann-Christoffel curvature tensor is not only a criterion for accelerated coordinate systems, but also characterizes the homogeneous part of a gravitational field. The possibility of different signs of the linear curvature tensor implies that the contraction of this curvature tensor has to vanish: $$\widehat{R}_{\mu \nu }=\frac{\partial \widehat{\mathrm{\Gamma }}_{\mu \nu }^\rho }{\partial x^\rho }=\frac{1}{2}\Box \widehat{g}_{\mu \nu }=0$$ (4) From the linear field equations follows the additional constraint: $$\widehat{\mathrm{\Gamma }}_{\rho \nu }^\rho =\frac{1}{\sqrt{-\widehat{g}}}\frac{\partial \sqrt{-\widehat{g}}}{\partial x^\nu }=0$$ (5) Therefore, the determinant of the specific metric tensor remains unchanged: $$\widehat{g}=det\widehat{g}_{\mu \nu }=-1$$ (6) Thus, we receive as contributions to the metric tensor $`\widehat{g}_{\mu \nu }`$ only diagonal elements, and by the source-freedom constraint we always obtain two invariant remaining spatial components: $$\widehat{g}_{\mu \nu }=diag\left(\left(1-\frac{2U}{c^2}\right),-\left(\frac{1}{1-\frac{2U}{c^2}}\right),-1,-1\right)$$ (7) $$U=\left|\stackrel{}{g}\cdot \stackrel{}{x}\right|=\frac{c^2}{2}\left|h_{00}\right|$$ (8) From the specific form of $`\widehat{g}_{\mu \nu }`$ results no contribution to the Christoffel symbols. These specific properties of the gravitational vector field allow the expression of the specific Christoffel symbols by an antisymmetric field strength tensor and a vector potential, similar to electrodynamics. By the transition from covariant derivatives to partial ones the connections between field strength tensors and vector potentials are the same (p. 106 eq.(4.7.2), and p. 108 eq.(4.7.11) of ). (Here $`g_{\mu \rho }`$ stands for any metric tensor.)
With the four-velocity $`u^\lambda =\widehat{\gamma }(c,\stackrel{}{v})`$ and $`u_\lambda u^\lambda =c^2`$, we receive the following relations for the vector potential and the field strength components: $$h_{0\mu }=\frac{2}{c}A_{[g]\mu }$$ (9) $$g_{\mu \rho }\widehat{\mathrm{\Gamma }}_{\nu \lambda }^\rho \frac{u^\lambda }{\widehat{\gamma }}=\widehat{\mathrm{\Gamma }}_{\mu ,\nu \lambda }\frac{u^\lambda }{\widehat{\gamma }}=F_{[g]\mu \nu }=\frac{\partial A_{[g]\nu }}{\partial x^\mu }-\frac{\partial A_{[g]\mu }}{\partial x^\nu }$$ (10) $$\widehat{\gamma }=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$ (11) Considering these vector fields described by linear field equations, we arrive at two results within a common formalism. In both cases no self-interaction remains and the equations have an exact linear structure for all applications: -The field is not a gravitational field; it can be described by the inhomogeneous Maxwell equations. In this case two kinds of charge might exist and we receive the equations of electrodynamics. -The field is a gravitational field; therefore it must be source-free and we obtain, with the Lorentz gauge: $$\frac{\partial F_{[g]}^{\mu \nu }}{\partial x^\mu }=\Box A_{[g]}^\nu =\Box h^{0\nu }=0$$ (12) We obtain for the metrical representation by the invariant line element: $$ds^2=\left(1-\frac{2U}{c^2}\right)c^2dt^2-\left(\frac{1}{1-\frac{2U}{c^2}}\right)dx^2-dy^2-dz^2+2h_{0\mu }dx^\mu cdt$$ (13) Unlike in general relativity, we obtain a local conservation of energy and momentum of the source-free gravitational fields, because energy and momentum can be localized and these fields are transformed in a linear way. The gravitational vector field couples to the current density of energy and momentum: $$j_{[m]}^\nu =\frac{\widehat{\gamma }}{c^2}u_\mu T^{\mu \nu }$$ (14) For the interaction of the vector field we obtain by the local conservation law: $$\frac{\partial T^{\mu \nu }}{\partial x^\mu }=-\widehat{\mathrm{\Gamma }}_{\mu \lambda }^\nu T^{\mu \lambda }=F_{[g]}^{\nu \lambda }j_{[m]\lambda }$$ (15) The following effects occur with the source-free gravitational fields as well as with the gravitational fields in general relativity: -The gravitational red- and blueshift of spectral lines. -The deflection of light. -The time delay of signals and scale contraction. -The precession of a gyroscope. Thus, these source-free gravitational fields are able to exert forces on all masses and forms of energy but cannot be generated by themselves. Especially in the case of dipole fields we see that the conservation of the centre of gravity also requires a vanishing gravitational dipole moment (p. 975 of ). Therefore, gravitational dipole radiation (spin-1 graviton or graviphoton) cannot have sources from mass or energy. ## 3 The source-free generation of fields In all discussed cases the electric or magnetic fields are the only components of the primary field $`[e]`$, while $`[s]`$ stands for any secondary field. We set for the field strength tensors: $$F_{[s]}^{\mu \nu }=F_{[e]}^{\mu \nu }K$$ (16) with: $`F_{[e]}^{\mu \nu }=`$ primary field, $`F_{[s]}^{\mu \nu }=`$ secondary field, $`K=`$ scalar coupling coefficient.
By considering the source-freedom of the secondary field we obtain: $$\frac{\partial F_{[s]}^{\mu \nu }}{\partial x^\mu }=\frac{\partial F_{[e]}^{\mu \nu }}{\partial x^\mu }K+F_{[e]}^{\mu \nu }\frac{\partial K}{\partial x^\mu }=0$$ (17) For the vector potentials we receive the following conditions: $$A_{[s]}^\nu =A_{[e]}^\nu K,\phantom{\rule{2em}{0ex}}A_{[e]}^\nu \frac{\partial K}{\partial x_\mu }-A_{[e]}^\mu \frac{\partial K}{\partial x_\nu }=0$$ (18) and we obtain with the Lorentz condition for $`A_{[e]}^\nu `$ and $`A_{[s]}^\nu `$ the following terms: $$\Box A_{[s]}^\nu =\left(\Box A_{[e]}^\nu \right)K+A_{[e]}^\nu \Box K=0$$ (19) $$A_{[e]}^\nu \frac{\partial K}{\partial x^\nu }=0$$ (20) For the symmetric energy-momentum tensor: $$T_{[e]}^{\mu \nu }=\frac{1}{\mu _0}\left(g^{\mu \rho }F_{[e]\rho \lambda }F_{[e]}^{\lambda \nu }+\frac{1}{4}g^{\mu \nu }F_{[e]\rho \lambda }F_{[e]}^{\rho \lambda }\right)$$ (21) we receive for the couplings: $$T_{[s]}^{\mu \nu }=T_{[e]}^{\mu \nu }K^2$$ (22) The divergence-free energy-momentum tensor results from the source-freedom of the secondary field: $$\frac{\partial T_{[s]}^{\mu \nu }}{\partial x^\mu }=K\frac{\partial T_{[e]}^{\mu \nu }}{\partial x^\mu }+2T_{[e]}^{\mu \nu }\frac{\partial K}{\partial x^\mu }=0$$ (23) With the electric current density follows: $$\mu _0j_{[e]}^\nu K+F_{[e]}^{\mu \nu }\frac{\partial K}{\partial x^\mu }=0$$ (24) $$F_{[e]}^{\nu \lambda }j_{[e]\lambda }+T_{[e]}^{\mu \nu }\frac{2}{K}\frac{\partial K}{\partial x^\mu }=0$$ (25) We can also obtain the last two equations by using the covariant divergence, because the covariant derivatives and partial derivatives are the same for scalar coupling coefficients. The electric current density is calculated from the covariant divergence of the electromagnetic field strength tensor. For a stationary consideration we look at the spatial components, which characterize the local behaviour of the scalar coupling factor related to the local variation of the field strength and the energy density: $$(\stackrel{}{\nabla }K)\cdot \stackrel{}{E}=K\varrho _{[e]}$$ (26) $$\left(\stackrel{}{\nabla }K\right)\times \stackrel{}{B}=K\stackrel{}{j}_{[e]}$$ (27) $$2\stackrel{}{\nabla }KE^2=K\varrho _{[e]}\stackrel{}{E}$$ (28) $$2\stackrel{}{\nabla }KB^2=K\left(\stackrel{}{j}_{[e]}\times \stackrel{}{B}\right)$$ (29) Here we can see that in the inhomogeneous part of the field (near the charge and current densities) the coupling coefficient decreases, which means a local decoupling of the fields. However, the coupling coefficient reaches a local maximum in the homogeneous part of the field. Hitherto, only the local variation was considered. However, this gives no information about the absolute value of the coupling strength with respect to the whole field volume. For calculating the general variation of the coupling strength, we integrate over the field volume: $$\int _V\left(\frac{2}{K}T_{[e]}^{\mu \nu }\frac{\partial K}{\partial x^\mu }\right)𝑑V=-\int _V\left(\frac{\partial T_{[e]}^{\mu \nu }}{\partial x^\mu }\right)𝑑V=0$$ (30) We can see after the volume integration that all local variations of the coupling coefficient as well as of the energy density of the field vanish if the field energy is conserved. $$\int _V\left(\frac{\partial T_{[e]}^{\mu \nu }}{\partial x^\mu }\right)𝑑V=\oint _\sigma T_{[e]}^{\mu \nu }𝑑\sigma _\mu $$ (31) The right-hand side of the previous equation shows the equivalent integration over the surrounding surface of the volume: $`\sigma _\mu =\left(\partial V/\partial x^\mu \right)`$. This shows that the general variation of the coupling strength must have a dependence on the field energy.
The general variation $`\partial K/\partial x^\mu `$ of the coupling strength can be expressed by the energy of the primary field, $`W_{[e]}^{\mu \nu }=\int _VT_{[e]}^{\mu \nu }𝑑V`$: $$\frac{\partial W_{[e]}^{\mu \nu }}{\partial x^\mu }=\frac{2}{K}W_{[e]}^{\mu \nu }\frac{\partial K}{\partial x^\mu }$$ (32) It is seen from this equation that an increasing field energy provides an increasing coupling strength. This equation describes only the relation between the coupling strength $`K`$ and the primary field energy $`W_{[e]}^{\mu \nu }`$ without any interactions of sources like mass or charge. These interactions are discussed in section 5. We have to include that our scalar coupling coefficient $`K`$ must be a Lorentz scalar and is therefore only dependent on Lorentz invariant values. We can obtain for the fields a similar relativistic energy-momentum equation, from which a Lorentz scalar is achieved. This results from the product of the covariant tensor $`T^{\mu \nu }`$ with its contravariant counterpart (p. 609 of ). However, in general the volume integration $`W_{[e]}^{\mu \nu }=\int _VT_{[e]}^{\mu \nu }𝑑V`$ cancels the Lorentz invariance of the resulting scalar. To restore the Lorentz invariance the volume integration must be restricted to the rest frame of the field. By the following relation: $$P^\mu P_\mu c^2=W_f^2-p^2c^2=m_0^2c^4=\left(\frac{1}{2}\right)W_{[e]}^{\mu \nu }W_{[e]\mu \nu }$$ (33) with the corresponding field values: $$P^0=\frac{W_f}{c}=\int _V\frac{1}{2\mu _0c}\left(\frac{E^2}{c^2}+B^2\right)𝑑V$$ (34) $$P^i=\stackrel{}{p}=\int _V\frac{1}{\mu _0c^2}\left(\stackrel{}{E}\times \stackrel{}{B}\right)𝑑V$$ (35) the Lorentz scalar of the field follows as a rest mass $`m_0`$, which is not a fixed value but depends only on the field strength and the volume of the field within the rest frame. The possibility of the definition of a field rest frame is connected with the timelike property of the 4-vector $`P^\mu `$, which is expressed by the following condition: $$P^\mu P_\mu \ge 0$$ (36) For fields with spherical symmetry this condition is not strictly upheld. Also, any influence of the secondary fields in a spherical configuration is cancelled, which prohibits a determination of a Lorentz invariant coupling coefficient $`K`$. The homogeneous part of the field can be considered as a field rest frame by the fact that there the coupling coefficient $`K`$ reaches its local maximum. For the rest frame of the field, expressed by the rest mass $`\left(m_0\ne 0\right)`$, we can have two different cases for the primary field: Case A (a field of stationary charge distributions): We only have a field of stationary charges; therefore, only a pure electric field exists. So the rest mass of the primary field is represented by the electric field only. $$m_0(E,V)=\frac{1}{c^2}\int _V\frac{1}{2\mu _0}\left(\frac{E^2}{c^2}\right)𝑑V$$ (37) Case B (a field of stationary current distributions): We only have a field of constant currents; therefore, only a pure magnetic field exists. The rest mass of the primary field is represented by the magnetic field only. $$m_0(B,V)=\frac{1}{c^2}\int _V\frac{1}{2\mu _0}\left(B^2\right)𝑑V$$ (38) Only a pure electric or a pure magnetic field can be considered as a primary field for the generation of a coupled secondary field.
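To put numbers to the two cases, the following sketch (SI units via scipy.constants; the uniform field filling a given volume is our illustrative assumption, not a configuration from the paper) evaluates the rest masses of eqns.(37) and (38):

```python
# Sketch of the field rest masses of eqns.(37) and (38) for uniform fields.
import scipy.constants as const

def rest_mass_electric(E, V):
    """Case A, eqn.(37): m0 = eps0 E^2 V / (2 c^2) for a uniform E field (V/m, m^3)."""
    return const.epsilon_0 * E**2 * V / (2.0 * const.c**2)

def rest_mass_magnetic(B, V):
    """Case B, eqn.(38): m0 = B^2 V / (2 mu0 c^2) for a uniform B field (T, m^3)."""
    return B**2 * V / (2.0 * const.mu_0 * const.c**2)

print(rest_mass_magnetic(1.0, 1.0), "kg")    # 1 T in 1 m^3: ~4.4e-12 kg
print(rest_mass_electric(1.0e6, 1.0), "kg")  # 1 MV/m in 1 m^3: ~4.9e-17 kg
```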
So the value of the scalar coupling coefficient depends on the value of the Lorentz invariant square of the specific field energy of the electric or the magnetic field, but never on both of them within the same case. We see that these applications of couplings are not possible in a direct way for radiation fields, because real photons have no rest mass: $$P^\mu P_\mu =\hbar ^2\left(\frac{\omega ^2}{c^2}-k^2\right)=0$$ (39) The condition $`P^\mu P_\mu \ge 0`$ is only valid for virtual particles like electric (Coulomb) photons and magnetic photons, which are off-mass-shell photons. Thus, any conceivable coupling between electromagnetic dipole radiation and gravitational dipole radiation is generated by the oscillating primary and secondary fields themselves and cannot be generated in the direct way as in the case between primary and secondary fields. We receive for the Lorentz invariant terms of the field: $$2\left(W_{[e]}^{\mu \nu }W_{[e]\mu \nu }\right)\frac{\partial K}{\partial x^\mu }=\frac{1}{2}K\frac{\partial \left(W_{[e]}^{\mu \nu }W_{[e]\mu \nu }\right)}{\partial x^\mu }$$ (40) For this differential equation we need a special nontrivial solution with the property $`dK\ll K`$ and the boundary conditions $`K(W_{[e]}=0)=0`$ and $`K(W_{[e]}=W_0)=K_0`$. Hence, no special derivation variable is necessary: $$4(W_{[e]}^2)dK=Kd(W_{[e]}^2)$$ (41) We integrate after separation: $$\int _{K_0}^{K(W)}\frac{dK}{K}=\frac{1}{4}\left(\frac{W_{[e]}^2-W_0^2}{W_{[e]}^2}\right)$$ (42) and arrive at the final result: $$K(W_{[e]})=K_0\mathrm{exp}\left[\frac{1}{4}\left(1-\frac{W_0^2}{W_{[e]}^2}\right)\right]$$ (43) ## 4 The applications of the couplings ### 4.1 Application 1 The coupling of the source-free gravitational field with the electromagnetic field. We set for the field strength tensors: $$F_{[g]}^{\mu \nu }=F_{[e]}^{\mu \nu }K_{[e]}$$ (44) written in components of the fields: $$\stackrel{}{E}K_{[e]}=\stackrel{}{g}$$ (45) $$\stackrel{}{B}K_{[e]}=\stackrel{}{\mathrm{\Omega }}$$ (46) Here the coupling coefficient must have the dimension of a specific charge: $`K_{[e]}=\left[\left|\frac{e}{m}\right|\right]`$ ### 4.2 Application 2 The coupling of: -a magnetic field with an electric field; -an electric field with a magnetic field. We set for the field strength tensors: $$F_{[b]}^{\mu \nu }=F_{[e]}^{\mu \nu }\frac{K_{[v]}}{c}$$ (47) written in components of the fields: $$\stackrel{}{E}\frac{K_{[v]}}{c^2}=\stackrel{}{B}$$ (48) $$\stackrel{}{B}K_{[v]}=\stackrel{}{E}$$ (49) Here the coupling coefficient must have the dimension of a velocity: $`K_{[v]}=\left[\left|\stackrel{}{v}\right|\right]`$ Together with the first application we receive: $$\left(\stackrel{}{E}\frac{K_{[v]}}{c^2}\right)K_{[e]}=\stackrel{}{\mathrm{\Omega }}$$ (50) $$\left(\stackrel{}{B}K_{[v]}\right)K_{[e]}=\stackrel{}{g}$$ (51) ### 4.3 The directions of the primary and secondary field components For the linear coupling of the field components we can have two directions: either the parallel or the antiparallel direction. From this fact it follows that the interaction forces of the primary and secondary field can be either added or subtracted, depending on the sign of the charges. To find the right direction (both directions within the same case are impossible!), we consider the qualities of the electrically charged particles concerning the interactions within the coupled fields.
For the electro-gravitational coupling we find that in the case of the coupling along the antiparallel direction, the negatively charged particles show additive forces and positively charged particles show subtractive forces. To transfer momentum and energy from the primary to the secondary field, the forces generated by the two coupled fields must have the same direction. This is fulfilled in this case by the negatively charged particles. We observe this fact in most materials: the momentum and energy of the electromagnetic current is indeed mostly carried by negatively charged particles, especially by the electrons. Thus, the consideration of the coupling along the antiparallel direction is justified here. For the coupling between electric and magnetic fields these considerations are more difficult; no transfer like in the aforementioned approach occurs in a direct way, because there is no magnetic charge. In the coupled electric and magnetic fields the electrically charged particles move along the direction of the electric field and their magnetic moments are oriented along the direction of the magnetic field. A determined direction of the coupling between these two fields causes a breaking of the symmetry between left-handed and right-handed particles. An example of a breaking of the left-right symmetry occurs within the weak interaction, which prefers the left-handed particles. According to this preference of the left-handed particles, the consideration of the coupling along the antiparallel direction is justified. ### 4.4 The coupling strength of both applications The equations of the coupling coefficients have the same structure for both applications. The only difference lies in the coupling coefficients and the specific energy values: $$K_{[e]}=K_q\mathrm{exp}\left[\frac{1}{4}\left(1-\frac{W_{eg}^2}{W_{[e]}^2}\right)\right]$$ (52) $$K_{[v]}=K_c\mathrm{exp}\left[\frac{1}{4}\left(1-\frac{W_{eb}^2}{W_{[e]}^2}\right)\right]$$ (53) The strength of the maximum coupling remains constant if the field energy exceeds the values of $`W_{eg}`$ or $`W_{eb}`$. The upper boundary of the coupling coefficients $`K_q,K_c`$ as well as the threshold values of the field energies $`W_{eg},W_{eb}`$ will be calculated in section 6. ## 5 The interactions of the fields The source-free secondary fields are characterized as free fields. Therefore, the interaction terms cannot be calculated directly because of the vanishing field divergence, which results from the source-freedom constraint. To calculate these interactions, we determine the Lagrangian of the primary fields and include all interactions which result from both (primary and secondary) fields. This means that the energy of interaction of the secondary fields is transferred from the primary fields. The inclusion of all interactions modifies the current density in the Lagrangian.
With $`c^2j_{[m]}^\nu =\widehat{\gamma }u_\mu T^{\mu \nu }`$ as a current density of energy and momentum, a possible magnetic current density $`j_{[b]}^\nu `$, and the primary current density $`j_{[e]}^\nu `$, we obtain the complete Lagrangian: $$\mathcal{L}=-\frac{1}{4\mu _0}F_{[e]\mu \nu }F_{[e]}^{\mu \nu }-j_{[e,m,b]}^\nu A_{[e]\nu }$$ (54) with the currents: $$j_{[e,m,b]}^\nu =j_{[e]}^\nu +j_{[m]}^\nu K_{[e]}+j_{[b]}^\nu \epsilon _0K_{[v]}$$ (55) We obtain from the Euler-Lagrange equations of the fields: $$\frac{\partial F_{[e]}^{\mu \nu }}{\partial x^\mu }=\mu _0j_{[e,m,b]}^\nu $$ (56) and obtain for each term describing the specific interaction of the fields and currents: $`{\displaystyle \frac{\partial T_{[E]}^{\mu \nu }}{\partial x^\mu }}`$ $`=`$ $`F_{[e]}^{\nu \lambda }j_{[e]\lambda }`$ (57) $`{\displaystyle \frac{\partial T_{[G]}^{\mu \nu }}{\partial x^\mu }}`$ $`=`$ $`F_{[e]}^{\nu \lambda }j_{[m]\lambda }K_{[e]}`$ (58) $`{\displaystyle \frac{\partial T_{[B]}^{\mu \nu }}{\partial x^\mu }}`$ $`=`$ $`F_{[e]}^{\nu \lambda }j_{[b]\lambda }\epsilon _0K_{[v]}`$ (59) We obtain an energy balance equation as a local conservation law by summing up these interaction terms together with the divergence of the energy density of the primary field and integrating over the volume. However, the coupling strength must remain at a constant value. As we have shown in sec. 3, a volume integration is necessary to include the general variation of the coupling strength $`K`$. Together with the interactions of the fields within each application we obtain the local conservation law: $`{\displaystyle \frac{\partial T_{[e]}^{\mu \nu }}{\partial x^\mu }}`$ $`=`$ $`\left({\displaystyle \frac{2}{K_{[e]}}}T_{[e]}^{\mu \nu }{\displaystyle \frac{\partial K_{[e]}}{\partial x^\mu }}\right)+{\displaystyle \frac{\partial T_{[E]}^{\mu \nu }}{\partial x^\mu }}+{\displaystyle \frac{\partial T_{[G]}^{\mu \nu }}{\partial x^\mu }}`$ (60) $`{\displaystyle \frac{\partial T_{[e]}^{\mu \nu }}{\partial x^\mu }}`$ $`=`$ $`\left({\displaystyle \frac{2}{K_{[v]}}}T_{[e]}^{\mu \nu }{\displaystyle \frac{\partial K_{[v]}}{\partial x^\mu }}\right)+{\displaystyle \frac{\partial T_{[E]}^{\mu \nu }}{\partial x^\mu }}+{\displaystyle \frac{\partial T_{[B]}^{\mu \nu }}{\partial x^\mu }}`$ (61) The variation of the field energy is on the left-hand side of the equation. On the right-hand side we have the variation of the coupling strength and the interaction terms. The primary field strength tensor satisfies the wave equation of the fields: $$\frac{\partial }{\partial x^\nu }\frac{\partial F_{[e]}^{\mu \nu }}{\partial x^\mu }=0$$ (62) Thus we obtain from the Lagrange equations: $$\frac{\partial j_{[e,m,b]}^\nu }{\partial x^\nu }=0$$ (63) With the following condition for all currents: $$j^\nu \frac{\partial K}{\partial x^\nu }=\varrho u^\nu \frac{\partial K}{\partial x^\nu }=0$$ (64) $$\frac{\partial j^\nu }{\partial x^\nu }=0$$ (65) the local conservation of each current follows, and we obtain generally, for each coupling coefficient: $$u^\nu \frac{\partial K}{\partial x^\nu }=0$$ (66) This relation can be verified by the energy dependence relation of the coupling coefficients. (We would obtain the same results with covariant derivatives, because W and K are scalars.)
For the differential of K we set: $$dK=\frac{K}{2}\frac{W_0^2}{W^3}dW$$ (67) from which results: $$u^\nu \frac{\partial K}{\partial x^\nu }=\frac{K}{2}\frac{W_0^2}{W^3}\left(u^\nu \frac{\partial W}{\partial x^\nu }\right)=0$$ (68) The fact that $`W`$ depends only on the field values of the E- or the B-field and on the field volume $`V`$, with $`W(E/B,V)=m_0(E/B,V)c^2`$, yields the following relation: $$u^\nu \frac{\partial W}{\partial x^\nu }=u^\nu \left(\frac{\partial W}{\partial \left(E/B\right)_i}\frac{\partial \left(E/B\right)_i}{\partial x^\nu }+\frac{\partial W}{\partial V}\frac{\partial V}{\partial x^\nu }\right)=0$$ (69) where $`\left(E/B\right)`$ means the E- or B-field components, according to the two cases (A or B) for the primary field. The conservation of the electric current implies charge conservation, and the conservation of the mass current implies the conservation of energy and momentum. Thus, any discussed interaction of primary and secondary fields is only possible with these conserved currents. The spatial and time components of the four-force of mass and electric charge, including both coupling coefficients, are described by the following relations: $`\left(m=m_0\widehat{\gamma }\right)`$ $$\left(mK_{[e]}-q_e\right)\left(\left(\stackrel{}{E}+\stackrel{}{v}\times \stackrel{}{B}\right)-K_{[v]}\left(\stackrel{}{B}-\frac{\stackrel{}{v}}{c^2}\times \stackrel{}{E}\right)\right)=\stackrel{}{F}$$ (70) $$\left(mK_{[e]}-q_e\right)\left(\stackrel{}{E}-K_{[v]}\stackrel{}{B}\right)\cdot \stackrel{}{v}=P$$ (71) As a concluding remark, we note for the calculation of multipole moments that for the secondary fields the monopole moment vanishes (because of the source-freedom). Thus, all multipole expansions begin with the dipole moment as the leading term. All higher multipole moments of the secondary fields are composed of dipole moments generated from those fields. ## 6 The maximum strength of the couplings and their corresponding values of field energies The main criterion for the maximum strength of the couplings is the equal strength of both interactions, from the primary field as well as from the secondary field. This criterion characterizes the maximum values of the coupling coefficients and is justified by the energy conservation between primary and secondary fields. For the coupling between electromagnetic and gravitational fields we determine the relation between the quantities of the electric charge and the rest mass of the lightest particles. The energy of the electric field surrounding that electric charge can be expressed as the rest mass of the electric charge. Although these values cannot be calculated from classical field theory, this relation between charge and rest mass can be justified by the quantization of both quantities. We have to determine this relation between the electric charge and the rest mass from the experimental values. The lightest known electrically charged particles are electrons, together with positrons as their charged counterparts. We assume that the mass shift between the electron and the electron neutrino as its neutral partner results exclusively from the electric charge, which establishes a connection between electric charge and rest mass. Therefore, we can use their specific charge as a characterization of the upper boundary value of the aforementioned coupling strength.
For a determined energy value $`W_{eg}`$ we have: $$0\le K_{[e]}\le K_q=\left|\frac{e}{m_e}\right|\text{for:}0\le W_{[e]}\le W_{eg}$$ (72) and: $$K_{[e]}=K_q=\left|\frac{e}{m_e}\right|\text{for:}W_{[e]}\ge W_{eg}$$ (73) with the elementary charge $`q_e=e`$ and the rest mass $`m_0=m_e`$. For the coupling between electric and magnetic fields in the other application, we have the value of the light velocity as the quantity of the upper boundary value of this coupling strength. For a determined energy value $`W_{eb}`$ we have: $$0\le K_{[v]}\le K_c=\left|c\right|\text{for:}0\le W_{[e]}\le W_{eb}$$ (74) and: $$K_{[v]}=K_c=\left|c\right|\text{for:}W_{[e]}\ge W_{eb}$$ (75) To calculate the values of the field energies $`W_{eg}`$ and $`W_{eb}`$ we have to compare the strength of the interactions of the primary and secondary fields. We set for the ratio of these interactions: (electromagnetic and gravitational interaction) $$\frac{W_{[G]}}{W_{[E]}}=\left|\frac{m_eK_{[e]}}{e}\right|=\frac{K_{[e]}}{K_q}$$ (76) (electric and magnetic interaction) $$\frac{W_{[B]}}{W_{[E]}}=\left|\frac{q_b\epsilon _0K_{[v]}}{e}\right|$$ (77) For the relation between electric and magnetic fields, we have to define the magnetic charge in order to compare these interactions. The value of this elementary charge $`q_b`$ is defined by the quantum condition of Dirac : $$q_b=\mathrm{\Phi }_b=\frac{2\pi \hbar }{e}$$ (78) This condition also occurs with the Aharonov-Bohm effect ; there is, however, no evidence for the existence of magnetic monopoles. We obtain for the comparison of the interactions: $$\frac{W_{[B]}}{W_{[E]}}=\frac{K_{[v]}}{c}\frac{1}{2\alpha _e}$$ (79) which differs by the factor $`2\alpha _e`$. The Sommerfeld constant $`\alpha _e`$ is defined by: $$\alpha _e=\frac{e^2}{4\pi \epsilon _0\hbar c}$$ (80) So we find for the maximum couplings: $`K_{[e]}=K_q\Rightarrow W_{[G]}=W_{[E]}`$ (81) and $`K_{[v]}=c\Rightarrow W_{[B]}=W_{[E]}{\displaystyle \frac{1}{2\alpha _e}}`$ (82). For a description by dimensionless coupling constants similar to the Sommerfeld constant we can define: $`\alpha [K_{[e]}]`$ $`=`$ $`\alpha _e{\displaystyle \frac{K_{[e]}}{K_q}}\le \alpha _e`$ (83) $`\alpha [K_{[v]}]`$ $`=`$ $`\alpha _e{\displaystyle \frac{K_{[v]}}{c}}\le \alpha _e`$ (84) Therefore these coupling coefficients have the property of a running coupling constant. All these secondary interactions are generated from electromagnetic fields, which are the primary fields in all these applications, and cannot become stronger than the electromagnetic interactions. To calculate the quantities of equal interaction strength we determine the ratio of strength of those interactions which have no dependence on any coupling coefficients. So we have to calculate the ratio of strength between electromagnetic and gravitational interactions (electrodynamics and general relativity) for the specific case of electromagnetic dipole fields and gravitational quadrupole fields. These ratios of strength can be described by the exchange ratios of spin-1 particles and spin-2 particles, which can be calculated from the strength of the electromagnetic dipole radiation and the gravitational quadrupole radiation. To calculate the ratio of radiations we consider the electron-positron system, which has a non-vanishing dipole moment. (For the strength of the electromagnetic dipole radiation and the strength of the gravitational quadrupole radiation see for example: p. 208 eq.(67.8) / 216 eq.(70.1) and p. 423 eq.(110.16) / 424 of .)
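As a numerical cross-check of the factor $`\frac{1}{2\alpha _e}`$ in (79), one can insert the Dirac charge (78) into (77) directly. A minimal sketch with SI constants (the script itself is ours, not part of the original derivation); the radiation-ratio calculation announced above is carried out next:

```python
from math import pi

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
c = 2.99792458e8         # m/s

q_b = 2 * pi * hbar / e                       # Dirac's magnetic charge, eq. (78)
alpha_e = e**2 / (4 * pi * eps0 * hbar * c)   # Sommerfeld constant, eq. (80)

# eq. (77) with K_v = c, compared with 1/(2*alpha_e) from eq. (79)
print(q_b * eps0 * c / e, 1 / (2 * alpha_e))  # both ~68.5
```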
To retain only the rest mass as the source of radiation, we have to calculate nonrelativistically. In the electron-positron system we can determine $`r^2\omega ^2`$ by the following connection: $$r^2\omega ^2=\frac{\alpha _e^2c^2}{4}$$ (85) Then we obtain for the ratio of the radiations: $$\frac{P_D}{P_Q}=\frac{5}{12}\frac{4}{\alpha _e^2}\frac{e^2}{4\pi \epsilon _0\gamma m_e^2}=\frac{5}{3\alpha _e^2}\frac{W_{[E]}}{W_{[M]}}$$ (86) ($`\gamma =`$ G, the Newton constant). For an equal exchange ratio of spin-1 particles (dipole radiation) and spin-2 particles (quadrupole radiation) we set $`P_D=P_Q`$. Therefore, we can deduce: $$W_{[E]}=\frac{3\alpha _e^2}{5}W_{[M]}$$ (87) and with $`W_{[B]}2\alpha _e=W_{[E]}`$: $$W_{[B]}=\frac{3\alpha _e}{10}W_{[M]}$$ (88) The threshold values are determined by the Planck scale of quantum gravity. The Planck energy is defined as: $$W_P=\sqrt{\frac{\hbar c^5}{\gamma }}$$ (89) To get the energy values we set: $`W_P`$ $`=`$ $`W_{[M]}`$ (90) $`W_{[eg]}`$ $`=`$ $`W_{[E]}`$ (91) $`W_{[eb]}`$ $`=`$ $`W_{[B]}`$ (92) From this follow the threshold values of the field energies: $`W_{[eg]}`$ $`=`$ $`\sqrt{{\displaystyle \frac{\hbar c^5}{\gamma }}}{\displaystyle \frac{3\alpha _e^2}{5}}`$ (93) $`W_{[eb]}`$ $`=`$ $`\sqrt{{\displaystyle \frac{\hbar c^5}{\gamma }}}{\displaystyle \frac{3\alpha _e}{10}}`$ (94) These field energies have the numerical values: $`W_{[eg]}`$ $`=`$ $`62.5\times 10^3\,\mathrm{J}`$ (95) $`W_{[eb]}`$ $`=`$ $`4.28\times 10^6\,\mathrm{J}`$ (96) $`W_P`$ $`=`$ $`1.95\times 10^9\,\mathrm{J}`$ (97) with $`\alpha _e=1/137.035`$. We note that $`W_{[eb]}>W_{[eg]}`$ because of the factor $`2\alpha _e`$. For large distances and classical fields the threshold values of energy ($`W_{eg}`$ and $`W_{eb}`$) for the maximum coupling strength are different from each other and also different from the value of the Planck energy $`W_P`$. We recall the connection between the scalar coupling coefficients and the threshold values of the field energies: $`K_{[e]}`$ $`=`$ $`K_q\mathrm{exp}\left[{\displaystyle \frac{1}{4}}\left(1-{\displaystyle \frac{W_{eg}^2}{W_{[e]}^2}}\right)\right]`$ (98) $`K_{[v]}`$ $`=`$ $`K_c\mathrm{exp}\left[{\displaystyle \frac{1}{4}}\left(1-{\displaystyle \frac{W_{eb}^2}{W_{[e]}^2}}\right)\right]`$ (99) These threshold values of the energies $`W_{eg}`$ and $`W_{eb}`$ also depend on the Sommerfeld constant. At small distances this constant $`\alpha _e`$ becomes an effective running coupling constant $`\alpha _e\left(Q^2\right)`$, whose dependence on the energy (the momentum transfer of the particle) is described by renormalization effects (see for example p. 594 (eq. 12-127) of ): $$\frac{1}{\alpha _e\left(Q^2\right)}=\frac{1}{\alpha _e}+\frac{b_e}{2\pi }\mathrm{ln}\left(\frac{\left|Q\right|}{\mathrm{\Lambda }}\right)$$ (100) (Here $`\mathrm{\Lambda }`$ stands for the fixing point of renormalization, Q for the particle-momentum transfer, and $`b_e(<0)`$ corresponds to the number of the charged fermions.) Such effective coupling constants occur in the electroweak and strong interactions, which should be unified at higher energies . The following equation defines an analogous constant for the gravitational interaction, depending on the energy in a direct way (not by renormalization): $$\alpha _g=\frac{m^2\gamma }{\hbar c}=\frac{W^2\gamma }{\hbar c^5}$$ (101) Thus, we obtain at the Planck energy: $$\alpha _g\approx 1\text{for:}W\approx W_P$$ (102) At the Planck scale we should expect that all interactions of particles have the same strength.
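The numerical values (95)–(97) follow directly from (89), (93) and (94); a quick evaluation with SI constants reproduces them (the script is our own check):

```python
from math import sqrt

hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G_N = 6.674e-11         # m^3 kg^-1 s^-2 (gamma in the text)
alpha_e = 1 / 137.035

W_P = sqrt(hbar * c**5 / G_N)     # Planck energy, eq. (89)
W_eg = W_P * 3 * alpha_e**2 / 5   # eq. (93)
W_eb = W_P * 3 * alpha_e / 10     # eq. (94)
print(f"W_P  = {W_P:.3g} J")      # ~1.95e9 J
print(f"W_eg = {W_eg:.3g} J")     # ~6.25e4 J
print(f"W_eb = {W_eb:.3g} J")     # ~4.28e6 J
```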
The equality of all interaction strengths at the Planck scale justifies the following conditions: $$\left\{\begin{array}{c}W_{eg}\\ W_{eb}\end{array}\right\}\to W_P\text{for:}\alpha _e\left(Q^2\right)\to 1$$ (103) At small distances, therefore, the values $`W_{eg}`$ and $`W_{eb}`$ lie near the Planck energy $`W_P`$. The quantization of the gravitational fields (spin-1 graviton) can be realized in a way analogous to that of the electromagnetic fields in QED . For spin-2 gravitons the approach of canonical quantization is only possible for linear equations . However, as we have already noted, this approach is excluded for spin-2 particles . All fields in our approach are part of an Abelian gauge theory of massless spin-1 particles. We obtain the general Lagrangian in this quantum field theory (analogous to quantum electrodynamics), with $`\stackrel{~}{q}=\left(q_e+mK_{[e]}+q_b\epsilon _0K_{[v]}\right)`$: $`\mathcal{L}_F`$ $`=`$ $`-{\displaystyle \frac{1}{4\mu _0}}F_{[e]\mu \nu }F_{[e]}^{\mu \nu }`$ (104) $`\mathcal{L}_I`$ $`=`$ $`-\stackrel{~}{q}A_{[e]\nu }\overline{\psi }\gamma ^\nu \psi `$ (105) $`\mathcal{L}_D`$ $`=`$ $`\overline{\psi }\left(i\gamma ^\nu \partial _\nu -m_0{\displaystyle \frac{c}{\hbar }}\right)\psi `$ (106) $`\mathcal{L}`$ $`=`$ $`\mathcal{L}_F+\mathcal{L}_I+\mathcal{L}_D`$ (107) At the scale of the Planck energy $`W_P`$, the theory of quantum gravitation must be extended to include a quantization of spin-2 particles, which lies outside our present approach. ## 7 The violation of the discrete symmetries The common interactions of the primary and secondary fields yield several consequences concerning the discrete symmetries of parity, time reversal and charge conjugation. | Symmetry | C | P | T | CP | CT | PT | CPT | | --- | --- | --- | --- | --- | --- | --- | --- | | Interaction | | | | | | | | | E/B | + | - | - | - | - | + | + | | E/G | - | + | + | - | - | + | - | Table: The conservation and violation of discrete symmetries by the connected interactions with primary and secondary fields of the discussed applications. In the bottom row of the table we find an obvious violation of CPT. This violation is caused by the violation of the electric charge symmetry. The PT-symmetry is generally conserved, because no axial vectors, which conserve CP (as in the weak interaction), occur as sources of the fields. To determine the maximum coupling strength between the electromagnetic and gravitational fields, we defined in the previous section a relation of quantity between the electric charge and the rest mass of the lightest charged particles (electrons and positrons), independent of the sign of their charge. Therefore, both particles must have the same magnitude of rest mass (p. 158 of ), which is also required by the CPT-theorem . In general, the CPT-symmetry is conserved by every relativistically invariant field theory that contains no coupling of gravitational interactions to other interactions. Any coupling of gravity with other interactions causes a violation of the charge symmetry, because gravitational charge (mass or energy) cannot change its sign under charge conjugation . Therefore, the charge symmetry is violated by the electro-gravitational interaction. Two facts result from our considerations of the fields: (i) The electric charges, like mass and energy, are sources of fields which have a spherical configuration. (ii) The secondary fields resulting from our coupling approach cannot have such a spherical configuration.
Thus, it seems that the charge symmetry and CPT conservation hold for fields with spherical symmetry but not for those without spherical symmetry. ## 8 Conclusions We investigate the generation of fields from other fields by scalar coupling, assuming the priority of the fields over their sources. This kind of field generation is only possible for fields with a non-spherical configuration. Therefore, the spherical symmetry of all these fields is generally broken. This procedure allows a generation of gravitational fields which can be described by linear field equations. The strengths of the discussed couplings depend on the energy of the electric or magnetic fields. The maximum coupling strength is determined by the specific electric charge of the lightest charged particles (electrons and positrons). Thus, we are able to describe interacting electric, magnetic, and gravitational spin-1 fields, as well as their quantization, within a common formalism. For the quantum field theory approach we find that the maximum coupling strength is reached at the Planck energy, as one should expect for a quantum field theory of gravitational fields. As we have seen, these couplings of two different fields have important consequences for the discrete symmetries. The coupling of gravitational fields with other fields causes an obvious CPT violation. Violations of CPT are also conjectured in string field theory and in the standard model . However, it seems that for gravitational fields the CPT theorem is not strictly applicable, because general relativity is not a Poincaré-invariant theory . Thus, our investigations yield the following conclusions: It is known that the CPT-symmetry is conserved for the sources, which always generate spherical fields. However, in our approach the CPT violation occurs through the coupling between the electromagnetic and gravitational fields. Therefore, there must be a difference between the sources, which are surrounded by fields with spherical symmetry, and the fields without spherical symmetry that result from our approach.
## 1 Introduction The interaction of soft pions at comparatively low energies is described by the effective Lagrangian $$L_\pi =\frac{1}{2}\left[(\partial _\mu \sigma )^2+(\partial _\mu \stackrel{}{\pi })^2\right],$$ (1) including three isovector pion fields $`\pi _i`$, $`i=1,2,3`$, and an auxiliary scalar field $`\sigma `$ obeying the constraint $$\sigma ^2+\stackrel{}{\pi }^2=f_\pi ^2,$$ where $`f_\pi =92`$ MeV is the pion decay constant. The pion field represents the chiral phase of the quark condensate, which is why the parameterization through a unitary matrix is quite natural in this approach. Constructing the matrix $$U=\frac{1}{f_\pi }\left(\sigma +i\stackrel{}{\pi }\stackrel{}{\tau }\right),\hspace{1em}U^+U=1,$$ the Lagrangian (1) takes the form $$L_{eff}=\frac{f_\pi ^2}{4}\mathrm{Tr}\partial _\mu U\partial _\mu U^+.$$ (2) The Lagrangian (2) involves non-linear terms responsible for the many-pion interaction and allows for classical solutions. An important class of plane-wave type solutions for the pion field was described by A.A. Anselm , the Disoriented Chiral Condensate (DCC) being the particular case with wave vector $`\stackrel{}{k}=0`$ . In the collision of high energy particles (or, better, heavy ions) the system is ”warmed up” to temperatures at which the chiral symmetry is restored. Then, in the course of cooling, the symmetry breaks again and the scalar, $`\overline{q}q`$, or pseudoscalar, $`\overline{q}\gamma _5\tau _aq`$, condensate with the quantum numbers of $`\sigma `$ or $`\pi ^a`$ mesons settles out. All four orientations (3 pions and the $`\sigma `$-meson) are equivalent in the isotopic $`O(4)`$ space, and if the pion condensate, $`\overline{q}\gamma _5\tau _aq`$, is produced somewhere, this means the formation of a DCC domain. Possible scenarios for classical pion field (DCC) production at high energies and the experimental signatures of DCC have been widely discussed in the literature . One of the most significant signatures of DCC is the distribution over the numbers of neutral and charged pions. For independent production the multiplicity distribution obeys the Poisson law, so the number of $`\pi ^0`$ mesons is 1/3 of the total number of pions in a large multiplicity event, while the distribution of the ratio $`f=n^0/n_{tot}`$ is close to $`\delta (f-1/3)`$ (here $`n^0`$ is the number of $`\pi ^0`$ and $`n_{tot}`$ is the total number of pions). In the case of classical pion field creation the orientation of the isotopic vector $`\stackrel{}{A}^a`$, which determines the ratio of charged and neutral pions, is chosen once for the whole classical field domain. This results in a large probability, $$\frac{dw}{df}=\frac{1}{2\sqrt{f}},$$ (3) of events with an anomalously small number of neutral pions . There has still been no real progress in experimental attempts to observe such events (in $`p\overline{p}`$ collisions at the FNAL collider and in nucleus-nucleus reactions at CERN), although the hope of creating DCC under these conditions was originally not high. The heavy ion collisions at the new colliders (RHIC, LHC) seem more promising. In this paper we deal with another possibility: classical pion fields which can be produced and exist in the presence of an external source, i.e., the nucleons or quarks of an incident nucleus.
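The distribution (3) can be illustrated by a short Monte Carlo: for an isotropically oriented unit vector in isospin space, the neutral fraction $`f`$ is the square of a single component, whose density is exactly $`1/(2\sqrt{f})`$. The sampling scheme below is our own sketch of this standard argument:

```python
import numpy as np

rng = np.random.default_rng(0)
# Isotropic orientation: the 3rd component of a random unit 3-vector
# is uniform on [-1, 1]; the neutral pion fraction is its square.
u3 = rng.uniform(-1.0, 1.0, size=1_000_000)
f = u3**2

# Compare the sampled density with dw/df = 1/(2*sqrt(f)), eq. (3)
hist, edges = np.histogram(f, bins=20, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for fc, h in zip(centers[:5], hist[:5]):
    print(f"f = {fc:.3f}: sampled {h:.2f}, predicted {1/(2*np.sqrt(fc)):.2f}")
```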
Contrary to the Skyrmion type fields, for which the isotopic direction of the vector is strongly correlated with the position in coordinate space (the so-called ”hedgehog ansatz”), here the direction is described by a single vector $`\stackrel{}{A}^a`$ in the whole space, just as for the Anselm solution or DCC. Therefore the distribution (3) for the ratio of charged and neutral particles is still valid for the fields treated here. The structure and physical nature of these solutions are rather similar to the electromagnetic (Coulomb) field of a charged particle; however, the nonlinear pion interaction (2) allows for the existence of several solutions with different energies. The nucleon or nucleus in the cloud of such fields can be regarded as an excited state or resonance. Below we shall consider the spherically symmetric solutions, at first in the chiral limit, putting the pion mass $`m_\pi =0`$, and then we shall add the mass term to the Lagrangian (2). The mass term looks like a source in the classical equation and gives rise to a piece of the form $`m_\pi ^2\mathrm{sin}\phi `$, where $`\phi =|\pi |/f_\pi `$. The contribution of the external source is opposite in sign to the mass term. For a weak field $`\phi \ll 1`$ the source acts as an attractive potential and thereby allows for a stable solution, a stationary state. Even for a strong field $`\phi \gtrsim 1`$ the stationary state is possible in the case of a large size source. Once this classical solution is created in a heavy ion collision (if created), it can live for a long time, slowly decaying through the emission of very soft pions (in the nucleus reference frame). We describe the classical equation for the fields surrounding the nucleon and possible excitations (resonances) of this system in Section 2. The more interesting case of a large nucleus is considered in Section 3. ## 2 Classical solutions in chiral limit The pion interaction with fermions (quarks or nucleons) is given by the microscopic Lagrangian $$L_f=\overline{q}i\gamma ^\mu \partial _\mu q-g\overline{q}\left(\sigma +i\stackrel{}{\pi }\stackrel{}{\tau }\gamma _5\right)q.$$ (4) In what follows we shall treat it as an interaction with an external classical source, generally described by the isoscalar $`\rho (x)=\overline{q}(x)q(x)`$ and isovector $`\rho _V(x)=\overline{q}(x)\stackrel{}{\tau }\gamma _5q(x)`$ densities. For simplicity we shall deal with the situation when the source has zero isospin, so that $`\rho _V(x)=0`$, and the Lagrangian takes the form $$L=\frac{f_\pi ^2}{4}\mathrm{Tr}\partial _\mu U\partial _\mu U^+-\frac{1}{4}gf_\pi \rho \mathrm{Tr}\left(U+U^+-2\right),$$ (5) the last term being zero for $`U=1`$. Upon varying the Lagrangian with respect to the pion matrix $`U`$, the equation of motion yields $$\partial _\mu \left[U^+\partial _\mu U\right]=\frac{g}{2f_\pi }\rho (x)\left[U^+-U\right].$$ In what follows the matrix $`U(x)`$ is sought among the kind of exact solutions found in refs. : $$U(x)=V^{-1}e^{i\tau _3f(x)}V,$$ (6) where $`V`$ is an arbitrary but constant unitary matrix. The function $`f(x)`$ obeys the equation $$\partial ^2f(t,x)=\frac{g}{f_\pi }\rho (x)\mathrm{sin}f(t,x).$$ We are interested in stationary solutions, for which $$\mathrm{\Delta }f(x)=\frac{g}{f_\pi }\rho (x)\mathrm{sin}f(x),$$ where $`\mathrm{\Delta }=-\partial _i^2`$ is the Laplace operator.
Supposing that the function $`f(x)`$ decreases at spatial infinity, this equation can be transformed to the nonlinear integral form $$f(x)=\frac{1}{4\pi }\frac{g}{f_\pi }\int \frac{d^3y\,\rho (y)}{|x-y|}\mathrm{sin}f(y).$$ (7) The total energy accumulated in the pion field is generally finite: $$E=gf_\pi \int d^3x\,\rho (x)\left[\frac{1}{2}f\mathrm{sin}f+\mathrm{cos}f-1\right].$$ (8) There are two different regimes for the equation (7): the regimes of small and large size of the source. In the first case the density is concentrated in a small region of space, and since we are looking for a smooth function we can put $`\mathrm{sin}f(y)\approx \mathrm{sin}f(0)`$ in the integrand of (7). Then we obtain a Coulomb-like potential at finite distances from the source, $$f(x)=\frac{1}{4\pi }\frac{g}{f_\pi }\frac{1}{|x|}\mathrm{sin}f,$$ with the charge-like constant $`f=f(0)`$ obeying the equation $$f=\frac{1}{a}\mathrm{sin}f,$$ (9) in which the dimensionless parameter $`a`$, $$\frac{1}{a}=\frac{g}{4\pi f_\pi }\int d^3y\frac{\rho (y)}{|y|},$$ is proportional to the radius of the source. The equation (9) has a set of solutions for $`a\ll 1`$: $$\mathrm{sin}f_n\approx \pi an,\hspace{1em}n=0,\pm 1,\pm 2,\mathrm{},\hspace{1em}|n|\lesssim \frac{1}{\pi a}.$$ As a result the pion energy (8) turns out to be quantized, despite the purely classical treatment: $$E_n=gf_\pi \left[\frac{1}{2}f_n\mathrm{sin}f_n+\mathrm{cos}f_n-1\right].$$ (10) The smaller the effective dimensionless radius of the source $`a`$, the more energy levels appear. For $`a\to 0`$ they form two quasi-continuous bands for even and odd states, approximated for not very large $`n`$ as $$E_n\approx gf_\pi \left[(-1)^n-1+\frac{1}{2}\pi ^2n^2a\right],$$ (11) with energy spacing $`\approx g\pi ^2f_\pi a`$ between the levels in each band. The lower levels in the odd band in (11) have negative energy. One has to emphasize here that $`E_n`$ are only the pion field energies, whereas the total energy comprises, in addition, the energy of the source itself, that is, the energy of the quarks or nucleons inside. The smaller the volume they occupy, the larger energy they have. Therefore there is a competition between these two effects which could, under certain conditions, lead to the formation of a bound state. In the second regime there are no finite energy solutions except for $`f=0`$. Indeed, after rescaling the variables $`\rho (x)=1/r_0^3\,\overline{\rho }(x/r_0)`$, $`f(x)=\overline{f}(x/r_0)`$, where $`r_0`$ is a characteristic size within which the density is non-zero, the equation (7) reads $$\overline{f}(z)=\frac{g}{4\pi f_\pi r_0}\int \frac{d^3z^{}\,\overline{\rho }(z^{})}{|z-z^{}|}\mathrm{sin}\overline{f}(z^{}),$$ so only the zero solution is allowed for large enough $`r_0`$. A simple estimate shows that the critical radius $`r_0`$ needed for a nontrivial solution should be smaller than about 1 fm. The energy $`E_1`$ for $`r_0\approx 1`$ fm is of the order of the mass difference between the baryon resonances, and it is not excluded that some of these states are due to an excitation of the ”Coulomb-like” pion field considered above. ## 3 Large size source The last statement on the absence of a solution for a large source is valid only for fixed constant $`g`$, while the value of $`g=g_{\pi NN}A`$ increases for a heavy ion with atomic number $`A`$ faster than the characteristic radius $`r_0\propto A^{1/3}`$. This is an interesting case which should be discussed in more detail.
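Before turning to the large-source case, the level structure (9)–(11) of the small-source regime can be checked numerically; a sketch (the value of $`a`$ is an arbitrary illustration):

```python
import numpy as np
from scipy.optimize import brentq

a = 0.05  # small dimensionless source radius, a << 1

def energy(f):
    # eq. (10), in units of g*f_pi
    return 0.5 * f * np.sin(f) + np.cos(f) - 1.0

for n in range(1, 6):
    # the n-th root of f - sin(f)/a = 0 lies near f = pi*n
    f_n = brentq(lambda f: f - np.sin(f) / a,
                 np.pi * (n - 0.5), np.pi * (n + 0.5))
    approx = (-1) ** n - 1 + 0.5 * np.pi**2 * n**2 * a  # eq. (11)
    print(f"n={n}: f_n={f_n:.4f}, E_n={energy(f_n):+.4f}, approx={approx:+.4f}")
```

The exact values (10) and the band approximation (11) agree at the percent level for small `n`, with the odd band starting at negative energy as stated above.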
Let us put the nucleon density $`\rho =const`$ inside the nucleus and include the pion mass. This results in the equation $$\mathrm{\Delta }f(x)=\left(\frac{g}{f_\pi }\rho (x)-m_\pi ^2\right)\mathrm{sin}f(x),$$ (12) where the density $`\rho (x)=\rho _0\theta (R-|x|)`$ and $`R`$ is the nuclear radius. For the normal nuclear density the effective strength of the source is $`G=\rho _0\frac{g}{f_\pi }\approx 4.7`$ fm<sup>-2</sup>. For weak fields the equation can be linearized and takes the simple form $$\mathrm{\Delta }f(x)=(G-m_\pi ^2)f(x).$$ (13) The well known spherically symmetric solution reads $$f(r)=B_1\frac{\mathrm{sin}(rb)}{r}\text{for }r=|x|<R,$$ with $`b=\sqrt{G-m_\pi ^2}`$ and $$f(r)=B_2\frac{\mathrm{exp}(-rm_\pi )}{r}\text{for }r>R.$$ (14) In order to match the solutions at $`r=R`$ the logarithmic derivative of the first expression should obey the condition $$\frac{d\mathrm{ln}(rf(r))}{dr}_{(\text{at }r=|x|=R)}=b\,\mathrm{ctg}(Rb)=-m_\pi .$$ (15) The fine tuning of the nuclear radius $`R`$, which is needed to satisfy (15), looks, of course, rather unlikely. However, it is possible to provide the matching in the non-linear case by choosing the value $`f(0)=O(1)`$ (for large enough $`b>\pi /R`$). We found such solutions numerically for a reasonable ion radius $`R=5.6`$–$`5.9`$ fm with the field amplitudes $`f(0)\approx 1`$–$`2`$, respectively. The results are shown in Fig.1 by the solid curves. This solution has, in some sense, a nature similar to that of the classical Coulomb field, whose amplitude is likewise fixed by the charge (and the size) of the source. To calculate the energy of the pion field we include the pion mass term $`m_\pi ^2`$ in (8) and obtain a very small value $`E=8`$ MeV for $`f(0)=0.96`$ (this is still a weak field) but $`E=280`$ MeV for the case of $`f(0)=2`$. The fields shown by the solid curves in Fig.1 look like the result of pion condensation in a heavy nucleus, which was studied many years ago, mainly in terms of Fermi liquid theory. A possibility to observe the pion condensation in heavy ion collisions is discussed in . Unfortunately, such a solution cannot be realized. There is no pion condensation at the normal nuclear density $`\rho _0`$. What has happened? The Lagrangian (4) is correct when written in terms of the quarks and pions. We have applied it to the nucleon instead of the quarks. Taking the value $`g=g_{\pi NN}`$ we assume the whole nucleon mass to be generated by the interaction of the nucleon with the classical $`\sigma `$ field $`<\sigma >=f_\pi `$. In the framework of the effective Lagrangian (1),(2) this leads to a very strong interaction of the soft pions with the nucleons. On the other hand, the $`\pi N`$ scattering amplitude is rather small, corresponding to the so-called $`\sigma `$-term, which is about 20–30 times smaller than the proton mass. That is why we take a smaller value of the coupling $`g`$ (say, $`G=0.18`$ fm<sup>-2</sup>), in better agreement with the nucleon $`\sigma `$-term. Then a heavy ion acts as a rather weak source. For $`G<0.067`$ fm<sup>-2</sup> and a reasonable ion size ($`R\approx 6`$ fm) the resulting attractive potential is not even strong enough to form a bound state. However, for $`G=0.18`$ fm<sup>-2</sup> it is possible to form a quasistationary, long-lived state, which decays only due to the non-linear effects. Indeed, in the weak field approximation one can find a time dependent solution of the form $`f(t,x)=exp(-iEt)f(x)`$, where the function $`f(x)`$ satisfies eq.(13) with $`G-m_\pi ^2`$ replaced by $`G+E^2-m_\pi ^2`$.
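A sketch of this eigenvalue search: we match the interior logarithmic derivative $`b\,\mathrm{ctg}(Rb)`$, with $`b=\sqrt{G+E^2-m_\pi ^2}`$, to the exterior value $`-\kappa `$ with $`\kappa =\sqrt{m_\pi ^2-E^2}`$; this matching condition is our generalization of eq. (15) to $`E\ne 0`$ (units with $`\mathrm{}=c=1`$, 1 fm<sup>-1</sup> $`\approx `$ 197.3 MeV):

```python
import numpy as np
from scipy.optimize import brentq

hbarc = 197.327          # MeV fm
m_pi = 139.57 / hbarc    # pion mass in fm^-1
G = 0.18                 # source strength in fm^-2
R = 5.9                  # ion radius in fm

def matching(E_mev):
    """Interior minus exterior logarithmic derivative of r*f(r) at r=R."""
    E = E_mev / hbarc
    b = np.sqrt(G + E**2 - m_pi**2)   # interior wavenumber
    kappa = np.sqrt(m_pi**2 - E**2)   # exterior decay constant
    return b / np.tan(R * b) + kappa

# Search below the pion mass threshold (bracket chosen so b^2 > 0).
E0 = brentq(matching, 115.0, 139.0)
print(f"E = {E0:.0f} MeV")  # close to the ~130 MeV quoted below
```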
Choosing the eigenenergy $`E\approx 130`$ MeV to provide the matching at $`r=R`$, we obtain the solution shown in Fig.1 by the dotted curve (for $`R=5.9`$ fm). Being produced in a heavy ion collision (if it is), it can live for a long time, slowly decaying through the emission of very soft pions (in the nucleus reference frame) due to the non-linear terms in the Lagrangian (2). The life-time of the solution decreases with the field amplitude $`f`$, which determines the multiplicity $`n_\pi `$ of the pions in the classical field. The dotted curve presented in Fig.1 corresponds to $`n_\pi =2Ef_\pi ^2\int f^2(x)\,d^3x\approx 12.3`$. The existence of such a solution may help in the production of ”breather”-like classical pion fields . Interaction with the source (nuclear matter) makes this solution more stable and long-lived. The presence of the source could also stabilize, under certain conditions, solutions of the ”pion string” kind , for which the direction of the isotopic vector $`\stackrel{}{A}^a`$ is correlated with the azimuthal angle in the coordinate $`x,y`$ plane. Nevertheless, the production of the classical pion field we discussed first (solid curves in Fig.1) cannot be completely excluded in high energy heavy ion collisions. If a kind of ”quark bag” (a large drop of quark plasma) were formed after the collision instead of the outgoing nucleus, then we probably come back to the quark-pion Lagrangian (4) with a large coupling $`g`$ and the corresponding value $`G\approx 35`$ fm<sup>-2</sup>. The interaction would then be strong enough to form a stationary classical pion field (pion condensate) in the drop of quark matter. Acknowledgments One of us (A.S.) is grateful to Prof. K. Goeke for hospitality and to Prof. L. McLerran for discussions. This work is supported by grants DFG-RFFI-436-M3/540 and RFBR 98-02-17636.
# Quasi-periodic X-ray Flares from the Protostar YLW15 ## 1. Introduction Low-mass Young Stellar Objects (YSOs) evolve from molecular cloud cores through the protostellar (ages $`\sim `$10<sup>4-5</sup> yr), Classical T Tauri (CTTS: $`\sim `$10<sup>6</sup> yr) and Weak-lined T Tauri (WTTS: $`\sim `$10<sup>7</sup> yr) phases to the main sequence. Protostars are generally associated with the Class 0 and I spectral energy distributions (SEDs), which peak respectively in the millimeter and infrared (IR) bands. Bipolar flows accompany this phase, suggesting dynamic gas accretion. CTTSs still have circumstellar disks, though they have expelled or accreted the infalling envelopes. They are associated with the Class II spectra, which peak in the near-IR. Finally, as the circumstellar disk disappears, YSOs evolve to WTTSs, associated with Class III stars. Early stellar evolution is reviewed by Shu, Adams, & Lizano (1987) and André & Montmerle (1994). The Einstein Observatory discovered that T Tauri Stars (TTSs), or Class II and Class III infrared objects, are strong X-ray emitters, with luminosities 100–10000 times larger than those of solar flares. These X-rays showed high amplitude time variability like solar flares. The temperature ($`\sim `$1 keV) and plasma density ($`n_e\sim `$10<sup>11</sup> cm<sup>-3</sup>) are comparable to those of the Sun, hence the X-ray emission mechanism has been thought to be a scaled-up version of solar X-ray emission, i.e., magnetic activity on the stellar surface enhanced by a dynamo process (Feigelson & DeCampli 1981; Montmerle et al. 1983). X-ray and other high energy processes in YSOs are reviewed by Feigelson & Montmerle (1999). In contrast to TTSs, Class I infrared objects are generally surrounded by circumstellar envelopes of $`A_V`$ up to $`\sim `$ 40 or more, hence are almost invisible in the optical, near infrared and even soft X-ray bands. The ASCA satellite, sensitive to high energy X-rays up to 10 keV, which can penetrate heavy absorption, has found X-rays from Class I objects in the cores of the R CrA, $`\rho `$ Oph, and Orion clouds in the hard band ($`>`$ 2 keV) (Koyama et al. 1994; Koyama et al. 1996; Kamata et al. 1997, Ozawa et al. 1999). Even in the soft X-ray band, deep exposures with the ROSAT Observatory detected X-rays from YLW15 in $`\rho `$ Oph (Grosso et al. 1997) and CrA (Neuhäuser & Preibisch 1997). A notable aspect of these pioneering observations was the discovery of X-ray flares from Class I stars. ROSAT discovered a giant flare from the protostar YLW15 with total luminosity (over the full X-ray band) of 10<sup>34-36</sup> erg s<sup>-1</sup>, depending on the absorption. ASCA observed more details of X-ray flares from the protostars EL29 in Ophiuchus, R1 in the R CrA core, and SSV63E+W in Orion, which are associated with larger $`N_H`$ $`\sim `$ 10<sup>22-23</sup> cm<sup>-2</sup> than seen in TTSs. All these findings led us to deduce that greatly enhanced magnetic activity, already well-established in older TTSs, is present in the earlier protostellar phase. Stimulated by these results, and to search for further examples of protostellar activity in the X-ray band, we have performed an extensive follow-up observation of a core region in $`\rho `$ Oph, with several Class I X-ray sources. The follow-up observation was made with ASCA 3.5 years after the first observation (Koyama et al. 1994, Kamata et al. 1997). Some previously bright Class Is became dim, while other Class Is were identified as hard X-ray sources.
This paper discusses the brightest hard X-ray source, YLW15, concentrating on the characteristics and implications of its peculiar time behavior: quasi-periodic hard X-ray flares. For comparison with previous results, we assume the distance to the $`\rho `$ Oph region to be 165 pc (Dame et al. 1987) throughout this paper, although new Hipparcos data suggest a closer distance $`d\approx 120`$ pc (Knude & Hog 1998). ## 2. Observation We observed the central region of the $`\rho `$ Oph cloud with ASCA for $`\sim `$100 ks on 1997 March 2–3. The telescope pointing coordinates were $`\alpha `$(2000) = 16h 27.4m and $`\delta `$(2000) = $`-`$24° 30′. All four detectors, the two Solid-state Imaging Spectrometers (SIS 0, SIS 1) and the two Gas Imaging Spectrometers (GIS 2, GIS 3), were operating in parallel, providing four independent data sets. Details of the instruments, telescope and detectors are given by Burke et al. (1991), Tanaka, Inoue, & Holt (1994), Serlemitsos et al. (1995), Ohashi et al. (1996), Makishima et al. (1996), and Gotthelf (1996). Each of the GIS was operated in the Pulse Height mode with the standard bit assignment that provides time resolutions of 62.5 ms and 0.5 s for high and medium bit rates, respectively. The data were post-processed to correct for the spatial gain non-linearity. Data taken at geomagnetic rigidities lower than 6 GV, at elevation angles less than 5° from the Earth, and during passage through the South Atlantic Anomaly were rejected. After applying these filters, the net observing time for both GIS 2 and GIS 3 was 94 ks. Each of the SIS was operated in the 4-CCD/Faint mode (high bit rate) and in the 2-CCD/Faint mode (medium bit rate). However, we concentrate on the 4-CCD/Faint results in this paper since YLW15 is out of the 2-CCD field of view. The data were corrected for spatial and gain non-linearity, residual dark distribution, dark frame error, and hot and flickering CCD pixels using standard procedures. Data were rejected during South Atlantic Anomaly passages and low elevation angles as for the GIS data. In order to avoid contamination due to light leaks through the optical blocking filters, we excluded data taken when the satellite viewing direction was within 20° of the bright rim of the Earth. After applying these filters, the net observing time for the 4-CCD mode was 61 ks for SIS 0 and 63 ks for SIS 1. Towards the end of this observation, we detected an enormous flare from the T Tauri star ROXs31, which is located close to YLW15 (see §3.1, source 6 in Figure 1). The peak flux of ROXs31 is 1–2 orders of magnitude larger than that of YLW15 (see §4.2), and its broad point spread function contaminates YLW15 during the flare. Therefore, we excluded the GIS and SIS data taken during the flare of ROXs31 in all analyses of YLW15. ## 3. Results and Analysis ### 3.1. Images Figure 1 shows X-ray images of the $`\rho `$ Ophiuchi Core F region in two different energy bands (left panel: 0.7–2 keV, right panel: 2–10 keV), obtained with the SIS detectors. Class I sources are indicated by crosses. Since the absolute ASCA positional errors can be as large as 40<sup>′′</sup> for the SISs (Gotthelf 1996), we compared the ASCA peak positions to the more accurately known IR positions of two bright sources in the SIS field, ROXs21 (source 5) and ROXs31 (source 6), which are indicated by filled circles in the left panel of Figure 1.
To obtain the ASCA peak positions, we executed a two-dimensional fitting in the 0.7–2 keV band; we fitted these sources with a position-dependent point spread function in the 0.7–2 keV band and a background model. This procedure was done in the Display45 analysis software package (Ishisaki et al. 1998). The position of ROXs31 was based on the flare phase of this source, while the position of ROXs21 was based on the data before the flare of ROXs31. IR positions are provided by Barsony et al. (1997). The ASCA SIS positions had an average offset (weighted mean by photon counts) of $`+`$0.18 s in right ascension and $`-`$7.4<sup>′′</sup> in declination from the IR frame. This positional offset is corrected in Figure 1. After the boresight error correction, remaining excursions between the X-ray and IR positions are 5.5<sup>′′</sup> (rms), which is consistent with the SIS position uncertainty for point sources (Gotthelf 1996). We take the systematic positional error to be 5.5<sup>′′</sup>. From the 2–10 keV band image, we find that the X-ray fluxes from the Class I protostars EL29 and WL6 (sources 3 and 4 in Figure 1, right panel) are one-third and less than one-third, respectively, of those in the first ASCA observation made in August 1993 (Kamata et al. 1997). The brightest X-ray source in the 2–10 keV band is an unresolved source at $`\alpha `$(2000) = 16h 27m 27.0s and $`\delta `$(2000) = $`-`$24° 40′ 50<sup>′′</sup> in the position-corrected frame. We derived this peak position by the two-dimensional fitting in the 2–10 keV band. Since the statistical error is 1<sup>′′</sup>, the overall X-ray error (including the systematic error) is $`\pm 6^{\prime \prime }`$. The closest IR source is YLW15, with VLA position $`\alpha `$(2000) = 16h 27m 26.9s and $`\delta `$(2000) = $`-`$24° 40′ 49.8<sup>′′</sup> ($`\pm 0.5^{\prime \prime }`$; Leous et al. 1991), located 1.5<sup>′′</sup> away from the X-ray source. The next nearest source is GY263, with IR position $`\alpha `$(2000) = 16h 27m 26.6s and $`\delta `$(2000) = $`-`$24° 40′ 44.9<sup>′′</sup> ($`\pm `$1.3<sup>′′</sup>; Barsony et al. 1997). This source is located 5.5<sup>′′</sup> from the X-ray position, on the border of the X-ray position error circle. Thus we conclude that the hard X-rays are most likely due to the Class I source YLW15. ### 3.2. X-ray Lightcurve of YLW15 We extracted a lightcurve from a $`3^{}`$ radius circle around the X-ray peak of YLW15 (see Figure 1). Before the enormous flare from the T Tauri star ROXs31, which occurred in the last phase of this observation (see §2), we detected another large flare from the Class II source SR24N, located about $`7^{}`$ away from YLW15 in the GIS field of view. To subtract the time-variable contamination from SR24N in the extended flux of YLW15, we selected a $`3^{}`$ radius background region (see Figure 1), equidistant from SR24N and YLW15. On the other hand, using such a background, we cannot exclude the contamination from ROXs21 (source 5 in Figure 1), which is 2 arcmin away from YLW15. Since the X-rays from ROXs21 are dominant below 2 keV (see §3.3 and Fig.3), we examine time variability only in the hard X-ray band ($`>`$ 2 keV), in which the flux is dominated by YLW15. Figure 2 (upper panel) shows the background-subtracted lightcurve in the 2–10 keV band from the sum of the SIS (SIS 0 and 1) and GIS (GIS 2 and 3) data. The lightcurve shows a sawtooth pattern with three flares. The peak fluxes of the three flares decrease successively.
Each flare exhibits a fast rise and an exponential decay with an $`e`$-folding time of 31$`\pm 1`$ ks ($`\chi ^2/d.o.f.`$ = 61/46), 33$`\pm 3`$ ks (80/47), and 58$`{}_{-13}{}^{+24}`$ ks (33/24), for the first, the second, and the third flares, respectively. We show the best-fit lightcurves for the second and the third flares with dashed lines, and the best-fit quasi-static model (see §4.1.1) for the first flare with a solid line, in the upper panel of Figure 2. ### 3.3. Time-Sliced Spectra of YLW15 To investigate the origin of the quasi-periodic flares, we made time-sliced spectra for the time intervals given in Figure 2. We extracted the source and background data for each phase from the same regions as those in the lightcurve analysis (see §3.2 and Figure 1). We found that all the spectra show a local flux minimum at $`\sim `$1.2 keV. For example, we show the spectra obtained with the SISs at phases 1 and 8 in Figure 3. This suggests that the spectra have two components, one hot and heavily absorbed, the other cool and less absorbed. We then examined possible contamination from the bright, soft X-ray source ROXs21 (source 5 in Figure 1). We extracted the spectrum of ROXs21 from a $`2^{}`$ radius circle around its X-ray peak. We extracted the data only during phase 1, in order to be free from contamination from the flare on SR24N, which occurred during phases 4–6 (see §3.2). The background data for ROXs21 were extracted from a $`2^{}`$ radius circle equidistant from YLW15 and ROXs21 during phase 1. After the subtraction of the background, the spectrum of ROXs21 is well reproduced by an optically thin thermal plasma model of about 0.6 keV temperature, with absorption fixed at $`N_\mathrm{H}=`$1.3$`\times `$10<sup>21</sup> cm<sup>-2</sup> ($`A_V`$ = 0.6 mag; Bouvier and Appenzeller 1992), with $`\chi ^2/d.o.f.`$ = 17/23. The flux of the soft component of YLW15 is about 30% of the flux of ROXs21, which equals the expected spill-over flux from ROXs21. Thus the soft X-ray component found in the YLW15 spectra is due to contamination from the nearby bright source ROXs21. Having obtained the best-fit spectrum for ROXs21, we fitted the spectrum of YLW15 in each phase with a two-temperature thermal plasma model. The cool component is set to the contamination from ROXs21, and the hot component is from YLW15. For YLW15, the free parameters are temperature ($`kT`$), emission measure ($`EM`$), absorption ($`N_\mathrm{H}`$) and metal abundance. For ROXs21, $`EM`$ is the only free parameter, and the other parameters are fixed to the best-fit values obtained in phase 1. We found no significant variation in $`N_\mathrm{H}`$ of YLW15 from phase to phase; hence, we fixed $`N_\mathrm{H}`$ to the best-fit value at phase 1. The resulting best-fit parameters of YLW15 for each time interval are shown in Table 1. The best-fit spectra of phases 1 and 8 are illustrated in Figure 3, and the time evolution of the best-fit parameters is shown in Figure 2. ## 4. Discussion In this observation, we detected hard X-rays ($`>`$ 2 keV) from the Class I source YLW15 for the first time; the source was not detected in the first ASCA observation, executed 3.5 years earlier (Kamata et al. 1997). On the other hand, the Class I sources EL29 and WL6, which emitted bright hard X-rays in the first observation, became very faint (see §3.1).
We therefore conclude that hard X-rays from Class I protostars in the $`\rho `$ Oph cloud are highly variable on timescales of years, and we suspect that the non-detection of hard X-rays from other Class I objects is partly due to this long-term variability. From YLW15, we discovered a peculiar time behavior: quasi-periodic flares. We shall now discuss the relation between each flare from YLW15 and the physical conditions. ### 4.1. Physical Parameters of the Triple Flare If the three intermittent flares are attributed to a single persistent flare, with the three events due to geometrical modulation, such as occultation of the flaring region by stellar rotation or orbital motion in an inner disk, then only the emission measure should have shown three peaks. The temperature would have smoothly decreased during the three flares. However, in our case, we see the temperature following the same pattern as the luminosity, decreasing after each jump, as indicated in Table 1 and the middle panel of Figure 2. To test further, we fitted the temperatures in phases 1–8 with a single exponential decay model, and confirmed that it is rejected at the 89% confidence level. We therefore interpret the variability of YLW15 as due to a triple flare, in which each flare is followed by a cooling phase. Let us label phases 1–3, 4–6, and 7–8 as Flare I, II, and III, respectively. In this subsection we will use the cooling phases to estimate the physical conditions of the plasma in each flare. Details are given in the Appendix. #### 4.1.1 Flare I Here we assume that the plasma responsible for Flare I is confined in one semi-circular magnetic loop with constant cross section along the flux tube, with length $`L`$ and diameter-to-length (aspect) ratio $`a`$, based on the general analogy with solar-type flares. To the decay of Flare I, we apply a quasi-static cooling model (van den Oord & Mewe 1989), in which the hot plasma cools quasi-statically as a result of radiative (90%) and conductive (10%) losses (see the detailed comment in Appendix §1). If the cooling is truly quasi-static, $`T^{13/4}/EM`$ remains constant during the decay (where $`T`$ is the plasma temperature and $`EM`$ is the emission measure). We find for the three successive bins (phases 1–3) of Flare I that $`(T/10^7\mathrm{K})^{13/4}/(EM/10^{54}\mathrm{cm}^3)=61\pm 55,59\pm 42,25\pm 18`$, which are not inconsistent with a constant value, taking into account the large error bars. Then, fitting our data with the quasi-static model, we find satisfactory values of $`\chi _{red}^2`$ for the count rates, temperature, and emission measure (see panel 1 in Table 2). We conclude that our hypothesis of quasi-static cooling is well verified; an underlying quiescent coronal emission is not required to obtain a good fit. The best fits are shown by solid lines in Figure 2. From the peak values of $`T`$ and $`EM`$, and the quasi-static radiative time scale, we derived the loop parameters listed in Table 3. The detailed procedure is given in the Appendix. The aspect ratio $`a`$ of 0.07 is within the range for solar active-region loops ($`0.06\lesssim a_{\odot }\lesssim 0.2`$, Golub et al. 1980), which supports the assumed solar analogy. #### 4.1.2 Flares II and III The appearance of Flare II can be interpreted either by the reheating of the plasma cooled during Flare I, or by the heating of a distinct plasma volume, independent of that of Flare I. In the latter case, the lightcurve and $`EM`$ would be the sum of the Flare I component and a new flare component.
The $`EM`$ of the new flare component can be derived by subtracting the component extrapolated from Flare I. The derived $`EM`$ are 1.6$`\pm `$0.4 (phase 4), 2.5$`\pm `$0.8 (phase 5), and 1.9$`\pm `$1.5 (phase 6). We then fitted the lightcurve and $`EM`$ with the quasi-static model under each of the above two assumptions. However, in both cases the model did not reproduce the lightcurve and $`EM`$ simultaneously, so the quasi-static cooling model cannot be adopted for any part of the triple flare except Flare I. For simplicity, we fitted the parameters in Flare II with an exponential model. The best-fit parameters are shown in Table 2, and the models for the reheating and the distinct-flare assumptions are shown by the dashed and the dotted lines in Figure 2, respectively. The obtained $`\chi _{red}^2`$ and the timescales for the lightcurve and $`EM`$ were similar between the two assumptions. Therefore the two possibilities cannot be discriminated. As for $`EM`$, both fits show no decay, or a very long decay time, which is not seen in usual solar flares. This constancy of $`EM`$ makes the quasi-static model unacceptable. Since we cannot derive the aspect ratio of Flare II by fitting with the quasi-static cooling model, we deduced the plasma parameters shown in Table 3 assuming the aspect ratio derived for Flare I and that radiative cooling is dominant. Here, we derived the values assuming the reheating scenario (the former assumption). The results show that the plasma density and the volume remain roughly constant from Flare I. This supports the idea that Flare II resulted from reheating of the plasma created by Flare I. As for Flare III, because of the poor statistics and the short observed period, fits to the lightcurve with the exponential model give no constraint distinguishing the above two possibilities. The results are shown in Table 2 and Figure 2. We derived the plasma parameters of Flare III in the same way as for Flare II, as shown in Table 3. These values are similar to those in Flares I and II. All these results are consistent with the scenario that quasi-periodic reheating produces the triple flare. The heating interval is $`\sim `$20 hours. The loop size is approximately constant through the three flares and is as large as $`\sim `$14 $`R_{\odot }`$. The periodicity and the large-scale magnetic structure support a scenario in which an interaction between the star and the disk, driven by differential rotation, reheated the same loop periodically (e.g., Hayashi, Shibata, & Matsumoto 1996). The details will be presented in Paper II (Montmerle et al. 1999). ### 4.2. Comparison with Other Flares Among YSOs, TTSs have been known for strong X-ray time variability since the $`Einstein`$ Observatory discovered it (see §1). At any given moment, 5–10% are caught in a high-amplitude flare with timescales of hours (Feigelson & Montmerle 1999). The most recent example is that of V773 Tau, which exhibits day-long flares with $`L_{X,peak}\approx `$ 2–10 $`\times 10^{32}`$ erg s<sup>-1</sup> and very high temperatures of $`10^8`$ K (Tsuboi et al. 1998). Other examples of bright TTS X-ray flares are P1724, a WTTS in Orion ($`L_{X,peak}\approx 2\times 10^{33}`$ erg s<sup>-1</sup>; Preibisch, Neuhäuser, & Alcalá 1995), and LkH$`\alpha `$92, a CTTS in Taurus (Preibisch, Zinnecker, & Schmitt 1993). These X-ray properties resemble those of RS CVn systems. Recently, a dozen protostars have been detected in X-rays, and four of those showed evidence for flaring (R CrA core; Koyama et al. 1996, EL29; Kamata et al. 1997, YLW15; Grosso et al.
1997, and SSV63 E+W; Ozawa et al. 1999). In the “superflare” of YLW15 (Grosso et al. 1997), an enormous X-ray luminosity was recorded during a few hours. If we adopt the absorption we derived ($`N_H=`$ 3$`\times 10^{22}`$ cm<sup>-2</sup>), the absorption-corrected X-ray luminosity in the 0.1–100 keV band is $`L_{X,peak}\approx 10^{34}`$ erg s<sup>-1</sup>. The “triple flare” we detected in this observation does not reach the same level as the “superflare”: $`L_{X,peak}=`$ 5–20 $`\times 10^{31}`$ erg s<sup>-1</sup> in the same X-ray band. To compare the flare properties of our triple flare from YLW15 with those of other flare sources, including RS CVns, we selected the bright flare sources listed in Table 4. All the flare sources have a well-determined temperature, obtained using the wide energy bands of the $`Tenma`$, $`Ginga`$, and $`ASCA`$ satellites. Since the samples of YSO flares were few, we added two TTS flares: the flares on ROXs21 and SR24N detected in our observations (see §2 and §3). We analyzed them using GIS data. The densities and volumes for all the sources were derived assuming that radiative cooling is dominant. As a result, we found that, although less energetic than the “superflare”, the triple flare, with total energies in excess of 3–6 $`\times 10^{36}`$ ergs, is at the high end of the energy distribution for protostellar flares. While the plasma densities, temperatures, and the derived equipartition magnetic fields are typical of stellar X-ray flares, the emitting volume is huge; it exceeds those in RS CVn binary systems by a few orders of magnitude. ## 5. Summary and Conclusions In the course of a long exposure of the $`\rho `$ Oph cloud with ASCA, we found evidence for a ‘triple flare’ from the Class I protostar YLW15. This triple flare is the first example of its kind; it shows an approximate periodicity of $`\sim `$20 hours. Each event shows a clear decrease in temperature, followed by reheating, with $`kT_{peak}\approx `$ 4–6 keV and luminosity $`L_{X,peak}\approx `$ 5–20 $`\times 10^{31}`$ erg s<sup>-1</sup>. Apart from the periodicity, these characteristics place the flares among the brightest X-ray detections of Class I protostars. A fit with the quasi-static model (VM), which is based on solar flares, reproduces the first flare well, and suggests that the plasma cools mainly radiatively, having a semi-circular shape with length $`\sim 14R_{\odot }`$ (radius of the circle $`R\approx 4.5R_{\odot }`$) and aspect ratio $`\approx 0.07`$. The minimum value of the confining field is $`B\approx 150`$ G. The two subsequent flares are weaker than the first one but are consistent with reheating of basically the same magnetic structure as in the first flare. The plasma volume is huge: a few orders of magnitude larger than in typical RS CVn flares. The fact that the X-ray flaring is periodic suggests that the cause of the heating is periodic, hence linked with rotation in the inner parts of the protostar. The large size of the magnetic structure and the periodicity support the scenario that the flaring episode originated in a star-disk interaction; differential rotation between the star and the disk would amplify and release the magnetic energy in one rotation period or less, reheating the flare loop as observed in the second and third flares. The authors thank all the members of the $`ASCA`$ team whose efforts made this observation and data analysis possible. We also thank Prof. Eric D. Feigelson, Mr. Kenji Hamaguchi, Dr. Mitsuru Hayashi, Dr. Antonio Magazzu, Mr. Michael S. Sipior, and Prof.
Kazunari Shibata for many useful comments in the course of this work. Y.T. acknowledges the award of a Research Fellowship of the Japan Society for Young Scientists. ## APPENDIX <br>Determination of the Physical Parameters of the Flares ### 1. Loop Parameters We make use of the general treatment for solar-type flares put forward by van den Oord & Mewe (1989, hereafter VM). The decrease in the thermal energy of the cooling plasma is assumed to be caused by radiative ($`\tau _\mathrm{r}`$) and conductive ($`\tau _\mathrm{c}`$) losses: the effective decay time is thus given by $`1/\tau _{\mathrm{eff}}=1/\tau _\mathrm{r}+1/\tau _\mathrm{c}`$. VM assumed that the flare lightcurve and temperature decrease exponentially with decay times $`\tau _\mathrm{d}`$ and $`\tau _\mathrm{T}`$, respectively. The effective cooling time is related to the observed time scales by $`1/\tau _{\mathrm{eff}}=7/(8\tau _\mathrm{T})+1/(2\tau _\mathrm{d})`$. We assume here that the flare occurs in only one semi-circular loop with a constant cross section along the flux tube, radius $`R`$ ($`R=L/\pi `$; length $`L`$), diameter-to-length ratio $`a`$, and volume $`V`$. VM give an expression for $`L`$ versus $`a`$, depending on $`\tau _{\mathrm{eff}}`$, the temperature, the emission measure (hereafter $`EM`$), and the ratio $`\tau _\mathrm{r}/\tau _\mathrm{c}`$. Because of the assumed exponential behavior of the lightcurve and temperature, the moment at which this expression is applied is not important. The only restriction is that both the temperature and the $`EM`$ have started to decrease. Due to the low statistics, we have only time-sliced values of the temperature and the $`EM`$. Calling $`t_i`$ ($`t_f`$) the beginning (end) of the time interval within which the temperature was estimated, we have $`\overline{T}=\int _{t_i}^{t_f}T(t^{\prime })dt^{\prime }/(t_f-t_i)`$. We use this relation to find the behavior of the temperature as a function of time. We now have a relation between $`L`$ and $`a`$, but the ratio $`\tau _\mathrm{r}/\tau _\mathrm{c}`$ is unknown and, even worse, may change during the decay of the flare. An exception is when the flare volume cools quasi-statically, evolving through a sequence of quasi-static equilibria, where the loop satisfies scaling laws and where $`\tau _\mathrm{r}/\tau _\mathrm{c}=cst`$. Due to the dependence of $`\tau _\mathrm{r}`$ and $`\tau _\mathrm{c}`$ on the temperature and the $`EM`$, $`\tau _\mathrm{r}/\tau _\mathrm{c}=cst`$ implies $`T^{13/4}/EM=cst`$. VM give in that case analytical expressions for several physical quantities versus time, all depending on the quasi-static radiative time scale ($`\tau _{\mathrm{r},\mathrm{qs}}`$), which can be estimated from the lightcurve, the latter being proportional to the radiative loss. The temperature and the $`EM`$ can be fitted as described above using this value of $`\tau _{\mathrm{r},\mathrm{qs}}`$, and give the peak values $`kT_{\mathrm{p},\mathrm{qs}}`$, $`EM_{\mathrm{p},\mathrm{qs}}`$ (see Col. – in our Table 2, panel 1 for details). Using an expression for the radiative time (eq. \[23a\] of VM), the $`EM`$ (eq. of VM) and the scaling law (SL, eq.
of VM), we obtained the loop characteristics for the quasi-static model: $$a=1.38\times (\tau _{\mathrm{r},\mathrm{qs}}/10\mathrm{ks})^{-1/2}\times (kT_{\mathrm{p},\mathrm{qs}}/\mathrm{keV})^{-33/16}\times (EM_{\mathrm{p},\mathrm{qs}}/10^{54}\mathrm{cm}^{-3})^{1/2},$$ (1) $$L=1.0R_{\odot }\times (\tau _{\mathrm{r},\mathrm{qs}}/10\mathrm{ks})\times (kT_{\mathrm{p},\mathrm{qs}}/\mathrm{keV})^{7/8},$$ (2) $$n_e=4.4\times 10^{10}\mathrm{cm}^{-3}\times (\tau _{\mathrm{r},\mathrm{qs}}/10\mathrm{ks})^{-1}\times (kT_{\mathrm{p},\mathrm{qs}}/\mathrm{keV})^{3/4}.$$ (3) Using an expression for the ratio $`\tau _\mathrm{r}/\tau _\mathrm{c}`$ on page 252 of VM, we found $`\tau _\mathrm{r}/\tau _\mathrm{c}=\mu \times f`$, with the parameter $`\mu `$ depending only on the exponent of the temperature in the expression for the radiative loss (for temperatures above 20 MK, $`\mu =0.18`$), and the multiplicative function $`f`$ coming from the expression for the mean conductive energy loss (formula of VM), which is equal to 4/7 for a loop with a constant cross section. Thus, $`\tau _\mathrm{r}/\tau _\mathrm{c}=0.1`$. <sup>1</sup><sup>1</sup>1VM wrote $`\tau _\mathrm{r}/\tau _\mathrm{c}=\mu =0.18`$, and the analytical expression for the conductive energy loss in the quasi-static model without taking this multiplicative factor $`4/7`$ into account (see Table 5 of VM). In other words, the quasi-static model implies that 91% of the lost energy is radiated away: radiation is the dominant energy loss process. Assuming that the cooling is only radiative, we used the following simplified relations based on the exponential decay of the lightcurve, the temperature, and the $`EM`$: $$n_e=4.4\times 10^{10}\mathrm{cm}^{-3}\times (\tau _\mathrm{d}/10\mathrm{ks})^{-1}\times (kT_\mathrm{p}/\mathrm{keV})^{3/4},\mathrm{for}kT>2\mathrm{keV},$$ (4) $$L=7.4R_{\odot }\times (a/0.07)^{-2/3}\times (\tau _\mathrm{d}/10\mathrm{ks})^{2/3}\times (kT_\mathrm{p}/\mathrm{keV})^{-1/2}\times (EM_\mathrm{p}/10^{54}\mathrm{cm}^{-3})^{1/3},$$ (5) with $`kT_\mathrm{p}`$ ($`EM_\mathrm{p}`$) the peak value of the temperature ($`EM`$). ### 2. Magnetic Field Assuming equipartition between the magnetic pressure $`B^2/8\pi `$ and the ionized gas pressure $`2n_ekT`$, we can obtain a minimum value of the magnetic field confining the emitting plasma using: $$B=28.4\mathrm{G}\times (n_e/10^{10}\mathrm{cm}^{-3})^{1/2}\times (kT/\mathrm{keV})^{1/2}$$ (6) ### 3. Released Energy To estimate the energy released by the flare during its cooling phase, we need the peak luminosity of the flare. As the lightcurve must be proportional to the intrinsic luminosity, we fit the time-averaged intrinsic luminosities in the 0.1–100 keV band (given in Table 1) with the same model used for the lightcurve fitting. This gives the peak luminosity, $`L_{\mathrm{X},\mathrm{peak}}`$, and a characteristic decay time $`\tau `$. Thus, the total energy released by the flare is: $$E_{\mathrm{tot}}\simeq 10^{35}\mathrm{erg}\times (L_{\mathrm{X},\mathrm{peak}}/10^{32}\mathrm{erg}\mathrm{s}^{-1})\times (\tau /\mathrm{ks})$$ (7)
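The scaling relations above are straightforward to script. The following Python sketch implements eqs. (1)–(7) as written; the input numbers in the usage lines are purely illustrative and are not the fitted values of Tables 2–3.

```python
import numpy as np

R_SUN = 6.96e10  # solar radius [cm]

def quasi_static_loop(tau_rqs_ks, kT_p_keV, EM_p_1e54):
    """Aspect ratio, loop length, and density from the quasi-static
    scalings, eqs. (1)-(3)."""
    t, T, em = tau_rqs_ks / 10.0, kT_p_keV, EM_p_1e54
    a = 1.38 * t**-0.5 * T**(-33.0 / 16.0) * em**0.5
    L = 1.0 * R_SUN * t * T**(7.0 / 8.0)
    n_e = 4.4e10 / t * T**0.75
    return a, L, n_e

def radiative_loop(tau_d_ks, kT_p_keV, EM_p_1e54, a=0.07):
    """Density and loop length for purely radiative cooling,
    eqs. (4)-(5); valid for kT > 2 keV."""
    t, T, em = tau_d_ks / 10.0, kT_p_keV, EM_p_1e54
    n_e = 4.4e10 / t * T**0.75
    L = 7.4 * R_SUN * (a / 0.07)**(-2.0 / 3.0) * t**(2.0 / 3.0) * T**-0.5 * em**(1.0 / 3.0)
    return n_e, L

def b_min(n_e, kT_keV):
    """Minimum confining field from equipartition, eq. (6) [Gauss]."""
    return 28.4 * np.sqrt(n_e / 1e10 * kT_keV)

def e_tot(L_x_peak, tau_ks):
    """Total energy radiated during the decay, eq. (7) [erg]."""
    return 1e35 * (L_x_peak / 1e32) * tau_ks

# Illustrative inputs (NOT the fitted values of Tables 2-3):
n_e, L = radiative_loop(tau_d_ks=10.0, kT_p_keV=5.0, EM_p_1e54=2.0)
print(L / R_SUN, n_e, b_min(n_e, 5.0), e_tot(1e32, 10.0))
```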
no-problem/9911/cond-mat9911249.html
# Applications of the Stell-Hemmer Potential to Understanding Second Critical Points in Real Systems ## Abstract We consider the novel properties of the Stell-Hemmer core-softened potentials. First we explore how the theoretically predicted second critical point for these potentials is related to the occurrence of the experimentally observed solid-solid isostructural critical point. We then discuss how this class of potentials can generate anomalies analogous to those found experimentally in liquid water. PACS numbers: 61.20.Gy, 61.25.Em, 65.70.+y, 64.70.Ja Simple liquids are often modeled by pairwise potentials possessing a repulsive core, mimicking the impenetrability of atoms or molecules, and an attractive part, responsible for the gas-liquid transition. For a fluid in which the interaction potential $`\varphi (r)`$ has a hard core plus an attractive part, a “softened” hard core (Fig. 1) can produce an additional line of phase transitions. A general argument by Stell and Hemmer, based on the symmetry between occupied and unoccupied cells in lattice gas models of fluids, predicts that the additional line of phase transitions can end at a novel critical point. An explicit example of the occurrence of a second line of first-order transitions was given for a one-dimensional continuum model of a fluid with long-range attraction; this first-order line can end in a critical point, depending on the details of the core-softened potential. The general result is that if the repulsive part (core) of the interaction potential has a concave region (which makes it “core-softened”), then it is likely that such a novel transition occurs. Stell and Hemmer relate the occurrence of a high-density, low-temperature critical point to the known isostructural solid-solid critical point observed in some experiments. To understand the occurrence of a second transition, consider the Gibbs potential at zero temperature. The energy $`U`$ as a function of the volume $`V`$ should have the same “core-softened” shape (i.e., possess a region where it is concave) as the inter-particle potential. The stable phase is then determined by the Gibbs potential $$G(P,T)=\underset{V}{\mathrm{min}}\{U+PV-TS\},$$ (1) where $`S`$ is the entropy. The function being minimized in (1) is shown in Fig. 2 as a function of $`V`$, for $`T=0`$ and for different values of the pressure $`P`$. At low $`P`$, the stable phase of the system has a specific volume at which the average inter-particle distance is near the minimum of the inter-particle potential. The concavity in $`U`$ assures that, on increasing $`P`$, an additional minimum appears in $`U+PV-TS`$. For high enough $`P`$, this minimum will become the lowest one, and the stable phase of the system will be the one for which the mean inter-particle distance is “inside” the softened part of the core. For one-dimensional models, a first-order transition at $`T=0`$ between a dense and an open phase occurs at a pressure that can be determined exactly with the graphical construction of Fig. 2. For short-range interactions, the entropy gives a huge contribution in one dimension, making any phase transition disappear for $`T>0`$. Hence, the contribution of the entropic term $`TS`$ makes the double-well structure of Fig. 2 disappear when $`T>0`$. This may not be true in higher dimensions, and so a line of first-order transitions (eventually ending in a critical point) could be present for $`T>0`$.
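As a concrete illustration of the graphical construction of Fig. 2, the following Python sketch minimizes $`U+PV`$ over $`V`$ at $`T=0`$ for a toy double-well $`U(V)`$; the functional form and the numbers are invented for illustration and do not represent the actual Stell-Hemmer potential.

```python
import numpy as np

# Toy zero-temperature energy U(V) with two convex wells separated by a
# concave ("core-softened") region; invented numbers, for illustration only.
V = np.linspace(0.5, 2.0, 2001)
U = (V - 0.8)**2 * (V - 1.6)**2 - 0.1 * V

def stable_volume(P):
    """Graphical construction of Fig. 2: minimize U + P*V over V at T = 0."""
    return V[np.argmin(U + P * V)]

for P in (0.0, 0.05, 0.10, 0.15, 0.20):
    print(f"P = {P:4.2f}  ->  stable V = {stable_volume(P):.3f}")
# The stable volume jumps discontinuously from the open (large-V) well to the
# dense (small-V) well near P ~ 0.1: a first-order transition.
```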
Stronger evidence for the occurrence of a high-density, low-temperature critical point was obtained by developing and refining analytic methods to investigate the high-density region of the phase diagram of a fluid. The methods used are in principle for a dense fluid, and hence would predict a liquid-liquid transition. However, the second critical point can be related to the isostructural critical point occurring in the solid phase of materials such as Cs and Ce and of mixtures such as Sm-S and Ce-Th. For all these materials the shape of the effective pair potential is “core-softened”. Many liquid metals (Ga and Sn are prominent examples) have static structure factors $`𝒮(k)`$ that show weak subsidiary maxima, or asymmetries, in the main peak of $`𝒮(k)`$, which suggest the presence of a “structured” core that is not infinitely steep. First-principles calculation of the effective ion-ion potential for Ga leads to a core-softened potential. Monte Carlo simulations with this potential reproduce the observed anomalies in $`𝒮(k)`$ for Ga. Inversion of the experimental structure factors for In, Zn, Al, Ge, Sn, Cs, Rb, Tl, and Pb, using the random phase approximation or the Ornstein-Zernike equation with a closure, also results in effective core-softened potentials (Fig. 3). In addition to the second critical point, core-softened potentials can produce a density anomaly, i.e., the material can expand upon cooling. The occurrence of crossing isotherms was observed in the model. It was noted that, although isotherm crossing is not a common feature in fluids, it is to be found whenever $`(\partial V/\partial T)_P`$ changes sign, as it does in water at approximately 4°C and atmospheric pressure. Using thermodynamic arguments, Debenedetti et al. also noted that a “softened core” can cause the thermal expansion coefficient $`\alpha _P\equiv (1/V)(\partial V/\partial T)_P`$ to be negative. One-dimensional fluids with core-softened potentials have been studied in relation to the molecular origin of the negative thermal expansion in two fluids, water and tellurium, which have effective core-softened potentials. Conversely, various lines of reasoning led to the introduction of phenomenological potentials for water that are core-softened. The inversion of the oxygen-oxygen radial distribution function $`g_{oo}(r)`$ for water gives an effective potential $`\varphi `$ (Fig. 3) that is core-softened. Thus core-softened potentials can be considered as zeroth-order models for water. It is natural to expect that a core-softened potential can induce a density anomaly. In a liquid, the typical inter-particle distance is distributed inside the attractive part of the potential. As temperature decreases, the distribution peaks around the minimum of the potential, causing the system to expand (Fig. 4). In addition to the density anomaly, further studies of core-softened one-dimensional fluids have found anomalies in the isothermal compressibility and specific heat response functions analogous to those in water. These anomalies are related to the existence of a critical point at $`T=0`$ and high density, similar to what is conjectured to occur in real water. The region where the liquid has a density anomaly must have an upper boundary in the P-T plane; this boundary defines the line of density maxima ($`T_{\text{Md}}`$).
The occurrence of this boundary can be understood by first recalling that $$\alpha _P\propto \langle \delta V\delta S\rangle \propto \left(P\langle \delta V^2\rangle +\langle \delta V\delta E\rangle \right).$$ (2) When $`\alpha _P`$ is negative, the term $`\langle \delta V\delta E\rangle `$ must also be negative; the fluctuations that give a negative sign to $`\langle \delta V\delta E\rangle `$ correspond to the regions of the fluid where particles penetrate the softened part of the core. However, the $`\alpha _P`$ anomaly must vanish at high enough pressures, where the positive $`P\langle \delta V^2\rangle `$ term dominates. Core-softened potentials with no attractive part and no liquid-gas transition were studied in 2 and 3 dimensions by Young and Alder, giving a P-T phase diagram with a solid-fluid coexistence line similar to that of Ce or Cs. The fluid-solid coexistence line has a negatively sloped region, as in water. The phase diagram for a core-softened potential in 2d with no attractive part has been studied by Jagla, who explicitly finds a density anomaly in the fluid region. These results indicate that the softened part of the core can be solely responsible for the density anomaly, and suggest that it can be related to the existence of negatively sloped melting lines. The slope of a melting line is related through the Clausius-Clapeyron equation $`dP/dT=\mathrm{\Delta }S/\mathrm{\Delta }V`$ to the difference $`\mathrm{\Delta }S`$ in the entropies and $`\mathrm{\Delta }V`$ in the volumes between the fluid and the solid. On the other hand, the sign of the coefficient of thermal expansion $`\alpha _P\propto \langle \delta S\delta V\rangle `$ depends on the cross-correlation between entropy and volume fluctuations. Near a melting line, one expects the relevant fluctuations in a liquid to be “solid-like”, as they trigger the nucleation process leading to the first-order liquid-solid transition (Fig. 5). This means that the sign of $`\alpha _P\propto \langle \delta V\delta S\rangle `$ will likely be the same as that of $`dP/dT=\mathrm{\Delta }S/\mathrm{\Delta }V`$. Extensive studies of core-softened potentials in two dimensions via molecular dynamics simulations reveal a phase diagram (Fig. 6) similar to that of water (Fig. 7). Near the negatively sloped part of the liquid-solid freezing line ($`\mathrm{\Delta }S/\mathrm{\Delta }V<0`$), the liquid exhibits a density anomaly ($`\alpha _P\propto \langle \delta S\delta V\rangle <0`$). In agreement with the thermodynamic considerations above, the model also reproduces the existence of a region where the isothermal compressibility grows anomalously upon cooling, in the same way as in water. Moreover, the model succeeds in reproducing anomalies not only in the statics but also in the dynamics: there is a region in which the diffusion constant anomalously increases with pressure, as in water. The locus of points where the diffusivity has a maximum upon varying pressure defines the pressure of maximum diffusivity line ($`P_{\text{MD}}`$), which has been observed in water, in core-softened models, and in SPC-E simulated water; in all these cases the $`P_{\text{MD}}`$ line occurs at higher pressures than the $`T_{\text{Md}}`$ line. The joint occurrence of density and diffusivity anomalies is also observed in two-dimensional simulations of the Gaussian-core model, which, although not possessing a hard core, has a concave region in the repulsive part of the potential. Theories relating diffusivity to entropic contributions would predict the occurrence of an anomaly in the diffusion, $`(\partial D/\partial P)_T>0`$, to be related to an anomaly in the entropy, $`(\partial S/\partial P)_T>0`$.
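The sign criterion of eq. (2) can be checked directly on constant-pressure samples. The following Python sketch estimates the sign of $`\alpha _P`$ from synthetic, entirely fabricated volume and energy fluctuations; it only illustrates how sufficiently anticorrelated $`\delta V`$ and $`\delta E`$ make $`\alpha _P`$ negative.

```python
import numpy as np

def alpha_p_sign(V, E, P):
    """Sign of alpha_P from constant-pressure fluctuations, following eq. (2):
    alpha_P ~ P<dV^2> + <dV dE> (angle brackets = ensemble averages)."""
    dV, dE = V - V.mean(), E - E.mean()
    return P * np.mean(dV * dV) + np.mean(dV * dE)

# Synthetic (fake) constant-pressure samples in which volume and energy
# fluctuations are anti-correlated strongly enough that <dV dE> overcomes
# P<dV^2>, giving alpha_P < 0, i.e. a density anomaly.
rng = np.random.default_rng(0)
dv = rng.normal(0.0, 1.0, 100_000)
V = 100.0 + dv
E = -50.0 - 2.0 * dv + rng.normal(0.0, 0.5, 100_000)
print(alpha_p_sign(V, E, P=1.0))   # ~ 1*<dV^2> - 2*<dV^2> < 0
```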
On the other hand, whenever there is a density anomaly, an entropy anomaly occurs, as the entropy reaches a maximum along isotherms on the $`T_{\text{Md}}`$ line. This is a consequence of the Maxwell relation $`(\partial S/\partial P)_T=-(\partial V/\partial T)_P`$. In conclusion, core-softened potentials are simple realistic potentials that can model complex fluid behavior. In addition to the well-known liquid-gas transition, an analysis of the shape of the free energies at low temperatures reveals the presence of a second transition, which can be interpreted either as a solid-solid or as a liquid-liquid transition. Simulations indicate that if a liquid-liquid critical point exists, it is likely to be in the region of the phase diagram where the liquid is metastable, at least for core-softened potentials in two dimensions. The presence of a liquid-liquid critical point provides one explanation of how anomalies in the response functions for core-softened potentials could occur, although other scenarios are also possible. Although core-softened potentials are capable of exhibiting most of the anomalies present in liquid water, most materials that are known to have an effective core-softened potential have not been studied as extensively as water, and thus the presence of anomalies in them is still an open question. Moreover, the relationship between anomalies of static (e.g., entropy) and dynamic (e.g., diffusivity) quantities is still an open issue that can be explored using core-softened potentials, possibly within the framework of existing theories. We thank M. Canpolat, E. La Nave, M. Meyer, S. Sastry, F. Sciortino, A. Skibinsky, R. J. Speedy, F. W. Starr, G. S. Stell and D. Wolf for enlightening discussions, and NSF for support.
no-problem/9911/quant-ph9911014.html
# First-order interference of nonclassical light emitted spontaneously at different times ## Abstract We study first-order interference in spontaneous parametric down-conversion generated by two pump pulses that do not overlap in time. The observed modulation in the angular distribution of the signal detector counting rate can only be explained in terms of a quantum mechanical description based on biphoton states. The condition for observing interference in the signal channel is shown to depend on the parameters of the idler radiation. Nonclassical interference is one of the most remarkable phenomena of quantum optics. In particular, it can be observed in experiments with spontaneous parametric down-conversion (SPDC), a nonlinear optical process in which higher-energy pump photons are converted into pairs of lower-energy photons (usually called signal and idler) inside a crystal with quadratic nonlinearity. It has been shown that the state of the signal-idler photon pair is entangled in space-time, polarization, or both. Due to the nonclassical correlation between the signal and the idler photons emitted in SPDC, the term “biphoton” has been suggested. Many experiments have made use of SPDC to demonstrate fascinating topics in quantum optics, such as the violation of Bell’s inequalities, quantum communication, and quantum teleportation. All these experiments belong to basically the same category: quantum interference. The existence of SPDC interference enables one to monitor the structure of biphoton fields. This effect can be used in quantum communication, computation, and cryptography. Among the variety of interference experiments, there is a large group of works where first-order interference is observed in signal or idler SPDC radiation emitted from spatially separated domains. This kind of interference is nonclassical. Indeed, each of the signal and idler beams has noise (thermal) statistics; from a classical viewpoint, spatially separated SPDC sources should exhibit no interference pattern in the signal or idler beams. One cannot observe stable first-order interference of light emitted by independent classical thermal sources such as, for instance, two similar light-emitting diodes. To explain the interference observed for SPDC radiation, one should take into account that each signal or idler photon is generated not from some particular point but from the whole volume with quadratic nonlinearity, pumped by a coherent pump. And indeed, in this case one cannot tell whether a photon was born at one point or another: the interference is observed in agreement with Feynman’s indistinguishability criterion. Another interesting feature of the first-order interference of SPDC is that it depends on the phases of the pump, signal, and idler waves, and one can speak of “three-frequency” interference. Moreover, the condition for the interference to be observed in, say, the signal radiation depends on the parameters of the idler radiation. In this Letter, we report an experimental observation of a novel type of quantum interference, in which the sources of two-photon radiation are separated not in space but in time, as SPDC is generated from a train of coherent femtosecond pulses.
Depending on the experimental conditions, interference can be observed either in the angular distribution of the signal intensity (first-order interference, measured by a single-detector method) or in the coincidences of photocounts from the detectors registering the signal and the idler radiation (second-order interference, measured by a two-detector method). In this paper, we focus on the first-order interference; second-order interference effects will be discussed elsewhere. Let us consider the type-II SPDC field generated in a crystal of length $`L`$ from a train of two short pump pulses. The signal radiation is separated from the idler one (Fig. 1) and its intensity is measured by a detector that selects a sufficiently narrow frequency band and a sufficiently small solid angle. (The meaning of the words ‘sufficiently small’ will be clear from further consideration.) The pump field dependence on time can be represented in the form $$E_p(r,t)=\stackrel{~}{E}(t-z/u_p)\mathrm{exp}(-i\omega _pt+ik_pz),$$ (1) where $`u_p`$ is the pump group velocity, $`\stackrel{~}{E}(t)`$ is the envelope, and $`\omega _p`$ is the pump central frequency. It is supposed that the spectral band of the pump is much narrower than $`\omega _p`$ (the quasi-monochromatic case). If the pump consists of two identical pulses separated in time by $`T_p`$, then $`\stackrel{~}{E}(t)=E_0(t)+E_0(t+T_p)\mathrm{exp}(i\omega _pT_p)`$, where $`E_0(t)`$ is the single-pulse envelope. In first-order perturbation theory, the quantum state of SPDC is given by $$|\mathrm{\Psi }\rangle =|vac\rangle +\underset{k_s,k_i}{\mathrm{\Sigma }}F_{k_s,k_i}|1_{k_s}\rangle |1_{k_i}\rangle ,$$ (2) where $`F_{k_s,k_i}`$ is the two-photon spectral function, $$F_{k_s,k_i}=\frac{i\chi }{\hbar }\int _{t_0}^tdt^{\prime }\int _Vd^3r\stackrel{~}{E}(t^{\prime }-z/u_p)\mathrm{exp}\{i(\omega _p-\omega _s-\omega _i)t^{\prime }+i(𝐤_p-𝐤_s-𝐤_i)\cdot 𝐫\},$$ (4) and the notation $`|1_{k_s}\rangle |1_{k_i}\rangle `$ means a two-photon state in the modes $`𝐤_s,𝐤_i`$. Since the pump pulse is bounded in time, the integration over $`t^{\prime }`$ can be extended to infinite limits. As a result, it gives the Fourier transform of the pump envelope, $`\stackrel{~}{E}(\omega _s+\omega _i-\omega _p)`$, which in the cw case becomes a $`\delta `$-function, $`\delta (\omega _s+\omega _i-\omega _p)`$. We obtain $$F_{k_s,k_i}=\frac{2\pi i\chi }{\hbar }\stackrel{~}{E}(\omega _s+\omega _i-\omega _p)\delta (k_{sx}+k_{ix})\text{sinc}\left\{\frac{L}{2}\left(k_p-k_{sz}-k_{iz}+\frac{\omega _s+\omega _i-\omega _p}{u_p}\right)\right\},$$ (6) where, for example, $`k_{sx}`$ and $`k_{sz}`$ are the transverse and longitudinal components of the signal wavevector, respectively. For a two-pulse pump, the envelope spectrum is $`\stackrel{~}{E}(\omega )=E_0(\omega )\text{cos}\left\{(\omega -\omega _p)\frac{T_p}{2}\right\}`$, with $`E_0(\omega )`$ denoting the Fourier transform of the single-pulse envelope. The probability of detecting a biphoton is $`P_c=|F_{k_s,k_i}|^2`$. Since we are dealing with a two-photon state, the probability $`P_s`$ of a photocount from the signal detector is calculated by integrating $`P_c`$ over all idler modes.
Thus, the photon counting rate in the signal detector is $$R_s\propto \int dk_{iz}dk_{ix}|F_{k_s,k_i}|^2=\frac{4\pi ^2\chi ^2L^2}{\hbar ^2}\int dk_{iz}\left|E_0\left(\omega _s+\omega _i(k_i)-\omega _p\right)\right|^2\text{cos}^2\left\{(\omega _s+\omega _i(k_i)-\omega _p)\frac{T_p}{2}\right\}\text{sinc}^2\left\{\frac{L}{2}\left(k_p-k_{sz}-k_{iz}+\frac{\omega _s+\omega _i(k_i)-\omega _p}{u_p}\right)\right\},$$ (10) where $`\omega _i(k_i)`$ is the dispersion relation for the idler beam and $`k_{ix}=-k_s\text{cos}\theta _s`$, with $`\theta _s`$ denoting the signal angle of scattering. The cosine modulation in Eq. (10) will not be averaged out by the integration over $`k_{iz}`$ if the squared sinc function is much narrower than this modulation. In this case the squared sinc acts as a delta-function in the integral; thus $$R_s(\omega _s,\theta _s)\propto \left|E_0\left(\omega _s+\omega _i(k_i)-\omega _p\right)\right|^2\text{cos}^2\left\{(\omega _s+\omega _i(k_i)-\omega _p)\frac{T_p}{2}\right\}.$$ (12) Clearly, Eq. (12) gives a modulated structure for the counting rate of the signal detector. This modulation can be observed in several ways. One can observe interference by varying $`\omega _s`$ or by varying $`T_p`$, which both enter the squared cosine in Eq. (12). In this work, however, we obtain the modulation by varying the signal angle of scattering. This is possible since $`k_i`$, and hence $`\omega _i(k_i)`$, actually depends on $`\theta _s`$. This can be shown by expanding $`\omega _i`$ in the vicinity of $`\omega _p/2`$. The sinc-squared function in the integral (10) is much narrower than the cosine modulation if $$\frac{1}{d\omega _i/dk_{iz}}\gg \frac{T_p}{L}\frac{1}{\left(-1+\frac{1}{u_p}\frac{d\omega _i}{dk_{iz}}\right)}.$$ (13) For near-collinear scattering, we can take approximately $`d\omega _i/dk_{iz}=d\omega _i/dk_i\equiv u_i`$. Hence, we obtain the following condition for observing the first-order interference in SPDC from a two-pulse pump: $$Q\equiv \frac{L(u_p^{-1}-u_i^{-1})}{T_p}\gg 1.$$ (14) The other condition is the assumption we have used when obtaining Eq. (12): the signal detector should select a sufficiently narrow frequency band and a sufficiently small solid angle. Indeed, the interference structure will be wiped out if Eq. (12) is integrated over a broad band of signal frequencies or angles of scattering. Thus, the requirement on the frequency band of the signal detector is $$\mathrm{\Delta }\omega _s\ll \frac{\pi }{T_p}.$$ (15) In the experiment, the pump is the frequency-doubled radiation from a mode-locked Ti:Sapphire laser with initial central wavelength $`800`$ nm. After frequency doubling, the pulse duration is $`140`$ fsec, and the repetition rate is $`90`$ MHz. The pump pulse is then fed into the polarization pulse splitter consisting of two Glan prisms (G) and a set of quartz rods (QR) placed between the prisms (Fig. 1). The axes of the Glan prisms are parallel to the pump polarization. The ‘fast’ and ‘slow’ axes of the quartz rods lie in the plane normal to the pump beam and are directed at $`45^{\circ }`$ to the pump polarization.
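The modulation predicted by Eq. (12) is easy to visualize numerically. In the following Python sketch, the dependence of $`\omega _i`$ on $`\theta _s`$ is taken to be quadratic with an assumed coefficient, standing in for the actual BBO dispersion; only the qualitative fringe structure matters here.

```python
import numpy as np

# Sketch of the angular modulation of eq. (12) for a two-pulse pump.  The
# quadratic detuning d_omega(theta) = b*theta**2 and the value of b are
# illustrative assumptions, not the real BBO dispersion.
T_p   = 279e-15        # delay between the pump pulses [s]
tau_0 = 140e-15        # single-pulse duration [s]
b     = 5e15           # assumed curvature of omega_i(theta_s) [rad s^-1 rad^-2]

theta   = np.linspace(-0.06, 0.06, 1201)      # signal scattering angle [rad]
d_omega = b * theta**2                        # omega_s + omega_i(k_i) - omega_p
R_s = np.exp(-(d_omega * tau_0)**2) * np.cos(d_omega * T_p / 2.0)**2
# R_s(theta) shows the cos^2 fringes of eq. (12) under a Gaussian
# single-pulse spectral envelope |E_0|^2.
```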
Due to the birefringence of the quartz rods, at the output of the polarization splitter each pump pulse is transformed into two pulses that have equal amplitudes but are delayed in time with respect to one another by $`L_q(u_o^{-1}-u_e^{-1})`$, where $`L_q`$ is the total length of the quartz rods and $`u_o`$, $`u_e`$ are the group velocities of the ordinary and extraordinary waves in quartz at the pump wavelength ($`400`$ nm). The SPDC radiation is generated in a BBO crystal cut for collinear frequency-degenerate type-II phase matching. The SPDC radiation is separated from the pump radiation by means of a prism (P) and a pinhole. A polarizing beam splitter (PBS) separates the signal radiation from the idler one, and the idler beam is discarded. The detector (an avalanche diode operating in the Geiger mode) is placed in the focal plane of a lens (F = $`20`$ cm), so that the transverse displacement of the detector is proportional to the angle of scattering, $`x=F\theta _s`$. In front of the detector, we place one of three narrow-band filters (IF) with central wavelength $`\lambda _s=2\lambda _p=800`$ nm and bandwidths $`\mathrm{\Delta }\lambda _s=1,3`$, and $`10`$ nm, respectively, for the different measurements. The intensity of the signal radiation is measured as a function of the angle of scattering, $`I_s(\theta _s)`$. The angle is scanned by using a step motor (SM) that moves the detector in the focal plane of the lens. The parameters are the bandwidth of the filter, $`\mathrm{\Delta }\lambda _s`$, and the time delay between the two pump pulses, $`T_p`$, which is varied by using quartz rods with total lengths $`L_q=20`$, $`12.5`$, and $`7.5`$ mm, corresponding to the delays $`T_p=744`$, $`465`$, and $`279`$ fsec, respectively. The fact that each pair of pulses is actually repeated at a rate of 90 MHz leads to a fine structure in the single-counting distribution (12), which is much narrower than the bandwidth of the filters we use in our experiment; therefore, it is not observable. To test Eq. (14), we use a BBO crystal of length $`3`$ mm to generate SPDC. In Fig. 2, the intensity is plotted versus the detector displacement for all three delays $`T_p`$. All dependencies are obtained with the $`1`$ nm interference filter. Note that in this case condition (15) is satisfied for all three delays: $`\pi /\mathrm{\Delta }\omega _s\approx 2000`$ fs. However, the interference visibility in the three plots is different. The highest visibility of the interference pattern is observed for the smallest time interval between the pulses, $`279`$ fsec, with $`Q\approx 3`$ \[Fig. 2(a)\]. For the intermediate delay, $`465`$ fsec, the interference pattern is observed with lower visibility \[Fig. 2(b)\]; in this case, $`Q\approx 2`$. Evidently, Eq. (14) is not satisfied for the largest delay, $`744`$ fsec ($`Q\approx 1`$); therefore, in this case the interference structure completely vanishes \[Fig. 2(c)\]. For comparison, the angular spectrum of SPDC in the case of a single-pulse pump is shown in Fig. 2(d). In agreement with Eq. (12), the modulation period is larger for smaller delays. In all experimental plots, the positions of the oscillation peaks are determined by the delay introduced between the pump pulses, in perfect agreement with the theoretical calculation \[shown by arrows in Figs. 2(a),(b),(c)\]. However, the theory discussed above does not give an explicit description of the observed asymmetry of the angular spectral envelope. In this Letter, we only focus on the interference modulation; the shape of the angular spectrum envelope will be considered elsewhere.
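A back-of-the-envelope check of Eq. (14) for these three delays is given below; the group-velocity mismatch is an assumed round number chosen to reproduce the quoted $`Q`$ values, not a measured constant.

```python
# Estimate of Q = L(1/u_p - 1/u_i)/T_p (eq. 14) for the three delays.
# The BBO group-velocity mismatch between the 400 nm pump and the 800 nm
# idler is taken as ~300 fs/mm (an assumption).
L_mm, gvm_fs_per_mm = 3.0, 300.0
for T_p_fs in (279.0, 465.0, 744.0):
    Q = L_mm * gvm_fs_per_mm / T_p_fs
    print(f"T_p = {T_p_fs:5.0f} fs  ->  Q = {Q:.1f}")
# -> Q = 3.2, 1.9, 1.2, i.e. Q ~ 3, 2, 1 as quoted in the text
```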
The spectral width of the filter, $`\mathrm{\Delta }\lambda _s`$, has a strong influence on the interference pattern. Changing the filter bandwidth from $`1`$ nm to $`3`$ nm, we observed a considerable decrease of the visibility. At $`\mathrm{\Delta }\lambda _s=10`$ nm, no interference structure was observed, in agreement with condition (15). Let us give a physical interpretation of condition (14) for observing the first-order interference in SPDC signal radiation from a two-pulse pump. Note that although it is the signal radiation that one detects, Eq. (14) contains only the pump and the idler parameters. Considering only the signal radiation, it would seem that the only condition for the first-order interference to take place is Eq. (15), which states that the filter inserted in front of the signal detector should have a bandwidth smaller than the pump spectrum modulation. Indeed, if the signal photon wavepackets are spread in time by more than $`T_p`$, the signal photons born from different pump pulses are at first sight indistinguishable. However, it is worth remembering that the indistinguishability criterion should be understood as indistinguishability in principle. In principle, we could equip our setup with an idler detector with a broad-band filter and register photocounts from the idler detector (Fig. 3). Then, for each signal photon, detection of its twin idler photon is well localized in time with respect to the pump pulses, which could mean that we can always distinguish between a pair born from the first pulse and a pair born from the second pulse. Let us recall now that the BBO crystal has finite length $`L`$. Then a photocount in the idler detector can appear delayed from the corresponding pump pulse by any time $`0<t<\mathrm{\Delta }t_i`$, with $`\mathrm{\Delta }t_i=L(u_i^{-1}-u_p^{-1})`$ (Fig. 3). Idler photocounts from different pump pulses become indistinguishable if $`\mathrm{\Delta }t_i\gg T_p`$, and we obtain the second necessary condition for the interference, which is Eq. (14). There is an analogy between the first-order interference observed for SPDC generated from two spatially separated domains and for SPDC generated from two separate pump pulses. Indeed, condition (14) ensures that the crystal is long enough that an idler photon generated by the first pump pulse can meet the second pulse (see the Feynman diagram in Fig. 3). In the case of spatially separated SPDC sources, first-order interference is possible when the idler waves propagate through both spatial domains where SPDC takes place. Similarly to the spatially separated case, where the effect has a simple explanation in terms of the pump angular spectrum, here it can be explained by the cosine modulation of the pump frequency spectrum. Condition (14) has the following spectral interpretation: the typical scale of the pump spectrum modulation should be much larger than the width of the idler radiation spectrum, which is determined by the length of the crystal. In conclusion, we have demonstrated the first-order interference of nonclassical light generated from two pump pulses well separated in time. The interference is explained by a quantum mechanical calculation in terms of biphoton states. The interference pattern is observed in the angular distribution of the signal intensity. Interference takes place if the following condition is satisfied: the time indeterminacy of the delay between the idler photon and the corresponding pump pulse is much larger than the time interval between the pump pulses.
From the spectral viewpoint, this condition means that the modulation scale of the pump spectrum, determined by the distance between the pulses, should be much larger than the width of the idler radiation spectrum from a cw pump, which is determined by the crystal length. Thus, the interference visibility is sensitive to the crystal length. It is also sensitive to the spectral width of the narrow-band filter used for the frequency selection of the signal radiation. This work was supported in part by the Office of Naval Research and an ARO-NSA grant. MVC and SPK also acknowledge partial support from the Russian Foundation for Basic Research, grant No. 97-02-17498.
no-problem/9911/nucl-th9911032.html
# Constructing Effective Pair Wave Function from Relativistic Mean Field Theory with a Cutoff ## Acknowledgements We would like to thank Professor R. Tamagaki for useful comments.
no-problem/9911/astro-ph9911067.html
# ON THE AGE ESTIMATION OF LBDS 53W091 ## 1. Introduction Precise age estimates of high-redshift ($`z`$) galaxies directly constrain the epoch of galaxy formation, $`z_f`$, where $`z_f`$ is defined as the epoch when the majority of stars formed. Constraining $`z_f`$ is important to cosmology. For example, one of the key questions in modern cosmology has been whether the majority of stars in giant elliptical galaxies form at high redshifts through violent starbursts or during rather recent merger/interaction activities between smaller galaxies. In addition, the age of a galaxy is a unique product of just a few cosmological parameters (e.g., $`\mathrm{\Omega }`$, $`\mathrm{\Lambda }`$, $`H_0`$, $`z_f`$); thus, it can be used to constrain cosmological parameters as well. Spinrad and his collaborators recently obtained the rest-frame UV spectrum of LBDS 53W091, a very red galaxy at $`z=1.552`$, using the Keck Telescope (Dunlop et al. 1996; Spinrad et al. 1997). Based on their analysis of the UV spectrum and the $`R-K`$ color, they concluded that LBDS 53W091 is already at least 3.5 Gyr old at $`z=1.552`$, which suggests $`H_0<45`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> in the context of the Einstein-de Sitter cosmology. When more recently measured cosmological parameters are used (e.g., $`H_0=65`$, $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$; Aldering et al. 1998), this age estimate suggests that LBDS 53W091 formed at $`z_f\gtrsim 6.5`$. Spinrad et al.’s results have been disputed by two independent studies. Bruzual & Magris (1997a) combined the UV spectrum studied by Spinrad et al. with $`R`$, $`J`$, $`H`$, $`K`$ photometry and obtained an age estimate of 1.4 Gyr. Heap et al. (1998) interpreted the UV spectral breaks using model atmosphere results specifically constructed for their study and using the 1997 version of the Yale isochrones with no convective core overshoot (see §3). They estimated that the age of LBDS 53W091 lies between 1 and 2 Gyr. They found that the Kurucz spectral library, which is currently used in virtually all population synthesis studies including that of Spinrad et al., does not match the detailed spectral features in the UV spectrum of an F-type main sequence (MS) star obtained with HST/STIS, in terms of the magnitudes of the spectral breaks used in the analysis of Spinrad et al. (1997). This implies that Spinrad et al.’s age estimate using UV spectral breaks may suffer from systematic errors. Such discordant age estimates undermine our efforts to use this important technique as a probe of cosmology. We have carried out a similar exercise, estimating the age of this galaxy using the Yi population synthesis models (Yi, Demarque, & Oemler 1997). In this paper, we present the results from the UV – visible continuum analysis only. An analysis of the UV spectral breaks is currently in progress. We attempt to improve the age estimate by adopting (1) convective core overshoot in the stellar model construction and (2) a realistic metallicity distribution, rather than the single-abundance assumption, in the galaxy population synthesis. We search for the sources of disagreement among the various age estimates and provide a more reliable estimate. Our analysis leads to an age estimate for LBDS 53W091 of approximately 1.5 – 2.0 Gyr, only half of Spinrad et al.’s estimate, but consistent with those of Bruzual & Magris (1997a) and Heap et al. (1998). This smaller age estimate relaxes the strong constraint on $`H_0`$ and/or on $`z_f`$ implied by the Spinrad et al. estimate. ## 2.
Model Construction An important advantage of working with the spectra of giant elliptical galaxies at $`z\approx 1`$ – 2 comes from the fact that their major light sources in the UV and visible are all relatively well understood. This redshift range corresponds approximately to ages of 1 – 5 Gyr, depending on the cosmology adopted. Most of the UV light in 1 – 5 Gyr-old stellar populations comes from stars near the main sequence turn-off (MSTO). Because the mean temperature of MSTO stars in a coeval stellar population is a reliable indicator of the age of the population, the integrated UV spectra of such galaxies can provide direct clues to their ages. At such small ages, in fact, the whole spectrum is a reasonable age indicator, because the spectral evolution is rapid with time and the stellar evolution of all major light sources (the MS and the red giant branch \[RGB\]) is reasonably well understood. Thus, we have used both the UV and the visible data of LBDS 53W091 for the age estimation. For the purposes of this population synthesis, we have updated the Yi models (Yi et al. 1997b) by using isochrones that are more carefully constructed than before. We have constructed stellar evolutionary tracks for masses of 0.4 – 2.2 $`M_{\odot }`$ and metallicities of $`Z`$ = 0.005, 0.02, and 0.04, using the Yale code. The tracks have been constructed with up-to-date input physics, including the 1995 version of the OPAL opacities (introduced in Rogers & Iglesias 1992), as described in Yi, Demarque, & Kim (1997). Figure 1 shows a set of stellar evolutionary tracks for $`Z=0.02`$ (approximately solar). The mean masses of MSTO stars in 1 – 5 Gyr-old, solar abundance populations range from approximately 2.2 to 1.3 $`M_{\odot }`$. Fig. 1 Stellar evolutionary tracks from the MS through the RGB for Z=0.02, Y=0.27, and M = 0.4 – 2.2 $`M_{\odot }`$. It is not a simple task to construct isochrones from stellar evolutionary tracks, mainly because the interpolation between tracks is not trivial. Even if the same tracks are used, different isochrone routines may introduce notable disagreements. The mass interpolation must be carried out with particular caution. A small error in mass can cause a seriously inaccurate luminosity function near the tip of the RGB, which will in turn affect the integrated spectrum. For example, when 60 points define one isochrone from the zero-age MS through the RGB, an error in mass in the 4th (when age $`<`$ 5 Gyr) – 5th (age $`\geq 5`$ Gyr) digit below the decimal (in $`M_{\odot }`$) causes a noticeable difference in the integrated magnitudes in the near-infrared, but not necessarily in the shape of the isochrone. The tip of the RGB is also difficult to locate, partly because it is somewhat sensitive to the adopted physics. Despite that, it is still important to define it as precisely and consistently as possible. Because the visible flux is dominated by bright red giants, an error in the position of the RGB tip leads to an error in the visible flux and thus in the normalized UV flux. The UV spectrum is relatively immune against such complexities regarding the RGB construction, and thus one may argue that the UV spectrum is a better age indicator than the spectrum in longer wavelength regions. Fig. 2 New Yale Isochrones for Z=0.02, Y=0.27, t = 1, 2, 5, & 10 Gyr compared with the Revised Yale Isochrones (Green et al. 1987). The last point in each isochrone has been marked with $`diamonds`$ and $`crosses`$. It is important to define the RGB tips carefully in the UV population synthesis.
Figure 2 shows a set of new isochrones constructed from the tracks shown in Figure 1, compared with those in the Revised Yale Isochrones (RYI; Green, Demarque, & King 1987). The use of improved opacities and energy generation rates has led to a change in the shape of the isochrones. The RYI were designed mainly for isochrone fitting to the observed color-magnitude diagrams (CMDs) of globular clusters, and thus their creators did not pay much attention to the precise location of the RGB tips. As a result, it is not uncommon to see in the RYI some discontinuities in the brightness of the RGB tips as a function of age, not to mention that some isochrones for small ages do not reach the RGB tip at all. The isochrones that are used in this study are shown in Figure 3. They are available from this paper through the Astrophysical Journal. A sample is shown in Table 1. A complete set of the new Yale Isochrones will be published soon. Fig. 3 Part of the new Yale Isochrones that are used in this study. Each panel shows 1, 2, and 5 Gyr isochrones with and without overshoot. These isochrones are available from this paper through the Astrophysical Journal. The post-RGB phase in the population synthesis, including the horizontal branch (HB), has been constructed following the description of Yi et al. (1997b). The adopted input parameters are $`\eta =0.7`$ (the mass loss efficiency parameter in Reimers’ empirical formula) and $`\sigma _{HB}=0.04`$ $`M_{\odot }`$ (the HB mass dispersion parameter). The justification for these choices is provided in Yi et al. (1999). However, details of the post-RGB prescriptions hardly affect the overall integrated spectrum at small ages, which is why it is particularly feasible to estimate the ages of giant elliptical galaxies at $`z`$ = 1 – 2. All models in this paper are based on the instantaneous starburst hypothesis. Synthetic CMDs are then convolved with the Kurucz spectral library (Kurucz 1992) to generate integrated spectra. Figure 4 shows the integrated spectra of 1, 2, and 3 Gyr models for solar composition compared to the observed data of LBDS 53W091. The rest-frame UV spectrum of LBDS 53W091 is from Spinrad et al. (1997). The $`R`$, $`J`$, $`H`$, & $`K`$ magnitudes, obtained from the same source, have been converted into relative fluxes following Equation 1: $$\frac{F_X}{F_{3150}}=\frac{F_R}{F_{3150}}\left[\frac{F_X}{F_R}\right]_{Vega}10^{0.4(R-X)}=0.414\left[\frac{F_X}{F_R}\right]_{Vega}10^{0.4(R-X)},$$ (1) where $`F_X`$ is the flux from an object of magnitude $`X`$ ($`R=24.5\pm 0.2`$, $`J=20.5\pm 0.1`$, $`H=19.5\pm 0.1`$, $`K=18.7\pm 0.1`$) and $`F_{3150}`$ is the mean flux in the wavelength range 3000 – 3300 Å. Vega’s flux ratios have been computed from the Vega spectrum provided in the Kurucz library. They are $`F_J/F_R=0.147`$, $`F_H/F_R=0.052`$, and $`F_K/F_R=0.019`$. The resulting relative fluxes of LBDS 53W091 are $`F_R/F_{3150}=0.414`$, $`F_J/F_{3150}=2.423`$, $`F_H/F_{3150}=2.153`$, and $`F_K/F_{3150}=1.643`$, with approximately 20%, 10%, 10%, & 10% observational uncertainties, respectively. Our relative fluxes in $`R`$, $`J`$, and $`K`$ agree with those in Bruzual & Magris (1997a) to within 1 – 2%, but our $`H`$ band relative flux is about 10% lower than theirs. However, this does not make an appreciable difference in the age estimate.
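For reference, the conversion of Equation 1 is reproduced by the short Python sketch below, using the magnitudes and Vega flux ratios quoted above; it returns the same relative fluxes to three decimal places.

```python
# Relative fluxes from eq. (1), with the magnitudes and Vega ratios quoted above.
R, F_R_over_F3150 = 24.5, 0.414
vega = {"J": 0.147, "H": 0.052, "K": 0.019}   # [F_X/F_R]_Vega from the Kurucz Vega
mags = {"J": 20.5, "H": 19.5, "K": 18.7}

for band in "JHK":
    f = F_R_over_F3150 * vega[band] * 10**(0.4 * (R - mags[band]))
    print(f"F_{band}/F_3150 = {f:.3f}")
# -> 2.423 (J), 2.153 (H), 1.643 (K), matching the values in the text
```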
Whether we can simply combine these two sets of different data (the UV spectrum and the photometric data) may be questionable, because the photometry covered a larger area of the galaxy than the spectroscopy did (Spinrad et al. 1997). We will discuss this effect in §5.3. Despite that, both the UV spectrum and the photometric data consistently indicate small ages, between 1 and 2 Gyr. Fig. 4 Model integrated spectra for solar abundance vs. the observed data of LBDS 53W091. The x-axis shows the rest-frame wavelength. All data of LBDS 53W091 are from Spinrad et al. (1997). The four photometric data points (filled circles) have been derived from their $`R`$, $`J`$, $`H`$, & $`K`$ magnitudes. The y-axis error bars are observational errors, and the x-axis ones show the effective band widths. The whole continuum is best matched if LBDS 53W091 is 1 – 2 Gyr old, when solar abundance is assumed. ## 3. Effects of Convective Core Overshoot Convective core overshoot (OS), the importance of which was first pointed out by Shaviv & Salpeter (1973), is the inertia-induced penetrative motion of convective cells, reaching beyond the convective core as defined by the classic Schwarzschild (1906) criterion. Stars develop convective cores if their masses are larger than approximately 1.3 – 1.5 $`M_{\odot }`$, typical of the MSTO stars in 1 – 5 Gyr-old populations such as LBDS 53W091. There has been a consensus for the presence of OS, but its extent has been controversial. Thus, conventional stellar models often do not include OS. Since the advent of the OPAL opacities, a major improvement in stellar astrophysics, various studies have suggested a modest amount of OS; that is, OS $`\approx 0.2`$ $`H_p`$, where $`H_p`$ is the pressure scale height (Stothers 1991; Demarque, Sarajedini, & Guo 1994; Kozhurina-Platais et al. 1997). OS has many effects on stellar evolution, but the most notable ones are its effects on the shape of the MSTO and on the luminosity function. As shown in Figure 5, inclusion of a modest amount of OS causes a longer stretch of the MS before the blueward motion, because it induces a larger supply of hydrogen fuel from the overshooting (“overmixing”) region into the core. As a result, stars with OS ($`=0.2`$ $`H_p`$) stay longer in the core hydrogen burning stage, in the MS rather than in the RGB. The table in Figure 5 lists the lifetimes in the hydrogen burning phase and in the RGB, both in Myr.
Because most of the visible flux comes from red giants, a decrease in the RGB lifetime results in a lower visible flux. Figure 7 shows the integrated spectra with and without OS. One can see the impact of OS on the normalized integrated spectra of 1 Gyr models and thus on the age estimates. When the isochrones with OS are used in the population synthesis, the same observed integrated spectrum indicates a larger age, especially when the age is as small as 1 Gyr. Fig. 7 Model integrated spectra with and without overshoot (OS). Note that the effect of OS is appreciable only in the young ($`2`$ Gyr) model’s visible-IR spectrum. ## 4. Effects of Metallicity Mixture Most of the previous age estimates of LBDS 53W091 were based on the single abundance population models, typically for solar composition. The solar abundance approximation has been popular for decades because reliable spectral libraries were available only for solar compositions. The first obvious problem in this approximation is that there is little justification in the choice of solar composition in modeling giant elliptical galaxies. Observers have long believed that the majority of stars in giant elliptical galaxies are metal-rich (approximately twice solar) because of their extremely red colors and strong absorption lines. Moreover, most chemical evolution theories predict that giant elliptical galaxies would reach the current metallicity level within a few tenths of a Gyr of the initial starburst (e.g., Kodama & Arimoto 1997). This means that 1 – 5 Gyr-old giant elliptical galaxies may already have stars of various metallicities ranging $`Z`$ 0 through perhaps 3 – 4 times solar. Then, whether the single abundance approximation would be appropriate for modeling giant elliptical galaxies is questionable. Another problem is that the age estimate based on continuum fits is quite sensitive to the metallicity adopted. For example, if we assume a high metallicity for LBDS 53W091, the observed data of LBDS 53W091 would indicate an age smaller than 1 Gyr (Figure 8-\[c\]). On the other hand, if the majority of stars in the galaxies at such high redshifts were metal-poor, the same data would indicate a much larger age by a fcator of 2 to 3 (Figure 8-\[a\]). This difference is caused by the opacity effects that increase with increasing metallicity. As a result, the same observed spectrum can indicate significantly different ages when the true metallicity is unknown. With such a large uncertainty in age estimate, it would be extremely difficult to use the age estimates of distant galaxies to constrain cosmology. The release of the improved Kurucz theoretical spectral library (Kurucz 1992) has finally enabled realistic modeling for compositions other than solar. Thus, it is our intention to construct more realistic models than simplified solar abundance models. One can guess the effect of the use of metallicity mixtures even with a cursory inspection of Figure 8. The fraction of metal-poor stars in conventional metallicity mixture models is small. However, even with such small fractions of metal-poor stars, a combination of coeval stars of various metallicities may have a substantial UV light contribution from metal-poor stars because of the opacity effects. Fig. 8 The effects of metallicity to the integrated spectrum. Models are for 1, 2, and 5 Gyr and for no overshoot. An error in the adopted metallicity leads to a large error in the age estimate. One must adopt a realistic metallicity or metallicity distribution. 
It should be noted that, while we try to make our models more realisitic than single abundance models by adopting metallicity mixtures and the Kurucz library, our models are still subject to the uncertainty in the spectral library. For example, Heap et al. (1998) noted that the Kurucz library does not reproduce the correlations between some UV spectral strengths and optical colors found in the IUE sample. It is not our intention to claim that our age estimates based on composite models are free from such uncertainties in the spectral library. We only believe that the use of physically plausible metallicity mixtures would result in the models superior to single abundance models, other uncertainties remaining the same. We have adopted the chemical evolution models of Kodama & Arimoto (1997) that reasonably match the observed properties of present epoch giant elliptical galaxies. We have adopted both their metal-rich ($`<Z>0.042`$ $`Z_{}`$) “simple” and “infall” models for LBDS 53W091. Their simple model is a closed-box model where the star formation time scale is 0.1 Gyr. The galactic wind epoch, where the supernova-driven thermal energy exceeds the binding energy, is 0.353 Gyr from the beginning of the starburst. The infall model is based on the same configuration, except that the unprocessed gas in the outskirt falls into the core with the time scale of 0.1 Gyr (see Kodama & Arimoto 1997 for details). When only three metallicity bins ($`Z=0.005`$, 0.02, 0.04) are used, the fractions of metallicity groups are approximately 18% for $`Z=0.005`$, 24% for $`Z=0.02`$, 58% for $`Z=0.04`$ in the “simple” model, and 12% for $`Z=0.005`$, 31% for $`Z=0.02`$, 57% for $`Z=0.04`$ in the “infall” model. If giant elliptical galaxies at $`z=1`$ – 2 had significantly different metallicity distributions from these models, our composite models should be used for illustration purpose only. Figure 9 illustrates the likely light contributions from various metallicity groups of stars when the galaxy is 1.5 Gyr old. As shown in Figure 9-(a), the composite-“infall” model with OS matches the overall data of LBDS 53W091 reasonably well at this age. As we expected, metal-poor stars are more efficient UV sources than metal-rich stars, while the opposite is true in the longer wavelength regions (Figure 9-\[b\]). The level of the light contribution from metal-poor stars is even higher when the “simple” (instead of “infall”) model is used. This is because “simple” models generally predict a larger fraction of metal-poor stars than “infall” models do. Fig. 9 $`top`$: A composite-infall model of 1.5 Gyr of age fits the overall continuum of LBDS 53W091 reasonably well. $`bottom`$: While the visible spectrum is dominated by metal-rich stars, the UV spectrum is strongly affected by metal-poor stars. Therefore, it is important to use a realistic metallicity distribution in the galaxy population synthesis. ## 5. Results We perform $`\chi ^2`$ minimization fits using the UV spectrum and the photometric data of LBDS 53W091, and the results are following. ### 5.1. Fit to the UV Spectrum We have carried out a weighted $`\chi ^2`$ test to the UV spectrum of LBDS 53W091, using the following definition, $$\chi ^2=\frac{1}{N2}\mathrm{\Sigma }\frac{[f_\lambda (model)f_\lambda (observed)]^2}{\sigma _\lambda ^2},$$ (4) where $`N`$ and $`\sigma _\lambda `$ are the number of spectral bins and the observational errors, respectively. Figure 10 shows the measured values of reduced $`\chi ^2`$ of various models. 
A smaller value of $`\chi ^2`$ indicates a better fit. The spectra are normalized at 3150 Å (the average flux in the range 3000 – 3300 Å), but the test was quite insensitive to the choice of the normalization point, as long as the normalization flux is the mean flux in a reasonably wide ($`\mathrm{\Delta }\lambda \gtrsim 50`$ Å) range. We modified the error value at 4973.47 Å (1948.85 Å in the rest-frame) with the approval of the observers. It was 2 orders of magnitude smaller than the other values; thus the $`\chi ^2`$ tests were heavily dominated by this single point, producing unreasonable results. We assume that it was an artifact and replaced it with a value interpolated from the adjacent bins. Fig. 10 $`\chi ^2`$ test of various models against the UV spectrum of LBDS 53W091. Each line is marked by the model name and the age of the model that shows the minimum $`\chi ^2`$. Solar abundance models suggest 1.8 – 1.9 Gyr and composite models suggest approximately 2 Gyr. Inclusion of overshoot has little impact. Figure 10 shows that OS has very little effect on the age estimates based on the UV spectrum, as illustrated earlier in Figure 7. The discussion of the effects of metallicity mixtures requires a bit of clarification first. The mean metallicity of the majority of stars in giant elliptical galaxies is often believed to be approximately twice solar. If we used such a large metallicity for the single abundance population synthesis, we would tend to underestimate the UV flux of this galaxy significantly and thus the age ($`<1.0`$ Gyr). This is because even in such metal-rich ($`<Z>\approx 2Z_{\odot }`$) galaxies the main UV sources are probably metal-poor (see Figure 9). In this sense, the effect of the use of metallicity mixtures is very large. As one can see in Figure 10, solar abundance models mimic metal-rich (2 $`Z_{\odot }`$) composite models reasonably well. Composite models indicate larger ages (1.9 – 2.1 Gyr) than solar abundance models (1.8 Gyr) by approximately 10%. This suggests that the solar abundance, used by various groups including Spinrad et al. (1997), might be a reasonable approximation in the UV population synthesis of young giant elliptical galaxies, even though it is still a factor of two smaller than the likely mean metallicity of stars in giant elliptical galaxies (cf. Angeletti & Gianonne 1999). When the three-$`\sigma `$ confidence ranges are included, our age estimates based on the $`\chi ^2`$ test using the UV spectrum alone are approximately $`1.8_{-0.1}^{+0.2}`$ Gyr (solar abundance model with OS) through $`1.9_{-0.1}^{+0.6}`$ Gyr (infall model with OS), where the errors are internal only and do not include observational errors. Thus, they are inconsistent with that of Spinrad et al. (1997) at more than the three-$`\sigma `$ confidence level (99.7%). ### 5.2. Fit to the Photometric Data Photometric data are available for LBDS 53W091 in the $`R`$, $`J`$, $`H`$, & $`K`$ magnitudes, covering approximately 2000 – 9000 Å in rest-frame wavelength at the redshift of this galaxy ($`z=1.552`$). Photometric data may be less indicative of the age of a population than the UV spectrum, but they are also less affected by the uncertainties in reddening and in metallicity. We have performed a weighted $`\chi ^2`$ test on the four photometric data points. The errors in the magnitudes were also adopted from Spinrad et al. (1997). Figure 11 shows the reduced $`\chi ^2`$ as a function of age. The models with minimum $`\chi ^2`$ indicate ages of $`<2`$ Gyr unless the majority of stars in LBDS 53W091 are very metal-poor.
Composite models with OS = 0.2 suggest ages of approximately 1.5 Gyr. Unlike in the analysis of the UV spectrum (§5.1), inclusion of OS raises the age estimates based on the photometric data substantially, by 20 – 50%. This is because OS has a large impact on the visible – IR flux when normalized to the UV (see Figure 7). If no OS is adopted, composite models suggest very small ages ($`<`$ 1.0 Gyr). The effect of the inclusion of metallicity mixtures is also substantial. When the mean metallicity of stars in LBDS 53W091 is assumed to be twice solar, our models with OS suggest 25% larger ages when metallicity mixtures are included. This is because composite models contain some metal-poor stars, which match the $`R-J`$ color (i.e., rest-frame UV continuum) at older ages than metal-rich stars do. When we use composite models with OS, our analysis of the photometric data of LBDS 53W091 suggests 1.5 $`\pm `$ 0.2 Gyr, which is significantly smaller than the age estimate of Spinrad et al. but consistent with that of Bruzual & Magris. This value is somewhat smaller than our estimate from the UV analysis in the previous section.

### 5.3. The New Age Estimate of LBDS 53W091

Table 2 lists our age estimates based on the two data sets. They range over approximately 1 – 2 Gyr unless extreme metallicities are assumed. Figure 12 shows a sample fit. The observed data are shown as a continuous line and filled circles. The overplotted models are those with the infall mixture and OS, which are probably more realistic than single abundance models with no OS. In the UV, only the model with the minimum $`\chi ^2`$ (1.9 Gyr) is shown for clarity. In the longer wavelength regions, multiple models with various ages are plotted over the photometric data. This figure demonstrates the difference between the age estimates from the analyses of the UV spectrum and of the photometric data.

Fig. 11 $`\chi ^2`$ test of various models against the photometric data of LBDS 53W091. Composite models with OS, the most realistic models, reach their minimum $`\chi ^2`$ near the age of 1.5 Gyr.

Fig. 12 The model spectra compared to the observed data of LBDS 53W091. In the UV, only the best matching model (with the minimum $`\chi ^2`$) is overplotted, while in the longer wavelength range multiple models with various ages (shown in Gyr on the lines) are shown. The UV spectrum indicates a somewhat larger age than the photometric data in the longer wavelength regions. This may be due to radial gradients in age and/or metallicity, or to reddening.

This difference between the two age estimates may appear substantial to some readers. However, such a difference would not be unnatural if the data have been affected by some reddening, whether Galactic or internal. If we use the extinction curve of Cardelli, Clayton, & Mathis (1989), in conjunction with that of O’Donnell (1994), a moderate amount of Galactic reddening, $`E(B-V)=0.04`$, reduces the difference between these two age estimates. In fact, the Galactic foreground extinction along this line of sight ($`RA`$ = 17:21:17.84, $`Dec`$ = 50:08:47.7, 1950.00; $`l`$ = 76.8684, $`b`$ = 34.4531) is estimated at 0.03 mag by Schlegel, Finkbeiner, & Davis (1998). Then, both the UV spectrum and the photometric data become consistent with an age of 1.3 Gyr (Figure 13).
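The bookkeeping of such a reddening correction is simple to sketch. In the snippet below, the power-law curve is only a crude stand-in for the Cardelli et al. (1989) and O’Donnell (1994) curves actually used; it serves merely to show where $`E(B-V)`$ enters:

```python
import numpy as np

def attenuation_factor(wave_obs, ebv, r_v=3.1):
    """Multiplicative factor 10**(-0.4 * A_lambda) to apply to a model spectrum.
    A crude A_lambda = A_V * (5500 A / lambda) power law stands in for the
    Cardelli et al. (1989) + O'Donnell (1994) curves used in the text."""
    a_v = r_v * ebv                                 # A_V = R_V * E(B-V)
    a_lam = a_v * (5500.0 / np.asarray(wave_obs))   # stand-in curve, NOT the CCM fit
    return 10.0 ** (-0.4 * a_lam)

# Galactic extinction acts in the observed frame, so redden the model after redshifting:
# f_model_red = f_model * attenuation_factor(wave_obs, ebv=0.04)
```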
It is also probable that some of this difference comes from the fact that the light sources of the UV spectrum are older and/or more metal-rich than those of the photometric data, because the UV spectroscopy covered a smaller area near the center of this galaxy (Spinrad et al. 1997). Then, composite models with radial gradients in age and metallicity would be more realistic for explaining the observed data. Our UV-based age estimates are in principle lower limits, because the UV spectrum is likely to be dominated by the youngest population. However, if the star formation time scale was only of the order of 0.1 Gyr, as many galactic chemical evolution models for giant elliptical galaxies suggest (e.g., Kodama & Arimoto 1997), this age spread effect may not be large. It is important to note that our UV-based age estimates are larger than those from the photometric data. Firstly, this may indicate the presence of at least some reddening. We have shown above that a small amount of reddening reconciles the difference between the two age estimates. Secondly, if there were any substantial age spread in LBDS 53W091, the UV-based age estimates should be smaller than the visible-based ones, because the shorter wavelength spectrum is more dominated by the younger stars. For example, if a galaxy experiences a starburst that lasts for 1 Gyr at a constant rate centered at 2 Gyr before the observation, its integrated UV spectrum would be matched best by a single burst model with an age of 1.8 – 1.9 Gyr instead of 2.0 Gyr. In some ad hoc star formation scenarios, the age underestimation can of course be larger. However, in the case of LBDS 53W091, the UV-based age estimates are generally larger than the visible-based ones, which is opposite to the expectation for a population with any age spread. Thus, this likely implies very little age spread in LBDS 53W091 and the presence of some reddening, unless it is entirely due to the aperture effect. In that case, our UV-based age estimates may even be an upper limit, rather than a lower limit. Without knowing the precise amount of age spread and reddening and the level of accuracy of the models, it is not clear whether our estimates constitute lower limits or upper limits.

Fig. 13 Same as Figure 12, but with the effect of reddening included. The models are reddened according to the extinction curves of Cardelli et al. (1989) and of O’Donnell (1994). A 1.3 Gyr model matches both the UV and the visible – IR data reasonably well.

## 6. Origin of the Disagreement among Various Models

The large difference between the age estimates of Spinrad et al. and of this study is mainly due to the significant difference in the model integrated spectrum. Figure 14 shows the comparison between the latest Jimenez models (downloaded from Jimenez’ ftp site in May 1999; the Jimenez models are the preferred models in the analysis of Spinrad et al.) and the Yi models, both for the solar composition with no OS. The Jimenez models shown here are new ones, based on stellar evolutionary tracks with finer mass grids, and thus probably improved over those used in Spinrad et al. (1997). At the time of this study, only his new models were available to us.

Fig. 14 Comparison between the Yi models (marked “Y”) and the Jimenez models (marked “J”) for solar composition. Ages in Gyr are marked on each model. Both in the UV (a) and in the visible – IR (b), the Jimenez models are substantially bluer. In the UV, Jimenez’ 1 Gyr model (1J) agrees with Yi’s model (1Y) remarkably well.
However, they substantially disagree at larger ages: Yi’s 2, 3, and 5 Gyr models are close to Jimenez’ 3, 5, and 10 Gyr models (not all models are shown here, for clarity). In fact, Jimenez’ 3 Gyr and 5 Gyr models are nearly on top of Yi’s 2 Gyr and 3 Gyr models. One can easily understand why Spinrad et al. and we arrive at such different age estimates. An open diamond in Figure 14-(a) marks the approximate relative flux of LBDS 53W091 normalized at 3150 Å. This relative flux of LBDS 53W091 is closely reproduced by the 1.4 Gyr model when the Yi models are used (see also Table 2) or by the $`1.9_{-0.1}^{+0.2}`$ Gyr models when the new Jimenez models are used (based on the reduced-$`\chi ^2`$ tests of Equation 4). Also plotted in Figure 14-(a) are the relative fluxes from the earlier-version Jimenez models (horizontal dashed lines) used in Spinrad et al. (1997). Because we did not have access to this version of the Jimenez models, we read off the relative fluxes from Figure 14 of Spinrad et al. (1997). The three dashed lines are, from the top, his 1, 3, and 5 Gyr models, respectively. Note that his early-version models are substantially bluer than his new version. One can see that the early-version Jimenez models match the LBDS 53W091 data at approximately 3 Gyr, as Spinrad et al. (1997) suggested. The fit to the UV spectrum using the new Jimenez models, which suggests an age estimate of 1.9 Gyr, is shown in Figure 16-(a). Readers are encouraged to compare Figure 16-(a) to Figure 14 of Spinrad et al. (1997). As shown in Figure 15, the Yi models are somewhat redder than the 1999 version Bruzual & Charlot (BC, private communication) models, probably because the BC models assume a larger UV light contribution from post-asymptotic giant branch (PAGB) stars, but the overall agreement is good. If the new Jimenez models are indeed improved over the previous version used by Spinrad et al., all three groups now suggest rather consistent age estimates, between 1 and 2 Gyr.

Fig. 15 Comparison between the Yi models (marked “Y”) and the 1999 version Bruzual & Charlot models (marked “BC”) for solar composition. They are in good agreement, except that the BC models are somewhat redder because they are based on the corrected Kurucz spectral library (see text).

In the visible – IR, the difference between the new Jimenez models and the Yi models is even larger. Figure 14-(b) compares their 1, 3, 5, and 10 Gyr models. Jimenez’ 1, 3 and 5 Gyr models are very close to one another, while Yi’s models are quite well separated. Jimenez’ 5 and 10 Gyr models are quite close to Yi’s 2 and 4 Gyr models, respectively. The Yi models are in close agreement with the BC models, as shown in Figure 15-(b). The BC models are based on a modified Kurucz spectral library (Lejeune, Cuisinier, & Buser 1997) and are thus somewhat redder than the Yi models. The difference between the Jimenez models and the Yi models, however, is not the only reason why Spinrad et al. obtained such a large age estimate, i.e., 2.5 Gyr, from their analysis of the photometric data. It was also caused by the fact that they used only $`R-K`$, omitting the $`J`$ and $`H`$ magnitudes, in their analysis. Figure 16-(b) shows that even the Jimenez models would have suggested a substantially smaller age if the whole photometric data set had been used in their analysis. When we use the new Jimenez models and the reduced-$`\chi ^2`$ test on all four photometric data points, the best model indicates $`1.9_{-0.7}^{+0.5}`$ Gyr, which is in good agreement with the estimate from the UV analysis.
Then again, all three groups scrutinized here suggest consistent age estimates of 1 – 2 Gyr. Such an agreement, at least on the age of LBDS 53W091, is possible because, for small ages ($`\lesssim 2`$ Gyr), the new Jimenez models differ from the Yi and the BC models only slightly.

Fig. 16 The fit to the data of LBDS 53W091 using the new Jimenez models. Both its UV spectrum and the photometric data consistently suggest approximately 1.9 Gyr.

One may then say that the age discrepancy on LBDS 53W091 has been resolved. Despite this apparent resolution, it is still quite disturbing to know that there are significant disagreements between the Jimenez models and the Yi (and the BC) models at larger ages. If such a large disagreement is due to the uncertainties in the input physics, we would have to admit that we are not ready to estimate ages of the bulk of stellar populations via continuum fitting. A notable disagreement can easily be introduced by the details of the population synthesis technique. For example, inaccurate interpolations between tracks, whether they take place during the isochrone construction or directly in the population synthesis, may cause significant differences, especially in the visible – IR spectrum (a toy illustration is given below). The large impact of inaccurate mass interpolation has already been pointed out in §2 and is perhaps demonstrated by the change between the two versions of the Jimenez models. Despite such complexities in modeling the visible – IR flux, it is not true that model spectra in the visible – IR are generally unreliable (see Dunlop 1998 for a different opinion). For example, the age estimate using the old (1987) Revised Yale Isochrones, in which RGB tips were computed less accurately than in the current Yale Isochrones used in this study, differs from our current estimate by only 20%. The difference between the estimates of Bruzual & Magris (1997a), based on the BC models, and ours is also small, even though the two groups use different stellar evolutionary tracks. Besides all this, the conventional population synthesis models beautifully match the overall spectra (near-UV through near-IR) of globular clusters (e.g., Bruzual et al. 1997) at their accepted ages. In short, the visible – IR flux is more sensitive to the uncertainties in the stellar models and in the population synthesis (including the isochrone construction), but the level of uncertainty should be less than 20% or so in age. Thus, it is still unclear where such a large difference in the model spectrum as that between the Jimenez models and the Yi (and the BC) models comes from. It is unlikely to be caused by differences in the input physics of the stellar models, because the stellar evolution theory is already quite well established. All three groups use the Kurucz spectral library or hybrid versions originating from it, and, thus, it is also unlikely that much of the difference can be attributed to the spectral library. The source of the discrepancy will be known only when these models are compared step by step with each other. While it may be extremely difficult to directly compare different population models and find the more realistic model, it should be possible to test models by matching the observational properties of objects whose ages are reasonably well known. Good examples include the sun, M32, and Galactic globular clusters.
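Before turning to these tests, a toy example shows how easily interpolation details can bias a synthesized quantity. The power-law mass–luminosity relation below is a deliberately crude stand-in for real evolutionary tracks; it is meant only to illustrate the size of the effect:

```python
import numpy as np

# Hypothetical power-law relation, L = M**3.5, standing in for a real track grid.
m1, m2, m = 1.0, 2.0, 1.5
L_true = m ** 3.5                                   # ~4.13
L_linear = 0.5 * (m1 ** 3.5 + m2 ** 3.5)            # naive linear interpolation in L: ~6.16 (+49%)
w = (np.log10(m) - np.log10(m1)) / (np.log10(m2) - np.log10(m1))
L_loglog = 10 ** ((1 - w) * np.log10(m1 ** 3.5) + w * np.log10(m2 ** 3.5))  # exact for a power law
print(L_true, L_linear, L_loglog)
```

A coarse mass grid with the wrong interpolation variable thus distorts the luminosity function of the brightest (most IR-luminous) stars, which is one plausible route to visible – IR disagreements of the kind discussed above.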
We present the results of the tests on the sun and M32 only, because globular clusters have already been modeled successfully by a number of studies, including that of Bruzual et al. (1997). The following test results may not appear relevant to some readers, because the sun and M32 are likely significantly older than LBDS 53W091. However, we would like to provide some useful sample tests that can easily be used to validate population synthesis models.

### 6.1. Test on the sun

The age of the sun, approximately $`4.53\pm 0.4`$ Gyr (see Guenther & Demarque 1997 for references), is one of the best constrained quantities in astronomy. In a 5 Gyr coeval population, most of the UV light is still produced by MS stars. Thus, a 5 Gyr-old solar abundance population should exhibit a UV flux similar to that of the sun. Figure 17 shows the fits to the theoretical solar spectrum (2000 – 3500 Å) from the Kurucz library using the Yi models (a) and the new Jimenez models (b). Our test on the sun is not weighted with observational errors, because the solar spectrum is a theoretical one. In this sense, we are measuring merely the mean square difference ($`MSD`$) between the model and the data instead of $`\chi ^2`$. The formula for the $`MSD`$ test is shown in Equation 5,
$$MSD=\frac{1}{N-2}\sum _\lambda \frac{[f_\lambda (\mathrm{model})-f_\lambda (\mathrm{object})]^2}{f_\lambda (\mathrm{object})},$$ (5)
where $`N`$ is the number of spectral bins. The best fitting model is a $`5.0\pm 0.1`$ Gyr model when the Yi models are used. This slight disagreement with the generally accepted solar age is expected, because we are matching a single stellar spectrum with those of composite (in the sense of containing MS stars, red giants, etc.) stellar populations, and the light contribution in the UV from post-MS stars (mostly from PAGB stars) is slightly larger in the far-UV than in the near-UV. The proximity of this age estimate to the accepted age of the sun once again demonstrates the high reliability of UV-based age estimates for intermediate-age, composite populations. However, when the new Jimenez models are used, much larger ages are indicated (10 Gyr giving the best fit). This was already evident in Figure 14, where the Jimenez models appeared much bluer than the Yi models. If the continua of G-type stars (including the sun) in the Kurucz library are significantly inaccurate, an effort to recover the right age of the sun using population synthesis models and a theoretical spectrum would not be appropriate. Yet, such tests would still serve as sanity checks for the population synthesis computations.

Fig. 17 A test of the Yi (top) and new Jimenez (bottom) population synthesis models on the sun. The models are composite in the sense that they contain all evolutionary phases. But, at 5 Gyr, MS stars still dominate the UV spectrum (92%), so the models should match the solar spectrum at an age reasonably close to the known one, i.e., 4.5 – 4.7 Gyr.

### 6.2. Test on M32

M32 also provides a good test, as we now have resolved visible – IR CMDs that suggest an age of $`8.5`$ Gyr (Grillmair et al. 1996). We have adopted the visible – IR spectrum of M32 from Bruzual & Magris (1997b). Figure 18 shows the fits using the solar abundance models of Yi and of Jimenez (new). Both the Yi models and the Jimenez models are based on the theoretical Kurucz spectral library, and thus their fits are limited by the known shortcomings of the Kurucz library in matching the spectra of cool stars.
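As an aside, the $`MSD`$ statistic of Equation (5), used in both tests of this section, is trivial to transcribe. A minimal sketch, assuming the model and object spectra are already on a common wavelength grid and in a common normalization:

```python
import numpy as np

def msd(f_model, f_object):
    """Equation (5): mean square difference between a model and a (theoretical) object
    spectrum, unweighted because no observational errors are involved."""
    return np.sum((f_model - f_object) ** 2 / f_object) / (len(f_object) - 2)

# Hypothetical usage: scan a grid of model ages and keep the minimum,
# e.g. best_age = min(models, key=lambda age: msd(models[age], f_sun))
```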
In addition, these single-age, single-abundance models may not be good approximations to M32. We find that the match becomes better, in particular in the $`U`$ band and in the IR, when chemically composite models are used. Bearing all these limitations in mind, we cautiously find that the Yi models match the M32 spectrum somewhat better. In conclusion, we note the following. The difference in the age estimates between Spinrad et al. (1997) and this study is caused by the large difference in the model integrated spectrum. The Jimenez models, their preferred models, appear much bluer than the Yi and the BC models. The new Jimenez models, presumably improved over his earlier version, are redder than his previous models and thus closer to the Yi and the BC models. However, they are still much bluer than the Yi and the BC models. Currently, the Yi models are in reasonable agreement with the BC models and seem to match the spectra of the sun and M32 at their accepted ages better than the new Jimenez models do.

Fig. 18 A test of the Yi (a) and new Jimenez (b) population synthesis models on M32, using its visible – near-IR spectrum from Bruzual & Magris (1997b). The models are for the solar chemical composition and for an age of 8.5 Gyr, which has been determined from visible – near-IR CMD studies (see text). The new Jimenez models deviate from the observed spectrum.

## 7. Summary and Conclusion

The pioneering studies of Spinrad and his collaborators (Dunlop et al. 1996; Spinrad et al. 1997) have demonstrated the significance of precise age estimates of distant galaxies. Their suggestion that the red galaxy LBDS 53W091, at $`z=1.552`$, is at least 3.5 Gyr old was striking because it would result in a rather strict constraint on cosmology. We have carried out a similar exercise, estimating the age of this galaxy, but only via continuum fitting. When we use the same input parameters, our age estimate is approximately 1.4 – 1.8 Gyr, substantially smaller than theirs, but consistent with those of Bruzual & Magris (1997a) and of Heap et al. (1998). The large age estimate of Spinrad et al. is apparently caused by the use of the early Jimenez models in their analysis, which are significantly bluer than the Yi and the BC models. The latest Jimenez models are somewhat redder than his earlier models and closer to the Yi models, resulting in age estimates that are consistent with ours. This may indicate a resolution of the age discrepancy on LBDS 53W091. We have further improved our estimates over previous ones by adopting convective core overshoot (OS) and realistic metallicity mixtures. The inclusion of OS has little effect on the UV-based age estimates, but it raises the age estimates based on the visible data normalized to the UV by 20 – 50%. Adopting realistic metallicity distributions is also important, because different metallicity groups dominate different parts of the integrated spectrum. If we assume that the majority of stars in LBDS 53W091 are already as metal-rich as those in nearby giant elliptical galaxies, the photometric data of LBDS 53W091 indicate up to a factor of two larger ages when metallicity mixtures are adopted. The UV continua of young galaxies, such as LBDS 53W091, are not sensitive to OS. In addition, solar abundance models approximate them reasonably well. This relative immunity of the UV data against such complexities makes the UV spectrum of a distant galaxy a very useful age indicator.
Our UV-based estimate is approximately $`2.0\pm 0.2`$ Gyr, apparently inconsistent with that of Spinrad et al., 3.5 Gyr, but consistent with the age estimates we obtained using the new Jimenez models. The photometric data of LBDS 53W091 indicate $`1.5\pm 0.2`$ Gyr. The slightly larger estimates from the UV continuum fit would be consistent with this photometry-based one if we included a small amount of reddening and/or if the core of this galaxy is somewhat older or more metal-rich than its outskirts, all of which are quite plausible. It may also indicate that there is no substantial age spread among the stars in LBDS 53W091. The age estimates of Spinrad and his collaborators were heavily based on selected UV spectral breaks. This was because UV spectral breaks were believed to be less sensitive to the uncertainties in reddening. If this is true and if we are ignoring possible reddening effects, our age estimates should be systematically larger than theirs, because reddening makes the continuum look older, which is opposite to what we have found. Thus the difference between Spinrad et al.’s estimate and ours cannot be reconciled by adopting any conventional reddening law. Our results on LBDS 53W091 are vulnerable to the uncertainties of the spectral library in matching the F-type stellar spectra in the UV. Such uncertainties may exist not only in the detailed spectral features, such as those studied by Spinrad et al. (1997) and Heap et al. (1998), but also in the continuum. The authors of Heap et al. (1998) are currently obtaining the UV spectra of F-type stars using HST/STIS. When that project is complete, a more accurate analysis of both the spectral breaks and the continuum will be possible. There is no doubt that precise age estimates of high-$`z`$ galaxies would be very useful for constraining cosmology. In order to fully take advantage of the power of this technique, however, we first need to understand the details of the population synthesis, which are currently creating a substantial disagreement in age estimates. We propose to carry out a comprehensive investigation of the various population synthesis models through a series of standard tests on objects whose ages have been independently determined. Such objects may include the sun, M32, and Galactic globular clusters. Our models (the Yi models) currently pass these tests reasonably well. Our age estimates indicate that LBDS 53W091 formed at approximately $`z=2`$ – 3 (a rough numerical consistency check is sketched below). However, our smaller age estimate for this one galaxy does not contradict work that suggests galaxies generally formed at high redshifts, regardless of the rarity of massive ellipticals at $`z\gtrsim 1.5`$. Furthermore, we are just beginning to expand our observations of galaxies to high redshift, and so the existence of a few old galaxies at high redshifts does not yet prove any galaxy formation scenario, although it can potentially constrain cosmological parameters (in the sense that the ages of a few objects can provide lower limits on the age of the Universe at that redshift). Finding no old galaxies at high redshift would support a low $`z_f`$ for the general population. Building a larger database of observations is therefore crucial to achieving a unique and statistically significant solution. Dunlop (1998) reported the discovery of another galaxy (LBDS 53W069) whose UV spectrum looks even redder than that of LBDS 53W091, although its redshift is only slightly smaller ($`z=1.43`$). This would be a stronger sign of large ages of high-$`z`$ galaxies.
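As the rough consistency check referred to above, one can translate an age of 1.5 – 2 Gyr at $`z=1.552`$ into a formation redshift. The sketch below assumes an illustrative flat cosmology ($`H_0=70`$ km s$`{}^{1}{}^{}`$ Mpc$`{}^{1}{}^{}`$, $`\mathrm{\Omega }_m=0.3`$), not necessarily the parameters adopted elsewhere in this paper:

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0 * 1.0e5 / 3.0857e24   # 70 km/s/Mpc in s^-1
Om, Ol = 0.3, 0.7               # illustrative flat LCDM parameters
Gyr = 3.156e16                  # seconds per Gyr

def age_at(z):
    """Age of the universe at redshift z for flat LCDM, in Gyr."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * np.sqrt(Om * (1.0 + zp) ** 3 + Ol))
    return quad(integrand, z, np.inf)[0] / H0 / Gyr

for zf in (2.0, 2.5, 3.0):
    # stellar age at z = 1.552 if the stars formed at zf:
    # roughly 0.9, 1.5, and 2.0 Gyr for these parameters
    print(zf, round(age_at(1.552) - age_at(zf), 2))
```

For these parameters, stellar ages of 1.5 – 2 Gyr indeed correspond to formation redshifts of roughly 2.5 – 3; the exact mapping of course depends on the cosmology assumed.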
As more data are collected, our vision of the high-$`z`$ universe will become clearer. This work was encouraged by the open-minded response of Hyron Spinrad to our initial interest in the work on LBDS 53W091 done by him and his collaborators. We thank his group, in particular Daniel Stern, for providing the spectrum of LBDS 53W091. We are grateful to Taddy Kodama for providing his metallicity distribution models and to Gustavo Bruzual for providing the spectrum of M32. The constructive criticisms and comments of Raul Jimenez, Hyron Spinrad, Gustavo Bruzual, Sydney Barnes and Pierre Demarque improved the manuscript significantly. We owe special thanks to Raul Jimenez and Gustavo Bruzual for making their models available to us. This work was supported by the Creative Research Initiative Program of the Korean Ministry of Science & Technology grant. Part of this work was performed while S.Y. held a National Research Council (NASA Goddard Space Flight Center) Research Associateship.
no-problem/9911/gr-qc9911002.html
## 1 Quantum fluctuations of the gravitational field

Gravity deals with the frame in which everything takes place, i.e., with spacetime. We are used to putting everything into spacetime, so that we can name and handle events. General relativity made spacetime dynamical, but the relations between different events were still sharply defined. Because of quantum mechanics, in such a dynamical frame, objects became fuzzy; exact locations were substituted by probability amplitudes for finding an object in a given region of space at a given instant of time. Spacetime undergoes the quantum fluctuations of the other interactions and, even more, introduces its own fluctuations, thus becoming an active agent in the theory. The quantum theory of gravity suffers from problems (see, e.g., Refs. ) that have remained unsolved for many years and that originate, in part, in this lack of a fixed immutable spacetime background. A quantum uncertainty in the position of a particle implies an uncertainty in its momentum and, therefore, due to the gravity-energy universal interaction, would also imply an uncertainty in the geometry, which in turn would introduce an additional uncertainty in the position of the particle. The geometry would thus be subject to quantum fluctuations that would constitute the spacetime foam and that should be of the same order as the geometry itself at the Planck scale. This would give rise to a minimum length beyond which the geometrical properties of spacetime would be lost, while on larger scales it would look smooth and with a well-defined metric structure. The key ingredients for the appearance of this minimum length are quantum mechanics; special relativity, which is essential for the unification of all kinds of energy via the finiteness of the speed of light; and a theory of gravity, i.e., a theory that accounts for the active response of spacetime to the presence of energy (general relativity, Sakharov’s elasticity , strings…). Thus, the existence of a lower bound to any output of a position measurement seems to be a model-independent feature of quantum gravity. In fact, different approaches to this theory lead to this result . Planck length $`\ell _{*}`$ might play a role analogous to that of the speed of light in special relativity. In that theory, there is no physics beyond the speed limit, and its existence may be inferred through the relativistic corrections to the Newtonian behavior. This would mean that a quantum theory of gravity could be constructed only on “this side of Planck’s border”, as pointed out by Markov (as quoted in Ref. ). In fact, the analogy between quantum gravity and special relativity seems to be quite close: in the latter, you can accelerate forever even though you will never reach the speed of light; in the former, given a coordinate frame, you can reduce the coordinate distance between two events as much as you want even though the proper distance between them will never decrease below Planck length (see Ref. and references therein). This uncertainty relation, $`\Delta x\gtrsim \ell _{*}`$, also bears a close resemblance to the role of $`\hbar `$ in quantum mechanics: no matter which variables are used, it is not possible to have an action $`S`$ smaller than $`\hbar `$ . Based on the work by Bohr and Rosenfeld (see, e.g., Ref.
) for the electromagnetic field, Peres and Rosen and then DeWitt carefully analyzed the measurement of the gravitational field and the possible sources of uncertainty (see also Refs. ). Their analysis was carried out in the weak-field approximation (the magnitude of the Riemann tensor remains small over any finite domain), although the features under study can be seen to have a more fundamental significance. This approximation imposes a limitation on the bodies that generate and suffer the gravitational field, which does not appear in the case of an electromagnetic field. The main reason for this is that, in the electromagnetic case, the relevant quantity involved in the uncertainty relations is the ratio between the charge and the mass of the test body, and this quantity can be made arbitrarily small. This is certainly not the case for gravitational interactions, since the equivalence principle precisely fixes the corresponding ratio between gravitational mass and inertial mass, and therefore it is not possible to make it arbitrarily small. Let us go into more detail in the comparison between the electromagnetic and the gravitational fields as far as uncertainties in the measurement are concerned, and see how it naturally leads to a minimum volume of the measurement domain. The measurement of the gravitational field can be studied from the point of view of continuous measurements , which we briefly summarize in what follows (throughout this work we set $`\mathrm{}=c=1`$, so that the only dimensional constant is Planck’s length $`\ell _{*}=\sqrt{G}`$).

### 1.1 Continuous measurements

Assume that we continuously measure an observable $`Q`$, within the framework of ordinary quantum mechanics. Let us call $`\Delta q`$ the uncertainty of our measuring device. This means that, as a result of our measurement, we will obtain an output $`\alpha `$ that will consist of the result $`q(t)`$ and any other within the range $`(q-\Delta q,q+\Delta q)`$. The probability amplitude for an output $`\alpha `$ can be written in terms of path integrals as $`A[\alpha ]=\int _\alpha \mathcal{D}x\,e^{iS}`$, where $`\alpha `$ denotes not only the output but also the set of trajectories in configuration space that lead to it. For a given uncertainty $`\Delta q`$, the set $`\alpha `$ is fully characterized by its central value $`q`$. We are particularly interested in studying the shape of the probability amplitude $`A`$. More precisely, we will pay special attention to its width $`\Delta \alpha `$ . There are two different regimes of measurement, classical and quantum, depending on whether the uncertainty of the measuring device is large or small. The classical regime of measurement is realized if $`\Delta q`$ is large enough. In this regime, the width of the probability amplitude $`\Delta \alpha `$ can be seen to be proportional to the uncertainty $`\Delta q`$. Also, the uncertainty in the action can be estimated to be $`\Delta S\gtrsim 1`$. The quantum regime of measurement occurs when $`\Delta q`$ is very small. Now the width of the probability amplitude is $`\Delta \alpha \sim 1/\Delta q`$. The uncertainty in the action is also greater than unity in this case. Thus, in any regime of measurement, the action uncertainty will be greater than unity.
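Schematically, and purely as a heuristic summary in the notation above (with $`\Delta q`$ measured in the natural units of the problem), the two regimes may be combined as
$$\Delta \alpha \sim \Delta \alpha _{\mathrm{cl}}+\Delta \alpha _{\mathrm{qu}}\sim \Delta q+\frac{1}{\Delta q}\gtrsim 2,$$
so that the width of the probability amplitude is minimized, and of order unity, when $`\Delta q\sim 1`$; this is the compromise made precise below.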
In view of this discussion, the width $`\Delta \alpha `$ of the probability amplitude will achieve its minimum value, i.e., the measurement will be optimized, for uncertainties in the measuring device $`\Delta q`$ that are neither too large nor too small. When this minimum nonvanishing value is achieved, the uncertainty in the action is also minimized and set equal to one. The limitation on the accuracy of any continuous measurement is, of course, an expression of Heisenberg’s uncertainty principle. Since we are talking about measuring trajectories in some sense, a resolution limit should appear, expressing the fact that position and momentum cannot be measured simultaneously with infinite accuracy. In the classical regime of measurement, the accuracy is limited by the intrinsic uncertainty of the measuring device. On the other hand, when very accurate devices are employed, the quantum fluctuations of the measuring apparatus affect the measured system and the final accuracy is also degraded. The maximum accuracy is obtained when a compromise is achieved between keeping the classical uncertainty low and keeping the quantum fluctuations also small.

### 1.2 Measuring the gravitational field

This discussion bears a close resemblance to the case of quantum gravity concerning the existence of a minimum length, where there exists a balance between the Heisenberg contribution $`1/\Delta p`$ to the uncertainty in the position and the active response of gravity to the presence of energy, which produces an uncertainty $`\Delta x\sim \ell _{*}^2\Delta p`$. Actually, any measurement of the gravitational field is not only extended in time, but also extended in space. These measurements are made by determining the change in the momentum of a test body of a given size. That measurements of the gravitational field have to be extended in spacetime, i.e., that they have to be continuous, is due to the dynamical nature of this field. Before analyzing the gravitational field, let us first briefly discuss the electromagnetic field, whose measurement can also be regarded as continuous. In the case of an electromagnetic field, the action has the form $`S=\int d^4x\,F^2`$, where $`F`$ is the electromagnetic field strength. Then, the action uncertainty principle $`\Delta S\gtrsim 1`$ implies that $`\Delta (F^2)\,l^4\gtrsim 1`$, which can be conveniently written as $`\Delta F\,l^3\gtrsim q/(Flq)`$, where $`l`$ is the linear size of the test body and $`q`$ is its electric charge. Here, we have already made the assumption that the quantum fluctuations of the test body are negligible, i.e., that its size $`l`$ is larger than its Compton wavelength $`1/m`$, where $`m`$ is its rest mass. $`Flq`$ is just the electromagnetic energy of the test body. If we impose the condition that the electromagnetic energy of the test body be smaller than its rest mass $`m`$, the uncertainty relation above becomes $`\Delta F\,l^3\gtrsim q/m`$.
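For clarity, this chain of estimates can be collected in a single line (a heuristic summary, not an additional assumption):
$$\Delta S\sim \Delta (F^2)\,l^4\sim F\,\Delta F\,l^4\gtrsim 1\Longrightarrow \Delta F\,l^3\gtrsim \frac{1}{Fl}=\frac{q}{Flq}\gtrsim \frac{q}{m},$$
where the last step uses the condition $`Flq\lesssim m`$ on the electromagnetic energy of the test body.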
The conditions $`l\gtrsim 1/m`$ and $`l\gtrsim q^2/m`$ that we have imposed on the test body, and that can be summarized by saying that it must be classical from both the quantum and the relativistic points of view, reflect the following assumptions: the measurement of the field averaged over a spacetime region, whose linear dimensions and time duration are determined by $`l`$, is performed by determining the initial and final momentum of a uniformly charged test body; the time interval required for the momentum measurement is small compared to $`l`$; any back-reaction can be neglected if the mass of the test body is sufficiently high; and, finally, the borders of the test body are separated by a spacelike interval. Let us now consider a measurement of the scalar curvature averaged over a spacetime region of linear dimension $`l`$, given by the resolution of the measuring device (the test body). The action is $`S=\ell _{*}^{-2}\int d^4x\,\sqrt{g}R`$, where the integral is extended over the spacetime region under consideration, so that it can be written as $`S=\ell _{*}^{-2}Rl^4`$, $`R`$ being now the average curvature. The action uncertainty principle $`\Delta S\gtrsim 1`$ gives the uncertainty relation for the curvature $`\Delta R\,l^4\gtrsim \ell _{*}^2`$, which translates into the uncertainty relation $`\Delta \Gamma \,l^3\gtrsim \ell _{*}^2`$ for the connection $`\Gamma `$ or, in terms of the metric tensor, $`\Delta g\,l^2\gtrsim \ell _{*}^2`$. The left-hand side of this relation can be interpreted as the uncertainty in the proper separation between the borders of the region that we are measuring, so that it states the minimum position uncertainty relation $`\Delta x\gtrsim \mathrm{min}(l,\ell _{*}^2/l)`$, which reaches its largest value, $`\ell _{*}`$, for $`l\sim \ell _{*}`$. It is worth noting that it is the concurrence of the three fundamental constants of nature $`\hbar `$, $`c`$ (which have already been set equal to 1), and $`G`$ that leads to a resolution limit. If any of them is dropped, this resolution limit disappears. We see from the uncertainty relation for the electromagnetic field that an infinite accuracy can be achieved if an appropriate test body is used. This is not the case for the gravitational interaction. Indeed, the role of $`F`$ is now played by $`\Gamma /\ell _{*}`$, where $`\Gamma `$ is the connection, and the role of $`q`$ is played by $`\ell _{*}m`$. It is worth noting that, by virtue of the equivalence principle, active gravitational mass, passive gravitational mass and energy (rest mass in the Newtonian limit) are all equal and, hence, for the gravitational interaction, the ratio $`q/m`$ is the universal constant $`\ell _{*}`$. The two requirements of Bohr and Rosenfeld are now $`l\gtrsim 1/m`$ and $`l\gtrsim \ell _{*}^2m`$, so that $`l\gtrsim \ell _{*}`$. This means that the test body should not be a black hole, i.e., its size should exceed its gravitational radius, and that both its mass and linear dimensions should be larger than Planck’s mass and length, respectively. As in the electromagnetic case, the Bohr and Rosenfeld requirements can be simply stated as follows: the test body must behave classically from the points of view of quantum mechanics, special relativity and gravitation.
Otherwise, the interactions between the test body and the object under study would make this distinction (the test body on the one hand and the system under study on the other) unclear, as happens in ordinary quantum mechanics: the measurement device must be classical or it is useless as a measuring apparatus. In this sense, within the context of quantum gravity, Planck’s scale establishes the border between the measuring device and the system that is being measured. We can see that the problem of measuring the gravitational field, i.e., the structure of spacetime, can be traced back to the fact that any such measurement is nonlocal, i.e., the measurement device is aware of what is happening at different points of spacetime and takes them into account. In other words, the measurement device averages over a spacetime region. The equivalence principle also plays a fundamental role: the measurement device cannot decouple from the measured system, and back reaction is unavoidable.

### 1.3 Vacuum fluctuations

One should expect fluctuations of the gravitational field owing not only to the quantum nature of other fields and measuring devices, but also to the quantum features of the gravitational field itself. As happens for any other field, in quantum gravity there will exist vacuum fluctuations that provide another contribution to the uncertainty in the gravitational field strength. It can also be computed by means of the action uncertainty principle. Indeed, in the above analyses, we have only considered first order terms in the uncertainty because it was assumed that there was a nonvanishing classical field that we wanted to measure. However, in the case of the vacuum, the field vanishes and higher order terms are necessary. Let us discuss this issue for the electromagnetic case first. The uncertainty in the action can be calculated as $`\Delta S=S[F+\Delta F]-S[F]`$, so that $`\Delta S=\int d^4x\,[2F\Delta F+(\Delta F)^2]`$. The action uncertainty principle then yields the relation $`\Delta F\,l^2\gtrsim -Fl^2+\sqrt{(Fl^2)^2+1}`$. In the already studied limit of a large electromagnetic field (or very large regions), $`Fl^2\gg 1`$, the uncertainty relation for the field becomes $`\Delta F\,l^3\gtrsim 1/(Fl)\gtrsim q/m`$, as obtained above. On the other hand, the limit of vanishing electromagnetic field (or very small regions of observation), $`Fl^2\ll 1`$, provides the vacuum fluctuations of the electromagnetic field, $`\Delta F\,l^2\gtrsim 1`$. In the gravitational case, the situation is similar. The gravitational action can be qualitatively written in terms of the connection $`\Gamma `$ as $`S=\ell _{*}^{-2}\int d^4x\,(\partial \Gamma +\Gamma ^2)`$, so that the uncertainty in the action has the form
$$\Delta S\sim \ell _{*}^{-2}\left[\Delta \Gamma \,l^3+\Gamma \,\Delta \Gamma \,l^4+(\Delta \Gamma )^2l^4\right].$$ (1.1)
It is easy to argue that $`\Gamma l`$ must be at most of order 1, so that the contribution of the second term is qualitatively equivalent to that of the first one. Indeed, $`\Gamma l`$ is the gravitational potential, which is given by $`\Gamma l=\Gamma _{\mathrm{ext}}l\,(1-\Gamma l)`$, $`\Gamma _{\mathrm{ext}}`$ being the external gravitational field. The last term is just an expression of the equivalence principle, according to which any kind of energy, including the gravitational one, also generates a gravitational field.
Thus, $`\Gamma l=\Gamma _{\mathrm{ext}}l/(1+\Gamma _{\mathrm{ext}}l)`$, which is always smaller than one. The action uncertainty principle then implies that $`\Delta \Gamma \,l^2\gtrsim -l+\sqrt{l^2+\ell _{*}^2}`$ and, in terms of the metric tensor, that
$$\Delta g\gtrsim -1+\sqrt{1+\ell _{*}^2/l^2}.$$ (1.2)
For test bodies much larger than Planck size, i.e., for $`l\gg \ell _{*}`$, this uncertainty relation becomes the already obtained $`\Delta g\gtrsim \ell _{*}^2/l^2`$, valid for classical test bodies. However, for spacetime regions of very small size — close to Planck length, $`l\lesssim \ell _{*}`$ — this uncertainty relation acquires the form $`\Delta g\gtrsim \ell _{*}/l`$. This uncertainty in the gravitational field comes from the vacuum fluctuations of spacetime itself and not from the disturbances introduced by measuring devices with $`l\lesssim \ell _{*}`$ . For alternative derivations of this uncertainty relation see, e.g., Refs. . We then see that proper distances have an uncertainty $`\sqrt{\Delta g\,l^2}`$ that approaches Planck length for very small (Planck scale) separations, thus suggesting that Planck length represents a lower bound to any distance measurement. At the Planck scale, the gravitational field uncertainty is of order 1, i.e., the fluctuations are as large as the geometry itself. This is an indication that the low-energy theory that we have been using breaks down at the Planck scale and that a full theory of quantum gravity is necessary to study that regime.

## 2 Spacetime foam

In his work “On the hypotheses which lie at the basis of the geometry” , written more than a century ago, Riemann already noticed that “\[…\]. If this independence of bodies from position does not exist, we cannot draw conclusions from metric relations of the great, to those of the infinitely small; in that case the curvature at each point may have an arbitrary value in three directions, provided that the total curvature of every measurable portion of space does not differ sensibly from zero. Still more complicated relations may exist if we no longer suppose the linear element expressible as the square root of a quadratic differential. Now it seems that the empirical notions on which the metrical determinations of space are founded, the notion of a solid body and of a ray of light, cease to be valid for the infinitely small. We are therefore quite at liberty to suppose that the metric relations of space in the infinitely small do not conform to the hypotheses of geometry; and we ought in fact to suppose it, if we can thereby obtain a simpler explanation of phenomena.” In the middle of this century, Weyl took these ideas a bit further and envisaged (multiply connected) topological structures of ever-increasing complexity as possible constituents of the physical description of surfaces. He wrote in this respect: “A more detailed scrutiny of a surface might disclose that what we had considered an elementary piece in reality has tiny handles attached to it which change the connectivity character of the piece, and that a microscope of ever greater magnification would reveal ever new topological complications of this type, ad infinitum.” A few years later, Wheeler described this topological complexity of spacetime at small length scales as the foamlike structure of spacetime .
According to Wheeler , at the Planck scale, the fluctuations of the geometry are so large and involve such large energy densities that gravitational collapse should be continuously being done and undone at that scale. Because of this perpetuity and ubiquity of Planck scale gravitational collapse, it should dominate Planck scale physics. In this continuously changing scenario, there is no reason to believe that spacetime topology remains fixed and predetermined. Rather, it seems natural to accept that the topology of spacetime is also subject to quantum fluctuations that change all its properties. Therefore, this scenario, in which spacetime is endowed with a foamlike structure at the Planck scale, seems to be a natural ingredient of the yet-to-be-built quantum theory of gravity. Furthermore, from the functional integration point of view , in quantum gravity all histories contribute and, among them, it seems unnatural not to consider nontrivial topologies, just as one considers nontrivial geometries (see, however, Ref. ). On the other hand, it has been shown that there exist solutions to the equations of general relativity on manifolds that present topology changes. In these solutions, the metric is degenerate on a set of measure zero but the curvature remains finite. This means that allowing degenerate metrics amounts to opening a doorway to classical topology change. Furthermore, despite the difficulties of finding an appropriate interpretation for these degenerate metrics in the classical Lorentzian theory, they will naturally enter the path integral formulation of quantum gravity. This is therefore an indication that topology change should be taken into account in any quantum theory of gravity (for an alternative description of topology change within the framework of noncommutative geometry, see Ref. ). Adopting a picture in which spacetime topology depends on the scale on which one performs the observations, we would conclude that there would be a trivial topology on large length scales but more and more complicated topologies as we approach the Planck scale. Spacetime foam may have important effects on low-energy physics. Indeed, the complicated topological structure may provide mechanisms for explaining the vanishing of the cosmological constant and for fixing all the constants of nature (for a recent proposal for deriving the electroweak coupling constant from spacetime foam, see Ref. ). Spacetime foam may also induce loss of quantum coherence and may well imply the existence of an additional source of uncertainty. Related to this, it might produce frequency-dependent energy shifts that would slightly alter the dispersion relations for the different low-energy fields. Finally, spacetime foam has been proposed as a mechanism for regulating both the ultraviolet (see also Refs. ) and the infrared behavior of quantum field theory. It is well known that it is not possible to classify all four-dimensional topologies and, consequently, all the possible components of spacetime foam. With the purpose of exemplifying the richness and complexity of the vacuum of quantum gravity, in what follows we will briefly discuss a few different kinds of fluctuations encompassed by spacetime foam, where the word fluctuations will just denote spacetime configurations that contribute most to the gravitational path integral : simply connected nontrivial topologies, multiply connected topologies with trivial second homology group (i.e.,
with vanishing second Betti number), spacetimes with a nontrivial causal structure, i.e., with closed timelike curves in a bounded region, and, finally, nonorientable tunnels. Hawking argued that the dominant contribution to the quantum gravitational path integral over metrics and topologies should come from topologies whose Euler characteristic $`\chi _\mathrm{E}`$ is approximately given by the spacetime volume in Planck units, i.e., from topologies with $`\chi _\mathrm{E}\sim (l/\ell _{*})^4`$. In this analysis, he restricted himself to compact simply-connected manifolds with negative cosmological constant $`\lambda `$. The choice of compact manifolds obeys a normalization condition similar to introducing a box of finite volume in nongravitational physics. The cosmological constant is introduced for this purpose, and it is taken negative because saddle-point Euclidean metrics with high Euler characteristic and positive $`\lambda `$ do not seem to exist, so that positive-$`\lambda `$ configurations will not contribute significantly to the Euclidean path integral. Finally, simple connectedness can be justified by noting that multiply-connected compact manifolds can be unwrapped by going to the universal covering manifold which, although noncompact, can be made compact at little cost in the action. He then concluded that, among these manifolds, the dominant topology is $`S^2\times S^2`$, which has an associated second Betti number $`B_2=\chi _\mathrm{E}-2=2`$. These results are based on the semiclassical approximation and, as such, should be treated with some caution. Compact simply-connected bubbles with the topology $`S^2\times S^2`$ can be interpreted as closed loops of virtual black holes if one realizes that the process of creation of a pair of real charged black holes accelerating away from each other in a spacetime which is asymptotic to $`\mathbb{R}^4`$ is provided by the Ernst solution . This solution has the topology $`S^2\times S^2`$ minus a point (which is sent to infinity), and this topology is the topological sum of the bubble $`S^2\times S^2`$ plus $`\mathbb{R}^4`$. Virtual black holes will not obey classical equations of motion but will appear as quantum fluctuations of spacetime and thus will become part of the spacetime foam. As a consequence, one can conclude that the dominant contribution to the path integral over compact simply-connected topologies would be given by a gas of virtual black holes with a density of the order of one virtual black hole per Planck volume. A similar analysis within the context of quantum conformal gravity has been performed by Strominger , with the conclusion that the quantum gravitational vacuum indeed has a very involved structure at the Planck scale, with a proliferation of nontrivial compact topologies. Carlip has studied the influence of the cosmological constant $`\lambda `$ on the sum over topologies. It should be stressed that this cosmological constant is not related to the observed cosmological constant . Rather, it is introduced as a source term of the form $`\ell _{*}^{-2}\lambda V`$, where $`V`$ is the spacetime volume, added to the vacuum gravitational action $`\ell _{*}^{-2}\int R\sqrt{g}`$. In the semiclassical approximation, this sum is dominated by the saddle points, which are Einstein metrics.
The classical Euclidean action for these metrics has the form $`-\tilde{v}/(\ell _{*}^2\lambda )`$, where, up to irrelevant numerical factors, $`\tilde{v}=\lambda ^2V`$ is the normalized spacetime volume of the manifold and is independent of $`\lambda `$. In fact, $`\tilde{v}`$ characterizes the topology of the manifold. For instance, for hyperbolic manifolds it can be identified with the Euler characteristic. Carlip has shown that, in the semiclassical approximation, the behavior of the density of topologies, which counts the number of manifolds with a given value of $`\tilde{v}`$, crucially depends on the sign of the cosmological constant. For negative values of $`\lambda `$, the partition function receives relevant contributions from spacetimes with arbitrarily complicated topology, so that processes that could be expected to contribute to the vacuum energy might instead produce more and more complicated spacetime topologies, as we briefly discuss in what follows, thus providing a mechanism for the vanishing of the cosmological constant. The Euclidean path integral in the semiclassical approximation can be written as
$$Z[\lambda ]=\sum _{\tilde{v}}\rho (\tilde{v})\,e^{\tilde{v}/(\ell _{*}^2\lambda )},$$ (2.1)
where $`\rho (\tilde{v})`$ is a density of topologies. It can be argued that, for negative $`\lambda `$, the density of topologies $`\rho (\tilde{v})`$ grows with the topological complexity $`\tilde{v}`$ at least as $`\rho (\tilde{v})\sim \mathrm{exp}(\tilde{v}\mathrm{ln}\tilde{v})`$, i.e., it is superexponential . Then, after introducing an infrared cutoff to ensure the convergence of the sum above, the topologies that contribute most to $`Z[\lambda ]`$ will lie around some maximum value $`\tilde{v}_{\mathrm{max}}`$ of the topological complexity. The true cosmological constant $`\mathrm{\Lambda }`$, obtained from the microcanonical ensemble, is in this case
$$\frac{1}{\mathrm{\Lambda }\ell _{*}^2}=\frac{\partial \mathrm{ln}\rho (\tilde{v})}{\partial \tilde{v}}\bigg|_{\tilde{v}_{\mathrm{max}}}\sim 1+\mathrm{ln}\tilde{v}_{\mathrm{max}}$$ (2.2)
and the “topological capacity” is
$$c_V=-\frac{1}{\mathrm{\Lambda }^2\ell _{*}^4}\left(\frac{\partial ^2\mathrm{ln}\rho (\tilde{v})}{\partial \tilde{v}^2}\right)^{-1}\bigg|_{\tilde{v}_{\mathrm{max}}}=\ell _{*}^{-2}\left(\frac{\partial \mathrm{\Lambda }}{\partial \tilde{v}_{\mathrm{max}}}\right)^{-1}\sim -\tilde{v}_{\mathrm{max}}(1+\mathrm{ln}\tilde{v}_{\mathrm{max}}),$$ (2.3)
where these quantities have been defined by analogy with the thermodynamical temperature and heat capacity, respectively. In this analogy, $`\mathrm{\Lambda }`$ plays the role of the temperature while the topological complexity $`\tilde{v}`$ is analogous to the energy. According to this picture, the behavior of spacetime foam would be analogous to that of a thermodynamical system with negative heat capacity, in which, as we put energy into the system, a greater and greater proportion of it is employed in the exponential production of new states rather than in increasing the energy of already existing states. Similarly, since the topological capacity is negative, which is a consequence of the superexponential density of topologies, the microcanonical cosmological constant will approach a vanishing value as the maximum topological complexity $`\tilde{v}_{\mathrm{max}}`$ approaches infinity.
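The mechanism can be illustrated with a toy numerical transcription of Eqs. (2.1) and (2.2). The choice $`\rho (\tilde{v})=e^{\tilde{v}\mathrm{ln}\tilde{v}}`$ follows the lower bound quoted above, while the value of $`\lambda \ell _{*}^2`$ and the infrared cutoffs are our own illustrative choices:

```python
import numpy as np

# Toy transcription of Eqs. (2.1)-(2.2) for lambda < 0, taking rho(v) = exp(v ln v).
lam_l2 = -0.2                                    # lambda * l_*^2, illustrative value
for cutoff in (1e3, 1e6, 1e9):
    v = np.geomspace(2.0, cutoff, 4000)          # normalized volumes up to the IR cutoff
    log_weight = v * np.log(v) + v / lam_l2      # log of rho(v) * exp(v / (l_*^2 lambda))
    v_dom = v[np.argmax(log_weight)]             # superexponential rho: the cutoff dominates
    print(cutoff, 1.0 / (1.0 + np.log(v_dom)))   # microcanonical Lambda * l_*^2 of Eq. (2.2)
```

The dominant complexity tracks the cutoff, and the resulting microcanonical $`\mathrm{\Lambda }`$ decreases as the cutoff grows, in line with the vanishing of the cosmological constant described above.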
We then see that this process, which could be expected to increase the vacuum energy $`|\mathrm{\Lambda }|`$, actually contributes to decreasing it, until it approaches the smallest value $`|\mathrm{\Lambda }|=0`$. The case of positive $`\lambda `$ presents a different behavior. The topological complexity has a finite maximum value, namely that of the four-sphere, $`\tilde{v}_{\mathrm{max}}=\chi _\mathrm{E}^{\mathrm{max}}=2`$, and the density of topologies $`\rho (\tilde{v})`$ increases as $`\tilde{v}`$ decreases. The superexponential lower bound to the density of topologies given above receives its main contribution from multiply connected manifolds, among which Euclidean wormholes have deserved much attention during the last decade (see, e.g., Ref. ). Wormholes are four-dimensional spacetime handles that have vanishing second Betti number, while the first Betti number provides the number of handles. They were regarded as a possible mechanism for the complete evaporation of black holes . An evaporating black hole would have a wormhole attached to it, and this wormhole would transport the information that had fallen into the black hole to another, quite possibly far away, region of spacetime. More recently, Hawking has proposed an alternative scenario in which black holes, at the end of their evaporation process, will have a very small size and will eventually dilute in the sea of virtual black holes that form part of spacetime foam. Wormholes also constitute the main ingredient in Coleman’s proposal for explaining the vanishing of the cosmological constant and for fixing all the constants of nature (see also Ref. ). Wormholes have been studied in the so-called dilute gas approximation, in which wormhole ends are far apart from each other. It should be noted, however, that, although the semiclassical approximation probably ceases to be valid at the Planck scale, it gives a clear indication that one should expect a topological density of one wormhole per unit four-volume, i.e., that the first Betti number should be approximately equal to the spacetime volume, $`B_1\sim V`$, at the Planck scale. Multiply connected topology fluctuations may suffer instabilities against uncontrolled growth both in Euclidean quantum gravity (see, however, Ref. ) and in the Lorentzian sector . These instabilities might put serious limitations on the kind of multiply connected topologies encompassed by spacetime foam. One should also expect other configurations with nontrivial causal structure to contribute to spacetime foam. For instance, quantum time machines have recently been proposed as possible components of spacetime foam. From the semiclassical point of view, most of the hitherto proposed time machines are unstable because quantum vacuum fluctuations generate divergences in the stress-energy tensor, i.e., they are subject to the chronology protection conjecture (for a beautiful and detailed report on time machines, see Ref. ). However, quantum time machines confined to small spacetime regions, for which the chronology protection conjecture does not apply , are likely to occur within the realm of spacetime foam, where strong causality violations or even the absence of a causal structure are expected. We have in fact argued that the spacetime metric undergoes quantum fluctuations of order 1 at the Planck scale.
Since the slope of the light cone is determined by the speed of light obtained from $`ds^2=g_{\mu \nu }dx^\mu dx^\nu =0`$, the uncertainty in the metric will also introduce an uncertainty in the slope of the light cone of order 1 at the Planck scale so that the notion of causality is completely lost. As happens with the causal structure, orientability is likely to be lost at the Planck scale , where the lack of an arbitrarily high resolution would blur the distinction between the two sides of any surface. Therefore, nonorientable topologies can be regarded as additional configurations that may well be present in spacetime foam and thus contribute to the vacuum structure of quantum gravity. Indeed, quantum mechanically stable nonorientable spacetime tunnels that connect two asymptotically flat regions with the topology of a Klein bottle can be constructed as a generalization of modified Misner space . The presence of quantum time machines or nonorientable tunnels in spacetime amounts to the existence of Planck-size regions in which violations of the weak energy condition occur. Although from the classical point of view, the weak energy condition seems to be preserved, it is well-known (see, e.g., Ref. ) that quantum effects may well involve such exotic types of energy. ## 3 Loss of quantum coherence The quantum structure of spacetime would be relevant at energies close to Planck scale and one could expect that the quantum gravitational virtual processes that constitute the spacetime foam could not be described without knowing the details of the theory of quantum gravity. However, the gravitational nature of spacetime fluctuations provides a mechanism for studying the effects of these virtual processes in the low-energy physics. Indeed, virtual gravitational collapse and topology change would forbid a proper definition of time at the Planck scale. More explicitly, in the presence of horizons, closed timelike curves, topology changes, etc., any Hamiltonian vector field that represents time evolution outside the fluctuation would vanish at points inside the fluctuation. This means that it would not be possible to describe the evolution by means of a Hamiltonian unitary flow from an initial to a final state and, consequently, quantum coherence would be lost. These effects and their order of magnitude would not depend on the detailed structure of the fluctuations but rather on their existence and global properties. In general, the regions in which the asymptotically timelike Hamiltonian vector fields vanish are associated with infinite redshift surfaces and, consequently, these small spacetime regions would behave as magnifiers of Planck length scales transforming them into low-energy modes as seen from outside the fluctuations . Therefore, spacetime foam and the related lower bound to spacetime uncertainties would leave their imprint, which may be not too small, in low-energy physics and low-energy experiments would effectively suffer a nonvanishing uncertainty coming from this lack of resolution in spacetime measurements. In this situation, loss of quantum coherence would be almost unavoidable . The idea that the quantum gravitational fluctuations contained in spacetime foam could lead to a loss of quantum coherence was put forward by Hawking and collaborators . This proposal was based in part on the thermal character of the emission predicted for evaporating black holes . 
If loss of coherence occurs in macroscopic black holes, it seems reasonable to conclude that the small black holes that are continuously being created and annihilated everywhere within spacetime foam will also induce loss of quantum coherence . On the other hand, scattering amplitudes of low-energy fields by topologically nontrivial configurations ($`S^2\times S^2`$, $`K3`$ and $`CP^2`$ bubbles) lead to the conclusion that, under certain simplifying assumptions, pure states turn into partly incoherent mixtures upon evolution in these nontrivial backgrounds and, consequently, that quantum coherence is lost. Explicit calculations for specific asymptotically flat spacetimes with nontrivial simply connected topologies or causal structures showed that it is not possible to separate the complex-time graphs for the resulting Lorentzian Green functions into two disconnected parts. More explicitly, the Euclidean Green functions obtained in these backgrounds mix positive and negative frequencies when the analytic continuation to Lorentzian signature is performed, since the Green functions develop extra acausal singularities. This situation is analogous to that in black hole physics, where Lorentzian Green functions show periodic poles in imaginary time . Although these calculations were performed in a finite-dimensional approximation to metrics of given topology, the contributions of these extra singularities can be determined by dimensional analysis and therefore they seem to be characteristic of each topology and to hold for any metric in it . In contrast, Gross calculated scattering amplitudes in specific four-dimensional solutions that can be interpreted as three-dimensional Kaluza-Klein instantons and concluded that there was no loss of quantum coherence in such models. Hawking in turn replied to this criticism that the solutions used by Gross were special cases in the sense that the associated three-dimensional Kaluza-Klein instantons were flat and therefore topologically trivial. He further argued, with examples, that solutions with topologically nontrivial three-dimensional instantons can be constructed and that they lead to nonunitary evolution.
### 3.1 Superscattering operator
Let us consider a scattering process in an asymptotically flat spacetime with nontrivial topology. If we denote the density matrices at the far past and far future by $`\rho _{-}`$ and $`\rho _+`$, respectively, there will be a superscattering operator $`\$`$ that relates both of them, $`\rho _+=\$\rho _{-}`$, i.e., that provides the evolution between the two asymptotically flat regions across the nontrivial topology fluctuation . Let $`|0_\pm \rangle `$ represent the vacuum at each region and $`\{|A_\pm \rangle \}`$ a basis of the Fock space, so that we can write $`|A_\pm \rangle =\mathrm{\Upsilon }_{\pm A}^{\dagger }|0_\pm \rangle `$, where $`\mathrm{\Upsilon }^A`$ is a string of annihilation operators and, consequently, $`\mathrm{\Upsilon }_A^{\dagger }`$ is a string of creation operators. The density matrices $`\rho _\pm `$ can then be written as $$\rho _\pm =\underset{AB}{\sum }\rho _{\pm B}^A|A_\pm \rangle \langle B_\pm |=\rho _{\pm B}^A\mathrm{\Upsilon }_{\pm A}^{\dagger }|0_\pm \rangle \langle 0_\pm |\mathrm{\Upsilon }_\pm ^B,$$ (3.1) where a sum over repeated indices is assumed.
The density matrices at both asymptotic regions can then be related by noting that the density matrix at the far future $`\rho _+`$ is given by the expectation values in the far-past state $`\rho _{-}`$ of a complete set of future operators built out of creation and annihilation operators, namely, $$\rho _{+D}^C=\mathrm{tr}(\mathrm{\Upsilon }_{+D}^{\dagger }\mathrm{\Upsilon }_+^C\rho _{-})=\rho _{-B}^A\langle 0_{-}|\mathrm{\Upsilon }_{-}^B\mathrm{\Upsilon }_{+D}^{\dagger }\mathrm{\Upsilon }_+^C\mathrm{\Upsilon }_{-A}^{\dagger }|0_{-}\rangle .$$ (3.2) Therefore, the superscattering matrix $`\$_{DA}^{CB}\equiv \langle 0_{-}|\mathrm{\Upsilon }_{-}^B\mathrm{\Upsilon }_{+D}^{\dagger }\mathrm{\Upsilon }_+^C\mathrm{\Upsilon }_{-A}^{\dagger }|0_{-}\rangle `$ relates the density matrices in both asymptotic regions, i.e., $`\rho _{+D}^C=\$_{DA}^{CB}\rho _{-B}^A`$. Note that the superscattering matrix $`\$_{DA}^{CB}`$ is Hermitian in both pairs of indices $`CD`$ and $`AB`$, which ensures that the Hermiticity of the density matrix is preserved. Also, the conservation of probability, i.e., $`\mathrm{tr}(\rho _\pm )=1`$, implies that $`\$_{CA}^{CB}=\delta _A^B`$. The relation between this superscattering operator and the Green functions discussed above is easily obtained if we write the annihilation operators $`a_\pm (k)`$ that form $`\mathrm{\Upsilon }_\pm ^A`$ at each asymptotic region in terms of the corresponding field operators. For instance, in the case of a complex scalar field, this expression (up to numerical normalization factors) has the well-known form $$a_\pm (k)=i\int _{\mathrm{\Sigma }_\pm }𝑑\mathrm{\Sigma }^\mu (x)e^{ikx}\stackrel{\leftrightarrow }{\partial }_\mu \varphi (x),$$ (3.3) where $`\mathrm{\Sigma }_\pm `$ represent spacelike surfaces in the infinite past and future. We now introduce the identity operator $`1=\sum _n|n\rangle \langle n|`$, with $`|n\rangle `$ being energy eigenstates, in the expression for $`\$`$, $$\$_{DA}^{CB}=\underset{n}{\sum }\langle 0_{-}|\mathrm{\Upsilon }_{-}^B\mathrm{\Upsilon }_{+D}^{\dagger }|n\rangle \langle n|\mathrm{\Upsilon }_+^C\mathrm{\Upsilon }_{-A}^{\dagger }|0_{-}\rangle $$ (3.4) and note that only the state with zero energy, i.e., $`n=0`$, can contribute if energy is to be conserved. If spacetime is globally hyperbolic, so that asymptotic completeness holds, there is a one-to-one map between states at any spacetime region, in particular, between the vacua $`|0_{-}\rangle `$ and $`|0_+\rangle `$. Therefore, the only contribution from $`1=\sum _n|n\rangle \langle n|`$ can be regarded as coming from $`|0_+\rangle \langle 0_+|`$: $$\$_{DA}^{CB}=\langle 0_{-}|\mathrm{\Upsilon }_{-}^B\mathrm{\Upsilon }_{+D}^{\dagger }|0_+\rangle \langle 0_+|\mathrm{\Upsilon }_+^C\mathrm{\Upsilon }_{-A}^{\dagger }|0_{-}\rangle .$$ (3.5) In this case, the superscattering operator factorizes into two unitary factors, $$\$_{DA}^{CB}=S_A^CS_D^{\dagger B},$$ (3.6) with $`S_A^C=\langle 0_+|\mathrm{\Upsilon }_+^C\mathrm{\Upsilon }_{-A}^{\dagger }|0_{-}\rangle =\langle C_+|A_{-}\rangle `$ and $`S_D^{\dagger B}=\langle B_{-}|D_+\rangle `$. Note that the scattering matrix $`S`$ is indeed unitary, i.e., $`S_A^CS_C^{\dagger B}=\sum _C\langle B_{-}|C_+\rangle \langle C_+|A_{-}\rangle =\delta _A^B`$, by virtue of the condition of conservation of probability. The factorizability of the superscattering operator $`\$`$ always implies unitary evolution for the density matrix. Indeed, if the superscattering operator can be factorized as $`\$\rho =S\rho S^{\dagger }`$ for some scattering operator $`S`$, then conservation of probability, which amounts to requiring that $`\mathrm{tr}(\$\rho )=1`$ provided that $`\mathrm{tr}(\rho )=1`$, implies that $$1=\mathrm{tr}(\$\rho )=\mathrm{tr}(S\rho S^{\dagger })=\mathrm{tr}(\rho S^{\dagger }S)$$ (3.7) and therefore $`S^{\dagger }S=1`$, i.e., the scattering operator $`S`$ is unitary.
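The factorization property can be checked with a small toy model (ours; finite-dimensional matrices standing in for the Fock-space structure above, not any specific gravitational background): a factorizable superscattering operator preserves both the trace and the purity $`\mathrm{tr}(\rho ^2)`$, while an incoherent average over unitaries — one simple way to break the factorization — preserves the trace but not the purity.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a complex Gaussian matrix gives a random unitary.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

purity = lambda rho: np.trace(rho @ rho).real

# Pure initial state |0><0| of a two-level system.
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

# Factorizable superscattering: $rho = S rho S^dagger with unitary S.
S = random_unitary(2)
rho_u = S @ rho @ S.conj().T
print("factorizable:     tr =", np.trace(rho_u).real, " tr rho^2 =", purity(rho_u))

# Non-factorizable $: an incoherent average over several unitaries.
# The trace is still conserved, but tr(rho^2) drops below 1.
Us = [random_unitary(2) for _ in range(4)]
rho_nf = sum(U @ rho @ U.conj().T for U in Us) / len(Us)
print("non-factorizable: tr =", np.trace(rho_nf).real, " tr rho^2 =", purity(rho_nf))
```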
In this case, the operator $`\$`$ also implies a unitary evolution for the density matrix, since it preserves $`\mathrm{tr}(\rho ^2)`$: $$\mathrm{tr}(\rho _+^2)=\mathrm{tr}[(\$\rho _{-})(\$\rho _{-})]=\mathrm{tr}(S\rho _{-}S^{\dagger }S\rho _{-}S^{\dagger })=\mathrm{tr}(S\rho _{-}^2S^{\dagger })=\mathrm{tr}(\rho _{-}^2).$$ (3.8) If, on the other hand, we cannot guarantee that states at different spacetime regions are one-to-one related, then the zero-energy state $`|0_{-}\rangle `$ will not correspond in general to the zero-energy state $`|0_+\rangle `$ and the superscattering operator will not admit a factorized form: $`\$\rho \ne S\rho S^{\dagger }`$. When the superscattering operator does not satisfy the factorization condition, the evolution does not preserve $`\mathrm{tr}(\rho ^2)`$ in general and quantum coherence is lost. This can be seen explicitly in the analysis below.
### 3.2 Quasilocal superscattering
Let us assume that the dynamics that underlies a superscattering operator $`\$`$ is quasilocal. By quasilocal we mean that any possible effect leading to a nonfactorizable superscattering operator is confined to a spacetime region whose size $`r`$ is much smaller than the characteristic spacetime size $`l`$ of the low-energy fields, i.e., we will assume that $`r/l\ll 1`$. Then, the superscattering equation $`\rho _+=\$\rho _{-}`$ can be obtained by integrating a differential equation of the form $`\dot{\rho }(t)=L(t)\rho (t)`$, where $`L(t)`$ is a linear operator . Furthermore, it can be shown that $`L(t)`$ can generally be written as $$L\rho =-i[H_0,\rho ]-\frac{1}{2}h_{\alpha \beta }(Q^\beta Q^\alpha \rho +\rho Q^\beta Q^\alpha -2Q^\alpha \rho Q^\beta )=-i[H_0,\rho ]-\frac{i}{2}\mathrm{Im}(h_{\alpha \beta })[Q^\alpha ,[Q^\beta ,\rho ]_+]-\frac{1}{2}\mathrm{Re}(h_{\alpha \beta })[Q^\alpha ,[Q^\beta ,\rho ]],$$ (3.9) where $`H_0`$ and $`Q^\alpha `$ form a complete set of Hermitian matrices, the $`Q^\alpha `$ have been chosen to be orthogonal, i.e., $`\mathrm{tr}(Q^\alpha Q^\beta )=\delta ^{\alpha \beta }`$, and $`h_{\alpha \beta }`$ is a Hermitian matrix. A sufficient, but not necessary, condition for having a decreasing value of $`\mathrm{tr}(\rho ^2)`$ and, consequently, loss of coherence is that $`h_{\alpha \beta }`$ be real and positive. As a simple example, we can consider the case in which we have only one operator $`Q`$. Then, $$\frac{d}{dt}\mathrm{tr}(\rho ^2)=-\mathrm{tr}(\rho ^2Q^2-\rho Q\rho Q).$$ (3.10) If we diagonalize the density matrix and call $`\{|i\rangle \}`$ the preferred basis in which $`\rho `$ is diagonal, so that $`\rho =\sum _ip_i|i\rangle \langle i|`$, this equation becomes $$\frac{d}{dt}\mathrm{tr}(\rho ^2)=-\underset{ij}{\sum }p_i|Q_{ij}|^2(p_i-p_j)=-\underset{i>j}{\sum }|Q_{ij}|^2(p_i-p_j)^2,$$ (3.11) where $`Q_{ij}=\langle i|Q|j\rangle `$. We then see that, provided that $`Q`$ is not diagonal in the basis $`\{|i\rangle \}`$, $`\frac{d}{dt}\mathrm{tr}(\rho ^2)<0`$, except for very specific states, such as the obvious case $`p_i=p_j`$, which has maximum entropy. There has been an interesting debate on the possible violations of energy and momentum conservation or locality in processes that do not lead to a factorizable $`\$`$ matrix. According to Gross and Ellis et al. , a nonfactorizable $`\$`$ matrix allows for continuous symmetries whose associated generators are not conserved. In other words, “invariance principles are no longer equivalent to conservation laws” .
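The identity between Eqs. (3.10) and (3.11) is straightforward to verify numerically; the sketch below (ours) draws a random Hermitian $`Q`$ and a random diagonal $`\rho `$ and confirms that the two expressions coincide and are positive, so that the purity decreases.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Density matrix diagonal in the preferred basis {|i>}.
p = rng.random(dim); p /= p.sum()
rho = np.diag(p).astype(complex)

# Random Hermitian Q with off-diagonal elements in that basis.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q = (A + A.conj().T) / 2

# Right side of Eq. (3.10), without the overall minus sign:
lhs = np.trace(rho @ rho @ Q @ Q - rho @ Q @ rho @ Q).real

# Right side of Eq. (3.11), without the overall minus sign:
rhs = sum(abs(Q[i, j])**2 * (p[i] - p[j])**2
          for i in range(dim) for j in range(dim) if i > j)

print(lhs, rhs)  # equal and positive, hence d/dt tr(rho^2) < 0
```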
Let us illustrate this issue with the simple example of two spin-1/2 particles in a state described by the density matrix $`\rho _{-}=\frac{1}{4}(1-\stackrel{\rightarrow }{s}_1\cdot \stackrel{\rightarrow }{s}_2)`$, where $`\stackrel{\rightarrow }{s}_{1,2}`$ are the spin vectors of particles 1 and 2, respectively. This density matrix represents a pure state since $`\mathrm{tr}(\rho _{-}^2)=1`$. In fact, the two particles are in a rotationally invariant pure state with vanishing total spin. Assume that the final state can be obtained by a superscattering operator $`\$`$. Then, $`\rho _+=\$\rho _{-}`$ must have the form $`\rho _+=\frac{1}{4}(1-\beta \stackrel{\rightarrow }{s}_1\cdot \stackrel{\rightarrow }{s}_2)`$ for it to conserve probability, $`\mathrm{tr}(\rho _+)=1`$, and be rotationally invariant. Furthermore, since $`\mathrm{tr}(\rho _+^2)=(1+3\beta ^2)/4\le 1`$, we must have $`\beta \le 1`$, the equality holding only when $`\rho _+`$ is a pure state. The initial state is such that $`\mathrm{tr}[(\stackrel{\rightarrow }{s}_1+\stackrel{\rightarrow }{s}_2)^2\rho _{-}]=0`$, which means that, in any given direction, there is initially a perfect anticorrelation between the spins of the two particles, so that the total spin vanishes. However, for the final state, $`\mathrm{tr}[(\stackrel{\rightarrow }{s}_1+\stackrel{\rightarrow }{s}_2)^2\rho _+]=1-\beta `$, up to a positive numerical factor. We then see that, despite the rotational invariance of the states and the evolution, we will not obtain total anticorrelation in the final state and, hence, spin conservation, unless $`\beta =1`$, i.e., unless quantum coherence is preserved. These authors argued, in particular, that energy and momentum conservation does not follow from Poincaré invariance. However, energy and momentum conservation is a consequence of the field equations in the asymptotic regions . This issue also arises when the evolution of the density matrix is obtained from a differential equation whose integral leads to a nonfactorizable $`\$`$ operator. If this equation is assumed to be local on scales a bit larger than the Planck length, then there appears a conflict between this pretended locality on the one hand and energy and momentum conservation on the other . This violation of energy and momentum conservation comes from the high-energy modes, whose characteristic evolution times are of the same order as the size of the nontrivial topology region. Again, the existence of asymptotic regions would enforce this conservation, and this can be effectively achieved if the propagating fields are regarded as low-energy ones and, therefore, with characteristic size $`l`$ much larger than the size $`r`$ of the fluctuation. Furthermore, Unruh and Wald analyzed simple non-Markovian toy models that lose quantum coherence and argued that conservation of energy and momentum need not be in conflict with causality and locality, in contrast with the claims of Ref. (see also Ref. ). Therefore, these topology fluctuations can be regarded as nonlocal on the length scale $`r`$, since, within this scale, the unitary $`S`$-matrix diagrams will be mixed (thus leading to a nonfactorizable $`\$`$ matrix), while, from the low-energy point of view, the fluctuations are confined to a very small region so that they can be described as local effective interactions in a master differential equation as above. This relation will be the subject of the next two sections.
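These statements can be verified directly (sketch ours; we use Pauli-normalized spin operators, so the overall normalization of the total-spin expectation value is convention dependent, but its proportionality to $`1-\beta `$ is not):

```python
import numpy as np

# Pauli matrices; two-particle operators s1 = sigma x 1, s2 = 1 x sigma.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

s1 = [np.kron(s, I2) for s in (sx, sy, sz)]
s2 = [np.kron(I2, s) for s in (sx, sy, sz)]
dot12 = sum(a @ b for a, b in zip(s1, s2))                  # s1 . s2
S_tot_sq = sum((a + b) @ (a + b) for a, b in zip(s1, s2))   # (s1 + s2)^2

def rho(beta):
    return (np.eye(4) - beta * dot12) / 4.0

for beta in (1.0, 0.5, 0.0):
    r = rho(beta)
    print(f"beta={beta}: tr rho^2 = {np.trace(r @ r).real:.4f}, "
          f"(1+3b^2)/4 = {(1 + 3 * beta**2) / 4:.4f}, "
          f"<(s1+s2)^2> = {np.trace(S_tot_sq @ r).real:.4f}")
# Only beta = 1 gives a pure state with vanishing total spin; for beta < 1
# the total-spin expectation grows linearly in 1 - beta (here with an extra
# numerical factor coming from the Pauli normalization).
```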
## 4 Quantum bath
Spacetime foam contains, according to the scenario above, highly nontrivial topological or causal configurations, which will introduce additional features in the description of the evolution of low-energy fields as compared with topologically trivial, globally hyperbolic manifolds. The analogy with fields propagating in a finite-temperature environment is compelling. Actually, despite the different conceptual and physical origin of the fluctuations, we will see that the effects of these two systems are not that different. In order to build an effective theory that accounts for the propagation of low-energy fields in a foamlike spacetime, we will replace the spacetime foam, in which there possibly exists a minimum length because the notion of distance is not valid at such scales, with a fixed background with low-energy fields living on it. We will perform a 3+1 foliation of the effective spacetime that, for simplicity, will be regarded as flat, $`t`$ denoting the time parameter and $`x`$ the spatial coordinates. The gravitational fluctuations and the minimum length present in the original spacetime foam will be modeled by means of nonlocal interactions that relate spacetime points that are sufficiently close in the effective background, where a well-defined notion of distance exists (for related ideas see also Refs. and for a review on stochastic gravity see Ref. ). Furthermore, these nonlocal interactions will be described in terms of local interactions as follows. Let $`\{h_i[\varphi ;t]\}`$ be a basis of local gauge-invariant interactions at the spacetime point $`(x,t)`$ made out of factors of the form $`\ell _*^{2n(1+s)-4}\left[\varphi (x,t)\right]^{2n}`$, $`\varphi `$ being the low-energy field strength of spin $`s`$. As a notational convention, each index $`i`$ implies a dependence on the spatial position $`x`$ by default; whenever the index $`i`$ does not carry an implicit spatial dependence, it will appear underlined, $`\underset{¯}{i}`$. Also, any contraction of indices (except for underlined ones) will entail an integral over spatial positions.
### 4.1 Influence functional
The low-energy density matrix $`\rho [\varphi ,\phi ;t]`$ at the time $`t`$ in the field representation can be generally related to the density matrix at $`t=0`$, $$\rho [\varphi ,\phi ;t]=\int D\varphi ^{\prime }D\phi ^{\prime }\$[\varphi ,\phi ;t|\varphi ^{\prime },\phi ^{\prime };0]\rho [\varphi ^{\prime },\phi ^{\prime };0],$$ (4.1) which we will write in the compact form $`\rho (t)=\$(t)\rho (0)`$. Here $`\$(t)`$ is the propagator for the density matrix and $`D\varphi \equiv \prod _x𝑑\varphi (x,t)`$. This propagator has the form $$\$[\varphi ,\phi ;t|\varphi ^{\prime },\phi ^{\prime };0]=\int 𝒟\varphi 𝒟\phi e^{i\{S_0[\varphi ;t]-S_0[\phi ;t]\}}ℱ[\varphi ,\phi ;t],$$ (4.2) where $`ℱ[\varphi ,\phi ;t]`$ is the so-called influence functional , $`𝒟\varphi \equiv \prod _{x,s}𝑑\varphi (x,s)`$, and these path integrals are performed over paths $`\varphi (s)`$, $`\phi (s)`$ whose end points match the values $`\varphi `$, $`\phi `$ at $`t`$ and $`\varphi ^{\prime }`$, $`\phi ^{\prime }`$ at $`s=0`$. The influence functional $`ℱ[\varphi ,\phi ;t]`$ contains all the information about the interaction of the low-energy fields with spacetime foam. Let us now introduce another functional $`𝒲[\varphi ,\phi ;t]`$, which we will call the influence action, such that $`ℱ[\varphi ,\phi ;t]=\mathrm{exp}𝒲[\varphi ,\phi ;t]`$. If the influence action $`𝒲[\varphi ,\phi ;t]`$ were equal to zero, we would have unitary evolution provided by a factorized superscattering matrix.
However, $`𝒲`$ does not vanish in the presence of gravitational fluctuations and, in fact, the nonlocal effective interactions will be modeled by terms in $`𝒲`$ that follow the pattern $$\int 𝑑t_1\cdots 𝑑t_N\upsilon ^{i_1\cdots i_N}(t_1\cdots t_N)h_{i_1}[\varphi ;t_1]\cdots h_{i_N}[\varphi ;t_N].$$ (4.3) Here, $`\upsilon ^{i_1\cdots i_N}(t_1\cdots t_N)`$ are dimensionless complex functions that vanish for relative spacetime distances larger than the length scale $`r`$ of the gravitational fluctuations. If the gravitational fluctuations are smooth, in the sense that they only involve trivial topologies or contain no horizons, the coefficients $`\upsilon ^{i_1\cdots i_N}(t_1\cdots t_N)`$ will be $`N`$-point propagators which, as such, will have infinitely long tails, and the size of the gravitational fluctuations will be effectively infinite. In other words, we would be dealing with a local theory written in a nonstandard way. The gravitational origin of these fluctuations eliminates these long tails because of the presence of gravitational collapse and topology change. This means that, for instance, virtual black holes will appear and disappear and horizons will be present throughout. As Padmanabhan has also argued, horizons induce nonlocal interactions of finite range, since the Planckian degrees of freedom will be magnified by the horizon (because of an infinite redshift factor), thus giving rise to low-energy interactions as seen from outside the gravitational fluctuation. Virtual black holes represent one kind of component of spacetime foam that, because of their horizons and nontrivial topology, will induce nonlocal interactions; most probably, other fluctuations with complicated topology will warp spacetime in a similar way and the same magnification process will also take place. The coefficients $`\upsilon ^{i_1\cdots i_N}(t_1\cdots t_N)`$ can depend only on relative positions and not on the location of the gravitational fluctuation itself. The physical reason for this is conservation of energy and momentum: the fluctuations do not carry energy, momentum, or gauge charges. Thus, diffeomorphism invariance is preserved, at least at low-energy scales. One should not expect that at the Planck scale this invariance still holds. However, this violation of energy-momentum conservation is safely kept within Planck-scale limits , where the processes will no longer be Markovian. Finally, the coefficients $`\upsilon ^{i_1\cdots i_N}(t_1\cdots t_N)`$ will contain a factor $`[e^{-S(r)/2}]^N`$, $`S(r)`$ being the Euclidean action of the gravitational fluctuation, which is of the order $`(r/\ell _*)^2`$. This is just an expression of the idea that, inside large fluctuations, interactions that involve a large number of spacetime points are strongly suppressed. As the size of the fluctuation decreases, the probability for events in which three or more spacetime points are correlated increases, in close analogy with the kinetic theory of gases: the higher the density of molecules in the gas, the more probable it is that a large number of molecules collide at the same point. The expansion parameter in that example is typically the density of molecules. In our case, the natural expansion parameter is the transition amplitude. It is given by the square root of the two-point transition probability, which in the semiclassical approximation is of the form $`e^{-S(r)}`$. Thus the $`N`$-local interaction term in $`𝒲`$ will be of order $`[e^{-S(r)/2}]^N`$.
In the weak-coupling approximation, i.e., up to second order in the expansion parameter, the trilocal and higher effective interactions do not contribute. The terms corresponding to $`N=0,1`$ are local and can be absorbed in the bare action (note that the coefficient $`\upsilon `$ is constant and that the coefficients $`\upsilon ^{i_1}(t_1)`$ cannot depend on spacetime positions because of diffeomorphism invariance). Consequently, we can write the influence action $`𝒲`$ as a bilocal whose most general form is $$𝒲[\varphi ,\phi ;t]=-\frac{1}{2}\int _0^t𝑑s\int _0^s𝑑s^{\prime }\{h_i[\varphi ;s]-h_i[\phi ;s]\}\times \{\upsilon ^{ij}(s-s^{\prime })h_j[\varphi ;s^{\prime }]-\upsilon ^{ij}(s-s^{\prime })^{*}h_j[\phi ;s^{\prime }]\},$$ (4.4) where we have renamed $`\upsilon ^{ij}(s,s^{\prime })`$ as $`\upsilon ^{ij}(s-s^{\prime })`$ and, without loss of generality, we have set $`s>s^{\prime }`$. This complex coefficient is Hermitian in the pair of indices $`ij`$ and depends on the spatial positions $`x_{\underset{¯}{i}}`$ and $`x_{\underset{¯}{j}}`$ only through the relative distance $`|x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|`$. It is of order $`e^{-S(r)}`$ and is concentrated within a spacetime region of size $`r`$. Let us now decompose $`\upsilon ^{ij}(\tau )`$ into its real and imaginary parts as $$\upsilon ^{ij}(\tau )=c^{ij}(\tau )+i\dot{f}^{ij}(\tau ),$$ (4.5) where $`c^{ij}(\tau )`$ and $`f^{ij}(\tau )`$ are real and symmetric, and the overdot denotes a time derivative. The imaginary part is antisymmetric under the exchange of $`(i,\tau )`$ and $`(j,-\tau )`$ and has been written as a time derivative for convenience, since this choice does not involve any restriction. The $`f`$ term can then be integrated by parts to obtain $$𝒲[\varphi ,\phi ;t]=-\frac{1}{2}\int _0^t𝑑s\int _0^s𝑑s^{\prime }c^{ij}(s-s^{\prime })\{h_i[\varphi ;s]-h_i[\phi ;s]\}\{h_j[\varphi ;s^{\prime }]-h_j[\phi ;s^{\prime }]\}-\frac{i}{2}\int _0^t𝑑s\int _0^s𝑑s^{\prime }f^{ij}(s-s^{\prime })\{h_i[\varphi ;s]-h_i[\phi ;s]\}\{\dot{h}_j[\varphi ;s^{\prime }]+\dot{h}_j[\phi ;s^{\prime }]\}.$$ (4.6) In this integration, we have ignored surface terms, which contribute, at most, a finite renormalization of the bare low-energy Hamiltonian. The functions $`f^{ij}(\tau )`$ and $`c^{ij}(\tau )`$ characterize spacetime foam in our effective description but, under fairly general assumptions, the characterization can be carried out by a smaller set of independent functions. In what follows we will simplify this set. With this aim, we first write $`f^{ij}(\tau )`$ and $`c^{ij}(\tau )`$ in terms of their spectral counterparts $`\stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$ and $`\stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$.
Lorentz invariance and spatial homogeneity imply that $`f^{ij}(\tau )`$ and $`c^{ij}(\tau )`$ must have the form $$f^{ij}(\tau )=\int _0^{\mathrm{\infty }}𝑑\omega \stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )8\pi \frac{\mathrm{sin}(\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|)}{\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|}\mathrm{cos}(\omega \tau ),$$ (4.7) $$c^{ij}(\tau )=\int _0^{\mathrm{\infty }}𝑑\omega \stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )8\pi \frac{\mathrm{sin}(\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|)}{\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|}\mathrm{cos}(\omega \tau ),$$ (4.8) for some real functions $`\stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$ and $`\stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$. It seems reasonable to assume a kind of equanimity principle by which spacetime foam produces interactions whose intensity does not depend on the pair of interactions $`h_i`$ itself but on its independent components for each mode, i.e., that the spectral interaction is given by products of functions $`\chi ^{\underset{¯}{i}}(\omega )`$: $$\stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )=\chi ^{\underset{¯}{i}}(\omega )\chi ^{\underset{¯}{j}}(\omega ),$$ (4.9) $$\stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )=g(\omega )\chi ^{\underset{¯}{i}}(\omega )\chi ^{\underset{¯}{j}}(\omega ),$$ (4.10) where $`g(\omega )`$ is a function that, together with $`\chi ^{\underset{¯}{i}}(\omega )`$, fully characterizes spacetime foam under these assumptions. Then, $`f^{ij}(\tau )`$ and $`c^{ij}(\tau )`$ can be written as $$f^{ij}(\tau )=\int _0^{\mathrm{\infty }}𝑑\omega G^{ij}(\omega )\mathrm{cos}(\omega \tau ),$$ (4.11) $$c^{ij}(\tau )=\int _0^{\mathrm{\infty }}𝑑\omega g(\omega )G^{ij}(\omega )\mathrm{cos}(\omega \tau ),$$ (4.12) with $$G^{ij}(\omega )=8\pi \frac{\mathrm{sin}(\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|)}{\omega |x_{\underset{¯}{i}}-x_{\underset{¯}{j}}|}\chi ^{\underset{¯}{i}}(\omega )\chi ^{\underset{¯}{j}}(\omega ).$$ (4.13) The functions $`\chi ^{\underset{¯}{i}}(\omega )`$ can be interpreted as the spectral effective couplings between spacetime foam and the low-energy fields. Since $`\upsilon ^{ij}(\tau )`$ is of order $`e^{-S(r)}`$ and is concentrated in a region of linear size $`r`$, the couplings $`\chi ^{\underset{¯}{i}}(\omega )`$ will have dimensions of length, will be of order $`e^{-S(r)/2}r`$, and will induce a significant interaction for all frequencies $`\omega `$ up to the natural cutoff $`r^{-1}`$. On the other hand, the function $`g(\omega )`$ has dimensions of inverse length and must be of order $`r^{-1}`$. Actually, this function must be almost flat in the frequency range $`(0,r^{-1})`$ to ensure that all the modes contribute significantly to all bilocal interactions. As we will see, the function $`g(\omega )`$ also admits a straightforward interpretation in terms of the mean occupation number of the mode of frequency $`\omega `$. Once we have computed the influence functional $`ℱ`$, it is possible to obtain the master equation that governs the evolution of the density matrix of the low-energy fields, although we will not follow this procedure here: we postpone the derivation of the full master equation until the next section. The bilocal effective interaction does not lead to a unitary evolution.
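A toy spectral model (ours: a single coupling with a sharp cutoff at $`\omega =1/r`$, a flat $`g`$, and coincident spatial points) shows how Eqs. (4.11) and (4.12) produce kernels concentrated within a correlation time of order $`r`$:

```python
import numpy as np

r = 1.0                                   # fluctuation size (arbitrary units)
omega = np.linspace(1e-4, 1.0 / r, 4000)  # modes up to the cutoff 1/r
domega = omega[1] - omega[0]

G = np.ones_like(omega)                   # G^{ij}(omega), rescaled to 1
g = np.full_like(omega, 1.0 / r)          # flat g(omega) of order 1/r

def cos_transform(spec, tau):
    # Discretized version of Eqs. (4.11)-(4.12).
    return np.sum(spec * np.cos(omega * tau)) * domega

for tau in (0.0, 0.5, 1.0, 2.0, 5.0, 20.0):
    f_val = cos_transform(G, tau)
    c_val = cos_transform(g * G, tau)
    print(f"tau = {tau:5.1f} r   f = {f_val:+.4f}   c = {c_val:+.4f}")
# f and c are O(1/r) at tau = 0 and fall off for tau >> r: the bilocal
# interaction only correlates points within the fluctuation size r.
```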
The reason for this is that it is not sufficient to know the fields and their time derivatives at one instant of time in order to know their values at a later time: we need to know the history of the system, at least for a time of order $`r`$. There exist different trajectories that arrive at a given configuration $`(\varphi ,\dot{\varphi })`$. The future evolution depends on these past trajectories and not only on the values of $`\varphi `$ and $`\dot{\varphi }`$ at that instant of time. Therefore, the system cannot possess a well-defined Hamiltonian vector field and suffers from an intrinsic loss of predictability . This can be easily seen if we restrict ourselves to the case in which $`f^{ij}(\tau )`$ vanishes, i.e., $`\upsilon ^{ij}(\tau )=c^{ij}(\tau )`$. Then, the influence functional $`ℱ_\mathrm{c}`$ is the characteristic functional of a Gaussian probability functional distribution, i.e., it can be written as $$ℱ_\mathrm{c}[\varphi ,\phi ;t]=\int 𝒟\alpha e^{-\frac{1}{2}\int _0^t𝑑s\int _0^s𝑑s^{\prime }\gamma _{ij}(s-s^{\prime })\alpha ^i(s)\alpha ^j(s^{\prime })}e^{i\int _0^t𝑑s\alpha ^i(s)\{h_i[\varphi ;s]-h_i[\phi ;s]\}}.$$ (4.14) Here, the continuous matrix $`\gamma _{ij}(s-s^{\prime })`$ is the inverse of $`c^{ij}(s-s^{\prime })`$, i.e., $$\int 𝑑s^{\prime \prime }\gamma _{ik}(s-s^{\prime \prime })c^{kj}(s^{\prime \prime }-s^{\prime })=\delta _i^j\delta (s-s^{\prime }).$$ (4.15) Then, in this case, the propagator $`\$(t)`$ has the form $$\$(t)=\int 𝒟\alpha P[\alpha ]\$_\alpha (t),$$ (4.16) where $`\$_\alpha (t)`$ is just a factorizable propagator associated with the unitary evolution governed by the action $`S_0+\alpha ^ih_i`$ and $$P[\alpha ]=e^{-\frac{1}{2}\int _0^t𝑑s\int _0^s𝑑s^{\prime }\gamma _{ij}(s-s^{\prime })\alpha ^i(s)\alpha ^j(s^{\prime })}.$$ (4.17) Therefore, $`\$(t)`$ is just the average, with Gaussian weight $`P[\alpha ]`$, of the unitary propagator $`\$_\alpha (t)`$. Note that the quadratic character of the distribution for the fields $`\alpha ^i`$ is a consequence of the weak-coupling approximation, which keeps only the bilocal term in the action. Higher-order terms would introduce deviations from this noise distribution. The nonunitary nature of the bilocal interaction has been encoded in the fields $`\alpha ^i`$, so that, when we insist on writing the system in terms of unitary evolution operators, an additional sum over the part of the system that is unknown naturally appears. Note also that we have a different field $`\alpha ^i`$ for each kind of interaction $`h_i`$. Thus, we have transferred the nonlocality of the low-energy field $`\varphi `$ to the set of fields $`\alpha ^i`$, which are nontrivially coupled to it and which represent spacetime foam.
### 4.2 Semiclassical diffusion
We can see that the limit of vanishing $`f^{ij}(\tau )`$ with nonzero $`c^{ij}(\tau )`$ (and therefore real $`\upsilon ^{ij}(\tau )`$) is a kind of semiclassical approximation, since, in this limit, one ignores the quantum nature of the gravitational fluctuations. Indeed, the fields $`\alpha ^i`$ represent spacetime foam but, as we have seen, the path integral for the whole system does not contain any trace of the dynamical character of the fields $`\alpha ^i`$; it just contains a Gaussian probability distribution for them. The path integral above can then be interpreted as a Gaussian average over the classical noise sources $`\alpha ^i`$. Classicality here means that we can keep the sources $`\alpha ^i`$ fixed, ignoring the noise commutation relations, and, at the end of the calculations, we just average over them.
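Equations (4.16) and (4.17) say that the $`c`$ term alone is equivalent to averaging unitary evolutions over classical Gaussian noise. A minimal sketch (ours) for a single qubit, with $`h=\sigma _z`$ and delta-correlated noise $`\alpha (t)`$, reproduces the expected Gaussian decay of the off-diagonal coherence:

```python
import numpy as np

rng = np.random.default_rng(2)
c0, dt, steps, samples = 1.0, 0.01, 100, 4000
t = steps * dt

# Pure initial state (|0> + |1>)/sqrt(2): maximal off-diagonal coherence.
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())

rho_avg = np.zeros((2, 2), dtype=complex)
for _ in range(samples):
    # One Gaussian noise history with <alpha(t) alpha(t')> = c0 delta(t - t').
    alpha = rng.normal(0.0, np.sqrt(c0 / dt), size=steps)
    phi = alpha.sum() * dt                               # integral of alpha dt
    U = np.diag([np.exp(-1j * phi), np.exp(1j * phi)])   # exp(-i phi sigma_z)
    rho_avg += U @ rho0 @ U.conj().T
rho_avg /= samples

print("averaged coherence: ", abs(rho_avg[0, 1]))
print("Gaussian prediction:", 0.5 * np.exp(-2.0 * c0 * t))
```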
The low-energy density matrix $`\rho `$ then satisfies the following master equation: $$\dot{\rho }=-i[H_0,\rho ]-\int _0^{\mathrm{\infty }}𝑑\tau c^{ij}(\tau )[h_i,[h_j^\mathrm{I}(-\tau ),\rho ]],$$ (4.18) where $`h_j^\mathrm{I}(-\tau )=e^{-iH_0\tau }h_je^{iH_0\tau }`$. Since $`e^{-iH_0\tau }=1+O(\tau /l)`$, the final form of the master equation for a low-energy system subject to gravitational fluctuations treated as a classical environment, at zeroth order in $`r/l`$ (the effect of higher-order terms in $`r/l`$ will be thoroughly studied together with the quantum effects), is $$\dot{\rho }=-i[H_0,\rho ]-\int _0^{\mathrm{\infty }}𝑑\tau c^{ij}(\tau )[h_i,[h_j,\rho ]]$$ (4.19) (for similar approaches yielding this type of master equation see also Refs. ). The first term gives the low-energy Hamiltonian evolution that would also be present in the absence of fluctuations. The second term is a diffusion term, which will be responsible for the loss of coherence (and the subsequent increase of entropy). It is a direct consequence of the foamlike structure of spacetime and the related existence of a minimum length. Note that there is no dissipation term. Such a term is usually present in order to preserve the commutation relations under time evolution. However, we have considered the classical noise limit, i.e., the noise $`\alpha `$ has been treated as a classical source, so the commutation relations are automatically preserved. We will see that the dissipation term, apart from being of quantum origin, is $`r/l`$ times smaller than the diffusion term, and here we have only considered the zeroth-order approximation in $`r/l`$. The characteristic decoherence time $`\tau _d`$ induced by the diffusion term can be easily estimated. Indeed, the interaction Hamiltonian density $`h_i`$ is of order $`\ell _*^{-4}(\ell _*/l)^{2n_{\underset{¯}{i}}(1+s_{\underset{¯}{i}})}`$ and $`c^{ij}(\tau )`$ is of order $`e^{-S(r)}`$. Furthermore, the diffusion term contains one integral over time and two integrals over spatial positions. The integral over time and the one over relative spatial positions provide a factor $`r^4`$, since $`c^{ij}(\tau )`$ is different from zero only in a spacetime region of size $`r^4`$, and the remaining integral over global spatial positions provides a factor $`l^3`$, the typical low-energy spatial volume. Putting everything together, we see that the diffusion term is of order $`l^{-1}ϵ^2\sum _{\underset{¯}{i}\underset{¯}{j}}(\ell _*/l)^{\eta _{\underset{¯}{i}}+\eta _{\underset{¯}{j}}}`$, with $`\eta _{\underset{¯}{i}}=2n_{\underset{¯}{i}}(1+s_{\underset{¯}{i}})-2`$ and $`ϵ=e^{-S(r)/2}(r/\ell _*)^2`$. This quantity defines the inverse of the decoherence time $`\tau _d`$. Therefore, the ratio between the decoherence time $`\tau _d`$ and the low-energy length scale $`l`$ is $$\tau _d/l\sim ϵ^{-2}\left[\underset{\underset{¯}{i}\underset{¯}{j}}{\sum }(\ell _*/l)^{\eta _{\underset{¯}{i}}+\eta _{\underset{¯}{j}}}\right]^{-1}.$$ (4.20) Because of the exponential factor in $`ϵ`$, only the gravitational fluctuations whose size is very close to the Planck length will give a sufficiently small decoherence time. Slightly larger fluctuations will have a very small effect on the unitarity of the effective theory. For the interaction term that corresponds to the mass of a scalar field, the parameter $`\eta `$ vanishes and, consequently, $`\tau _d/l\sim ϵ^{-2}`$. Thus, the scalar mass term will lose coherence faster than any other interaction.
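The strong sensitivity to the fluctuation size in Eq. (4.20) is easy to quantify (sketch ours; Planck units, scalar mass term $`\eta =0`$, and $`S(r)=(r/\ell _*)^2`$ taken for the Euclidean action):

```python
import numpy as np

# Eq. (4.20) for the scalar mass term (eta = 0), in Planck units l_* = 1:
# epsilon = exp(-S(r)/2) (r/l_*)^2 with S(r) ~ (r/l_*)^2.
for r in (1.0, 2.0, 3.0, 5.0, 10.0):
    S = r**2
    eps = np.exp(-S / 2.0) * r**2
    print(f"r = {r:4.1f} l_*   epsilon = {eps:.2e}   tau_d/l ~ eps^-2 = {eps**-2:.2e}")
# Planck-size fluctuations (r ~ l_*) give a decoherence time of the order of
# the low-energy evolution time; already for r ~ 5 l_* it becomes enormous.
```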
Indeed, for higher spins and/or powers of the field strength, $`\eta \ge 1`$ and therefore $`\tau _d/l`$ increases by powers of $`l/\ell _*`$. For instance, the next relevant decoherence time corresponds to the scalar-fermion interaction term $`\varphi ^2\overline{\psi }\psi `$, which has an associated decoherence ratio $`\tau _d/l\sim ϵ^{-2}l/\ell _*`$. We see that the decoherence time for the mass of scalars is independent of the low-energy length scale and, for gravitational fluctuations of size close to the Planck length, $`ϵ`$ may be not too small, so that scalar masses may lose coherence fairly fast, maybe in a few times the typical evolution scale. Hawking has argued that this might be the reason for not observing the Higgs particle . Higher-power and/or higher-spin interactions will lose coherence much more slowly, but for sufficiently high energies $`l^{-1}`$, although still much smaller than the gravitational-fluctuation energy $`r^{-1}`$, the decoherence time may be small enough. This means that quantum fields will lose coherence faster in higher-energy regimes. Hawking has also suggested that loss of quantum coherence might be responsible for the vanishing of the $`\theta `$ angle in quantum chromodynamics .
### 4.3 Spacetime foam as a quantum bath
As we have briefly mentioned before, considering that the coefficients $`\upsilon ^{ij}`$ are real amounts to ignoring the quantum dynamical nature of spacetime foam, paying attention only to its statistical properties. In what follows, we will study these quantum effects and show that spacetime foam can be effectively described in terms of a quantum thermal bath, with a nearly Planckian temperature, that has a weak interaction with the low-energy fields. As a consequence, other effects apart from loss of coherence, such as Lamb and Stark transition-frequency shifts and quantum damping, characteristic of systems in a quantum environment , naturally appear as low-energy predictions of this model . Let us consider a Hamiltonian of the form $$H=H_0+H_{\mathrm{int}}+H_\mathrm{b}.$$ (4.21) $`H_0`$ is the bare Hamiltonian that represents the low-energy fields and $`H_\mathrm{b}`$ is the Hamiltonian of a bath that, for simplicity, will be represented by a real massless scalar field. The interaction Hamiltonian will be of the form $`H_{\mathrm{int}}=\xi ^ih_i`$, where the noise operators $`\xi ^i`$ are given by $$\xi ^{\underset{¯}{i}}(x,t)=\int 𝑑x^{\prime }\chi ^{\underset{¯}{i}}(x-x^{\prime })p(x^{\prime },t).$$ (4.22) Here, $`p(x,t)`$ is the momentum of the bath scalar field, whose mode decomposition has the form $$p(x,t)=i\int 𝑑k\sqrt{\omega }[a^{\dagger }(k)e^{i(\omega t-kx)}-a(k)e^{-i(\omega t-kx)}],$$ (4.23) $`\omega =\sqrt{k^2}`$, and $`a`$ and $`a^{\dagger }`$ are, respectively, the annihilation and creation operators associated with the bath; $`\chi ^{\underset{¯}{i}}(y)`$ represent the couplings between the low-energy fields and the bath in the position representation.
Since we are trying to construct a model for spacetime foam, we will assume that the couplings $`\chi ^{\underset{¯}{i}}(y)`$ are concentrated in a region of radius $`r`$ and that they are determined by the spectral couplings $`\chi ^{\underset{¯}{i}}(\omega )`$ introduced before: $$\chi ^{\underset{¯}{i}}(y)=\int \frac{𝑑k}{\omega }\chi ^{\underset{¯}{i}}(\omega )\mathrm{cos}(ky).$$ (4.24) The influence functional in this case has the form $$ℱ[\varphi ,\phi ;t]=\int Dq^{\prime }DQ^{\prime }\rho _\mathrm{b}[q^{\prime },Q^{\prime };0]\int 𝒟q𝒟Qe^{i\{S_\mathrm{b}[q;t]-S_\mathrm{b}[Q;t]\}}e^{i\{S_{\mathrm{int}}[\varphi ,q;t]-S_{\mathrm{int}}[\phi ,Q;t]\}},$$ (4.25) where these path integrals are performed over paths $`q(s)`$ and $`Q(s)`$ that match the values $`q^{\prime }`$ and $`Q^{\prime }`$ at the initial time, and $`S_\mathrm{b}`$ is the action of the bath. If we assume that the bath is in a stationary, homogeneous, and isotropic state, this influence functional can be computed to yield an influence action $`𝒲`$ of the form discussed above. Furthermore, for a thermal state with temperature $`T\sim 1/r`$, the function $`g(\omega )`$ has the form $$g(\omega )=\omega [N(\omega )+1/2],$$ (4.26) where $`N(\omega )=[\mathrm{exp}(\omega /T)-1]^{-1}`$ is the mean occupation number of the quantum thermal bath for the mode of frequency $`\omega `$. Recall that the functions $`G^{ij}(\omega )`$ and, hence, $`f^{ij}(\tau )`$ are uniquely determined by the couplings $`\chi ^{\underset{¯}{i}}(\omega )`$. In particular, they are completely independent of the state of the bath or the system. All the relevant information about the bath is encoded in the function $`g(\omega )`$. With this procedure, we see that spacetime foam can be represented by a quantum bath determined by $`g(\omega )`$ that interacts with the low-energy fields by means of the couplings $`\chi ^{\underset{¯}{i}}(\omega )`$ which characterize spacetime foam, in the sense that both systems produce the same low-energy effects. The model that we have proposed is particularly suited to the study of the low-energy effects produced by simply connected topology fluctuations such as closed loops of virtual black holes . Virtual black holes will not obey classical equations of motion but will appear as quantum fluctuations of spacetime and thus will become part of the spacetime foam, as we have discussed. Particles could fall into these black holes and be re-emitted. The scattering amplitudes of these processes could be interpreted as being produced by nonlocal effective interactions that would take place inside the fluctuations, and the influence functional obtained above could then be interpreted as providing the evolution of the low-energy density matrix in the presence of a bath of ubiquitous quantum topological fluctuations of the virtual-black-hole type.
### 4.4 Wormholes and coherence
Euclidean solutions of the wormhole type have been obtained for a variety of matter contents (see, e.g., Refs. ). Quantum solutions to the Wheeler-DeWitt equation that represent wormholes can be found in Refs. . These solutions have allowed the calculation of the effective interactions that they introduce in low-energy physics . Wormholes do not seem to induce loss of coherence despite the fact that they render spacetime multiply connected .
The reason why they seem to preserve coherence is that, in the dilute-gas approximation, they join spacetime regions that may be far apart from each other and, therefore, both wormhole mouths must be delocalized, i.e., the multiple connectedness requires energy and momentum conservation in both spacetime regions separately. In this way, wormholes can be described as bilocal interactions whose coefficients $`\upsilon ^{ij}`$ do not depend on spacetime positions. Diffeomorphism invariance on each spacetime region also requires the spacetime independence of $`\upsilon ^{ij}`$. This can also be seen by analyzing these wormholes from the point of view of the universal covering manifold, which is, by definition, simply connected. Here, each wormhole is represented by two boundaries located at infinity and suitably identified. This identification is equivalent to introducing coefficients $`\upsilon ^{ij}`$ that relate the bases of the Hilbert space of wormholes in both regions of the universal covering manifold. Since the $`\upsilon ^{ij}`$ are just the coefficients of a change of basis, they will be constant. As a direct consequence, the correlation time for the fields $`\alpha ^i`$ is infinite. This means that the fields $`\alpha ^i`$ cannot be interpreted as noise sources that are Gaussian distributed at each spacetime point independently. Rather, they are infinitely coherent, thus giving rise to superselection sectors. The Gaussian distribution to which they are subject is therefore global, i.e., spacetime independent. The only effect of wormholes is thus to introduce local interactions with unknown but spacetime-independent coupling constants . The spacetime independence implies that, once an experiment to determine one such constant is performed, the constant will retain the obtained value forever, in sharp contrast with those induced by simply connected topological fluctuations such as virtual black holes. In this way, and because of the infinite-range correlations induced by wormholes, which forbid the existence of the asymptotic regions necessary to analyze scattering processes, the loss of coherence produced by these fluctuations should actually be ascribed to a lack of knowledge of the initial state or, in other words, to the impossibility of preparing arbitrarily pure quantum states . One could also expect some effects originating in their quantum nature. However, the coefficients $`\upsilon ^{ij}`$ are spacetime independent. This means that $`c^{ij}`$ is constant and, consequently, $`\stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )\propto \delta (\omega )`$. As we have argued, $`\stackrel{~}{c}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$ and $`\stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )`$ are related by a nearly flat function, so that $`\stackrel{~}{f}^{\underset{¯}{i}\underset{¯}{j}}(\omega )\propto \delta (\omega )`$ as well. This in turn implies that $`f^{ij}`$ is also constant and $`\dot{f}^{ij}=0`$, from which we conclude that $`\upsilon ^{ij}`$ is real. We have already argued that, in the case of real $`\upsilon ^{ij}`$, no quantum effects will show up. Wormhole spacetimes do not lead, strictly speaking, to loss of quantum coherence, even though global hyperbolicity does not hold. On the other hand, the difficulties in quantum gravity with unitary propagation mainly come from the quantum-field-theory axiom of asymptotic completeness , which is closely related to global hyperbolicity.
Indeed, in order to guarantee asymptotic completeness, it is necessary that the expectation value of the fields at any spacetime position be determined by their values on a Cauchy surface at infinity. Topologically nontrivial spacetimes, however, are not globally hyperbolic in general and therefore do not admit a foliation in Cauchy surfaces. Let us have a closer look at this issue. Gravitational entropy, which is closely related to the loss of quantum coherence, has its origin in the existence of two-dimensional spheres in Euclidean space that cannot be homotopically contracted to a point, i.e., in a nonvanishing second Betti number. These two-dimensional surfaces become fixed points of the timelike Killing vector, so that global hyperbolicity is lost. A well-known example (for other, more sophisticated examples see Ref. ) is a Schwarzschild black hole, whose Euclidean sector is described by the metric $$ds^2=f(r)dt^2+f(r)^{-1}dr^2+r^2d\mathrm{\Omega }_2^2,$$ (4.27) with $`f(r)=1-2\ell _*^2m/r`$, $`m`$ being the black hole mass. In order to make this solution regular, we consider the region $`r\ge 2\ell _*^2m`$ and set $`t`$ to be periodic with period $`\beta =8\pi \ell _*^2m`$. The surface defined by $`r=2\ell _*^2m`$ is a fixed point of the Killing vector $`\partial _t`$. Thus, we have a spacetime with the topology of $`\mathbb{R}^2\times S^2`$, so that $`B_2=1`$. As we will see below, it is the existence of this surface that accounts for the entropy of this spacetime. This does not mean that the entropy is localized on the surface itself. Rather, it is a global quantity characteristic of the whole spacetime manifold. The Euclidean action of this solution is given by the sum of the contributions $`I_{\mathrm{fp}}`$, $`I_{\mathrm{\infty }}`$ of the two surface terms at $`r=2\ell _*^2m`$ and $`r=\mathrm{\infty }`$. In the semiclassical approximation, the partition function is given by $`Z=e^{-I_{\mathrm{fp}}-I_{\mathrm{\infty }}}`$. Taking into account that the entropy is $`S=\mathrm{ln}Z+\beta E`$ and that $`\beta E`$ is precisely the surface term at infinity, $`\beta E=I_{\mathrm{\infty }}`$, we conclude that the entropy is given by the surface term at $`r=2\ell _*^2m`$, namely, $`S=-I_{\mathrm{fp}}=4\pi \ell _*^2m^2`$, as is well known. In the wormhole case, the second Betti number is zero and the first and third Betti numbers are equal. For a spacetime with a single wormhole, $`B_1=B_3=1`$. This means that there exists one circle that cannot be homotopically contracted to a point and that there also exists one three-sphere that is not homotopic to a point, but all two-spheres are contractible. In regular solutions of this sort, the noncontractible three-sphere can be identified with the wormhole throat. The only contributions to the Euclidean action in this case come from the asymptotic regions, and this is precisely the term that we have to subtract from $`\mathrm{ln}Z`$, in the semiclassical approximation, in order to calculate the gravitational entropy. Thus, wormholes have vanishing entropy despite the fact that they are not globally hyperbolic. From the point of view of the universal covering manifold, a wormhole is represented by two three-surfaces whose contributions to the action are equal in absolute value but opposite in sign because of their reverse orientations, thus leaving only the asymptotic contribution, which is irrelevant as far as the entropy is concerned (for a different approach see Ref. ). The striking difference between wormholes and virtual black holes is precisely the formation of horizons, which has no counterpart in the wormhole case.
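The Schwarzschild numbers quoted above fit together consistently; a short symbolic check (ours) confirms that the fixed-point surface term reproduces a quarter of the horizon area and that $`dS=\beta dm`$, i.e., the first law:

```python
import sympy as sp

m, lp = sp.symbols('m l_star', positive=True)  # mass and Planck length l_*

r_h = 2 * lp**2 * m              # horizon: f(r_h) = 1 - 2 l_*^2 m / r_h = 0
beta = 8 * sp.pi * lp**2 * m     # period of Euclidean time
S = 4 * sp.pi * lp**2 * m**2     # entropy from the fixed-point surface term

A = 4 * sp.pi * r_h**2           # horizon area
print(sp.simplify(S - A / (4 * lp**2)))   # -> 0: S = A/4 in Planck units
print(sp.simplify(sp.diff(S, m) - beta))  # -> 0: dS/dm = beta (first law)
```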
This is closely related to the issue of the infinite-range spacetime correlations established by wormholes versus the finite size of the regions occupied by virtual black holes or quantum time machines, for instance.
## 5 Low-energy effective evolution
As we have already mentioned, from the influence functional obtained in the previous section we can obtain the master equation satisfied by the low-energy density matrix, although here we will follow a different procedure: we will derive the master equation in the canonical formalism from the von Neumann equation for the joint system of the low-energy fields plus the effective quantum bath coupled to them that accounts for the effects of spacetime foam.
### 5.1 Master equation
It is easy to see that the function $`f^{ij}(\tau )`$ given in Eq. (4.11) determines the commutation relations at different times of the noise variables. Indeed, taking into account the commutation relations for the annihilation and creation operators $`a`$ and $`a^{\dagger }`$, i.e., $$[a(k),a(k^{\prime })]=0,[a(k),a^{\dagger }(k^{\prime })]=\delta (k-k^{\prime }),$$ (5.1) we obtain by direct calculation the relation $$[\xi ^i(t),\xi ^j(t^{\prime })]=i\frac{d}{dt}f^{ij}(t-t^{\prime }).$$ (5.2) Similarly, the function $`c^{ij}(\tau )`$ of Eq. (4.12) determines the average of the anticommutator of the noise variables, $$\frac{1}{2}\langle [\xi ^i(t),\xi ^j(t^{\prime })]_+\rangle =c^{ij}(t-t^{\prime }),$$ (5.3) where the average of any operator $`Q`$ has been defined as $`\langle Q\rangle \equiv \mathrm{tr}_\mathrm{b}(Q\rho _\mathrm{b})`$, provided that the bath is in a stationary, homogeneous, and isotropic state determined by $`g(\omega )`$, i.e., $$\langle a(k)\rangle =0,\langle a(k)a(k^{\prime })\rangle =0,\langle a^{\dagger }(k)a(k^{\prime })\rangle =[g(\omega )/\omega -1/2]\delta (k-k^{\prime }).$$ (5.4) We are now ready to write down the master equation for the low-energy density matrix. We will describe the whole system (low-energy fields plus bath) by a density matrix $`\rho _\mathrm{T}(t)`$. We will assume that, initially, the low-energy fields and the bath are independent, i.e., that at the time $`t=0`$ $$\rho _\mathrm{T}(0)=\rho (0)\otimes \rho _\mathrm{b}.$$ (5.5) If the low-energy fields and the bath do not decouple at any time, an extra renormalization term should be added to the Hamiltonian. In the interaction picture, the density matrix has the form $$\rho _\mathrm{T}^\mathrm{I}(t)=U^{\dagger }(t)\rho _\mathrm{T}(t)U(t),$$ (5.6) with $`U(t)=U_0(t)U_\mathrm{b}(t)`$, where $`U_0(t)=e^{-iH_0t}`$ and $`U_\mathrm{b}(t)=e^{-iH_\mathrm{b}t}`$.
It obeys the equation of motion $$\dot{\rho }_\mathrm{T}^\mathrm{I}(t)=-i[\xi ^i(t)h_i^\mathrm{I}(t),\rho _\mathrm{T}^\mathrm{I}(t)].$$ (5.7) Here, $$\xi ^i(t)=U^{\dagger }(t)\xi ^iU(t)=U_\mathrm{b}^{\dagger }(t)\xi ^iU_\mathrm{b}(t),$$ (5.8) $$h_i^\mathrm{I}(t)=U^{\dagger }(t)h_iU(t)=U_0^{\dagger }(t)h_iU_0(t).$$ (5.9) Integrating this evolution equation, introducing the result back into it, tracing over the variables of the bath, defining $`\rho ^\mathrm{I}(t)\equiv \mathrm{tr}_\mathrm{b}[\rho _\mathrm{T}^\mathrm{I}(t)]`$, and noting that $`\mathrm{tr}_\mathrm{b}[\xi ^i(t)h_i^\mathrm{I}(t)\rho _\mathrm{T}^\mathrm{I}(t_0)]=0`$, we obtain $$\dot{\rho }^\mathrm{I}(t)=-\int _{t_0}^t𝑑t^{\prime }\mathrm{tr}_\mathrm{b}\left\{[\xi ^i(t)h_i^\mathrm{I}(t),[\xi ^j(t^{\prime })h_j^\mathrm{I}(t^{\prime }),\rho _\mathrm{T}^\mathrm{I}(t^{\prime })]]\right\}.$$ (5.10) In the weak-coupling approximation, which implies that $`\xi ^ih_i`$ is much smaller than $`H_0`$ and $`H_\mathrm{b}`$ (this is justified since it is of order $`ϵ`$), we assume that the bath density matrix does not change because of the interaction, so that $`\rho _\mathrm{T}^\mathrm{I}(t)=\rho ^\mathrm{I}(t)\otimes \rho _\mathrm{b}`$. The error introduced by this substitution is of order $`ϵ`$, and ignoring it in the master equation amounts to keeping terms only up to second order in this parameter. Since $`[\xi ^i(t),h_j^\mathrm{I}(t^{\prime })]=0`$ because $`[\xi ^i,h_j]=0`$, the right-hand side of this equation can be written in the following way: $$-\int _0^t𝑑t^{\prime }\left\{c^{ij}(t-t^{\prime })[h_i^\mathrm{I}(t),[h_j^\mathrm{I}(t^{\prime }),\rho ^\mathrm{I}(t^{\prime })]]+\frac{i}{2}\dot{f}^{ij}(t-t^{\prime })[h_i^\mathrm{I}(t),[h_j^\mathrm{I}(t^{\prime }),\rho ^\mathrm{I}(t^{\prime })]_+]\right\}.$$ (5.11) The Markov approximation allows the substitution of $`\rho ^\mathrm{I}(t^{\prime })`$ by $`\rho ^\mathrm{I}(t)`$ in the master equation, because the integral over $`t^{\prime }`$ gets a significant contribution only from times $`t^{\prime }`$ that are close to $`t`$, owing to the factors $`\dot{f}^{ij}(t-t^{\prime })`$ and $`c^{ij}(t-t^{\prime })`$, and because, in this interval of time, the density matrix $`\rho ^\mathrm{I}`$ does not change significantly. Indeed, the typical evolution time of $`\rho ^\mathrm{I}`$ is the low-energy time scale $`l`$, which is much larger than the time scale $`r`$ associated with the bath. If we perform a change of the integration variable from $`t^{\prime }`$ to $`\tau =t-t^{\prime }`$, write $$\rho ^\mathrm{I}(t^{\prime })=\rho ^\mathrm{I}(t-\tau )=\rho ^\mathrm{I}(t)-\tau \dot{\rho }^\mathrm{I}(t)+O(\tau ^2),$$ (5.12) and introduce this expansion into the master equation above, we easily see that the relative error introduced by the Markovian approximation is of order $`ϵ^2`$, i.e., it amounts to ignoring a term of order $`ϵ^4`$. The upper integration limit $`t`$ in both integrals can be replaced by $`\mathrm{\infty }`$ for evolution times $`t`$ much larger than the correlation time $`r`$, because of the factors $`\dot{f}^{ij}(\tau )`$ and $`c^{ij}(\tau )`$, which vanish for $`\tau >r`$. Then, after an integration by parts of the $`f`$ term, and transforming the resulting master equation back to the Schrödinger picture, we obtain $$\dot{\rho }=-i[H_0^{\prime },\rho ]-\frac{i}{2}\int _0^{\mathrm{\infty }}𝑑\tau f^{ij}(\tau )[h_i,[\dot{h}_j^\mathrm{I}(-\tau ),\rho ]_+]-\int _0^{\mathrm{\infty }}𝑑\tau c^{ij}(\tau )[h_i,[h_j^\mathrm{I}(-\tau ),\rho ]],$$ (5.13) where $`H_0^{\prime }=H_0-\frac{1}{2}f^{ij}(0)h_ih_j`$ is just the original low-energy Hamiltonian plus a finite renormalization originated in the integration by parts of the $`f`$ term.
It can be checked that the low-energy density matrix $`\rho (t)`$ obtained by means of the influence functional $`ℱ`$ is indeed a solution of this master equation. Before discussing this equation in full detail, let us first study the classical noise limit. With this aim, let us introduce the parameter $$\sigma =\int 𝑑k^{\prime }[a(k),a^{\dagger }(k^{\prime })],$$ (5.14) which is equal to 1 for quantum noise and 0 for classical noise. Then, the $`f`$ term is proportional to $`\sigma `$ and therefore vanishes in the classical noise limit. On the other hand, the function $`g(\omega )`$ becomes $`g(\sigma \omega )`$ when the parameter $`\sigma `$ is introduced. In the limit $`\sigma \to 0`$, it acquires the value $`g(0)`$, which is a constant of order $`1/r`$. Therefore, $`c^{ij}(\tau )`$ becomes in this limit $`c_{\mathrm{class}}^{ij}(\tau )=g(0)f^{ij}(\tau )`$. Also, the renormalization term of the low-energy Hamiltonian vanishes in this limit. In this way, we have arrived at the same master equation that we obtained in the previous section. This is not surprising because the origin of the $`f`$ term is precisely the noncommutativity of the noise operators, i.e., their quantum nature, while the $`c_{\mathrm{class}}`$ term actually contains the information about the state of the bath. In the case of a thermal bath, $`g(0)`$ is precisely the temperature of the bath. At zeroth order in $`r/l`$, the master equation for classical noise then acquires the form $$\dot{\rho }=-i[H_0,\rho ]-\int _0^{\mathrm{\infty }}𝑑\tau c_{\mathrm{class}}^{ij}(\tau )[h_i,[h_j,\rho ]].$$ (5.15)
### 5.2 Low-energy effects
Let us now analyze the general master equation, valid up to second order in $`ϵ`$, which takes into account the quantum nature of the gravitational fluctuations. These contributions will be fairly small in the low-energy regime, but they may provide interesting information about the higher-energy regimes in which $`l`$ may be of the order of a few Planck lengths and for which the weak-coupling approximation is still valid. In order to see these contributions explicitly, let us further elaborate the master equation. In terms of the operator $`L_0`$, defined by $`L_0A=[H_0,A]`$ for any low-energy operator $`A`$, the time-dependent interaction $`h_j^\mathrm{I}(\tau )`$ can be written as $$h_j^\mathrm{I}(\tau )=e^{iL_0\tau }h_j.$$ (5.16) The interaction $`h_j`$ can be expanded in eigenoperators $`h_{j\mathrm{\Omega }}^\pm `$ of the operator $`L_0`$, i.e., $$h_j=\int 𝑑\mu _\mathrm{\Omega }\left(h_{j\mathrm{\Omega }}^++h_{j\mathrm{\Omega }}^{-}\right),$$ (5.17) with $`L_0h_{j\mathrm{\Omega }}^\pm =\pm \mathrm{\Omega }h_{j\mathrm{\Omega }}^\pm `$ and $`d\mu _\mathrm{\Omega }`$ being an appropriate spectral measure, which is naturally cut off around the low-energy scale $`l^{-1}`$. This expansion always exists provided that the eigenstates of $`H_0`$ form a complete set. Then, $`h_j^\mathrm{I}(\tau )`$ can be written as $$h_j^\mathrm{I}(\tau )=\int 𝑑\mu _\mathrm{\Omega }(e^{i\mathrm{\Omega }\tau }h_{j\mathrm{\Omega }}^++e^{-i\mathrm{\Omega }\tau }h_{j\mathrm{\Omega }}^{-}).$$ (5.18) It is also convenient to define the new interaction operators, for each low-energy frequency $`\mathrm{\Omega }`$, $$h_{j\mathrm{\Omega }}^1=h_{j\mathrm{\Omega }}^+-h_{j\mathrm{\Omega }}^{-},h_{j\mathrm{\Omega }}^2=h_{j\mathrm{\Omega }}^++h_{j\mathrm{\Omega }}^{-}.$$ (5.19) The quantum noise effects are reflected in the master equation through the term proportional to $`f^{ij}(\tau )`$ and the term proportional to $`c^{ij}(\tau )`$, both of them integrated over $`\tau \in (0,\mathrm{\infty })`$.
Because the $`\tau `$ integrals in the master equation run only over $`(0,\mathrm{\infty })`$, each term provides two different kinds of contributions, whose origin can be traced back to the well-known formula $$\int _0^{\mathrm{\infty }}\mathrm{d}\tau \,e^{i\omega \tau }=\pi \delta (\omega )+i\,\mathcal{P}(1/\omega ),$$ (5.20) where $`\mathcal{P}`$ is the Cauchy principal part. The master equation can then be written in the following form $$\dot{\rho }=(-iL_0^{\prime }+L_{\mathrm{diss}}+L_{\mathrm{diff}}+iL_{\mathrm{s\text{-}l}})\rho ,$$ (5.21) where the meaning of the different terms is explained in what follows. The first term $`-iL_0^{\prime }\rho `$, with $`L_0^{\prime }\rho =[H_0^{\prime },\rho ]`$, is responsible for the renormalized low-energy Hamiltonian evolution. The renormalization term is of order $`\epsilon ^2`$ as compared with the low-energy Hamiltonian $`H_0`$, where $`\epsilon ^2=ϵ^2\sum _{\underline{i}\,\underline{j}}(\ell _{*}/l)^{\eta _{\underline{i}}+\eta _{\underline{j}}}`$ and, remember, $`\eta _{\underline{i}}=2n_{\underline{i}}(1+s_{\underline{i}})-2`$ is a parameter specific to each kind of interaction term $`h_i`$. The dissipation term $$L_{\mathrm{diss}}\rho =\frac{\pi }{4}\int \mathrm{d}\mu _\mathrm{\Omega }\,\mathrm{\Omega }\,G^{ij}(\mathrm{\Omega })[h_i,[h_{j\mathrm{\Omega }}^1,\rho ]_+]$$ (5.22) is necessary for the preservation in time of the low-energy commutators in the presence of quantum noise. As we have seen, it is proportional to the commutator between the noise creation and annihilation operators and, therefore, vanishes in the classical noise limit. Its size is of order $`\epsilon ^2r/l^2`$. The diffusion process is governed by $$L_{\mathrm{diff}}\rho =-\frac{\pi }{2}\int \mathrm{d}\mu _\mathrm{\Omega }\,g(\mathrm{\Omega })G^{ij}(\mathrm{\Omega })[h_i,[h_{j\mathrm{\Omega }}^2,\rho ]],$$ (5.23) which is of order $`\epsilon ^2/l`$. The next term provides an energy shift which can be interpreted as a mixture of a gravitational ac Stark effect and a Lamb shift, by comparison with its quantum optics analog. Its expression is $$L_{\mathrm{s\text{-}l}}\rho =\int \mathrm{d}\mu _\mathrm{\Omega }\,\mathcal{P}\int _0^{\mathrm{\infty }}\mathrm{d}\omega \frac{\mathrm{\Omega }}{\omega ^2-\mathrm{\Omega }^2}G^{ij}(\omega )\left\{g(\omega )[h_i,[h_{j\mathrm{\Omega }}^1,\rho ]]+\frac{\mathrm{\Omega }}{2}[h_i,[h_{j\mathrm{\Omega }}^2,\rho ]_+]\right\}.$$ (5.24) The second term is of order $`\epsilon ^2r^2/l^3`$, which is fairly small. However, the first term will provide a significant contribution of order $`\epsilon ^2r/l^2[\mathrm{ln}(l/r)+1]`$. This logarithmic dependence on the relative scale is indeed characteristic of the Lamb shift. As we have argued, the function $`g(\omega )`$ must be fairly flat in the whole range of frequencies up to the cutoff $`1/r`$ and be of order $`1/r`$ in order to reproduce the appropriate correlations $`c^{ij}(\tau )`$. A thermal bath, for instance, produces a function $`g(\omega )`$ with the desired characteristics, at least at the level of approximation that we are considering. In this specific case, it can be seen that the logarithmic contribution to the energy shift is not present; it would only appear in the zero-temperature limit. However, since we are modeling spacetime foam with this thermal bath, the effective temperature is $`1/r`$, which is close to the Planck scale and certainly far from zero. From the practical point of view, the contribution carrying the logarithmic factor is at most an order of magnitude larger than the standard one, so its presence or absence does not significantly affect the results.
Almost any other state of the bath with a more or less uniform frequency distribution will contain such a logarithmic contribution. In summary, the $`f`$ term provides a dissipation part, necessary for the preservation of commutators, and a fairly small contribution to what can be interpreted as a gravitational Lamb shift. On the other hand, the $`c`$ term gives rise to a diffusion term and a shift in the oscillation frequencies of the low-energy fields that can be interpreted as a mixture of a gravitational Stark effect and a Lamb shift. The size of these effects, compared with the bare evolution, is the following: the diffusion term is of order $`\epsilon ^2`$ (see, however, Refs. ); the damping term is smaller by a factor $`r/l`$, and the combined effect of the Stark and Lamb shifts is of order $`(r/l)[\mathrm{ln}(l/r)+1]`$ as compared with the diffusion term. Note that the quantum effects induced by spacetime foam become relevant as the low-energy length scale $`l`$ decreases, as we see from the fact that these effects depend on the ratio $`r/l`$, while, in this situation, the diffusion process becomes faster, except for the mass of scalars, which always decoheres on a time scale close to the low-energy evolution time. ### 5.3 Observational and experimental prospects These quantum gravitational effects are just energy shifts and decoherence effects similar to those appearing in other areas of physics, where fairly well established experimental procedures and results exist, and which can indeed be applied here, provided that sufficiently high accuracy can be achieved. Neutral kaon beams have been proposed as experimental systems for measuring loss of coherence owing to quantum gravitational fluctuations. In these systems, the main experimental consequence of the diffusion term (together with the dissipative one, necessary for reaching a stationary regime) is violation of CPT because of the nonlocal origin of the effective interactions (see also Refs. ). The estimates for this violation are very close to the values accessible by current experiments with neutral kaons and will be within the range of near-future experiments. Macroscopic neutron interferometry provides another kind of experimental system in which the effects of the diffusion term may have measurable consequences, since they may cause the disappearance of the interference fringes. As for the gravitational Lamb and Stark effects, they are energy shifts that depend on the frequency, so that different low-energy modes will undergo different shifts. This translates into a modification of the dispersion relations, which makes the velocity of propagation frequency dependent, as if low-energy fields propagated in a “medium”. Therefore, upon arrival at the detector, low-energy modes will experience different time delays (depending on their frequency) as compared to what could be expected in the absence of quantum gravitational fluctuations. These time delays in the detected signals will be very small in general. However, it might still be possible to measure them if we make the low-energy particles travel large (cosmological) distances. In fact, $`\gamma `$-ray bursts provide such a situation, as has been recently pointed out (see also Refs. ), thus opening a new doorway to possible observations of these quantum gravitational effects.
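The arithmetic behind such time-delay bounds is simple enough to sketch. The numbers below anticipate the Markarian 421 estimate quoted in the next paragraph; the only input is the linear dispersion ansatz $`\mathrm{\Delta }t\zeta Lr/l`$ with $`l=1/E`$ in units with $`\mathrm{}=c=1`$:

```python
# Back-of-the-envelope check of the time-delay bound discussed next.
# Inputs are the values quoted below for Markarian 421; hbar and the
# Planck time are standard constants.
hbar = 6.582e-25        # GeV s
t_planck = 5.39e-44     # s
E = 1.0e3               # photon energy ~ 1 TeV, in GeV (so 1/l ~ E)
L = 1.1e16              # distance, in light-seconds
dt = 280.0              # variability time scale, in seconds

# Delta_t ~ zeta * L * r * E  =>  zeta * r = Delta_t / (L * E / hbar)
zeta_r = dt / (L * E / hbar)                         # in seconds
print("zeta * r / l_Planck <", zeta_r / t_planck)    # ~ 3e2
```

The printed value is of order $`3\times 10^2`$, consistent (up to rounding of the input constants) with the bound of about 250 quoted below.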
The authors of these proposals assume that the dispersion relation for photons has a linear dependence on $`r/l`$ because of quantum gravitational fluctuations, i.e., that the speed of light is of the form $`v\simeq 1+\zeta r/l`$, with $`\zeta `$ being an unknown parameter of order 1 (see also Ref. ). In this situation, photons that travel a distance $`L`$ will show a frequency-dependent time delay $`\mathrm{\Delta }t\sim \zeta Lr/l`$. Using data from a $`\gamma `$-ray flare associated with the active galaxy Markarian 421, which give $`l^{-1}\sim 1`$ TeV, $`L\sim 1.1\times 10^{16}`$ light-seconds, and a variability time scale $`\delta t`$ less than 280 seconds, one obtains the upper bound $`\zeta r/\ell _{*}<250`$. If $`\zeta `$ is indeed of order 1, this inequality implies an upper limit on the scale $`r`$ of the gravitational fluctuations of a few hundred Planck lengths. One would then expect that the presence of the gravitational Lamb and Stark shifts predicted above could be observationally tested. However, in spacetime foam the role of the parameter $`\zeta `$ is played by $`\epsilon ^2`$, and this quantity is much smaller than 1, since it contains two factors which are smaller than 1 for different reasons. The first one is $`e^{-S(r)}(r/\ell _{*})^2`$. In the semiclassical approximation to nonperturbative quantum gravity, this exponential can be interpreted as the density of topological fluctuations of size $`r`$, which decreases with $`r`$ fairly fast. The second factor is, for the electromagnetic field, of the form $`(\ell _{*}/l)^4`$; it comes from the spin dependence of the effective interactions and is closely related to the existence of a length scale in quantum gravity. Then, $`\epsilon ^2`$ in this case may be so small as to render any bound on the size of quantum spacetime foam effects on the electromagnetic field nonrestrictive at all. ## 6 Real clocks In previous sections, we have analyzed the evolution of low-energy fields in the bath of quantum gravitational fluctuations that constitute spacetime foam. Here we will briefly discuss the evolution of physical systems when measured by real clocks, which are generally subject to errors and fluctuations, in contrast with ideal clocks which, although they would accurately measure the time parameter that appears in the Schrödinger equation, do not exist in nature (see, e.g., Refs. ). The evolution according to real clocks bears a close resemblance to that of low-energy fields propagating in spacetime foam, although there also exist important differences, which will be discussed at the end of this section. Quantum real clocks inevitably introduce uncertainties in the equations of motion, as has been widely discussed in the literature from various points of view (see, e.g., Refs. ). Actually, real clocks are not only subject to quantum fluctuations. They are also subject to classical imperfections, small errors, that can only be dealt with statistically. For instance, an unavoidable classical source of stochasticity is temperature, which will introduce thermal fluctuations in the behavior of real clocks. Thus, the existence of ideal clocks is also forbidden by the third law of thermodynamics. Even at zero temperature, the quantum vacuum fluctuations of quantum field theory make propagating physical systems (real clocks among them) undergo a kind of cold diffusion, and consequently require a stochastic description of their evolution. Let us study, within the context of the standard quantum theory, the evolution of an arbitrary system according to a real clock.
### 6.1 Good real clocks A real clock will be a system with a degree of freedom $`t`$ that closely follows the ideal time parameter $`t_\mathrm{i}`$, i.e., $`t_\mathrm{i}=t+\mathrm{\Delta }(t)`$, where $`\mathrm{\Delta }(t)`$ is the error at the real-clock time $`t`$. Given any real clock, its characteristics will be encoded in the probability functional distribution $`\mathcal{P}[\mathrm{\Delta }(t)]`$ for the continuous stochastic process $`\mathrm{\Delta }(t)`$ of clock errors, which must satisfy appropriate conditions so that it can be regarded as a good clock. A first property is that Galilean causality should be preserved, i.e., that causally related events should always be properly ordered in clock time as well, which implies that $`t_\mathrm{i}(t^{\prime })>t_\mathrm{i}(t)`$ for every $`t^{\prime }>t`$. In terms of the derivative $`\alpha (t)=d\mathrm{\Delta }(t)/dt`$ of the stochastic process $`\mathrm{\Delta }(t)`$, we can state this condition as requiring that, for any realization of the stochastic sequence, $`\alpha (t)>-1`$. A second condition that we would require good clocks to fulfill is that the expectation value of the relative errors, determined by the stochastic process $`\alpha (t)`$, be zero, i.e., $`\langle \alpha (t)\rangle =0`$ for all $`t`$. Furthermore, a good clock should always behave in the same way (in a statistical sense). We can say that the clock behaves consistently in time as a good one if those relative errors $`\alpha (t)`$ are statistically stationary, i.e., the probability functional distribution $`\mathcal{P}[\alpha (t)]`$ for the process of relative errors $`\alpha (t)`$ (which can be obtained from $`\mathcal{P}[\mathrm{\Delta }(t)]`$, and vice versa) must not be affected by global shifts $`t\to t+t_0`$ of the readout of the clock. Note that the stochastic process $`\mathrm{\Delta }(t)`$ need not be stationary, despite the stationarity of the process $`\alpha (t)`$. The one-point probability distribution function for the variables $`\alpha (t)`$ should be highly concentrated around the zero mean if the clock is to behave nicely, i.e., $$\langle \alpha (t)\alpha (t-\tau )\rangle \equiv c(\tau ),\qquad c(0)\ll 1,$$ (6.1) where $`c(\tau )=c(-\tau )`$. The correlation time $`\vartheta `$ for the stochastic process $`\alpha (t)`$ is given by $$\vartheta =\int _0^{\mathrm{\infty }}\mathrm{d}\tau \,c(\tau )/c(0).$$ (6.2) For convenience, let us introduce a new parameter $`\kappa `$ with dimensions of time, defined as $`\kappa ^2=c(0)\vartheta ^2`$, for which the good-clock conditions imply $`\kappa \ll \vartheta `$. As we shall see, $`\vartheta `$ cannot be arbitrarily large and, therefore, the ideal clock limit is given by $`\kappa \to 0`$. In addition to these properties, a good clock must have enough precision to measure the evolution of the specific system, which imposes further restrictions on the clock. On the one hand, the characteristic evolution time $`l`$ of the system must be much larger than the correlation time $`\vartheta `$ of the clock. On the other hand, the leading term in the asymptotic expansion of the variance $`\langle \mathrm{\Delta }(t)^2\rangle `$ for large $`t`$ is of the form $`\kappa ^2t/\vartheta `$, which means that, after a certain period of time, the absolute errors can become too large. The maximum admissible standard deviation in $`\mathrm{\Delta }(t)`$ must be at most of the same order as $`l`$. Then the period of applicability of the clock to the system under study, i.e., the period of clock time during which the errors of the clock are smaller than the characteristic evolution time of the system, is approximately equal to $`l^2\vartheta /\kappa ^2`$.
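These statistical statements are easy to verify by simulation. The sketch below assumes, purely for illustration, that $`\alpha (t)`$ is a stationary Ornstein-Uhlenbeck process with variance $`c(0)\ll 1`$ and correlation time $`\vartheta `$, which is one convenient realization of the good-clock conditions (the text does not commit to any particular process):

```python
import numpy as np

# Monte Carlo check that <Delta(t)^2> grows like (kappa^2/theta) * t.
rng = np.random.default_rng(1)
c0, theta = 1e-4, 0.1              # c(0) and correlation time; kappa^2 = c0*theta^2
dt, T, ntraj = 1e-3, 50.0, 2000

alpha = rng.normal(0.0, np.sqrt(c0), size=ntraj)   # stationary initial condition
Delta = np.zeros(ntraj)
for _ in range(int(T / dt)):
    # OU update: d(alpha) = -alpha/theta dt + sqrt(2 c0/theta) dW
    alpha += -alpha / theta * dt + np.sqrt(2 * c0 / theta * dt) * rng.normal(size=ntraj)
    Delta += alpha * dt                            # Delta(t) = int_0^t alpha(s) ds
    # note: since c(0) << 1, the causality condition alpha > -1 is essentially
    # never violated in these realizations

kappa2 = c0 * theta**2
print("measured  <Delta(T)^2>/T :", Delta.var() / T)
print("predicted 2 kappa^2/theta:", 2 * kappa2 / theta)
```

Both numbers come out around $`2\times 10^{-5}`$: the variance grows linearly in $`t`$ with a slope of order $`\kappa ^2/\vartheta `$ (the factor 2 is the order-one constant left implicit in the estimate above), which is exactly the scaling that fixes the period of applicability.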
For a good clock, $`\kappa \ll \vartheta \ll l`$, as we have seen, so that the period of applicability is much larger than the characteristic evolution time $`l`$. ### 6.2 Evolution laws We shall now obtain the evolution equation for the density matrix of an arbitrary quantum system in terms of the clock time $`t`$. Let $`H`$ be the time-independent Hamiltonian of the system and $`S`$ its action in the absence of errors. For any given realization of the stochastic process $`\alpha (t)`$ that characterizes a good clock, we can write the density matrix at the time $`t`$, $`\rho _\alpha (t)`$, in terms of the initial density matrix $`\rho (0)`$ as $$\rho _\alpha (t)=\$_\alpha (t)\rho (0),$$ (6.3) where the density matrix propagator $`\$_\alpha (t)`$ has the form $$\$_\alpha (t)=\int \mathcal{D}q\mathcal{D}q^{\prime }e^{iS_\alpha [q;t]-iS_\alpha [q^{\prime };t]}.$$ (6.4) Here, $`S_\alpha [q;t]=S[q;t]-\int _0^t\mathrm{d}s\,\alpha (s)H[q(s)]`$ is the action of the system for the given realization of the stochastic process $`\alpha (t)`$. The average of the density matrix $`\rho _\alpha (t)`$ can be regarded as the density matrix $`\rho (t)`$ of the system at the clock time $`t`$: $$\rho (t)=\int \mathcal{D}\alpha \,\mathcal{P}[\alpha ]\$_\alpha (t)\rho (0).$$ (6.5) In the good-clock approximation, only the two-point correlation function $`c(\tau )`$ is relevant, so that we can write the probability functional as a Gaussian distribution. The integration over $`\alpha (t)`$ is then easily performed, and yields the influence action $`𝒲`$ $$𝒲[q,q^{\prime };t]=-\frac{1}{2}\int _0^t\mathrm{d}s\int _0^s\mathrm{d}s^{\prime }\{H[q(s)]-H[q^{\prime }(s)]\}c(s-s^{\prime })\{H[q(s^{\prime })]-H[q^{\prime }(s^{\prime })]\}.$$ (6.6) We see that there is no dissipative term, as could be expected from the fact that the noise source is classical. Moreover, as the interaction term is proportional to $`H`$, there is no response of the system to the outside noise, which means that the associated impedance is infinite. Therefore, we see that the effect of using good real clocks for studying the evolution of a quantum system is the appearance of an effective interaction term in the action integral which is bilocal in time. This can be understood as the first term in a multilocal expansion, which corresponds to the weak-field expansion of the probability functional around the Gaussian term. This nonlocality in time admits a simple interpretation: correlations between relative errors at different instants of clock time can be understood as correlations between clock-time flows at those clock instants. The clock-time flow of the system is governed by the Hamiltonian and, therefore, the correlation of relative errors induces an effective interaction term, generically multilocal, that relates the Hamiltonians at different clock instants. From the form of the influence action, it is not difficult to see that, in the Markov approximation and provided that the system evolves for a time smaller than the period of applicability of the clock, the density matrix $`\rho (t)`$ satisfies the master equation $$\dot{\rho }(t)=-i[H,\rho (t)]-(\kappa ^2/\vartheta )[H,[H,\rho (t)]],$$ (6.7) where the overdot denotes the derivative with respect to the clock time $`t`$. Notice that, in the ideal clock limit $`\kappa \to 0`$, the unitary von Neumann equation is recovered. We should also point out that irreversibility appears because the errors of the clock cannot be eliminated once we have started using it. From a different point of view, the clock can be effectively modeled by a thermal bath, with a temperature $`T_\mathrm{b}`$ to be determined, coupled to the system.
Let $`H+H_{\mathrm{int}}+H_\mathrm{b}`$ be the total Hamiltonian, where $`H`$ is the free Hamiltonian of the system and $`H_\mathrm{b}`$ is the Hamiltonian of a bath that will be represented by a collection of harmonic oscillators. The interaction Hamiltonian will be of the form $`H_{\mathrm{int}}=\xi H`$, where the noise operator $`\xi `$ is given by $$\xi (t)=\frac{i}{\sqrt{2\pi }}\int _0^{\mathrm{\infty }}\mathrm{d}\omega \,\chi (\omega )[a^{\dagger }(\omega )e^{i\omega t}-a(\omega )e^{-i\omega t}].$$ (6.8) In this expression, $`a`$ and $`a^{\dagger }`$ are, respectively, the annihilation and creation operators associated with the bath, and $`\chi (\omega )`$ is a real function, to be determined, that represents the coupling between the system and the bath for each frequency $`\omega `$. Identifying, in the classical noise limit, the classical correlation function of the bath with $`c(\tau )`$, the suitable coupling between the system and the bath is given by the spectral density of fluctuations of the clock: $$T_\mathrm{b}\chi (\omega )^2=\int _0^{\mathrm{\infty }}\mathrm{d}\tau \,c(\tau )\mathrm{cos}(\omega \tau ).$$ (6.9) With this choice, the master equation for evolution according to real clocks is identical to the master equation for the system obtained by tracing over the effective bath. ### 6.3 Loss of coherence The master equation contains a diffusion term and will therefore lead to loss of coherence. However, this loss depends on the initial state. In other words, there exists a pointer basis, so that any density matrix which is diagonal in this specific basis will not be affected by the diffusion term, while any other will approach a diagonal density matrix. The stochastic perturbation $`\alpha (t)H`$ is obviously diagonal in the basis of eigenstates of the Hamiltonian, which is therefore the pointer basis: the interaction term cannot induce any transition between different energy levels. The smallest energy difference provides the inverse of the characteristic evolution time $`l`$ of the system and, therefore, the decay constant is $`\kappa ^2/(\vartheta l^2)`$, equal to the inverse of the period of applicability of the clock. By the end of this period, the density matrix will have been reduced to the diagonal terms and a much diminished remnant of those off-diagonal terms with slow evolution. In any case, the von Neumann entropy grows if the density matrix is not initially diagonal in the energy basis. The effect of decoherence due to errors of real clocks does not turn up only in the quantum context. Consider for instance a classical particle with a definite energy moving under a time-independent Hamiltonian $`H`$. Because of the errors of the clock, we cannot be certain about the location of the particle on its phase-space trajectory at our clock time $`t`$. Therefore we have an increasing spread in the coordinate and conjugate momentum over the trajectory. For a generic system, this effect is codified in the classical master equation $$\dot{\varrho }=\{H,\varrho \}+(\kappa ^2/\vartheta )\{H,\{H,\varrho \}\},$$ (6.10) where $`\varrho (t)`$ is the probability distribution on phase space in clock time. Finally, it should be observed that the mechanism of decoherence is neither tracing over degrees of freedom, nor coarse graining, nor dephasing. Even though there is no integration over time introduced here by fiat, as happens in dephasing in quantum mechanics, the spread in time due to the errors of the clock has a similar effect, and produces decoherence.
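The pointer-basis structure of Eq. (6.7) can be displayed directly. The sketch below integrates the master equation for an assumed three-level system (spectrum and parameter values are illustrative choices of ours) and compares the decay of each off-diagonal element with the rate $`(\kappa ^2/\vartheta )(E_a-E_b)^2`$ implied by the diagonal form of $`H`$:

```python
import numpy as np

# Euler integration of Eq. (6.7) for a toy three-level system.
E = np.array([0.0, 1.0, 2.5])          # assumed energy levels; gaps set 1/l
H = np.diag(E)
k2_over_theta = 0.02                   # kappa^2 / theta

def comm(a, b):
    return a @ b - b @ a

rho = np.full((3, 3), 1 / 3, dtype=complex)   # pure superposition of all levels
dt, T = 2e-4, 40.0
for _ in range(int(T / dt)):
    rho = rho + dt * (-1j * comm(H, rho)
                      - k2_over_theta * comm(H, comm(H, rho)))

# Off-diagonals in the energy (pointer) basis decay at (kappa^2/theta)(E_a-E_b)^2;
# populations are untouched.
for a in range(3):
    for b in range(a + 1, 3):
        rate = k2_over_theta * (E[a] - E[b]) ** 2
        print(f"|rho_{a}{b}| = {abs(rho[a, b]):.4f}, "
              f"predicted {np.exp(-rate * T) / 3:.4f}")
print("populations:", np.real(np.diag(rho)))  # remain 1/3 each
```

The measured and predicted magnitudes agree to within the small first-order Euler integration error, and the populations stay exactly at their initial values, as the pointer-basis argument requires.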
### 6.4 Real clocks and spacetime foam As we have seen, there exist strong similarities between the evolution in spacetime foam and that in quantum mechanics with real clocks. In both cases, the fluctuations are described statistically and induce loss of coherence. However, there are some major differences. In the case of real clocks, the diffusion term contains only the Hamiltonian of the system while, in the spacetime foam analysis, a plethora of interactions appeared. Closely related to this, fluctuations of the real clock affect classical and quantum evolution in very similar ways; this is not the case in spacetime foam. The origin of these differences is the nature of the fluctuations that we are considering and, more specifically, the existence or not of horizons. Indeed, when studying real clocks, we have ensured that they satisfied Galilean causality, i.e., that the real-time parameter always grows as compared with the ideal time, so that no closed timelike curves are allowed in Galilean spacetime, whichever clock we are using. This requirement is in sharp contrast with the situation that we find in spacetime foam, where we have to consider topological fluctuations that contain horizons (virtual black holes, time machines, etc.). Scattering processes in a spacetime with horizons are necessarily of a quantum nature. A classical scattering process in the presence of these horizons would inevitably lead to loss of probability because of the particles that would fall inside the horizons and never come out to the asymptotic region. In other words, the underlying dynamics is completely different in both cases. Spacetime foam provides a non-Hamiltonian dynamics since the underlying manifold is not globally hyperbolic. On the other hand, in the case of quantum mechanics according to clocks subject to small errors, the underlying evolution is purely Hamiltonian, although the effective one is an average over all possible Hamiltonian evolutions and becomes nonunitary. ## 7 Conclusions Quantum fluctuations of the gravitational field may well give rise to the existence of a minimum length at the Planck scale. This can be seen, for instance, by making use of the fact that measurements and vacuum fluctuations of the gravitational field are extended both in space and time and can therefore be treated with the techniques employed for continuous measurements, in particular the action uncertainty principle. The existence of this resolution limit spoils the metric structure of spacetime at the Planck scale and opens a doorway to nontrivial topologies, which will not only contribute to the path integral formulation but will also dominate the Planck-scale physics, thus endowing spacetime with a foamlike structure of very complicated topology. Indeed, at the Planck scale, both the partition function and the density of topologies seem to receive the dominant contribution from topological configurations with very high Betti numbers. Spacetime foam may leave its imprint in the low-energy physics. For instance, it can play the role of a universal regulator for both the ultraviolet and infrared divergences of quantum field theory. It has also been proposed as the key ingredient in mechanisms for the vanishing of the cosmological constant. Furthermore, it seems to induce loss of coherence in the low-energy quantum fields that propagate on it, as well as mode-dependent energy shifts.
In order to study some of these effects in more detail, we have built an effective theory in which spacetime foam has been substituted by a fixed classical background plus nonlocal interactions between the low-energy fields, confined to bounded spacetime regions of nearly Planck size. In the weak-coupling approximation, these nonlocal interactions become bilocal. The low-energy evolution is nonunitary because of the absence of a nonvanishing timelike Hamiltonian vector field. The nonunitarity of the bilocal interaction can be encoded in a quantum noise source locally coupled to the low-energy fields. From the form of the influence functional that accounts for the interaction with spacetime foam, we have derived a master equation for the evolution of the low-energy fields which contains a diffusion term, a damping term, and energy shifts that can be interpreted as gravitational Lamb and Stark effects. We have also discussed the size of these effects as well as the possibility of observing them in the near future. We have seen that the evolution of quantum systems according to good real clocks is quite similar to that in spacetime foam. Indeed, we have argued that good classical clocks, which are naturally subject to fluctuations, can be described in statistical terms, and we have obtained the master equation that governs the evolution of quantum systems according to these clocks. This master equation is diffusive and produces loss of coherence. Moreover, real clocks can be described in terms of effective interactions that are nonlocal in time. Alternatively, they can be modeled by an effective thermal bath coupled to the system. In view of this analysis, we have seen that, although there exist strong similarities between propagation in spacetime foam and according to real clocks, there are also important differences that come from the fact that the underlying evolution laws for spacetime foam are nonunitary because of the presence of horizons while, in the case of real clocks, the underlying evolution is unitary and the loss of coherence is due to an average over such Hamiltonian evolutions. ## Acknowledgments I am grateful to C. Barceló and P.F. González-Díaz for helpful discussions and for reading the manuscript. I was supported by funds provided by DGICYT and MEC (Spain) under Projects PB93–0139, PB94–0107, and PB97–1218.
no-problem/9911/hep-lat9911017.html
One of the recent developments in the treatment of the chiral symmetry on a lattice based on the Ginsparg-Wilson relation $`D^{lat}\gamma _5+\gamma _5D^{lat}=aD^{lat}\gamma _5D^{lat}`$ (1) is the interesting index relation $`tr^{lat}\gamma _5(1-{\displaystyle \frac{a}{2}}D^{lat})=n_+^{lat}-n_{-}^{lat}`$ (2) on a lattice and its implications for the understanding of the anomaly. In these equations, $`D^{lat}`$ is a Dirac operator describing a fermion on a lattice, $`n_\pm ^{lat}`$ are the numbers of the right-handed and left-handed zero modes of $`D^{lat}`$, $`a`$ is the lattice spacing and $`tr^{lat}`$ is the trace in the lattice theory. The role of the factor $`(1-\frac{a}{2}D^{lat})`$ in Eq. (2), which is absent in the continuum index relation $`tr^c\gamma _5=n_+^c-n_{-}^c,`$ (3) is further studied in Refs. based on the representation of the algebra (1), leading to the observation that the mismatch of the chiralities between the zero modes of $`D^{lat}`$ should be compensated by the mismatch of the chiralities of its eigenmodes with the eigenvalue $`\frac{2}{a}`$ to ensure the relation $`tr^{lat}\gamma _5=0`$ in the lattice theory. In this short note, we pursue Ginsparg and Wilson’s block spin approach in the derivation of the Ginsparg-Wilson relation and study the correspondence between the eigenmodes of a continuum Dirac operator $`D^c`$ and those of the lattice Dirac operator $`D^{lat}`$ constructed from $`D^c`$ following the block spin transformation, in the hope that such analyses will clarify the understanding of the eigenmodes of $`D^{lat}`$ from a physical point of view. The eigenmodes of $`D^{lat}`$ with the eigenvalue $`\frac{2}{a}`$ and the zero modes of $`D^c`$ and $`D^{lat}`$ are investigated after introducing a suitable cut-off in $`D^c`$ to make our analysis free from divergences. This cut-off procedure, which was not considered in Ref. , is an important step in deriving a clear correspondence of the eigenmodes in our study. We will see that the eigenmodes of $`D^{lat}`$ with the eigenvalue $`\frac{2}{a}`$ do not correspond to any physical modes of $`D^c`$; thus they are considered to be unphysical. Based on this criterion of the unphysical modes, we interpret that the role of the factor $`(1-\frac{a}{2}D^{lat})`$ in $`tr^{lat}\gamma _5(1-\frac{a}{2}D^{lat})`$ is to ensure that the unphysical modes $`\lambda _n`$ satisfying $`D^{lat}\lambda =\frac{2}{a}\lambda `$ are omitted in the evaluation of the trace. Also we will show that the zero modes of $`D^c`$ are transformed to the zero modes of $`D^{lat}`$ preserving the chirality, so that $`n_\pm ^c=n_\pm ^{lat}`$. These two observations provide us with a physical interpretation of the index expression (2) and the identity $`tr^c\gamma _5=tr^{lat}\gamma _5(1-{\displaystyle \frac{a}{2}}D^{lat})`$ (4) at a formal level. We begin with an action $`S^c(\overline{\varphi }_x,\varphi _x)`$ of the fermionic fields $`\varphi _x`$ and $`\overline{\varphi }_x`$ defined in the continuum Euclidean space-time. From this action Ginsparg and Wilson constructed a new action $`S^{lat}(\overline{\psi }_n,\psi _n)`$ on a lattice by a block spin transformation.
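Before entering the construction, the algebraic content of Eqs. (1)-(2) can be verified numerically. The following sketch uses an overlap-type construction that is our own illustrative assumption (it is not part of this note): any unitary $`V`$ with $`\gamma _5V\gamma _5=V^{\dagger }`$ yields a Ginsparg-Wilson operator $`D=(1-V)/a`$, and we take $`V=\gamma _5\,\mathrm{sign}(K)`$ with $`K`$ hermitian and an engineered spectral asymmetry:

```python
import numpy as np

# Toy Ginsparg-Wilson operator and its index, Eqs. (1)-(2).
rng = np.random.default_rng(2)
n, a = 4, 1.0
g5 = np.diag([1.0] * n + [-1.0] * n)               # gamma_5 in a chiral basis

Q, _ = np.linalg.qr(rng.normal(size=(2 * n, 2 * n)))
w = np.array([1.0] * (n + 2) + [-1.0] * (n - 2))   # 6 positive, 2 negative signs
signK = Q @ np.diag(w) @ Q.T                       # hermitian, squares to one
V = g5 @ signK                                     # unitary, g5 V g5 = V^dagger
D = (np.eye(2 * n) - V) / a

gw = D @ g5 + g5 @ D - a * (D @ g5 @ D)            # Eq. (1)
print("GW violation     :", np.abs(gw).max())      # ~ 1e-15

index = np.real(np.trace(g5 @ (np.eye(2 * n) - 0.5 * a * D)))
u, s, vh = np.linalg.svd(D)
Z = vh[s < 1e-8].conj().T                          # orthonormal basis of ker D
nplus_minus = np.real(np.trace(Z.conj().T @ g5 @ Z))
print("tr g5 (1 - aD/2) :", round(index))          # left-hand side of Eq. (2)
print("n_+ - n_-        :", round(nplus_minus))
```

Both index evaluations return 2 here, and the Ginsparg-Wilson relation holds to machine precision, illustrating that (2) is an exact operator identity once (1) holds.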
First we define the block variables $`\rho _n`$ and $`\overline{\rho }_n`$ corresponding to the continuum fields $`\varphi _x`$ and $`\overline{\varphi }_x`$ as $`\rho _n={\displaystyle \sum _x}f_{nx}\varphi _x,\overline{\rho }_n={\displaystyle \sum _x}\overline{\varphi }_xf_{xn}^{\dagger },`$ (5) where the functions $`f_{nx}`$ and $`f_{xn}^{\dagger }`$ have a sharp peak at $`x=n`$ and are proportional to the unit matrix in Dirac space. Using these variables, the new action $`S^{lat}(\overline{\psi }_n,\psi _n)`$ is defined by means of the block spin transformation as $`Ce^{-S^{lat}(\overline{\psi },\psi )}={\displaystyle \int \prod _xd\overline{\varphi }_xd\varphi _xe^{-S^c(\overline{\varphi },\varphi )-\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n)(\psi _n-\rho _n)}},`$ (6) where $`\alpha `$ is a constant and will be equated to $`2/a`$ later. Assuming that $`S^c(\overline{\varphi },\varphi )`$ is quadratic in the fermion fields, then so is $`S^{lat}(\overline{\psi },\psi )`$, and we may write $`S^c(\overline{\varphi },\varphi )={\displaystyle \sum _{xy}}\overline{\varphi }_xD_{xy}^c\varphi _y,S^{lat}(\overline{\psi },\psi )={\displaystyle \sum _{mn}}\overline{\psi }_mD_{mn}^{lat}\psi _n.`$ (7) The Ginsparg-Wilson relation is a relation satisfied by $`D_{mn}^{lat}`$ by virtue of its having been constructed from an initially chirally invariant action $`S^c(\overline{\varphi },\varphi )`$. The exponent of the right-hand side of Eq. (6) is $`-{\displaystyle \sum _{xy}}\overline{\varphi }_xD_{xy}^c\varphi _y-\alpha {\displaystyle \sum _n}(\overline{\psi }_n-\overline{\rho }_n)(\psi _n-\rho _n)`$ $`=-{\displaystyle \sum _{xy}}\overline{\varphi }_x\{D_{xy}^c+\alpha {\displaystyle \sum _n}f_{xn}^{\dagger }f_{ny}\}\varphi _y+{\displaystyle \sum _x}\overline{\varphi }_x\xi _x+{\displaystyle \sum _y}\overline{\xi }_y\varphi _y-\alpha {\displaystyle \sum _n}\overline{\psi }_n\psi _n,`$ (8) $`\xi _x=\alpha {\displaystyle \sum _n}f_{xn}^{\dagger }\psi _n,\overline{\xi }_y=\alpha {\displaystyle \sum _n}\overline{\psi }_nf_{ny}.`$ Here we assume that our continuum theory is made well-defined by a certain regularization procedure and that the spectrum of the Dirac operator $`D_{xy}^c`$ (namely the absolute value of its eigenvalues) is bounded from above. Therefore all the eigenmodes $`\lambda _x^{phys}`$ of the Dirac operator $`D_{xy}^c`$, which we want to simulate on a lattice, satisfy $`{\displaystyle \sum _y}D_{xy}^c\lambda _y^{i,phys}=\epsilon _i\lambda _x^{i,phys},|\epsilon _i|\leq \mathrm{\Lambda },`$ (9) for some cut-off $`\mathrm{\Lambda }`$. Then for sufficiently large $`\alpha `$ (as will be seen later, $`\alpha `$ plays the role of the cut-off in the lattice theory), the operator $`M_{xy}=D_{xy}^c+\alpha \sum _nf_{xn}^{\dagger }f_{ny}`$ has no zero mode, so that its inverse $`M_{xy}^{-1}`$ exists. For this choice of $`\alpha `$, the result of the path integral in Eq. (6) is given by $`Ce^{-S^{lat}(\overline{\psi },\psi )}`$ $`=`$ $`detM\,e^{\sum _{xy}\overline{\xi }_xM_{xy}^{-1}\xi _y-\alpha \sum _n\overline{\psi }_n\psi _n}`$ (10) $`=`$ $`detM\,e^{-\sum _{mn}\overline{\psi }_m\{-\alpha ^2\sum _{xy}f_{mx}M_{xy}^{-1}f_{yn}^{\dagger }+\alpha \delta _{mn}\}\psi _n}`$ so we see $`D_{mn}^{lat}`$ $`=`$ $`-\alpha ^2{\displaystyle \sum _{xy}}f_{mx}M_{xy}^{-1}f_{yn}^{\dagger }+\alpha \delta _{mn},`$ (12) $`M_{xy}=D_{xy}^c+\alpha {\displaystyle \sum _n}f_{xn}^{\dagger }f_{ny}.`$ Now we consider the eigenmodes of $`D_{mn}^{lat}`$ and identify unphysical modes. Since the eigenvalues of $`M_{xy}`$ defined in Eq.
(12) for the eigenmodes $`\lambda _x^{phys}`$ are non-zero and finite, neither $`\sum _yM_{xy}\lambda _y^{phys}`$ nor $`\sum _yM_{xy}^{-1}\lambda _y^{phys}`$ is equal to zero; $`{\displaystyle \sum _y}M_{xy}\lambda _y^{phys}\neq 0,{\displaystyle \sum _y}M_{xy}^{-1}\lambda _y^{phys}\neq 0.`$ (13) Therefore, a mode $`\lambda _n`$ of $`D_{mn}^{lat}`$ which satisfies $`{\displaystyle \sum _{y,n}}M_{xy}^{-1}f_{yn}^{\dagger }\lambda _n=0`$ (14) is considered to be unphysical, because the eigenmode $`\lambda _n`$ on the lattice does not simulate a mode similar to any physical mode $`\{\lambda _x^{phys}\}`$. For $`\lambda _n`$ satisfying $`\sum _{y,n}M_{xy}^{-1}f_{yn}^{\dagger }\lambda _n=0`$, we have $`{\displaystyle \sum _n}D_{mn}^{lat}\lambda _n=\alpha \lambda _m`$ (15) so the eigenmodes of $`D_{mn}^{lat}`$ with the eigenvalue $`\alpha `$ are considered to be unphysical on the lattice, because they have no counterparts among the physical spectrum $`\{\lambda _y^{phys}\}`$ of $`D_{xy}^c`$. Next we consider a zero mode $`\lambda _x^{0,phys}`$ of $`D_{xy}^c`$. The eigenvalue equation $`\sum _yD_{xy}^c\lambda _y^{0,phys}=0`$ and Eq. (12) yield $`{\displaystyle \sum _y}M_{xy}\lambda _y^{0,phys}=\alpha {\displaystyle \sum _y}{\displaystyle \sum _n}f_{xn}^{\dagger }f_{ny}\lambda _y^{0,phys}.`$ (16) Since $`M_{xy}^{-1}`$ exists, multiplying both sides of the above equation by $`\sum _zf_{mz}M_{zx}^{-1}`$, we obtain $`\lambda _m^{0,phys}=\alpha {\displaystyle \sum _{z,x,n}}f_{mz}M_{zx}^{-1}f_{xn}^{\dagger }\lambda _n^{0,phys},`$ (17) $`\lambda _m^{0,phys}={\displaystyle \sum _x}f_{mx}\lambda _x^{0,phys}.`$ (18) Because of the identity (17), we have $`{\displaystyle \sum _n}D_{mn}^{lat}\lambda _n^{0,phys}`$ $`=`$ $`{\displaystyle \sum _n}\{-\alpha ^2{\displaystyle \sum _{xy}}f_{mx}M_{xy}^{-1}f_{yn}^{\dagger }+\alpha \delta _{mn}\}\lambda _n^{0,phys}`$ (19) $`=`$ $`-\alpha \lambda _m^{0,phys}+\alpha \lambda _m^{0,phys}=0,`$ therefore $`\lambda _n^{0,phys}`$ is a zero mode of $`D_{mn}^{lat}`$. The functions $`\{f_{nx}\}`$ are proportional to the unit matrix in Dirac space, so that the chirality of $`\lambda _m^{0,phys}`$ is the same as that of $`\lambda _x^{0,phys}`$. So all the zero modes of $`D_{xy}^c`$ have corresponding zero modes of $`D_{mn}^{lat}`$ preserving their chiralities. This is in some sense a desirable and naively expected result, and we have seen that it holds regardless of the details of the block spin transformation, e.g., the choice of the functions $`f`$ and $`f^{\dagger }`$. Here we assume that our lattice is fine enough that the shape of the zero modes $`\lambda _x^{0,phys}`$ is well preserved on the lattice when they are transformed to the zero modes $`\lambda _n^{0,phys}`$ of $`D_{mn}^{lat}`$, and that they remain distinguishable from each other on the lattice. Our criterion of the unphysical modes on the lattice leads us to define the index on the lattice among the physical modes to be $`tr^{lat}\gamma _5(1-{\displaystyle \frac{1}{\alpha }}D^{lat}).`$ (20) After eliminating the contribution of the unphysical modes by the factor $`(1-\frac{1}{\alpha }D^{lat})`$, the index (20) is equal to $`n_+^{lat}-n_{-}^{lat}`$, where $`n_+^{lat}`$ and $`n_{-}^{lat}`$ are the numbers of right-handed and left-handed zero modes of $`D^{lat}`$. Our analysis of the zero modes in the continuum and lattice theories naively yields $`n_\pm ^c=n_\pm ^{lat}`$, where $`n_+^c`$ and $`n_{-}^c`$ are the numbers of right-handed and left-handed zero modes of $`D^c`$.
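To make the two observations just made concrete, the block spin construction (12) can be carried out explicitly for a small toy model. The construction below is ours, purely for illustration: a chiral "continuum" operator with one engineered zero mode of each chirality (so its index is 0; the point is the mapping of individual zero modes), and a 4-site block-averaging choice for $`f`$:

```python
import numpy as np

rng = np.random.default_rng(3)
Nc, Nl, alpha = 32, 8, 4.0               # "continuum" sites, lattice sites, alpha

# Chiral D^c = [[0, W], [-W^dag, 0]]; W has right kernel span{v}.
v = rng.normal(size=Nc); v /= np.linalg.norm(v)
A = rng.normal(size=(Nc, Nc)) + Nc * np.eye(Nc)      # safely invertible
W = A @ (np.eye(Nc) - np.outer(v, v))
Dc = np.block([[np.zeros((Nc, Nc)), W], [-W.conj().T, np.zeros((Nc, Nc))]])
g5c = np.kron(np.diag([1.0, -1.0]), np.eye(Nc))
assert np.abs(g5c @ Dc + Dc @ g5c).max() < 1e-10     # chiral invariance of D^c

B = np.zeros((Nl, Nc))                               # 4-site block averaging
for m in range(Nl):
    B[m, 4 * m: 4 * (m + 1)] = 0.5
f = np.kron(np.eye(2), B)                            # unit matrix in Dirac space
g5l = np.kron(np.diag([1.0, -1.0]), np.eye(Nl))

M = Dc + alpha * f.conj().T @ f
Dlat = alpha * np.eye(2 * Nl) - alpha**2 * f @ np.linalg.inv(M) @ f.conj().T

a = 2.0 / alpha                                      # Ginsparg-Wilson relation check
print("GW violation :", np.abs(Dlat @ g5l + g5l @ Dlat - a * Dlat @ g5l @ Dlat).max())

lam0 = np.concatenate([np.zeros(Nc), v])             # D^c lam0 = 0, g5 lam0 = -lam0
lam0_lat = f @ lam0                                  # Eq. (18)
print("|D^lat f lam0|:", np.abs(Dlat @ lam0_lat).max())              # Eq. (19): 0
print("chirality     :", (lam0_lat @ g5l @ lam0_lat) / (lam0_lat @ lam0_lat))
print("lattice index :", round(np.real(np.trace(g5l @ (np.eye(2 * Nl) - Dlat / alpha)))))
```

As the derivation above guarantees, the blocked zero mode is annihilated by $`D^{lat}`$ exactly (up to rounding) with its chirality $`-1`$ intact, $`D^{lat}`$ satisfies the Ginsparg-Wilson relation with $`a=2/\alpha `$, and the lattice index (20) reproduces the continuum value (here 0, since the kernel of $`W^{\dagger }`$ supplies the matching chirality $`+1`$ mode).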
These two observations give rise to the index relation among the physical modes $`tr^c\gamma _5=tr^{lat}\gamma _5(1-{\displaystyle \frac{1}{\alpha }}D^{lat}).`$ (21) Here we derive the Ginsparg-Wilson relation starting from Eq. (6) to see that $`\alpha =2/a`$, and the index relation (21), which was obtained by studying the correspondence of the eigenmodes, is also obtained as a by-product of the derivation. Under a global chiral transformation $`\psi \to e^{iϵ\gamma _5}\psi `$ and $`\overline{\psi }\to \overline{\psi }e^{iϵ\gamma _5}`$, we have $`Ce^{-S^{lat}(\overline{\psi }e^{iϵ\gamma _5},e^{iϵ\gamma _5}\psi )}`$ $`=`$ $`{\displaystyle \int \prod _xd\overline{\varphi }_xd\varphi _xe^{-S^c(\overline{\varphi },\varphi )-\alpha \sum _n(\overline{\psi }_ne^{iϵ\gamma _5}-\overline{\rho }_n)(e^{iϵ\gamma _5}\psi _n-\rho _n)}}`$ (22) $`=`$ $`e^{2iϵtr^c\gamma _5}{\displaystyle \int \prod _xd\overline{\varphi }_x^{\prime }d\varphi _x^{\prime }e^{-S^c(\overline{\varphi }^{\prime },\varphi ^{\prime })-\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n^{\prime })e^{2iϵ\gamma _5}(\psi _n-\rho _n^{\prime })}}`$ where the second line follows from a change of variables $`\varphi \to \varphi ^{\prime }=e^{iϵ\gamma _5}\varphi `$ and $`\overline{\varphi }\to \overline{\varphi }^{\prime }=\overline{\varphi }e^{iϵ\gamma _5}`$, and $`\rho _n^{\prime }`$ and $`\overline{\rho }_n^{\prime }`$ are the block variables constructed from $`\varphi ^{\prime }`$ and $`\overline{\varphi }^{\prime }`$. We have also kept the Jacobian factor $`2iϵtr^c\gamma _5`$ of this change of variables, which exhibits some subtleties for the physical modes $`\{\lambda _x^{phys}\}`$ of the Dirac operator $`D_{xy}^c`$ with gauge fields of a topologically nontrivial nature. The right-hand side of Eq. (22) is expanded to first order in $`ϵ`$ as $`(1+2iϵtr^c\gamma _5){\displaystyle \int \prod _xd\overline{\varphi }_x^{\prime }d\varphi _x^{\prime }e^{-S^c(\overline{\varphi }^{\prime },\varphi ^{\prime })-\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n^{\prime })(\psi _n-\rho _n^{\prime })}\{1+2iϵ\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n^{\prime })\gamma _5(\psi _n-\rho _n^{\prime })\}}`$ $`=(1+2iϵtr^c\gamma _5){\displaystyle \int \prod _xd\overline{\varphi }_x^{\prime }d\varphi _x^{\prime }\{1+2iϵ\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n^{\prime })\gamma _5(-\frac{1}{\alpha })\frac{\partial }{\partial \overline{\psi }_n}\}e^{-S^c(\overline{\varphi }^{\prime },\varphi ^{\prime })-\alpha \sum _n(\overline{\psi }_n-\overline{\rho }_n^{\prime })(\psi _n-\rho _n^{\prime })}}`$ $`=(1+2iϵtr^c\gamma _5)[1-2iϵ\{tr^{lat}\gamma _5+{\displaystyle \frac{1}{\alpha }}{\displaystyle \frac{\partial }{\partial \psi _n}}\gamma _5{\displaystyle \frac{\partial }{\partial \overline{\psi }_n}}\}]Ce^{-S^{lat}(\overline{\psi },\psi )}`$ $`=[1+2iϵ\{tr^c\gamma _5-tr^{lat}\gamma _5(1-{\displaystyle \frac{1}{\alpha }}D^{lat})\}+2iϵ{\displaystyle \frac{1}{\alpha }}\overline{\psi }D^{lat}\gamma _5D^{lat}\psi ]Ce^{-S^{lat}(\overline{\psi },\psi )}`$ (23) while the left-hand side is expanded as $`Ce^{-S^{lat}(\overline{\psi }e^{iϵ\gamma _5},e^{iϵ\gamma _5}\psi )}=Ce^{-S^{lat}(\overline{\psi },\psi )}\{1+iϵ\overline{\psi }(D^{lat}\gamma _5+\gamma _5D^{lat})\psi \}+O(ϵ^2).`$ (24) Comparing these two expressions, we obtain the Ginsparg-Wilson relation $`D^{lat}\gamma _5+\gamma _5D^{lat}={\displaystyle \frac{2}{\alpha }}D^{lat}\gamma _5D^{lat}`$ (25) with $`\alpha =2/a`$, and the index relation $`2tr^c\gamma _5=2tr^{lat}\gamma _5(1-{\displaystyle \frac{1}{\alpha }}D^{lat}).`$ (26) Finally a few comments are in order. Our criterion of the unphysical modes of $`D^{lat}`$ is consistent with the definitions of the scalar and pseudo-scalar densities $`S_m`$ and $`P_m`$ proposed in Refs.
$`S_m={\displaystyle \sum _n}\overline{\psi }_m(1-{\displaystyle \frac{1}{\alpha }}D^{lat})_{mn}\psi _n,P_m={\displaystyle \sum _n}\overline{\psi }_m\gamma _5(1-{\displaystyle \frac{1}{\alpha }}D^{lat})_{mn}\psi _n,`$ (27) where we interpret that the role of $`(1-\frac{1}{\alpha }D^{lat})`$ is to eliminate the contributions of the unphysical modes. The precise relation between the high-energy modes of $`D^c`$ and the high-energy modes of $`D^{lat}`$ with eigenvalue not equal to $`\alpha `$ depends on the choice of the cut-off scales $`\mathrm{\Lambda }`$ in Eq. (9) and $`\alpha `$. However, these high-energy modes are always vector-like, both in the continuum and lattice theories, and will decouple from the low-energy physics. Thus detailed prescriptions of the ultraviolet cut-off do not change our observations. I would like to thank K. Fujikawa for encouraging me to write this short note. I am also grateful to M. Ishibashi for careful reading of the manuscript and useful comments, and to K. Nagai for useful comments.
no-problem/9911/cond-mat9911425.html
# Effects of Non-local Stress on the Determination of Shear Banding Flow ## Abstract We analyze the steady planar shear flow of the modified Johnson-Segalman model, which has an added non-local term. We find that the new term allows for unambiguous selection of the stress at which two “phases” coexist, in contrast to the original model. For general differential constitutive models, we show the singular nature of stress selection in terms of a saddle connection between fixed points in the equivalent dynamical system. The result means that stress selection is unique under most conditions for spatially non-local models. Finally, illustrated by simple models, we show that stress selection generally depends on the form of the non-local terms (weak universality). Introduction—Oldroyd’s proposal that any sensible rheological constitutive equation for a general fluid should obey the “admissible conditions” has had a great influence on the rheology community. In his “admissible conditions”, apart from requiring covariance, Oldroyd further imposed the “principle of frame indifference” and “locality”. de Gennes has shown that the former is no longer true if inertia effects are important. Below we discuss how one must extend locality to model shear banding flow, which appears in some surfactant and polymer solutions. Shear banding has long been mooted by the polymer rheology community in connection with the spurt effect in the processing of linear polymer melts (see, *e.g.* Ref. for a review). It is, however, in surfactant worm-like micelle solutions that the phenomenon has been firmly established, particularly convincingly through magnetic imaging experiments. For entangled polymers or surfactant micelles, as the shear rate increases the polymers/micelles gradually align and the fluid shear thins. According to reptation theory, the fluid shear thins so heavily that a maximum appears in the shear stress-shear rate relation at a shear rate of approximately the inverse reptation time. At high enough shear rates, the shear stress is presumed to increase again, presumably due to local (e.g. solvent) dissipation mechanisms. The steady shear rate curve is qualitatively like Fig. 1. For intermediate shear rates, where the slope is negative, homogeneous flow is mechanically unstable. It is found in real systems that, when the controlled shear rate increases up to the order of the inverse reptation time, the fluid develops a composite flow profile (shear bands), where two or more bands, with alternating low and high shear rates, coexist at a common shear stress. According to Fig. 1, there is a range of shear stress $`[\sigma _{min},\sigma _{max}]`$ within which the constitutive equation is multi-valued (three shear rates for a given stress), so that shear bands seem possible for any stress within $`[\sigma _{min},\sigma _{max}]`$. However, real systems select a well-defined stress. We therefore need a mathematical condition, a selection criterion, which determines the coexisting stress. There are scattered results showing generically that steady state solutions behave very differently depending on whether or not the constitutive equations are local in space. For local models, the final steady stress depends on the flow history. Therefore an additional selection criterion must be imposed to select the stress, be it variational or not. On the other hand, steady state analysis of several non-local models, either analytically or numerically, yields a well-defined selected stress.
It is interesting to know whether non-local effects generally lead to a stress selection criterion. In this paper, we first present numerical results for the modified Johnson-Segalman (JS) model, as a concrete example of stress selection in a non-local model that has been well studied in local form by many workers. Although perhaps not molecularly faithful to any fluid, the JS model is a covariant model which possesses a non-monotonic flow curve, providing the necessary ingredient for a qualitative study of shear banding. We then present the main result, which shows the general link between non-locality and stress selection. Finally we use simple examples to illustrate the important fact that stress selection depends on the details of the interface structure, *i.e.* it has weak universality. Numerical Results for the Modified JS Model with Stress Diffusion—This model is given by $`\rho (\partial _t+𝐯\cdot \mathbf{\nabla })𝐯`$ $`=`$ $`\mathbf{\nabla }\cdot 𝝈`$ (1) $`𝝈`$ $`=`$ $`-p𝐈+2\eta 𝜿+𝚺`$ (2) $`\stackrel{\diamond }{𝚺}`$ $`=`$ $`2G𝜿-𝚺/\tau +D\nabla ^2𝚺`$ (3) where the strain rate is $`𝜿\equiv (\mathbf{\nabla }𝐯+\mathbf{\nabla }𝐯^T)/2`$. The first equation is the momentum equation, with $`\rho ,𝐯,𝝈`$ the fluid density, velocity, and stress tensor, respectively. The stress comprises the pressure $`p`$ (I denotes the identity matrix), the elastic stress $`𝚺`$, and a Newtonian viscous stress with viscosity $`\eta `$. We assume an incompressible fluid, $`\mathbf{\nabla }\cdot 𝐯=0`$, enforced by the pressure. The non-Newtonian stress $`𝚺`$ is governed by Eq. (3), where the stress is induced by the strain rate $`𝜿`$ with a strength described by a modulus $`G`$, and $`\tau `$ is the stress relaxation time. The Gordon-Schowalter convected time derivative is $`\stackrel{\diamond }{𝚺}`$ $`\equiv `$ $`\left(\partial _t+𝐯\cdot \mathbf{\nabla }\right)𝚺-\mathbf{\nabla }𝐯^T\cdot 𝚺-𝚺\cdot \mathbf{\nabla }𝐯`$ (5) $`+(1-a)\left(𝚺\cdot 𝜿+𝜿\cdot 𝚺\right),`$ and describes the “slip” between the polymer and fluid extension, with $`a\in [-1,1]`$. Eqs. (1-3) differ from the standard JS model due to the additional stress diffusion term $`D\nabla ^2𝚺`$ in Eq. (3). El-Kareh and Leal have analyzed the microscopic dumbbell model (*i.e.* the case $`a=1`$) and derived this term, representing the diffusion of the stress-carrying element. Note the important difference that the standard JS model is a local model, whereas Eq. (3) is not. Given a shear stress value $`\sigma _{xy}`$, we look for the steady planar shear flow numerically. For $`𝐯=v(y,t)\widehat{𝐞}_x`$, the modified JS model reduces to $`\sigma `$ $`=`$ $`S+e\dot{\gamma }`$ (6) $`\tau \partial _tS`$ $`=`$ $`\widehat{D}\partial _y^2S-S-\dot{\gamma }N+\dot{\gamma }`$ (7) $`\tau \partial _tN`$ $`=`$ $`\widehat{D}\partial _y^2N-N+\dot{\gamma }S`$ (8) The rescaled variables are defined as $`\sigma =\sqrt{1-a^2}\sigma _{xy}/G`$, $`S=\sqrt{1-a^2}\mathrm{\Sigma }_{xy}/G`$, $`N=[(1-a)\mathrm{\Sigma }_{xx}-(1+a)\mathrm{\Sigma }_{yy}]/2G`$, $`\widehat{D}=\tau D`$, $`\dot{\gamma }=2\sqrt{1-a^2}\tau \kappa _{xy}`$, and $`e=\eta /G\tau `$. Because the steady planar shear flow solution of Eq. (1) has zero Reynolds number (and most macromolecular dynamics are in this limit) we can use the low Reynolds number limit ($`\rho \to 0`$), Eq. (6), to obtain the same steady solution. Eqs. (7-8) are the non-trivial dynamics of $`𝚺`$ (other components are uncoupled). Note that there is only one non-dimensional parameter $`e`$ (the viscosity ratio) in the steady state problem. Given $`\sigma `$, we integrate Eqs. (6-8) over time $`t`$ to see whether a steady state banding solution can be reached.
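A minimal finite-difference sketch of this relaxation procedure follows; the pinned boundary values and the tanh initial profile it implements are restated in the next paragraph, while the grid sizes and parameter values are illustrative choices of ours (units with $`\tau =G=1`$):

```python
import numpy as np

# Relaxation of Eqs. (6)-(8) at fixed sigma, to test for a steady band.
e, Dhat, sigma = 0.05, 1e-3, 0.50

# Homogeneous branches: S = g/(1+g^2), N = g^2/(1+g^2), sigma = S + e*g,
# i.e. e*g^3 - sigma*g^2 + (1+e)*g - sigma = 0.
g_roots = np.sort(np.roots([e, -sigma, 1 + e, -sigma]).real)
g_lo, g_hi = g_roots[0], g_roots[-1]          # outer (stable) branches

def branch(g):
    return g / (1 + g**2), g**2 / (1 + g**2)  # (S, N) on the homogeneous branch

y = np.linspace(-1, 1, 401); dy = y[1] - y[0]
prof = np.tanh(y / 0.1)                       # C1 tanh(y) + C2 initial shape
S = np.interp(prof, [-1, 1], [branch(g_lo)[0], branch(g_hi)[0]])
N = np.interp(prof, [-1, 1], [branch(g_lo)[1], branch(g_hi)[1]])

dt = 0.2 * dy**2 / Dhat
for _ in range(int(40 / dt)):
    gd = (sigma - S) / e                      # Eq. (6): local shear rate
    lapS = np.zeros_like(S); lapN = np.zeros_like(N)
    lapS[1:-1] = (S[2:] - 2 * S[1:-1] + S[:-2]) / dy**2
    lapN[1:-1] = (N[2:] - 2 * N[1:-1] + N[:-2]) / dy**2
    S = S + dt * (Dhat * lapS - S - gd * N + gd)     # Eq. (7)
    N = N + dt * (Dhat * lapN - N + gd * S)          # Eq. (8)
    S[0], N[0] = branch(g_lo); S[-1], N[-1] = branch(g_hi)   # pinned ends

mid = 0.5 * (branch(g_lo)[0] + branch(g_hi)[0])
print("interface position:", y[np.argmin(np.abs(S - mid))])
```

If the interface drifts steadily toward one wall, that phase is being consumed; bisecting $`\sigma `$ until the drift vanishes reproduces the selected stress $`\sigma _0(e)`$ of Fig. 2.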
During the integration, the values of $`S`$ and $`N`$ at the two ends $`y=\pm y_0`$ are fixed to those of the high and low strain rate branches, respectively. At $`t=0`$, the functions $`S(y)`$ and $`N(y)`$ are chosen arbitrarily as $`C_1\mathrm{tanh}y+C_2`$, with the constants $`C_1`$, $`C_2`$ fixed by the boundary conditions. The value $`y_0`$ is chosen to be sufficiently large that its exact value is irrelevant. We find that a steady (elementary) shear band solution can be found only at a specific shear stress value, $`\sigma =\sigma _0`$. If the stress value is too large or too small, one of the phases shrinks completely. The values of $`\sigma _0`$ as a function of the model parameter $`e`$ are shown in Fig. 2. Note that in an infinite system $`\sigma _0`$ does not depend on the diffusivity $`D`$, which just sets the interface width. However, anisotropic diffusion (a fourth-rank tensor $`𝑫`$ in Eq. 3) may lead to diffusivity-dependent selection. Nontransverse Saddle Connection—We have shown above that a non-local model can select the stress sharply. Now we show that, in general, a non-local model in planar shear flow selects stress sharply (possibly uniquely), provided that the model is a differential equation and satisfies rotational and Galilean invariance. 1. To begin, we observe that the steady state equations, like Eqs. (6-8) without time derivatives, comprise a set of ordinary differential equations (ODEs) for the independent dynamical variables $`\{\psi _i\}`$ ($`S`$ and $`N`$ for the JS model), rather than partial differential equations (PDEs), since only differentiation in the velocity gradient direction, say $`\widehat{𝐲}`$, is present in planar shear flow. The phase space for the equivalent first-order ODE is $`[\left\{\psi _i\right\},\left\{\partial _y\psi _i\right\},\left\{\partial _y^2\psi _i\right\},\mathrm{}]`$. Higher-order gradients in the original PDE would entail an ODE phase space of larger dimension. 2. To each homogeneous steady flow (a phase) there corresponds a fixed point ($`[\left\{\psi _i^{}\right\},0,0,\mathrm{}]`$) in the ODE phase space since, by definition, homogeneous flow means that the variables do not change with $`y`$. 3. If the interface width is small compared to the other length scales in the problem, *e. g.* the distance between two neighbouring interfaces, it is sufficient to consider an elementary shear banding solution, which describes, in a planar geometry, a composite flow with a smooth interface separating a single region of material in the high shear rate phase from a single region of material in the low shear rate phase. Mathematically, the elementary shear band is a solution of the ODE which asymptotically connects the high and low shear rate phase fixed points between $`y=\pm \mathrm{}`$. Since the fixed points are both saddle points (see below) and distinct, the elementary shear band solution is a *heteroclinic saddle connection*. 4. Let the attractor and repellor basins of one fixed point, say $`A`$, have dimensions $`a(A)`$ and $`r(A)`$, respectively. Another fixed point $`B`$ has similarly defined $`a(B)`$ and $`r(B)`$. Note that if the phase space has dimension $`d`$ (for the modified JS model $`d=4`$), then $`a(A)+r(A)\leq d`$ and $`a(B)+r(B)\leq d`$. A saddle connection joins two fixed points, so that it is an intersection of the repellor of one fixed point, say $`A`$, and the attractor of the other fixed point $`B`$. We denote its dimension $`s(A,B)`$. In the ODE phase space, the intersection is at least one-dimensional, so that $`s(A,B)\geq 1`$.
There is a trivial inequality $`r(A)+a(B)-s(A,B)\leq d`$. If equality holds, the saddle connection is called transverse: it is formed by a robust intersection of two manifolds, and is structurally stable against small perturbations of the ODE parameters (in the modified JS model, $`\sigma `$ and $`e`$). The case $`r(A)+a(B)-s(A,B)<d`$ is called non-transverse, and the corresponding saddle connection is structurally unstable. It is important to recognise that if the elementary shear band solution is a non-transverse saddle connection, a small change of the shear stress (a parameter in the ODE) will remove the saddle connection, which gives a very sharp stress selection. In other words, given an existing shear band solution, a stress perturbation is singular if the solution is a saddle connection of the non-transverse type. On the other hand, if there is a transverse saddle connection shear band solution, one can perturb the stress to obtain a (slightly) different shear band solution. Therefore, there is no stress selection for a transverse saddle connection. (Stress perturbation becomes a regular perturbation.) 5. We now prove that if the model has rotational and Galilean invariance, a shear band solution in planar shear flow must be non-transverse. Let us momentarily assume that there is a transverse saddle connection going from, say, fixed point $`A`$ at $`y=-L`$ to fixed point $`B`$ at $`y=L`$. Galilean invariance allows us to choose an inertial frame in which the flow velocity $`v_x(y=\pm L)=\pm V`$. Since the bulk equations are rotationally invariant and the boundary conditions are symmetric under rotation by 180 degrees, one can rotate the solution around the vorticity ($`z`$) axis by 180 degrees to obtain a different shear band solution which obeys the same equations and the same boundary conditions (e.g. $`v_x(\pm L)=\pm V`$). This solution goes from fixed point $`B`$ at $`y=-L`$ to fixed point $`A`$ at $`y=L`$. Fixed points $`A`$ and $`B`$ have both attractor and repellor directions, and are thus saddle points. Now take the (thermodynamic) limit $`L\to \mathrm{}`$ so that the above two solutions become saddle connections. The two saddle connections are related by the symmetry transformations, so they are of the same type (both transverse). A contradiction now appears, since “there are not enough dimensions to put two transverse saddle connections in the phase space”. Formally, the transverse condition leads to $`r(A)+a(B)=d+s(A,B)>d`$ and $`r(B)+a(A)=d+s(B,A)>d`$, therefore $`a(A)+r(A)+a(B)+r(B)>2d`$. However, from $`a(A)+r(A)\leq d`$ and $`a(B)+r(B)\leq d`$ we have $`a(A)+r(A)+a(B)+r(B)\leq 2d`$. So the shear band cannot be a transverse saddle connection. Weak Universality—One important aspect of stress selection can be illustrated by a simple model inspired by the macromolecular character of many systems exhibiting non-linear rheology. Let the shear stress $`\sigma `$ be $$\sigma =F(\dot{\gamma })+\eta \dot{\gamma }_{loc},$$ (9) where the first, non-linear, part associated with macromolecules is sensitive to a locally averaged shear rate $`\dot{\gamma }`$, as opposed to the second, Newtonian, part attributed to solvent, which is sensitive to the true local shear rate $`\dot{\gamma }_{loc}`$.
We approximate the local averaging from $`\dot{\gamma }_{loc}`$ to $`\dot{\gamma }`$ by a gradient expansion, $`\dot{\gamma }=\left(1-R^2(\dot{\gamma })\partial _y^2+\mathrm{}\right)^{-1}\dot{\gamma }_{loc}`$, and hence $$\dot{\gamma }_{loc}\simeq \left(1-R^2\left(\dot{\gamma }\right)\partial _y^2\right)\dot{\gamma }$$ (10) We anticipate sensitivity of the non-local scale $`R(\dot{\gamma })`$ to $`\dot{\gamma }`$ through distortion of the macromolecular shape. In terms of the locally averaged shear rate $`\dot{\gamma }`$ we have $`\sigma =f(\dot{\gamma })-\eta R^2(\dot{\gamma })\partial _y^2\dot{\gamma }`$ (11) where $`f(\dot{\gamma })\equiv F(\dot{\gamma })+\eta \dot{\gamma }`$ is, as before, the steady (and constant) shear rate flow curve as exemplified in Fig.1. Multiplying Eq. (11) by $`R^{-2}(\dot{\gamma })\partial _y\dot{\gamma }`$ and integrating $`y`$ across the shear band leads to $`𝒟(\sigma )\equiv {\displaystyle \int _{\dot{\gamma }(-\mathrm{})}^{\dot{\gamma }(\mathrm{})}}{\displaystyle \frac{f(\dot{\gamma })-\sigma }{R^2(\dot{\gamma })}}𝑑\dot{\gamma }=\left[{\displaystyle \frac{\eta }{2}}(\partial _y\dot{\gamma }(y))^2\right]_{y=-\mathrm{}}^{y=\mathrm{}}`$ (12) where $`\dot{\gamma }(y=\pm \mathrm{})`$ depends on $`\sigma `$ through $`f(\dot{\gamma }(y=\pm \mathrm{}))=\sigma `$. Since $`\partial _y\dot{\gamma }(y=\pm \mathrm{})=0`$, an interfacial profile must satisfy $`𝒟(\sigma )=0`$, which is the condition that selects the stress. According to Eq. (12), different functions $`R(\dot{\gamma })`$ give different $`𝒟(\sigma )`$, and hence different selection criteria $`𝒟=0`$! Therefore, two models with different $`R(\dot{\gamma })`$ but the same behaviour in homogeneous flow, $`f(\dot{\gamma })`$, can behave differently upon forming shear bands. The simple case of $`R(\dot{\gamma })`$ independent of $`\dot{\gamma }`$ corresponds to the equal-areas construction speculated upon in Ref. , and cannot be regarded as generic. Stress selection has weak universality, implying that impurities or other effects which change the interfacial properties could in principle affect quantities like $`R(\dot{\gamma })`$ and hence alter the selected shear stress. For equilibrium transitions, with coexistence equations analogous to Eq. (11), terms involving gradients would be exactly integrable without an integrating factor ($`1/R^2(\dot{\gamma })`$ in Eq. 12) because they arise from a functional derivative of a coarse-grained free energy. The resulting interface solvability condition (*i.e.* a Maxwell construction) is independent of the detailed gradient terms, in contrast to the weak universality discussed above.
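The weak universality of the criterion $`𝒟(\sigma )=0`$ is easy to exhibit numerically. The flow curve and the two forms of $`R(\dot{\gamma })`$ below are illustrative assumptions of ours, chosen only so that $`f`$ is non-monotonic:

```python
import numpy as np

# Selection criterion D(sigma) = 0 of Eq. (12) for two assumed R(gdot).
eta = 0.05
f = lambda g: g / (1 + g**2) + eta * g           # assumed non-monotonic f(gdot)

def gdot_branches(sigma):
    # outer roots of f(gdot) = sigma: the low and high shear rate phases
    r = np.sort(np.roots([eta, -sigma, 1 + eta, -sigma]).real)
    return r[0], r[-1]

def D(sigma, R):
    glo, ghi = gdot_branches(sigma)
    g = np.linspace(glo, ghi, 4001)
    return np.trapz((f(g) - sigma) / R(g)**2, g)

def select_sigma(R, lo=0.44, hi=0.54, iters=60):  # bisection inside the window
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if D(lo, R) * D(mid, R) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for name, R in [("R = const    ", lambda g: 1.0 + 0 * g),
                ("R = 1/(1+g^2)", lambda g: 1.0 / (1 + g**2))]:
    print(name, " selected sigma_0 =", round(select_sigma(R), 4))
```

The two choices of $`R(\dot{\gamma })`$ select visibly different stresses from the same flow curve $`f(\dot{\gamma })`$; the constant-$`R`$ value is the equal-areas stress, and the $`\dot{\gamma }`$-dependent weight shifts the selected stress toward $`\sigma _{min}`$.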
Should such a degeneracy nevertheless happen, one should ask for a physical argument behind it, to be sure that it is not a mathematical artefact. Generally, uniqueness of the stress selection can be expected for models with gradient terms. An interesting question is whether the stress selected by non-local effects can be obtained from a variational principle. A conventional variational principle, like the one used for equilibrium phase transitions, relies on the volume contribution to a universal functional and gives a criterion insensitive to the interfacial structure, because the interface only contributes a vanishing fraction if the total volume is assumed to be large. The model of Eq. (11) illustrates that the way non-local effects select the stress is different from a variational principle based on such a universal functional. Therefore the obvious choices of either the free energy or the entropy production cannot generally represent the selection criterion posed by the non-local effects. After this manuscript was submitted, Yuan published a similar modification to the JS model.

Acknowledgments—We thank A. Ajdari, M. Cates, B. L. Hao, and O. Radulescu for fruitful conversations.
# A High Fraction of Mergers in the cluster MS1054–03 at $`z=0.83`$

## 1. Introduction

High redshift clusters can be used very efficiently to study galaxy formation and evolution. Their high overdensities allow us to study large numbers of galaxies with relatively small field imagers and spectrographs. Furthermore, the range in morphologies within clusters, and between clusters, allows us to study a variety of galaxy types. We have embarked on a study of several massive clusters, out to a redshift of 1, using the Hubble Space Telescope to take mosaics of deep multi-color images, and using large telescopes to take deep spectroscopy. Our goal is to study a few clusters very well, with wide-coverage HST imaging and extensive spectroscopy. Our study is complementary to most other programs, which study larger samples of clusters with limited coverage (e.g., Dressler et al. 1997, and references therein). So far, we have obtained wide-field HST data on MS1358+62 at $`z=0.33`$, MS1054–03 at $`z=0.83`$, and recently, MS2053–04 at $`z=0.58`$. All these clusters were selected from the EMSS survey (Gioia et al. 1990). We have obtained $`>200`$ redshifts for each of these fields (Fisher et al. 1998, van Dokkum et al. 1999, 2000), and deep spectroscopy of the brighter galaxies to measure internal velocity dispersions (van Dokkum & Franx 1996, Kelson et al. 1999, van Dokkum et al. 1998). Here we present our new results on MS1054–03, the highest redshift cluster in the EMSS catalogue (Gioia et al. 1995, Luppino & Gioia 1996).

## 2. Observations of MS1054–03

We have taken deep, multicolor images of MS1054–03 at 6 pointings with WFPC2 on the Hubble Space Telescope. The Keck telescope was used to take 200 spectra, aimed to be complete to an I magnitude of 22.7. The typical integration time per galaxy was 40 min. We were able to measure redshifts of 186 galaxies, and of those, 80 were cluster members. Together with data from the literature, we found 89 cluster members, of which 81 lie in the area of the HST images.

### 2.1. Merger fraction

We classified the spectroscopically confirmed cluster members, analogously to our classification of galaxies in MS1358+62 (Fabricant et al. 1999). We classified galaxies along the revised Hubble sequence, and allowed for a separate category of mergers. We combined the classifications from 3 of us, and verified that the results were robust from classifier to classifier. The results have been presented in van Dokkum et al. (1999, 2000). The main outcome of this exercise is the high fraction of mergers in MS1054–03. Many of these mergers are very luminous. One of the most striking ways to display the effect is to show a panel with the 16 brightest galaxies (Fig. 1). Five out of these 16 were classified as mergers. A color version of figures 1 and 2 can be found at http://www.strw.leidenuniv.nl/~franx/clusters. A similar mosaic of the cluster MS1358+62 at $`z=0.33`$ is shown in Fig. 2. The absence of mergers in this lower redshift cluster, and its much more homogeneous color distribution, are notable. The enhancement of peculiar galaxies in MS1054–03 could be due to the brightening of low mass galaxies during a starburst. We verified that the merger fraction remains high if the galaxies are selected by mass instead of (blue) luminosity. The optical colors of the mergers are consistent with this result. As shown in Fig. 3, the mergers are generally red, with a few exceptions. Similarly, the spectra of most of the mergers do not show strong emission lines.
These results suggest that the bulk of the stars in the mergers were formed well before the merger. Hence the stellar age of the merged galaxies is significantly different from the “assembly age”, i.e., the time at which the galaxy “was put together”. The results are consistent with the hypothesis that the mergers evolve into ellipticals. Their scatter in the color-magnitude diagram is significantly larger than the scatter for the ellipticals (0.073 versus 0.045 in restframe U-B). After aging of the stellar populations, the scatter of the total population of mergers+ellipticals will have decreased from 0.054 at $`z=0.83`$ to 0.015 at $`z=0`$. Hence a low scatter at $`z=0`$ does not mean that all galaxies in the population are homogeneous and very old: the influence of merging can be small if the star formation involved in the merging is low. The physical reason for the low star formation is unknown: it is possible that the massive precursor galaxies had already lost their cold gas due to internal processes (such as super winds, or winds driven by nuclear activity). Alternatively, the cluster environment may play an important role: the cold gas may have been stripped by the cluster X-ray gas. Observations of more clusters may shed further light on this effect.

### 2.2. Pair fraction

Whereas the classification of galaxies remains a subjective procedure, counting close pairs of galaxies is a very objective way to establish whether interactions and mergers are enhanced. Furthermore, the distribution of pairs may shed light on the future merging rate in the cluster. We have counted the number of pairs in the outskirts of the cluster, to avoid the high density core. The pair fraction is shown in Fig. 3b. As we can see, there is an excess of pairs at small separations ($`<10h^{-1}`$ kpc). Half of these are classified as mergers, the other half not. These may constitute a reservoir of “future” mergers. It will be interesting to measure the velocity differences of the galaxies in pairs.

## 3. Conclusions

We have found a high fraction of mergers, which are generally red. The fraction is comparable to the number of ellipticals. The results are qualitatively consistent with the hypothesis of hierarchical galaxy formation. The relatively old stellar age of the mergers compared to the young “assembly age” is consistent with predictions from semi-analytical models (e.g., Kauffmann 1996). The results are inconsistent with the hypothesis that all ellipticals were formed and assembled at very high redshift. Nevertheless, many questions remain open:

* Is the result for MS1054–03 typical for high redshift clusters? Is the merger fraction higher or lower in the field? It is interesting to note that studies of the field give high merger fractions and/or pair fractions at intermediate redshift (e.g., Patton et al. 1997, Le Fevre et al. 1999). It remains to be seen whether these field mergers are as massive as the mergers found in MS1054–03.
* What is the typical redshift at which the mass of early type galaxies was half of the current mass?
* When did the major episode of star formation occur?

Future studies can be directed to shed light on these questions.

## References

Dressler, A., et al. 1997, ApJ, 490, 577
Fisher, D., Fabricant, D., Franx, M., & van Dokkum, P. 1998, ApJ, 498, 195
Fabricant, D., van Dokkum, P., & Franx, M. 1999, in preparation
Gioia, I. M., Maccacaro, T., Schild, R. E., Wolter, A., & Stocke, J. T. 1990, ApJS, 72, 567
Kauffmann, G. 1996, MNRAS, 281, 487
Kelson, D. D., Illingworth, G. D., van Dokkum, P. G., & Franx, M. 1999, ApJ, in press, astro-ph/9906152
Le Fevre, O., et al. 1999, MNRAS, in press, astro-ph/9909211
Luppino, G. A., & Gioia, I. M. 1995, ApJ, 445, L77
Patton, D. R., Pritchet, C. J., Yee, H. K. C., Ellingson, E., & Carlberg, R. G. 1997, ApJ, 475, 29
van Dokkum, P. G., & Franx, M. 1996, MNRAS, 281, 985
van Dokkum, P. G., Franx, M., Kelson, D. D., & Illingworth, G. D. 1998, ApJ, 504, L17
van Dokkum, P. G., Franx, M., Fabricant, D., Kelson, D. D., & Illingworth, G. D. 1999, ApJ, 520, L95
van Dokkum, P. G., et al. 2000, in preparation
# High-quality continuous random networks

## I Introduction

The structure of amorphous semiconductors, as seen by theorists, is well represented by the continuous random network (CRN) model introduced more than 60 years ago by Zachariasen. The interest of this model lies in its simplicity: the only requirement is that each atom should fully satisfy its bonding needs. The quality of a CRN is generally determined by the amount of strain induced by this constraint, as measured by the local deviations from the crystalline environment; the “ideal” CRN is typically defined as that with the lowest spread in the bond-length and bond-angle distributions. In spite of the simplicity of the model, it has turned out to be difficult to actually prepare CRN realizations of a quality comparable to that of experiment, making it difficult to fully assess the real structure of amorphous semiconductors. The origin of this problem has generally been attributed to weaknesses of the model-building techniques: standard approaches such as molecular dynamics cannot reach time scales appropriate for full relaxation. Moreover, other techniques suggest that empirical and semi-empirical potentials able to reproduce all properties of amorphous semiconductors are still missing. An alternative explanation for the lower spread in the bond-length and bond-angle distributions observed experimentally might be that the coordination in high-quality a-Si samples is significantly lower than four. Laaziri et al. report a coordination as low as 3.88, a density of defects orders of magnitude higher than what is measured using electron-spin-resonance techniques. If true, this higher density of defects might easily facilitate a lower spread in the bond lengths and angles, explaining in part the discrepancy between experiment and theoretical models. Following a long tradition, one approach to shed some light on this discrepancy is to see how far it is possible to push the continuous random network model in order to reach structural properties in agreement with experiment. By creating idealized networks with the same angular deviation as the experimental ones and a good overall fit to the radial distribution function, it is possible to show that perfect coordination in amorphous silicon is not ruled out by the low angular deviation. This is the purpose of this paper, which follows a long series of works in the same direction. Using a modified version of the Wooten-Winer-Weaire algorithm, we have succeeded in creating a number of totally independent 1000-atom configurations with a bond-angle distribution as low as 9.19 degrees, almost 2 degrees below the best available numerical models without four-membered rings and on a par with experimental values. The algorithm we use completely avoids the crystalline state, contrary to previous WWW-type approaches. Moreover, as shown below, the structural and electronic properties of the networks are excellent, making them ideal starting points for empirical as well as tight-binding or ab-initio studies. This paper is organized as follows. First, we briefly review the Wooten-Winer-Weaire algorithm and detail our simulation procedure. Next, we present structural and electronic properties of the configurations generated, and compare them with previous simulations and experimental results.
## II Methodology and Details of Simulations

In the sillium approach, proposed by Wooten, Winer and Weaire (WWW) to generate CRN structures, a configuration consists of the coordinates of all $`N`$ atoms, together with a list of the $`2N`$ bonds between them. The structural evolution consists of a sequence of bond transpositions involving four atoms. Four atoms A, B, C, and D are selected following the geometry shown in Fig. 1; two bonds, AB and CD, are then broken, and atoms A and D reassigned, respectively, to C and B, creating two new bonds, AC and BD. After the transposition, all atoms are allowed to relax within the constraints of the neighbor list. Within this approach, the generation of a CRN starts with a cubic diamond structure, which is then randomized by a large number of bond transpositions. After thermalization, the network is relaxed through a sequence of many more proposed bond transpositions, accepted with the Metropolis acceptance probability
$$P=\mathrm{Min}\left[1,\mathrm{exp}\left((E_b-E_f)/k_bT\right)\right],$$ (1)
where $`k_b`$ is the Boltzmann constant, $`T`$ is the temperature, and $`E_b`$ and $`E_f`$ are the total energies of the system before and after the proposed bond transposition. The list of neighbors determines the topology, but also the energy of the network: independently of the distance between two atoms, they interact only if they are connected in the list of neighbors. With an explicit list of neighbors, it is possible to use a simple interaction such as the Keating potential:
$$E=\frac{3}{16}\frac{\alpha }{d^2}\sum _{ij}\left(𝐫_{ij}\cdot 𝐫_{ij}-d^2\right)^2+\frac{3}{8}\frac{\beta }{d^2}\sum _{jik}\left(𝐫_{ij}\cdot 𝐫_{ik}+\frac{1}{3}d^2\right)^2,$$ (3)
where $`\alpha `$ and $`\beta `$ are the bond-stretching and bond-bending force constants, and $`d=2.35`$ Å is the Si-Si strain-free equilibrium bond length in the diamond structure. Usual values are $`\alpha =2.965`$ eV/Å² and $`\beta =0.285\alpha `$. With the approach described above, along with a few more details that can be found in Ref. , Wooten and Weaire obtained 216-atom structures with an angular distribution as low as 10.9 degrees. A decade later, using the same approach but more computing power, Djordjević, Thorpe and Wooten (DTW) produced some large (4096-atom) networks of even better quality, with a bond-angle distribution of 11.02 degrees for configurations without four-membered rings and 10.51 degrees when these rings were allowed. In the present work, using a series of algorithmic improvements and faster computers, we are able to generate structurally and electronically better networks: our 1000-atom configurations, for example, show a bond-angle distribution almost two degrees narrower than DTW’s model, while our 4096-atom cell is more than one degree better. The improvements introduced to the sillium approach are the following:

1. we start from a truly random configuration rather than from a molten crystalline state, thus guaranteeing that the structure is not contaminated by some memory of the crystalline state;

2. we evaluate the acceptance of a trial move using a Metropolis accept/reject procedure without doing full relaxation;
3. we use a local/non-local relaxation procedure to limit the number of force evaluations, i.e., we relax only locally (up to the third-neighbor shell) in the first ten relaxation steps after a bond transposition; in combination with point 2, this makes the time per bond transposition almost independent of the configuration size;

4. at regular times, we quench the structure at zero temperature, with the advantages outlined in Section II C.

With these improvements, the generation of the networks goes as follows. We first generate starting configurations as described in Section II A, and quench these structures as described in Section II C. Next, we alternate between running at a temperature of 0.25 eV for about 100 trial bond transpositions per atom, and quenching. The decrease in energy is obtained almost exclusively during the quenching; the role of the annealing at finite temperature is mostly to provide a fresh starting point for the next quench. Once the energy is brought down to about 0.3 eV per atom, and the angular spread to around 10 degrees, this procedure yields diminishing returns: the annealing is no longer able to bring the sample to a starting point where the quenching leads to a lower minimum. To lower the energy further, we therefore also anneal the configurations under different conditions, such as a stronger three-body force or a larger volume, for a few hundred trial bond transpositions per atom.

### A Generating random initial CRNs

To generate a random initial configuration, we randomly place the atoms in a box at crystalline density, under the constraint that no two atoms be closer than 2.3 Å. The difficult part is to connect these atoms in order to obtain a tetravalent network. We achieve this by starting with a loop visiting four atoms somewhere in the sample, in such a way that each pair of atoms that are neighbors along the loop is not separated by more than a cut-off distance $`r_c`$. This loop is gradually expanded until it visits each atom exactly twice; the steps of the loop are then the bonds in our tetravalent network. The expansion of the loop is achieved by randomly selecting a group of three atoms A, B and C, such that A is not four-fold coordinated and is within a distance $`r_c`$ of B and C but not bonded to either, while B and C are bonded. Next, the bond BC is replaced by bonds AB and AC, expanding the loop by one step. This procedure is illustrated in Fig. 2. Initially, $`r_c`$ is set to some small value like 3 Å, but then it is gradually increased until all atoms are four-fold coordinated. Although this method leads to highly strained initial configurations, it has the advantage that it contains absolutely no trace of crystallinity. This process typically leads to CRNs whose angular distribution initially has a width of around thirty degrees, but which reduces rapidly to around 13 degrees in a single quench (see Section II C). In the beginning of this first quench, when the angular deviation is quite large, a pair of atoms is sometimes close together without being bonded; to eliminate such artefacts, which result from the fact that within the Keating potential atoms only interact if they are explicitly bonded, we replace a bond of each of these atoms by a bond between these atoms and another bond between their neighbors (conserving four-fold coordination).
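The bonding procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the seed-loop search, the $`r_c`$ growth schedule and the box size are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mic(d, box):
    """Minimum-image displacement in a cubic periodic box."""
    return d - box * np.rint(d / box)

def random_positions(n, box, dmin=2.3):
    """Randomly place n atoms, no two closer than dmin (Angstrom)."""
    pts = np.empty((0, 3))
    while len(pts) < n:
        t = rng.uniform(0, box, (1, 3))
        if (np.linalg.norm(mic(pts - t, box), axis=1) >= dmin).all():
            pts = np.vstack([pts, t])
    return pts

def random_crn_bonds(pos, box, rc=3.0, drc=0.1):
    """Loop expansion: replace a bond BC by AB and AC until every
    atom is visited twice, i.e. four-fold coordinated."""
    n = len(pos)
    dist = lambda i, j: np.linalg.norm(mic(pos[i] - pos[j], box))
    near = lambda i, r: [j for j in range(n) if j != i and dist(i, j) < r]
    # seed: a closed four-atom loop with consecutive pairs closer than rc
    while True:
        try:
            a = int(rng.integers(n))
            b = int(rng.choice(near(a, rc)))
            c = int(rng.choice([j for j in near(b, rc) if j not in (a, b)]))
            d = int(rng.choice([j for j in near(c, rc)
                                if j not in (a, b, c) and dist(j, a) < rc]))
            break
        except ValueError:        # dead end, retry with another seed atom
            continue
    coord = np.zeros(n, dtype=int)
    bonds = set()
    for i, j in zip([a, b, c, d], [b, c, d, a]):
        bonds.add(frozenset((i, j)))
        coord[i] += 1; coord[j] += 1
    while coord.min() < 4:
        grown = False
        for a in rng.permutation(np.flatnonzero(coord < 4)):
            cand = [bc for bc in bonds if a not in bc and
                    all(dist(a, j) < rc and frozenset((a, j)) not in bonds
                        for j in bc)]
            if cand:
                b, c = tuple(cand[rng.integers(len(cand))])
                bonds -= {frozenset((b, c))}
                bonds |= {frozenset((a, b)), frozenset((a, c))}
                coord[a] += 2
                grown = True
                break
        if not grown:
            rc += drc             # enlarge the cut-off, as in the text
    return bonds

box = (64 * 20.0) ** (1 / 3)      # roughly crystalline-Si density (~20 A^3 per atom)
pos = random_positions(64, box)
bonds = random_crn_bonds(pos, box)
print(len(bonds), "bonds for", len(pos), "atoms")   # expect 2N = 128
```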
### B Avoiding complete relaxation of trial configurations

In the standard sillium approach, a move consists of a bond transposition followed by full structural relaxation and an accept/reject step according to the Metropolis criterion (1). An alternative implementation is to first decide a threshold energy given by
$$E_t=E_b-k_bT\mathrm{log}(s),$$ (4)
where $`s`$ is a random number between 0 and 1. The proposed move is then accepted only if $`E_f\le E_t`$. This procedure is exactly equivalent to the usual Metropolis procedure. By fixing the threshold for $`E_f`$ before a transposition is attempted, it is however possible to reject the move as soon as it becomes clear that this threshold cannot be reached, i.e., before the configuration is fully relaxed. Since the energy is harmonic around the minimum, the decrease in energy obtained during further relaxation is approximately equal to the square of the force times some proportionality constant $`c_f`$, so that during the relaxation the final energy can be estimated as
$$E_f\approx E-c_f|F|^2.$$ (5)
If, at any moment during the relaxation, $`E-c_f|F|^2>E_t`$, the trial move is rejected and a new one is started. Such a method requires, of course, a conservative choice for $`c_f`$; in our units, the proportionality constant $`c_f`$ in well-relaxed configurations is always well below 1. To account for anharmonicities, we do not reject any move during the first five steps of relaxation. Since much less than one percent of the proposed moves are accepted in well-relaxed samples, avoiding spending time on moves that are eventually rejected produces a significant gain in efficiency; using this improvement, we observed a speed-up of close to an order of magnitude, so that all bond transpositions in a 1000-atom network can be attempted in less than 3 minutes on a 500 MHz DEC-Alpha workstation.

### C Efficient quenching

Further optimizations are possible in the case of zero temperature. Since the threshold energy (Eq. (4)) is then constant, a proposed bond transposition that is once rejected will keep on being rejected, as long as no other bond transpositions are accepted in the meantime. A combination of four atoms ABCD with bonds AB, BC and CD can be selected in $`N\cdot 4\cdot 3\cdot 3/2`$ ways, so there are $`18N`$ possible bond transpositions. We mark all bond transpositions that were rejected since the last accepted bond transposition, to avoid retrying these. Once all bond transpositions have been tried but rejected, the quenching is complete. At this stage, the system is not only at a local energy minimum (i.e. a point in phase space where the force is zero and all eigenvalues of the Hessian are positive), but no single bond transposition can lower the energy. The configurations we discuss here have this property. In the standard sillium approach, the creation of four-membered rings is disallowed. Following DTW, we find that, especially for quenching, the relaxation is significantly helped by allowing four-membered rings, because of the large extra number of pathways accessible to the system. At the end of the quenching, the few four-membered rings that were created can easily be removed one by one, by choosing the energetically most favorable bond transposition in which bond AB is part of the four-membered ring (and where no new four-membered rings are introduced). Typically, the energy increases by less than an eV per removed four-membered ring.
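Before turning to the results, a sketch of the accept/reject logic of Eqs. (4) and (5), written as a function that receives the proposal and relaxation machinery as callables; the actual relaxation code is not specified in the paper, so these are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

def attempt_move(state, T, propose, relax_step, energy, force_norm,
                 c_f=1.0, warmup=5, max_steps=200, f_tol=1e-6):
    """One trial bond transposition with a pre-drawn threshold (Eq. (4))
    and force-based early rejection (Eq. (5)).  Returns (state, accepted)."""
    E_b = energy(state)
    s = 1.0 - rng.random()              # uniform on (0, 1], avoids log(0)
    E_t = E_b - T * np.log(s)           # threshold energy; k_b = 1 here
    trial = propose(state)
    for step in range(max_steps):
        trial = relax_step(trial)
        E, F = energy(trial), force_norm(trial)
        # anharmonicities can fool Eq. (5) early on, so never reject
        # during the first `warmup` relaxation steps
        if step >= warmup and E - c_f * F**2 > E_t:
            return state, False         # threshold unreachable: reject early
        if F < f_tol:
            break                       # relaxation converged
    return (trial, True) if energy(trial) <= E_t else (state, False)
```

At $`T=0`$ the threshold reduces to $`E_t=E_b`$, which is the situation exploited in Section II C.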
## III Resulting configurations

We present here results for three different configurations: two 1000-atom cells and one with 4096 atoms. In Table I, we compare our configurations, relaxed with the Keating potential used in the modified WWW algorithm, with those of Djordjević, Thorpe and Wooten. We also provide the irreducible ring statistics, i.e., those rings in which no two atoms are connected through a sequence of bonds that is shorter than the sequence along the ring. We also provide the ring statistics for all $`n`$-membered rings in order to compare with Ref. and other papers in the literature (Table II). Table I shows that the strain per atom in our structures is significantly below that of DTW. One of the standard measurements used to evaluate the quality of a model is the coordination number as computed from the radial distribution function (RDF). Using the minimum of the RDF between the first- and second-neighbor peaks, and after relaxation with the Keating model, the first two configurations are perfectly tetravalent. The 4096-atom configuration has 0.1% five-fold defects. Another important quantity that can be compared with experiment is the width of the bond-angle distribution. Experimentally, this quantity can be extracted from the radial distribution function, or from the Raman spectrum, using a relation proposed by Beeman et al. The most recent measurements, taken on samples prepared by ion bombardment and using the second-neighbor peak of the radial distribution function, give 10.45 and 9.63 degrees for as-implanted and annealed samples, respectively. Our configurations, relaxed with the Keating potential, therefore present a bond-angle distribution slightly narrower than the experimental values. (This is to be expected of the “right” structure, since the theoretical models are taken at zero K.) While structural averages provide a good idea of the overall quality of a model, they do not say much regarding the local environments. It is therefore important to look also at the electronic properties of these models: even small densities of highly strained geometries or defect atoms will be picked up as states in the gap of the electronic density of states (EDOS). In the last few years, it has become possible to compute the electronic structure of multi-thousand-atom configurations. Here, we show the electronic density of states for Configuration 2, a 1000-atom configuration. Because of the cost of doing a full ab-initio atomic relaxation, we have relaxed the cell with the Keating potential and used the Fireball local-basis ab-initio code to obtain the electronic density of states. Previous work showed that configurations relaxed with a Keating potential demonstrate little further relaxation with the Fireball code, so that the results presented here are unlikely to change very much upon further relaxation. Figure 3 shows the EDOS smoothed with a Gaussian of width 0.01 eV. A remarkable feature of this EDOS is the absence of states in the gap, leading to a perfect gap of 1.3 eV. The generation of defect-free models is very important for our understanding of the electronic dynamics and the role of defects in disordered semiconductors. The decay of the valence tail, the Urbach tail, can be reasonably well approximated by an exponential, $`\rho (E)\propto \mathrm{exp}(E/E_0)`$, with $`E_0=0.2`$ eV, in agreement with previous calculations.
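As an aside on the structural measures used above: the bond-angle spread can be computed directly from a configuration given as positions plus an explicit bond list (e.g. the one produced in Section II A). A sketch; taking the spread about the distribution mean, rather than about the tetrahedral angle, is our own assumed convention:

```python
import numpy as np

def bond_angles(pos, bonds, box):
    """All bond angles (degrees) at every atom, from an explicit bond list."""
    nbrs = {}
    for i, j in bonds:                 # bonds: iterable of atom-index pairs
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    angles = []
    for i, nb in nbrs.items():
        for k in range(len(nb)):
            for m in range(k + 1, len(nb)):
                u = pos[nb[k]] - pos[i]; u -= box * np.rint(u / box)
                v = pos[nb[m]] - pos[i]; v -= box * np.rint(v / box)
                c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
                angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
    return np.asarray(angles)

# spread about the mean:
# theta = bond_angles(pos, bonds, box); print(theta.mean(), theta.std())
```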
Although we obtain good structures using the Keating potential, it is important to verify the stability of these networks when they are relaxed with a more realistic interaction potential that does not require a pre-set list of neighbors. There exists at the moment no empirical potential that can fully describe the properties of a-Si; we use a modified Stillinger-Weber potential, in which the three-body contribution to the energy is enhanced by 50% with respect to the two-body term. This ad-hoc modification was shown to produce good structural properties for amorphous silicon. After relaxation at zero pressure, the two 1000-atom configurations remain perfectly coordinated. The 4096-atom cell, less well relaxed, develops a few coordination defects: based on the first minimum in the RDF, it contains 0.4% three-fold and 0.3% five-fold coordinated atoms. Table II presents the structural and energetic properties of the relaxed CRNs at zero pressure. For all configurations, the bond-angle distribution widens and the density decreases significantly compared with the Keating-relaxed structures. For the 1000-atom configurations, the local relaxation with the modified Stillinger-Weber potential did not result in a change of topology, and the total energies are very low compared with previous models. We therefore expect that the structures will be stable with any reasonable potential. Figure 4 shows a comparison of Configuration 3 with experimental data obtained by Laaziri et al. on annealed a-Si samples prepared by ion bombardment. The agreement between the two is excellent, except for some discrepancy in the height of the third-neighbor peak. Such an agreement must only be seen as a sign that the topology might be right, however: configurations differing widely in their topology can easily produce similar radial distribution functions. The same figure also presents the bond-angle distribution for Configuration 3 relaxed with both the Keating and the modified Stillinger-Weber potentials. As expected for a perfectly coordinated sample, the distribution is smooth and presents a single peak centered at the tetrahedral angle. To compare with previous molecular dynamics studies, we have also relaxed our cells with the standard Stillinger-Weber potential, which is known to give an incorrect amorphous structure. After relaxation of Configuration 2, we find 17 atoms with five-fold coordination and three three-fold coordinated ones; similar results are found for the two other networks. The resulting configurational energy, given in Table II, compares favorably with molecular dynamics results. Contrary to the topological properties, which seem relatively independent of the details of the potential, we see that the ideal density of amorphous silicon relative to that of the crystal changes qualitatively as a function of the potential used. Configurations relaxed at zero pressure with the Keating potential show a reproducible densification of 2%, while the modified Stillinger-Weber potential, also at zero pressure, leads to a structure which is up to 6% less dense. The latter results are in qualitative agreement with experiment and with previous simulations using a similarly modified potential. Recently, Laaziri and collaborators have pointed to the lower density of a-Si as an explanation for the relatively low coordination measured by x-ray scattering.
Our results, on the contrary, indicate that there is very little correlation between the density of the amorphous material and its topology, at least within the application of our two empirical potentials. A volume change at the percent level should therefore have very little impact on the topology, and mostly reflects fine details of the real atomic interactions.

## IV Conclusions

We have presented modifications to the Wooten-Winer-Weaire algorithm that allow one to produce large, high-quality continuous random networks without ever passing through a crystalline phase. The structural and electronic properties of the networks produced are excellent, and they compare favorably with experiment. The coordinates of the three configurations discussed here, as well as a 10 000-atom sample under preparation, are available upon request.

## V Acknowledgements

We thank S. Roorda for providing us with his experimental data, and S. Nakhmanson for computing the electronic density of states of Configuration 2. NM acknowledges stimulating discussions with D. A. Drabold and partial support by the National Science Foundation under grant number DMR-9805848.
# Klein-Fock equation, proper-time formalism and symmetry of Hydrogen atom

Footnote: To be published in the Proceedings of the UNESCO International School of Physics “Quantum Theory”, part 2, commemorating the 100th anniversary of the birth of V.A. Fock, St. Petersburg, SPbU, 1999. This work was supported in part by RFBR (Grant 97-01-01186) and by GRACENAS (Grant 97-0-14.1-61). Email: yunovo@niif.spb.su, novozhil@nvv.gc.spb.ru

Abstract

We present the main points of some of Fock's papers in Quantum Theory which were not properly followed up when published.

Introduction

Fock's papers on Quantum Theory have had quite different lives in Physics. Some of them were immediately recognized, with Fock's name attached to them: the Hartree-Fock approach, Fock space and the Fock representation are very well known examples. Others were not properly appreciated when published. To this class belong papers in which Fock was ahead of his time, so that his contemporaries were not able to really understand their importance. The Klein-Fock equation, the proper-time formalism and the symmetry of the Hydrogen atom are the best examples of Fock papers in this class. In this lecture we explain the main points of these Fock papers and recall Fock's interpretation of the energy-time uncertainty relation, as well as his ideas on the generalization of the concept of physical space, published in his last paper.

Generalization of the Schroedinger equation and the Klein-Fock equation

V.A. Fock sent his first paper on Quantum Mechanics for publication in Zs. f. Physik one week after Schroedinger’s first paper had reached Leningrad. Fock generalized the Schroedinger equation to the case of a magnetic field and obtained an expression for the (normal) level splitting in a magnetic field, as well as the level splitting of the Hydrogen atom in an electric field. In his second paper on Quantum Mechanics, V.A. Fock presented a relativistic generalization of the Schroedinger equation for a particle in an electromagnetic field on curved space. O. Klein also derived a relativistic generalization of the Schroedinger equation and published it in Zs. f. Physik, vol. 37, before Fock; but the paper by Fock in vol. 39 was received by this journal before the publication of Klein’s paper. The third paper on the same subject, for the simplest case of a free particle in flat space, was written by W. Gordon; it was received by Zs. f. Physik after Klein's paper was published, and appeared in vol. 40. Nevertheless, in many textbooks the equation for a spin-0 particle is called the Klein-Gordon equation. One might see the reason for this neglect of Fock in the character of his paper. This paper contains much more material than a simple relativistic generalization of the Schroedinger equation to the equation for a free spin-0 particle; at that time, this material could not be appreciated by most physicists. Let us briefly consider the main points of Fock’s paper.

Fock introduces a five-dimensional space with a metric depending on the electromagnetic potential $`A_\mu `$:
$$d\sigma ^2=g_{\mu \nu }dx_\mu dx_\nu -\frac{e^2}{m^2c^4}\left(A_\nu dx_\nu +du\right)^2,$$
where $`u`$ is an additional coordinate. In classical physics, the null geodesic line $`d\sigma =0`$ describes the trajectory of a charged particle in this space. The corresponding action $`S`$ then has a five-dimensional gradient whose square is equal to zero.
The four-dimensional action $`W`$ is related to $`S`$ by
$$S=\frac{e}{c}u+W.$$
The five-dimensional equations are invariant under the transformation
$$A_\nu =A_\nu ^{\prime }+\partial _\nu f,\qquad u=u^{\prime }-f,$$
which was later called a gradient transformation. Both the classical and the quantum equations act in the same space. Therefore, the corresponding quantum wave equation for the wave function $`\mathrm{\Psi }`$ is the d’Alembert equation in five-dimensional space:
$$\frac{1}{\sqrt{g}}\frac{\partial }{\partial x_\mu }\left(\sqrt{g}g^{\mu \nu }\frac{\partial \mathrm{\Psi }}{\partial x_\nu }\right)-2A^\nu \frac{\partial ^2\mathrm{\Psi }}{\partial u\partial x_\nu }+\left(A_\nu A^\nu -\frac{m^2c^4}{e^2}\right)\frac{\partial ^2\mathrm{\Psi }}{\partial u^2}=0.$$
The four-dimensional wave function $`\psi `$ is obtained from $`\mathrm{\Psi }`$ by a phase transformation,
$$\mathrm{\Psi }=e^{i\frac{e}{hc}u}\psi ,$$
and the five-dimensional equation for $`\mathrm{\Psi }`$ immediately produces the Klein-Fock equation for $`\psi `$ in an electromagnetic field on curved space. The gradient transformation of $`A_\nu `$ and $`u`$ induces a transformation of the wave function,
$$\psi ^{\prime }=e^{-i\frac{ef}{hc}}\psi .$$
This transformation rule of the wave function under a gauge transformation was first written down in Fock’s paper on the Klein-Fock equation. These two papers of the young Fock made his name known to theoreticians. He received a Rockefeller fellowship and went to study in Goettingen and Paris.

Proper time formalism and Fock gauge condition (1937)

In this paper V.A. Fock considers the Dirac equation with an external field and develops a relativistically and gauge invariant quasiclassical method of integration. At that time, such a method was needed to deal with vacuum polarization in the Dirac positron theory. It was shown by V. Weisskopf that logarithmic divergences appear in the calculation of vacuum polarization. The problem was how to separate finite parts from divergent expressions in a unique, relativistically and gauge invariant way. Fock proposed to ensure the invariance and uniqueness of the calculations by using invariant quantities only. He introduces an invariant parameter, in order to treat the evolution in a symmetric manner with respect to all four relativistic coordinates, and shows how to relate the new five-dimensional space to the four-dimensional one in a unique way, so that the additional parameter acquires the meaning of proper time. Fock notes that working in five-dimensional space is necessary because, by this trick, one obtains uniqueness of the divergent expressions, due to the fact that the Riemann fundamental solution can be defined uniquely only in spaces of odd dimension. Fock also introduced a new gauge condition for a non-singular electromagnetic potential, which makes it possible to express the potential uniquely in terms of the field strengths. Fock considers the second-order Dirac equation in four-dimensional space,
$$\left\{P_\mu P^\mu +m^2c^2+\frac{eh}{c}\left(\overline{\sigma }\cdot 𝐇\right)-\frac{ieh}{c}\left(\overline{\alpha }\cdot 𝐄\right)\right\}\mathrm{\Psi }=0,$$
or symbolically
$$h^2\mathrm{\Lambda }\mathrm{\Psi }=0,$$
where $`\mathrm{\Lambda }`$ is a second-order operator. One can represent a solution as an integral
$$\mathrm{\Psi }=\int _CF\,d\tau $$
over a function $`F`$ of five variables $`x_\mu ,\tau `$, with a given integration path $`C`$ in the variable $`\tau `$. Then $`F`$ should be a solution of the Dirac equation with proper time $`\tau `$,
$$\frac{h^2}{2m}\mathrm{\Lambda }F=ih\frac{\partial F}{\partial \tau },$$
subject to the condition
$$\int _C\frac{\partial F}{\partial \tau }\,d\tau =F_C=0,$$
where $`F_C`$ denotes the values of the function $`F`$ at the ends of the path $`C`$.
Fock shows how to choose the path $`C`$ so as to give $`\tau `$ the meaning of proper time. All calculations should be done in the five-dimensional space, where the separation of divergences is unique; integration over proper time then completes the calculation. Proper-time integrals do not depend on the gauge and are relativistically invariant by definition. Fock's proper-time formalism was further developed by J. Schwinger in 1951 and by B. S. DeWitt in 1965. It is usually referred to as the Schwinger-DeWitt proper-time method, although it was formulated by Fock; Fock's paper was too far ahead of its time.

The Fock gauge condition for the electromagnetic potential reads
$$(x_\nu -x_\nu ^0)A^\nu =0,$$
assuming that the potential $`A^\nu `$ is non-singular. The potential can then be expressed in terms of the field strengths $`A_{\mu \nu }`$ by averaging between the points $`x`$ and $`x^0`$ according to
$$\overline{f}=2\int _0^1f\left[x^0+\left(x-x^0\right)u\right]u\,du.$$
Then
$$A^\nu =\frac{1}{2}\left(x-x^0\right)_\mu \overline{A}^{\mu \nu }.$$
The Fock gauge condition is especially useful when dealing with self-dual fields. It has another advantage: as was found later, in quantum field theory the Faddeev-Popov ghosts decouple.

Symmetry of Hydrogen atom and dynamical groups

According to the Schroedinger equation, the energy levels of a charge in a spherically symmetric field are characterized by two quantum numbers: the principal quantum number and the angular momentum. However, in the Hydrogen atom the energy levels depend on the principal quantum number only. The origin of this degeneracy was guessed to lie in an additional symmetry of the Hydrogen atom. The problem was known long before Fock, but only Fock was able to solve it, which he did in a simple and elegant way in the paper “Hydrogen atom and non-Euclidean geometry”. Fock starts by writing down the Schroedinger equation with a Coulomb potential in momentum space as an integral equation,
$$\left(\frac{p^2}{2m}-E\right)\psi \left(𝐩\right)=\frac{Ze^2}{2\pi ^2h}\int \frac{\psi \left(𝐩^{\prime }\right)}{\left|𝐩-𝐩^{\prime }\right|^2}\,d𝐩^{\prime }.$$
For the discrete spectrum, the average square momentum is
$$p_0=\sqrt{-2mE}.$$
Fock introduces the coordinates of a stereographic projection onto a unit sphere in four-dimensional Euclidean space:
$$\xi =\frac{2p_0p_x}{p_0^2+𝐩^2}=\mathrm{sin}\alpha \mathrm{sin}\vartheta \mathrm{cos}\phi ,\qquad \eta =\frac{2p_0p_y}{p_0^2+𝐩^2}=\mathrm{sin}\alpha \mathrm{sin}\vartheta \mathrm{sin}\phi ,$$
$$\zeta =\frac{2p_0p_z}{p_0^2+𝐩^2}=\mathrm{sin}\alpha \mathrm{cos}\vartheta ,\qquad \chi =\frac{p_0^2-𝐩^2}{p_0^2+𝐩^2}=\mathrm{cos}\alpha ,$$
so that
$$\xi ^2+\eta ^2+\zeta ^2+\chi ^2=1.$$
The Schroedinger equation in the new coordinates takes the following form:
$$\mathrm{\Psi }(\alpha ,\vartheta ,\phi )=\frac{\lambda }{2\pi ^2}\int \frac{\mathrm{\Psi }(\alpha ^{\prime },\vartheta ^{\prime },\phi ^{\prime })}{4\mathrm{sin}^2\frac{\omega }{2}}\,d\mathrm{\Omega }^{\prime },$$
where
$$\lambda =\frac{Zme^2}{h\sqrt{-2mE}}$$
and $`2\mathrm{sin}\frac{\omega }{2}`$ is the distance between the points $`(\alpha ,\vartheta ,\phi )`$ and $`(\alpha ^{\prime },\vartheta ^{\prime },\phi ^{\prime })`$ on the four-dimensional unit sphere:
$$4\mathrm{sin}^2\frac{\omega }{2}=\left(\xi -\xi ^{\prime }\right)^2+\left(\eta -\eta ^{\prime }\right)^2+\left(\zeta -\zeta ^{\prime }\right)^2+\left(\chi -\chi ^{\prime }\right)^2.$$
The equation for $`\mathrm{\Psi }(\alpha ,\vartheta ,\phi )`$ is nothing but an integral equation for four-dimensional spherical functions.
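The geometrical fact behind this transformation is the chordal-distance identity $`4\mathrm{sin}^2\frac{\omega }{2}=4p_0^2\left|𝐩-𝐩^{\prime }\right|^2/\left[(p_0^2+𝐩^2)(p_0^2+𝐩^{\prime 2})\right]`$, which converts the Coulomb kernel $`\left|𝐩-𝐩^{\prime }\right|^{-2}`$ into the sphere kernel. A quick numerical check of this identity (the code is ours, not Fock's):

```python
import numpy as np

rng = np.random.default_rng(3)

def fock_map(p, p0):
    """Stereographic projection of momentum space onto the unit 3-sphere:
    returns (xi, eta, zeta, chi) for a 3-momentum p."""
    p2 = p @ p
    return np.append(2.0 * p0 * p, p0**2 - p2) / (p0**2 + p2)

p0 = 1.3
for _ in range(5):
    p, q = rng.normal(size=3), rng.normal(size=3)
    chord2 = np.sum((fock_map(p, p0) - fock_map(q, p0)) ** 2)   # 4 sin^2(omega/2)
    kernel = 4 * p0**2 * np.sum((p - q) ** 2) / ((p0**2 + p @ p) * (p0**2 + q @ q))
    assert np.isclose(chord2, kernel)
print("chordal-distance identity verified")
```

With this identity, the factor $`(p_0^2+𝐩^2)^2`$ in the relation between $`\psi `$ and $`\mathrm{\Psi }`$ below is exactly what is needed to absorb the projection factors in the integral equation.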
The function $`\mathrm{\Psi }(\alpha ,\vartheta ,\phi )`$ is related to the wave function in momentum space, $`\psi \left(𝐩\right)`$, as follows:
$$\mathrm{\Psi }(\alpha ,\vartheta ,\phi )=\frac{\pi }{\sqrt{8}}p_0^{-5/2}\left(p_0^2+𝐩^2\right)^2\psi \left(𝐩\right).$$
The integral equation for $`\mathrm{\Psi }(\alpha ,\vartheta ,\phi )`$ contains important information:

1. The invariance group of the Hydrogen-like atom is the group of four-dimensional rotations. This explains why the energy levels are independent of the azimuthal quantum number, and it introduces hyperspherical functions into the calculations. The invariance group discovered by Fock is especially useful in averaging or summation within a layer with a given principal quantum number.

2. In the case of the continuous spectrum of Hydrogen-like atoms, an analogous approach leads to Lorentz group symmetry on a four-dimensional hyperboloid.

3. The invariance group of the Hydrogen-like atom discovered by Fock is not a kinematical group which transforms only coordinates or momenta: $`\alpha `$-rotations connect functions with different energy levels $`E`$. Such a symmetry was named dynamical by Fock. The search for new dynamical groups became quite popular after 1960, i.e. 25 years after this paper by Fock.

4. Fock also showed that the momentum space of Hydrogen-like atoms is non-Euclidean. It is endowed with a Riemannian geometry of constant positive curvature in the case of the discrete spectrum, and with a Lobachevskian geometry of constant negative curvature in the case of the continuous spectrum.

Energy-time uncertainty relation

The uncertainty relation $`\mathrm{\Delta }E\mathrm{\Delta }t\ge h`$ was originally considered as a relativization of Heisenberg’s relation $`\mathrm{\Delta }x\mathrm{\Delta }p_x\ge h`$. The analysis of Fock and Krylov has shown that the energy-time uncertainty relation should be written as
$$\mathrm{\Delta }(E^{\prime }-E)\mathrm{\Delta }t\ge h,$$
where $`\mathrm{\Delta }\left(E^{\prime }-E\right)`$ is the uncertainty in the energy change in a transition from one state to another, and $`\mathrm{\Delta }t`$ is the time for which the transition probability is close to unity. This relation is applicable to any chosen experiment and results from the “non-absolute description” of a microsystem by the wave function. For a quasistationary state $`E^{\prime }`$ in a transition to an exactly stationary state $`E`$, the uncertainty relation defines the level width $`\mathrm{\Delta }E^{\prime }`$. If this width is small, so that $`\mathrm{\Delta }t\sim \tau `$ is large, the decay probability is constant in time, and exponential decay follows with lifetime $`\tau `$. One should recall that the behaviour of a state during an experiment (i.e., a measurement) cannot be described by the Schroedinger equation. The uncertainty in the energy change of a system means that the uncertainty of the interaction energy of a system in transition grows as the time of the change shortens.

A possible generalization of the concept of physical space

Fock writes: “The concept of physical space is closely related with that of the motion of a physical body. This connection is quite natural since we learn the properties of space by studying the motion of physical bodies… And the physical space (as distinguished from the configuration space) was always thought of as a manifold connected with the degrees of freedom of a single mass point.
In quantum physics the simplest kind of physical object (a material point) may be supposed to have the properties of an electron.” Fock describes the spin degree of freedom of the electron and the Pauli principle, and concludes: “Applying to this case the (tacitly assumed) classical presumption that the physical space is defined by the totality of variables describing the degrees of freedom of the simplest physical body (conventionally called a mass point), … we arrive at the generalization of the concept of physical space. The generalized physical space defined above can be called spinor space.” The Pauli principle then corresponds to the impossibility for two electrons to occupy one and the same point in spinor space. This paper of Fock may be considered as a physical motivation for Supersymmetry.

References

1. Fock V.A. - Zs.f.Phys., 1926, vol.38, 242.
2. Fock V.A. - Zs.f.Phys., 1926, vol.39, 226.
3. Klein O. - Zs.f.Phys., 1926, vol.37, 895.
4. Gordon W. - Zs.f.Phys., 1926, vol.40, 117.
5. Fock V.A. - Zs.f.Phys., 1929, vol.57, 261.
6. Fock V.A. - Phys.Zs.Sowjet., 1937, vol.12, 404.
7. Weisskopf V. - Zs.f.Phys., 1934, vol.89, 27; vol.90, 817.
8. Schwinger J. - Phys.Rev., 1951, vol.82, 664.
9. De Witt B.S. - in: Relativity, Groups and Topology, ed. De Witt B.S., New York, Gordon and Breach, 1965, p.19.
10. Itabashi K. - Prog.Theor.Phys., 1981, vol.85, 1423.
11. Fock V.A. - Zs.f.Phys., 1935, vol.98, 145.
12. Krylov N.S., Fock V.A. - JETP, 1947, vol.17, 93.
13. Fock V.A. - Physica Norvegica, 1971, vol.5, 149.
14. Golfand Yu.A., Likhtman E.P. - JETP Letters, 1971, vol.13, 452; Volkov D.V., Akulov V.P. - JETP Letters, 1972, vol.16, 621.
# Stochastic optimization methods for extracting cosmological parameters from CMBR power spectra

## I Introduction

In the past few years it has been realized that the Cosmic Microwave Background Radiation (CMBR) holds information about virtually all relevant cosmological parameters. The shape and amplitude of the fluctuations in the CMBR are strongly dependent on parameters such as $`\mathrm{\Omega }`$, $`H_0`$, etc. Given a sufficiently accurate map of the fluctuations, it should therefore in principle be possible to extract information on the values of these parameters. In general, it is customary to describe the fluctuations in spherical harmonics,
$$\frac{\mathrm{\Delta }T}{T}(\theta ,\varphi )=\sum _{lm}a_{lm}Y_{lm}(\theta ,\varphi ),$$ (1)
where the $`a_{lm}`$ coefficients are related to the power spectrum by $`C_l=\langle a_{lm}^{*}a_{lm}\rangle _m`$. For purely Gaussian fluctuations the power spectrum contains all statistical information about the fluctuations. The CMBR fluctuations were first detected in 1992 by the COBE satellite, and at present the COBE measurements, together with a number of smaller-scale experiments, make up our experimental knowledge of the CMBR power spectrum. These data are not of sufficient accuracy to really pin down any of the cosmological parameters, but the next few years will hopefully see an explosion in the amount of experimental data. Two new satellite projects, the American MAP and the European PLANCK, are scheduled and are designed to measure the power spectrum precisely down to very small scales ($`l\sim 1000`$ for MAP and $`l\sim 2000`$ for PLANCK). This should yield sufficient information to determine almost all relevant cosmological parameters. However, using CMBR data to extract information about the underlying cosmological parameters will rely heavily on our ability to handle very large amounts of data (see Refs. and references therein). The first problem lies in constructing a power spectrum from the much larger CMBR map. If there are $`m`$ data points, then the power spectrum calculation involves inversion of $`m\times m`$ matrices (an order $`m^3`$ operation). For the new satellite experiments $`m^3`$ is prohibitively large, and much effort has been devoted to finding methods for reducing this number by exploiting inherent symmetries in the CMBR. However, once the power spectrum has been constructed the troubles are not over: the space of cosmological parameters then has to be searched for the best-fit model. If there are $`n`$ free cosmological parameters, each sampled at $`q`$ points, the computational time scales as $`q^n`$ and, if $`n`$ is large, the problem becomes intractable. In the present paper we assume that a power spectrum has been constructed, so that only the problem of searching the cosmological parameter space remains. In general, parameter extraction relies on the fact that for Gaussian errors it is possible to build a likelihood function from the set of measurements,
$$\mathcal{L}(\mathrm{\Theta })\propto \mathrm{exp}\left(-\frac{1}{2}x^T\left[C(\mathrm{\Theta })\right]^{-1}x\right),$$ (2)
where $`\mathrm{\Theta }=(\mathrm{\Omega },\mathrm{\Omega }_b,H_0,n,\tau ,\mathrm{\dots })`$ is a vector describing the given point in parameter space. $`x`$ is a vector containing all the data points. This vector can represent either the CMBR map, or the reconstructed power spectrum points. $`C(\mathrm{\Theta })`$ is the data covariance matrix.
Assuming that the data points are uncorrelated, so that the data covariance matrix is diagonal, this can be reduced to the simple expression $`\mathcal{L}\propto e^{-\chi ^2/2}`$, where
$$\chi ^2=\sum _{l=1}^{N_{\mathrm{max}}}\frac{(C_{l,\mathrm{obs}}-C_{l,\mathrm{theory}})^2}{\sigma (C_l)^2}$$ (3)
is a $`\chi ^2`$ statistic and $`N_{\mathrm{max}}`$ is the number of power spectrum data points. In order to extract parameters from the power spectrum we need to minimize $`\chi ^2`$ over the multidimensional parameter space. In general there is no easy way of doing this. The topology of $`\chi ^2`$ could be very complicated, with several different local minima. However, let us for now ignore this possible problem and assume that the function is unimodal. Then there exists a vast number of algorithms for extremizing the function. The most efficient optimization methods usually depend on the ability to calculate the gradient of the objective function, $`\chi ^2`$. These methods work on completely general continuously differentiable functions, but under the right assumptions $`\chi ^2`$ possesses qualities which make it possible to improve on the simple gradient methods. In general, the second derivative of $`\chi ^2`$ with respect to parameters $`i`$ and $`j`$ is
$$\frac{\partial ^2\chi ^2}{\partial \theta _i\partial \theta _j}=2\sum _{l=2}^N\frac{1}{\sigma _l^2}\left[\frac{\partial C_l}{\partial \theta _i}\frac{\partial C_l}{\partial \theta _j}-(C_{l,\mathrm{obs}}-C_l)\frac{\partial ^2C_l}{\partial \theta _i\partial \theta _j}\right].$$ (4)
Sufficiently close to the minimum of $`\chi ^2`$, the second term in the equation above should be small compared with the first. In practice this means that we get the second-derivative information “for free” by just calculating the first derivatives. Therefore, if we assume that the starting point for the optimization is sufficiently close to the true minimum, an algorithm utilising second-derivative information should converge much faster than a gradient method. The most popular algorithm of this type is the Levenberg-Marquardt method. Note, however, that far away from the minimum the above expression for the second derivative can be very wrong and cause the algorithm to converge much more slowly. Both gradient and second-order algorithms are typically very efficient. However, there are several weaknesses: 1) They rely on our ability to calculate derivatives of $`\chi ^2`$. Although in principle this is no problem, numerical experiments have shown that results for this derivative are not always reliable. For instance, the numerical code for calculating power spectra, CMBFAST, is fundamentally different for open and flat cosmologies, and has no implementation of closed models, so that the derivative of $`\chi ^2`$ with respect to $`\mathrm{\Omega }_0`$ is not reliable at $`\mathrm{\Omega }_0=1`$. This is just one example, but the problem is generic as soon as points are located sufficiently near parameter boundaries. 2) The next problem is related to the fact that the above methods in general work as steepest-descent methods. This means that they are very easily fooled into taking the shortest path towards some local minimum which need not be global. If there are either many local minima, or the topology of $`\chi ^2`$ is complicated with many near degeneracies, then the above gradient-based methods are likely to perform poorly. Unfortunately this might easily be the case for any given realization of the CMBR power spectrum.
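Keeping only the first term of Eq. (4) gives the Gauss-Newton approximation to the curvature, built entirely from first derivatives. A minimal sketch, where `cl_model` is a stand-in for a black-box power spectrum code; the interface is our assumption, not CMBFAST's actual API:

```python
import numpy as np

def gauss_newton_hessian(theta, cl_model, sigma, eps=1e-3):
    """Approximate Hessian of chi^2 from the first term of Eq. (4).

    theta    : parameter vector, shape (n,)
    cl_model : callable, theta -> array of C_l values, shape (L,)
    sigma    : 1-sigma errors on the C_l, shape (L,)
    """
    n = len(theta)
    grads = []
    for i in range(n):
        h = np.zeros(n)
        h[i] = eps * max(abs(theta[i]), 1.0)   # two-sided finite difference
        grads.append((cl_model(theta + h) - cl_model(theta - h)) / (2 * h[i]))
    J = np.array(grads)                        # J[i] = dC_l / dtheta_i
    return 2.0 * (J / sigma**2) @ J.T          # H_ij ~ 2 sum_l J_il J_jl / sigma_l^2
```

A Levenberg-Marquardt step then solves a damped version of the Newton system built from this matrix, interpolating between gradient-descent-like and Newton-like behaviour.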
## II Stochastic optimization

### A Multistart algorithms

The above caveats lead us to look for more robust methods of finding the true minimum of $`\chi ^2`$. As soon as we are dealing with multimodal functions, it is clear that we cannot content ourselves with running an optimization scheme based on the above methods from just one starting point. The simplest possible improvement is a Monte Carlo multistart algorithm. In this case a starting point is chosen at random in the parameter space, and optimization is performed using either a gradient or a second-order method. After the algorithm converges, a new starting point is chosen. This method has the advantage that it converges to the global minimum in the asymptotic limit of infinite computational time. However, it is easy to improve on, because the simple multistart algorithm will uncritically detect the same local minimum many times. The multi level single linkage (MLSL) algorithm tries to alleviate this problem by mapping out the basins connected with the different local minima. If it detects that a trial point lies within a basin which has already been mapped, the point is rejected. Depending on the type of objective function, this algorithm can perform exceedingly well. In what follows we use the simple implementation of the MLSL algorithm provided by Locatelli. First, we need the following definition: let $`x_{\mathrm{max}}`$ and $`x_{\mathrm{min}}`$ be the maximum and minimum allowed values of parameter $`i`$. Then define a new parameter $`q\equiv (x-x_{\mathrm{min}})/(x_{\mathrm{max}}-x_{\mathrm{min}})`$, so that $`q\in [0,1]`$. We use this new parameter $`q`$ in the algorithm below, so that all cosmological parameters are treated on an equal footing and the allowed region is a simple hypercube spanning all values from 0 to 1 in $`R^n`$. The algorithm is then devised as follows:

1) At each step $`k`$, pick out $`N`$ sample points from the allowed region and calculate the objective function.

2) Sort the whole sample of $`kN`$ points in order of increasing $`\chi ^2`$ value and select the $`\gamma kN`$ points with the smallest values.

3) For each of these points $`q`$, run an optimization starting from $`q`$ iff
* no point $`y`$ exists such that $`d(q,y)\le \alpha `$ and $`\chi ^2(y)\le \chi ^2(q)`$;
* $`d(q,S)>d`$;
* optimization was not previously applied to $`q`$.
Optimization is performed with a gradient method.

4) Proceed to step $`k+1`$.

In the above, $`d(q,y)`$ is the Euclidean distance between $`q`$ and $`y`$, and $`S`$ is the set of already discovered local minima. $`\alpha `$ and $`d`$ are predefined distances which should be chosen to optimize the rate of finding local minima; they are a measure of how large the basins connected with local minima typically are in the specific problem. The above method thus includes a host of different parameters which should be chosen by the user: $`N`$, $`d`$, $`\gamma `$ and $`\alpha `$. This can make it quite troublesome to devise an algorithm which performs optimally. In our implementation we have chosen $`N=10`$, $`\gamma =0.2`$, $`d=0.1`$ and $`\alpha =0.1`$. Note that this is somewhat in conflict with the definition given in Refs. , in that $`\alpha `$ should really be a quantity which depends on $`k`$, but in order to obtain a simple implementation we have used the above fixed values.
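A compact sketch of this loop, using a generic local optimizer in place of the PORT3 routines (that substitution is ours):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def mlsl(chi2, dim, n_iter=50, N=10, gamma=0.2, alpha=0.1, d_min=0.1):
    """Multi level single linkage on the unit hypercube, as in Sec. II A."""
    pts = np.empty((0, dim))
    vals = np.empty(0)
    minima, tried = [], set()
    for k in range(1, n_iter + 1):
        new = rng.random((N, dim))                        # step 1: N fresh samples
        pts = np.vstack([pts, new])
        vals = np.append(vals, [chi2(p) for p in new])
        best = np.argsort(vals)[: int(gamma * k * N)]     # step 2: gamma*k*N best points
        for idx in best:                                  # step 3: conditional local search
            q, fq = pts[idx], vals[idx]
            near_better = any(
                j != idx and np.linalg.norm(pts[j] - q) <= alpha and vals[j] <= fq
                for j in range(len(pts)))
            near_min = any(np.linalg.norm(m.x - q) <= d_min for m in minima)
            if not (near_better or near_min or idx in tried):
                tried.add(idx)
                minima.append(minimize(chi2, q, bounds=[(0.0, 1.0)] * dim))
        # step 4: proceed to the next k
    return min(minima, key=lambda m: m.fun) if minima else None

# toy usage: res = mlsl(lambda q: np.sum((q - 0.3)**2), dim=6); print(res.fun)
```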
### B Simulated annealing

A completely different method, which in the next section is shown to be very effective for $`\chi ^2`$ minimization on CMBR power spectra, is simulated annealing. The method of simulated annealing was first introduced by Kirkpatrick et al. in 1983. It is based on the cooling behaviour of thermodynamic systems. Consider a thermodynamic system in contact with a heat bath at some temperature $`T`$. If left for sufficiently long, the system will approach thermal equilibrium at that temperature. The heat bath is then cooled, and if this is done slowly enough the system maintains equilibrium during the cooling phase and finally, as $`T\to 0`$, settles into the true ground state, the state with the lowest possible energy. This is very similar to a global search for the minimum of a function, and simulated annealing relies on the fact that the function to be minimized can be considered as the energy of a thermodynamic system. If the system is then cooled from a very high “temperature” towards $`T=0`$, it should find the global minimum, given that it maintains thermal equilibrium at all times. In practice, one lets the system jump around in parameter space at random. Given a starting point $`i`$, a trial point is sought according to some prescription, and is then either accepted or rejected according to the Metropolis acceptance probability
$$P_{\mathrm{accept}}(i+1)=\{\begin{array}{cc}1\hfill & \text{for }E_{i+1}\le E_i\hfill \\ e^{-(E_{i+1}-E_i)/T}\hfill & \text{for }E_{i+1}>E_i\hfill \end{array},$$ (5)
where, in our case, $`E=\chi ^2`$. There are many similarities between this and thermodynamic systems: at high temperatures the system visits all states freely, while at low temperatures it can visit only states very close to the minimum. For instance, it has been shown that by using the above criterion the system asymptotically approaches the Boltzmann distribution, given that it is kept at constant temperature asymptotically long. Also, if a system undergoes simulated annealing in complete thermal equilibrium at all times, then as $`T\to 0`$ the energy approaches the global minimum. For absolute global convergence to be ensured, it is thus necessary to allow infinite time at each temperature. In order to use simulated annealing for function optimization it is necessary to specify three things: 1) a space of all possible system configurations; 2) a cooling schedule for the system; 3) a neighbourhood structure. Here, the configuration space is a hypercube in $`R^n`$ bounded by the limits on the individual parameters. The cooling schedule and the neighbourhood structure are both, in general, quite difficult to choose optimally. Furthermore, they make the scheme problem dependent. For this reason, adaptive simulated annealing procedures have been devised which dynamically choose the cooling rate and neighbourhood directly from the previous iterations, in order to maximize the thermalisation rate. The problem with this approach is that the thermodynamic behaviour is no longer well defined; for instance, the approach to a Boltzmann distribution is not ensured. In the present work we choose a relatively simple cooling schedule and neighbourhood structure, neither of which is adaptive. In practice, we start with an initial temperature, $`T_0`$, which is then lowered exponentially according to $`T_{i+1}=\alpha T_i`$, where $`\alpha `$ is some constant. When the temperature reaches a final value $`T_1`$, the algorithm stops.
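A minimal sketch of this annealing loop follows; the proposal function here is a simple placeholder, while the neighbourhood rule actually used in this work is specified in Eqs. (6) and (7) below:

```python
import numpy as np

rng = np.random.default_rng(5)

def anneal(chi2, x0, propose, T0=1e4, T1=2.0, n_steps=3000):
    """Exponential cooling T -> alpha*T with the Metropolis rule of Eq. (5)."""
    alpha = (T1 / T0) ** (1.0 / n_steps)
    x = np.array(x0, dtype=float)
    E, T = chi2(x), T0
    best_x, best_E = x.copy(), E
    for _ in range(n_steps):
        y = propose(x, T, T0)
        Ey = chi2(y)
        if Ey <= E or rng.random() < np.exp(-(Ey - E) / T):   # Eq. (5)
            x, E = y, Ey
            if E < best_E:
                best_x, best_E = x.copy(), E
        T *= alpha
    return best_x, best_E

def propose(x, T, T0):
    """Placeholder move: Gaussian steps shrinking as the system cools,
    on parameters rescaled to the unit cube."""
    return np.clip(x + 0.1 * np.sqrt(T / T0) * rng.normal(size=x.shape), 0.0, 1.0)

# toy usage: print(anneal(lambda q: np.sum((q - 0.3)**2), np.full(6, 0.5), propose)[1])
```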
In this way $`\alpha `$ is a function of the total number of steps, $`N_s`$, given as $`\alpha =(T_1/T_0)^{1/N_s}`$. The neighbourhood search is devised so that at high temperatures the system is prone to make large jumps whereas at lower temperatures it mostly searches the nearest-neighbour points. In our specific model the parameter space consists of a vector, $`𝐱`$, of $`n`$ free parameters, bounded from below by the vector $`𝐱_{\mathrm{min}}`$, and from above by $`𝐱_{\mathrm{max}}`$. Let iteration point $`i`$ have the value $`(x_\beta )_i`$ for the parameter labelled $`\beta `$. Then the value of this parameter at iteration $`i+1`$ has acceptance probability given as

$$P[(x_\beta )_{i+1}]\propto e^{-|(x_\beta )_{i+1}-(x_\beta )_i|/T_{,\beta }},$$ (6)

where

$$T_{,\beta }=A_\beta [(x_\beta )_{\mathrm{max}}-(x_\beta )_{\mathrm{min}}](T/T_0)^{1/2},$$ (7)

and $`A_\beta `$ is some constant, chosen to yield a good convergence rate. The above probability is set to 0 if $`(x_\beta )_{i+1}`$ is outside the allowed interval for the given parameter. This criterion for picking out trial points has the desired quality that it makes large jumps at high temperature and progressively smaller jumps as the temperature is lowered. If the objective function depends strongly on $`\beta `$, then $`A_\beta `$ should be small, whereas if it is almost independent of $`\beta `$, $`A_\beta `$ should be large. It is well known that $`\chi ^2`$ is almost degenerate in the parameter $`\mathrm{\Omega }_mh^2`$ . Therefore it is natural to choose $`A_{\mathrm{\Omega }_mh^2}`$ to be small. In our implementation we have chosen the following values for the control parameters: $`T_0=10^4`$, $`T_1=2`$, $`A_{\mathrm{\Omega }_mh^2}=1/32`$, $`A_\beta =1/8`$ for $`\beta \ne \mathrm{\Omega }_mh^2`$. Note that the method of simulated annealing was first applied to simulated CMBR data by Knox , for a relatively small model with four free parameters.

## III Numerical results

### A Performance of different algorithms

In order to test the relative efficiency of the different optimization schemes we have run $`\chi ^2`$ minimization on synthetic power spectra. All the power spectra in the present paper have been calculated by use of the publicly available CMBFAST package . To keep the calculations manageable we have restricted them to a six-dimensional parameter space, characterised by the vector $`\mathrm{\Theta }=(\mathrm{\Omega }_m,\mathrm{\Omega }_b,H_0,n_S,N_\nu ,Q)`$. The model is taken to have flat geometry so that $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_m`$. We start from an assumed true model with $`\mathrm{\Theta }=(0.5,0.05,50,1,3,30\mu \mathrm{K})`$, i.e. fairly close to the currently favoured $`\mathrm{\Lambda }`$CDM model . Table I shows the free parameters, as well as the allowed region for each. We further assume that all $`C_l`$’s up to $`l=1000`$ can be measured without noise. That is, the errors are completely dominated by cosmic variance, with the error being equal to

$$\sigma (C_l)=\sqrt{\frac{2}{2l+1}}C_l.$$ (8)

From the underlying statistics we have produced a single realisation which we take to be the measured power spectrum.
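The trial-point rule of Eqs. (6)-(7) amounts to Laplace-distributed steps whose scale shrinks as $`(T/T_0)^{1/2}`$; a hedged sketch, with the box limits and the $`A_\beta `$ constants passed in explicitly, might be:

```python
import math

def neighbour(x, T, rng, x_min, x_max, A, T0=1.0e4):
    """Trial point per Eqs. (6)-(7): Laplace-distributed steps whose scale
    T_beta = A_beta * (max - min) * (T/T0)^(1/2) shrinks as the system cools;
    values outside the allowed interval have probability zero and are redrawn."""
    y = []
    for xb, lo, hi, Ab in zip(x, x_min, x_max, A):
        scale = Ab * (hi - lo) * math.sqrt(T / T0)        # Eq. (7)
        while True:
            step = rng.expovariate(1.0 / scale) * rng.choice((-1.0, 1.0))
            if lo <= xb + step <= hi:                     # reject out-of-range draws
                y.append(xb + step)
                break
    return y
```

The extra arguments can be bound (e.g. with `functools.partial`) to match the `neighbour(x, T, rng)` signature assumed in the annealing sketch above.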
Since we have $`N=999`$ synthetic data points, all normally distributed, $`\chi ^2`$ of the data set, relative to the true, underlying power spectrum should have a $`\chi ^2`$ distribution with mean $`N`$, and standard error $`\sqrt{2N}`$, so that

$$\chi ^2=999\pm 45.$$ (9)

The specific synthetic data set we use has $`\chi _{\ast }^2=1090.98`$, i.e., it is within about 2$`\sigma `$ of the expected value. If the optimization routine is optimal, then for each optimization run

$$\chi _{\mathrm{minimization}}^2\le \chi _{\ast }^2.$$ (10)

The average of several optimization runs should preferably yield a value which is somewhat below $`\chi _{\ast }^2`$. We therefore have a measure of whether or not the optimization has been successful. We have tested four different optimization algorithms on a subset of the full six-dimensional parameter space. The algorithms are: 1) simple Monte Carlo multistart with a gradient optimization method (G), 2) simple Monte Carlo multistart with the Levenberg-Marquardt method (LM), 3) multi level single linkage (MLSL), as described in Section IIa, 4) simulated annealing, as described in Section IIb. Algorithms 1-3 use optimization routines from the PORT3 library . In order to make a direct comparison between the algorithms, we have let them run for a fixed number of steps, where one step is defined as one power spectrum calculation. All methods, except simulated annealing, use gradient information, which means that additional power spectra must be calculated at each iteration. We use two-sided derivatives, so that to calculate the gradient (and Hessian), we need $`2n`$ more calculations, where $`n`$ is the number of cosmological parameters. Fig. 1 shows the minimum $`\chi ^2`$ found by the different algorithms. Each point in Fig. 1 stems from a Monte Carlo run of 15 optimizations. Clearly, the MLSL method improves on the simple multistart algorithm. The LM algorithm performs better than gradient optimization in some cases, but in other cases it is much worse. This is probably due to the fact that if the starting point is far away from a local minimum then the second derivative may yield false information because Eq. (4) does not hold, causing the algorithm to converge more slowly. This weakness could be remedied to some extent by diagonalising the matrix of second derivatives (Fisher matrix diagonalisation), so that the correlation between different parameters is approximately broken. However, the most striking feature in Fig. 1 is that SA easily outperforms the other algorithms. Most likely this is due to the fact that $`\chi ^2`$ possesses valleys where the function has many almost degenerate local minima. Note that the likelihood function does not need to be truly multi-modal for this effect to occur. It can happen either because the parameter space is constrained so that the algorithm takes a path which leads out of the allowed space, or because there are small “bumps” on $`\chi ^2`$ close to the global minimum, which cause the gradient algorithms to get trapped. $`\chi ^2`$ is not multimodal in the sense that it contains equally good local minima, separated by long distances in parameter space. For the case of four free parameters (upper panel), most of the algorithms produce acceptable results with about 1000 steps, but with five parameters (lower panel), about 2000 steps are needed. In both cases, SA needs substantially fewer steps than the other algorithms. In Fig. 2 we show four different runs of the simple gradient-based algorithm without multistart.
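As a small illustration of this bookkeeping, one synthetic realisation and its $`\chi ^2`$ budget can be generated as follows; the fiducial spectrum shape is a placeholder, not a CMBFAST output:

```python
import numpy as np

rng = np.random.default_rng(0)
ells = np.arange(2, 1001)                  # l = 2..1000 -> N = 999 data points
Cl_true = 1.0 / (ells * (ells + 1.0))      # hypothetical fiducial spectrum shape
sigma = np.sqrt(2.0 / (2 * ells + 1)) * Cl_true           # cosmic variance, Eq. (8)
Cl_obs = Cl_true + sigma * rng.standard_normal(ells.size) # one synthetic realisation

chi2 = np.sum(((Cl_obs - Cl_true) / sigma) ** 2)
N = ells.size
print(f"chi^2 = {chi2:.1f}, expected {N} +/- {np.sqrt(2 * N):.0f}")   # Eq. (9)
```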
In two of the cases the algorithm converges towards the global minimum, whereas in the other two it becomes trapped at much higher-lying points in parameter space. We have tested the effect of varying the step size in the gradient calculation and found that the results do not depend on this. This figure also shows that the gradient-based algorithms generally converge fairly rapidly (i.e. within a few hundred steps), so that the multistart algorithm generally runs several times even for a relatively small number of total steps.

### B Parameter extraction

If the $`\chi ^2`$ minimization succeeds in finding the global minimum, then the value found should reflect the underlying measurement uncertainty. We have performed a detailed Monte Carlo study of how well the SA algorithm is able to extract parameters from the power spectrum. The test goes as follows: First, construct $`N_{\mathrm{MC}}`$ synthetic measured power spectra, as described in the previous section. Then run optimization on each one of these spectra. This produces $`N_{\mathrm{MC}}`$ estimated points in parameter space. In order to compare these points with the underlying uncertainty, we then need to calculate the estimated standard error on the different parameters. This is done by the standard method of calculating the Fisher information matrix. At the true point in parameter space, the likelihood function should be maximal, so that it should have zero gradient. The matrix of second derivatives is then given by (Eq. (4))

$$I_{ij}=\sum _{l=2}^{l_{\mathrm{max}}}(2l+1)C_l^{-2}\frac{\partial C_l}{\partial \theta _i}\frac{\partial C_l}{\partial \theta _j}.$$ (11)

The expected error on the estimation of parameter $`i`$ is then given by

$$\sigma _i^2=(I^{-1})_{ii},$$ (12)

if we assume that all the relevant cosmological parameters should be determined simultaneously. The expected error on $`\mathrm{\Omega }_m`$ is $`\sigma =0.098`$, given our assumed measurement precision. Note that above we have again assumed that the only uncertainty in the measurements is from cosmic variance. We have performed this Monte Carlo test on the 6-dimensional parameter space, using 24 different synthetic spectra. We have extracted parameters using SA with different numbers of total steps: 500, 2000 and 4000. Fig. 3 shows how the estimated points are distributed for the parameter $`\mathrm{\Omega }_m`$. We have binned the extracted points in bins of width 1$`\sigma `$ up to 5$`\sigma `$. For the optimization performed with 500 steps the distribution is very wide, showing no specific centering on the true parameter value. The optimization with 2000 steps extracts values which are centered on the true value, indicative of a good optimization. Furthermore, the optimization with 4000 steps shows little improvement over that with 2000, again indicating that the one with 2000 steps is already performing optimally. Note that both for 2000 and 4000 steps the distribution of extracted points is significantly wider than the theoretical expectation, which was calculated assuming a normal distribution with $`\sigma =0.098`$. One would expect this to be the case since the probability distribution of any given parameter is only normal close to the true value, even for a perfect optimization. Therefore there are likely to be more outlying points than suggested by the normal distribution.
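A minimal sketch of this error estimate, directly transcribing Eqs. (11)-(12) (the derivative array is assumed to be computed elsewhere, e.g. by two-sided differencing of CMBFAST outputs), could read:

```python
import numpy as np

def fisher_matrix(Cl, dCl_dtheta):
    """Fisher matrix of Eq. (11) for cosmic-variance-limited C_l's.
    Cl: array over l = 2..l_max; dCl_dtheta: (n_par, n_l) array of dC_l/dtheta_i."""
    ells = np.arange(2, 2 + Cl.size)
    w = (2 * ells + 1) / Cl**2                       # the (2l+1) C_l^{-2} weight
    return np.einsum('l,il,jl->ij', w, dCl_dtheta, dCl_dtheta)

def parameter_errors(I):
    """Marginalised 1-sigma errors, Eq. (12): sigma_i = sqrt((I^{-1})_{ii})."""
    return np.sqrt(np.diag(np.linalg.inv(I)))
```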
If we have $`N_{\mathrm{MC}}`$ Monte Carlo runs, then if the optimization is perfect one should obtain a sample mean of roughly

$$\mu _{\mathrm{sample}}\simeq \mu _{\mathrm{true}}\pm \sigma _s,$$ (13)

where $`\sigma _s=\sigma /\sqrt{N_{\mathrm{MC}}}`$ for a given parameter if $`N_{\mathrm{MC}}`$ is large and the extracted parameters are drawn from a normal distribution. We can also calculate $`\chi ^2`$ for the sample

$$\chi _\theta ^2=\sum _{i=1}^{N_{\mathrm{MC}}}\frac{1}{\sigma ^2}(\theta _{\mathrm{found}}-\theta _{\mathrm{true}})^2.$$ (14)

This function should be approximately $`\chi ^2`$ distributed. We have calculated $`\mu `$ and $`\chi ^2`$ for the sample of extracted parameters, to see if it is compatible with the theoretical expectations. Table II shows the values found from the 24 Monte Carlo simulations. The sample mean found by the optimization with 500 steps deviates by more than 7$`\sigma `$ from the expectation. Again this indicates a poor optimization. The optimizations with 2000 and 4000 steps succeed in recovering the true mean to within 2$`\sigma `$. As for $`\chi ^2`$, it is much lower for the 2000 and 4000 steps optimizations than for the one with 500 steps. However, both are still much larger than expected from a normal distribution. As mentioned above this has to do with the fact that the distribution is not normal far away from the true parameter value, so that more outlier points are expected. These contribute heavily to $`\chi ^2`$, so that a larger value can be expected, even for a perfect optimization. As seen above, even for the small 6-parameter model we use, it is necessary on average to calculate more than $`10^3`$ power spectra. Even on a fast computer this is something which takes several hours. This must be done each time one wants to check how a new proposed cosmological model fits the data. This very clearly shows the necessity of using fast optimization algorithms for parameter extraction. Note that the models we have calculated are flat and without reionization; including either curvature or reionization significantly slows the CMBFAST code. Also, more exotic models like scenarios with decaying neutrinos lead to very cumbersome CMBR spectrum calculations . The above Monte Carlo method was also used by Knox in order to test the $`\chi ^2`$ optimization efficiency for a small model with 4 free parameters.

## IV Discussion and Conclusions

We have tested different methods for $`\chi ^2`$ minimization and parameter extraction on CMBR power spectra. It was found that simulated annealing is very effective in this regard, and that it compared very favourably with other optimization routines. The reason for this is most likely that $`\chi ^2`$ possesses very nearly degenerate minima. Also, numerical noise in the CMBFAST code can cause the gradient information to become unreliable near stationary points, causing the gradient-based methods to become trapped in points which are not true minima. We have also found that even for the simulated annealing algorithm, many power spectrum calculations are usually necessary in order to obtain a good estimate of the global minimum. Without a fast optimization algorithm it is very difficult to extract reliable parameter estimates from CMBR power spectra, and even with a routine like SA, it is computationally very demanding as soon as the parameter space is realistically large (9-10 dimensional).
Note that all of the above calculations rely on stochastic methods in that they start out at completely random points in the allowed parameter space. This is very different from the method used by Oh, Spergel and Hinshaw , who use as the initial point a fit obtained by the chi-by-eye method and then optimize that initial guess using a second-order method. This method surely makes the optimization algorithm converge faster, but suffers greatly from the problem of how to choose the initial point without biasing the outcome (i.e. making the algorithm find a minimum which is not global). We believe that stochastic optimization is a much more robust approach. Interestingly, there are other modern algorithms for optimization which work along some of the same principles as SA, for instance genetic algorithms . Given the magnitude of the computational challenge provided by upcoming CMBR data, it appears worthwhile to explore the potential of such new algorithms.

###### Acknowledgements.

This work was supported by a grant from the Carlsberg Foundation.
# Pixelated Lenses and 𝐻₀ from Time-delay QSOs

## 1 Introduction

Most ways of measuring the Hubble constant involve a form of distance ladder, which utilizes a number of astrophysical standard candle and standard ruler relations, and is calibrated locally by a geometrical technique such as parallax (e.g., Madore et al. 1999, Madore et al. 1998, Kennicutt 1995). A recent exciting development in this field is to extend the reach of the geometrical rung of the distance ladder by using masers in orbit around galaxy centers to get distances to nearby galaxies, thus bypassing Cepheids (Herrnstein et al. 1999). A few methods involve no distance ladder: good examples are (i) inferring the distance of Type II supernovae from their light curves and spectra by modeling their expanding photospheres (Schmidt et al. 1992), and (ii) comparing the $`H_0`$-independent angular extent of galaxy clusters to their $`H_0`$-dependent depth as deduced from the X-ray emission and the Sunyaev-Zeldovich microwave background decrement due to the cluster (Hughes & Birkinshaw 1998). But the most ‘one-step’ method of all was proposed by S. Refsdal in 1964, though it has only recently become feasible. The principle of Refsdal’s method is simple. In a system where a high-redshift QSO is split into multiple images by an intervening galaxy lens, the difference in light travel time between different images (observable as time delays if the QSO is variable) is proportional to the scale factor of the universe. The time delay is given by the schematic formula

$`\text{Time delay}`$ $`=`$ $`h^{-1}\times \text{1 month}\times \text{image separation in arcsec}^2`$
$`\times z_{\mathrm{lens}}\times \text{weak dependence on }z_{\mathrm{lens}}\text{, }z_{\mathrm{QSO}}\text{, and cosmology}`$
$`\times \text{lens-mass-distribution dependent factor}`$

where the last two factors are of order unity. To obtain $`H_0`$ using this method one requires three types of input: (i) the observed time delay(s) between QSO images, (ii) knowledge of the cosmology, and (iii) the mass distribution in the lensing galaxy. The first can be, and has been, measured with increasing precision for about eight systems so far. The second is not a serious problem, because the dependence on cosmology is weak and the uncertainty due to it is easy to quantify; in this paper we will refer all results to the Einstein-de Sitter cosmology. The uncertainty in $`H_0`$ is dominated by the third item; the number of usable constraints on the mass distribution in the galaxy is small, while the range of possible distributions is huge. Thus, the mass distribution is the major source of uncertainty. Two different paths can be taken to compensate for our lack of knowledge about the galaxy. One is to assume an exact parametric form for the galaxy mass distribution and fit the observed lensing properties as well as possible; the other is to take the image properties as exact, and try to reconstruct the galaxy mass map as well as possible. Single parametric models fix the last term in (1) and thus cannot account for the uncertainty resulting from it. Blandford & Kundić (1996) advise that even if one finds a parametric galaxy model which is dynamically possible and which reproduces the image properties with acceptably low $`\chi ^2`$, one still has to ‘aggressively explore all other classes of models’ to get the true uncertainty in $`H_0`$.
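As a rough numerical illustration of formula (1) (the numbers below are made up, and the two order-unity factors are lumped into a single argument):

```python
def schematic_delay_months(h, sep_arcsec, z_lens, order_unity=1.0):
    """Order-of-magnitude time delay from the schematic formula (1); the two
    order-unity factors are lumped into `order_unity`."""
    return (1.0 / h) * sep_arcsec**2 * z_lens * order_unity

# e.g. a ~1.2 arcsec image pair behind a z = 0.3 lens, for h = 0.6:
print(schematic_delay_months(0.6, 1.2, 0.3))   # ~0.7 months, i.e. a few weeks
```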
To explore the model space in a systematic fashion one needs to use a representation of the galaxy mass distribution that is general and not restricted to a particular form. One way would be to expand the mass distribution using a set of basis functions; another is to pixelate the galaxy and take each pixel as an independent mass element. We introduced pixelated models in Saha & Williams (1997, hereafter SW97) but at that time did not have any strategy for searching model space. We have now extended that work to explore the model space with the goal of estimating the uncertainty in the derived value of $`H_0`$. The plan of this paper is as follows. In Section 2 we summarize the observational situation with regard to strongly lensed QSOs. In Section 3 we present the general lensing formalism and point out a few properties of the lensing equations that are useful in interpreting the results of modeling. We also explain the reasons for confining our analysis to PG1115+080 and B1608+656 for now. Sections 4 and 5 describe our method for deriving $`H_0`$ and test it on a synthetic sample via a blind test. Application to the real systems can be found in Section 6. Section 7 discusses our results.

## 2 Observed Time-Delay Lenses

The first piece of input for $`H_0`$ determination is the measurement of time delays between the various QSO images. At the present time, ten multiply-imaged QSOs already have measured time delays or are being monitored: Q0957+561 (Kundić et al. 1997a), PG1115+080 (Schechter et al. 1997, Barkana 1997), B1608+656 (Fassnacht et al. 1999), B0218+357 (Biggs et al. 1999), PKS 1830-211 (Lovell et al. 1998), HE 1104-1805 (Wisotzki et al. 1998), B1030+074, B1600+434 (Burud et al. 1999), J1933+503, and RXJ0911+0551 (Hjorth et al. 1999). In this work we limit ourselves to 4-image lenses with known source and lens redshifts and accurate time delay measurements; PG1115+080 and B1608+656 fit the description. PG1115 (Weymann et al. 1980) was the second lens to be discovered. The source is a radio-quiet QSO at $`z_s=1.722`$. Accurate positions for the images were measured by Kristian et al. (1993); lightcurves were analyzed by Schechter et al. (1997), and time delays derived by Schechter et al. and Barkana (1997). The main lensing galaxy is an outlying member of a small galaxy group, at $`z_l=0.311`$, with an estimated line-of-sight velocity dispersion of $`270\pm 70\,\mathrm{km}\,\mathrm{s}^{-1}`$ (Kundić et al. 1997b). A summary of observational results on this system can be found in SW97. B1608 was discovered in the Cosmic Lens All-Sky Survey (Myers et al. 1995, Myers et al. 1999). The lens is either a perturbed single galaxy or a merging/interacting pair of galaxies superimposed in the plane of the sky. The source and lens redshifts are 1.394 and 0.630, respectively. The time delays were recently reported by Fassnacht et al. (1999) based on VLA observations spanning 7 months. The time delays we use in this work are an earlier determination (Fassnacht, private communication), and are less than 0.5$`\sigma `$ away from the values quoted in Fassnacht et al. (1999): $`\mathrm{\Delta }t_{BA}=28.5`$, $`\mathrm{\Delta }t_{BC}=32`$, and $`\mathrm{\Delta }t_{BD}=77`$ days.

## 3 Lensing formalism

A photon traveling through a galaxy will take longer to arrive at the observer than an unimpeded photon.
Part of the time delay occurs because the path of the ray bundle makes a detour rather than going straight; the time delay is further increased because the photon travels through the gravitational potential well of the galaxy. The total time delay is given by

$$\tau (\boldsymbol{\theta },\boldsymbol{\theta }_\mathrm{s})=(1+z_\mathrm{l})\frac{D_{\mathrm{ol}}D_{\mathrm{os}}}{D_{\mathrm{ls}}}\left[\frac{1}{2}(\boldsymbol{\theta }-\boldsymbol{\theta }_\mathrm{s})^2-\frac{1}{\pi }\int d^2\theta ^{\prime }\,\kappa (\boldsymbol{\theta }^{\prime })\mathrm{ln}|\boldsymbol{\theta }-\boldsymbol{\theta }^{\prime }|\right]$$ (2)

where $`\boldsymbol{\theta }`$ is the position on the sky, $`\boldsymbol{\theta }_\mathrm{s}`$ is the source position, the $`D`$’s are the angular diameter distances between the source, the lens and the observer, $`z_\mathrm{l}`$ is the redshift of the lens galaxy, and $`\kappa (\boldsymbol{\theta })`$ is the projected mass density in the galaxy in units of $`\mathrm{\Sigma }_{\mathrm{crit}}=(c^2/4\pi G)(D_{\mathrm{os}}/D_{\mathrm{ls}}D_{\mathrm{ol}})`$. If the lens mass distribution $`\kappa (\boldsymbol{\theta })`$ is known then the arrival time surface, Eq. (2), provides us with all the necessary information about the images. The time delay between any two images is just the difference between $`\tau `$ at the relevant locations. According to Fermat’s Principle the images appear at stationary points of the arrival time surface,

$$\frac{\partial \tau }{\partial \boldsymbol{\theta }}=0=\boldsymbol{\theta }-\boldsymbol{\theta }_\mathrm{s}-\frac{1}{\pi }\int d^2\theta ^{\prime }\,\kappa (\boldsymbol{\theta }^{\prime })\frac{\boldsymbol{\theta }-\boldsymbol{\theta }^{\prime }}{|\boldsymbol{\theta }-\boldsymbol{\theta }^{\prime }|^2}.$$ (3)

Image distortion and magnification are given by the inverse of the curvature matrix of the arrival time surface

$$\left[\frac{\partial ^2\tau }{\partial \theta _i\partial \theta _j}\right]^{-1}.$$ (4)

A few things can be learned by looking at the arrival time and lens equations: (1) The time ordering of the images can be deduced from the image configuration using the morphological properties of the arrival time surface. The image furthest from the lensing galaxy is always first, and the one nearest the galaxy the last. In four-image QSOs the second image is the one opposite the first. Figure 1 illustrates. (2) When four images are formed by an isolated galaxy of typical ellipticity the images are located nearly at the same galactocentric distance. This is easy to see by considering the two pieces of the arrival time surface. If the source and the center of the galaxy are not well aligned, i.e., if the ‘bump’ due to the gravitational potential contribution is away from the ‘well’ of the geometrical contribution, then the steepness of the geometrical part allows only two images to form, one roughly on either side of the galaxy center. To get four images, the bump of the gravitational contribution must be centered close to the source location. In such a situation the total arrival time configuration is centrally symmetric and the resulting images are approximately equidistant from the galaxy center. (3) If the four images of a single source are located at different galactocentric distances the simplest explanation is the presence of external shear. External shear effectively raises the gravitational part of the arrival time surface closest to itself (see Fig. 1 and Eq. 2). The effect is to push the locations of the stationary points away from the source of external shear, hence increasing the radial spread of images. It follows that the direction of external shear can be determined by examining image locations with respect to the galaxy center.
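To make Eqs. (2)-(3) concrete, a minimal numerical sketch (omitting the constant distance/redshift prefactor and using a crude point-mass quadrature of the logarithmic kernel) could evaluate the arrival-time surface as follows; images can then be located by scanning a grid of positions for stationary points:

```python
import numpy as np

def arrival_time(theta, theta_s, mass_points, cell):
    """tau of Eq. (2), up to the (1+z_l) D_ol D_os / D_ls prefactor.
    mass_points: list of (position, kappa) pairs sampling the lens on cells
    of side `cell` (a crude quadrature, for illustration only)."""
    geom = 0.5 * np.sum((np.asarray(theta) - np.asarray(theta_s)) ** 2)
    grav = 0.0
    for pos, k in mass_points:        # (1/pi) * integral of kappa ln|theta - theta'|
        r = np.linalg.norm(np.asarray(theta) - np.asarray(pos))
        if r > 0:
            grav += k * np.log(r) * cell**2 / np.pi
    return geom - grav
```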
PG1115 is a good example; the image closest to the galaxy center, image B, is located between the galaxy-lens and the galaxy group, which is the source of external shear in this case. (4) Position angles (PA) of images are determined by the ellipticity PA of the galaxy roughly at the radius of the images. When images are spread over a range of radial distances their PAs provide information on the galaxy ellipticity PA over a range of galactocentric distances. Thus detailed modeling can reveal the twisting of the isodensity contours. (5) Not all types of information about images are equally useful for modeling purposes. The arrival time surface integrates over $`\kappa (\boldsymbol{\theta })`$ twice, making time delays most sensitive to the overall mass distribution in the galaxy, and least dependent on the local small-scale perturbations in the mass distribution. Image positions are determined from the lens equation, which integrates over $`\kappa (\boldsymbol{\theta })`$ once. Finally, image magnifications are very dependent on the local behavior of mass, making them the least useful for modeling. This means, unfortunately, that a double like Q0957, though it has well-measured substructure in the images and near-perfect time-delay measurements, provides too few constraints on the lensing mass to usefully estimate $`H_0`$ unless drastic assumptions about the mass distribution are made. In that case, the derived errors will tend to be underestimated, as was noted by Bernstein and Fischer (1999), who constructed many types of parametric models for Q0957: ‘The bounds on $`H_0`$ are strongly dependent on our assumptions about a “reasonable” galaxy profile’. (6) A linear rescaling of the arrival time and lens equations, i.e., multiplying both by a constant factor $`ϵ`$, will not alter the observable properties of images: image separations and relative magnification tensors. Physically the transformation amounts to rescaling the mass density of the lens by $`ϵ`$ and adding a constant mass density sheet. This transformation was first discussed by Gorenstein et al. (1988) with regard to modeling of Q0957, and later became known as the mass sheet degeneracy. Note that a mass sheet extending to infinity is not needed; a mass disk larger than the observed field is enough because an external monopole has no observable effect.

## 4 The method

The first step is to pixelate the lens plane mass distribution of the main lensing galaxy. In practice we use $`0.1^{\prime \prime }`$ pixels, and limit the galaxy to a circular window of radius about twice that of the image-ring. Pixelated versions of Eqs. (2) and (3) are:

$$\tau (\boldsymbol{\theta },\boldsymbol{\theta }_\mathrm{s})=(1+z_\mathrm{l})\frac{D_{\mathrm{ol}}D_{\mathrm{os}}}{D_{\mathrm{ls}}}\left[\frac{1}{2}|\boldsymbol{\theta }|^2-\boldsymbol{\theta }\cdot \boldsymbol{\theta }_\mathrm{s}-\sum _n\kappa _n\psi _n(\boldsymbol{\theta })\right]$$ (5)

and

$$\boldsymbol{\theta }-\boldsymbol{\theta }_\mathrm{s}-\sum _n\kappa _n\vec{\alpha }_n(\boldsymbol{\theta })=0,$$ (6)

where the summation is over mass pixels and $`\psi _n`$ and $`\vec{\alpha }_n`$ are integrals over individual pixels and can be evaluated analytically (see the Appendix of SW97). A term $`|\boldsymbol{\theta }_\mathrm{s}|^2`$ has been omitted from Eq. (5) because a constant additive factor in the arrival time cannot be measured. Image properties translate into linear constraints in the $`(N+2)`$-dimensional model space, where $`N`$ dimensions represent a pixel each and 2 represent the source coordinates. We call these primary constraints.
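A sketch of how the pixelated quantities enter the constraints might read as follows; for brevity each pixel is approximated by a point mass at its centre rather than by the exact square-pixel integrals of SW97:

```python
import numpy as np

def alpha_point(theta, centre, cell):
    """Deflection basis vector alpha_n(theta), with each pixel approximated by a
    point mass of area cell**2 at its centre (SW97 give the exact integrals)."""
    d = np.asarray(theta) - np.asarray(centre)
    return (cell**2 / np.pi) * d / (d @ d + 1e-12)

def lens_equation_residual(theta_img, theta_s, kappas, centres, cell):
    """Residual of the pixelated lens equation (6) at one image position; each
    observed image contributes two constraints, linear in (kappa_n, theta_s)."""
    defl = sum(k * alpha_point(theta_img, c, cell) for k, c in zip(kappas, centres))
    return np.asarray(theta_img) - np.asarray(theta_s) - defl
```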
The images can provide us with only a few constraints: in a 4-image system we have $`2\times 4`$ coordinates and 3 time delay ratios: 11 in all. On the other hand, the unknowns are numerous, $`20^2`$ mass pixels plus 2 source coordinates. This results in a plethora of galaxy models, each of which reproduces the image properties exactly. Luckily, the bulk of these models can be discarded because they do not look anything like galaxies. In fact, we consider only those models which satisfy the following further (linear) constraints, which we call secondary. These pertain to the main lensing galaxy:

1. mass pixel values $`\kappa _n`$ must be non-negative;

2. the location of the galaxy center is assumed to be coincident with that of the optical/IR image;

3. the density gradient of the lens must point no more than $`45^{\circ }`$ away from the center of the galaxy;

4. the lens must have inversion symmetry, i.e., look the same if rotated by $`180^{\circ }`$ \[enforced only if the main lensing galaxy appears unperturbed and has no companions close to the QSO images\];

5. the logarithmic projected density gradient in the vicinity of the images, $`d\mathrm{ln}\kappa /d\mathrm{ln}\theta =\mathrm{ind}(r)`$, should be no shallower than $`-0.5`$. For a power law projected density profile, the radial magnification at an image is equal to $`-1/\mathrm{ind}(r)`$, therefore the statement that $`\mathrm{ind}(r)<-0.5`$ means that images are magnified radially by less than a factor of 2, which is probably a reasonable limit given the appearance of optical Einstein rings seen in some systems, for example, PG1115 and B0218;

6. external shear, i.e., the influence of mass other than the main lensing galaxy, is restricted to be constant across the image region, and is represented by adding a term $`\frac{1}{2}\gamma _1(\theta _1^2-\theta _2^2)+\gamma _2\theta _1\theta _2`$ to the lensing potential, Eq. (2).

All these constraints are non-restrictive and are obeyed by the vast majority of galaxies; thus our analysis explores the widest possible range of galaxy mass distributions. Obviously, the primary and secondary constraints are not enough to isolate a unique galaxy mass solution. A unique solution can be singled out by further specifying galaxy properties. For example, in SW97 particular galaxy models were found by imposing a trial value of $`H_0`$ as one of the primary constraints, and then picking the model that followed the observed light distribution as closely as possible given the rigid primary and secondary constraints, see Figures 2–5 of SW97. Here our aim is different. Any of the infinitely many models remaining after the primary and secondary constraints have been applied could be the real lens, as all of them reproduce the image properties exactly and all look reasonably like galaxies, therefore any one of the corresponding derived $`H_0`$’s could be the real $`H_0`$. We want to produce an ensemble that samples this model space, and our procedure is as follows. The allowed models form a simplex in the $`(N+2)`$-dimensional space of mass pixels and source positions, because the constraints are all linear. We start with a random point in the allowed simplex (i.e., an allowed model). Next we choose a random vertex of that simplex, which is easily done by linear programming. Then we consider the line joining the current point with the vertex, and move to a random point on it, taking care to remain inside the simplex. The process is repeated until a sample of 100 models, and hence 100 $`H_0`$ values, is assembled.
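A minimal sketch of this walk, using SciPy's linear-programming routine to pick random vertices (the constraint matrices encoding the primary equalities and secondary inequalities are assumed to be assembled elsewhere), might read:

```python
import numpy as np
from scipy.optimize import linprog

def random_vertex(A_ub, b_ub, A_eq, b_eq, bounds, rng):
    """A random vertex of the allowed simplex {A_ub x <= b_ub, A_eq x = b_eq},
    found by minimising a randomly oriented linear objective."""
    c = rng.standard_normal(A_ub.shape[1])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x

def sample_models(A_ub, b_ub, A_eq, b_eq, bounds, x0, n_models=100, seed=0):
    """Walk inside the simplex: move a random fraction of the way from the
    current model towards a random vertex (the segment stays inside by convexity)."""
    rng = np.random.default_rng(seed)
    x, models = np.asarray(x0, float), []
    for _ in range(n_models):
        v = random_vertex(A_ub, b_ub, A_eq, b_eq, bounds, rng)
        x = x + rng.uniform(0.0, 1.0) * (v - x)
        models.append(x)
    return models
```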
This procedure is a trivial case of the Metropolis algorithm (see e.g., Binney et al. 1992) for sampling density functions in high-dimensional spaces. The resulting ensemble of $`H_0`$ values has a straightforward interpretation in terms of Bayesian probabilities. The part of model space allowed by the secondary constraints is the prior (i.e., the possibilities allowed before considering the data). Our prior is uniform, which is to say that we have not incorporated any prior preferences between different models allowed by the secondary constraints. Since the unknowns $`\kappa _n`$ occur linearly in Eqs. 5 and 6, a uniform prior means that any linear interval in $`\kappa _n`$ is a priori as probable as any other interval of equal length. The primary constraints come from data, and the 100 models that satisfy both primary and secondary constraints sample the posterior probability distribution. At the present time there is no clear motivation to use any prior other than a uniform one; however, a non-uniform prior, if desired, would modify the method only slightly: one could either keep the same 100 models but weight them according to the prior, or take the prior into account while choosing the models through the Metropolis prescription.

## 5 Blind tests of the method

Before applying the method to real systems we try it on a synthetic situation designed to resemble the real world as closely as possible. One of us, “Person A”, picked an $`h`$ value and created a set of four galaxies and the corresponding images of a single background source in each case. Exact values of image positions with respect to the galaxy center and time delays (but not $`h`$, nor information as to whether the galaxy was inversion symmetric or if there was any external shear) were conveyed to the other one of us, “Person B”, who used this information to construct an ensemble of galaxy models and derive $`h`$ distributions for each case separately. We ran the whole experiment several times to remove bugs, and did not want to fall into the trap of simply publishing the results of the best run. So once we were confident that the experiment worked, we decided that the next four galaxies, whatever the results, would go into the published paper. Figure 2 pictorially illustrates the three stages of our blind test. Person B applied the reconstruction method to each system twice, once with the assumption of inversion symmetry (i.e., symmetric galaxies, see item 4 in Section 4), and once without. Based on the appearance of the reconstructed mass distribution Person B decided whether the inversion symmetry constraint was right in each case. Figures 3–10 present the results for each of the four galaxies. For galaxies #1, 3 and 4 Person B picked the symmetric option, and the asymmetric option for galaxy #2. Panels (a) and (b) of Figures 3, 5, 7, and 9 show the actual projected density distribution and the average of the 100 reconstructed galaxies, for galaxies #1, 2, 3, and 4 respectively. In a map which is an average of many reconstructions, persistent features of individual maps are enhanced while peculiarities are washed out, so the average is a reasonable guess as to what the real galaxy looks like, in a probabilistic sense. Panels (a) of Figures 4, 6, 8, and 10 plot the slope of the density profile, $`\mathrm{ind}(r)`$, vs. the derived $`h`$. The ‘real’ value of $`h`$ is 0.025.
In all the cases the slope of the density profile, $`\mathrm{ind}(r)`$, in the vicinity of the images correlates with the derived $`h`$ value, though the degree of correlation and its slope are not universal. Qualitatively, the reason for the correlation is easily understood. A relatively flat galaxy density profile, i.e., one where $`|\mathrm{ind}(r)|`$ is small, translates into a flat gravitational contribution to the arrival time surface, and ‘fills’ the well of the geometrical time delay contribution evenly, resulting in small fluctuations in the amplitude of the total arrival time surface. Thus the predicted time delays between images will be small, and to keep the observed time delays fixed the derived $`h`$ has to be small as well. Panels (b) of Figures 4, 6, 8, and 10 show the derived $`h`$ probability distribution. These distributions look different for all galaxies, because the galaxy morphologies are different. Since the four are independent probability distributions, each based on its own galaxy, the overall distribution is just their product, see Figure 11. The solid histogram is the product of the four distributions presented in panels (b) of Figures 4, 6, 8, and 10. The dashed histogram is similar, but results from Person B excluding what appeared to be the best constrained galaxy (#3), and the dotted histogram represents the case where inversion symmetry was not applied to any of the systems. All three resultant distributions recover $`h`$ fairly well, with 90% of the models contained within 20% of the true $`h`$. However, the distributions are not the same; the most probable values differ by 10%. This illustrates how a relatively minor feature in the modeling constraints, namely the exclusion or inclusion of inversion symmetry, can make a considerable difference in the estimated $`h`$ value when the goal is to achieve a precision of 10%. Based on this observation we conclude that the assumed galaxy shape in parametric reconstructions plays a major role in determining the outcome of the $`H_0`$ determination. How robust are the results to changes in other modeling assumptions? Changing the pixel size by a factor of 1.5 and relaxing the mass-gradient angle constraint (item 3 in Section 4) do not change our results considerably.

## 6 Application to real systems

### 6.1 PG1115

Figure 12 shows the results of the reconstruction for PG1115. Since the main lensing galaxy has no close companions and its light profile is smooth we have included inversion symmetry as one of the modeling constraints. The average of 100 arrival time surfaces is shown in Figure 12(a); Figure 12(b) shows the corresponding caustics and critical lines. The latter are not as smooth as the former because the locations of caustics and critical lines are derived using the gradients of the arrival time surface, which are always noisier than the original function. Panels (c) and (d) plot the quantity $`\boldsymbol{\theta }\cdot \boldsymbol{\theta }_\mathrm{s}+\sum _n\kappa _n\psi _n(\boldsymbol{\theta })`$ and the total arrival time surface, respectively. The plot of the modified gravitational potential, (c), illustrates the effect of external shear, which is due to a galaxy group to the lower right of the main galaxy. Because $`\mathrm{ind}(r)`$ has been measured for the main lensing galaxy in PG1115, the relation between profile slope and derived $`H_0`$, Figure 13(a), can be used to derive an upper limit on $`H_0`$. Impey et al. (1998) fit the galaxy light with a de Vaucouleurs profile of effective radius $`r_e=0.59^{\prime \prime }`$.
At the location of the images, about $`1.3^{\prime \prime }`$ from the galaxy center, the double logarithmic density slope is $`\mathrm{ind}(r)=-2.3`$. Assuming that the mass profile can only be shallower than the light profile, and consulting Figure 13(a), we place an upper limit on $`H_0`$ of 75 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. If the true mass density profile slope is isothermal the corresponding $`H_0`$ is 30 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. A low value of $`H_0`$ was also obtained by parametric models that assumed an isothermal profile for the galaxy (Schechter et al. 1997). In the blind test, Section 5, we assumed that all time delays are known precisely, which is not currently the case for any of the systems except Q0957. What effect does an error in the time delay determination have on the derived $`H_0`$? Figure 13(b) shows two distributions derived using two different $`\mathrm{\Delta }t`$ determinations based on the same lightcurves. There is a 20% difference in the most probable value of $`H_0`$ between the two histograms, but overall they are not very different. Both distributions are very broad; 90% of the models span the range between 30 and 75 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. Figure 14 shows a dense version of the arrival time surface. The regions of the plot where the contours are sparse are the flattest, i.e., most ‘stationary’ regions of the lens plane. This is where one would expect to find images of sources placed close to the main source. For example, if the point-like QSO is surrounded by a host galaxy, the image of that galaxy will be well delineated by these ‘empty’ regions. In fact, the observed optical ring in the case of PG1115 is well reproduced by the ring in Figure 14.

### 6.2 B1608

The light distribution of the lensing system is rather messy, possibly representing a merging/interacting galaxy pair; therefore inversion symmetry was not used in the following reconstructions. Figures 15 and 16 are the same as Figures 12 and 13, but for B1608. The range 50 to 100 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ in Figure 16(b) encompasses about 90% of the reconstructions.

### 6.3 Combined p(h) plot

Just like in the case of the blind test, we now multiply the probability distributions from PG1115 and B1608 to get the combined distribution, Figure 17. 90% of all points lie within the range 43–79 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, while the median of the distribution is 61 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. Note that the errorbars obtained using our method are substantially larger than what is usually quoted in other studies. We ascribe this increase to the more systematic sampling of the whole image-defined model space, unrestricted by the confines of parametric models.

## 7 Discussion and Conclusions

Multiply-imaged QSO systems provide us with an elegant way of measuring $`H_0`$, and a lot of observational and modeling effort has been invested in this enterprise. As the quality of the observational data improves, most of the uncertainty in $`H_0`$ is contributed by the mass distribution in the lens. How to treat this problem is a matter of some debate. Should one use a single, physically motivated mass model, or should one approach the problem with no preconceptions about the galaxy form? In general, ad hoc restrictions on the allowed mass models translate into a too optimistic and probably biased estimated distribution of $`H_0`$’s. To avoid this trap one has to allow as much freedom for the lens models as possible.
On the other hand, to yield a useful estimate of $`H_0`$ one has to restrict the amount of freedom allowed for the models using physically motivated criteria. Ideally one wants to balance these two opposing tendencies and impose just the right quantity and quality of model constraints. Based on our experience from the present work we conclude that parametric approaches, or any other approach that severely restricts the freedom of the galaxy-lens, over-constrain their models and thus end up with unrealistically small errorbars and biased $`H_0`$’s. As a result, different models of the same systems can yield discrepant results. For example, Romanowsky & Kochanek (1998) use dynamical methods to model the galaxy in Q0957, and further constrain the galaxy to be similar to nearby ellipticals; they quote $`61_{-15}^{+13}`$ at the $`2\sigma `$ level. Bernstein & Fischer (1999) analyzed the same system but used a range of astrophysically reasonable parametric models. Their estimate, $`77_{-24}^{+29}`$, also at the $`2\sigma `$ level, does not agree with that of Romanowsky & Kochanek. Our approach is different in that it does not presuppose a galaxy shape, but instead allows us to impose as many or as few constraints as is deemed appropriate. The most unrestricted models would be constrained solely by what we call the primary constraints, i.e., the image observables. By definition these would yield unbiased estimates of $`H_0`$ based on lensing data alone. We chose to go somewhat beyond this and apply what we call secondary constraints, which describe realistic galaxies in the most general terms. The derived $`H_0`$ distributions are narrower; the price we pay is a small amount of bias. It can be argued that we are still too generous with our mass models, i.e. other galaxy characteristics can be safely assumed, and hence tighter constraints can be applied to the models without sacrificing the unbiased nature of the results. This avenue can be taken in future work if additional constraints become available. A potential source of additional modeling constraints are optical rings, lensed images of the QSO host galaxy, which are seen in some cases, for example, in PG1115 and B0218. The orientations and elongations of individual images of the QSO host galaxy can be used as linear inequality constraints to narrow down the range of possible galaxy mass distributions. If two or more sources with known redshifts were lensed by the same foreground galaxy, these could be used to break the mass sheet degeneracy and thus further constrain the galaxy. However, in practice cases of two sources at different redshifts lensed by the same galaxy are expected to be very rare because of the small galaxy cross-sections. Probably the most promising potential constraint is based on the relation between the slope of the projected density profile around the images and the derived $`H_0`$. If the slope can be estimated by means other than lensing, or at least a limit placed on its value as we did in Section 6.1 using the observed slope of the light distribution, then $`H_0`$ can be constrained much better compared to what is currently possible. With the two systems used in the present work, PG1115+080 and B1608+656, and implementing primary constraints of image properties and secondary constraints describing a few general properties of lensing galaxies, we conclude that $`H_0`$ is between 43 and 79 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ at the 90% confidence level, with the best estimate being 61 $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$.
# Off-Diagonal Geometric Phases

## Abstract

We investigate the adiabatic evolution of a set of non-degenerate eigenstates of a parameterized Hamiltonian. Their relative phase change can be related to geometric measurable quantities that extend the familiar concept of Berry phase to the evolution of more than one state. We present several physical systems where these concepts can be applied, including an experiment on microwave cavities for which off-diagonal phases can be determined from published data.

Consider the adiabatic evolution of a set of nondegenerate normalized eigenstates $`|\psi _i(𝐬)\rangle `$ of a parameterized Hamiltonian $`H(𝐬)`$. The idea that, with a suitable definition, the phase of the scalar product $`\langle \psi _j(𝐬_1)|\psi _j(𝐬_2)\rangle `$ contains a geometric, measurable contribution dates back to Pancharatnam’s pioneering work . In particular, when $`𝐬_1=𝐬_2`$ and the state $`|\psi _j(𝐬)\rangle `$ is transported adiabatically along a closed loop, the existence of a nontrivial phase factor was discovered and put on a firm basis by Berry . Since then, considerable work has been devoted to interpretation , generalization , and experimental determination of these geometric phase factors. Surprisingly, for $`𝐬_1\ne 𝐬_2`$, the phase relation of $`\langle \psi _j(𝐬_1)|\psi _k(𝐬_2)\rangle `$ between two different eigenstates has not been equally well investigated so far . This is even more surprising if one considers that, for some pair of points $`𝐬_1`$ and $`𝐬_2`$, it may occur that $`|\psi _k(𝐬_2)\rangle =e^{i\alpha }|\psi _j(𝐬_1)\rangle `$ ($`k\ne j`$). This implies that both scalar products $`\langle \psi _j(𝐬_1)|\psi _j(𝐬_2)\rangle `$ and $`\langle \psi _k(𝐬_1)|\psi _k(𝐬_2)\rangle `$ vanish, and, as is well known, the usual Pancharatnam-Berry phase on any path connecting $`𝐬_1`$ to $`𝐬_2`$ is undefined for the states $`k`$ and $`j`$. The only phase information left is thus contained in the cross scalar products $`\langle \psi _j(𝐬_1)|\psi _k(𝐬_2)\rangle `$. In this Letter we determine the measurable and geometric phase factors associated to the off-diagonal matrix elements $`\langle \psi _j(𝐬_1)|\psi _k(𝐬_2)\rangle `$ of the operator describing the evolution along a general open path in the parameter space that connects $`𝐬_1`$ to $`𝐬_2`$. We find a set of independent off-diagonal phase factors that exhaust the geometrical phase information carried by the basis of eigenstates along the path. Analogously to the familiar Berry phase, the values of these phases depend on the presence of degeneracies of the energy levels in the parameter space. The formalism is then applied to an experiment on quantum billiards , where the off-diagonal phase factors can be extracted directly from published experimental data. In order to introduce the off-diagonal geometric phases, it is convenient to consider the usual definition of the geometric phase of one normalized state $`|\psi _j(𝐬)\rangle `$ in terms of parallel transport . Given any path $`\mathrm{\Gamma }`$ that joins $`𝐬_1`$ to $`𝐬_2`$, the state parallel-transported along it is defined by:

$$|\psi _j^{}(𝐬_2)\rangle =\mathrm{exp}\left\{-\int _\mathrm{\Gamma }d𝐬\cdot \langle \psi _j(𝐬)|\nabla _𝐬\psi _j(𝐬)\rangle \right\}|\psi _j(𝐬_2)\rangle .$$ (1)

This fixes the phase of the state along the path in the unique way satisfying $`\langle \psi _j^{}(𝐬)|\psi _j^{}(𝐬+\delta )\rangle =1+O(\delta ^2)`$ for $`\delta \to 0`$, i.e. having maximal projection on the “previous” state.
The geometric phase factor is then defined simply in terms of the scalar product along the parallel evolution:

$$\gamma _j^\mathrm{\Gamma }\equiv \mathrm{\Phi }\left(U_{jj}^\mathrm{\Gamma }\right)=\mathrm{\Phi }\left(\langle \psi _j^{}(𝐬_1)|\psi _j^{}(𝐬_2)\rangle \right),$$ (2)

where $`\mathrm{\Phi }(z)=z/|z|`$ for complex $`z\ne 0`$. $`\gamma _j^\mathrm{\Gamma }`$ is univocally determined by the sequence $`\mathrm{\Gamma }_j`$ of states $`|\psi _j(𝐬)\rangle `$, with $`𝐬`$ varying along $`\mathrm{\Gamma }`$. Indeed, $`\gamma _j^\mathrm{\Gamma }`$ is unchanged by a local “gauge” transformation:

$$|\psi _j(𝐬)\rangle \to |\psi _j(𝐬)\rangle \mathrm{exp}[i\phi _j(𝐬)]$$ (3)

and by any reparametrization of the sequence of states $`\mathrm{\Gamma }_j`$. It is thus a geometric, measurable quantity. In a similar way, we define the phase factors associated to the off-diagonal elements of the parallel-evolution operator $`U^\mathrm{\Gamma }`$:

$$\sigma _{jk}^\mathrm{\Gamma }\equiv \mathrm{\Phi }\left(U_{jk}^\mathrm{\Gamma }\right)=\mathrm{\Phi }\left(\langle \psi _j^{}(𝐬_1)|\psi _k^{}(𝐬_2)\rangle \right).$$ (4)

Like $`\gamma _j^\mathrm{\Gamma }`$, the phase factor $`\sigma _{jk}^\mathrm{\Gamma }`$ is independent of the path parametrization. However, $`\sigma _{jk}^\mathrm{\Gamma }`$ depends on the relative phase of the two vectors $`|\psi _j\rangle `$ and $`|\psi _k\rangle `$ at $`𝐬_1`$. Indeed, under the gauge transformation (3), $`\sigma _{jk}^\mathrm{\Gamma }`$ transforms as follows:

$$\sigma _{jk}^\mathrm{\Gamma }\to \sigma _{jk}^\mathrm{\Gamma }\mathrm{exp}i[\phi _k(𝐬_1)-\phi _j(𝐬_1)].$$ (5)

This shows that $`\sigma _{jk}^\mathrm{\Gamma }`$ is arbitrary, thus non-measurable. In order to define a gauge-invariant quantity, we combine two $`\sigma `$’s in the following product:

$$\gamma _{jk}^\mathrm{\Gamma }=\sigma _{jk}^\mathrm{\Gamma }\sigma _{kj}^\mathrm{\Gamma }.$$ (6)

This new phase factor $`\gamma _{jk}^\mathrm{\Gamma }`$ is determined uniquely by the trajectories $`\mathrm{\Gamma }_j`$ and $`\mathrm{\Gamma }_k`$ of $`|\psi _j\rangle `$ and $`|\psi _k\rangle `$ in the Hilbert space. The finding of the measurable geometric quantity $`\gamma _{jk}^\mathrm{\Gamma }`$ is the central result of this Letter. A simple geometric interpretation for $`\gamma _{jk}^\mathrm{\Gamma }`$ can be obtained in analogy with that for the Pancharatnam phase. Consider the path of state $`j`$ in the space of rays (where two states differing only by a complex factor are identified). If $`|\psi _j(𝐬_1)\rangle `$ is not orthogonal to $`|\psi _j(𝐬_2)\rangle `$, there exists a unique geodesic path $`G_{jj}`$ going from $`|\psi _j(𝐬_2)\rangle `$ to $`|\psi _j(𝐬_1)\rangle `$, along which the geometric phase factor is unity. Then, trivially, the open-path geometric factor $`\gamma _j^\mathrm{\Gamma }`$ equals the phase factor on the circuit composed by $`\mathrm{\Gamma }_j`$ and $`G_{jj}`$ (see Fig. 1). Once reduced to a closed path, using Stokes’ theorem, one can write $`\gamma _j^\mathrm{\Gamma }`$ in terms of the integral of Berry’s local-gauge-invariant 2-form on any surface $`S_j`$ bounded by $`\mathrm{\Gamma }_j+G_{jj}`$ . Consider now two states $`j`$ and $`k`$ evolving along $`\mathrm{\Gamma }_j`$ and $`\mathrm{\Gamma }_k`$ in the space of rays. We generate all possible oriented loops by connecting the extremal points with geodesics. As Fig. 1 shows, only the three loops $`\mathrm{\Gamma }_j+G_{jj}`$, $`\mathrm{\Gamma }_k+G_{kk}`$ and $`\mathrm{\Gamma }_j+G_{jk}+\mathrm{\Gamma }_k+G_{kj}`$ can be generated.
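These quantities are also easy to evaluate numerically: a discretised version of the parallel transport of Eq. (1) fixes each eigenvector's phase by maximal overlap with the previous step. The sketch below is illustrative (plain NumPy, with a spin-1/2 check anticipating the example discussed next), not a reproduction of any published code:

```python
import numpy as np

def parallel_phases(H_list):
    """Discretised parallel transport (Eq. 1) along a list of Hamiltonians H(s):
    at each step the eigenvector phases are fixed by maximal overlap with the
    previous step; U_jk = <psi'_j(s_1)|psi'_k(s_2)> then yields the sigma's."""
    _, V = np.linalg.eigh(H_list[0])
    V0 = V.copy()
    for H in H_list[1:]:
        _, W = np.linalg.eigh(H)
        for k in range(W.shape[1]):
            ov = np.vdot(V[:, k], W[:, k])
            W[:, k] *= np.conj(ov) / abs(ov)   # remove the phase of the overlap
        V = W
    U = V0.conj().T @ V                        # overlap matrix of transported states
    sigma = U / np.abs(U)                      # Phi(U_jk); ill-defined where U_jk ~ 0
    return U, sigma

# spin-1/2 in a field rotating by pi in the xz plane: the eigenstates swap, and
# gamma_12 = sigma_12 * sigma_21 should equal -1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Hs = [-(np.cos(t) * sz + np.sin(t) * sx) for t in np.linspace(0, np.pi, 400)]
U, sigma = parallel_phases(Hs)
print(np.round(sigma[0, 1] * sigma[1, 0]).real)   # -> -1.0
```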
The first two loops give the usual phase factors $`\gamma _j^\mathrm{\Gamma }`$ and $`\gamma _k^\mathrm{\Gamma }`$, while the third one corresponds to $`\gamma _{jk}^\mathrm{\Gamma }`$. In this way, $`\gamma _{jk}^\mathrm{\Gamma }`$ can be calculated, in analogy to $`\gamma _j^\mathrm{\Gamma }`$, as the integral of Berry’s 2-form over a surface $`S_{jk}`$ bounded by this 4-legs loop. The complementarity of $`\gamma _{jk}^\mathrm{\Gamma }`$ and $`\gamma _j^\mathrm{\Gamma }`$ is evident from this geometric picture. In complete analogy with the usual Berry phase, this expression in terms of a surface integral also proves the sensitivity of $`\gamma _{jk}^\mathrm{\Gamma }`$ to the presence of degeneracies of the two energy levels $`j`$ and $`k`$ in the parametric Hamiltonian associated to the above paths. However, given the open path $`\mathrm{\Gamma }`$ and the energy levels involved, there is no general rule to determine a closed loop in parameter space entangled with a degenerate submanifold. Whenever this loop can be found, $`\gamma _{jk}^\mathrm{\Gamma }`$ is a direct probe of the presence and position of degeneracies. The simplest system to illustrate the concept of off-diagonal geometric phase is a spin-$`\frac{1}{2}`$ aligned to a slowly rotating magnetic field $`𝐁`$ in (say) the $`xz`$ plane. The polar angle $`\theta `$ of $`𝐁`$ parameterizes a circular path in the 2-dimensional space of the magnetic fields. For any value of $`\theta `$, the columns of the matrix

$$U(\theta )=\left(\begin{array}{cc}\mathrm{cos}\frac{\theta }{2}& -\mathrm{sin}\frac{\theta }{2}\\ \mathrm{sin}\frac{\theta }{2}& \mathrm{cos}\frac{\theta }{2}\end{array}\right)$$ (7)

represent the parallel-transported eigenvectors $`|\psi _j(\theta )\rangle `$ on the initial basis $`|\psi _1(0)\rangle =|\uparrow \rangle `$, $`|\psi _2(0)\rangle =|\downarrow \rangle `$. Thus, the familiar Pancharatnam-Berry phase factor of the state $`|\psi _j(\theta )\rangle `$ evolving from $`\theta =0`$ to $`\theta _\mathrm{f}`$ is given by the diagonal matrix element $`\gamma _j(\theta _\mathrm{f})=\mathrm{\Phi }\left(U_{jj}(\theta _\mathrm{f})\right)\equiv \mathrm{\Phi }\left(\langle \psi _j(0)|\psi _j(\theta _\mathrm{f})\rangle \right)`$. The single off-diagonal term is $`\gamma _{12}=\mathrm{\Phi }(-\mathrm{sin}\theta _\mathrm{f}/2)\mathrm{\Phi }(\mathrm{sin}\theta _\mathrm{f}/2)=-1`$ for any $`\theta _\mathrm{f}\ne 0`$, $`2\pi `$. For generic $`\theta `$, $`\gamma _1`$, $`\gamma _2`$ and $`\gamma _{12}`$ are all equally important. For $`\theta =\pi `$, $`\gamma _{12}`$ carries all the geometric phase content of the eigenstates, while $`\gamma _1`$ and $`\gamma _2`$ are undefined. At $`\theta =2\pi `$ the roles are exchanged. In this sense, the off-diagonal phase factor $`\gamma _{12}`$ constitutes the counterpart of $`\gamma _j`$, when the latter is undefined. Interference experiments have measured the noncyclic Pancharatnam-Berry phases $`\gamma _j`$ in the spin-$`\frac{1}{2}`$ system. In a similar way, one can envisage a spin-rotation experiment to measure by interference $`\sigma _{12}`$ and $`\sigma _{21}`$ for an arbitrary fixed gauge at the starting point. The dependence on the gauge chosen cancels out in the product $`\gamma _{12}`$, which, for this simple system, must equal $`-1`$ for any rotation angle $`\theta \ne 2\pi `$. Essentially any experiment sensitive to open-path diagonal geometric phases can be generalized to observe off-diagonal phases. In systems of larger dimensionality, several off-diagonal phase factors can be defined, and they may assume different values on different paths.
The definition (6) of the off-diagonal phase factors $`\gamma ^\mathrm{\Gamma }`$ can be generalized to the simultaneous evolution of more than two orthonormal states. Consider for example $`n`$ orthonormal eigenstates $`|\psi _j(𝐬)\rangle `$ (ordered by increasing energy) of a parameterized Hermitian Hamiltonian matrix $`H(𝐬)`$, representing a physical system. Observing the effect (5) of a gauge change on the $`\sigma _{jk}^\mathrm{\Gamma }`$ phase factors, we note that any cyclic product of $`\sigma `$’s is gauge-invariant. It is then natural to generalize Eq. (6) by defining

$$\gamma _{j_1j_2j_3\cdots j_l}^{(l)\mathrm{\Gamma }}=\sigma _{j_1j_2}^\mathrm{\Gamma }\sigma _{j_2j_3}^\mathrm{\Gamma }\cdots \sigma _{j_{l-1}j_l}^\mathrm{\Gamma }\sigma _{j_lj_1}^\mathrm{\Gamma }.$$ (8)

For $`l=1`$, Eq. (8) reduces to the familiar definition (2) of the Pancharatnam-Berry diagonal phase factor $`\gamma _j^\mathrm{\Gamma }=\gamma _j^{(1)\mathrm{\Gamma }}=\sigma _{jj}^\mathrm{\Gamma }`$. The 2-index $`\gamma _{jk}^{(2)\mathrm{\Gamma }}`$ phase factors coincide with those introduced by Eq. (6). Larger $`l`$ describe more complex phase relations among off-diagonal components of the eigenstates at the endpoints of $`\mathrm{\Gamma }`$. The same geometrical construction of a closed path done for $`\gamma ^{(2)}`$ extends to $`\gamma ^{(l)}`$ with $`l>2`$. We note that any cyclic permutation of all the indexes $`j_1j_2j_3\cdots j_l`$ is immaterial. Moreover, if one index is repeated, the associated $`\gamma ^{(l)}`$ can be decomposed into the product of $`\gamma ^{(l_1)}\gamma ^{(l_2)}`$’s with $`l_1+l_2=l`$. We can thus restrict ourselves to the $`\gamma ^{(l)}`$’s with no repeated indexes, which means in particular $`l\le n`$. One can readily verify that the number of $`\gamma ^{(l)}`$’s left grows with $`n`$ faster than $`n^2`$. Since $`n^2`$ is the number of the constituent $`\sigma _{jk}`$’s, not all the $`\gamma ^{(l)}`$’s can be independent. We shall now find a complete set of independent $`\gamma ^{(l)}`$’s, under the condition that $`U_{jk}^\mathrm{\Gamma }\ne 0`$ for all $`j`$ and $`k`$. Clearly, the $`n`$ Pancharatnam-Berry diagonal phase factors $`\gamma _j^{(1)}`$ are all independent, since any diagonal $`\sigma _{jj}`$ enters only $`\gamma _j^{(1)}`$. On the other hand, the off-diagonal $`\gamma ^{(l)}`$’s are interrelated by the following exact equalities \[they can be verified by substituting the definition (8) explicitly\]:

$$\gamma _{i\{j\}k\{m\}}^{(l)}=\gamma _{i\{j\}k}^{(l^{\prime })}\,\gamma _{k\{m\}i}^{(l^{\prime \prime })}\,\gamma _{ik}^{(2)\ast }\qquad (l\ge 4)$$ (9)

$$\gamma _{jkm}^{(3)}\gamma _{jmk}^{(3)}=\gamma _{jk}^{(2)}\gamma _{km}^{(2)}\gamma _{jm}^{(2)}$$ (10)

$$\gamma _{ijm}^{(3)}\,\gamma _{mj}^{(2)\ast }\,\gamma _{jkm}^{(3)}=\gamma _{ijk}^{(3)}\,\gamma _{ki}^{(2)\ast }\,\gamma _{ikm}^{(3)}.$$ (11)

In Eq. (9), $`\{j\}`$ indicates a set of one or more indexes, and $`l^{\prime }`$, $`l^{\prime \prime }`$ ($`<l`$) count the indexes in the corresponding $`\gamma `$. Combining relations (9–11), any $`\gamma ^{(l)}`$ may be expressed in terms of three categories: the $`n`$ diagonal phases $`\gamma _j^{(1)}`$, the $`n(n-1)/2`$ quadratic $`\gamma _{j<k}^{(2)}`$’s, and the $`(n-1)(n-2)/2`$ cubic $`\gamma _{1<j<k}^{(3)}`$’s. These $`n^2-n+1`$ factors are indeed functionally independent combinations of the $`\sigma `$’s: we verified that the Jacobian determinant $`\left|\partial \gamma _{\{j\}}/\partial \sigma _{km}\right|`$ is nonzero. The number of independent phases can be easily understood: it amounts to the $`n^2`$ phases of $`U_{jk}^\mathrm{\Gamma }`$ minus the arbitrary $`n-1`$ relative phases among the $`n`$ eigenstates at a given point $`𝐬`$.
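These relations are straightforward to check numerically; the sketch below draws a random unitary overlap matrix and verifies Eq. (9) (in its shortest instance, with single-index sets) and Eq. (10):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)             # random 4x4 unitary; entries generically nonzero
sigma = U / np.abs(U)

def gamma(*idx):
    """gamma^{(l)} of Eq. (8): cyclic product of sigma's over the index string."""
    return np.prod([sigma[idx[a], idx[(a + 1) % len(idx)]] for a in range(len(idx))])

i, j, k, m = 0, 1, 2, 3
print(np.allclose(gamma(i, j, k, m),                  # Eq. (9), l = 4
                  gamma(i, j, k) * gamma(k, m, i) * np.conj(gamma(i, k))))
print(np.allclose(gamma(j, k, m) * gamma(j, m, k),    # Eq. (10)
                  gamma(j, k) * gamma(k, m) * gamma(j, m)))
```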
The number of independent phases can be easily understood: it amounts to the $`n^2`$ phases of $`U_{jk}^\mathrm{\Gamma }`$ minus the $`n-1`$ arbitrary relative phases among the $`n`$ eigenstates at a given point $`𝐬`$. We restrict now to the particular case of a path joining a pair of points $`𝐬_1^P`$ and $`𝐬_2^P`$ such that the $`n`$ eigenstates at the final point are a permutation $`P`$ of the initial eigenstates, i.e. $$H(𝐬_1^P)=\sum _jE_j|\psi _j\rangle \langle \psi _j|,\qquad H(𝐬_2^P)=\sum _jE_j^{\prime }|\psi _{P_j}\rangle \langle \psi _{P_j}|,$$ (12) where $`E_j`$ and $`E_j^{\prime }`$ are in increasing order as usual. The only well-defined $`\sigma ^\mathrm{\Gamma }`$'s are the $`n`$ phase factors $`\sigma _{jP_j}^\mathrm{\Gamma }`$. When the permutation is nontrivial ($`P_j\ne j`$) the familiar Pancharatnam-Berry phase factor associated with state $`j`$ is undefined. For this special case the only well-defined geometric phases are the off-diagonal ones. One can classify them according to standard group theory. Any permutation $`P`$ can be decomposed uniquely into $`c`$ cycles of lengths $`l_1,l_2,\mathrm{\ldots },l_c`$. To each cycle $`i`$, it is possible to associate one $`\gamma _{\{j\}}^{(l_i)\mathrm{\Gamma }}`$, the $`l_i`$ indexes $`\{j\}`$ following the corresponding cycle. These phase factors involve only nonzero $`U_{jk}^\mathrm{\Gamma }`$ and are thus well defined. In contrast, all other $`\gamma ^{(l)\mathrm{\Gamma }}`$'s are undefined. In Table I, for each permutation $`P`$ of the eigenstates we report the corresponding well-defined $`\gamma ^{(l)}`$'s for $`n\le 4`$. For these paths permuting the eigenvectors, the determinant $`\left|U^\mathrm{\Gamma }\right|`$ of the overlap matrix is related to the product of the $`\sigma `$'s. The equality $`\left|U^\mathrm{\Gamma }\right|=1`$ becomes therefore $$\prod _{j=1}^n\sigma _{jP_j}^\mathrm{\Gamma }=(-1)^P.$$ (13) The third column of Table I summarizes this condition in terms of the $`\gamma ^{(l)}`$'s. In the special case of a real symmetric Hamiltonian $`H(𝐬)`$, all $`\sigma `$'s, and thus all $`\gamma ^{(l)}`$'s, equal either $`+1`$ or $`-1`$. For this simple but relevant situation, the last column of Table I reports the number of combinations of values that the $`\gamma ^{(l)}`$'s may take, as allowed by the condition (13). The above arguments on the permutational symmetry remain valid even if Eq. (12) is only approximate, provided that $`|U_{j,P_j}^\mathrm{\Gamma }|\gg n\,\mathrm{max}_{(k\ne P_j)}|U_{jk}^\mathrm{\Gamma }|`$ for all $`j`$. This extends the interest of the permutational case to a finite domain of parameter space around the point where Eq. (12) holds exactly or, more generally, to any region where the inequality on $`U_{jk}^\mathrm{\Gamma }`$ holds. For example, an approximate permutation occurs when the energy levels of a Hamiltonian $`H(𝐬)`$ undergo a sequence of sharp avoided crossings along the path. At each avoided crossing, the two eigenstates involved exchange, to a good approximation. As a result, there exist sizable regions between two avoided crossings where the eigenvectors are an approximate permutation of the starting ones. Probably the simplest example of a nontrivial permutation of the Hamiltonian eigenstates occurs when the relation $$H(𝐬_1)=-H(𝐬_2)$$ (14) holds at the ends of the path. This symmetry is verified exactly by the spin-$`\frac{1}{2}`$ system, where it determines the swap of the eigenstates between $`\theta =0`$ and $`\theta =\pi `$.
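This swap provides the simplest numerical check of Eq. (13); a minimal sketch, again assuming $`\sigma _{jk}=\mathrm{\Phi }(U_{jk})`$ in the gauge of Eq. (7):

```python
import numpy as np

# Eq. (13) for the spin-1/2 swap at theta = pi: P is a transposition,
# so (-1)^P = -1, and indeed sigma_12 * sigma_21
#   = Phi(-sin(pi/2)) * Phi(sin(pi/2)) = -1.
theta = np.pi
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
sigma_12 = U[0, 1] / abs(U[0, 1])
sigma_21 = U[1, 0] / abs(U[1, 0])
print(sigma_12 * sigma_21)   # -1.0 = (-1)^P
```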
Relation (14) also holds, approximately, in very common situations. Suppose, for example, that a point, say $`𝐬=0`$, locates an $`n`$-fold degeneracy, and consider the perturbative expansion around it: $$H(𝐬)=𝐬\cdot 𝐇^{(1)}+\mathrm{\cdots }.$$ (15) [$`𝐇^{(1)}`$ is a vector of Hermitian numerical matrices.] In a sufficiently small neighborhood of the degeneracy, where the linear term accounts for the main contribution to the energy shifts, pairs of opposite points $`(𝐬_1,𝐬_2=-𝐬_1)`$ satisfy the relation (14). The permutation of the eigenstates associated with (14) is composed of $`n/2`$ 2-cycles for even $`n`$, or of $`(n-1)/2`$ 2-cycles plus one 1-cycle for odd $`n`$: the corresponding $`\gamma `$'s are marked by stars in Table I. In the final part of this Letter, we examine the deformed microwave resonator experiment of Ref. . In a recent work the diagonal, closed-path Berry phases were calculated for that system. Here we analyze the experiment of Ref. as a transparent example of how off-diagonal $`\gamma _{jk}^{(2)}`$'s can be measured for open paths. For these systems, $`𝐬=(s\mathrm{cos}\theta ,s\mathrm{sin}\theta )`$ parameterizes the displacement of one corner of the resonator away from the position of a conical intersection of the energy levels. Lauber et al. investigate the Berry phase of these nearly degenerate states, when the distortion is driven through a loop $`\theta =0`$ to $`2\pi `$ around the degenerate point. The distortion path is traced in small steps in $`\theta `$, following the real eigenfunctions adiabatically. In Fig. 2 we report the initial ($`\theta =0`$), half-way ($`\theta =\pi `$) and final ($`\theta =2\pi `$) parallel-transported eigenfunctions from the original pictures of Ref. . The first case considered is that of a triangular cavity deformed around a twofold degeneracy: for small distortions, the system behaves similarly to a spin $`\frac{1}{2}`$. In particular, the Berry phases $`\gamma _j^{(1)}`$ at the end of the loop both equal $`-1`$, as expected for such a situation (cf. in Fig. 2 the recurrence of the pattern with changed sign at $`\theta =0`$ and $`2\pi `$). Due to the well-satisfied approximate symmetry (14) at half path ($`\theta =\pi `$), the diagonal Berry phases are undefined there, but it is instead possible to determine the experimental value of $`\gamma _{12}^{(2)}`$ for this path. From inspection of Fig. 2 we determine $`\sigma _{12}=1`$, $`\sigma _{21}=-1`$. This is consistent with the only possible value $`\gamma _{12}^{(2)}=-1`$ allowed in this spin-$`\frac{1}{2}`$–like case (see Table I). The same holds for the path going from $`\theta =\pi `$ to $`2\pi `$. The case of the rectangular resonator is more interesting. Here, three states intersect conically at $`𝐬=0`$. The three Berry phases $`\gamma _j^{(1)}`$ at the end of the loop ($`-1`$, $`+1`$ and $`-1`$) are compatible with the determinant requirement of Table I. Figure 2 shows that empirically also this system satisfies the symmetry relation $`H(\pi )=-H(0)`$ at mid loop. Thus, for the path $`\theta =0`$ to $`\pi `$ the only well-defined Pancharatnam-Berry phase is that of the central state, $`\gamma _2^{(1)}=1`$. The upper and lower states exchange, giving $`\sigma _{13}=1`$, $`\sigma _{31}=-1`$, thus $`\gamma _{13}^{(2)}=-1`$. This is one of the two combinations of values allowed by the determinant rule $`\gamma _{13}\gamma _2=-1`$ of Table I. In conclusion, we have identified novel off-diagonal geometric phase factors, generalizing the (diagonal) Berry phase.
The two sets of diagonal and off-diagonal geometric phases together exhaust the number of independent observable phase relations among $`n`$ orthogonal states evolved along a path. We have shown that, in many common situations, the off-diagonal factors carry the relevant geometric phase information about the basis of eigenstates. We thank Prof. D. Dubbers, Dr. F. Faure, and Dr. A. F. Morpurgo for useful discussions.
# Obscuration model of Variability in AGN ## Introduction The variability of radio-quiet AGN has been established since the early EXOSAT observations. However, the nature of this variability, observed in the optical, UV and X-ray bands, is not clear. The emission of radiation is caused by accretion of surrounding gas onto a central supermassive black hole. The observed variability may therefore be directly related to a variable rate of energy dissipation in the accretion flow. However, it is also possible that the observed variability does not represent any significant changes in the flow. Such an 'illusion of variability' may be created if we do not have a full direct view of the nucleus. We explore this possibility in some detail. Clumpy accretion flow has been suggested by various authors in the physical context of gas thermal instabilities or strong magnetic fields (e.g. celotti92; krolik98; torricelli98). Here we follow a specific accretion flow pattern described by collin96. We assume that the cold disk flow is disrupted at a distance of 10–100 $`R_{\mathrm{Schw}}`$ from the black hole. The resulting clumps of cool material are large and optically very thick to electron scattering, and they become isotropically distributed around the central black hole. Hot plasma responsible for the hard X-ray emission forms still closer to the black hole, perhaps due to cloud collisions. We do not discuss the dynamics of cloud formation but concentrate on the description of the radiation produced by such a system. We consider the radiative transfer within the clouds and the radiative interaction between the clouds and the hot phase, and we relate the radiation flux and spectra to the variations in the cloud distribution. ## Variability mechanism Within the frame of the cloud scenario, our line of sight to the hot X-ray emitting plasma is partially blocked by the surrounding optically thick clouds. Variations in the cloud distribution lead to two types of phenomena: slower variations due to a systematic change in the total number of clouds, and the fastest variations due to the random rearrangement of the clouds without any change of their total number. We concentrate on this second type of variability. The amplitude of such variability is determined by the number of clouds $`N`$ surrounding the black hole at any given moment and the mean covering factor $`C`$ of the cloud distribution: $`\left(\frac{\delta L_X}{L_X}\right)_{obs}=\frac{C\sqrt{2/N}}{1-C}`$. Such variations do not reflect any deep changes in the hot plasma itself, so they are expected to happen without a change in the hard X-ray slope of the plasma emission. However, the UV variability amplitude caused by the same mechanism, as well as some variations in hard X-rays due to the presence of X-ray reflection, depend in general on cloud properties like the X-ray albedo and the radiative losses through the unilluminated dark sides of the clouds. We use very detailed radiative transfer in X-ray heated optically thick clouds in order to estimate those quantities. We develop a complementary toy model describing the energetics of the entire hot plasma/cloud system, for easy use in estimating the model parameters from the observed variability amplitudes.
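Before turning to the radiative transfer, the amplitude estimate above is easily illustrated numerically (the helper name is ours, for demonstration only):

```python
import numpy as np

# Relative X-ray variability from random rearrangement of N clouds with
# mean covering factor C: (dL_X/L_X)_obs = C*sqrt(2/N)/(1-C), as above.
def x_ray_amplitude(C, N):
    return C * np.sqrt(2.0 / N) / (1.0 - C)

for C in (0.8, 0.9, 0.95):
    for N in (100, 1000):
        print(f"C={C:.2f}, N={N:4d}: dL_X/L_X = {x_ray_amplitude(C, N):.2f}")
```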
## Radiative transfer solutions for mean spectra Two codes are used iteratively in order to compute a mean spectrum emitted by the cloud distribution. TITAN dumont99 is designed to solve the radiative transfer within an optically thick medium, including computations of the ionization state of the gas and its opacities. NOAR abrassart99 is designed to follow the hard X-ray photons using a Monte Carlo method. The result of the numerical computation of a single mean spectrum is shown in Figure 1. The cloud distribution was assumed to be spherical, with covering factor $`C=0.9`$, all clouds being located at a single radius. The hot medium in this computation was replaced by a point-like source of primary emission, with flux normalization fixed through specification of the ionization parameter $`\xi `$. However, the multiple scattering of photons off different clouds was included. We see that the broad band spectrum clearly consists of two basic components, but there are also detailed spectral features in the UV and soft X-ray bands, in addition to the hard X-ray iron $`K_\alpha `$ line. ## Toy model and variability amplitudes In our toy model we replace the radiative transfer computations with an analytical description of the probabilities of the X-ray and UV photon fates. An X-ray photon can be reflected by the bright side of a cloud, can escape from the central region towards an observer, or can be absorbed, providing new UV photons as well as energy for the emission from the dark sides. A UV photon can also escape, can be reflected, or can be upscattered to an X-ray photon by the hot plasma. All those probabilities are determined by four model parameters: the covering factor $`C`$, the probability of Compton upscattering $`\gamma `$, the X-ray albedo $`a`$, and the efficiency of dark side emission $`\beta _d`$. The condition of compensating for the system energy losses with Compton upscattering relates those four quantities to the Compton amplification factor. The variability amplitude depends also on the number of clouds, $`N`$. Such a model allows us to calculate all basic properties of the stationary model, like the observed ratio of the X-ray luminosity to the UV luminosity, the intrinsic ratio of those two quantities as seen by the clouds, the slope of the hard X-ray emission, and the variability amplitudes in the UV and X-ray bands. In particular, we can determine the ratio of the normalized variability amplitudes in the X-ray and UV bands predicted by our model, $`R=\left(\frac{\delta L_{UV}}{L_{UV}}\right)_{obs}/\left(\frac{\delta L_X}{L_X}\right)_{obs}`$. In Figure 2 we show the dependence of this ratio on the covering factor $`C`$ and the efficiency of the dark side energy loss by the clouds. The dependence on other model parameters was reduced by assuming the X-ray albedo $`a=0.5`$, supported by numerical results, and the Compton amplification factor $`A=4`$, which well describes the mean hard X-ray spectral slope. We see that if the clouds are very opaque ($`\beta _d`$ negligible) the normalized amplitude ratio is always equal to 1 within the frame of our model. Significant dark side energy losses reduce the UV amplitude, since they add a constant contribution to the UV flux. ## Discussion The presented model reproduces the large observed variability amplitudes well if the covering factor is close to 1. It also explains why large variability amplitudes are not necessarily accompanied by a change of the slope of the hard X-ray emission coming from the comptonizing hot plasma. In order to check whether the model requires unacceptable values of the parameters, we confront the model with the data in the following way.
We apply our toy model to the observed variability of four Seyfert 1 galaxies extensively monitored in the UV and X-ray bands (see Table 1). The variability amplitudes are taken from goad99; edelson99; nandra98; edelson96; clavel92, and we estimated the mean X-ray to UV luminosity ratio as 1/3 in all objects. We assumed the X-ray albedo $`a=0.5`$ and the Compton amplification factor $`A=4`$. We calculated the remaining model parameters: $`C`$, $`N`$, $`\gamma `$, $`\beta _d`$. All four objects are consistent with the model, having a relative UV amplitude smaller than the relative X-ray amplitude. As expected, the covering factor (determined by the luminosity ratio) is large, and the probability of Compton upscattering is low, either due to a small optical depth of the hot plasma or due to a small radial extension of the hot plasma. The required dark side losses are comparable to the value of 0.20 obtained from the numerical solution of the radiative transfer within a cloud (see Figure 1). Therefore the cloud scenario offers an attractive explanation of the observed variability of AGN if further observations confirm that the slope of the direct Compton component and its high energy cut-off do not vary.
# 1 Introduction It is widely accepted that the non-equilibrium dynamics of field theories has many important but difficult issues yet to be understood. The motivation for considering such theories hardly needs to be stressed: non-equilibrium situations are ubiquitous, from processes in inflation or baryogenesis in the early universe, and transport processes in condensed matter, to the possible states of hadronic matter in heavy ion collisions, such as quark–gluon plasma, disoriented chiral condensates and color superconducting states. These phenomena all involve non-equilibrium dynamics in some essential manner. The particular non-equilibrium problem we study is the behavior of a field theory when various temperature boundary conditions are imposed on the boundaries of the theory. Physical properties such as the temperature profile $`T\left(x\right)`$, the pressure, or the entropy inside the boundaries are determined dynamically. It is a priori not clear whether the system thermalizes when strong thermal gradients are present, and we would like to clarify the situation. It would be preferable to compute the physical properties of the theory within an analytic field theory, yet such an approach seems difficult: One can compute the transport coefficients within linear response theory, since such a computation is performed in equilibrium, yet even then, its region of applicability is unclear and in principle could even be null, as was found in some cases. Within the linear regime, we might try to approach the problem directly using thermofield dynamics, for instance, but it seems difficult to do so without imposing some assumptions on the dynamics of the theory. Beyond the linear regime, it seems fair to say that the problem is very difficult. In this work, we impose various temperature boundary conditions on massless $`\varphi ^4`$ theory in (1+1) and (3+1) dimensions and study the behavior of the theory in the steady state. We make use of numerical methods to compute physical quantities of interest. We shall be interested in such questions as the validity of linear response theory and the region of its applicability, or the possibility of a failure to achieve thermalization. Let us point out the limitations of our current approach: The theory we work with is classical and it is formulated on a lattice. However, apart from this, we make no assumptions on the dynamics of the theory and we compute physical observables from first principles. Also, it should be clear from our approach that our methods are applicable to other field theories as well, except perhaps for the need for more computational time in more complicated problems. We choose the $`\varphi ^4`$ theory since it is a prototypical field theory and it appears in various contexts in many areas of physics. It should also be pointed out that classical approximations to quantum field theories have been studied for some time and, while far from trivial, a basic understanding of the relation of the classical theory to the quantum one at high temperatures does exist. In addition, the classical theory is of interest in its own right, and we believe that an understanding of its dynamics is essential, if not necessary, to the understanding of the quantum theory.
## 2 The model The Lagrangian of the model we study, the massless $`\varphi ^4`$ theory, is, in the continuum, $$\mathcal{L}=\frac{1}{2}\left(\partial _\mu \stackrel{~}{\varphi }\right)^2-\frac{\stackrel{~}{g}^2}{4}\stackrel{~}{\varphi }^4$$ (1) We may scale out the dimensionful variables and the coupling using the rescalings $`𝐫=\stackrel{~}{𝐫}/a`$, $`t=\stackrel{~}{t}/a`$, $`\varphi _𝐫=a\stackrel{~}{g}\stackrel{~}{\varphi }(\stackrel{~}{𝐫},\stackrel{~}{t})`$, with $`a`$ being the lattice spacing. We obtain the Hamiltonian on the lattice, $`H`$, $$H=\sum _𝐫\left[\frac{1}{2}\pi _𝐫^2+\frac{1}{2}\left(\vec{\nabla }\varphi \right)_𝐫^2+\frac{1}{4}\varphi _𝐫^4\right],\qquad 0\le x\le L,\quad 0\le y,z\le L^{\prime }$$ (2) Here, $`\left(\partial _k\varphi \right)_𝐫=\varphi _{𝐫+𝐞_k}-\varphi _𝐫`$, where $`𝐞_k`$ is the unit vector in the $`k`$-th direction. We thermostat the boundaries at temperatures $`T\left(x=0\right)=T_1^0`$ and $`T\left(x=L\right)=T_2^0`$. In (3+1) dimensions, we impose periodic boundary conditions in the $`y,z`$ directions. In this manner, statistical averages of observables in equilibrium or non-equilibrium steady states are equivalent to time averages in the long-time limit. The dynamics of the system is purely that of the $`\varphi ^4`$ theory inside the boundaries $`0<x<L`$. The thermostats are provided using the "global demons" of Ref. , a non-Hamiltonian generalization of the Nosé–Hoover approach. When the boundary temperatures are equal, $`T_1^0=T_2^0`$, we recover the equilibrium ensemble, as we should. The temperature inside the system is the same as the boundary temperature, $`T\left(𝐫\right)=T_1^0`$ $`\left(0<x<L\right)`$, and the distribution of the momentum $`\pi _𝐫`$ at any site is Maxwellian. As we make the boundary temperatures different, we find first the linear regime and then the non-linear regime. We now discuss the behavior of the system under these conditions. ## 3 Weak Thermal Gradients: $`T_1^0<T_2^0`$ When the boundary temperature difference is small, a linear temperature profile emerges. An example of such a profile is shown in Fig. 1. We find that the distributions of the momenta are thermal at any of the points inside and outside the boundaries. Thermalization of the region outside the boundaries is established through the direct coupling to the thermostats, while the thermalization of the sites inside the boundaries is established dynamically. Using these types of linear profiles, we have obtained the thermal conductivity of the system for various temperatures and various lattice sizes. We expect that the linear region should be well described by Fourier's law, $$\langle T^{01}\rangle _{NE}=-\kappa \left(T\right)\nabla T,\qquad \mathrm{where}\quad T_k^{01}=\pi _k\left(\partial \varphi \right)_k,$$ (3) $`\langle \mathrm{}\rangle _{NE}`$ is the non-equilibrium average, and $`T^{01}`$ is the heat flux in our theory. While it is quite possible to extract a value for the thermal conductivity from systems like the one shown in Fig. 1, we have extracted the thermal conductivity for a given temperature from systems obtained by varying the temperature difference around the given temperature. We find that the thermal conductivity has a well defined bulk limit. In other words, it remains constant when the lattice size is increased for moderately large lattices. The thermal conductivity we find is described by a power law, as shown in Fig. 2 for both (1+1) ($`\times `$ in Fig. 2) and (3+1) dimensions:
$$\kappa \left(T\right)=\frac{A}{T^\gamma },\qquad \{\begin{array}{cc}\gamma =1.35\left(2\right),A=2.83\left(4\right)\hfill & \text{(1+1) dimensions}\hfill \\ \gamma =1.58\left(4\right),A=9.5\left(5\right)\hfill & \text{(3+1) dimensions}\hfill \end{array}$$ (4) The thermal conductivity may also be computed in a completely different manner in equilibrium, by using the Green–Kubo formula for the thermal conductivity, $$\kappa \left(T\right)=\frac{1}{T^2}\int dx\int dt\,\langle T^{01}(x_0,t_0)\,T^{01}(x,t)\rangle _{eq}.$$ (5) Applying the formula to our lattice theory, we obtain the thermal conductivity for various temperatures, which agrees well with our direct measurements, as we can see in Fig. 2 (left, $`+`$). While this might seem obvious, it should be pointed out that the integrands in the Green–Kubo formula have "long time tails", and the integrand was found to be divergent in various low dimensional systems. In our theory, the long time tails do exist in (1+1) dimensions up to $`10\tau `$, where $`\tau `$ is the mean free time. These transient tails do have the expected behavior of $`t^{-1/2}`$, which, if it continued, would lead to a divergent integral in the Green–Kubo formula. In our case, the long time tails are transient and the integrand decays much faster for $`t\gtrsim 10\tau `$, leading to a finite transport coefficient. ## 4 Strong Thermal Gradients: $`T_1^0\ll T_2^0`$ In the regime where the two boundary temperatures are substantially different, the thermal profiles become visibly curved. An example of such a profile is shown in Fig. 3 as the solid curve (the dashed curve will be explained later). Another feature that emerges is that jumps in the temperature arise at the boundaries; namely, the temperature obtained by extrapolating the temperature profile inside does not match the boundary thermostat temperatures. We would like to understand the physics behind these features. The boundary temperature jumps can be understood quantitatively using kinetic theory ideas. We dispense with the details here, which can be found in Ref. . We should point out, however, that these jumps are physical and are observed generically in real systems, as well as in simulations, for systems that are far enough from equilibrium. In any case, it should be emphasized that as long as the boundary thermostats are thermalizing the degrees of freedom outside the boundaries (which we indeed confirm), what happens within the boundaries is determined dynamically only by the degrees of freedom of the $`\varphi ^4`$ theory. Let us now move on to the curved temperature profiles: We first note that, even within the context of linear response theory, the temperature profile will become curved, since the thermal conductivity is a non-trivial function of the temperature, as in Eq. (4). If we assume that this is the only cause of the non-linearity of the profile, we obtain $$T\left(x\right)=T_1\left[1-\frac{x}{L}+\left(\frac{T_2}{T_1}\right)^{1-\gamma }\frac{x}{L}\right]^{\frac{1}{1-\gamma }}$$ (6) where $`T_{1,2}`$ are the extrapolated boundary temperatures, which in general can be different from the thermostat temperatures $`T_{1,2}^0`$ due to the existence of the boundary temperature jumps. We find that as long as the temperature gradient is not too large, the temperature profile is well explained by this formula. An example of such a fit is shown in Fig. 3 (left) as a dashed curve; in fact, the fit is barely distinguishable from the observed thermal profile except at the boundaries.
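A short sketch of the profile of Eq. (6), assuming only the steady state, Fourier's law and $`\kappa (T)=A/T^\gamma `$, illustrates the predicted curvature:

```python
import numpy as np

# Temperature profile of Eq. (6) for kappa(T) = A/T^gamma; T1 and T2 are
# the extrapolated boundary temperatures, gamma as measured in Eq. (4).
def profile(x, L, T1, T2, gamma):
    u = 1.0 - x / L + (T2 / T1) ** (1.0 - gamma) * (x / L)
    return T1 * u ** (1.0 / (1.0 - gamma))

x = np.linspace(0.0, 1.0, 6)
print(profile(x, 1.0, 1.0, 4.0, 1.35))   # curved profile from T1=1 to T2=4
```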
It should be noted that $`\gamma `$ in Eq. (6) is determined independently, from systems near and at equilibrium. We have also looked at the momentum distributions in these systems at various sites and find that they are quite close to their equilibrium gaussian distributions. To analyze this quantitatively, one may look at the higher order cumulants of $`\pi _k`$, $$\frac{\langle \pi _k^4\rangle }{3\langle \pi _k^2\rangle ^2}-1,\qquad \frac{\langle \pi _k^6\rangle }{15\langle \pi _k^2\rangle ^3}-1,\qquad \mathrm{\ldots },\qquad k=0,1,\mathrm{\ldots },L$$ (7) The spatial dependence of the fourth order cumulant is shown in Fig. 3, where we see that the deviation from the equilibrium value, $`0`$, is at the percent level. So, in conclusion, there exists a regime where the thermal profiles are curved, yet both the concept of local equilibrium and linear response theory apply quite well. A priori, this need not have been the case. Let us briefly discuss what happens when we make the temperature difference larger and larger. We find that linear response predictions break down and that the profiles are no longer described by Eq. (6). It is tempting to interpret this as a "non-linear" response of the system involving higher order derivatives of the temperature, such as $`\nabla ^3T,\nabla T\,\nabla ^2T`$, … . However, before we apply these ideas, we should check that local equilibrium is achieved and that the usual concept of temperature applies. In our approach, whether local equilibrium is achieved or not is a dynamical question that can be answered unambiguously. We find that when linear response does not apply, neither does local equilibrium hold. So, in fact, the situation is much more complicated; while non-linear response might indeed exist, at least for the theory at hand, the failure of local equilibrium needs to be taken into account simultaneously. ## 5 Discussions The dynamics of the classical lattice $`\varphi ^4`$ theory was studied under weak and strong thermal gradients from first principles. In equilibrium, the Green–Kubo formula was applied to derive the thermal conductivity. This was found to agree with that obtained using Fourier's law for the system under weak thermal gradients. To our knowledge, this is the first time the non-trivial temperature dependence of the thermal conductivity has been computed from first principles over a number of decades for any system. For moderately strong gradients, linear response theory was found to be quite applicable even though the thermal profiles were visibly curved. For even stronger gradients, linear response theory ceases to hold, and local equilibrium breaks down at the same time. While we did not have time to discuss this here, we have also studied other quantities in the theory, such as the heat capacity, entropy and the speed of sound, and have elucidated how they are related to each other, leading to a comprehensive understanding of the dynamics. Clearly, much remains to be done: It would be interesting to study the dynamics of the theory which exhibits spontaneous symmetry breaking. In this case, the phase boundary will emerge dynamically, and this can be analyzed within the current approach. We would also like to understand the dynamics of other theories, such as Yang–Mills theories, under thermal gradients. Since our methods are not restricted to the steady state case, another problem of import is that of transient phenomena, such as thermalization. An important point which needs to be investigated is the relation of the physical quantities in the lattice theory to those in the continuum theory.
In a related question, we would like to understand the quantum effects and how the classical theory can be "matched" to the quantum one. Work is in progress in these areas.
# Internal Friction of Amorphous Silicon in a Magnetic Field ## Abstract The internal friction of e-beam amorphous silicon was measured in a magnetic field between 0 and 6 T, from 1.5–20 K, and was found to be independent of the field to better than 8%. It is concluded that the low energy excitations observed in this experiment are predominantly atomic in nature. It is well known that amorphous solids have a broad spectrum of low-energy excitations. At low temperatures, characteristic signatures of these excitations can be seen in a variety of thermal (e.g. specific heat, thermal conductivity) and elastic (e.g. internal friction, ultrasound attenuation, sound velocity) measurements. They are commonly described with the two-level tunneling model, in which it is assumed that the disordered structure of the material permits atoms or groups of atoms to tunnel between two spatial positions in close proximity, with energy splittings spanning a wide temperature range. The microscopic nature of these tunneling states, however, is still unknown. It has been suggested that tetrahedrally bonded amorphous solids (a-Ge and a-Si) are structurally overconstrained and will therefore have a lower density of tunneling states, or perhaps none at all. Experimental searches for evidence of tunneling states (or lack thereof) in such materials have been difficult, as these materials can only be grown as thin films on substrates. In practice, most experimental techniques employed must compare measurements of the bare substrate (e.g. its heat capacity in the case of specific heat) to those of the substrate plus the thin film. The value for the film itself is then extracted by considering the film-substrate geometry. Since the addition of the film to the substrate will often produce only a small change in the raw measurements, the sensitivity of these measurements is necessarily limited. Indeed, experiments trying to determine the existence of such states have given inconsistent results: some experiments showed evidence for such states, and some did not, as recently reviewed. Of particular relevance to the present work were low-temperature specific heat measurements on e-beam a-Ge by van den Berg and v. Löhneysen. Specific heat is a direct measure of the excitations that exist within a solid: the two-level tunneling model predicts a contribution linear in temperature which at sufficiently low temperatures will dominate the $`T^3`$ phonon contribution. However, the presence of specific heat in excess of the phonon contribution is not necessarily the result of tunneling states. One early example was the specific heat of several silica-based glasses, which showed excitations in addition to two-level tunneling states. These additional excitations vanished in the presence of a moderate magnetic field, and were attributed to spins from iron impurities. Although a-Ge was indeed shown to have an excess specific heat below 1 K, it almost completely vanished in the presence of a 6 T magnetic field. Hence the extra excitations were concluded to be electronic, not atomic, in nature, and were attributed to exchange coupled clusters of dangling bonds, which experience Zeeman splitting in a magnetic field. However, for the reasons mentioned above, these experiments were not sensitive enough to completely rule out a separate magnetic-field-insensitive linear contribution underlying the electronic contribution to the specific heat, but could only be used to determine an upper bound on its magnitude.
Liu and Pohl, of whose work this paper is an extension, probed the existence of low energy excitations in a-Si and a-Ge films through measurements of internal friction at low temperatures, using silicon double-paddle oscillators as substrates. A bare double-paddle oscillator itself has an extremely small internal friction background, typically $`Q^{-1}=2\times 10^{-8}`$ at liquid helium temperatures. The internal friction of a thin film will then dominate, and not be a small perturbation to, the total damping of a film-carrying paddle. Generally, amorphous solids have a temperature-independent internal friction "plateau," whose width extends from approximately 100 mK to 10 K, depending on the measuring frequency. The prototypical amorphous solid, a-SiO<sub>2</sub>, has an internal friction $`Q^{-1}=4\times 10^{-4}`$ in this plateau, and almost all amorphous solids have internal frictions within a factor of 3 of this value (referred to as the "glassy range"). For e-beam a-Ge films, $`Q^{-1}=0.7\times 10^{-4}`$, close to the glassy range. Based on this value and using the tunneling model, the specific heat was calculated to be close to the upper limit determined experimentally in large magnetic fields. However, since the internal friction had been measured in zero $`B`$-field, one has to ask whether the states seen were truly atomic in nature and not electronic. To this end, we measured the internal friction of paddle oscillators with e-beam a-Si films in a <sup>4</sup>He cryostat above 1.4 K and in magnetic fields as large as 6 T. In a magnetic field $`B`$, an electronic excitation will have a Zeeman splitting $`g\mu _BB`$, raising its excitation temperature by $`g\mu _BB/k_B`$. Using $`g=2`$ and $`B=6\mathrm{T}`$, this temperature is approximately 8 K. Therefore, any damping resulting from the relaxation of electronic excitations (most likely occurring by one-phonon emission) should be drastically reduced at sufficiently low temperatures. Details of sample preparation and measurement are the same as described previously. The following changes were made to the apparatus for work in a magnetic field, which was applied parallel to the paddle's axis of rotation. First, we replaced the ferromagnetic Invar block to which the paddle is usually attached with one made of silicon, in order to avoid disturbance of the field. Second, we found that the usual metal layer deposited on the back of the paddle, which acts as an electrode (30 $`\mathrm{\AA }`$ Cr followed by 500 $`\mathrm{\AA }`$ Au, covering the area indicated in gray shading in the inset to Fig. 1), caused an unacceptably large background damping in a magnetic field. According to Ref. , the presence of a $`0.7\mu \mathrm{m}`$ a-Si film is estimated to increase the zero-field internal friction to $`Q_{\text{paddle}}^{-1}\approx 4\times 10^{-7}`$. At a field of only 1 T, however, the bare paddle background internal friction was $`Q^{-1}=5\times 10^{-6}`$, over an order of magnitude larger than that of the film-carrying paddle in zero field. The magnetic field dependence of this background internal friction is shown in Fig. 1 for two different paddles at constant temperatures. No attempts were made to quantitatively account for the large magnitude of this damping, which is likely due to eddy currents. However, by making the metal film electrode in the shape of a thin 'T,' shown in dark gray shading in the inset to Fig. 1, the large field effect was reduced by three orders of magnitude, with $`Q_{\text{sub}}^{-1}`$ rising only by a factor of 4 in a field of 6 T.
Furthermore, this background internal friction was temperature-independent between 1 and 20 K, as is shown in Fig. 2, and could be described simply as the following function of magnetic field: $$Q^{-1}=1.106\times 10^{-8}+1.503\times 10^{-9}B^{1.60},$$ (1) where the magnetic field $`B`$ is measured in tesla. Note that even for zero magnetic field, the background internal friction is decreased by a factor of 2 below that observed with the large metal electrode. Since metal films are known to have large internal frictions, the reduction in the size of the metal film electrode can explain the observed reduction of the damping. Since the paddle oscillators themselves are very fragile, the preparation must follow a certain order: the a-Si film must be deposited before epoxying the oscillator to the silicon block, and the metal electrodes deposited thereafter. Once affixed to the block, a paddle cannot be removed without a large risk of breaking it. Hence we had to use two different oscillators for the measurements with and without films. Considering the strong dependence of the background damping on the details of the electrode shape, and the irreproducibility in depositing the 'T' electrodes, the background damping of the two oscillators used might differ. After comparing two different bare oscillators with 'T' electrodes, we concluded that such differences are no larger than 25% over the entire range of magnetic field and thus should not significantly affect the analysis. We extract the internal friction of the film itself using $$Q_{\mathrm{film}}^{-1}=\frac{G_{\mathrm{sub}}t_{\mathrm{sub}}}{3G_{\mathrm{film}}t_{\mathrm{film}}}\left(Q_{\mathrm{paddle}}^{-1}-Q_{\mathrm{sub}}^{-1}\right).$$ (2) The thickness of the paddle oscillator substrate is $`t_{\text{sub}}=300\mu \mathrm{m}`$, the shear modulus of crystalline silicon in the direction of the neck, (110), is taken as $`G_{\mathrm{Si}}=6.2\times 10^{11}\mathrm{dynes}/\mathrm{cm}^2`$, and that of the film is $`G_{\mathrm{film}}=3.63\times 10^{11}\mathrm{dynes}/\mathrm{cm}^2`$ (Ref. ). The film thickness was $`t_{\mathrm{film}}=708\mathrm{nm}`$ in this measurement. The internal friction of the film-carrying paddle for B = 0, 2, and 6 T is plotted in Fig. 3a, together with the temperature-independent background internal friction given by Eq. (1). In the absence of a magnetic field, the addition of the film increases the total damping by over one order of magnitude, from $`Q_{\mathrm{paddle}}^{-1}=1\times 10^{-8}`$ to $`Q_{\mathrm{paddle}}^{-1}=4\times 10^{-7}`$. In a magnetic field, the damping is nearly unchanged, which indicates that most of the damping is not caused by electrons, but by atomic relaxations. For a quantitative estimate of the upper limit of the effect of electrons, the internal friction of the film itself is computed using Eq. (2). In zero field, $`Q_{\mathrm{film}}^{-1}=8.5\times 10^{-5}`$ at 1.5 K, increasing with increasing temperature to $`Q_{\mathrm{film}}^{-1}=1.1\times 10^{-4}`$ at 18 K, as shown in Fig. 3b. These values are 30% smaller than those reported in Ref. , but this can be explained considering the large sensitivity of the internal friction to even moderate heat treatment, noted previously in these films. Although the film-carrying paddle shows virtually no variation with magnetic field, the fact that $`Q_{\mathrm{sub}}^{-1}`$ increases with magnetic field implies a small decrease of $`Q_{\mathrm{film}}^{-1}`$ with increasing magnetic field.
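A small sketch shows how Eqs. (1) and (2) combine in this extraction; the numbers are those quoted above, and since the zero-field paddle value is only approximate, the output should only roughly reproduce the quoted film value:

```python
# Film internal friction from Eq. (2), with the field-dependent
# background of Eq. (1); material and geometry values as quoted above.
def q_background(B):
    return 1.106e-8 + 1.503e-9 * B**1.60                # Eq. (1)

def q_film(q_paddle, B, t_sub=300e-6, t_film=708e-9,
           G_sub=6.2e11, G_film=3.63e11):
    ratio = (G_sub * t_sub) / (3 * G_film * t_film)     # Eq. (2) prefactor
    return ratio * (q_paddle - q_background(B))

print(q_film(4e-7, 0.0))   # ~9e-5, in line with the quoted 8.5e-5 at 1.5 K
```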
The variation of $`Q_{\mathrm{film}}^{-1}`$ with increasing $`B`$, however, is not simple; it is shown in more detail for two temperatures in Fig. 4. In both cases, $`Q_{\mathrm{film}}^{-1}`$ has a maximum near 2 T, followed by a decrease at larger fields. The data were taken over the course of several days, and for 4.3 K the time history of the data is traced. Some hysteresis is observed; the initial zero-field data have lower damping than the final zero-field data by approximately 4%. Whether this hysteresis and the apparent field effect are real, or artifacts of the background measurement or other experimental details, cannot presently be decided. The very small changes observed in our experiments are to be compared with the specific heat measurements. Writing their linear specific heat term as $`C=aT`$, van den Berg and v. Löhneysen found in zero field $`a=4.5\times 10^{-6}\mathrm{J}/\mathrm{g}\mathrm{K}^2`$, while in 6 T, $`a\lesssim 1\times 10^{-7}\mathrm{J}/\mathrm{g}\mathrm{K}^2`$, i.e. a suppression by at least 97%. Similarly, Stephens observed a complete removal of the specific heat anomaly caused by 12 ppm of iron in a borosilicate glass below 2 K in a magnetic field of $`B=3.3\mathrm{T}`$. Biggar and Parpia studied the magnetic field dependence of the internal friction of boron-doped crystalline silicon, in which phonon scattering from holes bound to acceptor (boron) atoms causes a large internal friction in the absence of a magnetic field. This excess damping was observed in paddle oscillators fabricated from such crystals, and was reduced by 98% in a 6 T magnetic field. Compared with these experiments, the suppression observed in our experiments is insignificant. It is therefore concluded that electrons play only a negligible role in our experiments and that the relaxation observed is caused by atomic motion, as predicted by the tunneling model. We would like to thank Jeevak Parpia for use of the cryostat in which these experiments were carried out, and Rob Biggar for help with the experiment. We also thank them and Richard Crandall for stimulating discussions. This work was supported by the National Science Foundation, Grant No. DMR 9701972, and by the National Renewable Energy Laboratory, Grant No. AAD-9-18668-12.
Figure 1: Plots of the free energy from order zero (F0) to three (F3) against $`j=j_\varphi `$ and $`k=k_+`$ for $`\lambda =25`$, $`m^2=-15`$, $`j_\psi =0`$ and $`k_{-}=0`$. Contours are evenly spaced in the interval $`-2.2`$ to $`-2.55`$, and show that only the odd orders have minima, which get shallower at higher orders. ## Acknowledgements We would like to thank H. F. Jones and D. Winder for useful discussions.
# Recovery of entanglement lost in entanglement manipulation ## Abstract When an entangled state is transformed into another one with probability one by local operations and classical communication, the quantity of entanglement decreases. This letter shows that entanglement lost in the manipulation can be partially recovered by an auxiliary entangled pair. As an application, a maximally entangled pair can be obtained from two partially entangled pairs with probability one. Finally, this recovery scheme reveals a fundamental property of entanglement relevant to the existence of incomparable states. Quantum entanglement plays an important role in quantum information processing. It realizes novel information processing that is impossible in a classical manner. Thus, in addition to practical applications, quantum entanglement itself has been widely studied in recent years. For a detailed review, see Ref. and references therein. One of the most fundamental applications of an entangled state is quantum teleportation. In teleportation, Alice sends a qubit to Bob via a previously shared maximally entangled state between them, $$|\mathrm{\Phi }^+\rangle _{AB}=\frac{1}{\sqrt{2}}(|00\rangle _{AB}+|11\rangle _{AB}).$$ (1) We refer to the state $`|\mathrm{\Phi }^+\rangle _{AB}`$ as a Bell pair in the following. All the operations needed are local operations on their respective systems and classical communication between them. Since Alice and Bob are distantly located, they cannot jointly perform global operations on the composite system. This is always the case in all applications of entangled states, such as quantum communication and quantum cryptography. Therefore the following question is crucial to understanding the nature of entangled states: What can we do with entangled states by local operations and classical communication alone? Recently Nielsen found necessary and sufficient conditions for an entangled state to be transformed into another one by local operations and classical communication. It was also proved that the quantity of entanglement decreases during the transformation. It is natural to wish that entanglement would not decrease, because it is a valuable resource. This letter shows that entanglement lost in entanglement manipulation can be partially recovered by an auxiliary entangled pair. Besides the original entangled state to be transformed, we prepare another entangled state and perform collective operations on these two pairs. This transformation enables a part of the entanglement lost in the original pair to be transferred to the auxiliary one. Entanglement of the whole system decreases during the transformation in this case too, as required by Nielsen's result. But this scheme realizes a partial recovery of entanglement that is absolutely impossible by individual manipulations of each pair. As a particular example of the recovery procedure, it is also shown that we can obtain a Bell pair with probability one from two partially entangled pairs satisfying a certain condition. Furthermore, the condition for this recovery scheme to work reveals a fundamental property of entanglement. The property has a deep connection with the fact that there exist essentially different types of bipartite pure-state entanglement, namely, incomparable states. In this letter, we will obtain the main result using Nielsen's theorem. First, we introduce the mathematical notion of majorization, which is needed in the theorem and is also a main tool in this letter.
Let $`x=(x_1,\mathrm{\ldots },x_n)`$ and $`y=(y_1,\mathrm{\ldots },y_n)`$ be real $`n`$-dimensional vectors. Let $`x^{\downarrow }=(x_1^{\downarrow },\mathrm{\ldots },x_n^{\downarrow })`$ be the vector obtained by rearranging the elements of $`x`$ in decreasing order, i.e., $`x_1^{\downarrow }\ge \mathrm{\cdots }\ge x_n^{\downarrow }`$. We say that $`x`$ is majorized by $`y`$, written $`x\prec y`$, if $$\sum _{j=1}^kx_j^{\downarrow }\le \sum _{j=1}^ky_j^{\downarrow },\qquad 1\le k\le n-1,$$ (2) and $$\sum _{j=1}^nx_j^{\downarrow }=\sum _{j=1}^ny_j^{\downarrow }.$$ (3) This letter deals only with bipartite pure entangled states, which are described in the Schmidt decomposition as $`|\psi \rangle _{AB}=\sum _i\sqrt{a_i}|i\rangle _A|i\rangle _B`$, where the $`\{a_i\}`$ are positive real numbers satisfying the normalization condition $`\sum _ia_i=1`$. In the Schmidt decomposition, $`\{|i\rangle _A\}`$ and $`\{|i\rangle _B\}`$ are orthonormal bases of the respective systems; thus the eigenvalues of the reduced density matrix $`\rho _\psi \equiv \mathrm{tr}_B(|\psi \rangle _{AB}{}_{AB}\langle \psi |)`$ are $`a_1,\mathrm{\ldots },a_n`$. We define the vector of these eigenvalues as $`\lambda _\psi \equiv (a_1,\mathrm{\ldots },a_n)`$. With the theory of majorization, Nielsen proved the following theorem. Theorem: A bipartite pure entangled state $`|\psi \rangle _{AB}`$ is transformed into another one $`|\varphi \rangle _{AB}`$ with probability one by local operations and classical communication if and only if $`\lambda _\psi `$ is majorized by $`\lambda _\varphi `$, i.e., $$|\psi \rangle _{AB}\to |\varphi \rangle _{AB}\quad \text{iff}\quad \lambda _\psi \prec \lambda _\varphi .$$ (4) It was also proved that the quantity of entanglement $`E(\psi )`$, which is uniquely defined as the von Neumann entropy of $`\rho _\psi `$, decreases during the transformation, $$\text{if}\quad |\psi \rangle _{AB}\to |\varphi \rangle _{AB},\quad \text{then}\quad E(\psi )\ge E(\varphi ).$$ (5) This follows from the mathematical theorem that if $`\lambda _\psi \prec \lambda _\varphi `$, then $`E(\psi )\ge E(\varphi )`$, together with Eq. (4). Equation (5) means that local operations and classical communication always reduce entanglement. However, we want to prevent entanglement from decreasing as far as possible, since we have to send qubits without teleportation in order to share entanglement between distant observers again. We will show that an auxiliary entangled pair can partially recover the entanglement lost in the manipulation of two-qubit entangled states. The recovery scheme presented in this letter goes as follows. Suppose we originally want to transform $`|\psi \rangle _{AB}`$ into $`|\varphi \rangle _{AB}`$. (In the following, we exclude the trivial case $`|\psi \rangle _{AB}=|\varphi \rangle _{AB}`$.) We prepare another entangled state $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ besides the system $`AB`$. Then we perform collective operations on $`|\psi \rangle _{AB}|\omega \rangle _{A^{\prime }B^{\prime }}`$ and convert it to $`|\varphi \rangle _{AB}|\chi \rangle _{A^{\prime }B^{\prime }}`$, where $`|\chi \rangle _{A^{\prime }B^{\prime }}`$ has more entanglement than $`|\omega \rangle _{A^{\prime }B^{\prime }}`$. This transformation transfers a part of the entanglement lost in the system $`AB`$ to the system $`A^{\prime }B^{\prime }`$. In the following, it is proved that this scheme is really possible. We begin with a concrete example to understand the idea of this scheme, then proceed to a general proof.
We deal with the following example: $$\begin{array}{ccc}\hfill |\psi \rangle _{AB}& =& \sqrt{0.7}|00\rangle _{AB}+\sqrt{0.3}|11\rangle _{AB},\hfill \\ \hfill |\varphi \rangle _{AB}& =& \sqrt{0.8}|00\rangle _{AB}+\sqrt{0.2}|11\rangle _{AB},\hfill \\ \hfill |\omega \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{0.6}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{0.4}|11\rangle _{A^{\prime }B^{\prime }},\hfill \\ \hfill |\chi \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{0.55}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{0.45}|11\rangle _{A^{\prime }B^{\prime }}.\hfill \end{array}$$ (6) The vectors of eigenvalues are $$\begin{array}{ccc}\hfill \lambda _\psi & =& (0.7,\mathrm{\hspace{0.17em}0.3}),\hfill \\ \hfill \lambda _\varphi & =& (0.8,\mathrm{\hspace{0.17em}0.2}),\hfill \\ \hfill \lambda _\omega & =& (0.6,\mathrm{\hspace{0.17em}0.4}),\hfill \\ \hfill \lambda _\chi & =& (0.55,\mathrm{\hspace{0.17em}0.45}).\hfill \end{array}$$ (7) Majorization relations $`\lambda _\psi \prec \lambda _\varphi `$ and $`\lambda _\omega \succ \lambda _\chi `$ indicate $$|\psi \rangle _{AB}\to |\varphi \rangle _{AB},\qquad |\omega \rangle _{A^{\prime }B^{\prime }}\nrightarrow |\chi \rangle _{A^{\prime }B^{\prime }},$$ (8) and $$E(\psi )>E(\varphi ),\qquad E(\omega )<E(\chi ).$$ (9) If we consider the two entangled pairs as one system, the whole system is an entangled state with Schmidt number four; thus $$\lambda _{\psi \omega }=\lambda _\psi \otimes \lambda _\omega =(0.42,\mathrm{\hspace{0.17em}0.28},\mathrm{\hspace{0.17em}0.18},\mathrm{\hspace{0.17em}0.12}),$$ (10) $$\lambda _{\varphi \chi }=\lambda _\varphi \otimes \lambda _\chi =(0.44,\mathrm{\hspace{0.17em}0.36},\mathrm{\hspace{0.17em}0.11},\mathrm{\hspace{0.17em}0.09}).$$ (11) According to Eq. (2), the three inequalities $`0.42<0.44`$, $`0.42+0.28<0.44+0.36`$, and $`1-0.12<1-0.09`$ show that $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$. [The equality (3) is satisfied by the normalization conditions.] Therefore we can transform $`|\psi \rangle _{AB}|\omega \rangle _{A^{\prime }B^{\prime }}`$ into $`|\varphi \rangle _{AB}|\chi \rangle _{A^{\prime }B^{\prime }}`$ by collective manipulation, according to Nielsen's theorem. Equation (9) means that the entanglement lost in the system $`AB`$ is partially recovered by the system $`A^{\prime }B^{\prime }`$. For the system $`AB`$, there is no difference between the collective manipulation and the individual one. As for the system $`A^{\prime }B^{\prime }`$, this collective method realizes an increase in entanglement, which cannot be achieved individually.
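This bookkeeping is easy to verify numerically; the following minimal sketch (the helper function is ours) just checks the partial-sum conditions (2) and (3):

```python
import numpy as np

# Majorization test for Nielsen's criterion and check of the example:
# x is majorized by y iff every partial sum of the decreasingly sorted
# x is <= the corresponding one of y (totals agree by normalization).
def majorized(x, y, tol=1e-12):
    cx = np.cumsum(sorted(x, reverse=True))
    cy = np.cumsum(sorted(y, reverse=True))
    return bool(np.all(cx <= cy + tol))

l_psi, l_phi = [0.7, 0.3], [0.8, 0.2]
l_omega, l_chi = [0.6, 0.4], [0.55, 0.45]
print(majorized(l_psi, l_phi))     # True : |psi> -> |phi> individually
print(majorized(l_omega, l_chi))   # False: |omega> cannot reach |chi> alone
l_in = np.outer(l_psi, l_omega).ravel()    # lambda_{psi omega}
l_out = np.outer(l_phi, l_chi).ravel()     # lambda_{phi chi}
print(majorized(l_in, l_out))      # True : collective transformation allowed
```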
Next we prove that the recovery stated above is always possible. We find the condition under which there exist auxiliary states $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ and $`|\chi \rangle _{A^{\prime }B^{\prime }}`$ such that $`E(\omega )<E(\chi )`$ and $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$, provided that $`|\psi \rangle _{AB}\to |\varphi \rangle _{AB}`$. Let $$\begin{array}{ccc}\hfill |\psi \rangle _{AB}& =& \sqrt{a}|00\rangle _{AB}+\sqrt{1-a}|11\rangle _{AB},\hfill \\ \hfill |\varphi \rangle _{AB}& =& \sqrt{b}|00\rangle _{AB}+\sqrt{1-b}|11\rangle _{AB},\hfill \\ \hfill |\omega \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{p}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{1-p}|11\rangle _{A^{\prime }B^{\prime }},\hfill \\ \hfill |\chi \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{q}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{1-q}|11\rangle _{A^{\prime }B^{\prime }}.\hfill \end{array}$$ (12) The assumption $`|\psi \rangle _{AB}\to |\varphi \rangle _{AB}`$ gives $$\frac{1}{2}\le a<b\le 1.$$ (13) The condition $`E(\omega )<E(\chi )`$ requires $$\frac{1}{2}\le q<p\le 1,$$ (14) because $`E(\omega )\le E(\chi )`$ is equivalent to $`\lambda _\omega \succ \lambda _\chi `$ in the case of two-qubit states, and the equality holds only for $`p=q`$. Combining the two pairs $`AB`$ and $`A^{\prime }B^{\prime }`$, we have $$\lambda _{\psi \omega }=(ap,a(1-p),(1-a)p,(1-a)(1-p)),$$ (15) $$\lambda _{\varphi \chi }=(bq,b(1-q),(1-b)q,(1-b)(1-q)).$$ (16) In the following, we seek a pair of numbers $`(p,q)`$ that satisfies the majorization condition $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$ and Eq. (14), under the assumption (13). The majorization relation $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$ consists of three inequalities. [The equality (3) is satisfied by the normalization conditions.] We have to rearrange the components of the vectors (15) and (16) in decreasing order before imposing the inequality conditions (2). Equations (13) and (14) indicate that the largest and the smallest elements in (15) are $`ap`$ and $`(1-a)(1-p)`$, respectively. Similarly, $`bq`$ and $`(1-b)(1-q)`$ are the largest and the smallest elements in (16), respectively. Thus the first and the third inequalities of the majorization condition are $`ap\le bq`$ and $`1-(1-a)(1-p)\le 1-(1-b)(1-q)`$, i.e., $$q\ge \frac{a}{b}p,$$ (17) $$1-q\le \frac{1-a}{1-b}(1-p),$$ (18) where Eq. (13) implies $$\frac{1}{2}\le \frac{a}{b}<1,\qquad 1<\frac{1-a}{1-b}.$$ (19) However, Eqs. (13) and (14) cannot tell which is the next largest element in (15) and (16). If $`a\le p`$ then $`a(1-p)\le p(1-a)`$, and so on. Thus, comparing the second and the third elements in each vector, we have the following three cases: (i) $`a\le p`$, $`b\le q`$; (ii) $`a\le p`$, $`b>q`$; (iii) $`a>p`$, $`b>q`$. [The case $`a>p`$, $`b\le q`$ contradicts Eqs. (13) and (14).] (i) $`a\le p`$, $`b\le q`$: The next largest elements in (15) and (16) are $`(1-a)p`$ and $`(1-b)q`$, respectively. Thus the second inequality of the majorization condition is $`ap+(1-a)p\le bq+(1-b)q`$, i.e., $`p\le q`$, which contradicts Eq. (14). (ii) $`a\le p`$, $`b>q`$: Since the elements $`(1-a)p`$ and $`b(1-q)`$ are the next largest elements in (15) and (16), respectively, we have $`ap+(1-a)p\le bq+b(1-q)`$, i.e., $`p\le b`$. Therefore, $$a\le p\le b,\qquad b>q.$$ (20) (iii) $`a>p`$, $`b>q`$: Similarly, the majorization condition requires $`a\le b`$, which is implied in Eq. (13). In this region, we have $$a>p,\qquad b>q.$$ (21) Summing up these cases, we see that the second inequality of the majorization condition is Eq. (20) or (21): $$p\le b,\qquad q<b.$$ (22) As a result, $`(p,q)`$ must satisfy Eqs. (14), (17)–(19), and (22). Figure 1 shows these conditions as a shaded region in the $`pq`$ plane. It indicates that the shaded region exists irrespective of $`a`$ and $`b`$. Thus, if we take the auxiliary states $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ and $`|\chi \rangle _{A^{\prime }B^{\prime }}`$ appropriately, recovery of entanglement is always possible. Now we discuss the implication of Fig. 1 in detail. The shaded region in Fig. 1 is divided into two parts, $`q\ge a`$ and $`q<a`$. In the region $`q\ge a`$, we have $`\frac{1}{2}\le a\le q<p\le b\le 1`$. This means that $`\lambda _\psi \prec \lambda _\chi `$ and $`\lambda _\omega \prec \lambda _\varphi `$. If we perform $`|\psi \rangle _{AB}\to |\chi \rangle _{AB}`$, $`|\omega \rangle _{A^{\prime }B^{\prime }}\to |\varphi \rangle _{A^{\prime }B^{\prime }}`$, and interchange $`AB`$ and $`A^{\prime }B^{\prime }`$, then the recovery process stated above is also accomplished by only the individual manipulations of each pair. Therefore this region of Fig. 1 presents trivial recovery that needs no collective manipulation. However, in the region $`q<a`$, we have $`\frac{1}{2}\le q<a\le p\le b\le 1`$, or $`\frac{1}{2}\le q<p<a<b\le 1`$. These inequalities imply that neither $`|\psi \rangle `$ nor $`|\omega \rangle `$ can be transformed into $`|\chi \rangle `$. Thus this region presents true recovery, which only the collective manipulation can realize. In fact, we do not need the trivial region for recovery, because, for each point in the trivial region, there exist points with the same $`p`$ and smaller $`q`$ in the true region. Only the true region, the shaded part in $`q<a`$, is of great importance. It should also be noted that complete recovery is represented only at the point $`(b,a)`$ in Fig. 1, which corresponds to the trivial interchange of $`AB`$ and $`A^{\prime }B^{\prime }`$.
A useful application of this scheme is to obtain a Bell pair [Eq. (1)] after a recovery procedure. Figure 1 shows that if we prepare $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ having $`p`$ such that $`p\le b/(2a)`$, we can transform the $`A^{\prime }B^{\prime }`$ pair into a Bell pair with probability one. In addition to the Bell pair, there exists residual entanglement in the system $`AB`$. If we do not need this residual entanglement in $`AB`$, which means $`b=1`$, a Bell pair can always be obtained from two partially entangled pairs $`|\psi \rangle _{AB}`$, $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ such that $$ap\le \frac{1}{2}.$$ (23) An explicit example of this concentration is as follows: $$\begin{array}{ccc}\hfill |\psi \rangle _{AB}& =& \sqrt{0.6}|00\rangle _{AB}+\sqrt{0.4}|11\rangle _{AB},\hfill \\ \hfill |\varphi \rangle _{AB}& =& \sqrt{0.9}|00\rangle _{AB}+\sqrt{0.1}|11\rangle _{AB},\hfill \\ \hfill |\omega \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{0.7}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{0.3}|11\rangle _{A^{\prime }B^{\prime }},\hfill \\ \hfill |\chi \rangle _{A^{\prime }B^{\prime }}& =& \sqrt{0.5}|00\rangle _{A^{\prime }B^{\prime }}+\sqrt{0.5}|11\rangle _{A^{\prime }B^{\prime }}=|\mathrm{\Phi }^+\rangle _{A^{\prime }B^{\prime }}.\hfill \end{array}$$ (24) The eigenvalues of the product states are $$\lambda _{\psi \omega }=\lambda _\psi \otimes \lambda _\omega =(0.42,\mathrm{\hspace{0.17em}0.18},\mathrm{\hspace{0.17em}0.28},\mathrm{\hspace{0.17em}0.12}),$$ (25) $$\lambda _{\varphi \chi }=\lambda _\varphi \otimes \lambda _\chi =(0.45,\mathrm{\hspace{0.17em}0.45},\mathrm{\hspace{0.17em}0.05},\mathrm{\hspace{0.17em}0.05}).$$ (26) Thus $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$ indicates that the concentration $`|\psi \rangle _{AB}|\omega \rangle _{A^{\prime }B^{\prime }}\to |\varphi \rangle _{AB}|\mathrm{\Phi }^+\rangle _{A^{\prime }B^{\prime }}`$ is possible with probability one.
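The same partial-sum check confirms this concentration example:

```python
import numpy as np

# lambda_{psi omega} is majorized by lambda_{phi chi}, so the transformation
# |psi>|omega> -> |phi>|Bell> succeeds with probability one.
l_in = np.outer([0.6, 0.4], [0.7, 0.3]).ravel()     # Eq. (25)
l_out = np.outer([0.9, 0.1], [0.5, 0.5]).ravel()    # Eq. (26)
cum_in = np.cumsum(sorted(l_in, reverse=True))      # 0.42, 0.70, 0.88, 1.00
cum_out = np.cumsum(sorted(l_out, reverse=True))    # 0.45, 0.90, 0.95, 1.00
print(bool(np.all(cum_in <= cum_out + 1e-12)))      # True
```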
The Procrustean method is already known as a way of obtaining a Bell pair from a partially entangled state. Since this method works only probabilistically, however, we cannot always obtain a Bell pair by applying it to partially entangled pairs. Thus this application of the recovery scheme is very important for practical purposes. If there happen to be two partially entangled pairs satisfying Eq. (23), then we can always prepare a Bell pair from them for future use.

The collective manipulation of both pairs is the heart of this recovery scheme. It makes possible a transformation that is absolutely impossible by individual manipulation of each pair. This is reminiscent of the reversibility between entanglement concentration and dilution in the asymptotic limit and of the catalysis in entanglement manipulation.

Finally, we consider a fundamental property of entanglement that this recovery scheme reveals. The most striking part of the condition described in Fig. 1 is $`p\le b`$. This condition implies that, if we intend to recover the entanglement lost in the transformation $`|\psi \rangle _{AB}\rightarrow |\varphi \rangle _{AB}`$, we must prepare an auxiliary state $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ that has more entanglement than $`|\varphi \rangle _{AB}`$. Whether the recovery procedure by $`|\omega \rangle _{A^{\prime }B^{\prime }}`$ is possible or not depends only on the final state $`|\varphi \rangle _{AB}`$, not on the quantity of entanglement lost in the transformation $`|\psi \rangle _{AB}\rightarrow |\varphi \rangle _{AB}`$. No matter how much entanglement is lost, nothing can be recovered if the auxiliary state is not sufficiently entangled. This is the fundamental property of bipartite pure entangled states revealed by the recovery scheme. This surprising feature of entanglement is depicted in Fig. 2. The notion of entanglement measure cannot fully explain this property.

This new property of entanglement is a direct consequence of the existence of incomparable states. The states $`|\alpha \rangle _{AB}`$ and $`|\beta \rangle _{AB}`$ are called incomparable if neither $`|\alpha \rangle _{AB}\rightarrow |\beta \rangle _{AB}`$ nor $`|\beta \rangle _{AB}\rightarrow |\alpha \rangle _{AB}`$ is possible. If $`p`$ is greater than $`b`$, then the second inequality of the majorization condition $`\lambda _{\psi \omega }\prec \lambda _{\varphi \chi }`$ is not satisfied. Taking into account the other inequalities of the majorization condition, we see that $`|\psi \rangle _{AB}|\omega \rangle _{A^{\prime }B^{\prime }}`$ and $`|\varphi \rangle _{AB}|\chi \rangle _{A^{\prime }B^{\prime }}`$ are incomparable in the region $`b<p\le 1`$, $`(a/b)p\le q<p`$. \[In the region $`b<p\le 1`$, $`1/2\le q<(a/b)p`$, the entanglement of the whole system increases because $`\lambda _{\psi \omega }\succ \lambda _{\varphi \chi }`$. Thus this region is excluded by Nielsen’s result.\] Therefore the impossibility of recovery by an insufficiently entangled pair is directly connected to the existence of incomparable states.

In conclusion, we have proved that entanglement lost in entanglement manipulation can be partially recovered by an auxiliary entangled pair. This recovery scheme has also revealed a fundamental property of quantum entanglement that is connected with the existence of incomparable states: when we intend to transfer entanglement from one pair to another, nothing can be transferred if the recipient is not sufficiently entangled. More detailed investigations are necessary to grasp the deep implications of this property.

I am grateful to K. Suehiro for a careful reading of the manuscript.
# The Parkes Multibeam Pulsar Survey Data Release

## 1. Conditions of Release

Details of pulsars discovered in the survey are placed on the WWW at the time of acceptance of the paper announcing the discoveries, or 18 months after confirmation of the detection, whichever is first. Raw data tapes from the survey are made available for copying two years after recording. Details of pulsar observations which fall into these categories can be found on http://www.atnf.csiro.au/~pulsar/psr/pmsurv/pmwww/. There is no observatory-based archive, so access to the data should be negotiated with project PIs.

## 2. Volume of data

One-bit samples are recorded at a rate of 4 kHz for each of the 96 channels per beam. Each 35 minute survey observation with the 13-beam system fills 1.3 GBytes ($`\sim `$100 MBytes per beam) in its raw form. This is recorded on DLT7000 tapes which hold up to 35 GBytes and cost around US$80 each. About 3000 such observations will be made in completing the survey, giving a total data set of about 4 TBytes. Our search processing requires 130 hours of CPU time on a SUN Ultra 1 for each 35 minute observation.

## 3. How to Access Data

Data logs are available on http://www.atnf.csiro.au/~pulsar/psr/pmsurv/pmwww/. For the multibeam pulsar survey and followup timing projects the PIs to contact are Andrew Lyne (agl@jb.man.ac.uk), Dick Manchester (rmanches@atnf.csiro.au) or Fernando Camilo (fernando@astro.columbia.edu).

Small requests: If you want a copy of a single tape or a few individual observations, we are happy to copy the data and post you a tape or a CD. Please indicate your preference for media type.

Larger requests: You would need to come to Epping or Jodrell Bank and copy the data onto your own DLT7000 media. If you want a copy of the complete survey, you would need to bring a workstation, a DLT7000 tape drive and a drive for the media type of your choice.

Timing data: Processed archives or TOAs can be made available by ftp, CD or exabyte. For raw data requests the above schemes for small and large requests would apply.

## 4. Has the position you are interested in been observed?

The observing logs on the web site contain grid IDs for the centre beam of each 35 minute survey observation. The grid ID $`=il\times 1000+ib`$, where $`b=(ib-500)\times 0.20207`$ and $`l=(il-5000+0.5\,\mathrm{mod}(ib,2))\times 0.2333`$ (a short script implementing this relation is given below, after Sec. 5). The Galactic centre has grid ID 5000500. The nominal centre positions of the other 12 beams can be determined from the offsets relative to beam 1 given in the table below.

| | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| l | -0.24 | 0.24 | 0.49 | 0.24 | -0.24 | -0.49 | -0.73 | 0.00 | 0.73 | 0.73 | 0.00 | -0.73 |
| b | 0.42 | 0.42 | 0.00 | -0.42 | -0.42 | 0.00 | 0.42 | 0.85 | 0.42 | -0.42 | -0.85 | -0.42 |

## 5. Software Tools

We guarantee to provide software to read the tapes on a SUN Ultra class workstation. This software also works on most brands of UNIX operating systems. A range of software tools are available on an all care and no responsibility basis.

| pmfind | search for pulsars |
| --- | --- |
| pdm | fold and dedisperse data for a given pulsar |
| fch3 | fold data with precision suitable for timing |
| tarch | form processed archives for timing analysis |
| treduce | analyse timing archives |
| pmhex | survey observation database |
| foldch | fold and analyse individual filter bank channels |
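The grid-ID relation of Sec. 4 is easily coded up. The sketch below (Python) follows our reading of the formula, in which the factors 0.20207 and 0.2333 are the beam spacings in degrees and the $`0.5\,\mathrm{mod}(ib,2)`$ term offsets alternate beam rows of the hexagonal pattern:

```python
def grid_centre(grid_id):
    """Galactic longitude and latitude (degrees) of the centre beam
    for a given survey grid ID, where grid_id = il*1000 + ib."""
    il, ib = divmod(grid_id, 1000)
    b = (ib - 500) * 0.20207
    l = (il - 5000 + 0.5 * (ib % 2)) * 0.2333
    return l, b

print(grid_centre(5000500))   # (0.0, 0.0): the Galactic centre
```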
## 6. Collaborative Projects

It is possible to get access to data and results from the survey earlier by collaborative arrangements. To date we have embarked on such arrangements with 11 groups and we welcome proposals from others.
# Nucleosynthesis and Clump Formation in a Core Collapse Supernova

## 1. Introduction

Overwhelming observational evidence (cf. references in Müller 1998) suggests that large-scale mixing processes took place in SN 1987 A and transported newly synthesized $`{}_{}{}^{56}\mathrm{Ni}`$ from its creation site near the collapsed core all the way out to the hydrogen envelope. Spectroscopic studies of SN 1987 F, SN 1988 A, SN 1993 J (Spyromilio, 1994, and references therein) and SN 1995 V (Fassia et al., 1998) indicate that such mixing is probably generic in core collapse supernovae. Indeed, artificial mixing of the radioactive ejecta within the (helium) envelope is indispensable in order to reproduce the light curves and spectra of Type Ib explosions using one-dimensional (1D) hydrodynamic models (Shigeyama, Nomoto, & Tsujimoto 1990; Woosley & Eastman 1997, and references therein). These findings have instigated theoretical work on multidimensional supernova models focusing either on the role of convective instabilities in the delayed, neutrino-driven explosion mechanism within about the first second of evolution (Mezzacappa et al. 1998; Janka & Müller 1996; Burrows, Hayes, & Fryxell 1995; Herant et al. 1994; Miller, Wilson, & Mayle 1993), or on the growth of Rayleigh-Taylor instabilities during the late evolutionary stages (Nagataki, Shimizu, & Sato 1998; Herant & Benz 1992; Müller, Fryxell, & Arnett 1991; Yamada & Sato 1991; Hachisu et al. 1990). However, multidimensional simulations which follow the evolution of the supernova shock from its revival due to neutrino heating until its emergence from the stellar surface have not yet been performed. In this Letter, we report on preliminary results of high-resolution two-dimensional (2D) computations which for the first time cover the neutrino-driven initiation of the explosion, the accompanying convection and nucleosynthesis, as well as the Rayleigh-Taylor mixing within the first $`300`$ seconds of evolution.

## 2. Numerical method and initial data

We split our simulation into two stages. The early evolution ($`t\lesssim 1`$ s) is followed with a version of the HERAKLES code (T. Plewa & E. Müller, in preparation) which solves the multidimensional hydrodynamic equations using the direct Eulerian version of the Piecewise Parabolic Method (Colella & Woodward, 1984), augmented by the Consistent Multifluid Advection scheme of Plewa & Müller (1999) in order to guarantee exact conservation of nuclear species. We have added the input physics described in Janka & Müller (1996) (henceforth JM96) with the following modifications. General relativistic corrections are made to the gravitational potential following Van Riper (1979). A 14-isotope network is incorporated in order to compute the explosive nucleosynthesis. It includes the 13 $`\alpha `$-nuclei from $`{}_{}{}^{4}\mathrm{He}`$ to $`{}_{}{}^{56}\mathrm{Ni}`$ and a representative tracer nucleus which is used to monitor the distribution of the neutrino-heated, neutron-rich material and to replace the $`{}_{}{}^{56}\mathrm{Ni}`$ production when $`Y_\mathrm{e}`$ drops below $`0.49`$ (cf. Thielemann, Nomoto, & Hashimoto, 1996). Our initial data are taken from the 15 $`\mathrm{M}_{\odot }`$ progenitor model of Woosley, Pinto, & Ensman (1988) which was collapsed by Bruenn (1993). The model is mapped to a 2D grid consisting of 400 radial zones ($`3.17\times 10^6\mathrm{cm}\le r\le 1.7\times 10^9`$ cm) and 180 angular zones ($`0\le \theta \le \pi `$; cf. JM96 for details). A random initial seed perturbation is added to the velocity field with a modulus of $`10^{-3}`$ of the (radial) velocity of the post-collapse model.
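A minimal sketch of such a grid setup and seed perturbation (Python with NumPy; the logarithmic radial spacing and the stand-in velocity profile are our assumptions, since the text specifies only the zone counts, the radial range, and the $`10^{-3}`$ perturbation amplitude):

```python
import numpy as np

rng = np.random.default_rng(0)

# 400 radial zones between 3.17e6 and 1.7e9 cm; log spacing assumed.
r = np.geomspace(3.17e6, 1.7e9, 400)        # cm
theta = np.linspace(0.0, np.pi, 180)        # rad, 180 angular zones

# Stand-in for the 1D post-collapse radial velocity profile (cm/s);
# in the real calculation this comes from the collapsed progenitor model.
v_r_1d = -1.0e8 * np.exp(-r / 1.0e8)

# Map the 1D profile to the 2D grid and add the random seed perturbation
# with a modulus of 1e-3 of the local radial velocity.
v_r = np.tile(v_r_1d, (theta.size, 1))
v_r *= 1.0 + 1.0e-3 * (2.0 * rng.random(v_r.shape) - 1.0)
```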
The computations begin 20 ms after core bounce and are continued until 885 ms, when the explosion energy has saturated at $`1.48\times 10^{51}`$ erg (this value still has to be corrected for the binding energy of the outer envelope). We will henceforth refer to this calculation as our “explosion model”. The subsequent shock propagation through the stellar envelope and the growth of Rayleigh-Taylor instabilities is followed with the AMRA Adaptive Mesh Refinement (AMR) code (T. Plewa & E. Müller, in preparation). Neutrino physics and gravity are not included in the AMR calculations. Both do not influence the propagation of the shock during late evolutionary stages. However, gravity is important for determining the amount of fallback, a problem which is outside the scope of the present study. The equation of state takes into account contributions from photons, non-degenerate electrons, $`\mathrm{e}^+\mathrm{e}^{-}`$-pairs, $`{}_{}{}^{1}\mathrm{H}`$, and the nuclei included in the reaction network.

The AMR calculations are started with the inner and outer boundaries located at $`r_{\mathrm{in}}=10^8`$ cm (i.e. inside the hot bubble containing the neutrino-driven wind) and $`r_{\mathrm{out}}=2\times 10^{10}`$ cm, respectively. No further seed perturbations are added. Our maximum resolution is equivalent to that of a uniform grid of $`3072\times 768`$ zones. We do not include the entire star but expand the radial extent of the grid by a factor of 2 to 4 whenever the supernova shock approaches the outer grid boundary, which is moved from its initial value out to $`r_{\mathrm{out}}=1.1\times 10^{12}`$ cm at $`t=300`$ s. Reflecting boundary conditions are used at $`\theta =0`$ and $`\theta =\pi `$, and free outflow is allowed across the inner and outer radial boundaries.

## 3. Results

The general features of our explosion model are comparable to the models of JM96. The most important difference is caused by our use of general relativistic corrections to the gravitational potential. Since the shock has to overcome a deeper potential well than in the Newtonian case, the luminosities (which are prescribed at the inner boundary) required to obtain a certain final explosion energy are roughly 20% higher than those of JM96. In particular, we have adopted the following set of parameters: $`L_{\nu _e}^0=2.8125\times 10^{52}\mathrm{erg}/\mathrm{s},L_{\nu _x}^0=2.375\times 10^{52}\mathrm{erg}/\mathrm{s},\mathrm{\Delta }Y_l=0.0875,\mathrm{\Delta }\epsilon =0.0625`$ (cf. JM96). The neutrino spectra and the temporal decay of the luminosity are the same as in JM96.

For the chosen neutrino luminosities the shock starts to move out of the iron core almost immediately. Convection between shock and gain radius sets in $`30`$ ms after the start of the simulation in the form of rising blobs of heated, deleptonized material (with $`Y_\mathrm{e}\lesssim 0.5`$) separated by narrow downflows with $`Y_\mathrm{e}\gtrsim 0.49`$. The shock reaches the Fe/Si interface at $`r=1.4\times 10^8`$ cm after $`100`$ ms. At this time the shocked material is still in nuclear statistical equilibrium and is composed mainly of $`\alpha `$-particles and nucleons. When the temperature right behind the shock drops below $`7\times 10^9`$ K, $`{}_{}{}^{56}\mathrm{Ni}`$ starts to form in a narrow shell. During the ongoing expansion and cooling, nickel is also synthesized in the convective region.
However, this synthesis proceeds exclusively in the narrow downflows which separate the rising bubbles and have a sufficiently high electron fraction $`Y_\mathrm{e}`$. The convective anisotropies therefore lead to a highly inhomogeneous nickel distribution (Fig. 1). Freezeout of complete silicon burning occurs at $`t\approx 250`$ ms, and convection ceases at $`t\approx 400`$ ms, at which time the flow pattern becomes frozen in. Subsequently, the entire post-shock region expands nearly uniformly. The post-shock temperature drops below $`2.8\times 10^9`$ K at $`t=495`$ ms, when the shock is about to cross the Si/O interface. Thus, our model shows only moderate oxygen burning (due to a non-vanishing oxygen abundance in the silicon shell), and negligible neon and carbon burning. This is caused by the specific structure of the progenitor model of Woosley et al. (1988) and may change when different (e.g. more massive) progenitors are used. In total, $`0.052\mathrm{M}_{\odot }`$ of $`{}_{}{}^{56}\mathrm{Ni}`$ are produced, while $`0.10\mathrm{M}_{\odot }`$ of material are synthesized at conditions with $`Y_\mathrm{e}<0.49`$ and end up as neutron-rich nuclei. Fig. 1 shows the $`{}_{}{}^{56}\mathrm{Ni}`$ distribution at $`t=885`$ ms. The density ratio between the dense regions which contain the nickel and the low-density, deleptonized material in the bubbles deeper inside is $`2.5`$.

During the next seconds the shock detaches from the formerly convective shell that carries the products of explosive nucleosynthesis and crosses the C+O/He interface, initially accelerating and subsequently decelerating due to the varying density gradient. Twenty seconds after core bounce this unsteady propagation speed of the shock has led to a strong compression of the metal-containing shell. At the inner boundary of the shell pressure waves have steepened into a reverse shock. Rayleigh-Taylor instabilities start to grow at the Ni+Si/O and C+O/He interfaces and are fully developed and interact with each other at $`t=50`$ s. Nickel and silicon are dragged upward into the helium shell in rising mushrooms on angular scales from $`1^{\circ }`$ to about $`5^{\circ }`$, whereas helium is mixed inward in bubbles. Oxygen and carbon, located in intermediate layers of the progenitor, are swept outward as well as inward in rising and sinking flows. At $`t=300`$ s the densities and radial velocities of the dense mushrooms and clumps and of the ambient medium differ by factors of up to 5 and 1.3, respectively. As Fig. 2 shows, the fastest mushrooms have already propagated out to more than half the radius of the He core. The excessive outflows along the symmetry axis in Fig. 2 are a numerical artifact caused by the singularity of the polar axis in spherical coordinates. In order not to introduce large errors, only an angular wedge with $`15^{\circ }\le \theta \le 165^{\circ }`$ was used in all the analyses. The remarkable efficiency of the mixing is clearly visible from Fig. 3, where the original onion-shell structure of the progenitor’s metal core has disappeared after 300 s. We observe large composition gradients between different regions of a mushroom and also between different mushrooms. Some of them contain $`{}_{}{}^{56}\mathrm{Ni}`$ mass fractions of more than 70%, whereas others have only nickel admixtures of 20% or less but show high concentrations of silicon and oxygen. Composition contrasts of at least this magnitude have recently been observed in different filaments of the Cas A supernova remnant by Hughes et al. (2000).
We find that also more than $`0.04\mathrm{M}_{\odot }`$ of matter consisting of neutron-rich nuclei are mixed into the $`{}_{}{}^{56}\mathrm{Ni}`$ and $`{}_{}{}^{28}\mathrm{Si}`$ clumps. The reverse shock substantially decelerates the bulk of the nickel from $`\mathrm{15\hspace{0.17em}000}`$ km/s at $`t=885`$ ms to 3200 – 4500 km/s at $`t=50`$ s, at which time the maximum velocities $`v_{\mathrm{Ni}}^{\mathrm{max}}`$ have dropped to 5800 km/s. After another 50 s, however, the clumps start to move essentially ballistically through the helium core and only a slight drop of $`v_{\mathrm{Ni}}^{\mathrm{max}}`$ from $`5000`$ km/s at $`t=100`$ s to $`4700`$ km/s at $`t=300`$ s occurs, when most of the $`{}_{}{}^{56}\mathrm{Ni}`$ has velocities below $`3000`$ km/s (Fig. 4).

We have recently been able to follow the subsequent evolution of our model up to 16 000 s after core bounce. Similar to the situation at the C+O/He interface, the supernova shock leaves behind a dense (Rayleigh-Taylor unstable) shell at the He/H interface. While the entire shell is rapidly slowed down, a second reverse shock forms at its inner boundary (Fig. 2). Our high-resolution simulations reveal a potentially severe problem for the mixing of heavy elements into the hydrogen envelope of Type II supernovae like SN 1987 A. We find that the fast nickel-containing clumps, after having penetrated through this reverse shock, dissipate a large fraction of their kinetic energy in bow shocks created by their supersonic motion through the shell medium. This leads to their deceleration to $`2000`$ km/s in our calculations, a negative effect on the clump propagation which has not been discussed previously. A detailed analysis of this result will be the subject of a forthcoming publication.

## 4. Conclusions

Our calculations provide a natural explanation for the mixing of $`{}_{}{}^{56}\mathrm{Ni}`$ into the helium layer, which is required to reproduce the light curves of Type Ib supernovae (Shigeyama et al., 1990). In this sense they justify the rather large seed perturbations ($`5\%`$) which were imposed by Hachisu et al. (1994) on the radial velocity given from spherically symmetric models of exploding helium cores at about 10 s after shock formation. In addition, they may have interesting implications for the modeling of Type Ib spectra (Woosley, private communication; compare Fig. 3 of this work with Fig. 7 in Woosley & Eastman 1997). Our simulations suggest that ballistically moving, metal-rich clumps with velocities up to more than $`4000`$ km/s are ejected during the explosion of Type Ib (and Ic) supernovae. In the case of Type II supernovae, however, the dense shell left behind by the shock passing the boundary between helium core and hydrogen envelope causes a substantial deceleration of the clumps. The high iron velocities observed in SN 1987 A, for example, can therefore not be accounted for by our models. This problem would be reduced if the density profile at the He/H interface were smoother, leading to less strong variations of the shock velocity. Could this point to a possible common envelope phase or merger history of the progenitor star (Hillebrandt & Meyer 1989; Podsiadlowski, Joss, & Rappaport 1990)? Alternatively, the high iron velocities in SN 1987 A could require additional energy input from Ni decay (Herant & Benz, 1992) or could imply a large global anisotropy of the explosion, e.g. associated with jets emerging from the collapse of a rapidly rotating stellar core (Fryer & Heger, 1999).
Though final conclusions require three-dimensional calculations, our simulations indicate that all computations of Rayleigh-Taylor mixing in Type II supernovae carried out so far (including the case of SN 1987 A; Herant & Benz 1992) have been started from overly simplified initial conditions, since they have neglected clump formation within the first minutes of the explosion. The mixing of nucleosynthesis products as seen in our models poses another potential problem. A significant fraction ($`0.04\mathrm{M}_{\odot }`$ of a total of $`0.1\mathrm{M}_{\odot }`$) of the neutron-rich nuclei which are synthesized in regions with $`Y_\mathrm{e}<0.49`$ is dragged outward by the nickel-containing mushrooms and clumps. At least this amount will most likely be ejected in the explosion. A detailed analysis of the nuclear composition is necessary to tell whether this is in conflict with limits from Galactic chemical evolution models, which allow for at most $`10^{-3}`$–$`10^{-2}\mathrm{M}_{\odot }`$ of neutron-rich nuclei to be ejected per supernova event (e.g., Herant et al. 1994; Thielemann et al. 1996). Later fallback will not solve this potential problem: how could it disentangle the clumpy nickel ejecta from their undesirable pollution? A better knowledge of the luminosities and spectra of electron neutrinos and antineutrinos emitted from the nascent neutron star is therefore needed to perform more reliable calculations of the neutronization of the neutrino-heated ejecta.

During the computations we became aware of oscillations with angle in parts of the postshock flow (Figs. 1 and 2). These are caused by the “odd-even decoupling” phenomenon associated with grid-aligned shocks (Quirk, 1994). As a consequence, the maximum nickel velocities, $`v_{\mathrm{Ni}}^{\mathrm{max}}`$, obtained in our AMR calculations have probably been overestimated by $`25\%`$, because the growth of some of the mushrooms was influenced by the perturbations induced by this numerical defect. The main results of our study, however, are not affected. We note that a large number of supernova calculations performed with codes based either on the direct Eulerian (cf. Figs. 22 and 24 in Burrows et al. 1995, Fig. 20 in JM96) or the Lagrangean with remap formulation of the PPM scheme (Mezzacappa et al., 1998) seem to be affected by this numerical flaw. We defer a detailed analysis of this problem to a forthcoming publication.

We thank S. Woosley for profiles of the progenitor star and S. Bruenn for his post-bounce core model. We acknowledge support by P. Cieciela̧g and R. Walder concerning visualization, and by the crew of the Rechenzentrum Garching, where the simulations were performed on the NEC SX-4B and CRAY J916 computers. TP was supported by grant 2.P03D.004.13 from the Polish Committee for Scientific Research, HTJ by DFG grant SFB-375 für Astro-Teilchenphysik.
# Polarization induced by charged particles in real solids

## I Introduction

A charged particle penetrating a solid causes a distortion in the electronic density around the particle and behind its position: this is what Bohr called the induced density wake. Related to this induced density there is an induced potential. For a sufficiently high velocity the induced wake shows an oscillatory behaviour. When the velocity of the projectile is larger than the average velocity of the target electrons (typically $`v_F`$, the Fermi velocity), one may consider a linear response of the medium. However, in the case of projectiles moving with smaller velocities, nonlinearities may play an important role for metallic densities ($`2<r_s<6`$, $`r_s`$ being the average electron distance).

Much research has been devoted to the study of these quantities. Pioneering work on dynamic screening was performed by Neufeld and Ritchie. They evaluated the induced potential and density wake by using a local dielectric function as the linear response function of the medium. Based upon a linear response of the target, different aspects related to the induced polarization, such as wake-riding states or the spatial distribution of the induced potential and density, have been investigated. The induced potential and density have been studied beyond linear-response theory, in the static electron gas approximation and within hydrodynamical formulations. The quadratic polarization induced by an external charge in the full random-phase approximation (RPA) has been reported recently.

In previous works the induced potential and density have been evaluated on the basis of a jellium model of the target, which consists of an isotropic homogeneous electron gas embedded in a uniformly distributed positive background. However, in a more realistic approach valence electrons move in a periodic potential and one-electron excitations split into the so-called energy bands. The impact of band-structure effects on plasmon dispersion curves, dynamical structure factors, stopping power, and hot-electron lifetimes has been investigated only very recently. It has been shown that these effects can be important even in the case of free-electron metals such as aluminum. The wake potential induced by swift protons in different solids has been studied using a model dielectric function. In this paper we report, within linear-response theory, an ab initio evaluation of the induced potential and density of ions moving through real solids. Within our description we can obtain the induced potential under channeling conditions, i.e., when the projectile penetrates along different symmetry directions of the solid. In section II we derive explicit expressions for the position-dependent induced potential and density. In section III numerical calculations of the induced potential in Al and Si are presented, for both random and crystal-symmetry incident directions. In section IV the most relevant conclusions are summarized. Atomic units are used throughout, i.e., $`m_e=e=\mathrm{\hbar }=1`$.

## II Theory

We consider a point particle of charge $`Z_1`$ moving in an inhomogeneous system with velocity $`𝐯`$ and impact vector $`𝐛`$, such that $$\rho ^{ext}(𝐫,t)=Z_1\delta (𝐫-𝐛-𝐯t).$$ (1) For a periodic crystal we can write: $$\rho _𝐆^{ext}(𝐪,\omega )=2\pi Z_1e^{-\mathrm{i}(𝐪+𝐆)\cdot 𝐛}\delta \left[\omega -(𝐪+𝐆)\cdot 𝐯\right],$$ (2) where $`𝐆`$ is a reciprocal-lattice vector and $`𝐪`$ lies in the Brillouin zone (BZ).
The external charge induces a density, so that the total density variation of the medium is given by the sum $$\rho _𝐆(𝐪,\omega )=\rho _𝐆^{ext}(𝐪,\omega )+\rho _𝐆^{ind}(𝐪,\omega ).$$ (3) Poisson’s equation allows us to write the potential related to a given density. In our case, $$\varphi _𝐆(𝐪,\omega )=v_𝐆(𝐪)\rho _𝐆(𝐪,\omega ),$$ (4) where $`v_𝐆(𝐪)=4\pi /|𝐪+𝐆|^2`$ and $`\varphi _𝐆(𝐪,\omega )`$ represent the Fourier components of the bare Coulomb potential and the total potential, respectively. Within linear response theory, we can write the total potential in terms of the external charge: $$\varphi _𝐆(𝐪,\omega )=\underset{𝐆^{\prime }}{}ϵ_{𝐆,𝐆^{\prime }}^{-1}(𝐪,\omega )v_{𝐆^{\prime }}(𝐪)\rho _{𝐆^{\prime }}^{ext}(𝐪,\omega ).$$ (5) $`ϵ_{𝐆,𝐆^{\prime }}^{-1}(𝐪,\omega )`$ are the Fourier coefficients of the inverse dielectric function; within the RPA the dielectric matrix reads $$ϵ_{𝐆,𝐆^{\prime }}(𝐪,\omega )=\delta _{𝐆,𝐆^{\prime }}-v_{𝐆^{\prime }}(𝐪)\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega ),$$ (6) where $`\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )`$ represent the Fourier coefficients of the density-response function of non-interacting electrons, $$\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )=\frac{1}{\mathrm{\Omega }}\underset{𝐤}{\overset{BZ}{}}\underset{n}{}\underset{n^{\prime }}{}\frac{f_{𝐤,n}-f_{𝐤+𝐪,n^{\prime }}}{\epsilon _{𝐤,n}-\epsilon _{𝐤+𝐪,n^{\prime }}+(\omega +\mathrm{i}\eta )}\langle \varphi _{𝐤,n}|e^{-\mathrm{i}(𝐪+𝐆)\cdot 𝐫}|\varphi _{𝐤+𝐪,n^{\prime }}\rangle \langle \varphi _{𝐤+𝐪,n^{\prime }}|e^{\mathrm{i}(𝐪+𝐆^{\prime })\cdot 𝐫}|\varphi _{𝐤,n}\rangle .$$ (9) Here, the sums run over the band structure for each wave vector $`𝐤`$ in the first BZ, $`|\varphi _{𝐤,n}\rangle `$ and $`\epsilon _{𝐤,n}`$ are the one-electron wave functions and energies, and $`f_{𝐤,n}`$ are Fermi factors, $`f_{𝐤,n}=\theta (E_F-\epsilon _{𝐤,n})`$. $`\mathrm{\Omega }`$ is the normalization volume.

Combining Eqs. (3), (4) and (5) we obtain the following equation for $`\rho ^{ind}`$ in terms of $`\rho ^{ext}`$: $$\rho _𝐆^{ind}(𝐪,\omega )=\underset{𝐆^{\prime }}{}\left[\frac{|𝐪+𝐆|^2}{|𝐪+𝐆^{\prime }|^2}ϵ_{𝐆,𝐆^{\prime }}^{-1}(𝐪,\omega )-\delta _{𝐆,𝐆^{\prime }}\right]\rho _{𝐆^{\prime }}^{ext}(𝐪,\omega ).$$ (10) Substituting Eq. (2) into Eq. (10) and Fourier transforming back to real space yields $$\rho ^{ind}(𝐫,t)=\frac{Z_1}{\mathrm{\Omega }}\underset{𝐪}{\overset{BZ}{}}\underset{𝐆}{}e^{\mathrm{i}(𝐪+𝐆)\cdot (𝐫-𝐛-𝐯t)}\underset{𝐊}{}e^{-\mathrm{i}𝐊\cdot 𝐛}e^{-\mathrm{i}𝐊\cdot 𝐯t}\left[\frac{|𝐪+𝐆|^2}{|𝐪+𝐆+𝐊|^2}ϵ_{𝐆,𝐆+𝐊}^{-1}(𝐪,(𝐪+𝐆+𝐊)\cdot 𝐯)-\delta _{𝐆,𝐆+𝐊}\right],$$ (13) where we have made use of $$\int \frac{\mathrm{d}^3𝐪}{(2\pi )^3}\rightarrow \frac{1}{\mathrm{\Omega }}\underset{𝐪}{\overset{BZ}{}}\underset{𝐆}{},$$ (14) and we have set $`𝐆^{\prime }=𝐆+𝐊`$. If we consider a definite trajectory of the projectile, only the $`𝐊`$ vectors such that $`𝐊\cdot 𝐯=0`$ contribute to the sum, and $$\rho ^{ind}(𝐫,t)=\frac{Z_1}{\mathrm{\Omega }}\underset{𝐪}{\overset{BZ}{}}\underset{𝐆}{}e^{\mathrm{i}(𝐪+𝐆)\cdot (𝐫-𝐛-𝐯t)}|𝐪+𝐆|^2{\underset{𝐊}{}}^{\prime }e^{-\mathrm{i}𝐊\cdot 𝐛}\left[\frac{ϵ_{𝐆,𝐆+𝐊}^{-1}(𝐪,(𝐪+𝐆+𝐊)\cdot 𝐯)}{|𝐪+𝐆+𝐊|^2}-\frac{\delta _{0,𝐊}}{|𝐪+𝐆|^2}\right].$$ (17) The prime in the sum over $`𝐊`$ accounts for the $`𝐊\cdot 𝐯=0`$ condition. Therefore, Eq. (17) gives the density induced by an external charged particle moving along a definite trajectory through a crystal.
In order to get the induced potential we just apply Poisson’s equation, which yields $$\varphi ^{ind}(𝐫,t)=\frac{4\pi Z_1}{\mathrm{\Omega }}\underset{𝐪}{\overset{BZ}{}}\underset{𝐆}{}e^{\mathrm{i}(𝐪+𝐆)\cdot (𝐫-𝐛-𝐯t)}{\underset{𝐊}{}}^{\prime }e^{-\mathrm{i}𝐊\cdot 𝐛}\left[\frac{ϵ_{𝐆,𝐆+𝐊}^{-1}(𝐪,(𝐪+𝐆)\cdot 𝐯)}{|𝐪+𝐆+𝐊|^2}-\frac{\delta _{0,𝐊}}{|𝐪+𝐆|^2}\right].$$ (20)

We can define an average or random induced potential as the mean value over impact vectors $`𝐛`$, which results in the $`𝐊=0`$ term of Eq. (20): $$\varphi _{random}^{ind}(𝐫,t)=\frac{4\pi Z_1}{\mathrm{\Omega }}\underset{𝐪}{\overset{BZ}{}}\underset{𝐆}{}\frac{e^{\mathrm{i}(𝐪+𝐆)\cdot (𝐫-𝐯t)}}{|𝐪+𝐆|^2}\left[ϵ_{𝐆,𝐆}^{-1}(𝐪,(𝐪+𝐆)\cdot 𝐯)-1\right].$$ (21)

The most important contribution to the position-dependent induced potential of Eq. (20) is provided by the $`𝐊=0`$ term. For those directions for which there are no reciprocal vectors satisfying the $`𝐊\cdot 𝐯=0`$ condition, we have the average potential of Eq. (21). For a few highly symmetric or channeling conditions, non-negligible corrections to the random result are found. The random induced potential exactly coincides with the well-known jellium result when one replaces the inverse dielectric matrix entering Eq. (21) by the inverse dielectric function of a homogeneous electron gas. The position-dependent stopping power can be obtained directly from Eq. (20), by simply taking into account that $$\frac{\mathrm{d}E}{\mathrm{d}x}=\frac{Z_1}{v}𝐯\cdot \mathrm{}\varphi ^{ind}|_{𝐫=𝐛+𝐯t},$$ (22) where $`\mathrm{}`$ denotes the gradient, which gives an expression that exactly coincides with the position-dependent stopping power derived from the knowledge of the imaginary part of the projectile self-energy.

The main ingredient of our calculation of the induced potential and density is the inverse dielectric matrix, which we have evaluated in the RPA by inverting Eq. (6). Hence, at this point we have only considered the average electrostatic interaction between the electrons. We have not found differences in the induced potential upon including many-body short-range correlations in the response of the system. The one-electron Bloch states entering Eq. (9) are the self-consistent LDA eigenfunctions of the Kohn-Sham equation of density-functional theory (DFT). We first expand the states in a plane-wave basis and then solve for the coefficients of the expansion self-consistently. The electron-ion interaction is described in terms of a non-local, norm-conserving ionic pseudopotential. The XC potential is computed with use of the exchange-correlation energy first calculated by Ceperley and Alder and then parametrized by Perdew and Zunger. We subsequently evaluate the polarizability $`\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )`$ and invert Eq. (6), as we sum over $`𝐪`$, $`𝐆`$, and $`𝐊`$ to obtain the induced potential and density.

## III Results

In this section we present the results of our calculations of the induced potential when a charged particle penetrates through aluminum and silicon. The average electron densities of Al and Si are similar: the corresponding free-electron gas (FEG) is characterized by $`r_s=2.07`$ for Al and $`r_s=2.01`$ for Si. However, aluminum is a metal and silicon a semiconductor, and the induced potential will exhibit a different behaviour, which we would not obtain on the basis of a FEG calculation.
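The FEG reference quantities used in the discussion below follow directly from $`r_s`$ through standard relations; a minimal sketch of these conversions (Python with NumPy, atomic units; the eV conversion constant is rounded):

```python
import numpy as np

HARTREE_EV = 27.211  # 1 a.u. of energy in eV

def feg_parameters(rs):
    """Fermi momentum, Fermi velocity and plasma frequency (atomic units)
    of the free-electron gas with density parameter rs."""
    qf = (9.0 * np.pi / 4.0) ** (1.0 / 3.0) / rs
    vf = qf                       # in a.u., m_e = 1 so v_F = q_F
    wp = np.sqrt(3.0 / rs**3)     # classical plasma frequency
    return qf, vf, wp

for name, rs in (("Al", 2.07), ("Si", 2.01)):
    qf, vf, wp = feg_parameters(rs)
    print(f"{name}: q_F = {qf:.2f} a.u., v_F = {vf:.2f} a.u., "
          f"plasmon energy = {wp * HARTREE_EV:.1f} eV")
```

For $`r_s=2.07`$ this gives a plasmon energy of about 15.8 eV, close to the measured Al value, and the corresponding wake wavelength $`2\pi v/\omega _p`$ (used in the discussion of Fig. 2 below) follows immediately for any projectile velocity.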
Although Al is usually regarded as a jellium-like material, inelastic X-ray scattering experiments and theoretical analyses of the dynamical structure factor, stopping power, and hot-electron decay rates have revealed that it is necessary to take into account the full band structure for a proper description of its electronic properties. In this work, Bloch states have been expanded in a plane-wave basis with an energy cutoff of 12 Ry, which corresponds to keeping $`\sim 100`$ plane waves in this expansion. The sums over the BZ for both the polarizability $`\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )`$ and the induced potential have been performed on $`10\times 10\times 10`$ Monkhorst and Pack meshes. The sum over reciprocal-lattice vectors in the potential has been extended to the first 15 $`𝐆`$ vectors, which corresponds to a cutoff in the momentum transfer of $`2.9q_F`$ ($`q_F`$ is the Fermi momentum). We have included 30 bands in the sums over the band structure for each $`𝐤`$ vector in Eq. (9), which allows us to calculate the induced potential and the stopping power up to velocities of the order of 2 a.u.

Silicon is a covalent crystal which shows strong valence-electron density variations along certain directions. This allows the formation of channels in which the density is very low. For example, the integrated density in the $`\langle 110\rangle `$ channel varies from $`r_s=1.49`$ at the atomic row to $`r_s=3.37`$ at the center of the channel, i.e., an $`80\%`$ variation. This has an obvious impact on the induced potential and the stopping power. The covalent character of Si imposes a higher cutoff for the Bloch-state expansion than in Al, due to the higher degree of localization of the electronic states in the former material. This also results in Si having bands that are flatter than in the case of Al, so that the number of bands included in the calculation of the polarizability $`\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )`$ must be larger in the case of Si if the same energy transfers as in Al are to be included. We have used a cutoff of 16 Ry ($`\sim 300`$ plane waves/state), and have included 100 bands in the calculation of $`\chi _{𝐆,𝐆^{\prime }}^0(𝐪,\omega )`$. The sums over the BZ have been performed on $`8\times 8\times 8`$ Monkhorst and Pack meshes, and the sum over reciprocal-lattice vectors has been extended to the first 15 $`𝐆`$ vectors, which corresponds to a momentum-transfer cutoff of $`2.1q_F`$.

We focus on the spatial distribution of the induced potential along the incoming particle trajectory. The $`z`$ coordinate appearing in the figures is always relative to the particle position. The stopping power derived from the slope of the induced potential at the projectile position coincides with previously reported results. An ion of $`Z_1=1`$ is always considered.

In Fig. 1 we have plotted the random potential induced by an ion with $`v=0.6`$ a.u. in Al (a) and Si (b). The solid line represents the calculation for the real solid, whereas the dashed line represents the FEG calculation. In this low-velocity regime (below $`v_F`$), differences between full band-structure and FEG calculations come from the presence of interband transitions and, also, the gap in the case of Si. The splitting of the bands makes the polarization easier and, for this reason, the induced potential at the origin is higher for the real crystal than for the FEG in Al.
In the case of Si, the slope of the induced potential at the origin is smaller than in the corresponding FEG, due to the presence of the gap, which, for low-energy transfers, diminishes the polarization of the electron system. The gap of semiconductors like Si brings about a nonlinear stopping power at low velocities and an induced potential which presents, at these velocities, a lower slope.

In Fig. 2 we have plotted the random potential induced by an ion with $`v=1.5`$ a.u. in Al (a) and with $`v=1.6`$ a.u. in Si (b). Solid and dashed lines represent full band-structure and FEG calculations, respectively. These velocities are well above the plasmon-excitation threshold ($`v_t\simeq 1.3`$ a.u. in both cases). An oscillatory behaviour appears behind the ion due to plasmon excitation, the wavelength of the oscillations being $`2\pi v/\omega _p`$ ($`\omega _p`$ is the plasma frequency). As can be appreciated in Figs. 2a and 2b, the wavelength of the random potential is the same in the real crystal and in the corresponding FEG for both materials. This is a consequence of the fact that plasmon contributions to the stopping power are the same for both the real solid and the FEG. However, the oscillations in real Si are more damped than in real Al, due to the shorter lifetime of plasmons in Si, which stems from the higher density of bands, thus increasing the decay channels.

In Fig. 3 we have plotted the position-dependent potential, Eq. (20), induced by an ion with $`v=1.5`$ a.u. along the $`\langle 100\rangle `$ direction in Al (a), and with $`v=2.0`$ a.u. along the $`\langle 110\rangle `$ direction in Si (b). The impact vector is $`𝐛=b\,(0,1,0)`$ for Al and $`𝐛=b\,(1,1,0)`$ for Si; $`b`$ is measured in units of the lattice constant $`a_c`$ of the target. Calculations for an impact parameter $`b=0`$ (atomic row) and $`b=0.25`$ (center of the channel) are represented by solid and short-dashed lines, respectively. For comparison, a local-density approximation (LDA) of the induced potential is also displayed in Fig. 3, by a dashed line for $`b=0`$ and a dotted line for $`b=0.25`$. In this approach the position-dependent induced potential is obtained as the potential induced in a FEG with an electron density equal to the average electron density along the projectile path. The slope of the induced potential gives the stopping power in each case. We can appreciate a substantial variation in the slope for Si as we move across the channel, due to the great variation of the density. A lower density at the center of the channel results in a smaller slope and a lower stopping power than at the atomic row and in the random case. Variations in the spatial distribution of the induced potential in Al are fairly small as $`b`$ changes, because the electron density is almost flat in this material. If we calculated the potential at the ion position as a function of $`b`$, we would obtain a qualitatively similar result in both the ab initio and FEG evaluations, i.e., following the local density variation. However, it is clear from Figs. 3a and 3b that the spatial distribution of the induced potential is not well described by the LDA calculation. Above all in Si, Fig. 3b, the differences are more dramatic. The oscillations behind the ion have the same wavelength independently of $`𝐛`$ in the crystal calculation, whereas in the LDA calculation the wavelength depends on the average density for each path. Plasmon contributions appear in the $`𝐊=0`$ term of the position-dependent induced potential of Eq. (20).
The remaining terms ($`𝐊\ne 0`$) are mainly corrections to the single-particle e-h contribution, which are due to the presence of density variations away from the projectile path, i.e., the so-called crystalline local-field corrections. For this reason, the wavelength of the induced potential remains unchanged. Furthermore, the stopping-power peak is located at the same position for any velocity direction and for any impact parameter, since plasmons are collective excitations which involve all the electrons of the system. However, in an LDA calculation the stopping-power peak is located at the velocity that corresponds to the average density of the path and not to the average target density. These results lead us to the conclusion that LDA calculations are not suitable for the calculation, at these velocities, of the position-dependent induced potential and stopping power.

## IV Conclusions

We have presented full band-structure calculations of both random and position-dependent induced potentials in Al and Si. The linear-response formalism has been used to obtain the potential and density induced by an external charge penetrating through an inhomogeneous periodic system. The random potential has been evaluated in the RPA for velocities below and above the plasmon-excitation threshold. For low velocities, differences between the FEG and ab initio calculations come from the sensitiveness to the band structure of the target. For higher velocities, we have shown that oscillations behind the ion have the same wavelength in both the FEG and the real crystal, due to the fact that plasmon excitation remains unchanged. Finally, we have investigated the position-dependent potential induced by projectiles incident along the $`\langle 100\rangle `$ direction in Al and along the widest channel in Si, the $`\langle 110\rangle `$ direction. Variations in the spatial distribution of the induced potential are more pronounced in the case of Si. Besides, we have shown that the LDA calculation does not properly account, at the velocities under study, for the spatial distribution of the induced potential along the channel.

## V Acknowledgments

We thank P. M. Echenique and A. G. Eguiluz for stimulating discussions. We acknowledge partial support by the University of the Basque Country, the Basque Unibertsitate eta Ikerketa Saila, and the Spanish Ministerio de Educación y Cultura.
# X-ray/TeV-gamma-ray observations of several strong flares of Mkn 501 during 1997 and implications

## 1. Introduction

The BL Lac object Mkn 501 ($`z=`$0.034) underwent a major outburst in the X-ray and VHE bands during 1997. Observations with the X-ray telescopes on board the BeppoSAX and the RXTE satellites showed, compared to earlier data, a flux increase at several keV by up to one order of magnitude and a Spectral Energy Distribution (SED) peaking frequently at very high energies, namely in the energy range above $`\sim `$25 keV (Pian et al. 1998a; Lamer & Wagner Lame:98 (1999)). In the VHE band Mkn 501 was the brightest known source in the sky, showing dramatic flux variability from a fraction up to approximately 10 times the flux of the Crab Nebula (Aharonian et al. 1999a, called A99a in the following; Aharonian et al. 1999c; Bhat et al. Bhat:97 (1997); Catanese et al. Cata:97 (1997); Djannati-Atai et al. Djan:99 (1999); Hayashida et al. Haya:98 (1998)), with photon energies up to $`\sim `$20 TeV (Aharonian et al. 1999b, called A99b in the following).

The high energy nonthermal continuum emission of BL Lac objects is widely believed to originate in a relativistic jet due to a population of high energy electrons, emitting synchrotron radiation at longer wavelengths and higher energy photons in Inverse Compton (IC) processes of the highest energy electrons with lower energy seed photons (see for recent reviews Coppi Copp:97 (1997); Sikora Siko:97 (1997); Ulrich et al. umu:97 (1997)). The origin of the IC seed photons has not yet been established. In so-called “Synchrotron Self Compton” (SSC) models the target photon population is dominated by low energy synchrotron photons (e.g. Bloom & Marscher Bloo:93 (1993); Ghisellini et al. GhisMD:1996 (1996); Mastichiadis & Kirk Mast:97 (1997)). In “External Compton” models the seed photons originate outside the emission volume and are, e.g., radiation from the nuclear continuum scattered or reprocessed in the broad-line regions (see e.g. Sikora et al. Siko:94 (1994)) or accretion disc photons (Dermer & Schlickeiser Derm:94 (1994)). Simultaneous observations of the highest energy synchrotron photons in the X-ray band and of the highest energy IC photons in the VHE band make it possible to infer complementary information about the rapidly evolving population of highest energy electrons. Besides studies of large populations of similar sources, only detailed studies of the temporal and spectral characteristics of individual sources over a broad wavelength region promise to yield sufficient constraints to unambiguously identify the mechanism responsible for the observed emission.

A further necessity to study the temporal emission characteristics in detail arises from the fact that the observed VHE $`\gamma `$-ray spectra are expected to be substantially modified by the intergalactic extinction due to pair production processes of the VHE photons with the Diffuse Extragalactic Background Radiation (DEBRA) (Nikishov Niki:62 (1962); Gould & Schréder Goul:65 (1966); Stecker et al. Stec:92 (1992)). The temporal analysis should yield enough redundant information not only to identify the emission mechanism but also to determine the jet parameters, making it possible to infer the electron spectrum from its X-ray synchrotron emission and to predict the intrinsic VHE spectrum. The comparison of the intrinsic and the observed spectra yields the intergalactic extinction and, as a consequence, an estimate of the DEBRA density in the relatively unconstrained 0.5 $`\mu `$m to 50 $`\mu `$m wavelength region.

The observations with the HEGRA stereoscopic system of Cherenkov telescopes of 1997 showed that the Mkn 501 time-averaged VHE energy spectrum extends to energies well above 10 TeV and made it possible to sample an energy spectrum with an exponential cutoff deeply into the exponential regime (A99b). From 500 GeV to $`\sim `$20 TeV the spectrum can be described by a power law model with an exponential cutoff: $`dF/dE\propto (E/1\mathrm{TeV})^{-0.9}\mathrm{exp}(-E/6.2\mathrm{TeV})`$. In this paper we will study in detail the question whether the cutoff is caused by intergalactic extinction or by the emission mechanism itself. Note that the HEGRA telescope system achieves an energy flux sensitivity $`\nu f_\nu `$ at 1 TeV of $`10^{-11}\mathrm{erg}/\mathrm{cm}^2\mathrm{s}`$ for 1 hour of observation time. Furthermore, it is possible to determine differential spectra with reasonable statistical accuracy for integration times of a few hours for sources with strong, Crab-like VHE $`\gamma `$-ray flux levels. Such Cherenkov telescope installations, together with pointed X-ray telescopes like BeppoSAX, RXTE, or ASCA, make it possible to study the temporal and spectral properties of BL Lac objects on time scales of hours. Earlier discussions of the implications of the 1997 Mkn 501 X-ray and VHE data can be found in Pian et al. (1998a), Tavecchio et al. (Tave:98 (1998)), A99a-b, Bednarek & Protheroe (Bedn:99 (1999)), Hillas (Hill:99 (1999)), and Konopelko et al. (1999b).

We present in this paper the analysis of a large data base of RXTE observations of Mkn 501 during 1997 and combine it with the simultaneous and nearly simultaneous VHE data from A99a,b. Based upon the spectral and temporal X-ray data presented in this paper and the spectral information measured with the HEGRA telescopes, we re-examine the constraints on SSC scenarios and their model parameters (as given e.g. in Tavecchio et al. Tave:98 (1998); Bednarek & Protheroe Bedn:99 (1999)). Furthermore, following an approach described in Coppi & Aharonian (Copp:99 (1999)) and Hillas (Hill:99 (1999)), we present SSC fits to the RXTE and BeppoSAX data which yield, together with the HEGRA data, information on the degree of intergalactic TeV gamma-ray extinction.

The paper is structured as follows. The data sample and the analysis of the RXTE and the HEGRA data are described in Sect. 2. The RXTE results and the correlations of the X-ray and VHE $`\gamma `$-ray flux levels and spectra are investigated in Sect. 3. In Sect. 4 we describe possible SSC model scenarios and SSC fits to the data. In Sect. 5 we summarize the results.

## 2. RXTE and HEGRA observations and analysis

We report on public RXTE observations which were performed from April 3rd, 1997 to July 14th, 1997. During April and May two observations were made each night. The integration times of the pointings were between 10 and 70 minutes per pointing. The RXTE satellite carries two pointed X-ray experiments (Bradt et al. Brad:93 (1993)), i.e. the Proportional Counter Array (PCA), sensitive in the 2 keV to 100 keV energy range with good sensitivity below 25 keV, and the High Energy X-ray Timing Experiment (HEXTE), with sensitivity in the energy range from 15 keV to 150 keV.
The comparison of the intrinsic and the observed spectra yields the intergalactic extinction and as a consequence an estimate of the DEBRA density in the relatively unconstrained 0.5 $`\mu `$m to 50 $`\mu `$m wavelength region. The observations with the HEGRA stereoscopic system of Cherenkov telescopes of 1997 showed that the Mkn 501 time-averaged VHE energy spectrum extends to energies well above 10 TeV and made it possible to sample an energy spectrum with an exponential cutoff deeply into the exponential regime (A99b). From 500 GeV to $``$20 TeV the spectrum can be described by a power law model with an exponential cutoff: $`dF/dE(E/1\mathrm{TeV})^{0.9}\mathrm{exp}(E/6.2\mathrm{TeV})`$. In this paper we will study in detail the question whether the cutoff is caused by intergalactic extinction or by the emission mechanism itself. Note that the HEGRA telescope system achieves an energy flux sensitivity $`\nu f_\nu `$ at 1 TeV of 10$`{}_{}{}^{11}\mathrm{erg}/\mathrm{cm}^2\mathrm{s}`$ for 1 hour of observation time. Furthermore, it is possible to determine differential spectra with reasonable statistical accuracy for integration times of a few hours for sources with strong, Crab like VHE $`\gamma `$-ray flux levels. Such Cherenkov telescope installations together with pointed X-ray telescopes like BeppoSAX, RXTE, or ASCA make it possible to study the temporal and spectral properties of BL Lac objects on time scales of hours. Earlier discussions of the implications of the 1997 Mkn 501 X-ray and VHE data can be found in Pian et al. (1998a ), Tavecchio et al. (Tave:98 (1998)), A99a-b, Bednarek & Protheroe (Bedn:99 (1999)), Hillas (Hill:99 (1999)), and Konopelko et al. (1999b ). We present in this paper the analysis of a large data base of RXTE observations of Mkn 501 during 1997 and combine it with the simultaneous and nearly simultaneous VHE data from A99a,b. Based on the spectral and temporal X-ray data presented in this paper and the spectral information measured with the HEGRA telescopes we re-examine the constraints on SSC scenarios and their model parameters (as given e.g. in Tavecchio et al. Tave:98 (1998); Bednarek & Protheroe Bedn:99 (1999)). Furthermore, following an approach described in Coppi & Aharonian (Copp:99 (1999)) and Hillas (Hill:99 (1999)) we present SSC fits to the RXTE and BeppoSAX data which yield, together with the HEGRA data, information on the degree of intergalactic TeV Gamma-ray extinction. The paper is structured as follows. The data sample and the analysis of the RXTE and the HEGRA data is described in Sect. 2. The RXTE results and the correlations of the X-ray and VHE $`\gamma `$-ray flux levels and spectra are investigated in Sect. 3. In Sect. 4 we describe possible SSC model scenarios and SSC fits to the data. In Sect. 5 we summarize the results. ## 2 RXTE and HEGRA observations and analysis We report on public RXTE observations which were performed from April 3rd, 1997 to July 14th, 1997. During April and May two observations were made each night. The integration time of all pointings were between 10 and 70 minutes per pointing. The RXTE satellite carries two pointed X-ray experiments (Bradt et al. Brad:93 (1993)), i.e. the Proportional Counter Array (PCA) sensitive in the 2 keV to 100 keV energy range with good sensitivity below 25 keV, and the High Energy X-ray Timing Experiment (HEXTE) with sensitivity in the energy range from 15 keV to 150 keV. 
Due to the very limited statistical information in the HEXTE data and possible problems with the absolute HEXTE flux normalization and response matrices we used only the 3-25 keV PCA data for the spectral fits. After applying the standard screening criteria, the spectra were extracted with FTOOLS 4.1 using bright source background models, and spectral fits were performed with XSPEC 10.0. A constant neutral hydrogen column density of 2 $`\times 10^{20}\mathrm{cm}^2`$ was chosen, a value which lies between the 21 cm line HI result of 1.73 $`\times 10^{20}\mathrm{cm}^2`$ (Stark et al. Star:92 (1992)) and the ROSAT spectral absorption result of $`2.87\times 10^{20}\mathrm{cm}^2`$ (Lamer et al. Lame:96 (1996)). Since the analysis is restricted to the energy region above 3 keV the chosen hydrogen column density has only a minor influence on the fitted spectra. The majority of measurements are satisfactorily fitted with single power law models; for days with long integration times and high count rates we need broken power law models to adequately describe the data. The HEGRA Cherenkov telescope system (Daum et al. Daum:97 (1997); Konopelko et al. 1999a ) is located on the Roque de los Muchachos on the Canary Island of La Palma (lat. 28.8 N, long. 17.9 W, 2200 m a.s.l.). During monitoring of Mkn 501 from March 16th to October 2nd, 1997, 110 h of high quality data was taken. The analysis tools and the estimate of the systematic errors on the differential $`\gamma `$-ray energy spectra are discussed in A99a and A99b. The results of fits to the differential spectra determined for 63 individual days are given in A99a; the 1997 time-averaged spectrum over the energy range form 500 GeV to 24 TeV is given in A99b. ## 3 The Mkn 501 X-ray and VHE $`\gamma `$-ray characteristics during the 1997 outburst Fig. 1 shows four Mkn 501 SEDs as determined from the 3-25 keV RXTE PCA data. It can be clearly recognized that falling SEDs with spectral indices ($`\alpha `$ from $`F_\nu \nu ^\alpha `$) larger than one, as well as increasing SEDs with spectral indices smaller than 1 have been observed. While the spectrum of MJD 50578 can be described satisfactorily by a pure power law model, the spectra of MJD 50545, 50554, and 50579 show evidence for spectral softening with increasing energy. For analyzing the trends in the spectral evolution we characterize the 3-25 keV spectral steepness by a single power law index. Note that during the 1997 outburst the position of the SED peak in the X-ray energy region varied substantially on a time scale of weeks: while for MJD 50545 (April 7th, Fig. 1a) the peak is found at $`5`$ keV, the RXTE data as well as the BeppoSAX data of MJD 50554 (April 16th, Fig. 1b) show a peak above 25 keV. Fig. 2a-b show the fitted RXTE X-ray energy flux at 10 keV and the 3-25 keV spectral index as function of time. The flux varies by a factor of three with shortest exponential increase/decay times<sup>1</sup><sup>1</sup>1 For two flux levels $`F_1,F_2`$ measured at times $`t_1`$, $`t_2`$, the exponential increase/decay time $`\tau `$ is defined as $`\tau (t_2t_1)/\mathrm{ln}(F_2/F_1)`$. of roughly one day. The spectral index varies from 0.7 to 1.1 with typical velocities of 0.01/h. We do not find any evidence for faster hardening than softening. The HEGRA energy flux at 2 TeV and the 1-5 TeV spectral index $`\alpha `$ as function of time are shown in Fig. 2c-d. On April 16th, RXTE as well as BeppoSAX observed an extremely bright X-ray flare. 
Unfortunately, no HEGRA observations were possible, and the flux at 2 TeV has here been estimated from lower energy observations with the CAT Cherenkov telescope (Djannati-Atai et al. Djan:99 (1999)). Although the VHE Mkn 501 flux showed variability by factors of up to $`\sim `$30 over the whole observation period, and by factors of up to 10 for the days with nearly simultaneous RXTE observations, the spectral index – determined with an accuracy of typically 0.1–0.3 – is rather stable and statistically consistent with a constant value. Aharonian et al. (A99a, A99b) do not find any indication for a correlation of the absolute flux and the spectral shape. From 1 TeV to 5 TeV the statistical accuracy of the observations is rather high, namely $`\sim `$0.05 in the spectral indices of flux-selected spectra. Between 500 GeV and 1 TeV and from 10 TeV to 15 TeV the statistical accuracy is rather modest, and changes of the spectral index in these energy regions by as much as 0.3 cannot be excluded. Remarkably, the mean 1-5 TeV spectral index of the data taken during phases of rapidly rising TeV flux differs by 0.10$`\pm `$0.06 from the mean index of the data taken during phases of rapidly falling TeV flux, weakly indicating a harder spectrum during phases of increasing fluxes. The TeV data shows shortest exponential increase/decay times of about 15 h; some evidence has been found for TeV variability on shorter time scales of a few hours (A99a; Quinn et al. Quin:99 (1999)).

On 25 days the time separation between the X-ray and the VHE $`\gamma `$-ray observations was smaller than 12 h, and for 22 days it was smaller than 6 h. X-ray/VHE data pairs with time delays of less than 6 h are marked in Fig. 2 by solid symbols. Since the X-ray and the VHE variability is found on time scales $`\gtrsim `$1/2 day, and the spectral indices changed typically by less than 0.01 units per hour, these measurements are well suited for a meaningful correlation analysis. Note that data with a clear signature of a substantially increasing or decreasing flux are of utmost importance for extracting information about the acceleration and emission mechanisms. The X-ray/VHE observations during April, May, and June cover a considerable number ($`\geq 5`$) of distinct strong flares, including several phases of substantial flux increase and decrease.

The X-ray data shows a tight correlation of the absolute flux and the spectral index (Fig. 3). An obvious tendency is that higher fluxes are accompanied by harder spectra. Compared to the April and May data, relatively high X-ray fluxes during July were observed with – in comparison to the correlation shown by the April and May data – rather soft spectral indices, indicating that conditions in the source might have changed from April/May to July. A possible time lag between the 25 keV flux and the 3 keV flux has been searched for using the Discrete Correlation Function (DCF) (Edelson & Krolik Edel:88 (1988)). The analysis on a time scale of days shows that the time lag is smaller than one day (Fig. 4a). On a time scale of hours the DCF favors a time lag of the 3 keV flux behind the 25 keV flux of smaller than $`\sim `$15 hours (Fig. 4b). Note that here and in the following the statistical error bars on the DCF have been computed according to the prescription of Edelson & Krolik (Edel:88 (1988)) from the deviations of the DCFs of individual data pairs from the mean DCF value of the corresponding bin, thereby taking the quality of the correlation into account.
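Since the DCF underlies all the time-lag results quoted in this section, a minimal sketch of the algorithm may be helpful (Python with NumPy; function and variable names are ours, and the measurement-noise correction of the full prescription is omitted):

```python
import numpy as np

def dcf(t_a, f_a, t_b, f_b, bin_edges):
    """Edelson & Krolik (1988) discrete correlation function for two
    unevenly sampled light curves. Positive lags mean that f_b lags f_a."""
    udcf = np.outer(f_a - f_a.mean(), f_b - f_b.mean()) / (f_a.std() * f_b.std())
    lag = np.subtract.outer(t_b, t_a).T        # lag[i, j] = t_b[j] - t_a[i]
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        pairs = udcf[(lag >= lo) & (lag < hi)]
        if pairs.size > 1:
            # bin error from the scatter of the individual pairs, as above
            out.append((pairs.mean(),
                        pairs.std(ddof=1) / np.sqrt(pairs.size - 1)))
        else:
            out.append((np.nan, np.nan))
    return np.array(out)
```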
The errors on the DCF values resulting from the statistical errors of the measurements alone are much smaller. Fig. 5 shows the correlation coefficient of the X-ray flux at energy $`E_X`$ and the flux at 2 TeV as a function of $`E_X`$. Here, the fluxes at $`E_X`$ were estimated from the power law fits to the RXTE data. For X-ray energies above several keV we find an excellent correlation, with a correlation coefficient of about 0.8–0.9. The statistical errors on the DCF do not allow us to decide whether the correlation of the 3 keV and the 2 TeV or the 25 keV and the 2 TeV flux levels is better. Fig. 6 shows the X-ray/TeV correlations for $`E_X`$ equal to 3 keV and 25 keV. A quadratic relation between the keV and the TeV fluxes fits the data rather well. A DCF analysis of the 25 keV and 2 TeV fluxes shows no time lag on a time scale of days (Fig. 7a). An analysis on an hourly time scale indicates that the 25 keV variations happen rather simultaneously with the TeV variations, or even lead them by several hours (Fig. 7b). Although a time lag of the IC radiation relative to the synchrotron radiation is generally expected in SSC scenarios (Coppi & Aharonian Copp:99 (1999)), the DCF of Fig. 7b does not yet allow definitive conclusions about the existence of such a time lag in Mkn 501.

One of the most interesting questions is whether the X-ray and VHE spectral indices are correlated. The X-ray spectral index for all nights with VHE observations varies only by 0.25, which equals to a good approximation the median accuracy of the HEGRA spectral index estimates. We draw the very important conclusion that VHE spectral variations with spectral changes comparable to the ones observed by RXTE are not excluded by the HEGRA data. The statistical accuracy of the HEGRA spectral estimates can be improved by determining average spectra for several days. A closer look at the RXTE spectral indices, however, shows that the mean spectral index of a sufficiently large number of days varies by not more than $`\sim `$0.1, a spectral difference which is again difficult to assess at TeV energies. In our best attempt, we grouped together HEGRA observations according to an RXTE spectral index below and above 0.85, with mean spectral indices which differ by 0.1. The HEGRA data gives a 1-5 TeV spectral index of 1.20$`\pm `$0.02<sub>stat</sub> for the data sample with harder X-ray spectra, and 1.28$`\pm `$0.04<sub>stat</sub> for the data sample with softer X-ray spectra.
Note that the quadratic dependence of the VHE (probably IC) $`\gamma `$-ray fluxes on the X-ray (probably synchrotron) fluxes shown in Fig. 6 favors an SSC scenario in which the production of seed photons is closely connected to the production of the nonthermal electron population. ### 4.1 Constraints on the model parameters In the following a “one-zone model” is used for simplicity; the synchrotron and IC radiations originate in a spherical emission volume homogeneously filled with an isotropic population of relativistic electrons which move in a magnetic field characterized by its mean value $`B`$. In the first scenario (“scenario 1”), the emission volume is characterized by a constant Doppler factor ($`\delta _\mathrm{j}^{-1}=\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )`$, with $`\mathrm{\Gamma }`$ the bulk Lorentz factor of the emitting volume, $`\beta `$ its velocity in units of $`c`$, and $`\theta `$ the angle to the observer), a rather constant magnetic field $`B`$, and a slightly variable radius. The spectrum of accelerated electrons does not change with time; only the variability of the rate of accelerated particles entering the emission region causes the observed X-ray and VHE $`\gamma `$-ray variability. The X-ray synchrotron spectrum hardens when a large amount of freshly accelerated particles enters the emission region (i.e. during the rising stage of a flare) and subsequently steepens due to the cooling of these particles (i.e. during the decaying stage of a flare). A continuously changing X-ray spectral index is expected in this scenario if the cooling time of electrons responsible for the X-ray radiation is comparable to the flux variability time scale. We envisage that the electrons responsible for the low energy X-ray emission (below 1 keV or so) have no time to cool before they escape the emission region, and as a consequence the spectra are always very hard in this energy region. By contrast, at energies above ∼25 keV soft spectra are most often observed due to the rapid cooling of the responsible electrons. Diffusive shock acceleration predicts a power law spectrum of nonthermal particles $`dN_\mathrm{e}/dE\propto E^{-p}`$ with $`p\simeq 2`$ which is steepened at higher energies by one unit ($`p^{\prime }\simeq 3`$) due to synchrotron cooling. Therefore, the produced synchrotron radiation has a spectrum $`F_\nu \propto \nu ^{-\alpha }`$ with $`\alpha =(p-1)/2\simeq 0.5`$ in the low energy region and with $`\alpha ^{\prime }=(p^{\prime }-1)/2\simeq 1`$ in the high energy region. Indeed, the Mkn 501 spectra observed with BeppoSAX during April 1997 show for all three days a spectrum $`F_\nu `$ with a spectral index $`\alpha =0.5`$ below several keV, and $`\alpha ^{\prime }\simeq 1`$ above ∼25 keV. Furthermore, observations with OSSE taken between April 9th and April 15th, 1997 showed a hard 50 keV to ∼470 keV spectrum characterized by a spectral index $`\alpha ^{\prime }`$ of 1.1 (Catanese et al. 1997). In this scenario the X-ray spectral index is determined by the temporal evolution of the density of emitting particles rather than by the density itself. As a consequence, the correlation of the X-ray and VHE spectral indices should be tighter than the correlation of the absolute VHE fluxes and VHE spectral indices, in accord with the results shown in the previous section.
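The cooling-break bookkeeping above can be restated numerically (an illustrative sketch; the index values are those quoted in the text):

```python
# Synchrotron spectral index implied by an electron spectrum dN_e/dE ~ E^-p,
# with the cooled index steepened by one unit above the break.
def synchrotron_index(p):
    """alpha for F_nu ~ nu^-alpha from an electron index p."""
    return (p - 1.0) / 2.0

p_injected = 2.0            # uncooled index from diffusive shock acceleration
p_cooled = p_injected + 1   # steepened by synchrotron cooling

print(synchrotron_index(p_injected))  # 0.5, as observed below a few keV
print(synchrotron_index(p_cooled))    # 1.0, as observed above ~25 keV
```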
Assuming an escape time of the low energy electrons smaller than several days, the IC seed photon density approximately follows the rate of injected particles, and a faster-than-linear (but not necessarily quadratic) rise of the VHE fluxes for increasing X-ray fluxes is a natural consequence of the changing injection rate of accelerated particles. The following alternative scenario (“scenario 2”) seems attractive since it automatically accounts for the $`<1`$ keV X-ray spectrum with a spectral index $`\alpha \simeq 0.5`$, repeatedly observed with BeppoSAX (Pian et al. 1998a,b). The basic difference from scenario 1 is a minimum energy $`E_1`$ of accelerated electrons which is responsible for the break of the synchrotron spectrum. A large magnetic field leads to rapid cooling and hence steepening of initial electron spectra down to the minimum energy $`E_1`$. Below $`E_1`$ electrons which underwent cooling form a spectrum with the canonical spectral index $`p^{\prime \prime }=2`$, independent of the value $`p`$ of the initial spectrum. Above $`E_1`$ the electron spectrum is characterized by a spectral index $`p^{\prime }=p+1\simeq 3`$ for $`p\simeq 2`$. In this scenario one naturally obtains a synchrotron spectrum with $`\alpha =(p^{\prime \prime }-1)/2=0.5`$ from the electrons with energies below $`E_1`$. Furthermore, the scenario allows for a high magnetic field which assures fast cooling (relative to the flux variability time scale) of the electrons near the peak of the SED and thus a high radiation efficiency. Spectral changes in the X-ray energy range are caused in this scenario by the parameters $`E_1`$ and $`p`$ varying with time. We explored in detail a third scenario (“scenario 3”) in which the observed characteristics are caused by a time-independent electron spectrum, but where all three parameters ($`B`$, $`R`$, and the density of emitting particles) vary substantially. A changing magnetic field together with a rather stable (e.g. due to cooling on small time scales) electron spectrum could be the reason for large spectral changes in the X-ray band accompanied by a stable VHE spectrum. Note, however, that it is necessary in this scenario to introduce fixed relationships between the parameters characterizing the emission volume (i.e. between $`R`$ and $`B`$) in order to explain the very tight X-ray/VHE correlation. Further studies in this direction are underway. Note that an alternative scenario in which the spectral variability as well as the X-ray and VHE intensity variations are caused by varying only the relativistic Doppler factor $`\delta _\mathrm{j}`$ of the emission volume can be excluded from the X-ray data alone. As a consequence of the Lorentz invariance of $`F_\nu /\nu ^3`$, a change of the energy at which the Mkn 501 SED peaks by a factor of $`a`$ would be accompanied by a change of the observed flux by $`a^{3+\alpha }`$, where $`\alpha `$ is the spectral index at the frequency of observations ($`F_\nu \propto \nu ^{-\alpha }`$). The spectra of Fig. 1 show a change of the SED peak position by approximately a factor of 5, but clearly without the corresponding increase in luminosity. Deciding on one of the first three scenarios will be crucial for a final understanding of the Mkn 501 SEDs. In the following we will summarize the constraints on the model parameters.
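The arithmetic behind excluding the Doppler-factor-only scenario is simple (illustrative numbers, taken from the statements above):

```python
# A shift of the SED peak energy by a factor a should be accompanied by a
# flux change of a**(3 + alpha) if only the Doppler factor varies.
a = 5.0       # observed shift of the SED peak position (Fig. 1)
alpha = 1.0   # approximate spectral index near the observing frequency

print(a ** (3 + alpha))  # 625 -- far larger than the observed flux changes
```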
The significant VHE $`\gamma `$-ray variability on timescales $`\mathrm{\Delta }t_{\mathrm{obs}}`$ of approximately half a day implies the well-known upper limit on the radius of the emitting volume: $$R<\delta _\mathrm{j}c\mathrm{\Delta }t_{\mathrm{obs}}=1.3\times 10^{16}(\delta _\mathrm{j}/10)(\mathrm{\Delta }t_{\mathrm{obs}}/12\text{h})\text{cm}$$ (1) Given this limit, the Doppler factor is constrained by the condition of a negligible or modest absorption of high energy photons in pair production processes with lower energy synchrotron photons inside the emission volume. Extrapolating the synchrotron spectrum observed by BeppoSAX at soft X-ray energies (≲1 keV) according to $`F_\nu \propto \nu ^{-\alpha }`$ with $`\alpha =0.5`$ towards lower frequencies, we obtain (see e.g. Svensson 1987) an optical depth as a function of the photon energy $`E_\gamma `$ (frame of observer) of the gamma-ray: $$\tau _{\gamma \gamma }\simeq (\mathrm{\Delta }t_{\mathrm{obs}}/12\mathrm{h})^{-1}(E_\gamma /10\mathrm{TeV})^{0.5}(\delta _\mathrm{j}/4.8)^{-5}.$$ (2) Here and in the following we assumed a Hubble constant of 60 km s<sup>-1</sup> Mpc<sup>-1</sup>. If the (long wavelength) target photon density in the source changes as the observed X-ray and VHE luminosity, one would expect a steepening VHE spectrum for large VHE luminosities. The non-observation of such a correlation with an accuracy of better than 0.1 in the VHE spectral index limits the optical depth at 10 TeV to values below ∼1/4, resulting in a lower limit on the Doppler factor of $`\delta _\mathrm{j}`$ ≳ 6.3. The X-ray and VHE $`\gamma `$-ray flux levels (Fig. 6) indicate that the electron cooling is dominated by synchrotron cooling rather than by IC cooling. For the case that the spectral evolution after flares is dominated by synchrotron cooling, Takahashi et al. (1996) and Kirk et al. (1998) discussed the possibility of constraining the magnetic field from observations of time lags between high and low energy synchrotron radiation. Due to the energy dependent cooling times of electrons with energy $`E_\mathrm{e}`$: $`t_\text{S}=\left[(4/3)\sigma _\text{T}c(B^2/8\pi )E_\mathrm{e}/m_\mathrm{e}^2c^4\right]^{-1}`$ with $`\sigma _\text{T}`$ the Thomson cross section, one expects that after a flare the lower energy synchrotron radiation (from electrons of energy $`E_\mathrm{l}`$) lags the higher energy synchrotron radiation (from electrons of energy $`E_\mathrm{h}`$) by $`\mathrm{\Delta }t_\mathrm{s}=t_\mathrm{s}(E_\mathrm{l})-t_\mathrm{s}(E_\mathrm{h})`$. As a consequence, the DCF of the lower and higher energy radiation should show a shoulder reflecting this time lag. Using the delta-functional approximation for the energy of synchrotron photons $`E_\text{S}`$ produced by electrons of energy $`E_\mathrm{e}`$: $$E_\text{S}\simeq 200(\delta _\mathrm{j}/10)(B/\text{1 G})(E_\mathrm{e}/\text{1 TeV})^2\mathrm{keV},$$ (3) and a shoulder of the DCF of the 3 keV and the 25 keV radiation up to $`\mathrm{\Delta }t_\text{S}\simeq 15`$ h (Fig. 4) yields: $$B\simeq 0.025(\mathrm{\Delta }t_\text{S}/\text{15 h})^{-2/3}(\delta _\mathrm{j}/10)^{-1/3}G.$$ (4) Since several other effects are expected to broaden the DCF (e.g. delays due to the size of the emission region), this value of the magnetic field should be regarded as a rough lower limit.
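The quoted numbers can be spot-checked as follows (an illustrative sketch; all scalings are taken from Eqs. (1), (2) and (4) above, with the rounded prefactors quoted there):

```python
# Numerical spot checks of Eqs. (1), (2) and (4).
c = 3.0e10                      # speed of light [cm/s]

def radius_limit(delta_j, dt_obs_h):
    """Eq. (1): causality limit on the source radius [cm]."""
    return delta_j * c * dt_obs_h * 3600.0

def tau_gg(delta_j, dt_obs_h=12.0, E_gamma_TeV=10.0):
    """Eq. (2): internal pair-production optical depth."""
    return (dt_obs_h / 12.0) ** -1 * (E_gamma_TeV / 10.0) ** 0.5 \
        * (delta_j / 4.8) ** -5

def B_lower_limit(dt_s_h=15.0, delta_j=10.0):
    """Eq. (4): magnetic field implied by a cooling delay dt_s [G]."""
    return 0.025 * (dt_s_h / 15.0) ** (-2.0 / 3.0) * (delta_j / 10.0) ** (-1.0 / 3.0)

print(radius_limit(10, 12))   # ~1.3e16 cm
print(4.8 * 4.0 ** 0.2)       # ~6.3: Doppler factor giving tau_gg(10 TeV) = 1/4
print(tau_gg(6.33))           # ~0.25, consistency check
print(B_lower_limit())        # 0.025 G
```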
In the remainder of this subsection we discuss which X-ray and VHE photons are produced by the same electrons and are hence expected to show correlated intensity variations. We find that the IC photons near and above the energy at which the gamma-ray SED peaks are mainly produced in Thomson scatterings as well as in the Thomson to Klein-Nishina (KN) transition regime. This can qualitatively be understood as follows (a more quantitative discussion is given in the next subsection, see also Zdziarski 1989). On the one hand, due to a probable seed photon energy flux rising approximately as $`\nu F_\nu \propto \nu ^{0.5}`$, the Thomson processes yield the largest energy flux of IC photons for the highest seed photon frequencies. On the other hand, due to a seed photon number spectrum falling steeply like $`dN/d\nu \propto \nu ^{-1.5}`$ with increasing frequency, and due to the suppression of KN scatterings inversely proportional to the centre-of-momentum energy of seed photon and electron, the KN production of VHE photons is strongly suppressed. Electrons of energy $`E_\mathrm{e}`$ therefore produce the largest IC power per logarithmic bandwidth when interacting with photons of energy (all energies in the frame of the emitting volume) $$\epsilon _0\simeq \frac{1}{4}(m_\mathrm{e}c^2)^2/E_\mathrm{e}\simeq \frac{1}{16}(E_\mathrm{e}/1\mathrm{TeV})^{-1}\mathrm{eV}$$ (5) producing $`\gamma `$-rays of energy (frame of observer): $$E_{\text{IC}}\simeq \delta _\mathrm{j}\eta E_\mathrm{e}.$$ (6) Obviously, for IC scattering in the extreme KN regime $`\eta \simeq 1`$; for Thomson scattering on photons of energy $`\epsilon _0`$ one naively expects from the delta-functional approximation of the IC cross section (e.g. Ginzburg & Syrovatski 1964) a value $`\eta `$ of 1/3. From our simulations we find that electrons of energy $`E_\mathrm{e}`$ indeed produce the maximum energy per logarithmic bandwidth at IC photon energies given by Eq. (6) with $`\eta \simeq 1/3`$ near the peak of the SED. Note that in our models the synchrotron self-absorption cutoff is typically found between 10<sup>11</sup> Hz and 10<sup>12</sup> Hz (laboratory frame), well below the frequencies of the most important seed photons. Eqs. (3) and (6) can be combined to derive the relation between the observed synchrotron and IC photon energies produced by electrons of the same energy region: $$E_{\text{IC}}/\mathrm{TeV}\simeq \left(\frac{\delta _\mathrm{j}/10}{B/\text{0.05 G}}\right)^{1/2}(E_\text{S}/\text{1 keV})^{1/2}.$$ (7) It can be recognized that the “mapping” between synchrotron and IC photon energies is only a weak function of $`\delta `$ and $`B`$. It should be noted that Eqs. (3), (6), and (7) are only rough approximations; in reality, electrons of a certain energy produce synchrotron and IC photons with energies scattered over more than an order of magnitude. ### 4.2 SSC fits to multiwavelength data Figs. 8 and 9 show simultaneous and nearly simultaneous RXTE, BeppoSAX, and HEGRA data taken on April 7th and April 13th, 1997, respectively, together with the results from the SSC models. Note that the RXTE spectrum from April 7th is, with a spectral index of 1.1, among the softest spectra observed with RXTE in 1997, while the spectrum of April 13th has a rather average spectral index of 0.83. The SSC model is computed with the time-independent part of the code described by Coppi (1992). This time-independent part is similar to the codes described e.g. by Inoue & Takahara (1996) and Kataoka et al.
(1999). For a given set of parameters ($`\delta _\mathrm{j}`$, $`B`$, $`R`$) the intrinsic VHE spectrum can be computed by deriving the spectrum of the emitting electron population from the synchrotron spectrum (Coppi & Aharonian 1999). This “inversion” of the synchrotron spectrum yields unique results for sufficiently smooth electron spectra. Unfortunately, the predicted VHE spectrum depends partially on the electron spectrum outside the energy region constrained by the X-ray observations. Given the observational evidence and the theoretical framework outlined above, we use the following probable assumptions about the synchrotron spectrum outside the energy region covered by the RXTE observations, namely below 0.1 keV and above 25 keV. The RXTE observations, as well as the OSSE observations from April 9th to April 15th, 1997, show spectra with spectral indices of ≲1.1. We accordingly choose a spectrum of accelerated electrons $`dN_\mathrm{e}/dE\propto E^{-p}`$ with $`p=2.2`$. Electrons which did not have time to cool produce a synchrotron spectrum at energies below 0.1 keV with $`F_\nu \propto \nu ^{-0.6}`$. Above 25 keV the cooled electrons give a spectrum with $`F_\nu \propto \nu ^{-1.1}`$. First we assume a Doppler factor $`\delta _\mathrm{j}=25`$ and a magnetic field $`B=0.037`$ G, which lie well above the lower limits from Eqs. (2) and (4); later we show how the results change when varying these parameters over a wide range. For these parameter values the 3-25 keV radiation is produced by roughly the same electrons as the 3-9 TeV emission (see Eq. 7), in accord with the tight X-ray/VHE correlation shown in Figs. 5 and 6. The synchrotron spectrum probably cuts off at some energy. The BeppoSAX and the OSSE observations do not show any indications for such a cutoff up to energies of at least 150 keV. Importantly, for the adopted values of $`\delta `$ and $`B`$ the location of this cutoff does not influence the SSC prediction of the IC spectrum up to energies of 25 TeV, as long as the cutoff does not influence the synchrotron spectrum below ∼150 keV (compare Eq. (7)). So in the following we use a rather high exponential cutoff at 10 MeV. Note that 10 MeV synchrotron photons can be produced by ∼20 TeV electrons (Eq. (3)), which is still below the maximum energy of electrons expected for the case of shock acceleration with Bohm diffusion (e.g. Hillas 1999). The predicted IC to synchrotron luminosity ratio depends on the assumed radius of the emission volume as 1/$`R^2`$. Assuming negligible intergalactic extinction at 500 GeV, we determine a radius of $`1.5\times 10^{16}`$ cm and $`1.1\times 10^{16}`$ cm for the April 7th and April 13th data, respectively. Doppler factors below ∼15 result in too large radii, which conflict with the upper limit from the flux variability time scale (Eq. (1)). Note however that there could actually be some intergalactic extinction at 500 GeV; one should keep in mind that the fit predicts the shape rather than the absolute normalization of the intrinsic VHE spectrum. Finally we turn to the SSC model spectra given in Fig. 8 and Fig. 9 (solid lines). The model calculations show that the break of the electron spectrum observed in the synchrotron radiation at X-ray energies is expected to give rise to a significant curvature of the IC spectrum in the TeV energy region. Obviously, the scenario favors a very small or alternatively an energy-independent intergalactic extinction of the radiation up to approximately 5 TeV.
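The adopted parameter values can be checked against Eqs. (3) and (7) directly (an illustrative sketch; the functions below simply encode those two approximations):

```python
# Spot checks of the photon-energy "mapping" used above.
def E_ic_TeV(E_s_keV, delta_j, B_gauss):
    """Eq. (7): IC photon energy [TeV] from the electrons that emit
    synchrotron photons of energy E_s [keV]."""
    return ((delta_j / 10.0) / (B_gauss / 0.05)) ** 0.5 * E_s_keV ** 0.5

def E_e_TeV(E_s_keV, delta_j, B_gauss):
    """Inverted Eq. (3): electron energy [TeV] emitting synchrotron
    photons of energy E_s [keV]."""
    return (E_s_keV / (200.0 * (delta_j / 10.0) * B_gauss)) ** 0.5

delta_j, B = 25.0, 0.037
print(E_ic_TeV(3.0, delta_j, B), E_ic_TeV(25.0, delta_j, B))
# ~3.2 and ~9.2 TeV: the 3-25 keV band maps onto roughly 3-9 TeV
print(E_e_TeV(1.0e4, delta_j, B))
# ~23 TeV electrons for the 10 MeV (= 1e4 keV) synchrotron cutoff
```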
The observed time-averaged Mkn 501 spectrum shows additional steepening above 5 TeV which is not found in the predicted spectrum. Possibly, this additional steepening is caused by the intergalactic extinction. The BeppoSAX, OSSE, and RXTE observations show that the X-ray spectrum turns over from a spectral index $`\alpha \simeq 0.6`$ below several keV to $`\alpha ^{\prime }\simeq 1.1`$ above several keV. As can be seen in Figs. 8 and 9, the corresponding electron spectrum produces an IC component with a power-law behavior well below 500 GeV (spectral index $`\alpha _{\text{IC}}=0.6`$) and well above several TeV (spectral index $`\alpha _{\text{IC}}^{\prime }=1.6`$). The power-law shape of a low and high energy IC component in an energy region where the KN effect is not negligible was already discussed by Tavecchio et al. (1998). The appearance of the IC power-law components can be understood by noting that the relative contribution to the production of IC photons from scatterings in the Thomson regime and from scatterings in the KN regime does not depend on the energy of the responsible electrons, as long as the seed photon and electron populations are described by power-law spectra in the relevant energy ranges (this follows from the properties of the KN cross section, namely that the cross section as well as the fraction of electron energy transferred to the IC photon depend only on the centre-of-momentum energy of seed photon and electron, i.e. on the product of the seed photon and electron energies in the comoving frame of the emission volume). As a consequence, the slope of the produced IC component can be derived by considering scattering processes in the Thomson regime alone and using the $`\delta `$-functional approximation of the IC cross section (see also Tavecchio et al. 1998). The rate of produced IC photons is given by $$\dot{n}_{\text{IC}}(E_{\text{IC}})=c\int _{E_1}^{E_2}\int n_\mathrm{e}(E_\mathrm{e})n_{\mathrm{ph}}(\epsilon )\sigma (E_\mathrm{e},\epsilon ,E_{\text{IC}})d\epsilon dE_\mathrm{e},$$ (8) where $`n_{\text{IC}}`$, $`n_\mathrm{e}`$, and $`n_{\mathrm{ph}}`$ are the numbers of IC photons, high energy electrons, and seed photons per unit volume and per energy interval. Using $`dn_\mathrm{e}/dE_\mathrm{e}\propto E_\mathrm{e}^{-p}`$ for $`E_\mathrm{e}`$ between the energies $`E_1`$ and $`E_2`$, $`dn_{\mathrm{ph}}/d\epsilon \propto \epsilon ^{-(\alpha +1)}`$ over the whole relevant energy region, and the rough approximation for the cross section for the production of IC photons $`\sigma =\sigma _\text{T}\delta \left[E_{\text{IC}}-\frac{4}{3}\epsilon \left(\frac{E_\mathrm{e}}{m_\mathrm{e}c^2}\right)^2\right]`$ we get $$\dot{n}_{\text{IC}}(E_{\text{IC}})\propto E_{\text{IC}}^{-(\alpha +1)}\int _{\xi E_{\text{IC}}}^{E_2}E_\mathrm{e}^{-(p-2\alpha )}dE_\mathrm{e}.$$ (9) The lower bound of the integral, with $`\xi \simeq 3`$, assures scattering in the Thomson regime; the additional contribution of IC photons produced in scatterings in the KN (transition) regime has formally been absorbed into a multiplicative correction factor, which is (as discussed above) to good approximation constant as long as $`\xi E_{\text{IC}}\ll E_2`$. Although Eq. (9) is a crude approximation, we find that it predicts the spectral slopes satisfactorily for a wide range of $`p`$ and $`\alpha `$ values provided $`\alpha >0`$ (see Zdziarski 1989, who discusses in detail the applicability of the $`\delta `$-functional approximation).
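For completeness, one intermediate step (in our notation, not spelled out above): carrying out the $`\epsilon `$ integration in Eq. (8) with the $`\delta `$-function gives $$\int n_{\mathrm{ph}}(\epsilon )\sigma d\epsilon =\sigma _\text{T}\frac{3}{4}\left(\frac{m_\mathrm{e}c^2}{E_\mathrm{e}}\right)^2n_{\mathrm{ph}}(\epsilon _\delta ),\qquad \epsilon _\delta =\frac{3}{4}E_{\text{IC}}\left(\frac{m_\mathrm{e}c^2}{E_\mathrm{e}}\right)^2,$$ and with $`n_{\mathrm{ph}}(\epsilon _\delta )\propto \epsilon _\delta ^{-(\alpha +1)}\propto E_{\text{IC}}^{-(\alpha +1)}E_\mathrm{e}^{2(\alpha +1)}`$ the integrand of Eq. (8) becomes $`\propto E_{\text{IC}}^{-(\alpha +1)}E_\mathrm{e}^{2(\alpha +1)-2-p}=E_{\text{IC}}^{-(\alpha +1)}E_\mathrm{e}^{-(p-2\alpha )}`$, which reproduces Eq. (9).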
For an electron population with spectral index $`p`$ interacting with its own synchrotron radiation, i.e. $`\alpha =(p-1)/2`$, one obtains the well-known result $$\dot{n}_{\text{IC}}(E_{\text{IC}})\propto E_{\text{IC}}^{-(\alpha +1)}\mathrm{ln}(E_2/(\xi E_{\text{IC}})),$$ (10) namely that the IC spectrum has approximately the same spectral index $`\alpha _{\text{IC}}`$ as the synchrotron spectrum. A cooled electron spectrum with $`p^{\prime }\simeq 3.2`$ which interacts with the seed photon spectrum of the uncooled electrons with $`\alpha \simeq 0.6`$ yields $$\dot{n}_{\text{IC}}(E_{\text{IC}})\propto E_{\text{IC}}^{-(\alpha _{\text{IC}}^{\prime }+1)}$$ (11) with $`\alpha _{\text{IC}}^{\prime }=p^{\prime }-\alpha -1\simeq 1.6`$. Remarkably, Eqs. (10) and (11) show that in the case of SSC scenarios of TeV blazars, changes of the electron spectral index are reflected in an approximately one-to-one relationship in changes of the spectral index of the produced IC component. Based on qualitative arguments about the stability of the Mkn 501 VHE spectrum, Konopelko et al. (1999b) inferred an intrinsic TeV spectrum following a pure power law with a spectral index $`\alpha ^{\prime }`$ near one. Given the spectral indices $`p=2.2`$ and $`p^{\prime }=3.2`$ of the low and the high energy electrons respectively, as deduced from the X-ray spectra, Eq. (9) as well as our simulations show that it is extremely difficult to obtain such an IC spectrum in an SSC scenario, since the resulting spectrum is characterized by $`\alpha _{\text{IC}}=0.6`$ or $`\alpha _{\text{IC}}^{\prime }=1.6`$, i.e. it is too hard or too soft by ∼0.5 units in the spectral index. Furthermore, in the scenarios discussed here, the narrow frequency range where the IC spectrum turns over ($`\alpha _{\text{IC}}\simeq 1`$) is a region where we expect more rather than less spectral variability than below and above this frequency range. Since (i) the RXTE spectral index of April 13th approximately represents the mean value during the 1997 RXTE observations of this source and (ii) the HEGRA VHE spectra did not vary during 1997 within statistical errors, we think it is justified to compare the predicted VHE spectrum of this day with the observed 1997 time-averaged VHE spectrum. The predicted VHE spectrum mainly depends on the parameter $`(\delta _\mathrm{j}/B)^{1/2}`$ determining the ratio of the IC SED peak frequency and the square root of the synchrotron SED peak frequency (Eq. (7)). The Doppler factor as well as the magnetic field are poorly constrained towards higher values. These large uncertainties, however, have only a limited impact on the model fits. Combinations of large Doppler factors and small magnetic fields are constrained by the “mapping relation” between X-ray and VHE photon energies given in Eq. (7). Using an extremely high Doppler factor of $`\delta =100`$ together with the limiting lower value of the magnetic field from Eq. (4) of 0.012 G results in a mapping of the 1 TeV to 5 TeV radiation with the 0.02 keV to 0.6 keV radiation, which seems unlikely, given the BeppoSAX, HEGRA, and CAT observations during April 1997 which showed dramatic variability above several keV and at TeV energies but almost no variability at all below 1 keV. The dashed line in Fig. 9 shows the predicted VHE spectrum for this limiting upper value of the parameter $`(\delta /B)^{1/2}`$. The difference between the predicted and the observed $`\gamma `$-ray spectrum can be interpreted as an upper limit on the increase of intergalactic extinction with $`\gamma `$-ray energy.
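Both numerical claims of this paragraph can be verified in a few lines (illustrative only; the relations are Eqs. (11) and (7) above):

```python
# IC index of the cooled component, Eq. (11):
def alpha_ic_cooled(p_prime, alpha):
    return p_prime - alpha - 1.0

print(alpha_ic_cooled(3.2, 0.6))  # 1.6

# Invert Eq. (7): synchrotron photon energy [keV] made by the same
# electrons that produce IC photons of energy E_ic [TeV].
def E_synch_keV(E_ic_TeV, delta_j, B_gauss):
    return E_ic_TeV ** 2 * (B_gauss / 0.05) / (delta_j / 10.0)

# delta = 100 and B = 0.012 G map 1-5 TeV onto the soft X-ray band:
print(E_synch_keV(1.0, 100.0, 0.012))  # ~0.024 keV
print(E_synch_keV(5.0, 100.0, 0.012))  # ~0.6 keV
```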
Additionally, the magnetic field is not constrained towards higher values. The dotted line in Fig. 9 shows the effect of increasing the magnetic field from 0.037 G to a limiting value of 0.12 G (which has the same effect on the shape of the predicted IC spectrum as decreasing the Doppler factor to a value of $`\delta \simeq 8`$). Larger magnetic field values (more precisely, smaller values of $`(\delta _\mathrm{j}/B)^{1/2}`$) are highly improbable since the resulting intrinsic VHE spectra are steeper than the observed ones in the energy range above ∼1 TeV. Intergalactic extinction which decreases for increasing energy is rather unlikely given our present understanding and knowledge of the DEBRA density. Our upper limit on the magnetic field is in good agreement with earlier results of Hillas (1999). Even for $`B=0.12`$ G, intergalactic extinction is needed to explain the data above ∼10 TeV. Precision measurements of TeV blazar spectra in the energy region from 50 GeV to several hundred GeV, where only negligible intergalactic extinction is expected, would allow one to select the right value of $`(\delta _\mathrm{j}/B)^{1/2}`$ for individual sources. Such measurements will become feasible with next-generation Cherenkov telescope systems such as HESS and VERITAS and with the satellite instrument GLAST. Note that in our SSC scenarios the energy density of electrons strongly dominates over the energy density of the magnetic field for all admissible values of $`B`$ and $`\delta _\mathrm{j}`$ (see also Inoue & Takahara 1996). ### 4.3 Implications for the intergalactic absorption The increase of the optical depth due to the DEBRA above 500 GeV, as inferred from comparing the model estimate of the emitted VHE spectrum with the observed one, is shown in Fig. 10 (solid points). The Doppler factor and the magnetic field have been varied as before to illustrate the effect on the inferred optical depth, which amounts here to about 1/2 unit (see solid lines). As pointed out already, the method primarily gives the dependence of the optical depth on energy rather than the absolute amount of absorption. The result is consistent with a rather energy independent extinction up to energies of about 5 TeV. From 5 TeV to 20 TeV the inferred optical depth increases by 1 to 2 units. The dashed line in Fig. 10 shows the optical depth expected from the DEBRA estimates by Primack et al. (1999) based on galaxy and star formation calculations. The dotted lines in Fig. 10 show the optical depths for the “high” and “low” intensity DEBRA models of Malkan & Stecker (1998). Their results are based on an empirical approach where luminosity dependent infrared spectra of galaxies are integrated over their luminosity and redshift distributions. The figure shows that the increase in optical depth suggested by our analysis lies in the range of the theoretical expectations. The very weak energy dependence of the optical depth up to ∼10 TeV inferred from our analysis agrees well with the predictions of Primack et al. Our calculations clearly favor the low- over the high-intensity DEBRA model of Malkan & Stecker. The optical depth predicted by the latter seems to increase too fast with $`\gamma `$-ray energy. ## 5 Summary We have analyzed a sample of simultaneous and nearly simultaneous X-ray and VHE data. The X-ray and VHE observations were performed with typical time delays below 6 h.
Since the X-ray and VHE variability is found on time scales ≳1/2 day and the spectral indices change typically by less than 0.01 units per hour, the measurements are well suited for a meaningful correlation analysis. For the first time a data base with detailed spectral and temporal information in the X-ray and VHE bands is available which covers a considerable number of distinct flares with substantial flux increases and decreases. The strong variability signatures in both energy bands have been used to elucidate several key aspects of the emission activity. The 3-25 keV spectral index varied on a time scale of weeks from 0.7 to 1.1, indicating that the peak of the X-ray SED shifted on this time scale from the energy region above 25 keV to below several keV and vice versa. In Pian et al. (1998b) a similar shift has been reported from observations taken in April 1997 and in May 1998 and was attributed to the decline of the X-ray and VHE $`\gamma `$-ray activity of the source from 1997 to 1998. The observations presented here show that the shift actually occurred also during the year of increased emission itself. Furthermore, the time lag between the 3 keV and the 25 keV emission is constrained to be smaller than half a day. The data shows an excellent correlation of the X-ray fluxes at ∼25 keV with the 2 TeV flux levels. The data therefore firmly establishes the correlation of the X-ray and VHE $`\gamma `$-ray flux levels of Mkn 501. A time lag between the X-ray and VHE $`\gamma `$-ray fluxes is constrained to be smaller than one day, which is consistent with earlier results (Catanese et al. 1997; A99a; Aharonian et al. 1999c). We find that the spectral variation in X-rays and in the VHE $`\gamma `$-rays could well be of the same magnitude, but the accuracy of the VHE measurements does not allow us to draw definitive conclusions. We interpret the data in the framework of SSC models. Using the constraints on the Doppler factor and the magnetic field of the emission volume from the variability and correlation properties in the X-ray and VHE bands, we reconstruct the electron spectrum from the observed synchrotron radiation and estimate the emitted VHE spectrum. It turns out that an intrinsic source property, namely a turnover in the electron spectrum, is expected to cause a turnover of the VHE spectrum just in the TeV energy range. This result shows that the VHE measurements give, apart from rather robust upper limits on the DEBRA density from general arguments as described e.g. by Biller et al. (1998) and A99b, reliable information about the intergalactic extinction only when the emission mechanism is understood in full detail. Within our model scenario the comparison of the observed and predicted VHE spectrum gives an optical depth due to intergalactic extinction which is constant up to energies of about 5 TeV and rises by one to two units from 5 TeV to 20 TeV. Our future work will focus on a confirmation of the results presented in this paper by detailed time-dependent model studies based on the code by Coppi (1992). Acknowledgments The analysis of the RXTE data has been possible due to the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. We are grateful to E. Pian for providing us with the BeppoSAX data points from April 1997. H.K. and F.A. are grateful to W. Hofmann, J. Kirk, and H. Völk for very fruitful discussions. We thank R.
Tuffs for a careful reading of the text. We thank the referee A.M. Hillas for very useful suggestions.
# Weak localization in the 2D metallic regime of Si-MOS ## I Introduction Strong evidence for a metal-insulator transition (MIT) was observed in Si metal-oxide-semiconductor (MOS) structures and other two-dimensional (2D) semiconductor systems. One of the most striking features of this phenomenon is a strong exponential drop of the resistivity $`\rho =\rho _0+\rho _1\mathrm{exp}(-(T_0/T)^q)`$, which saturates at low temperatures. According to scaling arguments for non-interacting 2D electron systems, no such MIT was expected. However, for interacting 2D systems, theoretical evaluation showed that the MIT in 2D is not forbidden by basic arguments. Now the central question about the MIT in 2D concerns its origin. If electron-electron interaction is responsible for the strong decrease in resistivity, then quantum effects should dominate the metallic regime. If, on the other hand, the scattering mechanism across a spin gap or the sharp crossover from quantum to classical transport takes place, then the transport properties should not be dominated by quantum effects but should rather be explainable by classical Boltzmann transport behavior. We have thus investigated the negative magnetoresistance due to the weak-localization effect in order to get information about the phase coherence time $`\tau _\phi `$ in a regime where electron-electron interactions possibly cause the anomalous metallic state. For carrier densities around the critical concentration $`n_c`$, conductivity corrections from single-electron backscattering and interaction effects in the electron-electron and electron-hole channels overlap, and it is not possible to get an unambiguous value for the phase coherence time. Therefore our investigations were focused on the high carrier density regime with conductance $`g=\sigma /(e^2/h)`$ between 35 and 120, where the strong exponential drop in resistivity has shifted to higher temperatures and weak $`\mathrm{ln}(T)`$ terms with both negative and positive signs dominate the behavior. In this regime, the negative magnetoresistance due to weak localization is restricted to a narrow range of magnetic fields and can be evaluated under the assumption that the interaction terms have only a small influence on the negative magnetoresistance. ## II Measurements and discussion Our investigations were performed on two high-mobility Si-MOS samples, Si-43 and Si-15, with peak mobilities of $`\mu =20,000`$ and 32,000 cm<sup>2</sup>/Vs, respectively. The samples consist of 5 mm long and 0.8 mm wide Hall bars, covered with a 200 nm thick Si-oxide layer serving as an insulator and a 1000 nm thick Al gate on top. The resistivity and Hall measurements were performed with a four-terminal ac technique at a frequency of 19 Hz. The magnetoresistivity measurements were performed on sample Si-43 at electron densities $`n`$ from $`5.4\times 10^{11}`$ to $`3.5\times 10^{12}`$ cm<sup>-2</sup> and temperatures $`T`$ between 0.29 and 10 K, and on sample Si-15 for $`1.05\times 10^{12}<n<4.5\times 10^{12}`$ cm<sup>-2</sup> and $`0.3<T<1.4`$ K. In this density range, the conductance $`g`$ is between 35 and 120, which is just the region below the maximum metallic conductivity in these Si-MOS structures. The open circles in Fig. 1 represent the negative magnetoresistance $`\mathrm{\Delta }\rho (B)=\rho _{xx}(B)-\rho _{xx}(0)`$ at a density of $`n=1.05\times 10^{12}`$ cm<sup>-2</sup> at different temperatures for Si-15.
The negative magnetoresistance was fitted to the single-electron weak localization correction to the conductivity $$\mathrm{\Delta }\sigma _{xx}=\frac{\alpha g_\nu e^2}{2\pi ^2\hbar }\left[\mathrm{\Psi }\left(\frac{1}{2}+\frac{\hbar }{4eBD\tau }\right)-\mathrm{\Psi }\left(\frac{1}{2}+\frac{\hbar }{4eBD\tau _\phi }\right)\right],$$ (1) where $`\mathrm{\Psi }`$ is the digamma function, $`B`$ is the applied perpendicular magnetic field, $`D`$ the diffusion coefficient, and $`\tau `$ the momentum relaxation time. The values for $`D`$ and $`\tau `$ were deduced from the temperature and density dependent Hall and resistivity measurements. The prefactor $`g_\nu `$ describes the valley degeneracy, and $`\alpha `$ depends, according to theory, on the ratio of intra-valley to inter-valley scattering rates and should be between 0.5 and 1. We found values between 0.5 and 0.6 for $`\alpha `$, which is in the expected range. The full lines in Fig. 1 represent the best least-squares fits through the data points according to Eq. 1 for the different temperatures. The influence of stray magnetic fields was tested by carefully shielding the sample together with a copper coil by a $`\mu `$-metal foil for temperatures down to 100 mK. As no influence was observed, we conclude that the width and height of the weak localization peak are not disturbed by any background magnetic fields. We have also not seen any sign of a superconducting state of the 2D electron gas which might be disturbed by very small magnetic fields. In Fig. 2, the temperature dependence of the phase coherence time $`\tau _\phi `$ is shown for several carrier densities in the range from $`5.4\times 10^{11}`$ to $`3.5\times 10^{12}`$ cm<sup>-2</sup> for Si-43. It is found that $`\tau _\phi `$ increases by about a factor of 100, from 1 ps at 10 K to nearly 100 ps at 0.29 K. Similar values for $`\tau _\phi `$ were also reported in earlier investigations on Si-MOS structures. The increase of $`\tau _\phi `$ can be described approximately by a power law $`\tau _\phi \propto T^{-p}`$. At higher temperatures, large momentum transfer processes dominate the electron-electron scattering and $`p=2`$ is expected, whereas at low temperatures, small momentum transfer dominates in disordered systems even for $`k_F\ell \gg 1`$ and causes $`p=1`$. According to the data in Fig. 2, $`p`$ decreases slightly from 1.4 above 4 K to 1.3 below 1 K for the lowest density, whereas $`p`$ shows a strong change from 1.5 towards 0.5 in the same temperature interval for the highest density. The latter value of 0.5 cannot be explained by theory, and the strong change in the slope of Fig. 2 rather indicates a saturation of the phase coherence time below 0.3 K. Such a saturation of $`\tau _\phi `$ can be caused by electron heating effects due to high-frequency noise or by other processes which limit the height of the weak localization effect. According to Hikami et al., such a limitation can be caused by spin scattering due to magnetic impurities or spin-orbit interaction. From the additional terms for spin scattering and spin-orbit scattering, which enter (1), such a limiting scattering time can be estimated to be around 200 ps. But from the available data no definite conclusion about the process that limits $`\tau _\phi `$ can be drawn. In the temperature range where $`\tau _\phi `$ increases by about a factor of 100, the resistivity of the sample is nearly constant.
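A minimal sketch of the fitting procedure described above (not the authors' analysis code; the values of `D` and `tau` are assumed here for illustration, and the "data" arrays are placeholders for the measured magnetoconductivity obtained from $`\mathrm{\Delta }\rho (B)`$ via $`\mathrm{\Delta }\sigma \approx -\mathrm{\Delta }\rho /\rho ^2`$):

```python
# Fit of Eq. (1) with scipy: alpha and tau_phi are the free parameters.
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

e = 1.602e-19      # elementary charge [C]
hbar = 1.055e-34   # reduced Planck constant [J s]
g_v = 2            # valley degeneracy in (100) Si-MOS

D = 1.3e-2         # diffusion coefficient [m^2/s], assumed from transport data
tau = 2.2e-12      # momentum relaxation time [s], assumed from the mobility

def delta_sigma(B, alpha, tau_phi):
    """Eq. (1): weak-localization correction to sigma_xx [S per square]."""
    return (alpha * g_v * e**2 / (2 * np.pi**2 * hbar)) * (
        digamma(0.5 + hbar / (4 * e * B * D * tau))
        - digamma(0.5 + hbar / (4 * e * B * D * tau_phi)))

B_data = np.linspace(1e-4, 5e-3, 50)            # placeholder fields [T]
dsigma_data = delta_sigma(B_data, 0.55, 5e-11)  # placeholder "measurements"

popt, _ = curve_fit(delta_sigma, B_data, dsigma_data, p0=(0.5, 1e-11))
print(popt)  # fitted (alpha, tau_phi)
```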
Especially for the highest density of $`n=3.5\times 10^{12}`$ cm<sup>-2</sup>, the exponentially strong changes in the conductivity, which were observed below 1 K for densities near the MIT, have shifted to temperatures above 10 K. Only weak changes $`\mathrm{\Delta }\rho \propto \mathrm{ln}(T)`$ remain at small $`T`$, despite the strong increase of $`\tau _\phi `$. The dependence of $`\tau _\phi `$ on the carrier concentration is depicted in Fig. 3. For temperatures in the range from 1 to 10.7 K, $`\tau _\phi `$ increases with increasing carrier concentration. Below $`n=1\times 10^{12}`$ cm<sup>-2</sup>, this increase is steep, whereas for higher densities the increase becomes smaller. At a temperature of 0.29 K, $`\tau _\phi `$ increases as well below $`n=1\times 10^{12}`$ cm<sup>-2</sup>, but decreases slightly for higher concentrations. This anomalous behavior at 0.29 K can be assigned to the observed saturation of $`\tau _\phi `$ in the temperature dependence for higher densities, as described above. The same investigations for sample Si-15 showed a very similar behavior of the phase coherence time $`\tau _\phi `$, with saturation effects near $`T=0.3`$ K. In Fig. 3 the momentum relaxation time $`\tau `$ is shown for comparison. It was calculated from the conductivity $`\sigma `$ at $`B=0`$ as $`\tau =m^{\ast }\sigma /(ne^2)`$ with $`m^{\ast }=0.19m_0`$ the transverse effective mass and $`n`$ the carrier density as derived from the Hall coefficient $`R_H`$. As the magnetoresistivity due to weak localization is a very small effect ($`\mathrm{\Delta }\sigma /\sigma \sim 10^{-3}`$), there is practically no difference whether $`\tau `$ is calculated at $`B=0`$ or at $`B>\mathrm{\Phi }_0/\ell _\phi ^2`$. From Fig. 3, it can be seen that the overall behaviors of $`\tau _\phi `$ and $`\tau `$ are quite different and independent of each other. This can be understood in terms of the different underlying scattering mechanisms for inelastic and elastic processes. The elastic processes are due to impurity and surface roughness scattering, whereas the inelastic processes are caused by electron-phonon and electron-electron scattering. It should be noted that we observe a negative magnetoresistance within $`\pm 50`$ Gauss despite the fact that there is a superconducting Al gate on top of the Si-MOS structures. Bulk Al is a type I superconductor with a critical superconducting temperature of $`T_c=1.175`$ K and a critical magnetic field of $`H_c=105`$ G. We have tested the superconductivity of our Al gate directly by performing a four-wire resistance measurement. We found a sudden decrease of the resistivity to zero at a temperature of 1.20 K. At 0.29 K, the zero resistivity could be quenched by an external magnetic field of 51 G. The deviation of this field from $`H_c`$ is caused by the intermediate state, which has a finite resistivity when the superconducting regions are not connected any more. In addition, the superconducting properties may be modified, as the Al gate consists of a thin evaporated layer which is expected to be strongly disordered. The effect of a superconducting gate on the weak localization was treated in detail by Rammer and Shelankov. At magnetic fields below $`H_c`$, the magnetic flux is collected in the flux tubes of the intermediate state. At the position of the 2D electron gas, 200 nm away from the superconducting gate, the nonuniformity of the magnetic field persists only if the period of the magnetic structure is larger than this distance.
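For the comparison that follows, a rough order-of-magnitude sketch of how $`\tau `$, $`D`$, and the phase coherence length follow from the transport data (the density, mobility, and $`\tau _\phi `$ values below are assumed round numbers, not the measured ones):

```python
# Transport quantities for a Si-MOS 2D electron gas with two-fold spin
# and two-fold valley degeneracy.
import numpy as np

e = 1.602e-19; hbar = 1.055e-34
m_star = 0.19 * 9.109e-31     # transverse effective mass [kg]
n = 1.0e16                    # carrier density [m^-2] (= 1e12 cm^-2)
mu = 2.0                      # mobility [m^2/Vs] (= 20,000 cm^2/Vs)
tau_phi = 1.0e-10             # phase coherence time [s] (~100 ps at 0.3 K)

tau = mu * m_star / e                  # momentum relaxation time
k_F = np.sqrt(4 * np.pi * n / 4)       # Fermi wave vector, g_s = g_v = 2
v_F = hbar * k_F / m_star
D = 0.5 * v_F**2 * tau                 # 2D diffusion coefficient
l_phi = np.sqrt(D * tau_phi)           # phase coherence length

print(tau, D * 1e4, l_phi * 1e9)
# ~2.2e-12 s, ~126 cm^2/s, ~1100 nm: the same order as the 600-1000 nm
# range for l_phi quoted below
```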
The characteristic lateral size of the domains in the intermediate state can be estimated from the laminar model. For pure Al, one gets a period $`a`$ of about 300 nm, whereas for a strongly disordered material a period closer to 80 nm is expected. This domain period has to be compared with the phase coherence length $`\ell _\phi =\sqrt{D\tau _\phi }`$, with $`D`$ the diffusion coefficient. From our investigations it follows that $`\ell _\phi `$ is in the range from 600 to 1000 nm at temperatures below $`T_c=1.2`$ K. As this range of values is much larger than the typical domain period $`a`$, no influence of the superconducting gate on the negative magnetoresistivity is expected. This is in agreement with our measurements, where we have observed no direct influence. ## III Conclusions We have investigated the weak localization in the metallic regime of Si-MOS structures at high carrier density, where the conductance is between 35 and 120 $`e^2/h`$. When decreasing the temperature from 10 to 0.29 K, the phase coherence time $`\tau _\phi `$ increases from 2 to about 100 ps, enabling in principle strong quantum effects to take place. Nevertheless, only weak $`\mathrm{ln}(T)`$-type changes of the conductivity were observed from 10 K, where the phase coherence time is as short as the momentum relaxation time, down to 0.29 K, where $`\tau _\phi `$ is about 100 times larger. We thus conclude that in the investigated high conductance regime no strong quantum effects due to electron-electron interaction take place which could drive the system into the metallic state. From the saturation effects of $`\tau _\phi `$, we can estimate a lower bound for the spin-orbit interaction scattering time of about 200 ps. The work was supported by RFBR, by the Programs on “Physics of solid-state nanostructures” and “Statistical physics”, by INTAS, by NWO, by the “Fonds zur Förderung der Wissenschaften” P-13439 and GME Austria, and by KBN (Grant No 2-P03B-11914) Poland.
# Star Formation in a Crossing Time ## 1 Introduction The duration of star formation in a region of a particular size is a physical parameter related to the star formation process that has rarely been considered interesting, except perhaps for the notion that star formation lasts sufficiently long in GMCs that sources of turbulent cloud support and internal feedback are necessary. This concept of inefficient and prolonged star formation goes back to the first galactic surveys, when it was realized (Zuckerman & Evans 1974; Zuckerman & Palmer 1974) that the Galactic CO mass and density are too large, and the total star formation rate too small, to have the conversion of gas into stars take place on anything shorter than several tens of crossing times per cloud. Early claims for the longevity of clouds were also based on chemical abundances and slow reaction rates at the average cloud density. Even the first measurement of an age for GMCs, of around $`3\times 10^7`$ years from cluster disruption of gas (Bash, Green, & Peters 1977), was $`40`$ dynamical times at what was believed to be the average GMC density of $`10^3`$ cm<sup>-3</sup>. This time was comparable to the shortest theoretical lifetime estimated from internal clump collisions (Blitz & Shu 1980), but it was still long compared to the dynamical time. As a result of these ideas, astronomers have been trying for two decades to understand how self-gravitating clouds can support themselves for many dynamical times, considering the common belief (Goldreich & Kwan 1974; Field 1978) that supersonic turbulence should dissipate more rapidly. Two types of models arose: those in which turbulence supports a cloud on the large scale but not on the small scale (Bonazzola et al. 1987; Leorat, Passot, & Pouquet 1990; Vazquez-Semadeni & Gazol 1995), and those in which stellar winds continuously drive turbulence to support a cloud (Norman & Silk 1980), possibly with some type of feedback to maintain stability (Franco & Cox 1983; McKee 1989). A variety of recent observations now suggests a different picture. Star formation appears to go from start to finish in only one or two crossing times on every scale on which it occurs (Sect. 2). The star formation rate does not just scale with the self-gravity rate, as has long been recognized; it essentially equals the self-gravity rate. The first hint at such quick star formation came long ago from the observation that a high fraction of clouds contain stars (Beichman et al. 1986), combined with the idea that pre-main-sequence lifetimes are short. As a result, the total cycle of star formation, before and after cloud dispersal, has to be short as well (Elmegreen 1991). Today the observational picture for rapid star formation is more complete, consisting of direct measurements, such as embedded cluster ages or cluster age differences in comparison to the associated gas turbulent crossing times, and indirect indicators such as hierarchical structure in embedded stellar groups and the fraction of clouds with embedded stars. Much of this data is well known, but it has not been viewed together in this fashion, and it is rarely interpreted as an indication that star formation occurs in only one crossing time.
Here we point out that star formation in a crossing time implies four changes to the way we view the physical processes involved: (1) Feedback and cloud support from turbulence are not necessary for molecular clouds. (2) Protostar interactions have too little time to affect the average stellar initial mass function, which must be determined primarily from a rapid sampling of existing cloud structure rather than from a long time sequence of internal cloud dynamics. (3) The chemical clock inside a molecular cloud is determined by transient, high-density events, rather than by slow chemistry at the mean density. (4) The inefficiency of star formation on a Galactic scale results from an inability of most molecular or CO-emitting gas to form stars at all, and not from any of the previous explanations, which include a delay in the onset of star formation, slow magnetic diffusion, turbulent cloud support, local inefficiencies in a cloud core, and cloud disruption. This inability to form stars arises simply from the turbulence-driven structure of clouds (Falgarone, Phillips & Walker 1991): most molecular gas is either at too low a density, in an interclump medium, or it is dense, evanescent, and too small to be self-gravitating (Padoan 1995). ## 2 Observations of rapid star formation ### 2.1 Age difference versus size for LMC clusters A correlation between the age difference and spatial separation for Cepheid variables in the LMC suggests that star formation lasts for a time that is approximately equal to the dynamical crossing time for a wide range of scales (Elmegreen & Efremov 1996). This implies not only that young star positions are hierarchical, as had been claimed before (Feitzinger & Braunsfurth 1984; Feitzinger & Galinski 1987), but also that the timing for star formation is hierarchical, with many small regions coming and going in the time it takes the larger region surrounding them to finish. A second study using cluster ages and positions in the LMC confirmed this Cepheid result (Efremov & Elmegreen 1998), and illustrated again that the time scale for coherent star formation in a region is always about one turbulent crossing time, scaling approximately with the square root of size. Figure 1 shows this result by plotting the crossing times inside molecular clouds and sub-clouds of various sizes $`S`$ versus these sizes (which are essentially cloud diameters). The crossing time is defined as the half-size (radius) divided by the Gaussian dispersion in internal velocity, $`c`$. The data are from the literature, as indicated. Superposed on this is the analogous correlation between age-separation and distance-separation for $`244`$ LMC clusters in the age range from 10 to 100 million years (from Efremov & Elmegreen 1998). The clusters lie on a continuation of the crossing time–size relation for individual clouds, suggesting that in each region in which this cluster hierarchy is observed, the duration of the star formation process is on average about one crossing time. Hierarchical clustering in time and space is also shown by cluster pairs, analogous to h and $`\chi `$ Persei in our Galaxy. Equal-age pairs have been studied in the LMC by Bhatia & Hatzidimitriou (1988), Kontizas et al. (1989), Dieball & Grebel (1998), and Vallenari et al. (1998) and in the SMC by Hatzidimitriou & Bhatia (1990). Their existence implies that star formation is synchronized in neighboring regions, which means there is only a short time interval available for the complete formation of a cluster and its neighbor.
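The crossing time used in Figure 1 is easy to evaluate; the short sketch below illustrates it with an assumed Larson-like size–linewidth scaling (the velocity dispersions are round illustrative numbers, not values taken from the figure):

```python
# Crossing time t = (S/2)/c for a cloud of diameter S and internal
# velocity dispersion c.
PC_KM = 3.086e13   # one parsec in km
YR_S = 3.156e7     # one year in seconds

def crossing_time_Myr(S_pc, c_kms):
    """Half-size divided by the velocity dispersion, in Myr."""
    return (0.5 * S_pc * PC_KM / c_kms) / YR_S / 1e6

# e.g. a 1 pc clump with c ~ 1 km/s, and a 100 pc complex with c ~ 7 km/s,
# so the crossing time grows roughly as the square root of the size:
print(crossing_time_Myr(1.0, 1.0))     # ~0.5 Myr
print(crossing_time_Myr(100.0, 7.0))   # ~7 Myr
```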
### 2.2 Substructure in Embedded Infrared Clusters A second indication that star formation is extremely rapid is the observation that some embedded IR clusters have sub-clustering. For example, IC 348 contains 8 smaller subclusters with 10 to 20 stars each (Lada & Lada 1995). Such sub-clustering would be mixed up by star-star scattering and gravitational tidal interactions if the individual stars had enough time to orbit even once through the cloud core. Instead, the cluster seems to have crystallized instantly, preserving the pre-stellar hierarchical cloud structure in the pattern of young stars. The star formation process is not just beginning in this region either. At the present time, about 50% of the total cloud mass has already been converted into stars (Lada & Lada 1995). This fraction is comparable to the likely final efficiency for the cluster, so the total star formation process is nearly over. Other clusters with hierarchical subclustering include NGC 3603 (Eisenhauer et al. 1998), W33 (Beck et al. 1998), and NGC 2264 (Piche 1993), which has two levels of hierarchical substructure, i.e., two main clusters with two subclusters in one and three in the other. Elson (1991) found spatial substructure in 18 LMC clusters, and suggested it might result from merging subclusters. Strobel (1992) found age substructure in 14 young clusters, and Persi et al. (1997) found both age and positional substructure in G 35.20-1.74. Some of the structure inside a cluster could be the result of triggering (Elmegreen & Lada 1977), but this operates on a crossing time for the outer scale too. For example, the subgroups in OB associations listed by Blaauw (1964), some of which may be triggered by older subgroups, have spatial separations on the order of 10 pc, and age differences on the order of 3 million years. These numbers fit on the correlation in Figure 1. ### 2.3 Statistical Considerations If a high fraction of clouds contains stars and the stellar ages are always young, then the whole star formation process must be rapid. Three new compilations of this statistical measurement point to this conclusion. Fukui et al. (1999) finds that about 1/3 of the clusters in the LMC younger than 10 Myr are associated with CO clouds, while essentially none of the clusters older than this are significantly associated. This implies the entire cluster formation process, including cloud formation and dispersal, lasts only around 3 Myr. If the average CO cloud density at the threshold of their detection is in the usual range from $`10^2`$ to $`10^3`$ cm<sup>-3</sup>, then 3 Myr is 0.5 to 1.5 dynamical times, respectively (we take a dynamical time to be $`\left(G\rho \right)^{-1/2}`$). Jessup & Ward-Thompson (1999) found that the mean pre-stellar lifetime decreases with increasing density, from about $`10^7`$ yr at $`10^3`$ cm<sup>-3</sup> to $`5\times 10^5`$ yr at $`3\times 10^4`$ cm<sup>-3</sup>. At $`10^3`$ cm<sup>-3</sup>, this time is 5 dynamical times, and at $`3\times 10^4`$ cm<sup>-3</sup>, it is 1.4 dynamical times. Myers (1999) confirms the result of Jessup & Ward-Thompson (1999) using different data, and finds that the mean waiting time for star formation begins to decrease rapidly with increasing density once the density reaches $`10^4`$ cm<sup>-3</sup>. At that density, the mean waiting time is 1 Myr or less, which is less than $`2`$ dynamical times.
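These dynamical times are straightforward to reproduce (an illustrative check; the conversion assumes 2.3 m<sub>H</sub> of gas per H<sub>2</sub> molecule, i.e. helium included, which is one common convention):

```python
# Spot check of the dynamical times quoted in Sect. 2.3,
# t_dyn = (G * rho)**-0.5 with rho from the H2 number density.
import numpy as np

G = 6.674e-8      # [cm^3 g^-1 s^-2]
m_H = 1.673e-24   # [g]
YR_S = 3.156e7

def t_dyn_Myr(n_H2):
    rho = 2.3 * m_H * n_H2   # mass density for n_H2 in cm^-3
    return 1.0 / np.sqrt(G * rho) / YR_S / 1e6

for n in (1e3, 1e4, 3e4):
    print(n, t_dyn_Myr(n))
# ~2 Myr at 1e3 cm^-3 (so 1e7 yr is ~5 t_dyn), ~0.6 Myr at 1e4 cm^-3
# (so 1 Myr is under 2 t_dyn), and ~0.36 Myr at 3e4 cm^-3 (so 5e5 yr
# is ~1.4 t_dyn), matching the numbers quoted above.
```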
## 3 Direct pre-main-sequence age measurements The age spread for 80% of the stars in the Orion Trapezium cluster is apparently less than 1 My (Prosser et al. 1994). The same is true for L1641 (Hodapp & Deane 1993). The age spread is much shorter for a large number (but not necessarily a large fraction) of stars in NGC 1333 because of the preponderance of jets and Herbig-Haro objects (Bally et al. 1996). In NGC 6531 as well, the age spread is immeasurably small (Forbes 1996). These short time scales are all less than a few crossing times in the cloud cores. In a recent study of the time history of star formation in the Trapezium cluster, Palla et al. (1999) found that most of the low mass stars formed in the last $`1`$ My, and that the rate increased to this value somewhat gradually before this, perhaps as the associated cloud contracted. A comparison of their figures 1 and 3, along with their figure 6, indicates that the low mass stars mostly formed between $`10^5`$ and $`10^6`$ years ago. The stellar density in the Trapezium is now about $`10^3`$ M<sub>☉</sub> pc<sup>-3</sup> (Prosser et al. 1994; McCaughrean & Stauffer 1994), so if the local efficiency of star formation was around 50% to make a nearly-bound cluster, then the prior gas density in the core was $`6\times 10^4`$ H<sub>2</sub> cm<sup>-3</sup>. This is a reasonable value considering the densities in other Orion cluster-forming regions (Lada 1992). The corresponding dynamical time scale is $`\left(G\rho \right)^{-1/2}\simeq 0.3`$ My, which is comparable to the isochrone times of the low mass stars. The increase in the rate of star formation during cloud contraction is what should be expected if this rate always follows the local dynamical rate (Palla et al. 1999), because that increases too during cloud contraction. On larger scales, the age spread in a whole OB association is about 10 My (Blaauw 1964), and the prior gas mass ($`2\times 10^5`$ M<sub>☉</sub>) inside a typical radius ($`20`$ pc) corresponds to an average density of $`200`$ atoms cm<sup>-3</sup>; this gives a comparable dynamical time of 6.3 My. On even larger scales, the age spread in a star complex like Gould’s Belt is $`40`$ My (Pöppel 1997). These larger regions form inside and downstream from spiral arms in $`500`$ pc-size cloud complexes that contain $`10^7`$ M<sub>☉</sub> (Elmegreen & Elmegreen 1987; Efremov 1995). The average density is $`5`$ atoms cm<sup>-3</sup>, so the dynamical time is $`40`$ My. Note that the large-scale star-forming regions contain smaller scale regions inside them, and that all of the regions form on a local dynamical time. This means that several smaller regions come and go throughout the larger region during the time the larger region exists (Elmegreen & Efremov 1996). Evidently, the total duration of star formation in most clouds is only 1 to 2 dynamical times once star formation begins, and this is true for scales ranging from 1 pc to $`10^3`$ pc. The general concept that the star formation time should scale with the dynamical time is not new, but direct observations of the actual timescales have become available only recently. Some clusters have larger age spreads than the dynamical time, but this could be the result of multiple bursts. Hillenbrand et al. (1993) found that the most massive stars (80 M<sub>☉</sub>) in NGC 6611 have a 1 My age spread around a mean age of $`2`$ My, which is consistent with the spreads mentioned above, but there are also pre-main-sequence stars in the same region, probably much younger, and a star of 30 M<sub>☉</sub> with an age of 6 My.
The LMC cluster NGC 1850 has an age spread of 2 to 10 My (Caloi & Cassatella 1998), and NGC 2004 has both evolved low mass stars and less evolved high mass stars (Caloi & Cassatella 1995). In NGC 4755, the age spread is 6 to 7 My, based on the simultaneous presence of both high and low mass star formation (Sagar & Cannon 1995). The large age spreads may result from multiple and independent star formation events, perhaps in neighboring cloud cores or triggered regions. A merger event or projection effects could disguise the initial multiplicity. If this is the case, then the relevant dynamical time for comparison with the age spread should be calculated with the average density of the whole region surrounding the two cores and not the density of each. Thus, the whole region could form in less than a few crossing times, but the currently dense part of the cluster would have too short a crossing time to account for the mixture of ages. This consideration of the average density surrounding multiple clusters is also necessary to explain the large-scale correlation between duration and size for star-forming regions defined by Cepheids and clusters in the LMC (cf. Sect. 2.1). A good example of this multiplicity may be the Pleiades cluster, which has the largest reported age spread of any of the well-studied clusters. Features in the luminosity function (Belikov et al. 1998) and synthetic HR diagrams (Siess et al. 1997) suggest continuous star formation over $`30`$ My for an age of $`100`$ My. However, the Pleiades primordial cloud could have captured stars from a neighboring, older region (e.g., Bhatt 1989). Indeed, the age spread for the Pleiades is comparable to that in whole OB associations or star complexes, and the Pleiades, like most clusters, probably formed in such a region. ## 4 Implications The formation of stars in only one or two crossing times implies that cloud lifetimes are short and the observed turbulent energy does not have to be resupplied. Turbulent dissipation times are this short anyway (Stone, Ostriker, & Gammie 1998; Mac Low et al. 1998), so the implication is that all clouds proceed directly to star formation on a dissipation time and never require rejuvenation or self-sustaining feedback. Fine-tuning of cloud stability from feedback should be very difficult anyway, since protostellar wind speeds are much larger than cloud escape speeds, and the wind energy should just escape through fractal holes and tunnels (see also Henning 1989). Short timescales also imply that protostars do not have time to orbit inside their cloud cores. For example, Palla & Stahler (1999) noted that the stars in Orion could not have moved very far from their birthsites. Each star essentially stays where its initial clump first became unstable, and it does not move around to interact with other gas or distant protostars (although it may interact with one or two near neighbors). Maps of self-gravitating protostellar clumps by Motte, André, & Neri (1998) and Testi & Sargent (1998) illustrate this point: the protostars in the Ophiuchus and Serpens cores have such small individual angular filling factors that each one would have to orbit many times (the inverse of this filling factor multiplied by the relative gravitational cross section) in the cloud core to interact with another protostar. This result would seem to rule out models of the IMF based on clump or protostar interactions, such as those by Price & Podsiadlowski (1995), Allen & Bastien (1995), Murray & Lin (1996), Bonnell et al.
(1997), Bonnell, Bate, & Zinnecker (1998), and others. Instead, IMF models based on the availability of gas to make stars in an overall fractal network seem preferred (Elmegreen 1997a, 1999). As a result of this birthsite freeze-out, the youngest star positions should appear fractal, or hierarchical, like the gas in which they form (see reviews in Elmegreen & Efremov 1999; Elmegreen et al. 2000). Larson (1995) and Simon (1997) discussed power-law two-point correlation functions for star fields, but this is not necessarily the same as a fractal distribution, and the fields they studied were probably too old (Bate, Clarke & McCaughrean 1998; Nakajima et al. 1998). Gomez et al. (1993) discussed hierarchical structure in Taurus, which is more to the point. A recent study by Vavrek, Balázs, & Epchtein (1999) finds multifractal structure in young star positions. Short cloud lifetimes have implications for chemistry too. Most chemical reactions should be occurring at the high density of a turbulent-compressed clump, which may be around $`10^5`$ cm<sup>-3</sup> (e.g., Falgarone, Phillips & Walker 1991; Lada, Evans & Falgarone 1997), rather than the low average density that is observed in studies with poor angular resolution. The formation of some chemical species at elevated temperatures in turbulent shocks has already been noted (Falgarone et al. 1995; Joulain et al. 1998). The rapid rate of star formation suggested here for individual clouds does not imply there should be a rapid rate of star formation on a galactic scale, as suggested by Zuckerman & Evans (1974) and Zuckerman & Palmer (1974). Even if star formation proceeds on a dynamical timescale, the actual time depends on the size of the cloud that contains it. This is true even on a galactic scale, where the star formation rate is generally a fixed fraction of the density (i.e., $`ϵ\rho `$ for “efficiency” $`ϵ`$ and density $`\rho `$) divided by the local orbit time (Elmegreen 1997b; Kennicutt 1998). This galactic rate probably involves the same physical principles that apply to individual complexes, associations and clusters, all of which form stars at a rate equal to the local $`ϵ\rho `$ divided by the local dynamical time. There is no catastrophe in the galactic star formation rate if all regions evolve on a dynamical time, because the dynamical time is very long on a galactic scale. The essential point is that star formation does not occur in every location where the gas is dense, it occurs primarily in self-gravitating cores that comprise only a small fraction of the total cloud mass. Even if most of a cloud is in the form of extremely dense clumps (Falgarone 1989), because of turbulence compression for example, most of these clumps are generally stable and unable to form stars (e.g., Bertoldi & McKee 1992; Falgarone, Puget, & Pérault 1992). Some gas is probably in a lower density interclump medium too, which is also unable to form stars. Thus a lot of gas contributes to the total CO emission in our Galaxy, but it does not contribute to star formation.
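As a closing numerical check, the dynamical times for the three nested scales of Section 3 follow from the same formula $`\left(G\rho \right)^{-1/2}`$. The short sketch below is our own illustration; the assumed masses per particle ($`2m_H`$ per H<sub>2</sub> molecule, $`1.3m_H`$ per atom including helium) are illustrative choices, not values given in the text.

```python
G, M_H, MYR = 6.674e-8, 1.6726e-24, 3.156e13   # cgs units, seconds per Myr

def t_dyn(rho):
    """Dynamical time (G*rho)**-0.5 in Myr for a mass density rho in g/cm^3."""
    return (G * rho) ** -0.5 / MYR

scales = [
    ("Trapezium core", 6e4 * 2.0 * M_H),        # 6e4 H2 cm^-3
    ("OB association", 200 * 1.3 * M_H),        # 200 atoms cm^-3
    ("star complex",     5 * 1.3 * M_H),        # 5 atoms cm^-3
]
for name, rho in scales:
    print(f"{name:15s}: t_dyn = {t_dyn(rho):5.1f} Myr")
# -> about 0.3, 6 and 37 Myr, within ~10% of the 0.3, 6.3 and 40 My
#    quoted in Section 3, given the assumed mean particle masses
```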
# Dynamic Monopolies of Constant Size ## 1 Introduction Let $`G=(V,E)`$ be a simple undirected graph and $`W_0`$ a subset of $`V`$. Consider the following repetitive polling game. At round 0 the vertices of $`W_0`$ are colored white and the other vertices are colored black. At each round, each vertex $`v`$ is colored according to the following rule. If at round $`r`$ the vertex $`v`$ has more than half of its neighbors colored $`c`$, then at round $`r+1`$ the vertex $`v`$ will be colored $`c`$. If at round $`r`$ the vertex $`v`$ has exactly half of its neighbors colored white and half of its neighbors colored black, then we say there is a tie. In this case $`v`$ is colored at round $`r+1`$ by the same color it had at round $`r`$. (Peleg considered other models for dealing with ties. We will refer to these models in section 3. Additional models and further study of this game may be found at , , , and .) If there exists a finite $`r`$ so that at round $`r`$ all vertices in $`V`$ are white, then we say that $`W_0`$ is a dynamic monopoly, abbreviated dynamo. In this paper we prove ###### Theorem 1 For every natural number $`n`$ there exists a graph with more than $`n`$ vertices and with a dynamic monopoly of 18 vertices. We shall use the following notation: If $`v\in V`$ then $`N(v)`$ denotes the set of neighbors of $`v`$. We call $`d(v)=|N(v)|`$ the degree of $`v`$. For every $`r=0,1,\dots `$ we define $`C_r`$ as a function from $`V`$ to $`\{ℬ,𝒲\}`$, so that $`C_r(v)=𝒲`$ if $`v`$ is white at round $`r`$ and $`C_r(v)=ℬ`$ if $`v`$ is black at this round. We also define $`W_r=C_r^{-1}(𝒲)`$, $`B_r=C_r^{-1}(ℬ)`$, $`T_r=W_r\cap W_{r-1}`$ $`(r>0)`$ and $`S_r=T_1\cup \dots \cup T_r`$. ## 2 Proof of Theorem 1 Let $`J=(V_J,E_J)`$ be the graph in figure 1. Let $$W_0=\{w_0,\dots ,w_9,x_0,\dots ,x_2,y_0,\dots ,y_4\}$$ and let $`U=W_0\cup \{q\}`$ and $`D=V_J\setminus U`$. We construct a graph $`J_n`$ by duplicating $`n`$ times the vertices in $`D`$. That is, $$J_n=(V_n,E_n)$$ where $$V_n=U\cup ([n]\times D)$$ and $`E_n=\{(u,v)\in E_J:u,v\in U\}\cup \{(u,(i,v)):(u,v)\in E_J,u\in U,v\in D,i\in [n]\}`$ $`\cup \{((i,u),(i,v)):(u,v)\in E_J,u,v\in D,i\in [n]\}`$ (Here, as usual, $`[n]`$ denotes the set $`\{1,\dots ,n\}`$). Note that for reasons of symmetry, at a given round, all copies of a vertex in $`J`$ have the same color. Thus we may write “$`y_0`$ is white at round 3” instead of “$`(i,y_0)`$ is white at round 3 for every $`i\in [n]`$” etc. The following table describes the evolution of $`J_n`$. The symbol 1 stands for white and 0 stands for black. Note that the table does not depend on $`n`$. (This property is peculiar to the graph $`J`$. In general graphs duplication of vertices may change the pattern of evolution of the graph).
$`\begin{array}{ccccccccccc}r& a_{012}& b_{01}& c_0\cdots c_{11}& d_{0123}& e_{0123}& f& g_{01}& q& w_0\cdots w_9& y_{01234}\\ 0& 000& 00& 000000000000& 0000& 0000& 0& 00& 0& 1111111111& 11111\\ 1& 111& 00& 111111111111& 0000& 1111& 0& 11& 0& 0000000000& 00000\\ 2& 000& 11& 000000000000& 1111& 0000& 1& 00& 1& 1111111111& 11111\\ 3& 111& 00& 111111111111& 0000& 1111& 0& 11& 1& 1100000000& 10000\\ 4& 000& 11& 100000000000& 1111& 1000& 1& 00& 1& 1111111111& 11111\\ 5& 111& 00& 111111111111& 1000& 1111& 0& 11& 1& 1100000000& 11000\\ 6& 000& 11& 111000000000& 1111& 1100& 1& 00& 1& 1111111111& 11111\\ 7& 111& 00& 111111111111& 1000& 1111& 0& 11& 1& 1111000000& 11100\\ 8& 000& 11& 111100000000& 1111& 1111& 1& 00& 1& 1111111111& 11111\\ 9& 111& 00& 111111111111& 1100& 1111& 0& 11& 1& 1111000000& 11111\\ 10& 000& 11& 111111000000& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 11& 111& 00& 111111111111& 1100& 1111& 1& 11& 1& 1111110000& 11111\\ 12& 000& 11& 111111100000& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 13& 111& 00& 111111111111& 1110& 1111& 1& 11& 1& 1111110000& 11111\\ 14& 000& 11& 111111111000& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 15& 111& 00& 111111111111& 1110& 1111& 1& 11& 1& 1111111100& 11111\\ 16& 000& 11& 111111111100& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 17& 111& 00& 111111111111& 1111& 1111& 1& 11& 1& 1111111100& 11111\\ 18& 000& 11& 111111111111& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 19& 111& 00& 111111111111& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 20& 111& 11& 111111111111& 1111& 1111& 1& 11& 1& 1111111111& 11111\\ 21& 111& 11& 111111111111& 1111& 1111& 1& 11& 1& 1111111111& 11111\end{array}`$ The table shows that at round 20 the entire system is white and therefore $`W_0`$ is a dynamo. The reader may go through the table by himself, but in order to facilitate the understanding of what happens in the table let us add some explanations as to the mechanism of “conquest” used in this graph. We say that round $`j`$ dominates round $`i`$ if $`W_i\subseteq W_j`$. We shall make use of the following obvious fact: ###### Observation 1 If round $`j`$ dominates round $`i`$ ($`i,j=0,1,\dots `$) then round $`j+1`$ dominates round $`i+1`$. By applying this observation $`k`$ times, we find that if round $`j`$ dominates round $`i`$ then round $`j+k`$ dominates round $`i+k`$ ($`i,j,k=0,1,\dots `$). By looking at the table one can see that in the graph $`J_n`$ round 2 dominates round 0 and thus we have ###### Corollary 1 Round $`k+2`$ dominates round $`k`$ in $`J_n`$ for every $`k=0,1,\dots `$ We say that a vertex $`v`$ blinks at round $`r`$ if $`C_{r+2i}(v)=𝒲`$ for every $`i=0,1,\dots `$. We say that a vertex $`v`$ is conquered at round $`r`$ if $`C_{r+i}(v)=𝒲`$ for every $`i=0,1,\dots `$. Examining rounds $`0`$ to $`3`$ in the table and using Corollary 1 one can see that $`x_0,x_1`$ and $`x_2`$ are conquered at round 0, and in addition $`q,w_0,w_1`$ and $`y_0`$ are conquered at round 2. Furthermore, every vertex in $`J_n`$ blinks either at round 1 or at round 2. Finally, we have ###### Lemma 1 If at round $`r`$ a vertex $`v`$ in $`J_n`$ has at least half of its neighbors conquered then $`v`$ is conquered at round $`r+2`$. Proof: Every vertex in $`J_n`$ blinks either at round 1 or at round 2, and hence $`v`$ is white either at round $`r+1`$ or at round $`r+2`$. From this round on, at least half of the neighbors of $`v`$ are white, so $`v`$ will stay white. □
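The polling rule itself is compact enough to simulate directly. The following Python sketch is our own illustration of the rule of section 1 (the toy graph at the bottom is not the graph $`J`$ of figure 1, which is only specified in that figure):

```python
def poll_round(adj, white):
    """One round: a vertex takes the colour of the strict majority of its
    neighbors; on a tie it keeps its current colour (the model of Sec. 1)."""
    new_white = set()
    for v, nbrs in adj.items():
        w = sum(1 for u in nbrs if u in white)
        b = len(nbrs) - w
        if w > b or (w == b and v in white):
            new_white.add(v)
    return new_white

def dynamo_round(adj, white0, max_rounds=100):
    """First round at which every vertex is white, or None if white0 is
    not a dynamo within max_rounds."""
    white = set(white0)
    for r in range(max_rounds + 1):
        if len(white) == len(adj):
            return r
        white = poll_round(adj, white)
    return None

# Toy check on the complete graph K4 (our own example, not figure 1):
k4 = {v: [u for u in 'abcd' if u != v] for v in 'abcd'}
print(dynamo_round(k4, {'a', 'b', 'c'}))   # -> 1: three whites conquer K4
print(dynamo_round(k4, {'a', 'b'}))        # -> None: two whites oscillate
```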
Now the vertices will be conquered in the following order: $`x_0,x_1,x_2,q,w_0,w_1,y_0,c_0,e_0,d_0,y_1,c_1,c_2,e_1,w_2,w_3,y_2,c_3,e_2,e_3,d_1,y_3,y_4`$, $`c_4,c_5,g_0,g_1,f,w_4,w_5,c_6,d_2,c_7,c_8,w_6,w_7,c_9,d_3,c_{10},c_{11},w_8,w_9`$, $`a_0,a_1,a_2,b_0,b_1`$. Eventually, the entire graph is colored white. $`J_n`$ is a graph with $`19+27n>n`$ vertices and $`W_0`$ is a dynamo of size 18, proving Theorem 1. ## 3 Questions and Remarks The result of Section 2 gives rise to the following questions: ###### Question 1 Does there exist an infinite graph with a finite dynamo? The answer is no. This follows from the following theorem: ###### Theorem 2 If $`W_0`$ is finite then $`T_r`$ is finite for all $`r=1,2,\dots `$. Moreover, every vertex in $`T_r`$ has a finite degree. Proof: The proof is by induction on $`r`$. For $`r=1`$ the theorem is true because every vertex $`v\in W_0`$ with an infinite degree becomes black at round 1. For $`r>1`$, if $`C_{r-1}(v)=𝒲`$ and $`v`$ has an infinite degree $`\lambda `$ then by the induction hypothesis $`C_{r-2}(v)=ℬ`$ and $`|N(v)\cap B_{r-2}|<\lambda `$. Hence $`|N(v)\cap W_{r-1}|\le |N(v)\cap B_{r-2}|+|T_{r-1}|<\lambda `$ and $`C_r(v)=ℬ`$. If $`v\in T_r`$ has a finite degree then $`v`$ has a neighbor in $`T_{r-1}`$. By the induction hypothesis only finitely many vertices have such a neighbor, and thus $`T_r`$ is finite. □ The next question deals with other models considered by Peleg: ###### Question 2 Do we still have a dynamo of size O(1) if we change the rules of dealing with ties? (e.g. if a vertex becomes black whenever there is a tie.) The answer here is yes. If $`G=(V,E)`$ is a graph, introduce a new vertex $`v^{\prime }`$ for every $`v\in V`$ and consider the graph $`\widehat{G}=(\widehat{V},\widehat{E})`$ where $$\widehat{V}=\{v,v^{\prime }:v\in V\}$$ and $$\widehat{E}=E\cup \{(u^{\prime },v^{\prime }):(u,v)\in E\}\cup \{(v,v^{\prime }):2\mid d(v)\}$$ If $`W_0`$ is a dynamo of $`G`$ according to the model in Theorem 1, then it is easy to prove that $`\widehat{W_0}=\{v,v^{\prime }:v\in W_0\}`$ is a dynamo of $`\widehat{G}`$. But all vertices of $`\widehat{G}`$ have odd degrees, and thus ties are not possible and $`\widehat{W_0}`$ is a dynamo of $`\widehat{G}`$ according to any rule of dealing with ties. Therefore, for every $`n=1,2,\dots `$ the graph $`\widehat{J_n}`$ has a dynamo of size 36. ## 4 Another Model Let $`\rho >1`$ be a real number. Consider the following model, which will henceforth be called the $`\rho `$-model. At every round, for every vertex $`v`$ with $`b`$ neighbors colored black and $`w`$ neighbors colored white, if $`w>\rho b`$ then $`v`$ is colored white at the next round, otherwise it is black. For the sake of simplicity we will assume that $`\rho `$ is irrational and that there are no isolated vertices, so that $`w=\rho b`$ is impossible. The most interesting question regarding this model is whether there exist graphs with O(1) dynamo like in Theorem 1. This question is as yet open. We only have some partial results, which can be summarized as follows: 1. If $`\rho `$ is big enough then the size of a dynamo is $`\mathrm{\Omega }(\sqrt{n})`$. 2. If $`\rho `$ is small enough then there exist graphs in which the size of a dynamo is $`O(\mathrm{log}n)`$. 3. If there exist graphs with O(1) dynamo then the number of rounds needed until the entire system becomes white is $`\mathrm{\Omega }(\mathrm{log}n)`$. More explicitly: ###### Theorem 3 Let $`\rho >3`$.
If a graph with $`n`$ vertices has a dynamo of size $`k`$ in the $`\rho `$-model then $$n<k^2$$ proof: For every $`r=1,2,\dots `$, let $`(S_r,\overline{S}_r)`$ be the set of edges with one vertex in $`S_r`$ and the other not in $`S_r`$. Call $`s_r=|S_r|+|(S_r,\overline{S}_r)|`$. Note that $`S_1`$ is the set of vertices which are white at both round 0 and round 1. Every $`v\in S_1`$ is connected to at most $`k-|S_1|`$ vertices in $`W_0\setminus S_1`$ and at most $`\frac{k-1}{\rho }<k-1`$ vertices outside of $`W_0`$. Therefore we have $$s_1<|S_1|+|S_1|(k-|S_1|+k-1)=k^2-(k-|S_1|)^2\le k^2$$ Thus all we need is to show $`s_{r+1}\le s_r`$ and we are done. Let $`r`$ be fixed. By definition $`S_r\subseteq S_{r+1}`$. Let $`\mathrm{\Delta }=S_{r+1}\setminus S_r`$, and let $`v\in \mathrm{\Delta }`$. More than $`\frac{3}{4}`$ of the neighbors of $`v`$ are white at round $`r`$ and more than $`\frac{3}{4}`$ of the neighbors of $`v`$ are white at round $`r-1`$. Thus more than $`\frac{1}{2}`$ of the neighbors of $`v`$ belong to $`S_r`$. We therefore have $$|(S_r,\overline{S}_r)\setminus (S_{r+1},\overline{S}_{r+1})|-|(S_{r+1},\overline{S}_{r+1})\setminus (S_r,\overline{S}_r)|\ge |\mathrm{\Delta }|$$ which implies $`s_{r+1}\le s_r`$. By induction $`s_r<k^2`$ for all $`r`$. If we begin with a dynamo then for some finite $`m`$ we have $`S_m=V`$ and $`n=s_m<k^2`$ □ ###### Theorem 4 Let $`\rho >1`$. If $`|W_0|=k`$ and $`W_m=V`$ (the set of all vertices), then the number $`e`$ of edges in the graph satisfies $$e<k^2\left(\frac{2\rho }{\rho -1}\right)^m$$ proof: Let $`d_r`$ denote the sum of the degrees of the vertices in $`S_r`$. Recall that every $`v\in S_1`$ is white at both round 0 and round 1, and thus $`|N(v)\cap B_0|<k`$ and $`d(v)<2k`$. Therefore, $`d_1<2k^2`$. Again, let $`r`$ be fixed, let $`\mathrm{\Delta }`$ be as in the proof of Theorem 3 and let $`v\in \mathrm{\Delta }`$. More than $`\frac{\rho }{\rho +1}`$ of the neighbors of $`v`$ are white at round $`r`$ and more than $`\frac{\rho }{\rho +1}`$ of the neighbors of $`v`$ are white at round $`r-1`$. Thus more than $`\frac{\rho -1}{\rho +1}`$ of the neighbors of $`v`$ belong to $`S_r`$. Therefore, we have $$d_{r+1}<d_r+\frac{\rho +1}{\rho -1}d_r=\frac{2\rho }{\rho -1}d_r$$ By induction $`d_r<2k^2\left(\frac{2\rho }{\rho -1}\right)^{r-1}`$. If the entire system is white at round $`m`$ then $`d_{m+1}=2e`$ and thus we have $$e<k^2\left(\frac{2\rho }{\rho -1}\right)^m$$ □ ###### Theorem 5 Let $`1<\rho <\frac{257}{256}`$. For every integer $`n>5`$ there exists in the $`\rho `$-model a graph with more than $`2^n`$ vertices and with a dynamo of size $`30(n-5)+36`$. Outline of proof: Let $`\widehat{J}`$ be as defined in the answer to Question 2. Construct $`\stackrel{~}{J}`$ by eliminating $`f`$ from $`\widehat{J}`$ and connecting $`f^{\prime }`$ to $`y_0`$ and $`g_1`$ (but not to $`g_0`$). Note that in $`\stackrel{~}{J}`$ the vertex $`g_0`$ is connected only to $`y_3`$ and to $`y_4`$. In figure 2, the upper graph is a part of $`\widehat{J}`$. The lower graph is the corresponding part in $`\stackrel{~}{J}`$. The rest of $`\stackrel{~}{J}`$ is identical to the rest of $`\widehat{J}`$. Construct $`\stackrel{~}{J}_{32},\stackrel{~}{J}_{64},\dots ,\stackrel{~}{J}_{2^n}`$ as in the construction of $`J_n`$, where the duplicated vertices are all black vertices except for $`q`$ and $`q^{\prime }`$. (Note that the graphs are constructed separately, namely, the sets of vertices of $`\stackrel{~}{J}_{2^i}`$ and $`\stackrel{~}{J}_{2^j}`$ are disjoint for $`i\ne j`$.) Now connect the graphs in the following way.
First, eliminate the copies of $`x_0,x_1,x_2`$ from all graphs except for $`\stackrel{~}{J}_{32}`$. Note that in $`\stackrel{~}{J}_{2^i}`$ there are $`2^i`$ copies of $`g_0`$ (when $`i=5,\dots ,n-1`$). Divide them into 32 disjoint sets $`P_0,\dots ,P_{31}`$, of size $`2^{i-5}`$ each. Now connect the vertices in $`P_0`$ to the copy of $`q`$ in $`\stackrel{~}{J}_{2^{i+1}}`$, connect $`P_1`$ to the copy of $`q^{\prime }`$, and connect each one of $`P_2,\dots ,P_{31}`$ to a respective white vertex in $`\stackrel{~}{J}_{2^{i+1}}`$ (see figure 3). It is possible to verify the following: 1. All vertices of the obtained graph blink either at round 1 or at round 2. 2. All vertices of $`\stackrel{~}{J}_{32}`$ are eventually conquered. (The evolution of this conquest is similar to the one in Theorem 1.) 3. If all copies of $`g_0`$ in $`\stackrel{~}{J}_{2^i}`$ are conquered at a certain round, then all vertices of $`\stackrel{~}{J}_{2^{i+1}}`$ are eventually conquered. (Again, the evolution is similar to the one in Theorem 1. Note that we need the bound $`\rho <\frac{257}{256}`$ in order to have $`q`$ and $`q^{\prime }`$ conquered.) Thus all vertices are eventually conquered. The theorem follows upon noticing that our graph has more than $`2^n`$ vertices, and the size of the dynamo is $`30(n-5)+36`$. □ Acknowledgement: I would like to thank Ron Aharoni and Ron Holzman for helping me with the representation.
# Collective modes and sound propagation in a 𝑝-wave superconductor: Sr2RuO4 ## I Introduction Shortly after the discovery of superconductivity in Sr<sub>2</sub>RuO<sub>4</sub>, the possibility of spin triplet pairing was discussed. Possible pairing symmetries were also classified based on the crystal symmetry. On the experimental front, there have been attempts to single out the right pairing symmetry among these possibilities. A recent measurement of the <sup>17</sup>O Knight shift in NMR for the magnetic field parallel to the $`ab`$ plane showed no change across $`T_c`$, which can be taken as evidence of spin triplet pairing with the $`\widehat{d}`$-vector parallel to the $`c`$-axis. Here $`\widehat{d}`$ is called the spin vector; it is perpendicular to the direction of the spin associated with the condensed pair. A $`\mu `$SR experiment found a spontaneous magnetic field in superconducting Sr<sub>2</sub>RuO<sub>4</sub>, which seems to indicate broken time reversal symmetry in the superconducting state. These experiments may be compatible with the following order parameter $$\widehat{\mathrm{\Delta }}(𝐤)=\mathrm{\Delta }\widehat{d}(k_1\pm ik_2),$$ (1) where $`\mathrm{\Delta }`$ is the magnitude of the superconducting order parameter. Notice that this state is analogous to the $`A`$ phase of <sup>3</sup>He and there is a full gap on the Fermi surface. On the other hand, there also exist experiments that cannot be explained by a naive application of the order parameter given by Eq.(1). Earlier specific heat measurements found a residual density of states at low temperatures below $`T_c`$, which provoked the ideas of orbital dependent superconductivity and even a non-unitary superconducting state. However, a more recent specific heat experiment on a cleaner sample reports no residual density of states, and it was found that the specific heat behaves as $`T^2`$ at low temperatures. This result stimulated speculation about different order parameters with a line node. However, since there are three bands labeled by $`\alpha `$, $`\beta `$, and $`\gamma `$ which cross the Fermi surface, it is not yet clear whether the order parameter given by Eq.(1) is compatible with the more recent specific heat data or not. For example, it is possible that the pairing symmetry associated with the $`\gamma `$ band is still given by Eq.(1) while the order parameter symmetry associated with the $`\alpha `$ and $`\beta `$ bands can be quite different. In this case, the low temperature specific heat will be dominated by the excitations from the $`\alpha `$ and $`\beta `$ bands. In order to resolve the issue, it is important to examine other predictions of the given order parameter and compare the results with future experiments. One way of identifying the correct order parameter among possible candidates is to investigate the unique collective modes supported by the ground state with a given pairing symmetry. The observation of the effects of these collective modes would provide convincing evidence for a particular order parameter symmetry. If we assume that the order parameter of Eq.(1) is realized in Sr<sub>2</sub>RuO<sub>4</sub>, the superconducting state would support unique collective modes, the so-called clapping mode and spin waves, as well as the phase and amplitude modes of the order parameter which also exist in $`s`$-wave superconductors. Previously we studied the dynamics of spin waves.
A possible way to distinguish the order parameter of the $`\gamma `$ band from those of the $`\alpha `$ (or $`\beta `$) band was also proposed in the context of spin wave dynamics. In this paper, we study the dynamics of the sound wave and its coupling to the clapping modes assuming that the order parameter is given by Eq.(1). As in <sup>3</sup>He, only the clapping mode can couple to the sound wave and affect its dynamics. Here we study the sound velocities and attenuation coefficients of the longitudinal and transverse sound waves. In particular, we identify the clapping mode with the frequency $`\omega =\sqrt{2}\mathrm{\Delta }(T)`$, and examine the effects of this mode and disorder on the sound wave propagation. In a recent paper, Higashitani and Nagai obtained the clapping mode with the frequency $`\omega =\sqrt{2}\mathrm{\Delta }`$ and discussed the possible coupling to the sound wave independently of us. It is, however, important to realize that the coupling to the sound wave is extremely small because $`C/v_F\ll 1`$ in metals, where $`C`$ is the sound velocity. Indeed the recent measurement of the sound velocity in the normal and superconducting states of Sr<sub>2</sub>RuO<sub>4</sub> reported in shows that $`C/v_F\ll 1`$. They measured the sound velocities of the longitudinal modes, $`C_{11}`$ ($`𝐪,𝐮\parallel [100]`$) and $`C_{33}`$ ($`𝐪,𝐮\parallel [001]`$), and the transverse modes, $`C_{44}`$ ($`𝐪\parallel [100]`$, $`𝐮\parallel [001]`$) and $`C_{66}`$ ($`𝐪\parallel [100]`$, $`𝐮\parallel [010]`$), where $`𝐪`$ and $`𝐮`$ are the directions of propagation and polarization of the ultrasound, respectively. They found that the longitudinal sound velocities, $`C_{11}`$ and $`C_{33}`$, decrease with a kink at $`T=T_c`$, while the transverse sound velocities do not exhibit any effect of the onset of superconductivity. We estimate from their experimental data that $`C_l/v_F\sim 10^{-2}`$, where $`C_l`$ is the longitudinal sound velocity. It can also be seen that the transverse sound velocity, $`C_t`$, is much smaller than the longitudinal one. Incorporating the correct limit, $`C/v_F\ll 1`$, we obtained the sound velocities and attenuation coefficients for both the collisionless and diffusive limits. In the diffusive limit, the quasi-particle scattering due to impurities should be properly taken into account. One can show that, in a metal like Sr<sub>2</sub>RuO<sub>4</sub>, the collisionless limit is rather difficult to reach because it can be realized only for $`\omega \sim O(1)`$ GHz. For the more practical range of frequencies, kHz–MHz, the diffusive limit may be easier to achieve. On the other hand, we found that it is much easier to see the effects of the coupling between the sound waves and the clapping mode in the collisionless limit. Therefore it is worthwhile to study both regimes. Here we summarize our main results. A. Collisionless limit In the absence of the coupling to the clapping mode, the longitudinal sound velocity decreases in the superconducting state because the effect of the screening of the Coulomb potential increases, which happens in $`s`$-wave superconductors as well. However, one of the important features of the $`p`$-wave order parameter under consideration is that the sound wave can now couple to the clapping mode. This effect is absent in $`s`$-wave superconductors. One can show that, among the longitudinal waves, the $`C_{11}`$ mode can couple to the clapping mode, but the $`C_{33}`$ mode cannot.
We found that the longitudinal sound velocity $`C_{11}`$ decreases as $$\frac{\delta C_l^{11}}{C_l^{11}}=-\lambda _l^{11}\left[\frac{1}{2}-2\left(\frac{C_l^{11}}{v_F}\right)^2\left(1-f-\frac{f}{4\{1+(2\mathrm{\Delta }(T)/v_Fq)^2\}}\right)\right],$$ (2) where $`\lambda _l`$ is the coupling constant and $`f`$ is the superfluid density. $`\delta C_l/C_l`$ is the relative shift in the sound velocity. We estimated the frequency regime where one can observe the effect of the clapping mode and found that the effect is visible if $`v_F|𝐪|\sim 2`$–$`3\mathrm{\Delta }(0)`$. This implies that $`\omega \sim O(1)`$ GHz. Since $`C_{33}`$ does not couple to the clapping mode, the velocity change is simply given by $$\frac{\delta C_l^{33}}{C_l^{33}}=-\lambda _l^{33}\left[\frac{1}{2}-2\left(\frac{C_l^{33}}{v_F}\right)^2(1-f)\right].$$ (3) Since the velocity of the transverse wave is much smaller than that of the longitudinal one and the coupling to the electron system is weaker than the longitudinal case as well, we expect that the change of the transverse sound velocity is hard to observe. In order to complete the discussion, we also present the results for these small changes in transverse velocities. Here only the $`C_{66}`$ mode couples to the clapping mode, and the $`C_{44}`$ mode does not. We found $$\frac{\delta C_t^{66}}{C_t^{66}}=-\lambda _t^{66}\left[\frac{1}{2}+2\left(\frac{C_t^{66}}{v_F}\right)^2\left(1-f+\frac{f}{4\{1+(2\mathrm{\Delta }/v_Fq)^2\}}\right)\right],$$ (4) $$\frac{\delta C_t^{44}}{C_t^{44}}=-\lambda _t^{44}\left[\frac{1}{2}+2\left(\frac{C_t^{44}}{v_F}\right)^2(1-f)\right],$$ (5) where $`\lambda _t`$ is the transverse coupling constant. We also found that the leading contribution to the attenuation coefficient is the same as that of $`s`$-wave superconductors in the collisionless limit. B. Diffusive limit This case corresponds to $`\omega ,v_F|𝐪|\ll \mathrm{\Gamma }`$, where $`\mathrm{\Gamma }`$ is the scattering rate due to impurities. As in the case of the collisionless limit, in principle the $`C_{11}`$ and $`C_{66}`$ modes couple to the clapping mode, but it turns out that the effect is almost impossible to detect. Neglecting the coupling to the clapping mode and working in the limit $`4\pi T_c\gg 2\mathrm{\Gamma }\gg v_F|𝐪|,\omega `$, we obtain the following results near $`T_c`$. $$\frac{\delta C_l}{C_l}=-\lambda _l\frac{1}{2}\left\{1-\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2\left[1-\frac{4\pi ^3}{7\zeta (3)}\frac{T}{\mathrm{\Gamma }}\left(1-\frac{T}{T_c}\right)\right]\right\},$$ (6) $$\frac{\alpha _l}{\alpha _l^n}=1-\frac{2\pi ^3}{7\zeta (3)}\frac{T}{\mathrm{\Gamma }}\left(1-\frac{T}{T_c}\right),$$ (7) where $`\alpha `$ and $`\alpha _n`$ are the attenuation coefficients in the superconducting and the normal states respectively. The shift of the sound velocity and the attenuation coefficient decrease linearly in $`(1-T/T_c)`$ as $`T\to T_c`$. This result for the longitudinal sound wave is consistent with the experimental observation reported in . In the case of the transverse sound waves, the leading behaviors of the sound velocity and the attenuation coefficient can be obtained simply by replacing $`C_l`$ and $`\lambda _l`$ by $`C_t`$ and $`\lambda _t`$ in the diffusive limit.
However, the absolute value of the transverse sound velocity is much smaller than the longitudinal one and the coupling to the electron system is also much weaker than in the case of the longitudinal sound waves. Thus, it would be hard to observe any change at $`T=T_c`$ for the transverse wave. This may explain the experimental finding that the transverse velocity does not show any change across $`T_c`$. We also obtained the attenuation coefficient for all temperatures below $`T_c`$. It is given by Eq.(43) and Fig.2 shows its behavior. The rest of the paper is organized as follows. In section II, the clapping mode is briefly discussed. In section III, we provide a brief summary of the formalism used in Ref. to explain how the sound velocity and the attenuation coefficients are related to the autocorrelation functions of the stress tensor. We present the results of the study on the sound propagation in the collisionless and diffusive limits in sections IV and V, respectively. We conclude in section VI. Further details which are not presented in the main text are relegated to Appendices A and B. ## II Collective modes in Sr<sub>2</sub>RuO<sub>4</sub> As in $`s`$-wave superconductors, the phase and amplitude modes of the order parameter also exist in $`p`$-wave superconductors. On the other hand, due to the internal structure of the Cooper pair in the $`p`$-wave superconductor, there exist other types of collective modes associated with the order parameter. The nature of these modes is determined by the structure of the order parameter. There are collective modes associated with the oscillation of the spin vector $`\widehat{d}`$, which we have already discussed in and . There exists another collective mode associated with the orbital part. Using the notation $`e^{\pm i\varphi }=(k_1\pm ik_2)/|𝐤|`$, the oscillation of the orbital part $`e^{\pm i\varphi }\to e^{\mp i\varphi }`$ gives rise to the clapping mode with $`\omega =\sqrt{2}\mathrm{\Delta }(T)`$. This mode couples to the sound waves as we will see in the next section. Therefore, the detection of the clapping mode will provide unique evidence for the $`p`$-wave superconducting order parameter. The derivation of the clapping mode and its coupling to the sound wave is discussed in Appendix A. ## III Dynamics of sound wave via stress tensor In ordinary liquids, the sound wave is a density wave. In superconductors, the density is not only coupled to the longitudinal component of the normal velocity, but also to the superfluid velocity and to temperature. The role of these couplings and their consequences in the dynamics of the sound wave can be studied by looking at the autocorrelation function, $`[\tau _{ij},\tau _{ij}]`$, of the stress tensor, $`\tau _{ij}`$: $$[\tau _{ij},\tau _{ij}](𝐫-𝐫^{\prime },t-t^{\prime })\equiv -i\theta (t-t^{\prime })\langle [\tau _{ij}(𝐫,t),\tau _{ij}(𝐫^{\prime },t^{\prime })]\rangle ,$$ (8) where $$\tau _{ij}(𝐫,t)=\sum _\sigma \left[\frac{(\nabla -\nabla ^{\prime })_i}{2i}\frac{(\nabla -\nabla ^{\prime })_j}{2im}\psi _\sigma ^{\dagger }(𝐫,t)\psi _\sigma (𝐫^{\prime },t)\right]_{𝐫^{\prime }=𝐫}.$$ (9) Here $`\psi _\sigma ^{\dagger }`$ is the electron creation operator with spin $`\sigma `$.
The other operators whose correlation functions are needed for the ultrasonic attenuation and the sound velocity change are the density operator $$n(𝐫,t)=\sum _\sigma \psi _\sigma ^{\dagger }(𝐫,t)\psi _\sigma (𝐫,t),$$ (10) and the current operator $$𝐣(𝐫,t)=\sum _\sigma \left[\frac{\nabla -\nabla ^{\prime }}{2im}\psi _\sigma ^{\dagger }(𝐫,t)\psi _\sigma (𝐫^{\prime },t)\right]_{𝐫^{\prime }=𝐫}.$$ (11) Assuming that the wave vector of the sound wave is in the $`\widehat{𝐱}`$ direction, $`𝐪=q\widehat{x}`$, the sound velocity shift, $`\delta C`$, at low frequencies can be computed from $$\frac{\delta C_l}{C_l}=\frac{C_l(\omega )-C_l}{C_l}|_{\omega =C_l|𝐪|}=-\frac{\omega }{m_{\mathrm{ion}}C_l|𝐪|}\mathrm{Re}[h_l,h_l](𝐪,\omega )|_{\omega =C_l|𝐪|},$$ (12) $$\frac{\delta C_t}{C_t}=\frac{C_t(\omega )-C_t}{C_t}|_{\omega =C_t|𝐪|}=-\frac{\omega }{m_{\mathrm{ion}}C_t|𝐪|}\mathrm{Re}[h_t,h_t](𝐪,\omega )|_{\omega =C_t|𝐪|},$$ (13) where $$h_l(𝐫,t)=\frac{q}{\omega }\tau _{xx}(𝐪,t)-\frac{\omega m}{q}n(𝐫,t),$$ (14) $$h_t(𝐫,t)=\frac{q}{\omega }\tau _{xy}(𝐪,t)-mj_y(𝐫,t),$$ (15) Here $`C_l`$ and $`C_t`$ represent the longitudinal and transverse sound velocities in the normal state respectively. $`m_{\mathrm{ion}}`$ and $`m`$ are the mass of the ions and the mass of the electron, respectively. On the other hand, the attenuation coefficient, $`\alpha `$, at low frequencies is obtained from $$\alpha _l=-\frac{\omega }{m_{\mathrm{ion}}C_l}\mathrm{Im}[h_l,h_l](𝐪,\omega )|_{\omega =C_l|𝐪|},\alpha _t=-\frac{\omega }{m_{\mathrm{ion}}C_t}\mathrm{Im}[h_t,h_t](𝐪,\omega )|_{\omega =C_t|𝐪|}.$$ (16) These relations are extensively discussed in the work of Kadanoff and Falko. ## IV Sound propagation in the collisionless limit As discussed in the introduction, in a metal like Sr<sub>2</sub>RuO<sub>4</sub> the collisionless limit is somewhat difficult to reach because we need the sound wave with the frequency $`\omega \sim O(1)`$ GHz. However, we will also see that this is the regime where one has the best chance to observe the effect of the collective mode. The sound velocity shift and the attenuation coefficients can be calculated by looking at the autocorrelation functions, $`[\tau _{ij},\tau _{ij}]`$, of the stress tensor, $`\tau _{ij}`$, as discussed in the previous section. We will use the finite temperature Green’s function technique to compute these correlation functions. The single particle Green’s function, $`G(i\omega _n,𝐤)`$, in the Nambu space is given by $$G^{-1}(i\omega _n,𝐤)=i\omega _n-\xi _𝐤\rho _3-\mathrm{\Delta }(\widehat{k}\cdot \stackrel{}{\rho })\sigma _1,$$ (17) where $`\rho _i`$ and $`\sigma _i`$ are Pauli matrices acting on the particle-hole and spin space respectively, $`\omega _n=(2n+1)\pi T`$ is the fermionic Matsubara frequency, and $`\xi _𝐤=𝐤^2/2m-\mu `$. Then, for example, the irreducible correlation function can be computed from $$[\tau _{ij},\tau _{ij}]_{00}(i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}\left[\left(\frac{p_ip_j}{m}\right)^2\rho _3G(𝐩,\omega _n)\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )\right],$$ (18) where $`\omega _\nu =2\nu \pi T`$ is the bosonic Matsubara frequency. ### A Longitudinal sound wave Let us consider the longitudinal wave with $`𝐮\parallel 𝐪\parallel 𝐱`$, which corresponds to the $`C_{11}`$ mode. Since the stress tensor couples to the density, the autocorrelation function $`[h_l,h_l]`$ is renormalized by the Coulomb interaction.
In the long wavelength limit and for $`s\equiv \omega /v_F|𝐪|\ll 1`$, the renormalized correlation function $`[h_l,h_l]_0`$ can be reduced to $$\mathrm{Re}[h_l,h_l]_0\approx \left(\frac{q}{\omega }\right)^2\mathrm{Re}\frac{p_F^4}{4m^2}\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle =\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}N(0)\left[\frac{1}{2}-2s^2(1-f)\right],$$ (19) where $$\langle A,B\rangle =T\sum _n\sum _𝐩\mathrm{Tr}[A\rho _3G(𝐩,\omega _n)B\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )],$$ (20) with $`A`$ and $`B`$ being some functions of $`\varphi `$ or operators. Here $`\varphi `$ is the angle between p and q, $`N(0)=m/2\pi `$ is the density of states at the Fermi level and $`f`$ is the superfluid density in the static limit ($`\omega \ll v_F|𝐪|`$) given by $$f=2\pi T\mathrm{\Delta }^2\sum _{n=0}^{\mathrm{}}\frac{1}{\omega _n^2+\mathrm{\Delta }^2}\frac{1}{\sqrt{\omega _n^2+\mathrm{\Delta }^2+(v_Fq)^2/4}}.$$ (21) The derivation of the result in Eq.(19) is given in Appendix B 2. Therefore, the sound velocity shift, $`\delta C_l`$, is given by $$\frac{\delta C_l}{C_l}=-\lambda _l\left[\frac{1}{2}-2s^2(1-f)\right],$$ (22) where $`\lambda _l=p_F^2/(8\pi mm_{\mathrm{ion}}C_l^2)`$ is the longitudinal coupling constant. Here we set $`s=\omega /v_F|𝐪|=C_l/v_F`$, where $`C_l`$ is the longitudinal sound velocity. In Sr<sub>2</sub>RuO<sub>4</sub>, $`s=10^{-2}\ll 1`$, which is very different from $`s\sim 1`$ of <sup>3</sup>He. Now let us consider the correction due to the collective modes. The additional renormalization of $`[h_l,h_l]`$ (in the $`C_{11}`$ mode) due to the collective mode is computed in Appendix B 3 and the result is given by $$[h_l,h_l]=\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}\left[\frac{1}{2}-2s^2\left(1-f-\frac{f(v_F|𝐪|)^2}{4\{(v_F|𝐪|)^2+4\mathrm{\Delta }(T)^2-2\omega ^2\}}\right)\right]+i\frac{m_{\mathrm{ion}}C_l^{11}}{\omega }\alpha _l(\omega ).$$ (23) As one can see from the above equation, there is no resonance because $`\omega \ll v_F|𝐪|`$. However, we will be able to see a shadow of the collective mode in the sound velocity change, which we discuss in the following. In the limit $`s\ll 1`$ and setting $`s=C_l^{11}/v_F`$, the above equation leads to the sound velocity shift given by $$\frac{\delta C_l^{11}}{C_l^{11}}=-\lambda _l^{11}\left[\frac{1}{2}-2\left(\frac{C_l^{11}}{v_F}\right)^2\left(1-f-\frac{f}{4\{1+(2\mathrm{\Delta }/v_Fq)^2\}}\right)\right].$$ (24) Note that the sound wave gets softened more by the collective mode. In Fig. 1, we show $`I=1-f-\frac{f}{4[1+(2\mathrm{\Delta }(T)/v_F|𝐪|)^2]}`$ for $`v_F|𝐪|/\mathrm{\Delta }(0)=0,1,2,3`$ for $`0.7<t<1.0`$ where $`t=T/T_c`$. Note that the coupling to the collective mode can be observed for $`v_F|𝐪|\sim 2`$–$`3\mathrm{\Delta }(0)`$, which corresponds to $`\omega \sim O(1)`$ GHz.
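Eq.(21) and the quantity $`I`$ of Fig. 1 are straightforward to evaluate numerically. The Python sketch below is our own illustration, not the authors' code; it assumes the standard BCS weak-coupling value $`\mathrm{\Delta }(0)\approx 1.764T_c`$ and the common interpolation $`\mathrm{\Delta }(T)=\mathrm{\Delta }(0)\mathrm{tanh}(1.74\sqrt{T_c/T-1})`$, neither of which is specified in the text.

```python
import numpy as np

def gap(t, delta0=1.764):
    """Delta(T)/T_c via a common BCS interpolation (our assumption)."""
    return 0.0 if t >= 1.0 else delta0 * np.tanh(1.74 * np.sqrt(1.0 / t - 1.0))

def f_superfluid(t, vfq, n_max=20000):
    """Superfluid density f of Eq. (21); all energies in units of T_c."""
    d = gap(t)
    if d == 0.0:
        return 0.0
    wn = (2.0 * np.arange(n_max) + 1.0) * np.pi * t   # Matsubara frequencies
    s = np.sum(1.0 / (wn**2 + d**2) / np.sqrt(wn**2 + d**2 + vfq**2 / 4.0))
    return 2.0 * np.pi * t * d**2 * s

def bracket_I(t, vfq):
    """I = 1 - f - f / (4 [1 + (2 Delta / v_F q)^2]), the quantity of Fig. 1."""
    d, f = gap(t), f_superfluid(t, vfq)
    return 1.0 - f - f / (4.0 * (1.0 + (2.0 * d / vfq) ** 2))

for n in (1, 2, 3):                     # v_F |q| / Delta(0) = 1, 2, 3
    vfq = n * 1.764
    print(n, [round(bracket_I(t, vfq), 3) for t in (0.7, 0.8, 0.9, 0.99)])
```

With these inputs, $`I`$ approaches 1 as $`t\to 1`$, and the collective-mode term visibly deepens the suppression only when $`v_F|𝐪|`$ is a few $`\mathrm{\Delta }(0)`$, consistent with the GHz estimate above.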
The attenuation coefficient, $`\alpha _l`$, is given by $$\frac{\alpha _l(\omega )}{\alpha _l^n(\omega )}=\frac{1}{\omega }_\mathrm{\Delta }^{\mathrm{}}𝑑\omega ^{\prime }\frac{\omega ^{\prime }(\omega ^{\prime }+\omega )-\mathrm{\Delta }^2}{\sqrt{\omega ^{\prime 2}-\mathrm{\Delta }^2}\sqrt{(\omega ^{\prime }+\omega )^2-\mathrm{\Delta }^2}}\left(\mathrm{tanh}\frac{\omega +\omega ^{\prime }}{2T}-\mathrm{tanh}\frac{\omega ^{\prime }}{2T}\right)$$ (25) $$-\theta (\omega -2\mathrm{\Delta })\frac{1}{\omega }_\mathrm{\Delta }^{\omega -\mathrm{\Delta }}𝑑\omega ^{\prime }\frac{\omega ^{\prime }(\omega ^{\prime }-\omega )-\mathrm{\Delta }^2}{\sqrt{\omega ^{\prime 2}-\mathrm{\Delta }^2}\sqrt{(\omega ^{\prime }-\omega )^2-\mathrm{\Delta }^2}}\left(\mathrm{tanh}\frac{\omega ^{\prime }}{2T}\right),$$ (26) where $`\alpha _l^n`$ is the attenuation coefficient in the normal state. This form is the same as the one in the s-wave superconductors. We can carry out a parallel analysis for the longitudinal wave with $`𝐮\parallel 𝐪\parallel 𝐳`$ which corresponds to the $`C_{33}`$ mode. Unfortunately, this sound wave does not couple to the clapping mode. Therefore, the velocity shift of this sound wave is simply given by $$\frac{\delta C_l^{33}}{C_l^{33}}=-\lambda _l^{33}\left[\frac{1}{2}-2\left(\frac{C_l^{33}}{v_F}\right)^2(1-f)\right].$$ (27) ### B Transverse sound wave Here we consider first the $`C_{66}`$ mode that has $`𝐮\parallel 𝐲`$ and $`𝐪\parallel 𝐱`$. In this case, the sound velocity change can be obtained from the evaluation of $`[h_t,h_t]`$. Assuming that the current contribution is negligible at low frequencies and following the same procedure used in the case of the longitudinal sound wave, we obtain $$\frac{\delta C_t^{66}}{C_t^{66}}=-\lambda _t^{66}\left[\frac{1}{2}+2\left(\frac{C_t^{66}}{v_F}\right)^2\left(1-f+\frac{f}{4\{1+(2\mathrm{\Delta }/v_Fq)^2\}}\right)\right],$$ (28) where $`\lambda _t=p_F^2/(8\pi mm_{\mathrm{ion}}C_t^2)`$ is the transverse coupling constant. Note that the transverse sound velocity increases upon entering the superconducting state. However, due to the fact that the transverse velocity is rather small and the coupling to the electron system is also weak compared to the longitudinal case, it will be hard to observe the change of the transverse sound velocity at $`T=T_c`$. Another transverse sound mode, $`C_{44}`$, that has $`𝐮\parallel 𝐳`$ and $`𝐪\parallel 𝐱`$, does not couple to the clapping mode. Thus the sound velocity change in this case is given by $$\frac{\delta C_t^{44}}{C_t^{44}}=-\lambda _t^{44}\left[\frac{1}{2}+2\left(\frac{C_t^{44}}{v_F}\right)^2(1-f)\right].$$ (29) ## V The diffusive limit In the frequency range kHz–MHz, the diffusive limit is more realistic. In this limit, the incorporation of the quasi-particle damping is very important. Here we assume for simplicity that the quasi-particle scattering is due to impurities. Unlike the case of s-wave superconductors, we treat the impurity scattering in the unitary limit. Then the effect of the impurity is incorporated by changing $`\omega _n`$ to $`\stackrel{~}{\omega }_n`$ (renormalized Matsubara frequency) in Eq.(17). The impurity renormalized complex frequency, $`\stackrel{~}{\omega }_n`$, is determined from $$\stackrel{~}{\omega }_n=\omega _n+\mathrm{\Gamma }\frac{\sqrt{\stackrel{~}{\omega }_n^2+\mathrm{\Delta }^2}}{\stackrel{~}{\omega }_n},$$ (30) where $`\mathrm{\Gamma }`$ is the quasi-particle scattering rate and the quasi-particle mean free path is given by $`l=v_F/(2\mathrm{\Gamma })`$.
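Eq.(30) is an implicit equation for $`\stackrel{~}{\omega }_n`$, but it converges quickly under simple fixed-point iteration. The short Python sketch below is our own illustration; the numerical values of $`\mathrm{\Delta }`$, $`\mathrm{\Gamma }`$ and $`T`$ are arbitrary placeholders (in units of $`T_c`$):

```python
import math

def renormalize(wn, delta, gamma, tol=1e-12, max_iter=1000):
    """Solve  w~ = wn + Gamma * sqrt(w~^2 + Delta^2) / w~  by iteration
    (the unitary-limit impurity renormalization of Eq. (30))."""
    wt = wn + gamma                      # simple starting guess
    for _ in range(max_iter):
        new = wn + gamma * math.sqrt(wt**2 + delta**2) / wt
        if abs(new - wt) < tol:
            return new
        wt = new
    return wt

# Example: first Matsubara frequency at T = 0.5 Tc, with placeholder
# values Delta = 1.4 and Gamma = 0.2 (all in units of T_c)
print(renormalize(math.pi * 0.5, 1.4, 0.2))
```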
In order to compare the results in the normal state and those in the superconducting state, let us first work out the correlation functions in the normal state, where $`\mathrm{\Delta }=0`$. ### A Normal state We can use Eq.(B6) to compute $`[h_l,h_l]_0`$. In the limit of $`\omega ,v_Fq\ll 2\mathrm{\Gamma }`$, we get $$\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle =\left\langle \mathrm{cos}^2(2\varphi )\left(1-\frac{\omega }{\omega +2i\mathrm{\Gamma }-\zeta }\right)\right\rangle \approx \frac{1}{2}\left[1-\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2+i\frac{\omega }{2\mathrm{\Gamma }}\right].$$ (31) One can show that $`\langle \mathrm{cos}(2\varphi ),1\rangle `$ is of higher order in $`\omega /2\mathrm{\Gamma }`$ and $`v_F|𝐪|/2\mathrm{\Gamma }`$ while $`\langle 1,1\rangle \approx 2\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$ to the lowest order. Thus, as in the previous section, $`[h_l,h_l]`$ is well approximated by $`\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$ times a multiplicative factor. This gives us $$\frac{\delta C_l^n}{C_l}\approx -\lambda _l\frac{1}{2}\left[1-\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2\right],\alpha _l^n\approx \lambda _l|𝐪|\left(\frac{\omega }{2\mathrm{\Gamma }}\right),$$ (32) where $`\omega `$ is set to $`C_l|𝐪|`$. It is not difficult to see that the results for the transverse sound wave are essentially the same as the longitudinal case up to the lowest order with a simple replacement of $`\lambda _l`$ and $`C_l`$ by $`\lambda _t`$ and $`C_t`$. Therefore, in the diffusive limit, the longitudinal and transverse sound velocities have the same form with different coupling constants. ### B Superconducting state near $`T_c`$ Now we turn to the case of the superconducting state near $`T_c`$, where the correlation functions can be computed from Eq.(B9) after replacing $`\omega _n`$ by $`\stackrel{~}{\omega }_n`$. In this section, we will assume $`4\pi T_c\gg 2\mathrm{\Gamma }\gg v_F|𝐪|`$ and use $`\frac{\mathrm{\Delta }}{2\pi T}\ll 1`$ near $`T_c`$. As in the previous sections, the leading contribution in $`[h_l,h_l]`$ can be computed from $`\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$. After some algebra, we finally obtain $$\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle \approx \frac{1}{2}-\frac{1}{2}\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2\left[1-\frac{\mathrm{\Delta }^2}{\pi \mathrm{\Gamma }T}\left\{\psi ^{(1)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T})-\frac{\mathrm{\Gamma }}{8\pi T}\psi ^{(2)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T})\right\}\right]+i\frac{1}{2}\left(\frac{\omega }{2\mathrm{\Gamma }}\right)\left[1-\frac{\mathrm{\Delta }^2}{2\pi \mathrm{\Gamma }T}\left\{\psi ^{(1)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T})-\frac{\mathrm{\Gamma }}{4\pi T}\psi ^{(2)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T})\right\}\right],$$ (34) where $$\psi ^{(n)}(z)=\left(\frac{d}{dz}\right)^n\psi (z)=(-1)^{n+1}n!\sum _{k=0}^{\mathrm{}}\frac{1}{(z+k)^{n+1}}.$$ (35) Here $`\psi ^{(n)}(z)`$ is the polygamma function and $`\psi (z)`$ is the digamma function.
This leads to $$\frac{\delta C_{l,t}}{C_{l,t}}=-\lambda _{l,t}\frac{1}{2}\left[1-\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2\left(1-\frac{\mathrm{\Delta }^2}{\pi \mathrm{\Gamma }T}\psi ^{(1)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T})\right)\right],$$ (36) $$\frac{\alpha _{l,t}}{\alpha _{l,t}^n}=1-\frac{\mathrm{\Delta }^2}{2\pi \mathrm{\Gamma }T}\psi ^{(1)}(\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T}),$$ (37) where $`\alpha _n`$ is the ultrasonic attenuation coefficient in the normal state. Here we combined the subscripts $`l`$ and $`t`$, because the above analysis applies to the case of the transverse sound wave as well. Only the coupling constants $`\lambda _{l,t}`$ are different. In particular, when $`\mathrm{\Gamma }/2\pi T\approx \mathrm{\Gamma }/2\pi T_c\ll 1`$, the above equations can be further reduced to $$\frac{\delta C_{l,t}}{C_{l,t}}=-\lambda _{l,t}\frac{1}{2}\left\{1-\left(\frac{\omega }{2\mathrm{\Gamma }}\right)^2\left[1-\frac{4\pi ^3}{7\zeta (3)}\frac{T}{\mathrm{\Gamma }}\left(1-\frac{T}{T_c}\right)\right]\right\},$$ (38) $$\frac{\alpha _{l,t}}{\alpha _{l,t}^n}=1-\frac{2\pi ^3}{7\zeta (3)}\frac{T}{\mathrm{\Gamma }}\left(1-\frac{T}{T_c}\right).$$ (39) Note that the sound velocity change and the attenuation coefficients decrease linearly in $`(1-T/T_c)`$ as $`T\to T_c`$. This result, Eq. (39), for the longitudinal sound wave is consistent with the experimental observation reported in . However, in the experiment, the transverse sound velocity does not show any change across $`T_c`$. The absolute value of the transverse sound velocity is much smaller than the longitudinal one and the coupling to the electron system is also much weaker than that of the longitudinal sound waves. Therefore, it is difficult to observe any change at $`T=T_c`$ for the transverse wave, which may explain the experimental results. Here we neglect the coupling to the collective mode. Indeed, even in the diffusive limit, the $`C_{11}`$ and $`C_{66}`$ modes do couple to the collective mode. However, our investigation showed that the coupling to the collective mode in these cases is almost impossible to detect although we do not present the details of the analysis here. ### C Ultrasonic attenuation for all temperature regimes The general expression of the sound attenuation coefficient for $`T<T_c`$ can be obtained by following the procedure of Kadanoff & Falko, and Tsuneto. To obtain the ultrasonic attenuation coefficient, $`\alpha _t`$, we compute the imaginary part of the correlation function $`[\tau _{xy},\tau _{xy}]`$. We finally arrive at $$\mathrm{Im}[\tau _{xy},\tau _{xy}]=\frac{p_F^4}{m^2}N(0)\omega _{-\mathrm{}}^{\mathrm{}}𝑑\omega \left(-\frac{\partial n_F}{\partial \omega }\right)\frac{g(\stackrel{~}{\omega })}{\mathrm{Im}\sqrt{\stackrel{~}{\omega }^2-\mathrm{\Delta }^2}}\frac{1+y^2/2-\sqrt{1+y^2}}{y^4},$$ (40) where $`y=\frac{v_Fq}{2\mathrm{Im}\sqrt{\stackrel{~}{\omega }^2-\mathrm{\Delta }^2}}`$ and $`n_F(\omega )=1/(e^{\omega /T}+1)`$ is the Fermi distribution function.
The coherence factor $`g(\stackrel{~}{\omega })`$ is given by $$g(\stackrel{~}{\omega })=\frac{1}{2}\left(1+\frac{|\stackrel{~}{x}|^2-1}{|\stackrel{~}{x}^2-1|}\right),$$ (41) where $`\stackrel{~}{x}=\stackrel{~}{\omega }/\mathrm{\Delta }`$ is determined from $$\stackrel{~}{x}=\frac{\omega }{\mathrm{\Delta }}+i\frac{\mathrm{\Gamma }}{\mathrm{\Delta }}\frac{\sqrt{\stackrel{~}{x}^2-1}}{\stackrel{~}{x}}.$$ (42) A similar analysis can also be done for $`\alpha _l`$. In the limit of $`|𝐪|l\ll 1`$, the above result leads to the following ratio between the attenuation coefficients in the superconducting state, $`\alpha _{l,t}`$, and the normal state, $`\alpha _{l,t}^n`$. $$\frac{\alpha _{l,t}}{\alpha _{l,t}^n}=\frac{\mathrm{\Gamma }}{8\mathrm{\Delta }}_0^{\mathrm{}}\frac{d\omega }{T}\mathrm{sech}^2(\frac{\omega }{2T})\frac{g(\stackrel{~}{\omega })}{\mathrm{Im}\sqrt{\stackrel{~}{x}^2-1}}.$$ (43) Notice that Eq. (43) applies to both the transverse and longitudinal sound waves. This result is evaluated numerically and shown in Fig.2 for several $`\mathrm{\Gamma }/\mathrm{\Gamma }_c`$ where $`\mathrm{\Gamma }_c=\mathrm{\Delta }(0)/2`$ is the critical scattering rate which drives $`T_c`$ to zero. ## VI Conclusion We have identified a unique collective mode called the clapping mode in a $`p`$-wave superconductor with the order parameter given by Eq.(1). This collective mode couples to the sound wave and affects its dynamics. The effect of the clapping mode on the sound waves was calculated in the collisionless limit. However, unlike the case of <sup>3</sup>He, the detection of the collective mode appears to be rather difficult. One needs, at least, a high frequency experiment with $`\omega \sim O(1)`$ GHz. In the diffusive limit, we worked out the sound velocity change near $`T=T_c`$ and found that it decreases linearly in $`1-T/T_c`$, which is consistent with the experiment reported by Matsui et al. We also obtained the ultrasonic attenuation coefficient for the whole temperature range, which can be tested experimentally. On the other hand, the coupling of the collective mode is almost invisible in the diffusive limit. ###### Acknowledgements. We thank T. Ishiguro, K. Nagai, Y. Maeno, and M. Sigrist for helpful discussion and especially E. Puchkaryov for drawing Fig.2. The work of H.-Y. Kee was conducted under the auspices of the Department of Energy, supported (in part) by funds provided by the University of California for the conduct of discretionary research by Los Alamos National Laboratory. This work was also supported by Alfred P. Sloan Foundation Fellowship (Y.B.K.), NSF CAREER Award No. DMR-9983731 (Y.B.K.), and CREST (K.M.). ## A ### 1 Clapping mode and its coupling to the stress tensor The fluctuation of the order parameter corresponding to the clapping mode can be written as $`\delta \mathrm{\Delta }\rho _3\propto e^{\pm 2i\varphi }\sigma _1\rho _3`$.
The relevant correlation functions for the couplings are $$\langle \delta \mathrm{\Delta },\mathrm{cos}(2\varphi )\rangle (i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}[\delta \mathrm{\Delta }\rho _3G(𝐩,\omega _n)\mathrm{cos}(2\varphi )\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )],$$ (A1) $$\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle (i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}[\delta \mathrm{\Delta }\rho _3G(𝐩,\omega _n)\delta \mathrm{\Delta }\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )].$$ (A2) After summing over p, we get $$\langle \delta \mathrm{\Delta },\mathrm{cos}(2\varphi )\rangle =\pi TN(0)\sum _n\left\langle \left(\frac{i\omega _\nu \mathrm{\Delta }}{2\sqrt{\omega _n^2+\mathrm{\Delta }^2}\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}\right)\frac{\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}{\left(\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}\right)^2+\zeta ^2}\right\rangle ,$$ (A3) $$\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle =\pi TN(0)\sum _n\left\langle \left(1+\frac{\omega _n\omega _{n+\nu }}{\sqrt{\omega _n^2+\mathrm{\Delta }^2}\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}\right)\frac{\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}{\left(\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}\right)^2+\zeta ^2}\right\rangle ,$$ (A4) where $`\langle \mathrm{}\rangle `$ on the right hand side of the equations represents the angle average. Now summing over $`\omega _n`$ and analytic continuation $`i\omega _\nu \to \omega +i\delta `$ lead to $$\langle \delta \mathrm{\Delta },\mathrm{cos}(2\varphi )\rangle =N(0)\frac{\omega }{4\mathrm{\Delta }}F,$$ (A5) $$\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle =g^{-1}-N(0)\frac{\zeta ^2+2\mathrm{\Delta }^2-\omega ^2}{4\mathrm{\Delta }^2}F,$$ (A6) where $`F`$ is given by $$F(\omega ,\zeta )=4\mathrm{\Delta }^2(\zeta ^2-\omega ^2)_\mathrm{\Delta }^{\mathrm{}}𝑑E\frac{\mathrm{tanh}(E/2T)}{\sqrt{E^2-\mathrm{\Delta }^2}}\frac{(\zeta ^2-\omega ^2)^2-4E^2(\omega ^2+\zeta ^2)+4\zeta ^2\mathrm{\Delta }^2}{[(\zeta ^2-\omega ^2)^2+4E^2(\omega ^2-\zeta ^2)+4\zeta ^2\mathrm{\Delta }^2]^2-16\omega ^2E^2(\zeta ^2-\omega ^2)^2}.$$ (A7) In the limit of $`\omega \ll v_F|𝐪|`$, the contribution (in $`[h_l,h_l]`$; see Appendix B 3) due to the coupling with the clapping mode becomes $$\frac{\langle \delta \mathrm{\Delta },\mathrm{cos}(2\varphi )\rangle ^2}{g^{-1}-\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle }=N(0)\frac{\omega ^2F^2}{4(\zeta ^2+2\mathrm{\Delta }^2-\omega ^2)F}$$ (A8) $$\approx N(0)\frac{s^2f}{4[\frac{1}{2}\{1+(2\mathrm{\Delta }/v_Fq)^2\}-s^2]},$$ (A9) where $`f=\mathrm{lim}_{q\to 0}\mathrm{lim}_{\omega \to 0}F`$ is the superfluid density and given by Eq.(21). We can see that the frequency of the clapping mode is given by $`\sqrt{2}\mathrm{\Delta }`$ from $`\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle `$. ## B ### 1 Longitudinal sound wave in the collisionless limit The longitudinal sound velocity shift is given by the real part of $`[h_l,h_l]`$.
The irreducible correlation function for the stress tensor can be obtained from $$[\tau _{xx},\tau _{xx}]_{00}=T\sum _n\sum _𝐩\mathrm{Tr}\left[\frac{p_F^4}{m^2}(\mathrm{cos}\varphi )^4\rho _3G(𝐩,\omega _n)\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )\right],$$ (B1) where $`\varphi `$ is the angle between $`𝐩`$ and $`𝐪`$. Since the stress tensor couples to the density, the correlation function is renormalized as $$[h_l,h_l]_0=[h_l,h_l]_{00}+\frac{V(𝐪)[h_l,n][n,h_l]}{1-V(𝐪)[n,n]},$$ (B2) where $`V(𝐪)=2\pi e^2/|𝐪|`$ is the Coulomb interaction. This equation can be simplified in the long wave length limit ($`|𝐪|\to 0`$) as $$[h_l,h_l]_0\approx \left(\frac{q}{\omega }\right)^2\left[[\tau _{xx},\tau _{xx}]_{00}-\frac{[\tau _{xx},n][n,\tau _{xx}]}{[n,n]}\right].$$ (B3) It is useful to define the following quantity for notational convenience. $$\langle A,B\rangle =T\sum _n\sum _𝐩\mathrm{Tr}[A\rho _3G(𝐩,\omega _n)B\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )],$$ (B4) where $`A`$ and $`B`$ can be some functions of $`\varphi `$ or operators. Using this notation, Eq.(B3) can be rewritten as $$[h_l,h_l]_0\approx \left(\frac{q}{\omega }\right)^2\frac{p_F^4}{m^2}\left[\langle \mathrm{cos}^2\varphi ,\mathrm{cos}^2\varphi \rangle -\frac{\langle \mathrm{cos}^2\varphi ,1\rangle \langle 1,\mathrm{cos}^2\varphi \rangle }{\langle 1,1\rangle }\right]$$ (B5) $$=\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}\left[\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle -\frac{\langle \mathrm{cos}(2\varphi ),1\rangle \langle 1,\mathrm{cos}(2\varphi )\rangle }{\langle 1,1\rangle }\right].$$ (B6) Then, each correlation function can be computed from $$\langle 1,1\rangle (i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}[\rho _3G(𝐩,\omega _n)\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )],$$ (B7) $$\langle 1,\mathrm{cos}(2\varphi )\rangle (i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}[\mathrm{cos}(2\varphi )\rho _3G(𝐩,\omega _n)\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )],$$ (B8) $$\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle (i\omega _\nu ,𝐪)=T\sum _n\sum _𝐩\mathrm{Tr}[\mathrm{cos}^2(2\varphi )\rho _3\sigma _1G(𝐩,\omega _n)\rho _3G(𝐩-𝐪,i\omega _n-i\omega _\nu )].$$ (B9) Summing over p leads to $$\langle 1,1\rangle =\pi TN(0)\sum _n\left\langle \left(1-\frac{\omega _n\omega _{n+\nu }+\mathrm{\Delta }^2}{\sqrt{\omega _n^2+\mathrm{\Delta }^2}\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}\right)\frac{\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}}{\left(\sqrt{\omega _n^2+\mathrm{\Delta }^2}+\sqrt{\omega _{n+\nu }^2+\mathrm{\Delta }^2}\right)^2+\zeta ^2}\right\rangle ,$$ (B10) where $`\zeta =𝐯_F\cdot 𝐪`$ and $`N(0)=m/2\pi `$ is the two dimensional density of states. Similar equations can be obtained for $`\langle \mathrm{cos}(2\varphi ),1\rangle `$ and $`\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$ with additional angle factors, $`\mathrm{cos}(2\varphi )`$ and $`\mathrm{cos}^2(2\varphi )`$ respectively. After summing over $`\omega _n`$ and analytic continuation $`i\omega _\nu \to \omega +i\delta `$, we get the following results in the limit of $`\omega \ll v_F|𝐪|`$.
$`\langle 1,1\rangle `$ $`=`$ $`N(0)\left\langle \frac{\zeta ^2-\omega ^2f}{\zeta ^2-(\omega +i\delta )^2}\right\rangle \approx N(0)\left[1-i\frac{s(1-f)}{\sqrt{1-s^2}}\right],`$ (B11) $`\langle \mathrm{cos}(2\varphi ),1\rangle `$ $`=`$ $`N(0)\left\langle \mathrm{cos}(2\varphi )\frac{\zeta ^2-\omega ^2f}{\zeta ^2-(\omega +i\delta )^2}\right\rangle \approx N(0)\left[2s^2(1-f)\left(1+i\frac{1-2s^2}{2s\sqrt{1-s^2}}\right)\right],`$ (B12) $`\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$ $`=`$ $`N(0)\left\langle \mathrm{cos}^2(2\varphi )\frac{\zeta ^2-\omega ^2f}{\zeta ^2-(\omega +i\delta )^2}\right\rangle \approx N(0)\left[\frac{1}{2}-2s^2(1-f)-i\frac{(1-f)s}{2\sqrt{1-s^2}}\right].`$ (B13) We find that the second term in the last line of Eq.(B6) is of higher order in $`\omega /v_F|𝐪|`$ ($`=s`$) so that we can ignore it. In Sr₂RuO₄ or metals, $`s\ll 1`$. Thus the effect of the coupling to the density is merely to change the vertex associated with $`\tau _{xx}`$ from $`\frac{p_F^2}{m}\mathrm{cos}^2\varphi `$ to $`\frac{p_F^2}{2m}\mathrm{cos}(2\varphi )`$ as far as the lowest order contribution is concerned. Evaluation of $`\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle `$ leads to $$\mathrm{Re}[h_l,h_l]_0\approx \left(\frac{q}{\omega }\right)^2\mathrm{Re}\frac{p_F^4}{4m^2}\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle =\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}N(0)\left[\frac{1}{2}-2s^2(1-f)\right],$$ (B14) where $`f`$ is the superfluid density. ### 2 Contribution coming from the coupling to the clapping mode The correction due to the collective mode leads to the renormalized correlation function as follows: $$[h_l,h_l]=[h_l,h_l]_0+\frac{[h_l,\delta \mathrm{\Delta }\rho _3][\delta \mathrm{\Delta }\rho _3,h_l]}{g^{-1}-[\delta \mathrm{\Delta }\rho _3,\delta \mathrm{\Delta }\rho _3]},$$ (B15) where $`g`$ is the coupling constant between the stress tensor and the collective mode, and $`\delta \mathrm{\Delta }\rho _3`$ represents the fluctuation associated with the clapping mode. Using the fact that $`\langle 1,e^{2i\varphi }\rangle =0`$ and $`\delta \mathrm{\Delta }\propto e^{2i\varphi }\sigma _1`$, the above equation can be further reduced to $`[h_l,h_l]`$ $`=`$ $`\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}\left[\langle \mathrm{cos}(2\varphi ),\mathrm{cos}(2\varphi )\rangle +\frac{\langle \mathrm{cos}(2\varphi ),\delta \mathrm{\Delta }\rangle \langle \delta \mathrm{\Delta },\mathrm{cos}(2\varphi )\rangle }{g^{-1}-\langle \delta \mathrm{\Delta },\delta \mathrm{\Delta }\rangle }\right]`$ (B16) $`=`$ $`\left(\frac{q}{\omega }\right)^2\frac{p_F^4}{4m^2}N(0)\left[\frac{1}{2}-2\left(\frac{C_l}{v_F}\right)^2\left(1-f-\frac{f(v_F|𝐪|)^2}{4\{(v_F|𝐪|)^2+4\mathrm{\Delta }(T)^2-2\omega ^2\}}\right)\right]+i\frac{m_{\mathrm{ion}}C_l}{\omega }\alpha _l(\omega ),`$ (B17) where $`f`$ is the superfluid density and given by Eq.(21).
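The angular average entering Eq. (B11) can be checked numerically; the sketch below evaluates it on the 2D Fermi circle with $`\zeta =v_F|𝐪|\mathrm{cos}\varphi `$ and compares to the closed form quoted above. The values of $`s`$, $`f`$ and the regulator $`\delta `$ are illustrative, and the sign of the imaginary part is set by the $`i\delta `$ prescription:

```python
import numpy as np
from scipy.integrate import quad

# Check of <(cos^2 phi - s^2 f)/(cos^2 phi - (s + i*delta)^2)> against
# the closed form in Eq. (B11).  Illustrative parameter values only.
s, f, delta = 0.3, 0.6, 1e-3
a2 = (s + 1j * delta) ** 2

def avg(part):
    def g(phi):
        val = (np.cos(phi) ** 2 - s**2 * f) / (np.cos(phi) ** 2 - a2)
        return val.real if part == "re" else val.imag
    # Break the integration at the (regularized) poles cos^2 phi = s^2.
    p0 = np.arccos(s)
    pts = [p0, np.pi - p0, np.pi + p0, 2 * np.pi - p0]
    return quad(g, 0.0, 2 * np.pi, points=pts, limit=500)[0] / (2 * np.pi)

re, im = avg("re"), avg("im")
print(f"Re <...>   = {re:.4f}   (expected 1)")
print(f"|Im <...>| = {abs(im):.4f}   (expected "
      f"{s * (1 - f) / np.sqrt(1 - s**2):.4f})")
```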
# Mass and mass-to-light ratio of galaxy groups from weak lensing ## 1. Introduction Galaxy groups, like the Local Group, are the most common structures in the universe. Despite being numerous, groups are hard to identify because the contrast with the smooth background of galaxies is low, and their galaxy properties are those of the field. To date most systems have been found using large redshift surveys or X-ray observations. Measuring the mass locked up in these systems is important, but difficult (cf. Gott & Turner 1977). Nolthenius & White (1987) showed that the masses inferred from redshift surveys depend on the survey parameters, the group selection procedure, and the way galaxies cluster. Consequently, an independent measure of the group mass is needed. Here we study the groups by their weak lensing effect on the shapes of the images of the faint background galaxies. The weak lensing signal is maximal if the lenses are at intermediate redshifts, but even then, given the low masses of these systems, the expected signals are too low to yield significant detections for individual groups. Thus we have to study the ensemble averaged signal of a large number of groups at intermediate redshifts. The groups identified in the Canadian Network for Observational Cosmology Field Galaxy Redshift Survey (CNOC2) (e.g. Carlberg et al. 1998; Lin et al. 1999) are ideal targets for our study. The aim of the CNOC2 survey is to study the population of field galaxies at intermediate redshifts ($`z=0.15`$–$`0.55`$). To do so, four widely separated patches on the sky were selected, for which multi-colour data were obtained, as well as spectroscopic redshifts for $`5000`$ galaxies brighter than $`R_C=21.5`$. The survey allows the identification of a large number of groups at intermediate redshifts. ## 2. Data analysis We obtained deep $`R`$-band images of the central 31 by 23 arcminutes of two patches from the CNOC2 survey using the 4.2m William Herschel Telescope. To date, the combination of deep imaging and a large spectroscopic survey is unique, enabling us for the first time to study a large number of galaxy groups through weak lensing. A detailed discussion of the object analysis, including the corrections for the PSF, can be found in Hoekstra et al. (1999b). We end up with catalogues of $`30000`$ galaxies with $`22<R<26`$ in each field. These galaxies are used to measure the weak lensing signal, enabling us to study the average properties of an ensemble of 59 groups from the CNOC2 survey. ## 3. Weak lensing analysis Most of the groups are relatively poor, and many of these systems have been selected on the basis of only a few members. The first question that comes to mind is whether the selected structures are genuine. The detection of a weak lensing signal provides an important test to check the validity of the group selection. Ideally one would like to scale the signals of the various groups with an estimate of their mass, but the uncertainties in the observed velocity dispersions are too large. Therefore we assume that all groups have the same mass and mass profile, and scale the signals of the various groups to that corresponding to the ‘average’ group at $`z=0.4`$. Figure 1 shows the ensemble averaged tangential distortion as a function of radius around the 59 groups taken from the CNOC2 survey. The amplitude of the signal, which is significant at the 99.8% confidence level, corresponds to that of the ‘average’ group at a redshift of $`z=0.4`$.
Various tests, like increasing the phase of the distortion by $`\pi /2`$, placing the groups at random positions, or randomizing the ellipticities of the sources, yield no signal. Furthermore the results are robust against imperfect corrections for the PSF anisotropy. We therefore conclude that the detected signal is due to weak lensing by galaxy groups. The lensing signal is detected out to a large distance from the group centre. As the group members are believed to reside in a common group halo, it is evident that the presence of the galaxy groups will complicate attempts to constrain the sizes of halos of field galaxies. Fitting a singular isothermal sphere model $`(\kappa =r_E/2r)`$ to the observed distortion yields $`r_E=0.^{\prime \prime }99\pm 0.^{\prime \prime }30`$. To relate this measurement to an estimate of the average mass of the groups we use the photometric redshift distribution inferred from both Hubble Deep Fields (cf. Hoekstra et al. 1999a), converted to the $`R`$ band. As the groups are on average at relatively low redshifts, the dependence of the mass estimate on the redshift distribution is rather weak. Thus we find that the observed distortion corresponds to $`\langle \sigma ^2\rangle ^{1/2}=320_{-54}^{+46}`$ km/s. This result is in good agreement with the dynamical estimate of $`\langle \sigma ^2\rangle ^{1/2}=251\pm 21`$ km/s, based on the group velocity dispersions. ### 3.1. Mass-to-light ratio Under the assumption that the light traces the mass we computed the lensing signal corresponding to the ensemble averaged light distribution and fitted this to the observed lensing signal. We do not observe a trend of the mass-to-light ratio with radius, and we find an average value of $`(256\pm 84)h\mathrm{M}_{\odot }/\mathrm{L}_B`$ in the restframe $`B`$ band. After correction for luminosity evolution (e.g. Lin et al. 1999) we find a value of $`(372\pm 122)h\mathrm{M}_{\odot }/\mathrm{L}_B`$, which is somewhat lower than what is typically found for rich clusters of galaxies (e.g. Carlberg et al. 1997). Similar to what is done for rich clusters of galaxies (e.g. Carlberg et al. 1997; Carlberg et al. 1999) or galaxy groups (e.g. Gott & Turner 1977), we can use our measurement of the group mass-to-light ratio to obtain an estimate of the density of the universe, for which we find $`\mathrm{\Omega }_m=0.22\pm 0.09`$ taking $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ ($`\mathrm{\Omega }_m=0.14\pm 0.06`$ for $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$). A detailed analysis will be given in Hoekstra et al. (1999b). ## 4. Conclusions The detection of the weak lensing signal of a preliminary selection of galaxy groups first of all shows that the CNOC2 survey allows the identification of these systems at intermediate redshifts. This is supported even more by the agreement between the weak lensing and dynamical mass estimates. The ensemble averaged group velocity dispersion, based on the 59 selected groups, is found to be $`\langle \sigma ^2\rangle ^{1/2}=320_{-54}^{+46}`$ km/s, which is in fair agreement with the dynamical estimates. Under the assumption that mass traces light, we find an average mass-to-light ratio in the restframe $`B`$ band of $`(256\pm 84)h\mathrm{M}_{\odot }/\mathrm{L}_B`$. This yields an estimate for $`\mathrm{\Omega }_m`$ of $`0.22\pm 0.09`$ $`(\mathrm{\Omega }_\mathrm{\Lambda }=0)`$. A detailed discussion of the preliminary results presented here can be found in Hoekstra et al. (1999b). This work was done in collaboration with Ray Carlberg and Howard Yee.
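The conversion from the fitted Einstein radius to a velocity dispersion can be sketched with the standard singular-isothermal-sphere relation $`\theta _E=4\pi (\sigma /c)^2\langle D_{ls}/D_s\rangle `$; the mean lensing efficiency $`\beta `$ below is an invented placeholder, whereas the paper derives it from the Hubble Deep Field photometric redshifts:

```python
import numpy as np

# SIS velocity dispersion from the Einstein radius:
#   theta_E = 4*pi*(sigma/c)^2 * <D_ls/D_s>.
c_km_s = 2.998e5                        # speed of light [km/s]
theta_E = 0.99 * np.pi / (180 * 3600)   # r_E = 0".99, in radians
beta = 0.45                             # assumed <D_ls/D_s>, illustrative

sigma = c_km_s * np.sqrt(theta_E / (4 * np.pi * beta))
print(f"SIS velocity dispersion ~ {sigma:.0f} km/s")
```

With this placeholder efficiency one finds roughly 280 km/s, the same ballpark as the quoted $`320_{-54}^{+46}`$ km/s; the exact number depends on the adopted source redshift distribution.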
## References

Carlberg, R.G., Yee, H.K.C., & Ellingson, E. 1997, ApJ, 478, 462
Carlberg, R.G., et al. 1998, astro-ph/9805131
Carlberg, R.G., et al. 1999, ApJ, 516, 552
Gott III, J.R., & Turner, E.L. 1977, ApJ, 213, 309
Hoekstra, H., Franx, M., & Kuijken, K. 1999a, ApJ, in press, astro-ph/9910487
Hoekstra, H., et al. 1999b, to be submitted
Lin, H., Yee, H.K.C., Carlberg, R.G., Morris, S.L., Sawicki, M., Patton, D.R., Wirth, G., & Shepherd, C.W. 1999, ApJ, 518, 533
Nolthenius, R., & White, S.D.M. 1987, MNRAS, 235, 505
# CMB Polarization Data and Galactic Foregrounds: Estimation of Cosmological Parameters ## 1 Introduction One of the primary goals of cosmology is to accurately determine various cosmological parameters associated with the background FRW universe and the structure formation in the universe ($`\mathrm{\Omega }`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\mathrm{\Omega }_B`$, $`h_0`$, etc.). In recent years compelling theoretical arguments have emerged which suggest that the study of CMB anisotropies is the best hope to achieve this goal (Bond 1996, Knox 1995, Jungman et al. 1996, Bond et al. 1997, Zaldarriaga et al. 1997). On the experimental front, two forthcoming satellite experiments, MAP and Planck (for details see http://map.gsfc.nasa.gov and http://astro.estec.esa.nl/SA-general/Projects/Planck), along with a series of ground-based and balloon-borne experiments on degree to sub-arcminute scales, plan to unravel the angular power spectrum of the CMB to angular scales $`\lesssim 1^{\circ }`$ (for details of interferometric ground-based experiments see White et al. 1997 and references therein; for a recent update on balloon-borne experiments see Lee et al. 1999). It has been shown that an accurate determination of the CMB temperature fluctuations down to sub-degree scales could fix the values of nearly 10 cosmological parameters with unprecedented precision (Jungman et al. 1996). In addition, the future satellite missions might detect the small, hitherto elusive signal from CMB polarization fluctuations (Bouchet et al. 1999 – hereafter Paper I – and references therein). The polarization data can be used to break the degeneracy between a few parameters which are determined only in combination using the temperature data alone (Zaldarriaga et al. 1997). One of the major difficulties in extracting the power spectrum of temperature and polarization fluctuations of the CMB is the presence of galactic and extragalactic foregrounds. The extragalactic foregrounds (radio and infra-red point sources, clusters, etc.) will only affect small angular scales ($`\lesssim 10^{\prime }`$) at frequencies dominated by the CMB signal (Toffolatti et al. 1997, 1999). The galactic foregrounds, on the other hand, are present at all angular scales and are strongest on the largest scales. They will therefore have to be cleaned from the future data before any definitive statements about the primary CMB signal can be made. A multi-frequency Wiener filtering approach was developed to study the implications of the presence of foregrounds for the performance of future CMB missions (Bouchet et al. 1995, Tegmark & Efstathiou 1996). It was shown that the primary CMB temperature signal is much larger than the contaminating foreground for all the angular scales relevant for future satellite missions, and therefore the performance of future all-sky satellite missions in extracting the CMB temperature power spectra is unlikely to be hindered by galactic foregrounds (Bouchet et al. 1995, Tegmark & Efstathiou 1996, Gispert & Bouchet 1996, Bouchet & Gispert 1999a,b). In a previous paper (Paper I) we extended this technique to include polarization and temperature-polarization cross-correlation of foregrounds to estimate their effect on the extraction of CMB polarization power spectra. Our analysis showed that the presence of foregrounds should not seriously deter the detection of $`E`$-mode CMB polarization and $`ET`$ cross-correlation by Planck.
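As a concrete illustration of the multi-frequency Wiener filtering just described (the full formalism is in Paper I and Appendix A), here is a minimal single-mode sketch; the mixing matrix, spectral shapes and noise levels are invented for illustration, not Planck values:

```python
import numpy as np

# Toy single-mode Wiener filter: y = A x + b, with x = (CMB, dust)
# amplitudes at one (l, m) and y the observations in three channels.
rng = np.random.default_rng(1)

A = np.array([[1.0, 0.3],     # channel 1: CMB plus a little dust
              [1.0, 1.0],     # channel 2
              [1.0, 3.0]])    # channel 3: dust-dominated
S = np.diag([1.0, 0.5])       # assumed signal covariance <x x^T>
N = np.diag([0.2, 0.2, 0.2])  # noise covariance <b b^T>

# W = S A^T (A S A^T + N)^(-1) minimizes the mean-square
# reconstruction error for Gaussian signal and noise.
W = S @ A.T @ np.linalg.inv(A @ S @ A.T + N)

# Monte Carlo check that the filter beats a single channel.
x = rng.multivariate_normal([0, 0], S, size=20000)
b = rng.multivariate_normal([0, 0, 0], N, size=20000)
y = x @ A.T + b
x_hat = y @ W.T
print("rms CMB residual (Wiener) :", np.std(x_hat[:, 0] - x[:, 0]))
print("rms CMB residual (chan 1) :", np.std(y[:, 0] - x[:, 0]))
```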
However, while the detection of CMB polarization will be easiest at the Doppler peaks of the polarization fluctuations, $`\ell \sim 100`$, where it should help reduce the errors on the measurement of parameters that will already be well constrained by temperature data alone, the truly new information from polarization data in the determination of cosmological parameters is contained in angular scales corresponding to $`\ell \lesssim 30`$ (Zaldarriaga et al. 1997). The polarization data helps break the degeneracy between $`C_2`$, the quadrupole moment of the CMB temperature fluctuation, and $`\tau `$, the optical depth to the last scattering surface (Zaldarriaga et al. 1997). The former gives the overall normalization of the CMB fluctuations and is fixed at the epoch of inflation in the inflationary paradigm. The breaking of this degeneracy also results in a better determination of the other inflationary parameters: $`T/S`$, the ratio of tensor to scalar quadrupole, and the tensor index $`n_T`$. The optical depth to the last scattering surface is crucial to understanding the epoch of reionization in the universe. Even a value of $`\tau `$ as small as 0.05 leaves a telltale signature in the CMB polarization fluctuations which is potentially detectable (Zaldarriaga 1997). The main difficulty in using polarization data to break the $`C_2`$–$`\tau `$ degeneracy is that one needs to use information on large angular scales. The power spectra at small $`\ell `$ are not only badly determined because of cosmic variance but also because of the largely unknown level of polarized foregrounds. Prunet et al. (1998) attempted to model the polarized dust emission from the galaxy —which is the dominant foreground for Planck HFI— for scales between $`30^{\prime }`$ and a few degrees. They showed that though one might obtain meaningful estimates for degree scales, there can be large uncertainties in the large scale ($`\ell \lesssim 50`$) polarized dust emission in the galaxy. Polarized synchrotron is the other major galactic foreground—and it is likely to undermine the performance of MAP and Planck LFI. For lack of reliable data, we assumed the power spectra of polarized synchrotron to mimic those of the unpolarized component in Paper I. Though there remain large uncertainties in the polarization foregrounds, these assumed levels of foregrounds combined with the Wiener filtering methods developed in Paper I allow us to quantify the effect of foregrounds on the extraction of the CMB signal. In this paper, we use the methods developed in Paper I to ascertain the errors in the cosmological parameter estimation. In the next section, we briefly review the Fisher matrix approach used in determining the errors on the extraction of cosmological parameters. We take three underlying theoretical models for our study, the rationale for which is briefly described in the next section. The results are presented and discussed in $`\mathrm{\S }`$ 3. In $`\mathrm{\S }`$ 4 we give our conclusions, and discuss the various shortcomings of our approach.
It is defined as an average value of the second derivatives of the logarithm of the Likelihood function with respect to the cosmological parameters, at the true parameter values (for details see Kendall & Stuart 1969): $$F_{ij}=-\left\langle \frac{\partial ^2\mathrm{ln}\mathcal{L}}{\partial \theta _i\partial \theta _j}\right\rangle _{\mathrm{\Theta }=\mathrm{\Theta }_0}$$ (1) For CMB temperature and polarization data, the Fisher matrix can be expressed as (Tegmark et al. 1997): $$F_{ij}=\frac{1}{2}Tr\left[𝒞^{-1}\frac{\partial 𝒞}{\partial \theta _i}𝒞^{-1}\frac{\partial 𝒞}{\partial \theta _j}\right]$$ (2) where $`𝒞`$ stands for the covariance matrix of the data and $`\theta _i`$ correspond to cosmological parameters. The details of the derivation of the covariance matrix and its derivatives in the presence of foregrounds are given in Appendix A. The error in the estimation of parameters is given by: $$\mathrm{\Delta }\theta _i=\left[F^{-1}\right]_{ii}^{1/2}$$ (3) ### 2.1 Underlying models The estimated errors on parameters will depend on the choice of the underlying model. We consider three models for our study. Though these models do not exhaust all the possible models and their variants, our aim is to understand the errors in parameter estimation for the sCDM model and its popular variants, within the framework of generic inflationary models. We are interested in the standard parameters of flat FRW cosmology, $`h`$, $`\mathrm{\Omega }_B`$, $`\mathrm{\Omega }_\nu `$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the reionization parameter $`\tau `$, and the inflationary parameters, $`C_2`$, $`n_s`$, $`T/S`$, and $`n_T`$. It is of course possible to consider a more general class of inflationary models which leads to a further proliferation of inflationary parameters (Liddle 1998, Souradeep et al. 1998, Kinney et al. 1998, Lesgourgues et al. 1999). We also do not consider open/closed universes because, as shown in Zaldarriaga et al. (1997), in such universes the shift in the angular size of the horizon at the last scattering surface leaves a very significant signature in the CMB fluctuations which cannot be mimicked by a change in any other parameter, and therefore $`\mathrm{\Omega }_{\mathrm{total}}`$ is extremely well determined for open/closed universes. It is possible for the universe to be flat (or nearly flat) with contribution from both matter and cosmological constant, and one could attempt to measure both these parameters from CMB data. However, the degeneracy between these two parameters cannot be broken by CMB data alone and one will have to resort to other measurements like observations of supernovae at high-z to lift this degeneracy (Tegmark et al. 1998, Efstathiou & Bond 1999). In addition, it is possible to include parameters like $`n_\nu `$, the number of massless neutrinos, and $`Y_{He}`$, the helium fraction. However, these parameters can be better determined by particle physics or local observations (Jungman et al. 1996, Bond et al. 1997). Parameters like $`\mathrm{\Omega }_\nu `$, the contribution of massive neutrinos to the rest mass density in the universe, can be determined to a comparable accuracy by the data of future galaxy surveys like SDSS (Hu et al. 1998). In this paper, we take only CMB data for our study and do not include priors from other measurements like future Galaxy surveys or high-z Supernova results. The three models we consider are: * sCDM model with $`\tau =0.1`$. The rather large value of $`\tau `$ is taken to bring out the effects of polarization data. * Tilted CDM model with $`\tau =0.1`$, $`n_S=0.9`$, $`n_T=-0.1`$, and $`T/S=0.7`$.
Note that $`T/S=-7n_T`$, which is one of the predictions of slow-roll inflation (Starobinsky 1985, Liddle & Lyth 1992). * Model 2 with $`\mathrm{\Omega }_\nu =0.3`$ and two light, massive neutrinos. ## 3 Results We use the results of Paper I (the values of various terms in the covariance matrix as defined in Appendix A) for the specification of the future satellite missions. Our results are shown in Tables 1–3 for the three underlying models. It should be noted that we fix the value of $`C_2=C_2^S+C_2^T=796(\mu K)^2`$ for all the models. Only for the sCDM model does it correspond to COBE normalization. In the models with tensor contribution, the COBE normalized CMB signal is larger than the signal for our normalization by a factor of $`1.5`$. To assess the reliability of our code we computed the errors in the $`6`$-parameter sCDM model of Zaldarriaga et al. 1997 with their instrumental specifications and without foregrounds, and compared our results with both theirs and those obtained by Eisenstein et al. 1999. Our results are comparable to those of Eisenstein et al. 1999 for most parameters, with the exception of $`\tau `$, where the error we find is noticeably bigger. We think that this discrepancy is related to the way we compute the derivatives of the spectra with respect to the parameters (see Appendix A). The results for the sCDM model are shown in Table 1. For comparison, results for the best channel of each experiment without foregrounds are also shown. As is clearly seen, the performance of Wiener filtering matches the best channel case for all the experiments. In Paper I we showed that the Wiener filtering performance in extracting the temperature power spectra lies between the expected performances of the best channel and the combined sensitivity of all channels for each experiment, at least for the specific foreground models considered. As the temperature data alone gives a fair idea of the errors on most of the parameters, our results could be anticipated from the conclusions of Paper I. However, the errors on $`C_2`$ and $`\tau `$ are mostly determined by the polarization data. In Paper I we showed that the extraction of polarization power spectra is degraded as compared to the cases with no foreground. Our results in this paper suggest that it should not be too much of a deterrent in determining cosmological parameters. It is also important to note that the present results for the best channel case are comparable to the Wiener filtering case. This means that i) the other channels can effectively be used to clean the best channel; ii) the presence of foregrounds does not introduce additional degeneracies which are absent when the data is assumed to contain only CMB and pixel noise. In Table 2, the expected errors are shown for a model which includes a contribution from tensor modes. One of the aims of studying this model is to establish how well the inflationary parameters can be determined. In comparison with the sCDM case, the errors on all the standard parameters are bigger. This is because the additional parameter $`T/S`$ allows one to fix the normalization more freely, thereby introducing additional degeneracies (Zaldarriaga et al. 1997). The errors on parameters like $`C_2`$, $`h`$ and $`\mathrm{\Omega }_B`$ are higher than for similar models considered by Zaldarriaga et al. (1997). This is partly due to our choice of normalization, which gives a smaller signal. However, it also reflects the degradation of the polarization power spectra extraction in the presence of foregrounds.
Other parameters like $`T/S`$, $`\tau `$, and $`n_T`$ are better determined than in the results of Zaldarriaga et al. (1997), but this is mostly owing to the fact that we take larger input values for $`\tau `$ and $`T/S`$. We also show the effect of including the very small signal from $`B`$-mode polarization. As is seen, it results in a better determination of most parameters, especially the inflationary parameters. Though the $`B`$-mode signal is much smaller than the $`E`$-mode signal, and is generally below the pixel noise except for a small range of modes at $`\ell \lesssim 100`$, its very presence indicates tensor modes in the inflationary paradigm. Also, the degradation of the extraction of this signal in the presence of foregrounds is smaller than for the $`E`$-mode signal (Paper I). Therefore, it can make a difference in the estimation of parameters. Our results show that the consistency condition of slow-roll inflation, $`T/S=-7n_T`$, can be checked by future missions (it should be noted here that this relation was imposed only in the fiducial model, but excursions of both parameters were considered independently). Planck can extract both these parameters with $`1\sigma `$ errors of $`\sim 50\%`$. However, it should be kept in mind that our results are more optimistic than the results of Zaldarriaga et al. (1997) because of our choice of input model. The results of adding another parameter $`\mathrm{\Omega }_\nu `$ to the model above are shown in Table 3. $`\mathrm{\Omega }_\nu `$ can be determined to an accuracy of $`\sim 10\%`$ with Planck, though it will be very difficult for MAP to determine it. Note that errors on other parameters have not changed much by the addition of this parameter, which suggests that no new degeneracies have cropped up. However, degeneracies between various parameters depend very sensitively on the choice of input model. For instance, if the new parameter $`\mathrm{\Omega }_\nu `$ were added with the input value $`\mathrm{\Omega }_\nu =0`$, it would have substantially worsened the estimation of almost all parameters, especially the inflationary parameters. For all three models considered here, we took $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. A finite value of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ results in a better estimation of all the parameters of the FRW model as well as $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (Zaldarriaga et al. 1997). ## 4 Conclusions and Summary In this paper we estimated the effect of foregrounds on the determination of cosmological parameters. The most important result is that although the presence of foregrounds somewhat worsens the parameter estimation by degrading the detection of the polarization signal, it does not give rise to any severe degeneracies not already present in the CMB data (CMB signal and pixel noise) without foregrounds. This needs to be further confirmed with a detailed study of the Likelihood function in the multiple parameter space. Any analysis such as ours can only give a qualitative idea of the accuracy of parameter estimation. This is largely because of its strong dependence on the input model (Zaldarriaga et al. 1997). In addition, there are great uncertainties in the assumed level of foregrounds which we take in the Wiener filtering analysis of Paper I. Moreover, the foreground characteristics (power spectra, frequency dependence) should be determined from the data as well, and this adds uncertainty to the determination of the cosmological parameters (see Knox 1999).
It should be noted at this point that, while Wiener filtering assumes frequency dependences as well as power spectra for the CMB and the foregrounds, it also gives a measure of the error on the estimation of the foreground power spectra from the filtered data (see Paper I). (It should also be noted that any spatial change of the frequency indices (for dust or synchrotron) should correspond to special astrophysical regions (molecular clouds, supernova remnants, …), and that an analysis where any such spatial change of index is simply incorporated as an additional “noise” term would lead to pessimistic results. This rather points out that a global analysis (in “Fourier” modes) is insufficient to take this effect properly into account.) Still, our results suggest that the primary obstacles for high precision CMB measurements will rather stem from systematic errors and inaccuracies in calibration, baseline drifts, determinations of far side lobes or estimates of filter transmissions…all of which are of course not included in this analysis. Furthermore, a Fisher matrix analysis leads to the smallest possible error bars, which can only be degraded by inaccuracies in devising the Wiener filters (by using approximate power spectra and spectral behaviours). The next step will be to directly analyse simulated mega-pixel multi-frequency CMB maps relevant to future experiments. However, such an analysis is an extremely difficult (if not intractable) numerical problem (for recent attempts see Muciaccia et al. 1997, Oh et al. 1999, Borrill 1999). In light of this, our results should be regarded as a first attempt at the problem of parameter estimation in the presence of foregrounds, which gives a qualitative idea of the expected accuracy in parameter estimation until the analyses of multi-frequency CMB datasets become possible. Since the submission of this work, more detailed analyses of the effect of foregrounds have been carried out by Tegmark et al. 1999. Their results are similar to ours, with perhaps slightly higher foreground residuals, as they allowed some scatter in the frequency dependence of foregrounds. ## 5 Appendix A In this section we briefly recapitulate the discussion of Paper I, and derive the covariance matrix of CMB data and its derivatives. The observed CMB data at multiple frequencies can be expressed in multipole space as $$y_\nu ^i(l,m)=A_{\nu p}^{ij}(l,m)x_p^j(l,m)+b_\nu ^i(l,m)$$ (4) where $`x_p^j`$ is the underlying signal for process $`p`$ and “field” $`j`$ (i.e. temperature or (E,B) polarization modes), and $`\nu `$ is a frequency channel index. In the Wiener filtering method, one considers a linear relation between the true, underlying signal, $`x_p^j`$, and the linearly optimal estimator of the signal, $`\widehat{x}_p^j`$: $$\widehat{x}_p^i=W_{p\nu }^{ij}y_\nu ^j.$$ (5) Eqs. (4) and (5) can be used to write the estimated power spectrum as: $`\langle \widehat{x}_p^i\widehat{x}_{p^{}}^j\rangle `$ $`=`$ $`(𝐖A)_{pp^{\prime \prime }}^{im}(𝐖A)_{p^{}p^{\prime \prime \prime }}^{jq}\langle x_{p^{\prime \prime }}^mx_{p^{\prime \prime \prime }}^q\rangle +W_{p\nu }^{il}W_{p^{}\nu ^{}}^{jn}\langle b_\nu ^lb_{\nu ^{}}^n\rangle `$ $`\equiv `$ $`Q_{pp^{}}^{ij}\langle x_p^ix_{p^{}}^j\rangle `$ (6) where the last equality comes from the equation defining the Wiener filter (see Eq. $`6`$ of Paper I).
The covariance of the filtered data can then be written as: $$𝒞_{\ell }=\left(\begin{array}{ccc}Q_{\ell }^{11}C_{T\ell }& Q_{\ell }^{12}C_{TE\ell }& 0\\ Q_{\ell }^{12}C_{TE\ell }& Q_{\ell }^{22}C_{E\ell }& 0\\ 0& 0& Q_{\ell }^{33}C_{B\ell }\end{array}\right)$$ (7) For computing the Fisher matrix we also need to compute the derivative of the covariance with respect to cosmological parameters: $$\frac{\partial 𝒞_{\ell }}{\partial \theta _i}=\underset{X=T,E,ET,B}{\sum }\frac{\partial 𝒞_{\ell }}{\partial C_{\ell }(X)}\frac{\partial C_{\ell }(X)}{\partial \theta _i}$$ (8) The derivative of the covariance matrix with respect to the various power spectra can be written using Eq. (6). These derivatives, it should be borne in mind, are with respect to the input power spectra used in estimating the Fisher matrix and not the power spectra used in constructing the Wiener filters, which, therefore, are invariant under this change. These derivatives can be readily calculated: $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^T\rangle }{\partial C_p^T}`$ $`=`$ $`(W_{p\nu }^{11}A_{\nu p}^{11})^2`$ (9) $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^T\rangle }{\partial C_p^{TE}}`$ $`=`$ $`2\times (W_{p\nu }^{11}A_{\nu p}^{11}W_{p\nu }^{12}A_{\nu p}^{22})`$ (10) $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^T\rangle }{\partial C_p^E}`$ $`=`$ $`(W_{p\nu }^{12}A_{\nu p}^{22})^2`$ (11) $`\frac{\partial \langle \widehat{x}_p^E\widehat{x}_p^E\rangle }{\partial C_p^E}`$ $`=`$ $`(W_{p\nu }^{22}A_{\nu p}^{22})^2`$ (12) $`\frac{\partial \langle \widehat{x}_p^E\widehat{x}_p^E\rangle }{\partial C_p^{TE}}`$ $`=`$ $`2\times (W_{p\nu }^{22}A_{\nu p}^{22}W_{p\nu }^{21}A_{\nu p}^{11})`$ (13) $`\frac{\partial \langle \widehat{x}_p^E\widehat{x}_p^E\rangle }{\partial C_p^T}`$ $`=`$ $`(W_{p\nu }^{21}A_{\nu p}^{11})^2`$ (14) $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^E\rangle }{\partial C_p^T}`$ $`=`$ $`(W_{p\nu }^{11}A_{\nu p}^{11}W_{p\nu }^{21}A_{\nu p}^{11})`$ (15) $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^E\rangle }{\partial C_p^{TE}}`$ $`=`$ $`(W_{p\nu }^{11}A_{\nu p}^{11}W_{p\nu }^{22}A_{\nu p}^{22})+(W_{p\nu }^{12}A_{\nu p}^{22}W_{p\nu }^{21}A_{\nu p}^{11})`$ (16) $`\frac{\partial \langle \widehat{x}_p^T\widehat{x}_p^E\rangle }{\partial C_p^E}`$ $`=`$ $`(W_{p\nu }^{22}A_{\nu p}^{22}W_{p\nu }^{12}A_{\nu p}^{22})`$ (17) Theoretical power spectra are calculated using the CMB Boltzmann code CMBFAST (Seljak & Zaldarriaga 1996). Derivatives with respect to cosmological parameters are calculated numerically using a variant of the dfridr routine of Numerical Recipes (Press et al. 1992). We notice that a $`5\%`$ step in most parameters gives stable results. The only exception is the derivative of the $`E`$-mode power spectra with respect to $`\tau `$ when $`\tau \lesssim 0.05`$ for $`\ell \lesssim 20`$. This numerical instability is expected, as a small change in this parameter when the input value of $`\tau `$ is very small can cause an appreciable change in the $`E`$-mode power spectra at small $`\ell `$. However, the numerical differentiation is quite stable for $`\tau \gtrsim 0.05`$.
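As a toy illustration of how Eqs. (2), (3) and (8) fit together in practice, the sketch below builds a two-parameter Fisher matrix for a single Gaussian (temperature-only) spectrum, differentiating a stand-in for the CMBFAST output with a central difference at the ~5% step mentioned above; every number in it is illustrative:

```python
import numpy as np

ls = np.arange(2, 1000)

def cl_model(A, n):
    """Stand-in for a CMBFAST temperature spectrum C_l(theta)."""
    return A * (ls / 100.0) ** n

def deriv(fun, x, frac=0.05):
    """Simple central difference with a ~5% step (a much-simplified
    version of the dfridr-style differentiation used in the text)."""
    h = frac * abs(x) if x != 0 else frac
    return (fun(x + h) - fun(x - h)) / (2 * h)

A0, n0, Nl = 1.0, -0.5, 1e-3          # fiducial model plus white noise
Ctot = cl_model(A0, n0) + Nl

dC = [deriv(lambda A: cl_model(A, n0), A0),   # dC_l/dA
      deriv(lambda n: cl_model(A0, n), n0)]   # dC_l/dn

# For one Gaussian field, Eq. (2) reduces per multipole to
# F_ij = sum_l (2l+1)/2 * dC_i * dC_j / Ctot^2.
F = np.array([[np.sum((2 * ls + 1) / 2 * dC[i] * dC[j] / Ctot**2)
               for j in range(2)] for i in range(2)])
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # Eq. (3)
print("1-sigma errors on (A, n):", errors)
```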
# The Parkes Multibeam Pulsar Survey ## 1. Introduction Pulsars are steep-spectrum radio sources, with a “typical” spectral index of $`-1.6`$ (Lorimer et al. 1995). For this reason, most large-area pulsar surveys have been done at relatively low frequencies, $`\nu \lesssim 400`$ MHz. However, at low frequencies and low Galactic latitudes the contribution from synchrotron radiation (with spectral index $`-2.7`$) dominates the system temperature of a radio telescope, greatly reducing the sensitivity to most pulsars. To search for pulsars along the disk of the Galaxy, one should therefore consider using a relatively high frequency, $`\nu \sim 1400`$ MHz ($`\lambda \sim 20`$ cm). To search for distant pulsars along the Galactic plane, one is virtually compelled to use high frequencies, because multi-path propagation of radio pulses through the inhomogeneous interstellar medium results in broadening (“scattering”) of intrinsically sharp pulses (see Fig. 4b). This effect, which greatly reduces the detectability of pulsars, varies with frequency approximately as $`\nu ^{-4.4}`$ (Cordes, Weisberg, & Boriakoff 1985). The obvious drawback of a survey at 1400 MHz is that, in addition to the long individual integration times required to maintain high sensitivity due to the reduced pulsar fluxes, the number of independent telescope pointings needed increases as $`\nu ^2`$. Despite this hindrance there are very good reasons for wanting to search the Galactic plane: young pulsars will naturally be found close to their places of birth, viz. the Galactic disk. While relatively rare, they are interesting for a variety of reasons, including the study of pulsar–supernova remnant interactions, the preferential display of rotational “glitches”, and an increased likelihood of emission at X- and $`\gamma `$-ray energies, which are of interest for studies of the internal dynamics, and cooling and emission mechanisms, of magnetized neutron stars, respectively. Also, to obtain an unbiased picture of the intrinsic Galactic distribution of pulsars, rather than just of the local population, one must penetrate deep into the Galaxy, i.e., use high frequencies. Only two large-area pulsar surveys had been carried out at 20 cm prior to the one described here. The surveys of Clifton et al. (1992) and Johnston et al. (1992) in the 1980s (see Table 1) were very successful at finding many pulsars, preferentially young and relatively distant. In early 1997 a 13-element receiver package with very good system noise characteristics was installed on the Parkes telescope. Developed for surveying Hi in the local universe (Staveley-Smith et al. 1996), it is also ideally suited for pulsar searching. ## 2. Multibeam Survey We began collecting data for the survey in August 1997. Receivers for each of 13 beams are sensitive to two orthogonal linear polarizations. Signals from each polarization of each beam are detected in 96 filters, each 3 MHz wide, upon which they are added in polarization pairs, high-pass filtered with a cutoff of 0.2 Hz, integrated for 0.25 ms, and 1-bit sampled before being written to magnetic tape, with relevant telescope information, for off-line processing. The sensitivity of the survey to long-period pulsars, about 0.15 mJy, is a factor of seven better than the previous Parkes 20 cm survey (Johnston et al. 1992), and we are even more sensitive to short-period pulsars, owing to the shorter sampling interval and narrower filters used (see Table 1). Figure 1a shows the calculated sensitivity of the survey.
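The quoted long-period sensitivity can be reproduced to order of magnitude from the standard radiometer equation for pulsar searches; the system parameters below are typical of a 20 cm multibeam-style receiver but should be read as rough assumptions, not the survey's exact values:

```python
import numpy as np

# Minimum detectable flux density from the pulsar radiometer equation:
#   S_min = beta * snr * T_sys / (G * sqrt(n_pol * BW * t_int))
#           * sqrt(duty / (1 - duty))
# T_sys, G, t_int and the duty cycle are assumed, illustrative values.
beta  = 1.5           # losses from 1-bit digitization etc.
snr   = 8.0           # assumed detection threshold
T_sys = 24.0          # system temperature [K], assumed
G     = 0.64          # gain [K/Jy], assumed
n_pol = 2
BW    = 288e6         # 96 channels x 3 MHz [Hz]
t_int = 35.0 * 60     # assumed integration time per pointing [s]
duty  = 0.05          # assumed pulse duty cycle W/P

S_min = (beta * snr * T_sys / (G * np.sqrt(n_pol * BW * t_int))
         * np.sqrt(duty / (1 - duty)))
print(f"S_min ~ {1e3 * S_min:.2f} mJy")   # of order 0.1 mJy
```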
Note that the figure does not take into account scattering: according to it a pulsar like the Crab, with period 33 ms, but located across the Galaxy, with $`\text{DM}=1000`$ cm⁻³ pc, will be detectable with a minimum flux density of about 0.6 mJy. As Figure 4b shows, such a pulsar will most likely not be detectable as a pulsed source almost regardless of strength. The 13 beam patterns (each subtending a $`14^{\prime }`$ diameter) are not adjacent on the sky; rather, one central beam (with the best sensitivity) is surrounded by a ring of six beams separated by two beam-widths, surrounded in turn by a second ring separated by a further two beam-widths. We collect data by interleaving pointings on a hexagonal grid, resulting in complete sky coverage with adjacent beams overlapping at half-power points. Each pointing covers an area of about 0.6 deg², resulting in sky coverage at a rate of 1 deg²/hr, and the total survey area requires about 2700 individual pointings. As of September 1999 we have collected and analyzed about 1200 independent telescope pointings, some 700 deg², or 45% of the total. Data reduction takes place in workstations in a manner similar to previous surveys (e.g., Manchester et al. 1996). Figure 1b shows the search-code output for the discovery of a binary pulsar. To date we have discovered 440 new pulsars, and have re-detected 190 known pulsars. Because of the long integrations, some binary pulsars (in particular millisecond pulsars) have signal-to-noise ratios reduced, owing to Doppler-induced varying spin periods. We therefore complement our standard search analysis with “acceleration search” reduction, recently begun. ## 3. Discussion While we have discovered 10 new pulsars for every 1% of the survey area searched so far, it should not be concluded that we will amass a total of 1000 new pulsars. We began by surveying the regions closest to the Galactic plane, which are richest in pulsars. In Figure 2a we see that we have already searched essentially the entire $`|b|<1^{\circ }`$ region, with a new-pulsar density, averaged over the entire longitude range, of slightly over 1/deg². Clearly the density of pulsars drops dramatically for $`|b|>1^{\circ }`$, which comprises much of the region yet to be searched. Conversely in Figure 2b we see that, as a function of longitude, the region we have preferentially searched so far ($`260^{\circ }<l<320^{\circ }`$) has the lowest pulsar density. Accounting for these selection effects in some detail, we predict that the number of new discoveries for the entire survey should be somewhat over 600. Another estimate is derived from the number of pulsars previously known in the overall search area, 255, and the number of (re-)detections: these would suggest an eventual total of $`(440/190)\times 255\approx 600`$ new objects. In fact we expect a total of about 700 pulsars to be discovered, after accounting for some further selection effects (e.g., so far we have not been complete in confirming new pulsar candidates down to the sensitivity level implicit in the calculations underlying Figure 1a). We have therefore already detected about 2/3 of all pulsars we will discover. Manchester et al. (this volume) describe in some detail what is already known about many of the new pulsars. Figure 3a shows the distribution of DM for the newly discovered pulsars, and for comparison also shows that for previously known pulsars.
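The scale of the propagation effects that shape this DM distribution, and the millisecond-pulsar discussion that follows, can be illustrated with the usual dispersion-smearing estimate across one 3 MHz filterbank channel plus the $`\nu ^{-4.4}`$ scattering scaling; the centre frequency and example numbers below are assumptions for illustration:

```python
# Dispersion smearing across one filterbank channel, and the scaling
# of scatter broadening with frequency.  Illustrative numbers only.
def dm_smear_ms(dm, chan_mhz=3.0, freq_mhz=1374.0):
    # t_DM [ms] = 8.3e6 * DM [cm^-3 pc] * dBW [MHz] / freq[MHz]^3
    return 8.3e6 * dm * chan_mhz / freq_mhz**3

for dm in (100, 400, 1000):
    print(f"DM = {dm:4d}: channel smearing ~ {dm_smear_ms(dm):5.1f} ms")

# A pulse scatter-broadened by, say, 1 ms at 1400 MHz would have been
# broadened by roughly (1400/400)**4.4 ~ 250 ms at 400 MHz, which is
# the basic reason for surveying the plane at 20 cm.
print("scattering factor, 1400 -> 400 MHz:", round((1400 / 400) ** 4.4))
```

At DM = 1000 cm⁻³ pc the per-channel smearing alone is of order 10 ms, which already makes fast millisecond pulsars hard to see even before scattering is considered.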
Qualitatively the distribution is as expected: we find pulsars predominantly with large DM, since our survey is by far the most sensitive ever to distant pulsars along the Galactic plane; on the other hand we find very few “nearby” ($`\text{DM}<100`$ cm⁻³ pc) pulsars, in part because selection effects (e.g., scattering) preventing the detection of such pulsars in past surveys were far less important than for high-DM pulsars. The median DM for the new pulsars is about 400 cm⁻³ pc, and we also see that there is a marked decrease in the number of objects with $`\text{DM}>900`$ cm⁻³ pc, with a maximum of about 1300 cm⁻³ pc. Naturally it is more difficult to find such highly dispersed pulsars, both because of the attending level of scattering, and reduced flux density due to large distance. But as we can see from Figure 4a, we do not in any case expect large numbers of pulsars in the Galaxy to have $`\text{DM}>1400`$ cm⁻³ pc. A comparison of the period distributions for newly discovered and previously known pulsars (Fig. 3b) yields some surprises. The most obvious discrepancy is that we have not discovered a significant number of short-period pulsars (only one with $`P<30`$ ms). The Parkes 70 cm survey (Manchester et al. 1996), with a long-period sensitivity of $`3`$ mJy and poorer time- and frequency-resolution than the multibeam survey, yielded 17 millisecond pulsars. Assuming a spectral index of $`-2`$, 3 mJy at 70 cm are equivalent to 0.3 mJy at 20 cm — i.e., we should be more sensitive to millisecond pulsars than that survey. One cannot reasonably appeal to hypothetically steeper spectral indices: Edwards et al. (this volume) report on the very successful use of the multibeam data-acquisition system for finding millisecond pulsars in a survey at intermediate Galactic latitudes and with short integration times. It is conceivable that strong scattering prevents us from detecting many millisecond pulsars with $`\text{DM}>70`$ cm⁻³ pc; in any case we are still investigating this dearth of millisecond pulsar discoveries. A second surprise concerns the discovery of relatively many pulsars with long periods, $`P>3`$ s. This may be due to surveys with shorter integration times not collecting data for many such pulse periods, coupled with the prevalence of “nulling” pulsars among those with long periods; and to the effectiveness with which much local radio-frequency interference gets “dispersed away” in our search for high-DM pulsars. In Figure 4a we plot the positions of the pulsars detected in the multibeam survey projected onto the Galactic plane, according to the distance model of Taylor & Cordes (1993). Two things are immediately apparent: the previously known re-detected pulsars are located relatively nearby ($`d<4`$ kpc), while many of the newly discovered pulsars are very distant, with many beyond the inner-most spiral arm; and many of the newly discovered pulsars seem to be located along spiral arms, particularly the two inner-most ones. Regarding this last point, we do expect pulsars to be correlated with spiral arm locations, but we should be careful to consider potential biases in the distance model that will place pulsars preferentially in regions of high electron density. The multibeam survey is being remarkably successful at uncovering very distant pulsars. The DM-distribution of these pulsars, $`\text{DM}(l,b)`$, together with measured scattering parameters (cf. Fig.
4b), and eventually Faraday-rotation parameters, probing the interstellar magnetic field, should give us a more unbiased picture of the Galactic distribution of pulsars, and add considerably to our knowledge of the interstellar medium.

## References

Clifton, T. R., Lyne, A. G., Jones, A. W., McKenna, J., & Ashworth, M. 1992, MNRAS, 254, 177
Cordes, J. M., Weisberg, J. M., & Boriakoff, V. 1985, ApJ, 288, 221
Johnston, S., Lyne, A. G., Manchester, R. N., Kniffen, D. A., D’Amico, N., Lim, J., & Ashworth, M. 1992, MNRAS, 255, 401
Lorimer, D. R., Yates, J. A., Lyne, A. G., & Gould, D. M. 1995, MNRAS, 273, 411
Manchester, R. N. et al. 1996, MNRAS, 279, 1235
Staveley-Smith, L. et al. 1996, Proc. Astr. Soc. Aust., 13, 243
Taylor, J. H. & Cordes, J. M. 1993, ApJ, 411, 674
# Derivation of the Semi-circle Law from the Law of Corresponding States McGill-99/36 ## Abstract We show that, for the transition between any two quantum Hall states, the semi-circle law and the existence of a duality symmetry follow solely from the consistency of the law of corresponding states with the two-dimensional scaling flow. This puts these two effects on a sound theoretical footing, implying that both should hold exactly at zero temperature, independently of the details of the microscopic electron dynamics. This derivation also shows how the experimental evidence favours taking the two-dimensional flow seriously for the whole transition, and not just near the critical points. PACS nos: 73.40.Hm, 05.30.Fk, 02.20.-a On leave from: Dept. of Mathematical Physics, National University of Ireland, Maynooth, Republic of Ireland. Research funds for this work have been provided by grants from NSERC (Canada), FCAR (Québec), CNRS (France) and by Enterprise Ireland (Basic research grant no. SC/1998/739). Quantum Hall systems are remarkable for the high accuracy with which many of their properties are known. Although an explanation based on general principles (gauge invariance) has been given by Laughlin for the precise quantization of the Hall conductivity on the integer Hall plateaux, a similar understanding of the robustness of the other Hall features does not yet exist. Our goal in this paper is to derive some of these as general consequences of the symmetries of the low-energy limit of these systems, independently of the microscopic details. The current understanding of the transport properties of quantum Hall systems is based on a very successful effective field theory consisting of composite fermions. The symmetry which we shall use in what follows was argued to be a property of the effective theory, under certain circumstances, in a seminal paper by Kivelson, Lee and Zhang (KLZ). These authors argue that the effective theory satisfies a law of corresponding states, which consists of the following correspondences between conductivities, in the long wavelength limit: * Landau Level Addition Transformation (L) $$\sigma _{xy}(\nu +1)\to \sigma _{xy}(\nu )+1,\qquad \sigma _{xx}(\nu +1)\to \sigma _{xx}(\nu );$$ * Flux Attachment Transformation (F) $$\rho _{xy}\left(\frac{\nu }{2\nu +1}\right)\to \rho _{xy}(\nu )+2,\qquad \rho _{xx}\left(\frac{\nu }{2\nu +1}\right)\to \rho _{xx}(\nu );$$ * Particle-Hole Transformation (P) $$\sigma _{xy}(1-\nu )\to 1-\sigma _{xy}(\nu ),\qquad \sigma _{xx}(1-\nu )\to \sigma _{xx}(\nu );$$ where $`\nu `$ denotes the filling factor and we use units in which $`e^2/h=1`$. The arrows become equalities when the correspondence becomes a symmetry, and the conditions for this to be the case are discussed in — in particular this is expected to hold at zero temperature. Taking repeated powers of $`𝐋`$, $`𝐅`$ and their inverses generates an infinite order discrete group, which we shall call the KLZ group and denote by $`𝒦`$, which is well known to mathematicians (see e.g. where it is denoted by $`\mathrm{\Gamma }_U(2)`$). The complete group, including also the transformation $`𝐏`$, which is an outer automorphism of $`𝒦`$, was first proposed as relevant to the quantum Hall effect by Lütken and Ross. The KLZ symmetry is most succinctly expressed by writing the conductivity tensor as a single complex variable, $`\sigma =\sigma _{xy}+i\sigma _{xx}`$ (with the resistivities therefore given by $`\rho =1/\sigma =\rho _{xy}+i\rho _{xx}`$). A general element of $`𝒦`$ can then be represented as $`\gamma (\sigma )=\frac{a\sigma +b}{c\sigma +d}`$ with integer coefficients, such that $`c`$ is even and $`ad-bc=1`$. Note that $`𝒦`$ maps the upper half-complex conductivity plane into itself.
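The three correspondences are easy to experiment with as maps on the complex conductivity; a minimal sketch (with $`\rho `$ taken as the complex inverse $`1/\sigma `$, as in the text) checks that each map preserves the physical upper half plane, and that flux attachment is simply $`\rho \to \rho +2`$:

```python
import numpy as np

# L and F are holomorphic Moebius maps on sigma = sigma_xy + i*sigma_xx;
# P is anti-holomorphic (sigma_xy -> 1 - sigma_xy, sigma_xx unchanged).
L = lambda s: s + 1
F = lambda s: s / (2 * s + 1)
P = lambda s: 1 - np.conj(s)

rng = np.random.default_rng(0)
sig = rng.uniform(-3, 3, 50) + 1j * rng.uniform(0.01, 3, 50)

for name, g in [("L", L), ("F", F), ("P", P)]:
    assert np.all(g(sig).imag > 0), name
print("upper half plane preserved by L, F and P")

# Flux attachment written in resistivities: 1/F(sigma) = 1/sigma + 2.
print("max |1/F(sigma) - (1/sigma + 2)| =",
      np.abs(1 / F(sig) - (1 / sig + 2)).max())
```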
Since the ohmic conductivity, $`\sigma _{xx}`$, must be positive, the physical region consists only of the upper-half $`\sigma `$ plane. The whole upper-half conductivity plane can be obtained from the vertical strip above the semi-circle of unit diameter spanning $`0`$ and $`1`$ by repeated action of the KLZ group. This strip is termed the fundamental domain in the mathematical literature. The law of corresponding states can now be seen to imply that any quantum Hall state can be obtained from any other state by the action of some element of $`𝒦`$. Thus, for example, starting from $`\sigma =1`$ we obtain the integer series $`\sigma =n`$ from $`𝐋^{n-1}`$, the Laughlin series $`\sigma =\frac{1}{2m+1}`$ from $`𝐅^m`$ and the Jain series $`\sigma =\frac{p}{2pm+1}`$ from $`𝐅^m𝐋^{p-1}`$. It has already been pointed out that the KLZ group gives a selection rule for quantum Hall transitions — a transition between two Hall plateaux with $`\sigma _{xy}=p_1/q_1`$ and $`\sigma _{xy}=p_2/q_2`$ is allowed only if $`|p_1q_2-p_2q_1|=1`$. However it implies much more if we examine the consequences for the $`\beta `$-functions of the theory. Strong predictions can be made when KLZ symmetry is combined with the scaling theory of disorder, as applied to quantum Hall systems. According to the scaling theory, conductances (so, in two dimensions, also the conductivities) are the macroscopic measures of microscopic disorder, and so are the relevant variables whose renormalisation group (RG) flow describes the system’s long-wavelength evolution. The main tools for describing this flow are then the $`\beta `$ functions, which describe the scaling of $`\sigma _{xx}`$ and $`\sigma _{xy}`$. In this language the law of corresponding states becomes the requirement that $`𝒦`$ (and $`𝐏`$) should commute with the RG flow. The flow is described by a single complex $`\beta `$-function: $$\beta (\sigma ,\overline{\sigma })=\frac{d\sigma }{dt}=\beta _{xy}(\sigma _{xx},\sigma _{xy})+i\beta _{xx}(\sigma _{xx},\sigma _{xy}),$$ (1) and a simple calculation reveals that the flow commutes with the symmetry if $$\beta (\gamma (\sigma ),\gamma (\overline{\sigma }))=\frac{d\gamma (\sigma )}{dt}=\frac{\beta (\sigma ,\overline{\sigma })}{(c\sigma +d)^2},$$ (2) where the property $`ad-bc=1`$ has been used. We now describe some consequences which follow for the flow of any $`\beta `$-function which satisfies eq. (2) (and subject to a global requirement concerning flow topology, as explained below) regardless of its detailed form. Previous analyses have made further assumptions about the functional form of $`\beta `$, but we shall avoid any such assumptions here and simply follow the implications of particle-hole symmetry. It is an immediate consequence of eq. (2) that the $`\beta `$-function must vanish at any point $`\sigma _{*}`$ (called a fixed point) which is taken to itself — i.e. $`\gamma (\sigma _{*})=\sigma _{*}`$ — by the action of a group element for which $`c\sigma _{*}+d`$ is neither zero nor infinite.
The only such fixed points within the fundamental domain are the one at $`\sigma _{*}=\frac{1}{2}(1+i)`$ — which is taken to itself by $`\gamma (\sigma )=(\sigma -1)/(2\sigma -1)`$ — as well as $`\sigma =0`$ — with $`\gamma (\sigma )=\sigma /(2\sigma -1)`$ — and $`\sigma =1`$ — with $`\gamma (\sigma )=(3\sigma -2)/(2\sigma -1)`$. $`\beta `$ must therefore vanish at these three points (assuming it is finite). The consistency of KLZ symmetry with the flow thereby predicts universal values for the conductivity at the critical points, a possibility which was argued within a more general context in ref. . KLZ symmetry also requires the $`\beta `$-function to vanish — and to have precisely the same critical exponents — at all of the images of the basic fixed points under the action of $`𝒦`$. There is indeed experimental evidence for this equivalence of the critical exponents at different quantum Hall transitions (known as super-universality), a result which had also been argued microscopically (neglecting electron self-interactions). It is not an inescapable consequence of the KLZ symmetry that there be no critical points other than those which are fixed points of $`𝒦`$. However there is no experimental evidence for any other critical points in the quantum Hall effect, and so it would not seem unreasonable to assume there are none. None of the following conclusions requires this assumption unless explicitly stated. Most of the above observations have already appeared in the literature but we now go on to describe the two main results of this paper, which have not been derived from general principles before. 1. The Semi-circle Law: We now show that particle-hole symmetry, together with $`𝒦`$ invariance, implies the semi-circle law. The proof of this argument hinges on the existence of a unique and well-known function, $`f(\sigma )`$, which has the following two crucial properties: 1. It provides a one-to-one map from the fundamental domain to the complex plane (including the point at infinity); 2. It is invariant under the action of the symmetry group $`𝒦`$, $`f(\gamma (\sigma ))=f(\sigma )`$. Define a $`\beta `$-function for $`f`$, $$B(f,\overline{f}):=\frac{df}{dt}=\frac{df}{d\sigma }\frac{d\sigma }{dt}=\frac{df}{d\sigma }\beta (\sigma ,\overline{\sigma }).$$ (3) We can conclude that $`B(f,\overline{f})`$ is invariant with respect to $`𝒦`$ — since $`f`$ is already invariant, $`𝒦`$-invariance imposes no further restrictions on the function $`B(f,\overline{f})`$. Now impose particle-hole symmetry, $`𝐏`$. To determine the consequences for $`B(f,\overline{f})`$ due to $`𝐏`$ we use the following explicit expression for $`f`$ in terms of Jacobi $`\vartheta `$-functions (a very clear description of these classical functions is given in ): $$f(\sigma )=\frac{\vartheta _3^4\vartheta _4^4}{\vartheta _2^8}=\frac{1}{256q^2}\prod _{n=1}^{\infty }\frac{\left(1-q^{4n-2}\right)^8}{\left(1+q^{2n}\right)^{16}},$$ (4) where $`q=e^{i\pi \sigma }`$. Since the action of particle-hole symmetry on $`q`$ is $`𝐏:q\to \overline{q}`$, it is clear from the definition of $`f`$ that $`𝐏:f\to f(\overline{q})=\overline{f(q)}`$. Thus particle-hole symmetry implies that $`B(f,\overline{f})`$ must be invariant under the interchange of $`f`$ and $`\overline{f}`$. So this implies that $$\frac{df}{dt}=B(f,\overline{f}),\qquad \frac{d\overline{f}}{dt}=\overline{B(f,\overline{f})}=B(\overline{f},f).$$ (5) Now suppose we start our RG flow from a value of $`\sigma `$ for which $`f`$ is real.
Equation (5) states that $`B`$ is real when evaluated at this point, and hence $`\frac{df}{dt}`$ must be real. Repeating this argument point-by-point along the flow line we see that $`f`$ cannot develop an imaginary part if it doesn’t start with one. We conclude that curves on which $`f`$ is real are integral curves of any renormalisation group flow which commutes with both $`𝒦`$ and $`𝐏`$! The curves along which $`f`$ is real are easily found, and for the fundamental domain consist of the curves defining the boundaries, plus the vertical line $`\sigma =\frac{1}{2}+iw`$, $`w\ge \frac{1}{2}`$. $`f`$ is real along the vertical lines $`\sigma =\frac{n}{2}+iw`$ (with $`n`$ integral) because it is an even function of $`q`$, and $`q`$ is real or pure imaginary when evaluated along these vertical lines. To see that $`f`$ is also real on the semi-circular arch spanning $`0`$ and $`1`$ requires the following classical facts about the $`\vartheta `$-functions (see e.g. , page 475): $`\vartheta _2\left(-1/\sigma \right)=\sqrt{-i\sigma }\vartheta _4(\sigma ),`$ $`\vartheta _3\left(-1/\sigma \right)=\sqrt{-i\sigma }\vartheta _3(\sigma )`$ (6) $`\vartheta _4\left(-1/\sigma \right)`$ $`=`$ $`\sqrt{-i\sigma }\vartheta _2(\sigma ).`$ (7) The factors of $`\sqrt{-i\sigma }`$ here cancel when we take ratios to form $`f`$ and so $`f(-1/\sigma )=\frac{\vartheta _3^4(\sigma )\vartheta _2^4(\sigma )}{\vartheta _4^8(\sigma )}`$. Now $`\sigma \to -1/\sigma `$ sends the vertical line $`\sigma =1+iw`$, $`0<w<\infty `$, onto the semi-circle spanning $`-1`$ and $`0`$. Since $`\vartheta _a^4`$ ($`a=2,3,4`$) are all real on said vertical line (see e.g. ), $`f`$ must be real on said semi-circle; the latter is then perforce an integral curve of the renormalisation group flow. The complete set of integral curves is then obtained by mapping the above curves around the complex plane using $`𝒦`$, and this is how figure 1 is generated. Points where trajectories cross are fixed points of the renormalisation group flow, and the fixed point at $`\sigma _{*}=\frac{1+i}{2}`$ is evident. The direction of the flow lines is uniquely determined if we assume that there are no other fixed points, that the Hall plateaux are attractive fixed points of the flow, and that the flow comes downwards vertically from $`\sigma =i\infty `$. The line segment $`\sigma =\frac{1}{2}+iw`$, $`\frac{1}{2}<w<\infty `$ is mapped onto $`\sigma =\frac{1}{2}+iw`$, $`0<w<\frac{1}{2}`$ by $`𝐅`$ — the latter line must therefore flow upwards towards $`\frac{1+i}{2}`$ if the former flows downwards towards $`\frac{1+i}{2}`$. Assuming that $`0`$ and $`1`$ are attractive fixed points then determines the flow direction as indicated by the arrows in figure 1. Notice we are inevitably led to the existence of the semi-circles linking 0 to 1/2 in figure 1. It is a general property that the KLZ group takes semi-circles centred on the real axis to other semi-circles also centred on the real axis (including the degenerate case of infinitely large semi-circles, which are vertical lines parallel to the imaginary axis). It follows that the flow between any two Hall plateaux must be along a semi-circle, centred on the real axis, which is the image of the basic semi-circle connecting 0 and 1. In this way we obtain a robust derivation of the semi-circle law, which states that the conductivities move along such semi-circles in the conductivity plane during transitions between Hall plateaux.
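The two facts just used, the invariance of $`f`$ under $`𝒦`$ and its reality on the flow lines, are easy to confirm numerically; the small sketch below uses mpmath's Jacobi theta functions with nome $`q=e^{i\pi \sigma }`$, taking $`\gamma (\sigma )=(\sigma -1)/(2\sigma -1)`$ as a sample group element and an arbitrary test point in the upper half plane:

```python
import mpmath as mp

mp.mp.dps = 30

def f(sigma):
    """f = theta3^4 * theta4^4 / theta2^8 with nome q = exp(i*pi*sigma)."""
    q = mp.exp(1j * mp.pi * sigma)
    t2, t3, t4 = (mp.jtheta(n, 0, q) for n in (2, 3, 4))
    return t3**4 * t4**4 / t2**8

gamma = lambda s: (s - 1) / (2 * s - 1)    # an element of the KLZ group

sigma = mp.mpc(0.3, 0.8)                   # arbitrary point with Im > 0
print("f(sigma)        =", f(sigma))
print("f(gamma(sigma)) =", f(gamma(sigma)))   # should agree

# On the semi-circle spanning 0 and 1, f should be real:
w = 0.7
s_circ = 0.5 + 0.5 * (1 - w**2 + 2j * w) / (1 + w**2)
print("on the semi-circle, Im f =", mp.im(f(s_circ)))
```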
Although the semi-circle law was proposed in on the basis of a particular microscopic model, we see here that it holds more generally than does its original derivation. Any model compatible with the symmetry of the law of corresponding states must reproduce it. Experimentally, the law is also well supported . Figure 1 2. Duality: A second experimentally striking result which follows quite generally from the symmetry version of the Law of Corresponding States is the existence of a duality symmetry relating the conductivities of the flow on either side of the critical point as one flows between any two allowed Hall plateaux, or between Hall plateaux and the Hall insulator. Since all flows are related by a symmetry to the basic semi-circle running between 0 and 1, we derive the duality symmetry for this semi-circle in detail. A convenient parameterization of the semi-circle from 0 to 1 is $`\sigma =\frac{1}{2}+\frac{1}{2}(\frac{1-w^2+2iw}{1+w^2})`$, with $`0<w<\mathrm{\infty }`$. The key observation is that this curve is reflected into itself about the vertical line $`\mathrm{Re}\sigma =\frac{1}{2}`$ by $`𝐏`$ – as well as by $`\gamma (\sigma )=\frac{(\sigma -1)}{(2\sigma -1)}\in 𝒦`$. In terms of the parameter $`w`$ this becomes $`w\to 1/w`$, and since the semi-circle transformed to the $`\rho `$ plane is $`\rho =1+iw`$, this is recognizable as the $`\rho _{xx}\to 1/\rho _{xx}`$ duality which has been observed in the transition to the Hall insulator from the $`\nu =1`$ integer Hall state. The extension to other transitions follows from the action of $`𝒦`$. For $`\nu :\frac{p_1}{q_1}\to \frac{p_2}{q_2}`$ with $`p_2q_1-p_1q_2=1`$, where the transition is along the curve $`\rho =\frac{(q_2p_2+w^2q_1p_1)+iw}{(p_2^2+w^2p_1^2)}`$, the duality is again given by $`w\to 1/w`$. As specialized to transitions to the Hall insulator from the Laughlin sequence, $`\nu :\frac{1}{2n+1}\to 0`$, the flow is the vertical line $`\rho =(2n+1)+iw`$ in the resistivity plane and so the duality $`w\to 1/w`$ again implies the inversion $`\rho _{xx}\to 1/\rho _{xx}`$ about the critical point.
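The $`w\to 1/w`$ statements can be checked mechanically (our sketch below; the only assumption is the conjugation convention for the complex resistivity, chosen here as $`\rho =1/\overline{\sigma }`$ so that $`\rho =\rho _{xy}+i\rho _{xx}`$ lands on $`\rho =1+iw`$ as quoted above):

```python
def sigma_of(w):
    # parameterization of the 0-to-1 semi-circle given in the text
    return (1 + 1j * w) / (1 + w * w)

for w in (0.25, 0.7, 3.0):
    s = sigma_of(w)
    assert abs(abs(s - 0.5) - 0.5) < 1e-12                       # on the arch
    assert abs(sigma_of(1 / w) - (1 - s.conjugate())) < 1e-12    # P: w -> 1/w
    assert abs(sigma_of(1 / w) - (s - 1) / (2 * s - 1)) < 1e-12  # gamma: w -> 1/w
    print(w, 1 / s.conjugate())      # rho = 1 + i*w: constant rho_xy = 1
```

Both the particle-hole reflection and the group element $`\gamma `$ implement the same $`w\to 1/w`$ action, which is the content of the duality.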
In conclusion, we wish to emphasise two points. First, the assumption that the law of corresponding states holds as a symmetry at low temperatures leads to an infinite order discrete symmetry group for the quantum Hall effect — called the KLZ group here. This group acts on the upper-half complex conductivity plane. If this is to be a symmetry its action must commute with the renormalisation group flow of the system and fixed points of the group action must be fixed points of the flow. Three kinds of fixed points are predicted in this way: attractive fixed points with $`\sigma _{xx}=0`$ (which are images under the group of the basic ones at $`\sigma =0`$ or 1 and all have odd denominator) describing the quantum Hall fluids and the Hall insulator; repulsive fixed points with $`\sigma _{xx}=0`$ (which are images of $`\sigma =1/2`$ and all have even denominator); and saddle points (which are images under the group of the basic one at $`\sigma =\frac{1}{2}(1+i)`$) describing the critical points in the transitions between the various quantum Hall states. By organizing the critical points of the system via an (infinite order) discrete symmetry, the KLZ group furnishes a fascinating generalisation of the Kramers-Wannier $`𝐙_2`$ duality of the Ising model. Particle-hole symmetry places further restrictions on the $`\beta `$-function and dictates that the RG flow between Hall plateaux, and between the plateaux and the Hall insulator, be described by the semi-circle law. It also inescapably predicts the general existence of a duality symmetry for all Hall transitions, which reduces to the observed $`\rho _{xx}\to 1/\rho _{xx}`$ duality for Laughlin-sequence/Hall-insulator transitions. A point that must be addressed here is that the experimental data do not always reproduce a fixed point exactly at $`\sigma _{xx}=1/2`$ for integer transitions. For example, the critical point in the $`1\to 0`$ transition reported in is definitely not identified with $`\frac{1+i}{2}`$. This is therefore incompatible with the law of corresponding states. However, it is notoriously difficult to extract the longitudinal resistivity (and hence conductivity) from the longitudinal resistance (this issue is discussed, for example, on page 52 in Cage’s article in ) — the relationship between these two quantities is plagued with geometrical ambiguities, unlike the transverse resistivity. Since a Hall-insulator transition $`\frac{1}{2q+1}\to 0`$ in the $`\sigma `$ plane corresponds to a vertical line $`\rho =(2q+1)+iw`$, $`w>0`$, in the resistivity plane, rescaling the imaginary part of $`\rho `$ does not affect the semi-circle law in the $`\sigma `$ plane for these transitions. However it does move the critical point. One interpretation of the experiment is that the experimentally determined longitudinal resistivity is not the same as the true microscopic longitudinal resistivity, but a constant multiple of it (a factor of $`1.7`$ in this particular experiment). We thank C.A. Lütken for helpful discussions. C.B. is grateful to the University of Barcelona, and B.D. to the IPN, Orsay and CNRS, for their generous support and kind hospitality while part of this work was carried out.
no-problem/9911/astro-ph9911220.html
ar5iv
text
# Microlensing of Broad Absorption Line Quasars: Polarization Variability ## 1 Introduction While they are amongst the most luminous objects in the universe, the vast majority of the energy radiated by quasars arises in a region less than a parsec in extent. At cosmological distances, such a region subtends only microarcseconds and its structural properties are well below detection by conventional techniques. A growing number of studies, however, have demonstrated that if such sources are microlensed by stars in a foreground system then observations during high magnification events can reveal details at these scales, probing both the structural and kinematic properties of quasars (Kayser, Refsdal, & Stabell, 1986; Nemiroff, 1988; Irwin et al., 1989; Schneider & Wambsganss, 1990; Corrigan et al., 1991; Lewis et al., 1998). Recent studies have extended this earlier work, providing further approaches by which microlensing can reveal the inner structure of quasars. Using the fact that, to a microlensing mass, the quasar continuum source appears quite extended, and that variations intrinsic to the source will be manifest in all the macrolensed images, Yonehara et al. (1998, 1999) demonstrated how the nature of the accretion disk can be probed with multi-wavelength observations. Accordingly, the size of the source may also be constrained by monitoring and studying quasar variations (Yonehara, 1999), high-magnification event shapes (Wyithe & Webster, 1999), and fold caustic crossing events (Fluke & Webster, 1999). A different approach by Lewis & Ibata (1998) used the microlensing induced centroid shift of a macrolensed image to determine quasar structure. In a previous paper (Lewis & Belle 1998; hereafter Paper I), we demonstrated how microlensing may be used to examine the scale of the clouds responsible for the prominent BALs seen in some quasars. Here we extend this earlier study and show how spectropolarimetric monitoring of BAL quasars can provide further clues to the internal structure of quasars, specifically the details of the scattering mechanism responsible for the polarization increase observed within the BAL troughs. A description of the current favored models of BAL systems is presented in Section 2, while the rôle of microlensing as a probe of these models is discussed in Section 3. The statistics of microlensing events are presented in Section 4.2, focusing on the case of the multiply imaged quasar, H 1413+117. The conclusions of this study are presented in Section 5. ## 2 Broad Absorption Line Quasars: Models About $`10\%`$ of quasars display BALs in their spectra, appearing blue-ward of prominent emission lines and exhibiting bulk outflow velocities of $`5000-30000\mathrm{km}\mathrm{s}^{-1}`$. These are generally interpreted as resonance absorption line systems due to highly ionized species (Turnshek, 1984, 1988, 1995; Weymann, 1995). The comparative rarity of BAL quasars has made them the focus of numerous studies and has led to the question of their place in the unified model of active galactic nuclei (e.g., Antonucci 1993), a question that is currently a matter of debate (Kuncic, 1999). The nature of the absorbing region responsible for these spectral features, although the subject of various investigations, has yet to be fully explained. Several different observations of BAL quasars are, however, beginning to paint a picture of the inner regions of the quasar and the material which produces the BALs.
Abundance ratios indicate that the constant source of material could emanate from atmospheres of giant stars (Scoville & Norman, 1995), novae (Shields, 1996, 1997), parts of an inner obscuring torus or even parts of the accretion disk (Murray et al., 1995). The high outflow velocity of the absorbing material, most likely due to radiative acceleration (Arav, Li, & Begelman, 1994), has led to models which consider the necessary confinement of the clouds comprising the BAL region. Pressure confinement by a hot ambient medium (Arav, Li, & Begelman, 1994) or by magnetic fields (Arav & Li, 1994) and X-ray shielding of the material by a very high column density ionized gas (Murray et al., 1995) have all been offered as solutions to the confinement problem. Alternatively, the confinement problem can be avoided altogether by involving a continuous, high column density outflow of absorbing gas rather than clouds (Kuncic, 1999). Electron column densities imply that the region lies at a distance of 30-500 pc from the central continuum source (Turnshek, 1986). Estimates for the radial extent of the absorbing clouds range from $`10^8`$ cm \[assuming many clouds with small filling factors (Weymann, 1995)\] up to $`10^{13.5}`$ cm \[from column density measurements (Turnshek, 1995; Murray et al., 1995)\] and even sizes as large as $`10^{15}`$ cm \[determined from microlensing limits (Paper I)\] have been suggested. The ‘blackness’ of some of the observed troughs indicates that the material in the BAL region can completely obscure our view of a quasar core. This region typically possesses an angular scale of only microarcseconds, making a determination of the geometrical properties of the BAL region virtually impossible with normal observational techniques. It has become apparent, however, that our view of quasars in polarized light is implicitly dependent upon the geometry of the central regions. From the various polarimetry studies undertaken for BAL quasars, several consistent observations have been noted; it is seen that the broad emission lines evident in the quasar spectra exhibit no or very low net polarization (e.g., Goodrich & Miller 1995; Schmidt & Hines 1999), while it has also been shown that a greater amount of polarization exists in the BAL troughs ($`12\%`$) as compared to the continuum ($`2\%`$) (e.g., Cohen et al. 1995; Ogle 1997; Schmidt & Hines 1999). Other clues come from the amount of reddening seen in the broad emission lines, with $`E(B-V)\simeq 0.4`$, which is significantly greater than that seen in the continuum or the BALs, with $`E(B-V)>0.1`$ (Sprayberry & Foltz, 1992). The existence of polarized light in quasar spectra indicates that some scattering of radiation must be present. When the above observations are considered together, a schematic picture of the central regions of the BAL quasar emerges, with the material responsible for the BAL absorption being blown equatorially from the dusty torus surrounding the central accretion disk (e.g., Goodrich & Miller 1995; Hines & Wills 1995; Weymann 1995; Schmidt & Hines 1999). As illustrated in Figure 1, however, two possible configurations have been proposed to explain the observed polarization properties of BAL spectra (e.g., Cohen et al. 1995); * Model A: The path of the continuum light (Path A) is a simple direct path which leaves the central source and passes through the BAL region. In this case, it is assumed that the continuum emission is intrinsically polarized upon leaving the source.
As the radiation travels through the absorbing region, any polarization already present is enhanced by the action of resonance scattering of emission into the line of sight, leading to the BAL troughs exhibiting greater percentage polarization than the continuum. * Model B: This model assumes a slightly more complex geometry in which emission from the continuum source is intrinsically unpolarized. This light travels not only directly to an observer, traversing the BAL region, but also along an alternate path (denoted B in Figure 1) impinging upon a scattering region, most likely consisting of electrons or dust (Antonucci & Miller, 1985; Goodrich & Miller, 1995; Gallagher et al., 1999; Brandt et al., 1999). It then travels on to an observer, leaving the central regions of the quasar without necessarily traveling through the BAL region. Unpolarized light coming directly to the observer dilutes this polarized flux in the continuum of the spectrum. The BAL troughs, however, eat into this unpolarized flux, reducing the dilution and enhancing the percentage polarization in the absorption line. The goal of this paper is to demonstrate how photometric and polarimetric monitoring of microlensed BAL quasars can clearly delineate between the above proposed models, and illustrate how microlensing can give further clues to the nature of BAL quasars. ## 3 Microlensing Chang and Refsdal (1979) first demonstrated that the granularity of galactic matter, distributed on small scales as point-like stars and planets, can exert a gravitational lensing influence on the light passing through it from a distant source. While the image splitting introduced by such masses is extremely small ($`10^{-6}`$ arcseconds), such lenses can induce a significant magnification of a background source. As the relative configuration of source, lensing stars, and observer changes, so too does the degree of magnification, and consequently the brightness of a background source is seen to fluctuate. Such fluctuations have been seen in the light curve of the quadruple quasar, Q 2237+0305 (Irwin et al., 1989; Corrigan et al., 1991). The original work of Chang and Refsdal (1979) considered the gravitational lensing action of a single, isolated star \[a description that accurately represents the simple gravitational lensing by MACHO objects within our Galactic halo (Alcock et al., 1993)\], however, it is the case that many stars act on a light beam as it passes through a galaxy. In such a high optical depth regime, the pattern of magnification a source will suffer is no longer simple; rather it is characterized by a series of violent asymmetric fluctuations, interspersed with quiescent regions where the source suffers demagnification, also known as the microlensing caustic structure (Kayser, Refsdal, & Stabell, 1986). Different numerical techniques have been developed to investigate the pattern of magnification that a microlensed object undergoes. The backwards ray-tracing method, which reconstructs a map of magnification (Kayser, Refsdal, & Stabell, 1986; Wambsganss, 1990) and the contour algorithm (Lewis et al., 1993; Witt, 1993) have proven to be useful in the analysis of the statistical properties of microlensing induced variability (Lewis & Irwin, 1995, 1996). To understand the characteristics of variability introduced by microlensing, it is necessary to determine two important scale factors: the Einstein radius and the caustic crossing timescale. The Einstein radius, $`\eta `$, represents the microlensing scalelength in the source plane.
For a single star of mass $`M`$ it is given as $$\eta =\sqrt{\frac{4GM}{c^2}\frac{D_{os}D_{ls}}{D_{ol}}},$$ (1) where $`D_{ij}`$ represents the angular diameter distance between observer ($`o`$), lens ($`l`$), and source ($`s`$) (Schneider, Ehlers, & Falco, 1992). For significant magnification, sources must be substantially smaller than this scalelength. Considering the cosmological distances involved, typical values of $`\eta `$ for multiply imaged quasars, microlensed by stars of mass $`1M_{\odot }`$, range from $`\sim `$0.01-0.1 pc. The characteristic microlensing timescale, $`\tau `$, is the time taken by a region of high magnification (which can be formally infinite at a ‘caustic’ in the magnification pattern) to sweep across a source (Kayser, Refsdal, & Stabell, 1986). This is given as $$\tau \simeq \frac{f_{15}}{V_{eff}}\mathrm{yr},$$ (2) where $`f_{15}\times 10^{15}h_{75}^{-1}\mathrm{cm}`$ is the scalesize of the source, and $$V_{eff}=(1+z_l)\frac{D_{os}}{D_{ol}}\frac{v_{300}}{h_{75}}\mathrm{km}\mathrm{s}^{-1}$$ (3) is the effective velocity of the microlensing caustics, with $`z_l`$ as the redshift of the lensing galaxy and $`300v_{300}\mathrm{km}\mathrm{s}^{-1}`$ as the velocity of the microlensing stars across the line of sight, assumed to be the bulk motion of the galaxy due to its departure from the Hubble flow. The situation is more complex when one considers motions due to the stellar velocity dispersion (Schramm et al., 1993; Wambsganss & Kundic, 1995), a point which will be considered later. For macrolensed quasars, typical values of $`\tau `$ are around several months.
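As a concrete illustration of these scale factors, the short sketch below evaluates equations (1) and (2) numerically. The angular-diameter distances and transverse speed are assumed, order-of-magnitude placeholder values for a cosmological lens, not measurements of any particular system.

```python
import math

G, c = 6.674e-8, 2.998e10                 # cgs units
M_sun, pc, yr = 1.989e33, 3.086e18, 3.156e7

D_ol, D_os, D_ls = 1.2e9 * pc, 1.7e9 * pc, 0.9e9 * pc   # assumed distances

eta = math.sqrt((4 * G * 1.0 * M_sun / c**2) * D_os * D_ls / D_ol)
print("Einstein radius eta = %.3f pc" % (eta / pc))      # ~0.01-0.1 pc, as quoted

v_eff = 300e5                             # cm/s; assumed effective caustic speed
f15 = 1.0                                 # source size in units of 10^15 cm
print("source crossing time = %.2f yr" % (f15 * 1e15 / v_eff / yr))
```

With these placeholder numbers the crossing time comes out at the months-to-years scale quoted in the text.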
But what is the effect of microlensing on each of the scattering models discussed in Section 2? To answer this question, we need to construct, for each model, the ‘view’ of the central regions of the BAL quasars as seen by the microlensing caustic structure. Considering Figure 1, such a view is represented in Figure 2, which will be discussed in more detail in the following sections. With this picture, the question of how microlensing influences our view of BAL quasars can be addressed. ### 3.1 Model A: Scattering within the BAL Region For this model, the enhanced polarization seen in the BAL troughs is resonance scattering within the BAL region itself. When investigating the possible microlensing effects, the size of the continuum source as seen through the scattering BAL region must be considered. Many previous studies have demonstrated that this is smaller than the typical Einstein radius by at least an order of magnitude, and hence, is subject to extreme gravitational lensing events (e.g., Lewis et al. 1998). Similarly, since the scattering region is the BAL region itself, the view of the continuum source through the BAL region in both polarized and unpolarized light will present very similar profiles. This model is illustrated in the left-hand panel of Figure 2. The microlensing masses will see unpolarized and polarized light coming directly from the continuum source through the BAL region. Therefore, as the caustic sweeps over the BAL region, both the scattered light and absorbed light are being magnified equally. In this picture, we would expect that while microlensing results in a pronounced photometric change during a high magnification event, the observed percentage polarization in the BAL trough should remain unchanged. Similarly, if a snap-shot view were obtained of a multiply imaged BAL quasar, each image would be in a different state of microlensing, although each would possess the same polarization properties in the BAL troughs. ### 3.2 Model B: External Scattering In the case of the external scattering region, we are observing light traveling two different paths as it leaves the continuum source, with the radiation which impinges on the scattering region not being significantly attenuated by the broad absorption material. As stated previously, it is necessary to take into account the size of the scattering region, which in this case is unknown. If it is too large to be affected by microlensing (i.e., larger in comparison to the Einstein radius of the microlensing stars), then the polarized flux will remain effectively constant, as the BAL quasar is being microlensed, while the continuum flux will be subject to variability. This leads to variability in the observed percentage polarization which will be seen in both the absorption lines and the continuum. However, if the region is indeed small enough to be significantly magnified then the flux in polarized light will also vary during microlensing, although there will be a delay between observed variability in the continuum and in the polarized flux. If the caustic is traveling parallel to the line connecting the continuum source and the scattering region, the extent of this delay depends upon the projected separation, S, of the continuum source and scattering region, and the apparent velocity of the caustics, and may be defined as $$\tau _{delay}\simeq \frac{\mathrm{S}}{V_{eff}},$$ (4) where $`V_{eff}`$ is the effective velocity (Equation 3). To more clearly illustrate this point, the second panel of Figure 2 depicts the view which the microlensing caustics will have of the scattering region and the continuum source as seen through the BAL clouds. It is apparent that the caustic will pass over the individual regions separately, thereby producing a time delay between observed variabilities. Naturally, the value of $`\tau _{delay}`$ and which component leads the variability depend on the direction that the microlensing magnification map sweeps across the quasar central region. Figure 3 schematically depicts the passing of a caustic across such a quasar source; both the continuum source and scattering region are treated as being uniform disks with radii of 0.05$`\eta `$ and 0.5$`\eta `$, separated by S$`=1\eta `$. The caustic sweeps first across the continuum source, then across the scattering region, resulting in the magnification fluctuations presented in the upper and middle panels of Figure 3. As pointed out earlier, the scattering region suffers a lower magnification than the continuum source due to its more extended nature. The lower panel of Figure 3 presents the fluctuation in the observed percentage polarization in the absorption line, assuming an intrinsic 10% polarization. As the continuum flux in the line is magnified there is a marked decrease in the percentage polarization as the emission from the scattering region is diluted, but later, when the strong magnification of the source has ceased, the scattering region is slightly magnified, leading to an enhancement of the percentage polarization. After the caustic has passed both sources they are equally magnified and the percentage polarization settles back to its pre-microlensed value, although the quasar image now appears brighter as it is still magnified by (in this case) 10%.
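A toy numerical version of this Figure 3 scenario is easy to set up. The sketch below is ours and purely illustrative: the fold-caustic strength, the background magnification, and the simple dilution model for the trough polarization are all assumptions.

```python
import numpy as np

def mu_point(x, mu0=1.0, k=1.0):
    # point-source magnification near a fold caustic; bright side is x > 0
    return mu0 + np.where(x > 0, k / np.sqrt(np.clip(x, 1e-9, None)), 0.0)

def mu_disk(xc, R, n=2001):
    # uniform-disk magnification: area-weighted average of mu over the disk
    x = np.linspace(xc - R, xc + R, n)
    chord = 2.0 * np.sqrt(np.clip(R**2 - (x - xc)**2, 0.0, None))
    return np.trapz(mu_point(x) * chord, x) / (np.pi * R**2)

R_cont, R_scat, S, P0 = 0.05, 0.5, 1.0, 0.10   # radii, separation (units of eta)
for xc in np.linspace(-0.5, 2.5, 13):          # caustic position vs. continuum
    mc, ms = mu_disk(xc, R_cont), mu_disk(xc - S, R_scat)
    P = P0 * ms / ((1 - P0) * mc + P0 * ms)    # diluted trough polarization
    print("x=%5.2f  mu_c=%5.2f  mu_s=%5.2f  P=%5.3f" % (xc, mc, ms, P))
```

The output reproduces the qualitative behaviour described above: the polarization dips while the small continuum disk is strongly magnified, then overshoots its intrinsic value while the caustic sits on the larger scattering disk.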
## 4 A Microlensed BAL Quasar Case Study ### 4.1 H 1413+117 To date, there are several confirmed gravitationally lensed BAL quasars: UM 425 (Meylan & Djorgovski, 1988, 1989; Michalitsianos & Oliversen, 1995); APM 08279+5255 (Irwin et al., 1998; Ellison et al., 1999a, b); HE 2149-2745 (Wisotzki et al., 1996); SBS 1520+530 (Chavushyan et al., 1997); PG 1115+080 (Weymann et al., 1980; Michalitsianos & Oliversen, 1996); and H 1413+117 (Magain et al., 1988). While the results of this investigation apply to all of these subjects, we focus on H 1413+117 as it is the most studied and observed gravitationally lensed BAL quasar. Identified in a survey of luminous quasars, H 1413+117 was found to possess four images of a z=2.55 quasar which exhibits broad absorption features, with image separations of $`0\text{′′}\text{.}77`$ to $`1\text{′′}\text{.}36`$ (Magain et al., 1988). While spectroscopy has revealed the presence of a number of absorption systems between $`\mathrm{z}\simeq 1.4-2.1`$, there has been no indication of a lensing system between the quasar images in ground based observations (Lawrence, 1996), although the system does lie behind a cluster of galaxies at z$`\simeq 1.7`$ which perturbs the lensing via shearing (Kneib et al., 1998a). Recent analysis of NICMOS images of H 1413+117, however, does reveal a very faint $`(\mathrm{H}_{160\mathrm{W}}=20.5)`$, extended system at the center of the cross-like quasar image configuration (Kneib et al., 1998b). Angonin et al. (1990) undertook integral field spectroscopy of individual images in H 1413+117. While confirming the BAL nature of the source quasar, these observations also revealed spectroscopic differences between the images, namely in the relative strength of the Si IV $`\lambda \lambda `$1394, 1403 and C IV $`\lambda `$1549 broad emission features (Hutsemékers, 1993), a signature of differential gravitational microlensing by stellar mass objects in the intervening galaxy (Sanitt, 1971; Lewis et al., 1998). This phenomenon has also been observed in another quadruple quasar, Q 2237+0305 (Lewis et al., 1998). Such a conclusion is supported by an observed photometric variability in H 1413+117 (Remy et al., 1996) and significant differences between the BAL profiles of the individual images (Angonin et al., 1990; Hutsemékers, 1993). There have been several investigations into the microlensing effects on BAL profiles within the spectra of the multiple images of H 1413+117. The first, which used a Chang-Refsdal Lens (Chang & Refsdal, 1979, 1984) to examine the effects on an individual cloud within the BAL region, was presented by Hutsemékers (1993; Hutsemékers, Surdej, & Van Drom 1994). Although this model was able to reproduce the observed spectral variations, it required a very specific alignment of the caustic, an individual BAL cloud, and the BAL region, and even a slight change would have a significant effect on the predicted magnification. A different approach was taken in Paper I, where we considered a model for the BAL region consisting of multiple clouds with varying amounts of absorption. Although the caustic framework is rather intricate at high optical depth, the scalesizes of this and of the source quasar are such that most of the caustics crossing the BAL region will be isolated fold catastrophes (Schneider, Ehlers, & Falco, 1992).
This allowed a fairly simple calculation of the microlensing effects on the BAL region, and it was shown that the microlensing variability seen within the spectra of the lensed images of H 1413+117 could be reproduced if the scalesize of the BAL clouds is $`\sim 10^{15}`$ cm. ### 4.2 Event Statistics and Time Scale An important consideration is the timescale on which microlensing events will occur. To characterize the lensing scenario, two caustic crossing timescales need to be defined: the crossing times of the continuum source and the Einstein radius. For these calculations, we assume a standard cosmology of $`\mathrm{\Omega }=1`$ and $`\mathrm{\Lambda }=0`$, and $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Using Equation 2, the timescales can all be calculated rather simply. If the continuum source is assumed to be $`1\times 10^{15}`$ cm (Wambsganss, Schneider, & Paczyński, 1990) and the velocity of the lensing galaxy across the source plane is taken to be $`300\mathrm{km}\mathrm{s}^{-1}`$, then the crossing time of the continuum source is $`\tau _{cont}\simeq 0.3`$ yr. The Einstein radius for the parameters pertaining to H 1413+117 is $`\eta \simeq 0.006\sqrt{M/M_{\odot }}`$ pc giving a crossing time of $`\tau _{ER}\simeq 6.5`$ yr for a one solar mass star. If the separation, S, between the scattering region and continuum source is taken to be $`\mathrm{S}=1\eta =19.4\times 10^{15}`$ cm, then $`\tau _{delay}=\tau _{ER}\simeq 6.5`$ yr. As discussed in Section 3, the pattern of magnification that a source undergoes is rather intricate, and the positions of the caustics projected onto the source plane will obviously dictate when and how often a source will undergo a microlensing event. In order to determine the timescales on which these events will occur, it is necessary to examine the light curve of a source which results as the caustic network passes over the source. Such calculations were performed for the lensed quasar Q 2237+0305 by Witt, Kayser, & Refsdal (1993), who determined the nature of the light curve specific to the lensing parameters of each individual image. From these light curves the statistics of the microlensing events could be calculated, including the average time between high magnification events. To determine similar parameters for H 1413+117, we can extrapolate from the work of Witt, Kayser, & Refsdal (1993) (because of similarities in lensing parameters) and then scale these numbers to H 1413+117. We find that if the source is moving parallel to the shear, then a lensing event will occur every $`\sim 1.3`$ yr and if the source is moving perpendicular to the shear, then lensing events will occur on timescales of $`\sim 0.9`$ yr. These timescales mean that significant microlensing effects can be seen if observations are made every couple of weeks over the course of a few years.
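These numbers follow from simple unit bookkeeping, as the cross-check below shows (our arithmetic; the effective caustic speed is an assumed value of order $`10^3`$ $`\mathrm{km}\mathrm{s}^{-1}`$, the magnitude implied by Equation 3 for this lens configuration).

```python
pc, yr = 3.086e18, 3.156e7
eta = 0.006 * pc                     # Einstein radius for M = 1 M_sun, in cm
v_eff = 970e5                        # cm/s; assumed effective caustic speed
print("tau_cont  = %.2f yr" % (1.0e15 / v_eff / yr))      # ~0.3 yr
print("tau_ER    = %.1f yr" % (eta / v_eff / yr))         # ~6 yr
print("tau_delay = %.1f yr" % (19.4e15 / v_eff / yr))     # S = 1 Einstein radius
```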
## 5 Conclusions While current polarimetric data of BAL quasars have led to theories about the origin of the polarized light, these observations cannot clearly delineate between the various scattering models. However, as we have argued in this paper, microlensing may provide more detailed clues about the structure of the inner regions of quasars. Considering a scattering region associated with the BAL material and a scattering region separate from the BAL region as the most widely accepted scattering models, three different signatures will be apparent in observations of a microlensed BAL quasar: * Case 1 (Model A): There is no time delay between continuum and polarization variability and no percentage polarization change in the BALs. * Case 2 (Model B): No time delay is observed between continuum and polarization variability but a percentage polarization change in the BALs is detected. * Case 3 (Model B): There is a time delay between continuum and polarization variability, and a percentage polarization change is apparent in the BAL troughs. (Note that as there is an overall enhancement in polarized flux, the percentage polarization of the continuum will also be seen to change during a microlensing event for all of these cases.) Case 1 implies that the polarization is actually associated with the absorbing material. Cases 2 and 3 both suggest that the scattering region is not associated with the absorbing material, although Case 2 indicates that the scattering region is large, while Case 3 points to a small scattering region. If Case 2 or 3 is apparent in observations of a gravitationally lensed BAL quasar, then the separation between the scattering region and the continuum source may be calculated from the observed time delay. Most importantly, observations of a single microlensing event will allow us to differentiate between the two scattering models. In considering these models, however, it is important to also examine the effects of intrinsic variability of the AGN source, which may mask any microlensing signal. Foremost is luminosity variability: important for Model B, any change in the source luminosity will manifest itself in the polarized flux after a delay due to the geometric light travel distance. Such variability, however, possesses several properties that make it easily distinguishable from any microlensing induced features. Namely, the change in unpolarized flux always leads that of the polarized flux, and the observed delay between the two will be the same in all images of a multiply lensed quasar. With microlensing, however, the relative phase and time delay between the unpolarized and polarized variability depend upon the direction and velocity that a caustic sweeps across the AGN source, properties that will be different in each macrolensed image. Similarly, an observed polarization fluctuation that may be attributed to microlensing of Model A could possibly be due to intrinsic variability of the polarized emission from the quasar source. As with luminosity variability, however, such intrinsic polarization variability will manifest itself in each component of a multiply imaged quasar, modulo a gravitational lensing time delay, and hence can be accounted for. The details of microlensing induced variability, however, will be unique to a particular image and will be uncorrelated with that of the other images. Although we have presented a simple and straightforward method for using polarimetric observations of gravitationally microlensed BAL quasars to determine various parameters pertaining to the polarization, there is a degeneracy in these calculations as all of the quantities depend on the mass of the microlensing objects (as $`\sqrt{M}`$) and the unknown transverse velocity.
This is also complicated by the fact that there may be a significant velocity dispersion within the lensing galaxy which would make the calculations outlined in this paper much more complex. Removing these degeneracies in another gravitationally lensed quasar, Q 2237+0305, has been the goal of recent studies by Wyithe (1999) and Wyithe, Webster, & Turner (1999a, b, c) who have demonstrated that monitoring gravitationally lensed quasars will lead to the determination of these unknown quantities. We plan to extend the current study presented in this paper with simulations that will more closely examine microlensing effects occurring within the multiple images of H 1413+117, and which will also consider the mass and velocity degeneracies. Zdenka Kuncic and Nahum Arav are thanked for extremely useful discussions.
no-problem/9911/cond-mat9911200.html
ar5iv
text
# Quantum Poincaré Recurrences for Hydrogen Atom in a Microwave Field ## Abstract We study the time dependence of the ionization probability of Rydberg atoms driven by a microwave field, both in classical and in quantum mechanics. The quantum survival probability follows the classical one up to the Heisenberg time and then decays algebraically as $`P(t)\propto 1/t`$. This decay law derives from the exponentially long times required to escape from some region of the phase space, due to tunneling and localization effects. We also provide parameter values which should allow one to observe such a decay in laboratory experiments. During the last two decades the manifestations of classical chaos in microwave ionization of Rydberg atoms have been studied experimentally by different groups and many interesting results have been obtained . In particular, laboratory experiments showed the quantum suppression of the classically diffusive ionization process, in agreement with the predictions of dynamical localization theory . The experimental technique based on accelerated proton beams, which is used for production of hydrogen atoms, allows one to obtain interaction times with the microwave field of only a few hundred microwave periods . On the contrary, the thermal beams used with alkali Rydberg atoms allow one to vary the interaction time by orders of magnitude, up to $`10^5`$ microwave periods . The first experiments of Walther’s group indicated an anomalously slow decay of the $`10\%`$-ionization threshold field as a function of the interaction time. This result cannot be explained within the picture of diffusive ionization in the domain of classical chaos. Some suggestions have been put forward to explain this slow decay, which was attributed to some possible effects of noise for such long interaction times . More recently, new experimental data for the behavior of the survival probability $`P(t)`$ with time have been presented , showing an algebraic decay law $`P(t)\propto t^{-\alpha }`$, with $`\alpha \simeq 0.5`$. In the same paper, numerical simulations of quantum dynamics have been made, giving a value of $`\alpha `$ consistent with experimental data. The origin of the slow algebraic decay was attributed to the underlying structure of classical mixed phase space composed of integrable islands surrounded by chaotic components. However, the investigations of classical chaotic systems with mixed phase space showed that the probability of Poincaré recurrences to the same region, or the survival probability up to time $`t`$, decays algebraically with power $`\alpha \simeq 1.5-3`$ . Moreover, since the integral $`\int _t^{\mathrm{\infty }}P(\tau )d\tau `$ is proportional to the measure of the finite chaotic region where the trajectory is trapped, the value of $`\alpha `$ should be greater than one. According to the correspondence principle, one expects that, in the semiclassical regime, classical and quantum systems exhibit the same decay law. Therefore the above exponent $`\alpha \simeq 0.5`$ found in the experiments requires, in this respect, an explanation. In particular the question arises whether this obtained value is generic or corresponds to some initial transient time behavior in a regime where quantum effects play an important role. Recent studies of quantum Poincaré recurrences for the Chirikov standard map in the semiclassical regime with mixed phase space showed that quantum $`P(t)`$ follows the classical decay during a relatively large time $`t_H`$. The time $`t_H`$ gives the Heisenberg time scale, which is determined by inverse level spacings. For $`t>t_H`$, the quantum survival probability starts to decay inversely proportionally to time ($`\alpha =1`$) and becomes much larger than the classical one. The power $`\alpha =1`$ is due to exponentially long times required to escape from some region of phase space. These exponentially long escape times are originated by tunneling from a classically integrable region or by the exponential quantum localization. The above quantum behavior, with exponent $`\alpha =1`$, is different from the experimental data, and this constitutes an additional motivation for the present paper.
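For reference, this is how such an exponent is extracted in practice: a least-squares line in log-log coordinates. The sketch below (ours) does this for synthetic data standing in for a measured $`P(t)`$.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.logspace(1, 5, 40)                        # times in microwave periods
P = 3.0 * t**-1.0 * np.exp(0.05 * rng.standard_normal(t.size))  # noisy ~1/t
slope, intercept = np.polyfit(np.log(t), np.log(P), 1)
print("fitted alpha = %.2f" % -slope)            # ~1.0 for P(t) ~ 1/t
```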
Indeed the highly excited states of the hydrogen atom in a microwave field can be described by the Kepler map which is very similar to the Chirikov standard map and therefore one would expect the same behavior for the time dependence of the survival probability. In order to investigate the probability decay for the hydrogen atom in a microwave field, we choose the initial state with principal quantum number $`n_0`$ and numerically study the survival probability $`P(t)`$ in a linearly polarized monochromatic electric field $`ϵ(t)=ϵ\mathrm{sin}(\omega t)`$. Here $`ϵ`$ and $`\omega `$ are the strength and frequency of the microwave field, measured in atomic units. The quantum evolution is numerically simulated by the Kepler map , by the one-dimensional ($`1d`$) model of a hydrogen atom and by the $`3d`$ model for atoms initially prepared in states extended along the field direction and with magnetic quantum number $`m=0`$. The comparison of these three models shows that the essential physics is captured by the $`1d`$ model as already discussed in . In addition we show that also the Kepler map gives an approximately correct description of the dynamics. The hydrogen atom in a linearly polarized monochromatic electric field is described by the Hamiltonian $$H=\frac{p^2}{2}-\frac{1}{r}+ϵz\mathrm{sin}(\omega t),$$ (1) where, in the $`1d`$ model, the motion is assumed to take place along the field direction ($`z`$-axis, with $`z\ge 0`$). In order to compare classical and quantum dynamics it is convenient to use the scaled field strength $`ϵ_0=ϵn_0^4`$ and frequency $`\omega _0=\omega n_0^3`$, which completely determine the classical dynamics. The classical limit corresponds to $`\hbar _{\mathrm{eff}}=\hbar /n_0\to 0`$, at constant $`ϵ_0`$, $`\omega _0`$. For $`\omega _0>1`$ the main change of the electron energy $`E`$ occurs when the electron is close to the nucleus. As a consequence the dynamics is approximately given by the Kepler map $$\overline{N}=N+k\mathrm{sin}\varphi ,\overline{\varphi }=\varphi +2\pi \omega (-2\omega \overline{N})^{-3/2},$$ (2) where $`N=E/\omega `$, $`k=2.6ϵ\omega ^{-5/3}`$, $`\varphi =\omega t`$ is the phase of the microwave field when the electron passes through the perihelion, and the bar marks the new values of variables. In the quantum case, the change of $`N`$ gives the number of absorbed photons while the number of photons required to ionize the atom is $`N_I=1/(2n_0^2\omega )`$. In classical mechanics diffusive ionization takes place for fields above the chaos border: $`ϵ_0>ϵ_c\simeq 1/(49\omega _0^{1/3})`$ .
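The classical map (2) is simple enough that the survival probability can be estimated directly from an ensemble of trajectories. The sketch below is ours; the field parameters are illustrative values above the chaos border, not those used for the figures of this paper, and no absorption border is modeled.

```python
import numpy as np

n0, omega0, eps0, n_traj = 60.0, 2.0, 0.05, 50_000
omega = omega0 / n0**3
k = 2.6 * (eps0 / n0**4) * omega**(-5.0 / 3.0)   # kick strength of map (2)

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, n_traj)
N = np.full(n_traj, -1.0 / (2.0 * n0**2 * omega))  # N = E/omega of level n0

P = []
for t in range(5000):
    N = N + k * np.sin(phi)
    bound = N < 0.0                               # N >= 0 means ionization
    N, phi = N[bound], phi[bound]
    phi = phi + 2.0 * np.pi * omega * (-2.0 * omega * N)**-1.5
    P.append(N.size / n_traj)
print(P[99], P[999], P[-1])    # survival probability at t = 100, 1000, 5000
```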
The quantum dynamics of the model (2) is described by the quantum Kepler map for the wave function $`\psi (\varphi )`$: $$\overline{\psi }=\mathrm{exp}(-iH_0(\widehat{N}))\widehat{P}\mathrm{exp}(-ik\mathrm{cos}\widehat{\varphi })\psi ,$$ (3) where $`H_0(\widehat{N})=2\pi /\sqrt{-2\omega \widehat{N}}`$, $`\widehat{N}=-id/d\varphi `$, $`\widehat{\varphi }=\varphi `$ ($`-\mathrm{\infty }<\varphi <+\mathrm{\infty }`$), and the operator $`\widehat{P}`$ projects probability over the states with negative energy ($`N<0`$) . We introduce an absorption border for levels with $`n\ge n_c`$, which for the Kepler map corresponds to $`N\ge N_c=-1/(2n_c^2\omega )`$ . Such a border occurs in real laboratory experiments, for example as a consequence of the unavoidable static electric field experienced by the Rydberg atoms during their interaction with the microwave field. The absorption border $`n_c`$ can be varied in a controlled way via a static electric field $`ϵ_s`$, the static field ionization border being $`ϵ_sn_c^4\simeq 0.13`$. The results of quantum simulations for a situation similar to the experimental one (Fig. 2 (b) in Ref. ) are shown in Fig. 1. The Kepler map description allows us to study the quantum dynamics up to very long times ($`t=10^8`$; here and below time is given in microwave periods). In the case $`n_0=23`$ the quantum data for the survival probability $`P(t)`$ obtained from the quantum Kepler map and the $`1d`$ hydrogen atom model agree with each other (see the inset of Fig. 1) and with the numerical computations of . However all these data are strongly different from the classical probability decay shown in Fig. 1, which displays a slope $`\alpha \simeq 2`$. The reason for this disagreement is that $`n_0=23`$ is not in the semiclassical regime. Our data for the Husimi distribution, obtained from the Wigner function by smoothing over the size $`\hbar `$ , show that a significant part of the probability is trapped inside the stable island at $`n\simeq 20`$ ($`\omega n^3\simeq 1`$). For this reason the probability decays slowly during a long time $`t\simeq 10^5`$ after which it drops faster. If $`n_0`$ is increased significantly, the semiclassical regime is reached and the quantum probability decay becomes close to the classical one up to the time scale $`t_H\simeq 10^4`$. Our data show that $`t_H`$ is proportional to $`n_0`$ (at fixed $`ϵ_0`$, $`\omega _0`$), in agreement with previous estimates of Ref. , according to which $`t_H\propto 1/\hbar _{\mathrm{eff}}`$. After this time the quantum $`1/t`$ decay is clearly observed, in agreement with the results of . In Fig. 2 we show a more realistic case in which, initially, classical and quantum probabilities decay in a very similar way and where only after a time $`t_H\simeq 5\times 10^2`$ the quantum survival probability starts to decay more slowly ($`P(t)\propto 1/t`$) than the classical one, which decays as $`1/t^\alpha `$ with $`\alpha \simeq 2.15`$. This case corresponds to $`n_0=60`$ and can be observed in experiments similar to those performed in . Again the quantum Kepler map gives a qualitatively correct description of the ionization process up to very long interaction times. The comparison of quantum simulations for the $`1d`$ hydrogen atom model and the $`3d`$ dynamics is shown in the inset of Fig. 2. It demonstrates that both dynamics give very close results, confirming that the essential physics is captured by the $`1d`$ model.
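The quantum map (3) is also straightforward to iterate by switching between the $`\varphi `$ and $`N`$ representations with a fast Fourier transform. The sketch below is ours, with illustrative parameters; the overall phase conventions follow Eq. (3) as written above (they are a choice), and absorbed probability is simply removed at each step.

```python
import numpy as np

n0, omega0, eps0, n_c = 60, 2.0, 0.05, 64
omega = omega0 / n0**3
k = 2.6 * (eps0 / n0**4) * omega**(-5.0 / 3.0)

M = 1024                                   # phase grid, phi in [0, 2*pi)
phi = 2 * np.pi * np.arange(M) / M
N = np.fft.fftfreq(M, d=1.0 / M)           # integer photon numbers
keep = N < -1.0 / (2 * n_c**2 * omega)     # bound AND below the absorber
H0 = np.where(keep, 2 * np.pi / np.sqrt(np.clip(-2 * omega * N, 1e-30, None)), 0.0)

a = np.zeros(M, complex)                   # start in level n0: N_0 = -n0/(2*omega0)
a[np.argmin(np.abs(N + n0 / (2 * omega0)))] = 1.0
psi = np.fft.ifft(a) * M

for t in range(1, 10001):
    psi = psi * np.exp(-1j * k * np.cos(phi))         # microwave kick
    a = np.where(keep, np.fft.fft(psi) / M, 0.0)      # projector + absorber
    a = a * np.exp(-1j * H0)                          # Kepler rotation
    psi = np.fft.ifft(a) * M
    if t in (100, 1000, 10000):
        print(t, float(np.sum(np.abs(a)**2)))         # quantum survival probability
```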
We put the absorption border near the initial state ($`n_c=64`$) in order to have $`\rho _c=\ell _\varphi /\mathrm{\Delta }N_c\simeq 3.5>1`$, where $`\ell _\varphi =3.3ϵ_0^2\omega _0^{-10/3}n_0^2`$ is the localization length in number of photons and $`\mathrm{\Delta }N_c=(n_0/2\omega _0)(1-n_0^2/n_c^2)`$ is the number of photons required to reach the absorption border. In this way the probability can go out very easily and the $`1/t`$ probability decay is observed after a short transient time of the order of $`20`$ microwave periods. On the contrary, when $`\rho _c<1`$, as in the case of Fig. 1 for $`n_0=23`$ ($`\rho _c\simeq 0.3`$), strong fluctuations around the $`1/t`$ decay take place. This is analogous to the huge (log-normally distributed) conductance fluctuations in a disordered solid with localization length smaller than the system size . In order to confirm that the algebraic probability decay is related to the sticking of classical trajectories and of quantum probability near the integrable islands in the phase space, we show in Fig. 3 the time evolution of the survival probability distribution in the phase space of action-angle variables $`(n,\theta )`$ for the $`1d`$ model. In the classical case $`3\times 10^6`$ orbits were initially homogeneously distributed in the angle $`\theta `$ on the line $`n=n_0=60`$, corresponding to the initial quantum state with principal quantum number $`n_0=60`$. After $`50`$ microwave periods, the classical distribution of non-ionized orbits shows a fractal structure which surrounds the stability islands (Fig. 3 top left). At larger times this distribution approaches more and more closely the boundary critical invariant curves (Fig. 3 bottom left). One of them confines the motion in the region with $`n>n_b\simeq n_0(ϵ_c/ϵ_0)^{1/5}\simeq 41`$ where $`n_b`$ determines the classical chaos border for given $`ϵ_0`$. Other invariant curves mark the critical boundaries around internal stability islands (for example at $`n\simeq 55`$, corresponding to $`\omega n^3\simeq 2`$). In the quantum case the value of $`\hbar _{\mathrm{eff}}`$ is not sufficiently small to resolve the fractal structure at small scales. However, the Husimi function shows similarities with the classical probability distribution at $`t=50`$ (Fig. 3 top right). At longer times, the diffusion towards the boundary at $`n_b`$ is slowed down due to localization effects and penetration of the quantum probability inside the classical integrable islands. At $`t=10^4`$ (Fig. 3 bottom right) the quantum probability is concentrated in a layer near $`n_b`$. Due to localization effects, the Husimi function does not change significantly for a very long interaction time ($`10^3<t<3\times 10^4`$). Eventually the probability starts to penetrate very slowly inside the main island at $`n\simeq n_b`$. Therefore tunneling and localization effects are responsible for the slow $`1/t`$ decay of the quantum survival probability seen in Fig. 2. The fractal structure of the classical distribution is washed out at scales smaller than the minimal quantum cell $`\hbar _{\mathrm{eff}}`$. Therefore a better resolution can be obtained by increasing the principal quantum number $`n_0`$, at fixed $`ϵ_0`$, $`\omega _0`$, and $`n_c/n_0`$. The Husimi function clearly reflects the underlying fractal structure at very high principal quantum numbers $`n_0=150`$ (Fig. 4 left) and $`n_0=1200`$ (Fig. 4 right). Similar quantum fractals have been found in the kicked rotator model with absorbing boundary conditions .
Notice that the probability decay $`P(t)`$ is related to the decay of correlations via $`C(t)\sim tP(t)`$ . In the case of $`\alpha =1`$ this implies that correlations do not decay. The Fourier transform of $`C(t)`$ gives the spectral density $`S(\omega )`$ of the effective noise produced by the dynamics: $`S(\omega )=\int C(t)\mathrm{exp}(i\omega t)dt\sim 1/\omega `$. This shows that the spectral noise associated with the quantum Poincaré recurrences with $`\alpha =1`$ scales like $`S(\omega )\sim 1/\omega `$. A similar behavior of noise has been observed in many scientific disciplines , for example in the resistance fluctuations of different solid state devices . This phenomenon is known as $`1/f`$ noise and usually extends over several orders of magnitude in frequency, indicating a broad distribution of time scales in the system. In the case of quantum Poincaré recurrences this property stems from the exponentially low escape rate from some regions of the phase space. In summary, on the basis of our previous investigations and of the numerical studies presented in this paper we conclude that the survival probability for Rydberg atoms in a microwave field decays, up to the time scale $`t_H\propto n_0`$, in a way similar to the classical probability. For $`t>t_H`$ the quantum probability starts to decay more slowly than the classical one, with the exponent of the algebraic decay $`\alpha =1`$. We have given parameter values which should allow one to observe quantum Poincaré recurrences in microwave experiments with Rydberg atoms. This research is done in the framework of the EC program RTN1-1999-00400.
no-problem/9911/quant-ph9911071.html
ar5iv
text
# Analogy between optimal spin estimation and interferometry ## I Introduction Recently Peres and Wootters formulated a conjecture: Coherent measurement performed on the collective system is more efficient than sequential measurements of individual particles. This idea has been further developed by Massar and Popescu . They formulated this conjecture as a proposal for a “quantum game.” The player has $`N`$ identical copies of spin-1/2 particles prepared in an unknown pure state $`|\psi \rangle `$, and he is allowed to do any measurement. Possible results of the measurement will be denoted by an index $`r.`$ The aim of the measurement is to determine the original state of the system. Therefore, in the next step the measured data should be attached to a state $`|\psi _r\rangle `$, which represents the player’s estimation of an unknown true state. In the last stage of the game the true state is compared with its estimate and their coincidence is quantified by the so-called fidelity: $`|\langle \psi |\psi _r\rangle |^2.`$ The runs are repeated many times with varying true state $`|\psi \rangle .`$ The final score of the quantum game is given by the averaged fidelity $$S=\{|\langle \psi |\psi _r\rangle |^2\}_{av},$$ (1) where averaging is carried out over the measured data and all the true states, $`\{\}_{av}=\{r,|\psi \rangle \}.`$ Massar and Popescu proved that the maximum score is $`(N+1)/(N+2).`$ This value cannot be reached by measurements acting on isolated particles. Derka, Bužek and Ekert showed that this score can be obtained by a coherent measurement described by a finite-dimensional probability operator valued measure. The aim of this contribution is to address the relation between the recently optimized measurement, repeated measurements on the Stern-Gerlach apparatus, and interferometry. Particularly, it will be demonstrated that the above mentioned score represents the ultimate limitation for sequential measurements performed on each particle separately, for any quantum state. This corresponds to the standard resolution $`1/\sqrt{N}`$ currently reached in interferometry. However, this regime does not represent the ultimate strategy. In analogy with quantum interferometry, the performance may achieve a resolution up to $`1/N`$ provided that the spin orientation is properly coded into a quantum state of $`N`$ particles. In the particular case addressed in this contribution the optimal strategy corresponds to sequentially performed coherent measurements. This explicit example demonstrates the complexity of optimal measurement, which can combine advantages of both the coherent and sequential measurements with groups of particles. ## II Adaptive Stern-Gerlach spin detection Assume a standard measurement on an ideal Stern-Gerlach apparatus. A sample particle is prepared in the pure spin state $$|𝐧\rangle \langle 𝐧|=\frac{1}{2}[1+(𝐧\sigma )],$$ (2) where $`𝐧`$ represents the unity vector on the Poincaré sphere, $`(𝐧\sigma )`$ being its scalar product with the vector of Pauli matrices. The impinging particles will be deviated up or down. In the long run of repetitions the relative frequencies will approach the prediction of quantum theory. Representing the setting of the Stern-Gerlach apparatus by the unity vector $`𝐦`$, the probabilities of detection “up” ($`+`$) and “down” ($`-`$) read $$p_\pm =\frac{1}{2}[1\pm (\mathrm{𝐦𝐧})].$$ (3) What is the best possible but still feasible result, which would predict the spin orientation with the highest accuracy? The most accurate state estimation may be done if all tested particles were registered in the same output channel of the SG apparatus.
In such a case the best estimation of the spin corresponds to the orientation of the SG apparatus. Of course, it does not mean that the estimated direction will fit the spin orientation exactly. Deviations are distributed according to the posterior distribution conditioned by the detected data. This can be handled analogously to the phase estimation . The deviations between the estimated and the true directions are given by the Bayes theorem as the posterior probability density $$P(𝐧)=\frac{N+1}{4\pi }\mathrm{cos}^{2N}(\theta /2)$$ (4) over the Poincaré sphere. The vector $`𝐧`$ is parametrized by the Euler angles $`\theta ,\varphi `$ in the coordinate system where the direction of the $`z`$-axis always coincides with the estimated direction (i.e. with the orientation of the SG apparatus $`𝐦`$). Notice that this is in accordance with the rules of the quantum game as defined in . The result of the measurement is always a specific direction, namely the setting of the SG apparatus. In this case the score reads $$S=\int P(𝐧)\mathrm{cos}^2(\theta /2)d^2\mathrm{\Omega }_𝐧=\frac{N+1}{N+2}.$$ (5) This is the upper bound of sequential measurements on single particles with felicitously rotated SG apparatus. This is obviously an ultimate limitation since such results are possible and no measurement performed sequentially on single particles can yield a better spin prediction. However, in contrast to the coherent measurements, this resolution cannot really be achieved, but it may be approximated with an arbitrary accuracy for $`N`$ large enough. The possible realization may be suggested as an adaptive scheme, where the orientation of the SG device depends on the previous results. The aim of the scheme is to find such an orientation where almost all the particles are counted at the same port. This is obviously always a little worse than the ideal case, since some portion of the counted particles must always be used for corrections of the SG-device orientation and is therefore “lost”. Differences between the ideal and realistic schemes seem to be negligible, as demonstrated in Fig. 1. An adaptive measurement is simulated here. The procedure starts with projecting a single particle into three orthogonal, randomly chosen directions. The choice of the orientation of the subsequent SG measurement represents its own interesting optimization problem. Obviously, for getting the best score, it seems to be advantageous to project the particles into the most probable orientation. However, this will not reveal new corrections to the orientation of the SG apparatus well. On the other hand, the particles may be counted with the SG device oriented perpendicularly to the most probable orientation. In this case the spreading of the data will be obviously broader than in the former case, but such a measurement will be more sensitive to the deviation from the true direction. In the simulation scheme the latter approach was adopted, and an algorithm for the synthesis of incompatible spin projections was used .
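Equation (5) is easy to confirm by direct quadrature (our check):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 20001)
for N in (1, 5, 20):
    post = (N + 1) / (4 * np.pi) * np.cos(theta / 2)**(2 * N)   # eq. (4)
    dOmega = 2 * np.pi * np.sin(theta)        # azimuthal angle integrated out
    S = np.trapz(post * np.cos(theta / 2)**2 * dOmega, theta)
    print(N, S, (N + 1) / (N + 2))            # numerical vs. exact score
```

The two printed columns agree to quadrature accuracy for every $`N`$.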
## III Analogy between spin estimation and interferometry There is a clear analogy between spin measurement and phase interferometry. As the proper resolution measure, the dispersion of phase may be defined as $$D^2=1-\{\mathrm{cos}\theta \}_{av}^2.$$ (6) Usually, the averaging is done over the data only. Provided that the width of the phase distribution is small, the dispersion tends to the standard variance of phase. Using the definition of the score $$S=\frac{1}{2}+\frac{1}{2}\{\mathrm{cos}\theta \}_{av},$$ (7) both the measures fulfil the relation $$S(1-S)=\frac{D^2}{4}.$$ (8) Consequently, the value $`S=(N+1)/(N+2)`$ is nothing else than a dispersion (phase variance) $`D\simeq 2/\sqrt{N}.`$ This is the so-called standard limit of phase resolution . Any standard measurement is scaled in this way and all the measurements differ by some multiplicative factors only. Loosely speaking, all the classical strategies are essentially equivalent from this viewpoint. This is why it makes perhaps little sense to optimize the adaptive scheme further. For example, provided that one uses the most straightforward method of spin estimation based on the measurement of the $`x,y,`$ and $`z`$ components of the spin on the Poincaré sphere, always with $`N/3`$ particles, the resulting score may be evaluated asymptotically as $`S\simeq 1-114/(100N).`$ The difference between optimal coherent and realistic sequential measurements is sometimes assumed to be significant for a small number of particles. However, in this case all the predictions are rather uncertain. In the case of phase detection it makes little sense to compare two phase distributions whose widths are comparable to the $`2\pi `$ window. Then the phase knowledge is almost equally bad. Conventional resolution measures possess a good meaning only when the effective width is substantially less than the width of the interval. A more profound analogy between the spin measurement and interferometry follows from the common nature of the SU(2) symmetry . As is well known, a resolution up to the order $`1/N`$ may be achieved in interferometry. In this case, however, it is recommended to modify slightly the proposed quantum game. Suppose that somebody wants to communicate an orientation of the axis in 3D space. Just an axis, not the direction of an “arrow”. For this purpose $`N`$ spin-1/2 particles are available and any measurement on these particles is allowed. Instead of $`N`$ identical copies, one may consider a general quantum state spanned by $`N`$ particles, in which the unknown orientation is coded. As the result of the measurement the unknown axis should be found. The score is defined in the same manner as before. The questions are: “How should information on the axis orientation be encoded into a quantum state of the particles? What measurement should be done in order to obtain the best score?” ## IV Superresolution in interferometry and spin estimation Let us review briefly the description of SU(2) interferometers. The transformation of an internal state is given by a unitary transformation of an input state $`\widehat{U}(\theta ,\phi )=\mathrm{exp}\left[-i\theta \widehat{J}_2(\phi )\right]`$ (9) $`=\mathrm{exp}(-i\phi \widehat{J}_3)\mathrm{exp}(-i\theta \widehat{J}_2)\mathrm{exp}(i\phi \widehat{J}_3),`$ (10) where $`\widehat{J}_1,\widehat{J}_2,\widehat{J}_3`$ correspond to the generators of the SU(2) group . The transformation is given by the unity vector $`𝐧=(\mathrm{cos}\phi \mathrm{sin}\theta ,\mathrm{sin}\phi \mathrm{sin}\theta ,\mathrm{cos}\theta ).`$ An input state $`|\mathrm{in}\rangle `$ may be any $`N`$ particle state.
The measurement may be represented by projectors onto an output state $`|\mathrm{out}\rangle `$. The posterior phase distribution corresponds to the scattering amplitude

$$P(\theta ,\phi )\propto |\langle \mathrm{in}|\widehat{U}(\theta ,\phi )|\mathrm{out}\rangle |^2.$$ (11)

This scheme encompasses the Massar-Popescu quantum game as a special case, for the choice of the input state $`|\mathrm{in}\rangle =|N,N\rangle `$. The score depends on the accuracy of the detection of the $`\theta `$ variable. As is well known, the highest accuracy is achieved when a phase shift near zero is detected. This corresponds to the detection of the same quantum state at the output, $`|\mathrm{out}\rangle =|N,N\rangle `$. The particles feeding a single input port of an interferometer appear again at the corresponding output port. Obviously, the interpretation does not change provided that the particles enter and leave the interferometer sequentially. This is a consequence of the famous Dirac statement that “each particle interferes with itself.” The ultimate score $`(N+1)/(N+2)`$ is relevant precisely to this regime, and the considerations related to adaptive SG detection obviously apply to this situation as well. In the following, for a given input state and a given setting of the parameters of the interferometer, only the most favourable but still feasible output will be assumed. Such felicitous results provide an upper bound conditioned by realistic measurements: no other measurement of the given type can provide a better prediction. As is well known in interferometry , better resolution may be obtained provided that both input ports of the interferometer are fed simultaneously with an equal number $`r`$ of particles. The phase-shift prediction is sharpest when the same state appears at the output. This corresponds to the input and output states $`|\mathrm{in}\rangle =|2r,0\rangle `$ and $`|\mathrm{out}\rangle =|2r,0\rangle `$. The scattering amplitude then depends only on the variable $`\theta `$ as

$$P(\theta )\propto \left[P_r^0(\mathrm{cos}\theta )\right]^2.$$ (12)

The Legendre function $`P_r^0(\mathrm{cos}\theta )`$ may be approximated by the Bessel function $`J_0(r\theta )`$ for large index $`r`$. Consequently, the probability distribution is not integrable for $`r\to \infty `$ and must therefore be treated more carefully . In particular, it is not the best strategy to use all the energy of the $`N`$ states in a single coherent measurement. The particles should be divided into several groups and the measurement should be repeated sequentially several times. The accumulation of information is expressed mathematically by the multiplication of the corresponding distribution functions (12), which narrows the posterior probability distribution. The optimal regime for the $`\theta `$ detection has been roughly estimated in Ref. ; the optimal number of repetitions of the coherent measurement was expected to be approximately $`n\approx 4`$. Let us interpret this interferometric measurement in terms of spin estimation. The significant difference between spin-1/2 particles and photons is connected with their fermionic and bosonic nature. The input wave function constructed from fermions must be “artificially” symmetrized with respect to all the distinguishable particles. The input state $`|N,0\rangle `$ is a superposition involving all the combinations of $`N/2`$ particles with spin up and $`N/2`$ particles with spin down. This $`N`$-particle state is entangled. Let us analyse in detail the score for such states.
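As groundwork for that analysis, the Bessel approximation quoted above can be inspected numerically. A minimal sketch, using Hilb's asymptotic form with argument $`(r+1/2)\theta `$ (the standard refinement of $`J_0(r\theta )`$) and omitting the slowly varying prefactor:

```python
import numpy as np
from scipy.special import eval_legendre, j0

# Compare the posterior shape [P_r(cos theta)]^2 of Eq. (12) with its
# large-r Bessel approximation J_0((r+1/2) theta)^2.
r = 50
for th in (0.05, 0.1, 0.2, 0.3, 0.5):
    leg = eval_legendre(r, np.cos(th)) ** 2
    bes = j0((r + 0.5) * th) ** 2
    print(f"theta = {th:4.2f}   Legendre^2 = {leg:8.5f}   Bessel^2 = {bes:8.5f}")
```

The slowly decaying $`1/(r\theta )`$ envelope of $`J_0^2`$ is exactly what makes the single-shot distribution non-integrable for $`r\to \infty `$ and motivates the repeated measurements discussed above.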
As before, the most favourable but still physically feasible situation will be interpreted as an ultimate bound on the state detection. Due to the symmetry of the problem, the best resolution may be inferred provided that the state $`|N,0\rangle `$ feeds the SG apparatus. The result of such a measurement is deterministic when the apparatus is oriented along the quantization axis of the state: half of the particles will be detected with spin “up” and half with spin “down”. Assume now that this has really happened. What can be inferred from this event? Obviously, it might also occur when the setting of the SG apparatus differs from the true spin orientation by an angle $`\theta `$. The probability that this happens is proportional to the scattering amplitude (12). The probability is sketched in Fig. 2; as can be seen, it shows oscillations. The domain is restricted to the values $`(0,\pi /2)`$ since the method finds only the axis, not its vector orientation. The score is plotted in Fig. 3. As shown, it does not improve with an increasing number of particles. This is caused by the heavy tails of the posterior distribution (Fig. 2), whose central peak does not contain the dominant part of the probability. Nevertheless, there is a way to suppress this undesired behaviour. Provided that the measurement is repeated, the corresponding posterior distribution is given by the product of the partial results. Again, the most favourable situation for spin estimation is characterized by the repetition of the same “optimal” result. This is again feasible, provided that the true spin orientation and the projection do not differ significantly. The resulting score as a function of the total number of particles is sketched in Fig. 3. Notice that the SG measurement must then be done with an $`N/2`$-particle state, since the measurement is repeated twice. This procedure may be generalized further to an arbitrary number of repetitions. As shown in Fig. 3, the score increases up to 5 repetitions and then starts to decrease. This seems to be in good agreement with the rough asymptotical analysis presented in Ref. , where the optimal repetition rate was found to be 4. This result provides an ultimate bound in the following sense: if the given input state is measured with the help of SG projections, the score cannot be better than this ultimate value. The question whether this value may actually be attained is not answered here. Intuitively, for a large number of particles it might be well approximated by an adaptive scheme, as in the previous case of standard resolution. However, the adaptive scheme cannot be applied to the first measurement. The results may therefore be overestimated in this case. This explains why the analysis applied here yields the score $`S\approx 0.875`$ for the antiparallel pair $`|\uparrow ,\downarrow \rangle `$, whereas the result of Gisin and Popescu gives the value $`0.789`$. In the asymptotic limit the dispersion is approximately given by the relation

$$D\approx \frac{\sqrt{4n}}{N}=\frac{2\sqrt{5}}{N}.$$ (13)

The optimal score reads asymptotically

$$S\approx 1-\frac{5}{N^2}.$$ (14)

One may ask whether the required states, containing half of the particles with spin “up” and half with spin “down” (along an arbitrary direction), can be prepared from a set of particles with all spins “up”, i.e. whether it is possible to “turn” an arbitrary quantum state into a state orthogonal to it. The general answer is no: the linearity of quantum mechanics does not allow this.
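Before making that linearity argument explicit, the repetition trade-off underlying Fig. 3 and Eqs. (13)-(14) can be probed with a rough numerical scan. This is only a sketch: it assumes, as in the text, that each of the $`n`$ runs yields the same most favourable outcome, and the precise optimum depends on $`N`$ and on such idealizations.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

def score(N, n):
    # n repeated coherent runs, each on an |N/n, 0> state, i.e. r = N/(2n)
    # in Eq. (12) (floored when 2n does not divide N); the posterior on
    # (0, pi/2) is the product of the n identical partial posteriors.
    r = max(N // (2 * n), 1)
    post = lambda th: eval_legendre(r, np.cos(th)) ** (2 * n) * np.sin(th)
    num, _ = quad(lambda th: post(th) * np.cos(th / 2) ** 2, 0, np.pi / 2, limit=400)
    den, _ = quad(post, 0, np.pi / 2, limit=400)
    return num / den

N = 120
for n in (1, 2, 3, 4, 5, 6, 8, 10):
    print(f"n = {n:2d}   S = {score(N, n):.6f}")  # maximum expected near n ~ 4-5
```

In this rough scan the maximum indeed appears at a small number of repetitions, of the order suggested by the asymptotic analysis. We now return to the question of preparing the required states.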
In particular, for spin-1/2 particles, if one were able to implement $`|\uparrow \rangle \to |\downarrow \rangle `$ and $`|\downarrow \rangle \to e^{i\varphi }|\uparrow \rangle `$, then for an arbitrary state it would follow from the linearity of quantum evolution that

$$|\mathrm{\Psi }\rangle =\alpha |\uparrow \rangle +\beta |\downarrow \rangle \to \alpha |\downarrow \rangle +e^{i\varphi }\beta |\uparrow \rangle .$$

The resulting state is orthogonal to $`|\mathrm{\Psi }\rangle `$ if and only if $`\mathrm{arg}(\alpha )+\pi =\mathrm{arg}(\beta )+\varphi `$. Thus in special situations (corresponding to “real” subspaces) the mentioned transformation is possible (e.g. when the spin projections lie in a given plane, and similarly for linear polarizations of a photon or for interferometry with fixed splitting ratios and only the phase difference varied). But an arbitrary spin cannot be turned into the perfectly opposite one.

## V Conclusions

A profound analogy between interferometry and spin estimation has been addressed. The performance of both schemes discussed here has been conditioned only by realistic measurements. As demonstrated, the recently reported optimal spin estimation corresponds to the standard quantum limit, characterized by the resolution $`1/\sqrt{N}`$. It may be achieved by coherent measurements and well approximated by sequential ones. Beyond this regime, quantum theory admits a resolution up to $`1/N`$. Then, however, quantum interferences must be employed. In the case of a thought experiment with an ensemble of spin-1/2 particles this requires an entangled input state of $`N`$ particles. Besides this, the optimal SG detection must combine the advantages of both coherent and sequential measurements. This example illustrates the complexity of optimal treatment in estimation problems.

## Acknowledgments

Discussions with V. Bužek and D. Terno are gratefully acknowledged. This work has been supported by the grant VS 96028 and by the research project “Wave and particle optics” of the Czech Ministry of Education and by the grant No. 19982003012 of the Czech National Security Authority.
# Extreme scattering of pulsars

## 1. Introduction

Extreme Scattering Events (ESEs) were discovered as a result of long-term monitoring of the radio fluxes of compact radio quasars (Fiedler et al 1987). The events themselves are characterised by dramatic, frequency-dependent flux variations occurring within a period of a month or two. It is broadly agreed (Fiedler et al 1987, 1994; Romani, Blandford & Cordes 1987; Clegg, Chernoff & Cordes 1988) that ESEs are due to ionised gas drifting across the line-of-sight, and that the transverse dimensions of the ionised region are only a few AU. If this scale is also representative of the longitudinal dimension of the ionised gas – i.e. it takes the form of a quasi-spherical blob – then the mere existence of such regions presents us with a severe problem: the necessary electron density is $`10^3\text{ cm}^{-3}`$ and at $`10^4`$ K this implies pressures $`10^3`$ times larger than is typical of the Interstellar Medium (ISM). How can such regions exist? One possible solution to this conundrum was advanced by Walker & Wardle (1998): a neutral gas cloud of radius $`\sim `$ AU would develop a photo-ionised wind, as a consequence of the ambient Galactic radiation field, with about the right electron density at the ionisation front. Furthermore, the light-curves for such a lens offer a good representation of the data for the best known example of an ESE. However, if this interpretation is correct, then it follows that the neutral gas clouds must comprise a large fraction of the mass of the Galaxy, i.e. they form a major component of the Galactic dark matter. The success of this picture as a model for ESEs provides a strong incentive for studying ESEs in detail, and with renewed vigour. This paper offers some ideas on how this can best be achieved, arguing that pulsars are by far the best targets for such studies.

## 2. Observables

In designing experiments to study lenses, it is helpful to first set out what quantities we might be able to observe during a lensing event. There are, in principle, six measurable quantities of interest for each image of any small-diameter source: the size, shape and orientation of the image; the location of the image; and the delay of the image. If $`n`$ is the refractive index, and $`N=\int ds\,(1-n)`$ is its line-of-sight integral, then these measurables are determined, respectively, by $`\partial _i\partial _jN`$ (three independent quantities; $`i,j`$ denote the transverse coordinates), $`\partial _iN`$ (two quantities), and a combination of $`N`$ and $`(\partial _iN)^2`$. The image size (solid angle) is perhaps the most basic measurable as it determines the received flux; the image magnification is just the ratio of apparent source sizes with/without the lens present. Because the lenses are only milli-arcseconds in size, measurement of image location, shape and orientation is only possible with VLBI, if at all. Measurement of the time delay requires a temporal variation in the intrinsic brightness of the source, and pulsars are eminently suitable. An important point about the foregoing is that single dish (or compact interferometer) observations, which cannot hope to resolve the individual images on the sky, are enormously more informative if pulsars are targeted rather than quasars. The information gain is not simply a factor of two, but rather a factor of two for every image present, and it may be expected that three or five images (always an odd number) could be present during an ESE, so a factor of six or ten is possible in principle.
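To make the image-counting concrete, a toy one-dimensional refractive screen suffices; the sketch below (arbitrary units, Gaussian column density, all parameters illustrative rather than fitted to any event) shows how an odd number of images appears, with image locations set by $`\partial _iN`$ and magnifications by $`\partial _i\partial _jN`$:

```python
import numpy as np

# Toy 1-D refractive screen: column(x) is the (scaled) electron column of a
# Gaussian plasma lens; "a" lumps the distances and the lambda^2 dependence
# into a single strength parameter. Units and signs are arbitrary.
def column(x, sigma=1.0):
    return np.exp(-x**2 / (2.0 * sigma**2))

def images(x_src, a=3.0):
    x = np.linspace(-6.0, 6.0, 200001)
    dN = np.gradient(column(x), x)      # deflection ~ dN/dx  (image location)
    d2N = np.gradient(dN, x)            # curvature ~ d2N/dx2 (image size/flux)
    src = x - a * dN                    # thin-screen lens mapping
    hits = np.where(np.diff(np.sign(src - x_src)) != 0)[0]
    mags = 1.0 / np.abs(1.0 - a * d2N[hits])
    return x[hits], mags

for xs in (0.0, 2.9, 4.0):
    pos, mag = images(xs)
    print(f"source at {xs}: {len(pos)} image(s), magnifications {np.round(mag, 2)}")
```

For most source positions there is a single image; over a narrow range of positions three images appear, the extra pair being created and destroyed at caustics where the magnification formally diverges.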
The reason for this gain is that the presence of a delay allows the images to be resolved in time, whereas without the time dimension one only knows the total magnification (i.e. summed over all images). In practice it might not prove possible to separate all of the images, but the point remains that pulsars are much more informative than quasars when it comes to single-dish observations. In turn this means that we can distinguish between ESE models far more effectively with pulsars than with quasars.

## 3. Why pulsars are so useful

The considerations just given provide, in themselves, good reason to prefer pulsars as targets for studying ESEs. There are two other considerations which are important: pulsars are extremely compact, and they are steep-spectrum sources. Their very small angular size has two benefits. First, the coherence patch of the radiation field is correspondingly large. This means that when multiple images are present, one of their manifestations is periodicity in the dynamic spectrum, a consequence of interference between two ray paths. This phenomenon is of great utility: a relative delay between images of order a microsecond, say, would be straightforward to observe as a MHz periodicity (even for glitching pulsars), but would be a tall order to detect in pulse time-of-arrival residuals (even for a millisecond pulsar; cf. Cognard et al 1993). Another big advantage of utilising pulsar dynamic spectra is that one can study very weak (low magnification) secondary images, because the power resident in periodic interference fringes is the cross-power between two images, and is proportional to the geometric mean of their magnifications. This point has been beautifully illustrated by Rickett, Lyne & Gupta’s (1997) study of the dynamic spectra of B0834+06, in which secondary images of magnification $`\sim 0.1`$% were revealed. The second benefit of small angular size is that one retains maximum sensitivity to lenses of small angular diameter; if a lens were apparently much smaller than the background source, then it would have little detectable effect. Compact radio quasars typically have angular sizes of order a milli-arcsecond; this is comparable to the size of the lenses (at a distance of a few kpc), so the Fiedler et al (1994) data set is unlikely to have provided a complete census, and could conceivably have missed a large fraction of the lens population! In the case of pulsars, the intrinsic angular diameter is so small that the observed size is always limited by interstellar scattering, i.e. diffraction caused by density inhomogeneities in the ionised ISM (e.g. Rickett 1990). This is as good as we can hope to achieve. The fact that pulsars are steep-spectrum sources is also beneficial in that they are easily studied at long wavelengths, $`\lambda `$, where rays are refracted through large angles. Indeed, because $`N\propto \lambda ^2`$, the cross-section for multiple imaging increases roughly as $`\lambda ^4`$ at long wavelengths. Interpretation of this fact requires care, because multiple imaging events are usually recognised as such only if they split the images by more than the angular size of the image, i.e. the scattering disk in the case of pulsars. (In particular this is true of the spectral periodicities mentioned earlier.) As the scattering disk size also scales roughly as $`\lambda ^2`$, the apparent incidence of multiple imaging may be wavelength-independent.
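The fringe arithmetic is worth spelling out; a minimal sketch (illustrative numbers only) of the two-image interference signature in a dynamic spectrum:

```python
import numpy as np

# Two-image interference: a relative delay tau produces fringes of period
# 1/tau in frequency, and the fringe (cross) power scales as sqrt(mu1*mu2),
# so very faint secondary images remain detectable.
tau = 1e-6                                   # 1 microsecond relative delay
print("fringe period:", 1.0 / tau / 1e6, "MHz")

nu = np.linspace(1.4e9, 1.4e9 + 5e6, 4096)   # 5 MHz band near 1.4 GHz
mu1, mu2 = 1.0, 1e-3                         # image magnifications
spectrum = mu1 + mu2 + 2 * np.sqrt(mu1 * mu2) * np.cos(2 * np.pi * nu * tau)
contrast = (spectrum.max() - spectrum.min()) / (spectrum.max() + spectrum.min())
print("fringe contrast for a 0.1% secondary image:", round(contrast, 4))
```

A 1 μs delay thus yields 1 MHz fringes, and even a 0.1% secondary image produces a few-percent modulation, which is why pulsar dynamic spectra are so sensitive to weak multiple imaging.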
One point to bear in mind is that refraction through larger angles means that power is spread over larger physical areas in the plane of the observer, so the observed flux is smaller. Thus large lensing cross-sections go hand-in-hand with secondary images which are, typically, weak.

## 4. Finding events

One of the main barriers to studying ESEs is that they are rare events: Fiedler et al (1994) estimated that the lenses cover only $`5\times 10^{-3}`$ of the (extragalactic) sky, requiring many source-years of monitoring for every event. The burden of such a monitoring program can be lightened considerably if it is acceptable to observe only infrequently, rather than with the daily sampling of Fiedler et al (1994); but if one relied upon flux monitoring, then the signature of an ESE might well be missed if sampling were infrequent. Here, again, pulsar dynamic spectra prove useful: if periodicities are present then we know, without reference to any other data, that multiple imaging is occurring; this phenomenon (with weak secondary images) is expected, in essentially all lens models, as a pre- and post-cursor of ESEs. (ESEs can be thought of as multiple imaging events in which the primary image magnification reaches values far from unity.) The following strategy thus suggests itself. Monitor pulsar dynamic spectra for the presence of periodic fringes; the sampling interval need only be a month or so, but high spectral resolution is important (1 kHz, say), as the image delays could be large. If no periodicity is detected in the spectrum of a particular pulsar, at a given epoch, then that pulsar would not be expected to undergo an ESE within the next month. On the other hand, if periodicity is seen, then multiple images are present at that epoch, and there is a chance that an ESE will take place within the next month. It is then appropriate to activate a program of detailed multi-frequency observations, including daily acquisition of dynamic spectra, and VLBI. This strategy allows one to monitor a large sample of pulsars, achieving good sampling on the ESEs, with only a modest commitment of telescope time. For example, a total sample of 100 pulsars could be monitored monthly; a small fraction of these will display multiple imaging at any one time, so only a handful of pulsars need be observed on any given day. Not all pulsars are equally appropriate for this type of program: nearby pulsars offer little chance of the signal encountering a lens between source and observer, and strongly scattered pulsars are not useful because any lens is unlikely to split the images by an angle larger than the size of the scattering disk.

## 5. Signatures

The acid test of any lens model, if not of the physical model it derives from, lies in fitting the observed image properties to predictions calculated from the model $`N`$. However, there are also some qualitative aspects of the lensing behaviour which would help to determine the correct physical picture. In particular it is important to note that observations have not yet established the lens symmetry, if any; some symmetry does appear to be required because the light-curves are, crudely, time-symmetric. The natural choice is axisymmetry, arising from an underlying spherical symmetry; however, this immediately confronts us with the implication of an exploding lens, as discussed in §1. Two qualitative tests for axisymmetric lenses can be given.
First, if VLBI observations are able to separate the images from each other, it should be seen that all images lie on the same line (of azimuth $`\varphi `$). This line should rotate systematically with time, $`t`$, such that $`\mathrm{tan}\varphi \propto t`$, where both $`\varphi `$ and $`t`$ are measured from the mid-point of the event. Secondly, for an axisymmetric lens, the geometry is effectively stationary at $`t=0`$, so that all image magnifications and delays should be quadratic in time around $`t=0`$. Even simple tests of this kind could advance our understanding of ESEs considerably.

## References

Clegg A.W., Chernoff D.F. & Cordes J.M. 1988, in Radio Wave Scattering in the Interstellar Medium, ed. J.M. Cordes, B.J. Rickett & D.C. Backer (AIP: New York), 174

Cognard I., Bourgois G., Lestrade J.F., Biraud F., Aubry D. & Drouhin J.P. 1993, Nat, 366, 320

Fiedler R.L., Dennison B., Johnston K.J. & Hewish A. 1987, Nat, 326, 675

Fiedler R.L., Johnston K.J., Waltman E.B. & Simon R.S. 1994, ApJ, 430, 581

Rickett B.J. 1990, ARAA, 28, 561

Rickett B.J., Lyne A.G. & Gupta Y. 1997, MNRAS, 287, 739

Romani R., Blandford R.D. & Cordes J.M. 1987, Nat, 328, 324

Walker M. & Wardle M. 1998, ApJL, 498, L125
# Geodetic Precession in PSR B1534+12

## 1. Introduction

Binary pulsars in close, highly eccentric orbits have long provided the best strong-field tests of the predictions of gravitational theories. Timing observations of PSR B1913$`+`$16 have allowed the measurement of three “post-Keplerian” parameters: the rate of periastron advance, $`\dot{\omega }`$, the time-dilation and gravitational-redshift parameter, $`\gamma `$, and the rate of orbital period decay, $`\dot{P}_b`$ (Taylor & Weisberg 1989); PSR B1534$`+`$12 permits, in addition, the measurement of the Shapiro-delay parameters, $`r`$ and $`s`$ (Stairs et al. 1998). These observations have yielded highly precise tests of both the radiative and quasi-static predictions of the theory of general relativity. General relativity also predicts geodetic precession. The pulsar spin axis, if misaligned with the orbital angular momentum vector, will evolve according to:

$$\frac{d𝐒_\mathrm{𝟏}}{dt}=𝛀_\mathrm{𝟏}^{\mathrm{spin}}\times 𝐒_\mathrm{𝟏},$$ (1)

where, in general relativity,

$$\mathrm{\Omega }_1^{\mathrm{spin}}=\frac{G^{3/2}M^{1/2}\mu }{c^2a^{5/2}(1-e^2)}\left[2+\frac{3}{2}\frac{m_2}{m_1}\right],$$ (2)

where $`G`$ is Newton’s constant, $`M`$ is the total mass of the system, $`\mu `$ the reduced mass, $`c`$ the speed of light, $`a`$ the semi-major axis of the relative orbit, $`e`$ the orbital eccentricity, and $`m_1`$ and $`m_2`$ the pulsar and companion masses respectively. For PSR B1534+12, $`\mathrm{\Omega }_1^{\mathrm{spin}}`$ amounts to $`0.52^{\circ }\mathrm{yr}^{-1}`$; an entire precession cycle would take approximately 690 years. As the angle between the spin axis and the line-of-sight to the pulsar changes, the observed cut across the pulsar emission region will also change, resulting in secular evolution of the pulse profile. The long-term change in the profile shape of PSR B1913$`+`$16 has been interpreted as evidence for this precession (e.g., Weisberg, Romani & Taylor 1989; Kramer 1998; Weisberg & Taylor, these proceedings). Similar profile evolution in PSR B1534+12 was noted by Arzoumanian (1995) in examining Arecibo observations from 1990-94; these changes took the form of an apparent increase with time in the strength of the interpulse relative to the main pulse at 1400 MHz.

## 2. Observations and Results

In May 1999, we carried out 1400 MHz observations of PSR B1534+12 at Arecibo Observatory to investigate whether this secular evolution was still occurring. During the Arecibo upgrade period, we had developed a 10-MHz-bandwidth coherent-dedispersion baseband recorder, known as “Mark IV,” which we used in parallel with the older $`2\times 32\times 1.25`$ MHz “Mark III” filterbank. This strategy enabled us to compare our newest profile with those of Arzoumanian, which were also obtained with Mark III, to identify any possible small instrumental differences between the two observing systems, and to establish a baseline coherently-dedispersed profile against which to compare future observations. Our recent data confirm Arzoumanian’s observations of precession, and reveal new detail in the pulse profile evolution. This can be seen in Figure 1, in which the small interpulse is taken as a reference point. The profile dated MJD 51314 is from our May 1999 observations with the Mark III system; the earlier three profiles are taken from Arzoumanian (1995).
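As an aside, the precession rate quoted in the Introduction can be reproduced directly from Eq. (2); the sketch below uses approximate literature values for the masses and orbital elements, for illustration only:

```python
import numpy as np

# Numerical check of Eq. (2) for PSR B1534+12; masses and orbital elements
# are approximate literature values (assumptions, not fitted here).
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1, m2 = 1.333 * Msun, 1.345 * Msun        # pulsar, companion
M, mu = m1 + m2, m1 * m2 / (m1 + m2)
Pb = 0.420737 * 86400.0                    # orbital period [s]
e = 0.2737
a = (G * M * Pb**2 / (4 * np.pi**2)) ** (1.0 / 3.0)  # relative semi-major axis

Omega = (G**1.5 * M**0.5 * mu) / (c**2 * a**2.5 * (1 - e**2)) \
        * (2 + 1.5 * m2 / m1)              # [rad/s], Eq. (2)
deg_per_yr = np.degrees(Omega) * 86400 * 365.25
print(f"Omega_spin ~ {deg_per_yr:.2f} deg/yr")   # ~0.5 deg/yr
```

The result, close to half a degree per year, confirms the roughly 690 yr precession cycle quoted above; we now return to the profile evolution.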
It is clear that the “wings” of the main pulse are growing relative to the interpulse, and it appears that the total width at roughly the half-height of the wings is also increasing with time. The peak height of the main pulse relative to the interpulse is also changing, though not monotonically: it decreased between MJDs 48376 and 49276, then began increasing again at some point prior to MJD 51314. While the change in relative strength of the profile components is certainly an indication that we are viewing a slightly different part of the emission region, this cannot by itself provide an estimate of the change in viewing angle. However, if the apparent change in width of the base of the main pulse is real, it will be possible to combine this information with the magnetic geometry determined from the Rotating Vector Model (RVM) to put together a model of the beam shape and of the change in viewing angle, as has been done for PSR B1913+16 (e.g., Cordes, Wasserman & Blaskiewicz 1990; Kramer 1998; Karastergiou et al., these proceedings), and perhaps even to test quantitatively the rate at which precession is occurring. The latter may be possible for PSR B1534+12 because its magnetic geometry is very well constrained from polarization observations. With the Mark IV instrument, we have obtained a high-resolution polarization profile of this pulsar at 1400 MHz (see Figure 2); the resulting fit to the RVM agrees fairly well with the “Fit B” solution derived by Arzoumanian et al. (1996), in which the magnetic inclination angle $`\alpha =114^{\circ }`$ and the impact parameter $`\beta =19^{\circ }`$. If long-term changes in polarization properties can be measured, or if aberration-induced orbital effects can be found, we will have a way to determine the entire orbital geometry of the system, including the vital but currently poorly-constrained spin-orbit misalignment angle, and to achieve a test of the rate of precession.

## 3. Acknowledgments

We thank Eric Splaver for help with data reduction.

## References

Arzoumanian, Z. 1995, PhD thesis, Princeton University

Arzoumanian, Z. et al. 1996, ApJ, 470, 1111

Cordes, J. M., Wasserman, I. & Blaskiewicz, M. 1990, ApJ, 349, 546

Kramer, M. 1998, ApJ, 509, 856

Stairs, I. H. et al. 1998, ApJ, 505, 352

Taylor, J. H. & Weisberg, J. M. 1989, ApJ, 345, 434

Weisberg, J. M., Romani, R. W. & Taylor, J. H. 1989, ApJ, 347, 1030
# Linear Polarization Properties of Pulsars at 35 & 327 MHz
# Protogalactic Extension of the Parker Bound

## 1 Introduction

The existence of magnetic monopoles has long been an intriguing prospect, motivating both theorists and experimentalists for more than 50 years. Dirac first showed that magnetic monopoles could be accommodated within electromagnetic theory if their magnetic charge, $`g`$, is given by an integer multiple of $`\hbar c/2e`$. In 1974, ’t Hooft and Polyakov independently demonstrated that monopoles are necessary components of any Grand Unified Theory (GUT) that includes electromagnetism. If GUTs are shown to be correct, monopoles with mass $`m\sim m_{GUT}/\alpha _{GUT}\sim 10^{15}-10^{19}\text{ GeV}`$ are expected. However, no magnetic monopole has ever been observed, and the cosmic abundance of monopoles remains an open question. The experimental search for magnetic monopoles has intensified in the past decade. The MACRO experiment , now fully operational, has placed stringent limits (90% confidence level) on the terrestrial monopole flux. For a combined direct magnetic monopole search using scintillators, streamer tubes, and nuclear track detectors, MACRO has reached flux sensitivities below 3.2 $`\times 10^{-16}`$ cm<sup>-2</sup> sec<sup>-1</sup> sr<sup>-1</sup> for monopoles in the velocity range $`10^{-4}<\beta <10^{-1}`$. Estimates of monopole abundance based on GUT phase transitions yield a monopole density which overcloses the universe by several orders of magnitude . Inflationary models can resolve this “monopole problem” by reducing the monopole abundance within the observable universe to an exponentially small value. It is then difficult to predict theoretical expectations for the number of monopoles in the universe. Hence astrophysics can provide useful benchmark abundances for experimenters looking for monopoles, and the most practical constraints on the present-day magnetic monopole flux arise from these astrophysical bounds, which fall into three categories: 1) cosmological bounds, which require that monopoles provide at most the critical density of the universe; 2) nucleon decay catalysis bounds, arising from the hypothesis that monopoles catalyze nucleon decay in the cores of neutron stars and white dwarfs; and 3) Parker-type bounds, which demand that monopoles not drain energy from astrophysical magnetic fields faster than it is regenerated. Catalysis bounds are by far the most stringent , but depend on the value of the cross-section for monopole-catalyzed nucleon decay. In this paper, we reconsider the Parker-type bounds, noting that a stronger limit may be derived by considering the survival of a magnetic seed field during the collapse of the protogalaxy. Parker first emphasized that the existence of observable Galactic magnetic fields must place an upper limit on the magnetic monopole flux . The presence of a Galactic magnetic field indicates a relative dearth of magnetic monopoles. The original flux bound, based upon the survival of today’s galactic magnetic field, is the most straightforward . Magnetic monopoles which move along field lines absorb kinetic energy at the expense of the field. In order for the galactic field to survive, the magnetic monopoles must not drain field energy faster than it is regenerated by the dynamo, presumed to act on the order of the Galactic rotation time period, $`10^8`$ yrs . This implies an upper bound on the flux of monopoles, $`\sim 10^{-16}\text{ cm}^{-2}\text{ sec}^{-1}\text{ sr}^{-1}`$. This bound was later shown to be mass dependent .
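The origin of the mass dependence is easy to see from the energetics. A monopole of Dirac charge crossing one coherence cell of the field gains kinetic energy $`W=gB\ell `$, a standard order-of-magnitude estimate; the sketch below (Gaussian units, illustrative present-day Galactic values) puts in the numbers:

```python
# Order-of-magnitude energetics behind the Parker argument (Gaussian units).
alpha = 1.0 / 137.036
e_esu = 4.803e-10                   # electron charge [esu]
g = e_esu / (2.0 * alpha)           # Dirac charge, g = e/(2*alpha) ~ 68.5 e

B = 3e-6                            # present Galactic field [G]
ell = 3.086e21                      # ~1 kpc coherence length [cm]
W_GeV = g * B * ell / 1.602e-3      # energy gained crossing one cell [GeV]
print(f"g ~ {g / e_esu:.1f} e, energy gain ~ {W_GeV:.1e} GeV per cell")
```

The gain is of order $`10^{11}`$ GeV per cell: monopoles much lighter than $`\sim 10^{17}`$ GeV are accelerated by the field itself, while heavier ones remain gravitationally dominated, and it is this distinction that makes the flux bound mass dependent.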
Subsequently, Adams et al obtained an “extended Parker bound” on the monopole flux $`\mathcal{F}`$ by requiring survival and growth of a small galactic seed field after the collapse of the protogalaxy:

$$\mathcal{F}\lesssim 10^{-16}(m/10^{17}\,\mathrm{GeV})\ \mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}.$$ (1)

Here we extend these bounds further by considering an even earlier epoch in the history of the Galaxy, namely the collapse of the protogalaxy to the size that it has today. The flux bounds that we obtain in this way are the most stringent Parker-type bounds to date, for three reasons: (i) the magnetic field at this early time is very small, with a larger coherence length than today; (ii) the protogalaxy starts to collapse at a high redshift $`z\sim 5`$; and (iii) at this early time no dynamo can yet be important, so that we take the primary enhancement mechanism for the B field to be flux freezing, $`\frac{d}{dt}(BR^2)=0`$. Section 2 presents the protogalactic extension of the Parker bound. Section 3 concludes with a discussion of caveats to this bound, in particular the suggestion by Kulsrud et al. that Kolmogorov turbulence amplifies the magnetic field. The farther back one goes in the history of the Galaxy, the more speculative the bounds become, since we do not really understand the origin of the galactic magnetic field.

## 2 The Protogalactic Extension

The origin of the Galactic magnetic field remains an outstanding problem of theoretical astrophysics. Faraday rotation measurements of polarized pulsar radio emissions indicate a mean Galactic magnetic field strength of $`(2-3)\times 10^{-6}`$ G. The polarization of starlight by aligned interstellar dust grains suggests a large-scale field, coherent over scales of (1-2) kpc, extending more or less in the azimuthal direction in the disk of the Galaxy. Such fields may have originated from a large (i.e. $`\sim 10^{-9}`$ G) primordial seed field amplified by the collapse of the Galaxy, or from a much smaller seed field ($`\sim 10^{-20}`$ G) that was subsequently amplified by a fast dynamo. Recent work by Enqvist has argued that a large-scale $`\sim 10^{-20}`$ G seed field can arise via turbulent evolution from microscopic primordial magnetic fields such as those arising during cosmological phase transitions. Additional schemes for obtaining seed fields by considering magnetic flux entrained in the winds from young stellar objects , or by various battery mechanisms , also exist, and under the most favorable conditions can produce seed fields of $`\sim 10^{-11}`$ G. As an alternative to dynamo amplification of a small seed field, Kulsrud et al. argue for a protogalactic origin of the magnetic field, wherein the primary amplification of the field comes from Kolmogorov turbulence; we will consider this possibility further below. The protogalactic extension of the flux bound is based on the fact that, regardless of which scenario is correct, the existence of a field today requires that some field must exist after the collapse of the protogalaxy. To be conservative, we take the largest possible protogalactic field value, $`B_0=10^{-9}`$ G, which assumes that flux freezing was the only early amplification mechanism. This value is conservative in that it gives the least restrictive bound on the monopole flux. Clearly the Galactic magnetic field is most vulnerable to dissipation in the absence of a regenerating dynamo.
By considering the evolution of a small seed field in an era during which such a dynamo is not yet functioning, or when its effect is negligible (i.e., the protogalactic era), we develop a tighter bound; that is, we require a smaller upper bound on the flux of monopoles to avoid extinguishing the existing Galactic field. The time evolution of the magnetic field in the protogalaxy is governed by the competition between amplification due to flux freezing and dissipation due to a possible flux of magnetic monopoles. The details of this competition may be modeled by an equation of motion for the magnetic field of the form

$$\frac{dB}{dt}=\gamma _{coll}B-\frac{g\mathcal{F}}{1+\mu /B}$$ (2)

Here the first term on the right-hand side describes the field amplification due to flux freezing and the second term describes dissipation by a flux of monopoles. Each of the quantities has been written in non-dimensional units. Here $`g`$ is the magnetic charge in Dirac units (we take $`g=1`$). The magnetic field is measured in units of the present-day Galactic field strength, $`3\times 10^{-6}`$ G. The parameter $`\gamma _{coll}`$ represents the growth rate of the galactic field, and has units of $`10^{-8}\text{ yr}^{-1}`$ (i.e. the Galactic rotation rate). The flux, $`\mathcal{F}`$, is measured in units of $`1.2\times 10^{-16}\text{ cm}^{-2}\text{ sec}^{-1}\text{ sr}^{-1}`$. Finally, the dissipation term depends upon $`\mu `$, where $`\mu =mv^2/\ell `$; $`m`$ is the monopole mass in units of $`10^{17}`$ GeV, $`v`$ is the monopole velocity in units of $`10^{-3}c`$, and $`\ell `$ is the coherence length of the Galactic field measured in units of 1 kpc . As the protogalaxy collapses, we expect that the only significant amplification mechanism will be flux freezing. This results in an effective growth rate $`\gamma _{coll}\approx 2/\tau _{coll}`$, where $`\tau _{coll}`$ is the collapse time of the galaxy, approximately $`10^9`$ yrs . In the dimensionless units just defined, $`\gamma _{coll}=0.2`$. Dynamo action during this era is negligible since, prior to collapse, the rotation period is extremely large compared to the collapse time. The rotation period today is $`\sim 10^8`$ yr; using conservation of angular momentum, we see that during the protogalactic epoch, when the radius was larger by a factor of roughly 50, the rotation period was $`>10^{11}`$ yr. In comparison, as mentioned above, the collapse timescale is roughly $`10^9`$ yr. Consequently, our results are independent of any specific dynamo model. Additional dissipative effects which depend upon the magnitude of the Galactic magnetic field, such as turbulent dissipation (which evolves as $`B^2`$), may also be ignored. One point of concern is the value of the coherence length of the Galactic field during the protogalactic era, as the dissipative term in equation (2) depends upon $`\mu =mv^2/\ell `$. Before Galactic collapse, we expect this coherence length to be much longer than today’s value of $`\sim 1`$ kpc. Although the value is highly uncertain, we expect $`\ell \sim 30-100`$ kpc; hence, throughout the paper we adopt the value $`\ell =50`$ kpc. The monopole velocity $`v`$ is in units of $`10^{-3}c`$, as mentioned above. We expect a massive monopole to acquire this velocity through gravitational acceleration by the Galaxy during infall.
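Equation (2) is simple enough to integrate directly; the minimal sketch below (dimensionless units as defined above, parameters chosen for a $`10^{17}`$ GeV monopole with $`v=10^{-3}c`$ and $`\ell =50`$ kpc, all illustrative) shows the survival threshold at work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless Eq. (2) with g = 1: dB/dt = gamma*B - F/(1 + mu/B).
# Units as in the text: B in 3e-6 G, t in 1e8 yr,
# F in 1.2e-16 cm^-2 s^-1 sr^-1.
gamma = 0.2                      # 2/tau_coll
mu = 0.02                        # m = 1e17 GeV, v = 1e-3 c, ell = 50 kpc
B0 = 3.3e-4                      # 1e-9 G protogalactic seed

def rhs(t, B, F):
    return [gamma * B[0] - F / (1.0 + mu / B[0])]

for F in (2e-3, 4e-3, 8e-3):     # survival threshold ~ gamma*(B0 + mu) ~ 4e-3
    sol = solve_ivp(rhs, (0.0, 10.0), [B0], args=(F,), max_step=0.05)
    fate = "survives/grows" if sol.y[0, -1] > B0 else "is extinguished"
    print(f"F = {F:.0e}: B(t=10) = {sol.y[0, -1]:.2e} -> seed field {fate}")
```

Fluxes below $`\gamma _{coll}(B_0+\mu )`$ let the frozen-in seed grow, while larger fluxes extinguish it, exactly as in the analytic discussion that follows.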
In a model of galaxy formation in which the dark matter aggregates first into a dark halo, and the baryons subsequently fall into the potential well provided by the dark matter, it is reasonable to assume that the monopoles attained the virial velocity of $`\sim 10^{-3}c`$ once the dark matter haloes came into existence. Subsequently, the protogalactic collapse of the baryons drags the magnetic field with it. If some other model of galaxy formation is considered, in which the dark haloes do not form first, then it is possible that the monopoles were moving somewhat slower; a slower monopole velocity will only lead to a tighter bound on the monopole flux. Hence we will use the value $`10^{-3}c`$ as the lowest monopole velocity, in order to be on the conservative side. Light monopoles ($`\mu B_0`$, i.e. $`\mu \ll B_0`$) were accelerated to higher velocities by the galactic field, while ‘heavy’ monopoles ($`\mu \gg B_0`$) did not have their velocities changed significantly. For our fiducial values of $`B_0\sim 10^{-9}`$ G and $`\ell =50`$ kpc, we expect that monopoles heavier than $`2\times 10^{15}`$ GeV were moving at $`\sim 10^{-3}c`$, while lighter monopoles were moving faster. As seen in Figure 1 and discussed further below, the monopole flux bound changes slope at this value of the mass: the bound is linear as a function of mass for monopoles heavier than $`2\times 10^{15}`$ GeV and flat for lower masses. Given equation (2), the possible behavior of the field is straightforward to determine. If the field is to survive, we must have $`dB/dt>0`$. This requires

$$\gamma _{coll}B+(\gamma _{coll}\mu -g\mathcal{F})>0.$$ (3)

If the monopole flux $`\mathcal{F}<\mu \gamma _{coll}`$, then the field survives for all initial values of the field strength. This behavior holds for $`\mu >B_0`$, i.e., for heavy monopoles with $`m>2\times 10^{15}\left(\frac{B_0}{10^{-9}\mathrm{G}}\right)`$ GeV, and gives rise to the linear dependence of the monopole flux bound on the mass (see the Figure). If, on the other hand, $`\mathcal{F}>\mu \gamma _{coll}`$, then only initial field strengths $`B_0>B_c\equiv \mathcal{F}/\gamma _{coll}-\mu `$ will survive. This behavior holds for $`\mu <B_0`$, i.e., for light monopoles with $`m<2\times 10^{15}\left(\frac{B_0}{10^{-9}\mathrm{G}}\right)`$ GeV, and gives rise to the flat part of the monopole flux bound as a function of mass (see the Figure). Thus, for the protogalactic field $`B_0`$ to survive during collapse, the flux of monopoles at this time must obey one of two bounds:

$$\mathcal{F}<\mu \gamma _{coll}\text{or}$$ (4)

$$\mathcal{F}<(B_0+\mu )\gamma _{coll}.$$ (5)

We note that the flux of monopoles is $`\mathcal{F}=nv/(4\pi )`$, where $`n`$ is the monopole number density, which scales with redshift as $`n\propto (1+z)^3`$. The flux today is therefore smaller than the flux at the time of protogalactic collapse by a factor of $`(1+z_{proto})^3\sim 100`$, where $`z_{proto}`$ is the redshift of galaxy formation. Hence the flux of monopoles today is constrained to be smaller than the bounds of eqns. (4) and (5) by this factor. Thus we obtain an analytical estimate for the bound on the monopole flux:

$$\mathcal{F}<5\times 10^{-21}\left(\frac{m}{10^{17}\text{GeV}}\right)\text{cm}^{-2}\text{sec}^{-1}\text{sr}^{-1}$$ (6)

for $`m>2\times 10^{15}`$ GeV, and

$$\mathcal{F}<9\times 10^{-23}\left(\frac{B_0}{10^{-9}\text{G}}\right)\text{cm}^{-2}\text{sec}^{-1}\text{sr}^{-1}$$ (7)

for $`m<2\times 10^{15}`$ GeV.

## 3 Discussion and Conclusion

Without the support of a regenerative mechanism, the Galactic magnetic field is particularly susceptible to dissipation by a flux of magnetic monopoles.
We have exploited this vulnerability of a small B field early in the history of the Galaxy to arrive at the strict bounds on the monopole flux given in eqns. (6) and (7). Figure 1 summarizes the new flux bounds, displays previous Parker bounds, and includes a line indicating a closure density of monopoles. Here, $`\mathrm{\Omega }_m=\rho _m/\rho _c`$ is the mass density of monopoles $`\rho _m`$ in units of the critical density required to close the universe, $`\rho _c=2.8\times 10^{-29}`$ g cm<sup>-3</sup> (for Hubble constant $`H_o=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup>). Note that the new protogalactic extension of the Parker bound differs from the extended Parker bound even at low monopole masses, where the curves flatten out. The reason for this discrepancy is twofold: i) the collapse timescale in eqn. (2) is $`1/\gamma _{coll}\sim 10^9`$ yr, whereas the corresponding rotation timescale in the extended Parker bound is 200 Myr; ii) we have taken the coherence length of the magnetic field in the protogalaxy to be 50 kpc, whereas the coherence length in the Galaxy is roughly 1 kpc. As a caveat, note that the bounds in this paper rest on the assumption that no amplifying mechanism other than flux freezing exists during collapse. Recent work suggests that a pregalactic seed field amplified by a post-collapse dynamo may not be the only way to generate the observed field. Kulsrud et al. suggest the possibility that Kolmogorov turbulence and flux freezing together, acting on a small seed field, can produce the observed Galactic field without recourse to a large-scale dynamo. In this case, the equation of motion becomes

$$\frac{dB}{dt}=(\gamma _{coll}+\gamma _{turb})B-\frac{g\mathcal{F}}{1+\mu /B}$$ (8)

where the $`\gamma _{turb}`$ factor measures field growth due to Kolmogorov turbulence, and $`\gamma _{turb}\sim 600(\rho _B/\rho _D)`$ in the previously defined units; here $`\rho _B/\rho _D`$ is the ratio of baryonic matter to dark matter. Hence, looking at eqns. (4) and (5), one might conclude that in this scenario the flux bounds would be weaker by at least this factor. However, there is another, more important difference between the scenario of Kulsrud et al. and the flux freezing we have described above: in the Kulsrud et al. model, the coherence length of the magnetic field during this era is much smaller, of the order of viscous length scales. If indeed $`\ell `$ is this small, then eqns. (5) and (6) do not result in an interesting bound. Monopoles would only begin to dominate the contribution to $`dB/dt`$ once $`\ell `$ becomes large; currently, in this model, it is unclear when this would happen. However, the numerical simulations employed in lack the resolution to demonstrate that homogeneous Kolmogorov turbulence actually occurs down to the smallest scales, so the final word on this issue must wait until alternate theoretical methods are developed or computational power is increased. Modulo this caveat, we have found a protogalactic extension of the Parker bound that is four and a half orders of magnitude more stringent than the extended Parker bound for a given monopole mass. Monopoles with $`\mathrm{\Omega }_m<1`$ that satisfy the extended Parker bound of Ref. were in principle accessible to direct searches (that rely on electromagnetic interactions of monopoles) in an experimental apparatus like MACRO with longer running time.
However, monopoles with $`\mathrm{\Omega }_m<1`$ that satisfy the new protogalactic extension of the Parker bound presented in this paper are no longer accessible to existing direct-search experiments. The new maximum monopole flux at the closure bound is $`\mathcal{F}=8\times 10^{-19}`$ cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> and occurs for a monopole mass near the Planck scale, $`\sim 10^{19}`$ GeV. This maximum flux at the closure bound is a factor of 150 lower than the maximum allowed by the extended Parker bound, which occurs for a monopole mass near the GUT scale, $`\sim 10^{17}`$ GeV. Thus MACRO would have to be scaled up in size by a factor of more than 100 to access the new maximum flux compatible with the protogalactic extension as well as the closure bound. Hence the new monopole bound in this paper, while more uncertain than previous bounds, would have serious consequences for experimental searches.

Figure Caption

Monopole flux limits as a function of the monopole mass $`m`$ in GeV. The line labeled TPB Bound shows the modified Parker bound obtained in Ref. . The dotted lines show the extended Parker bound of Ref. . The solid lines show the protogalactic extension of the Parker bound of this paper. The line labeled $`\mathrm{\Omega }_M`$ represents the bound obtained by assuming that monopoles are uniformly distributed throughout the Universe but do not ‘overclose’ the Universe. Here, we have taken $`H_o=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. If the monopoles are clustered with galaxies, this closure bound becomes weaker by a factor of $`\sim 10^5`$.

Acknowledgements

We would like to thank the Department of Energy for support at the University of Michigan. K. Freese would like to thank the Max Planck Institut fuer Physik in Muenchen, where some of this work was completed, for hospitality during her stay. We also thank F. Adams, G. Laughlin, and M. Turner for useful conversations.
# High Resolution Hybrid Pixel Sensors for the $`e^+e^{}`$ TESLA Linear Collider Vertex Tracker

## 1 The requirements of the linear collider physics programme

The next generation of high energy $`e^+e^{}`$ experiments, following the LEP and SLC programs, will be at a linear collider operating at centre-of-mass energies ranging from the $`Z^0`$ pole up to about 1 TeV. Expected to be commissioned by the end of the first decade of the new millennium, the linear collider will complement the physics reach of the Tevatron and LHC hadron colliders in the study of the mechanism of electro-weak symmetry breaking and in the search for new physics beyond the Standard Model. Both precision measurements and particle searches set stringent requirements on the efficiency and purity of the flavour identification of hadronic jets, since final states including short-lived $`b`$ and $`c`$-quarks and $`\tau `$ leptons are expected to be the main signatures. High accuracy in the reconstruction of the charged particle trajectories close to their production point is required in order to reconstruct the topologies of the secondary vertices in the decay chain of short-lived heavy flavour particles. If a Higgs boson exists with mass below 150 GeV/c<sup>2</sup>, as indicated by the fit to the present electro-weak data Gross , it will be essential to carry out precision measurements of its couplings to the different fermion species as a proof of the mass generation mechanism and to identify its Standard Model or Supersymmetric nature higgs1 . This can be achieved by accurate determinations of its decay rates to $`b\overline{b}`$, $`c\overline{c}`$, $`\tau ^+\tau ^{}`$, $`W^+W^{}`$ and gluon pairs, to detect possible deviations from the Standard Model predictions higgs2 . Since the rates for the Higgs decay modes into lighter fermions, $`h^0\to c\overline{c}`$, $`\tau ^+\tau ^{}`$, or into gluon pairs are expected to be only about 10% or less of that for the dominant $`h^0\to b\overline{b}`$ process, the extraction and measurement of the signals of these decay modes require suppression of the $`b\overline{b}`$ contribution by a factor of twenty or better while preserving good efficiency. The measurements of the top Yukawa coupling TopYuk and of the top-quark mass will require efficient $`b`$-tagging to reduce the combinatorial background in the reconstruction of the six- and eight-jet final states. If Supersymmetry is realized in nature, the study of its rich Higgs sector will also require an efficient identification of $`b`$-jets and $`\tau `$ leptons to isolate the signals for the decays of the heavier $`A^0`$, $`H^0`$ and $`H^\pm `$ bosons from the severe combinatorial backgrounds in the complex multi-jet hadronic final states. Due to the expected large $`b`$-jet multiplicity, highly efficient tagging is required to preserve a sizeable sample of signal events. Finally, both $`b`$ and $`c`$-tagging will be important in the study of the scalar partners of the quarks, while $`\tau `$ identification may be instrumental in isolating signals from Gauge Mediated Supersymmetry Breaking GMSB . A set of curves representative of the performance of a jet flavour tagging algorithm at the linear collider is given in Figure 1, under two different assumptions for the track impact parameter resolution.
The requirement of efficient $`c`$ identification with a rejection factor against $`b`$-jets of more than ten highlights the need for a track impact parameter resolution $`\sigma _{I.P.}`$ better than $`10\mu \mathrm{m}\oplus \frac{30\mu \mathrm{m}\,\mathrm{GeV}/c}{p\mathrm{sin}^{3/2}\theta }`$ in both projections. It is important to minimise the multiple scattering contribution to the impact parameter resolution. In fact, $`b`$/$`c`$ discrimination is obtained mostly by probing the difference in the charged decay multiplicities and invariant masses of the $`B`$ and $`D`$ hadrons. This requires identifying the majority of the charged decay products, which have typical momenta of a few GeV/c in multi-jet events. A precise determination of the track perigee parameters close to their production point also assists the track reconstruction and improves the momentum resolution. The addition of the Vertex Tracker space points with 7 $`\mu `$m resolution to those from a TPC main tracker improves the estimated momentum resolution $`\sigma _p/p^2`$ from 1.5 $`\times `$ 10<sup>-4</sup> (GeV/c)<sup>-1</sup> to 0.6 $`\times `$ 10<sup>-4</sup> (GeV/c)<sup>-1</sup> for charged particles of large momentum. The design of the linear collider Vertex Tracker and the choice of the sensor technology are driven by these requirements, to be achieved within the constraints set by the accelerator-induced backgrounds at the interaction region and by the characteristics of the physics events. In the next section these constraints are discussed with specific reference to the TESLA linear collider project, while section 3 presents the conceptual design proposed for the Vertex Tracker. The silicon pixel sensor technology, developed to overcome the hybrid pixel sensor limitations in terms of single point space resolution, is discussed in detail in section 4.

## 2 The experimental conditions at the TESLA interaction region

Several concepts have been developed for the acceleration, preservation of low emittance and final focusing of electron and positron beams at energies in excess of 200 GeV/beam lcreview . The TESLA project CDR has proposed to use superconducting accelerating structures, operating at L-band frequency, that deliver very long beam pulses ($`\sim `$ 900 $`\mu `$s) accelerating up to 4500 bunches per pulse. This scheme allows a large bunch spacing (190-340 ns), making it possible to resolve single bunch crossings (BX) and also to perform the fast bunch-to-bunch feedback needed to stabilize the beam trajectory within a single pulse, thus preserving the nominal luminosity of 3-5 $`\times 10^{34}`$ cm<sup>-2</sup> s<sup>-1</sup>. The large luminosity of each individual bunch crossing ($`\sim `$ 2.2 $`\times 10^{-3}`$ nb<sup>-1</sup> BX<sup>-1</sup>) and the large number of bunches in a single pulse imply a high rate of background events, which needs to be minimised by identifying the bunch corresponding to the physics event of interest. A primary source of background at the linear collider interaction region is the incoherent pair production in the electromagnetic interactions of the colliding beams. These particles are confined by the solenoidal magnetic field, spiralling in an envelope defined by the field strength, $`B`$, and by their transverse momentum, acquired at production and in the subsequent deflection in the electric field of the opposite beam. The maximum radius and longitudinal position of crossing of the envelope of deflected pairs define both the inward bound for the first sensor layer and its maximum length.
The maximum radius and longitudinal position of crossing of the envelope of deflected pairs defines both the inward bound for the first sensor layer and its maximum length. At 1.2 cm radial distance from the TESLA colliding beams, with a magnetic field $`B`$ = 3 T, the hit pair density is expected to be $``$ 0.2 hit mm<sup>-2</sup> BX<sup>-1</sup> with $`\pm 5`$ cm available in the longitudinal coordinate to fit the first sensitive layer of the detector. An additional background source that needs to be taken into account in the choice of the sensor technology is the flux of neutrons photo-produced at the dump of electrons from pairs and radiative Bhabha scattering and of beamstrahlung photons. The computation of this neutron flux at the interaction region relies on the modelling of their production and transport in the accelerator tunnel and in the detector and is subject to significant uncertainties. Estimated fluxes are of the order of a few 10<sup>9</sup> $`n`$ (1 MeV) cm<sup>-2</sup> year<sup>-1</sup> Tesch ; Ye , where the anticipated neutron flux has been normalised in terms of equivalent 1 MeV neutrons assuming NIEL scaling. Finally the large $`e^+e^{}\gamma \gamma `$ cross-section requires single bunch identification and high resolution on the longitudinal position of production of forward charged particles in order to suppress to a few % the rate of two photon background overlap with physics events. In addition to these background sources, the occupancy from the charged particles inside dense jets in multi-parton hadronic final states has to be considered. A study of $`t\overline{t}`$ and $`h^0Z^0`$ events at $`\sqrt{s}`$ = 500 GeV showed that at 3.0 cm from the interaction point, about 20 % (10 %) of the particles in the jet have at least one additional hit from another particle of the same jet within a 150 $`\mu m`$ distance in the $`R\mathrm{\Phi }`$ ($`Rz`$) plane, which corresponds to the typical two-track separation capability of a microstrip detector. Therefore, sensors with small sensitive cells have to be used in order to avoid a large number of merged hits and ambiguities in the pattern recognition. In summary, the linear collider Vertex Tracker must be able to provide a track impact parameter resolution better than $`10\mu m\frac{30\mu m\mathrm{G}eV/c}{\mathrm{p}\mathrm{sin}^{3/2}\theta }`$ in both the $`R\mathrm{\Phi }`$ and $`Rz`$ projections for jet flavour identification, identify single bunch crossings separated by about 200 ns to reduce pair and $`\gamma \gamma `$ backgrounds and have sensitive cells of 150 $`\times `$ 150 $`\mu `$m<sup>2</sup> or less to keep the occupancy from pairs and hadronic jets below 1%. There are two types of such silicon sensors, already used at collider experiments, that have the potential to satisfy these specifications in terms of sensitive cell size: the Charged Coupled Devices (CCD) and hybrid pixels sensors. The CCD sensors have successfully been used for the SLD Vertex Detector at the SLC collider at SLAC vxd3 while hybrid pixel sensors, pioneered by the WA-97 experiment at CERN WA97 , have been adopted at LEP in the upgraded DELPHI Silicon Tracker SiTracker . These have been further developed for the ALICE alice , ATLAS atlas and CMS cms experiments to meet the experimental conditions of the LHC collider. CCD detectors have been already proposed for the linear collider Vertex Tracker ccdlc . 
While CCD’s have ideal characteristics in terms of spatial resolution and detector thickness, they presently lack the read-out speed necessary to cope with the TESLA bunch timing and are possibly sensitive to neutron damage at fluxes of the order of those expected at the linear collider. An intense R&D program is presently underway to overcome these limitations CCDnew . The use of hybrid pixel sensors for the linear collider Vertex Tracker was also proposed a few years ago, and a first conceptual design of the detector was defined ABC . Compared to CCD’s, hybrid pixel sensors have the advantage of allowing fast time stamping and sparse data scan read-out, thereby reducing the occupancies due to backgrounds, and of being tolerant to neutron fluxes well beyond those expected at the linear collider. Both these characteristics have been demonstrated for their application in the LHC experiments. On the other hand, there are areas of R&D that are specific to the linear collider, namely the improvement of the pixel sensor spatial resolution and the reduction of its total thickness.

## 3 The TESLA Vertex Detector design

The proposed layout of the TESLA Vertex Detector ABC based on hybrid pixel sensors is shown in fig. 2 and consists of a three-layer cylindrical detector surrounding the beam-pipe, complemented by forward crowns and disks extending the polar acceptance to small angles. This geometry follows solutions adopted for the DELPHI Silicon Tracker. The first detector layer, closest to the interaction region, is located at a radius of 1.2 cm and has a length of 10 cm. The two additional barrel layers are located at 3.5 cm and 10 cm respectively and have a polar acceptance down to $`|\mathrm{cos}\theta |=0.82`$. At lower angles, additional space points are obtained by extending the barrel section with a forward crown and two disks of detectors providing three hits down to $`|\mathrm{cos}\theta |=0.995`$. The transition from the barrel cylindrical to the forward conical and planar geometries optimises the angle of incidence of the particles onto the detector modules in terms of achievable single point resolution and multiple scattering. This tracker can be assembled as two independent half-shells, allowing its installation and removal with the beam-pipe in place. Overlaps of neighbouring detector modules will provide a useful means of verifying the relative detector alignment using particle tracks from dedicated calibration runs taken at the $`Z^0`$ centre-of-mass energy. The geometry optimisation and the study of the physics performance of the Tracker design have been performed with a GEANT-based simulation of the detector, accounting for benchmark physics processes with enhanced forward production cross-section, such as $`e^+e^{}\to H^0\nu \overline{\nu }`$, pair and $`\gamma \gamma `$ backgrounds, local pattern recognition and detector inefficiencies. The impact parameter resolution has been obtained by a Kalman filter track fit to the associated hits in the Vertex Tracker and in the TPC. In order to achieve a resolution for the impact parameter reconstruction better than 7 $`\mu `$m for tracks at large momenta, a detector space point accuracy of better than 10 $`\mu `$m must be obtained.
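The interplay between point resolution, layer radii and the asymptotic impact parameter error can be illustrated with a toy straight-line fit; the sketch below uses the layer radii and the 10 $`\mu `$m accuracy quoted above, ignores multiple scattering and the TPC, and is in no way the full Kalman fit:

```python
import numpy as np

# Toy Gauss-Markov estimate of the asymptotic (high-momentum) impact
# parameter error from the three barrel layers alone; straight track model
# y = d0 + phi * r, equal point errors on every layer.
r = np.array([1.2, 3.5, 10.0])          # layer radii [cm]
sigma = 10e-4                           # 10 um point resolution [cm]

A = np.vstack([np.ones_like(r), r]).T   # design matrix of the line fit
cov = np.linalg.inv(A.T @ A / sigma**2) # parameter covariance
print(f"sigma(d0) ~ {np.sqrt(cov[0, 0]) * 1e4:.1f} um (vertex detector alone)")
```

The vertex-detector-only estimate comes out near 9-10 $`\mu `$m, which is consistent with the quoted 7 $`\mu `$m once the long TPC lever arm and the optimal Kalman weighting are included.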
The requirement on the multiple scattering contribution to the track extrapolation resolution, lower than 30 $`\mu `$m/$`p_t`$, and the need to minimise the amount of material in front of the calorimeters and to ensure the optimal track matching with the main tracking system, set a constraint on the material budget of the Vertex Tracker to less than 3 % of a radiation length ($`X_0`$). These requirements can be fulfilled by adopting 200 $`\mu `$m thick detectors and back-thinning of the read-out chip to 50 $`\mu `$m, corresponding to 0.3 % $`X_0`$, and a light support structure. The present concept for the mechanical structure envisages the use of diamond-coated carbon fiber detector support layers acting also as heat pipes to extract the heat dissipated by the read-out electronics uniformly distributed over the whole active surface of the detector. Assuming a power dissipation of 60 $`\mu `$W/channel, the total heat flux is 450 W, corresponding to 1500 W/m<sup>2</sup>, for a read-out pitch of 200 $`\mu `$m. Preliminary results from a finite element analysis show that pipes circulating liquid coolant must be placed every 5 cm along the longitudinal coordinate, except for the innermost layer where they can be placed only at the detector ends to minimise the amount of material. Signals will be routed along the beam pipe and the end-cap disks to the repeater electronics installed between the Vertex Tracker and the forward mask protecting the Vertex Tracker from direct and backscattered radiation from the accelerator. The material budget for the proposed design is shown in Figure 4. ## 4 Hybrid pixel sensors with interleaved pixels The desired impact parameter resolution defined in section 1 requires a single point precision in the Vertex Detector better than $`10\mu \mathrm{m}`$. This can be accomplished by sampling the diffusion of the charge carriers generated along the particle path and using an analog read-out to interpolate the signals of neighbouring cells. In such a case, the expected resolution is $`\sigma =\mathrm{a}_{\mathrm{cf}}\frac{\mathrm{pitch}}{\mathrm{S}/\mathrm{N}}`$, where $`\mathrm{a}_{\mathrm{cf}}\sim 2`$ is a centroid finding constant and S/N defines the performance of the front-end electronics in terms of signal amplitude normalised to the noise. Given that the charge diffusion is $`\sim 8\mu \mathrm{m}`$ in $`300\mu \mathrm{m}`$ thick silicon, its efficient sampling and signal interpolation requires a pitch of not more than 25 $`\mu `$m. This has been successfully proven to work in one-dimensional microstrip sensors Anna . In pixel devices the ultimate read-out pitch is constrained by the front-end electronics, to be integrated in a cell matching the sensor pattern. At present, the most advanced read-out electronics have a minimum cell dimension of $`50\times 300\mu \mathrm{m}^2`$, not suitable for an efficient charge sampling. The trend of VLSI development and recent studies Snow on the intrinsic radiation hardness of deep sub-micron CMOS technology certainly make it possible to envisage a sizeable reduction in the cell dimensions on the linear collider timescale, but sensor designs without such basic limitations are definitely worth exploring. A possible way out is to exploit the capacitive coupling of neighbouring pixels and to have a read-out pitch n times larger than the implant pitch Bonvicini . The proposed sensor layout is shown in Figure 5 for n=4.
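For orientation, the two scaling relations used above can be evaluated directly; in this sketch the S/N value is an assumed, illustrative figure (the text does not quote one), while the power numbers are those given for the TESLA design.

```python
def interpolated_resolution_um(pitch_um, s_over_n, a_cf=2.0):
    """Point resolution sigma = a_cf * pitch / (S/N) for analog read-out."""
    return a_cf * pitch_um / s_over_n

def power_density_w_m2(power_per_channel_uw, pitch_um):
    """Heat flux for square read-out cells of the given pitch covering the surface."""
    cell_area_m2 = (pitch_um * 1e-6) ** 2
    return power_per_channel_uw * 1e-6 / cell_area_m2

print(f"sigma ~ {interpolated_resolution_um(25.0, 15.0):.1f} um (S/N = 15 assumed)")
print(f"heat flux ~ {power_density_w_m2(60.0, 200.0):.0f} W/m^2")  # 1500 W/m^2
```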
In this configuration, the charge carriers created underneath an interleaved pixel will induce a signal on the read-out nodes, capacitively coupled to the interleaved pixel. In a simplified model, where the sensor is reduced to a capacitive network, the ratio of the signal amplitudes on the read-out nodes at the left- and right-hand side of the interleaved pixel in both dimensions will be correlated with the particle position, and the resolution is expected to be better than $`(\mathrm{implant}\mathrm{pitch})/\sqrt{12}`$ for an implant pitch of 25 $`\mu `$m or smaller. The ratio between the inter-pixel capacitance and the pixel capacitance to the backplane will play a crucial role, as it defines the signal amplitude reduction at the output nodes and therefore the sustainable number of interleaved pixels. Calculations with such capacitive network models Pindo show that resolutions similar to those achieved by reading out all pixels are obtainable if the signal amplitude loss to the backplane is small. Recent tests on a microstrip sensor, with 200 $`\mu `$m read-out pitch, have achieved a $`10\mu \mathrm{m}`$ resolution with a three-interleaved-strip layout Krammer . Similar results are expected in a pixel sensor, taking into account both the lower noise, because of the intrinsically smaller load capacitance, and the charge sharing in two dimensions. Reducing the read-out density, without compromising the achievable space resolution, is also beneficial to limit the power dissipation and the overall costs. In order to verify the feasibility of this scheme, a prototype set of sensors with interleaved pixels and different choices of implant and read-out pitch has been designed, produced and tested. These test structures were designed in 1998 and delivered in January 1999. Ten high-resistivity wafers were processed at the Institute of Electron Technology (Warszawa, Poland), together with an equal number of low-resistivity wafers for process control. A detailed description of the processing can be found in ref. AZ . A bias grid surrounding the pixel cells allows the polarisation of both the interleaved and read-out pixels, and each $`p^+`$ pixel implant is connected to the metal bias line by polysilicon resistors. This is to ensure a similar potential for all pixels and hence a uniform charge collection. A metal layer is deposited on top of the pixels to be bump-bonded to a VLSI cell. The backplane has a meshed metal layer to allow the use of an infrared diode for charge collection studies. Structures with the number of interleaved pixels ranging between 0 and 3 were fitted on a 4” wafer, assuming a VLSI cell size of 200 $`\times `$ 200 or 300 $`\times `$ 300 $`\mu `$m<sup>2</sup>. All of the structures on six undiced wafers were visually inspected, and characteristic I–V and C–V curves were measured up to 250 V. The I–V and 1/C<sup>2</sup>–V curves obtained for a good structure are shown in fig. 6. Two wafers have extremely good characteristics, with a mean current of $`\sim `$ 50 nA/cm<sup>2</sup> and about 50% good structures. The test structures have not revealed any design fault, even if further processing and layout optimisation has to be considered. A more detailed summary of the measurements can be found in ref. MCa . In the short term, measurements of the inter-pixel and backplane capacitances are planned, completing the electrostatic characterisation of these sensors.
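A minimal toy version of such a capacitive-network estimate is sketched below; the capacitance ratio and the number of gaps are illustrative assumptions, and the model keeps only the leading charge-loss term, so it is a sanity check rather than the calculation of ref. Pindo.

```python
import math

def binary_resolution_um(implant_pitch_um):
    """Resolution floor for purely binary position information: pitch / sqrt(12)."""
    return implant_pitch_um / math.sqrt(12.0)

def readout_signal_fraction(c_inter, c_back, n_gaps):
    """Toy estimate: fraction of the signal surviving to a read-out node after
    crossing n_gaps inter-pixel gaps, losing charge to the backplane each step."""
    return (c_inter / (c_inter + c_back)) ** n_gaps

print(f"binary floor, 25 um implants: {binary_resolution_um(25.0):.1f} um")  # ~7.2 um
# e.g. inter-pixel capacitance 10x the backplane capacitance, 2 gaps to the node:
print(f"signal fraction ~ {readout_signal_fraction(10.0, 1.0, 2):.2f}")      # ~0.83
```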
A charge collection study will follow, relying on a low noise analog chip designed for microstrip sensor read-out and shining an infrared light spot on the meshed backplane. These measurements will be a proof of principle of the proposed device and define the guidelines for further iterations, aiming at a 25 $`\mu `$m pitch. The device thickness is a particularly relevant issue for the application of hybrid pixel sensors at the linear collider. The minimal thickness is set by the combination of the sensor noise performance and the limit of back-thinning technology for a bump-bonded assembly. Industrial standards guarantee back-thinning down to 50 $`\mu `$m for the electronics, and procedures to obtain thin sensors are currently being tested SOTT . The small load capacitance of the pixel cells should guarantee an extremely good S/N. Scaling from what has been obtained for microstrip sensors, the desired resolutions should be obtained with a 200 $`\mu `$m thick sensor. ## 5 Conclusion Hybrid pixel sensors are an attractive option for a linear collider Vertex Tracker owing to their fast read-out and radiation hardness, well suited to the high repetition rate of the TESLA design. The main present limitations of this option are the achievable single-point space resolution and the sensor thickness. To overcome these limitations, a pixel sensor design with interleaved cells is proposed. Test structures with different configurations of interleaved cells have been designed and produced, and the results of their first electrostatic characterisation are discussed. The conceptual design of a Vertex Tracker based on these detectors has been developed and its performance has been evaluated, using a detailed simulation accounting for the relevant background processes, and the preliminary results have been presented. Acknowledgements This activity has been funded in part by the Academy of Finland under the R&D Program for Detectors at Future Colliders and by MURST under grant 3418/C.I.
no-problem/9911/astro-ph9911340.html
ar5iv
text
# ASCA Observations of “Type 2” LINERs: Evidence for a Stellar Source of Ionization ## 1. Introduction LINERs (low-ionization nuclear emission-line regions; Heckman 1980) are found in a significant fraction of bright galaxies (Ho, Filippenko, & Sargent 1997a). The ionizing source of LINERs is still under debate (see Filippenko 1996 for a review), with candidate ionization mechanisms being photoionization by low-luminosity active galactic nuclei (AGNs), photoionization by very hot stars, and collisional ionization by fast shocks. Recent observations have shown that at least some LINERs are low-luminosity AGNs (hereafter LLAGNs); see Ho (1999a and references therein) for a review. According to the extensive spectroscopic survey of Ho et al. (1997b), which includes all bright ($`B_T`$ $`\lesssim `$ 12.5 mag) galaxies with declinations greater than 0°, about 20% of LINERs exhibit a broad H$`\alpha `$ emission line in their optical spectra. Hard X-ray observations provide a powerful means for searching for evidence of an AGN by detecting a pointlike hard X-ray source at the nucleus. Compact hard X-ray sources have been detected in LINERs that show broad H$`\alpha `$ emission (hereafter LINER 1s; Iyomoto et al. 1996, 1998; Ptak et al. 1998; Terashima et al. 1998a; Guainazzi & Antonelli 1999; Weaver et al. 1999). The X-ray spectra of these objects are well represented by a two-component model: a power-law component plus soft thermal emission. The X-ray spectra of the hard component are quite similar to those of Seyfert galaxies, and typical X-ray luminosities are $`10^{40}`$–$`10^{41}`$ ergs s<sup>-1</sup>. The H$`\alpha `$ luminosities of LINER 1s are positively correlated with the X-ray luminosities in the 2–10 keV band (Terashima 1999; Terashima et al. 1999). These observations strongly support the idea that most LINER 1s are LLAGNs. LINERs constitute the majority of the objects that show spectroscopic evidence for nuclear activity, and most LINERs ($`\sim `$80%; Ho et al. 1997a, b) show no detectable broad H$`\alpha `$ emission and are classified as “LINER 2s”. Therefore, LINER 2s are the most abundant form of low-level activity in nearby galaxies. It is not clear whether the origin of LINER 2s is similar to that of LINER 1s. If LINER 2s are also genuine AGNs, then the emission from their nuclei may be obscured, by analogy with the popular obscuration model for Seyfert 2 galaxies (Lawrence & Elvis 1982; Antonucci & Miller 1985). If this is the case, then the X-ray spectra of LINER 2s should show evidence for heavy absorption ($`N_\mathrm{H}`$ $`>10^{23}`$ cm<sup>-2</sup>) and strong fluorescent iron K emission lines. For example, NGC 1052, a LINER 1.9 (Ho et al. 1997a) from which a polarized broad H$`\alpha `$ line has been detected (Barth et al. 1999), shows an X-ray spectrum absorbed by a column density of $`N_\mathrm{H}`$ $`\sim 3\times 10^{23}`$ cm<sup>-2</sup> and a fluorescent iron K emission line with an equivalent width of $`\sim `$300 eV (Weaver et al. 1999). These X-ray characteristics are quite similar to those of luminous Seyfert 2 galaxies (Awaki et al. 1991a; Smith & Done 1994; Turner et al. 1997a). Thus, this is an example of an active nucleus which is a low-ionization analog of Seyfert 2 galaxies such as NGC 1068. Alternatively, the optical emission lines in LINER 2s may be ionized by sources other than an AGN. Collisional ionization from fast-moving shocks (e.g., Koski & Osterbrock 1976; Fosbury et al.
1978; Dopita & Sutherland 1995) and photoionization by a cluster of hot, young stars (Terlevich & Melnick 1985; Filippenko & Terlevich 1992; Shields 1992) have also been proposed as possible excitation mechanisms to power the narrow-line emission in LINERs. Ultraviolet (UV) spectra of several LINER 2s are available from the Hubble Space Telescope (HST), and these indicate the presence of massive stars in some LINER 2s (Maoz et al. 1998). Maoz et al. (1998) find that in NGC 404, NGC 4569, and NGC 5055 hot stars play a significant role as an ionizing source for the optical emission lines. In order to explain the observed H$`\alpha `$ luminosities by stellar photoionization, however, very massive stars ($`M\sim 100`$ $`M_{\odot }`$) are required to still be present (see Fig. 5 in Maoz et al. 1998). Although UV spectroscopy can probe the presence of massive stars, only UV-bright objects ($`\sim `$20%–30% of LINERs; Maoz et al. 1995; Barth et al. 1998; Ho, Filippenko, & Sargent 1999) can be studied. Moreover, Maoz et al. (1998) have argued that even in objects such as M81, NGC 4579 or NGC 4594, where an LLAGN is known to be present based on other evidence, the observed UV continuum power is insufficient to account for the observed line emission, and the nonstellar component most likely becomes more prominent at higher energies. Searching for the ionizing source in the X-rays is necessary to test this hypothesis. Only a limited number of X-ray observations of LINER 2s have been performed so far. Previous X-ray observations with Einstein and ROSAT were limited to soft energies, where heavily obscured AGNs are difficult to detect. Furthermore, the limited spectral resolution and bandpass of these observations cannot distinguish the thermal emission of the host galaxy from the emission from the AGN. We observed a small sample of three LINER 2 nuclei (NGC 404, NGC 4111, and NGC 4457) with the ASCA satellite to search for a hidden ionizing source; the sample also included NGC 4192 and NGC 4569, which are classified as “transition objects,” emission-line nuclei whose optical spectra suggest a composite source of ionization, possibly due to a genuine LINER nucleus mixed in with signal from circumnuclear H II regions. The imaging capability of ASCA (Tanaka, Inoue, & Holt 1994), which extends up to 10 keV, and its moderate spectral resolution enable it to identify thermal emission from the host galaxy. We also analyze the X-ray properties of NGC 4117, a low-luminosity Seyfert 2 galaxy serendipitously observed in the field of NGC 4111. This paper is organized as follows. In $`\mathrm{\S }2`$ we summarize the ASCA observations and data reduction. Image and spectral analyses are reported in $`\mathrm{\S }3`$ and $`\mathrm{\S }4`$, respectively. We discuss the origin of the X-ray emission and the ionizing source in type 2 LINERs in $`\mathrm{\S }5`$. A summary of our findings is presented in $`\mathrm{\S }6`$. ## 2. Observations We observed the three LINER 2s and two transition objects shown in Table 1. These objects are selected from the Palomar survey of nearby galaxies conducted by Ho et al. (1995, 1997a), for which Ho et al. (1997b) determined that broad H$`\alpha `$ emission is not present. We selected objects that are bright in the narrow H$`\alpha `$ emission line, since a large X-ray flux is expected if the ionization source is due to an AGN (Halpern & Steiner 1983; Elvis, Soltan, & Keel 1984; Ward et al. 1988; Koratkar et al. 1995; Terashima 1999; Terashima et al. 1999).
We also gave preference to objects that have previously been studied in the UV using the HST. Both NGC 404 and NGC 4569 are UV bright and have been studied spectroscopically in the UV by Maoz et al. (1998), while NGC 4111 and NGC 4192 were imaged in the UV but were not detected (Maoz et al. 1996; Barth et al. 1998). The log of the ASCA observations is shown in Table 2. Detailed descriptions of the ASCA instruments can be found in Serlemitsos et al. (1995), Ohashi et al. (1996), Makishima et al. (1996), Burke et al. (1994), and Yamashita et al. (1997). The observation mode of the Solid-state Imaging Spectrometers (SIS) is summarized in Table 2; the Gas Imaging Spectrometers (GIS) were operated in the nominal pulse-height mode. We screened the data using standard criteria. We excluded data taken when (1) the elevation angle from the earth’s limb was less than 5°, (2) the cut-off rigidity was less than 6 GeV c<sup>-1</sup>, (3) the satellite was passing through the South Atlantic Anomaly, and (4) the elevation angle from the day earth’s limb was less than 25° (only for the SIS). The observed count rates, after background subtraction, and the net exposure times, after data screening, are also tabulated in Table 2. Although we observed NGC 404, 4111, 4192 and 4569 on two occasions in order to search for variability, no significant variability was found. The typical upper limit on variability is 50%. We therefore use images and spectra combined from the two observations in the following analysis. In this paper, the quoted errors are at the 90% confidence level for one parameter of interest, unless otherwise noted. ## 3. X-ray images We detected X-ray emission from all objects except for NGC 404. In this section, we show X-ray images and estimate the spatial extension of the X-ray emission. ### 3.1. NGC 404 The nucleus of NGC 404 was not detected in either the SIS or the GIS images. One serendipitous source was detected in the SIS image in the 0.5–2 keV band of the second observation. The SIS image in the 0.5–2 keV band is shown in Figure 1a. The peak position of this source is ($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (1<sup>h</sup> 9<sup>m</sup> 28<sup>s</sup>, +35° 38′ 59″), and the error radius is about 1′. This source is not clearly seen in the GIS image or in the SIS image above 2 keV. We calculated an upper limit for the X-ray flux seen toward the nucleus of NGC 404 using the following procedure. We made a one-dimensional projection of width 2′.5 along the nucleus and the serendipitous source and then fitted the profile with a model consisting of two point-spread functions (PSFs) at the positions of the two objects plus a constant background. The model PSFs were obtained using a ray-tracing code, and they were projected using the same method applied to the data. The free parameters are the normalizations of the two PSFs and the background level. We fitted the projected SIS images in the 0.5–2 keV and 2–10 keV bands and obtained 3$`\sigma `$ upper limits of $`1.1\times 10^{-14}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> and $`6.6\times 10^{-14}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>, respectively, assuming a power-law spectrum with a photon index of $`\mathrm{\Gamma }`$ = 2. For an assumed distance of 2.4 Mpc (Tully 1988), these upper limits on the X-ray flux correspond to luminosities of $`7.6\times 10^{36}`$ ergs s<sup>-1</sup> in the 0.5–2 keV band and $`4.6\times 10^{37}`$ ergs s<sup>-1</sup> in the 2–10 keV band. ### 3.2.
NGC 4111 Figures 1c and 1d show the GIS images in the 0.5–2 keV and 2–7 keV bands, respectively. At least three X-ray sources were detected in the GIS field of view, and the positions and tentative identifications of these are summarized in Table 3. One of these sources, at ($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (12<sup>h</sup> 7<sup>m</sup> 47<sup>s</sup>, +43° 6′ 53″), is brighter in the $`>`$2 keV image than in the $`<`$2 keV image. This hard source is positionally coincident, within the astrometric uncertainty of ASCA, with NGC 4117, a $`B`$ $`\approx `$ 14.0 mag Seyfert 2 galaxy first recognized by Huchra, Wyatt, & Davis (1982). The other two sources are brighter in the soft-band image. One of the soft sources coincides with the nucleus of NGC 4111, while the other has no counterpart in the NASA Extragalactic Database (NED). NGC 4111 is also detected in the SIS image, but the other sources are out of the field of view of the detector. In order to estimate the spatial extent of the X-ray emission, we made azimuthally averaged radial profiles of surface brightness using the SIS images, whose spatial resolution is better than that of the GIS, and compared them with those of the PSF. We tried to fit the radial profiles in the 0.5–2 keV and 2–7 keV bands with a PSF plus constant background model. The free parameters are the normalization of the PSF and the background level. The fits were unacceptable: $`\chi ^2`$ = 20.9 and 30.4 for 11 degrees of freedom in the 0.5–2 keV and 2–7 keV bands, respectively. In order to parameterize the spatial extent, we fitted the radial profiles with a constant background plus a two-dimensional Gaussian convolved with the PSF. The free parameters in this fit are the dispersion and normalization of the Gaussian and the background level. This model gave a significantly better fit, with $`\mathrm{\Delta }\chi ^2`$ = 14 and 21, respectively, for one additional parameter. These improvements are significant at more than 99% confidence. The best-fit values of $`\sigma `$ are summarized in Table 4. The Gaussian model provides a reasonably good representation of the observed profile. These results indicate that the X-ray images are extended on kpc scales in both the 0.5–2 keV and 2–7 keV bands. The difference in spatial extent between the two energy bands is not significant. The best-fit profile is shown in Figure 2. ### 3.3. NGC 4192 NGC 4192 and a few serendipitous sources were detected in the field. The contour maps of the SIS images in the 0.5–7 keV band are shown in Figure 1b. The positions of the detected sources are summarized in Table 3. An archival ROSAT PSPC image shows two sources, which combine to yield an elongated morphology, centered on the galaxy. The position angle of the elongation is $`72`$ deg, and these sources are aligned roughly along the direction of the minor axis of the galaxy. In the ASCA image, this elongation is not clearly seen because of limited photon statistics and spatial resolution. Since NGC 4192 is very dim and its exposure time was shorter than those of the other objects, we fitted the one-dimensional projection of the SIS images, as in the case of NGC 404, to measure the X-ray fluxes; the projection was made with a width of 3′.2 along the nucleus and the serendipitous source (source 3 in Table 3).
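Referring back to the NGC 4111 profile fits above, the significance of a $`\mathrm{\Delta }\chi ^2`$ improvement for one extra free parameter can be converted into a chance probability with the χ² distribution; a minimal sketch (using scipy, and treating the standard one-parameter χ² recipe as adequate here):

```python
from scipy.stats import chi2

def chance_probability(delta_chi2, extra_params=1):
    """Probability of a chi^2 improvement at least this large arising by
    chance when extra_params free parameters are added to the model."""
    return chi2.sf(delta_chi2, df=extra_params)

for d in (14.0, 21.0):  # improvements quoted for the NGC 4111 radial profiles
    print(f"delta chi2 = {d}: P(chance) = {chance_probability(d):.1e}")
# ~1.8e-4 and ~4.6e-6, i.e. significant at well above the 99% level
```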
We fitted the resulting profile with a constant background plus two PSFs centered at the positions of the galaxy nucleus and source 3. The free parameters in the fit are the normalization of each PSF and the background level. The fit of the projected profiles in the 0.5–2 keV and 2–10 keV bands yielded X-ray fluxes of $`5.7\times 10^{-14}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> and $`1.1\times 10^{-13}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>, respectively, for an assumed power-law spectrum with $`\mathrm{\Gamma }`$ = 1.7 (see § 4). At an adopted distance of 16.8 Mpc (Tully 1988), these fluxes correspond to luminosities of $`L`$(0.5–2 keV) = $`1.9\times 10^{39}`$ ergs s<sup>-1</sup> and $`L`$(2–10 keV) = $`3.8\times 10^{39}`$ ergs s<sup>-1</sup>. The significance of the detection is 9.6$`\sigma `$ and 6.7$`\sigma `$ in the 0.5–2 keV and 2–10 keV bands, respectively. ### 3.4. NGC 4457 NGC 4457 was detected in both the SIS and GIS images. Contour maps of the GIS images in the 0.5–2 keV and 2–7 keV bands are shown in Figures 1e and 1f. Serendipitous sources are also detected in the GIS field of view, and their positions are summarized in Table 3. The brightest one, located at ($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (12<sup>h</sup> 29<sup>m</sup> 47<sup>s</sup>, +3° 35′ 32″), is plausibly identified with the Virgo cluster galaxy VCC 1208 \[($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (12<sup>h</sup> 29<sup>m</sup> 39.2<sup>s</sup>, +3° 36′ 43″)\]. We fitted the radial profiles of the SIS images in the 0.5–2 keV and 2–7 keV bands using the same procedure as for NGC 4111. The PSF fits gave $`\chi ^2`$ values of 26.9 and 15.3, respectively, for 11 degrees of freedom. The results of the Gaussian fits are shown in Table 4. The image in the hard band is consistent with being pointlike, but the upper limit on $`\sigma `$ is large ($`\sigma <1.8`$ arcmin). Although the lower boundary of the Gaussian $`\sigma `$ for the soft-band image fit is greater than zero, this result cannot rule out the possibility that the soft X-ray source is pointlike, since it is possible that Gaussian fits to a point source result in nonzero $`\sigma `$ ($`\lesssim `$0′.2) (Ptak 1997). The best-fit profile is shown in Figure 2. ### 3.5. NGC 4569 NGC 4569 was detected in both the SIS and GIS images, and a few serendipitous sources were found in the GIS field of view. The GIS images in the 0.5–2 keV and 2–7 keV bands are shown in Figures 1g and 1h. The position of the brightest source \[($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (12<sup>h</sup> 37<sup>m</sup> 34<sup>s</sup>, +13° 18′ 47″)\] coincides closely with that of the QSO Q1235+1335 \[($`\alpha `$, $`\delta `$)<sub>J2000</sub> = (12<sup>h</sup> 37<sup>m</sup> 33.6<sup>s</sup>, +13° 19′ 6″.6); $`z`$=0.15\]. Diffuse emission from the hot gas in the Virgo cluster is also seen in the soft-band image. NGC 4569 is separated from M87 by 2.1 degrees, and the cluster emission at this angular distance has been detected in a ROSAT PSPC image by Böhringer et al. (1994). We compared the radial profiles of the SIS images in the 0.5–2 keV and 2–7 keV bands with those of the PSF and found that the SIS images are clearly extended in both energy bands. The PSF fits yielded $`\chi ^2`$ = 37.2 and 30.7 for 11 degrees of freedom, respectively.
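The flux-to-luminosity conversions quoted in this section are the simple isotropic-source relation $`L=4\pi d^2F`$; a minimal sketch, using the NGC 4192 numbers:

```python
import math

MPC_IN_CM = 3.086e24  # cm per Mpc

def luminosity(flux_cgs, distance_mpc):
    """L = 4 pi d^2 F for an isotropic source (erg/s for F in erg/s/cm^2)."""
    d_cm = distance_mpc * MPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

# NGC 4192 at 16.8 Mpc:
print(f"L(0.5-2 keV) ~ {luminosity(5.7e-14, 16.8):.1e} erg/s")  # ~1.9e39
print(f"L(2-10 keV)  ~ {luminosity(1.1e-13, 16.8):.1e} erg/s")  # ~3.7e39, matching
                                                                # the quoted 3.8e39
                                                                # to rounding
```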
The best-fit $`\sigma `$ for the Gaussian model is shown in Table 4, and the profiles are shown in Figure 2. The $`\chi ^2`$ improved significantly in this model for one additional parameter ($`\mathrm{\Delta }\chi ^2`$ = 22 and 24 for the 0.5–2 keV and 2–7 keV images, respectively). The profile in the 2–7 keV band is well fitted by a Gaussian with $`\sigma `$ = 1′.6. On the other hand, the residuals of the fit in the 0.5–2 keV band suggest the presence of a compact source at the center in addition to the emission extended over arcminute scales. These two components can be identified with the unresolved emission seen in a ROSAT HRI image (Colbert & Mushotzky 1999) and the extended emission detected in a ROSAT PSPC image (Junkes & Hensler 1996). ## 4. X-ray spectra We fitted the X-ray spectra of NGC 4111, 4117, 4457, and 4569. The spectra were extracted using a circular region centered on the nucleus with a radius of 3′–4′, which is consistent with the sizes of the objects. There were no confusing sources within the extraction radius. Background spectra were extracted from a source-free region in the same field. The spectra from the two SIS detectors (SIS0 and SIS1) were combined, as were those from the two GIS detectors (GIS2 and GIS3). Then we fitted the SIS and GIS spectra simultaneously. The Galactic hydrogen column densities used in the spectral fits are derived from the H I observations by Dickey & Lockman (1990). Since NGC 4192 is too faint for a detailed spectral analysis, we estimated its spectral shape using the hardness ratio, defined to be the photon flux ratio between the 2–10 keV and 0.5–2 keV bands. ### 4.1. NGC 4111, NGC 4457, and NGC 4569 The X-ray spectra of NGC 4111, NGC 4457 and NGC 4569 cannot be fitted with a single-component model. We tried a power-law model and a thermal bremsstrahlung model. The absorption column density of the matter along the line of sight was treated as a free parameter. We obtained unacceptable fits (Table 5). Since the values of the resulting reduced $`\chi ^2`$ are large ($`>2`$), errors are not shown for the spectral parameters in Table 5. A bump is seen around 0.8–0.9 keV in all the spectra, which can be identified with the Fe L line complex. This feature suggests the presence of a thermal plasma with a temperature of $`\sim 0.7`$ keV. A Raymond-Smith (hereafter RS; Raymond & Smith 1977) thermal plasma model also failed to give an adequate fit, and significant positive residuals were seen above $`\sim 1.5`$ keV. This indicates the presence of a hard component in addition to the soft thermal emission. Accordingly, we fitted the spectra with a two-component model which consists of a soft thermal component and a hard component. We used the RS plasma to represent the soft thermal component and a power-law or thermal bremsstrahlung contribution as the hard component. We assumed a Galactic value for the absorption column density of the RS component. The abundances were fixed at 0.1 of the solar value since they would otherwise not be well constrained by the data; adopting other values (0.3 and 0.5 solar) gave similar results within the errors. The absorption column density for the hard component is treated as a free parameter. Table 5 summarizes the results of the fitting. The two-component model reproduces the observed spectra well, and both a power law and a thermal bremsstrahlung model yield similarly good fits. The observed spectra and the best-fit RS plus power-law models are shown in Figure 3, and Table 7 lists the derived X-ray luminosities.
The RS plus thermal bremsstrahlung model gives very similar luminosities. It is worth noting that the temperatures of the thermal bremsstrahlung component in NGC 4111 and NGC 4457 are not well constrained, and only lower limits are obtained. Additionally, if the abundance of the RS component were allowed to vary, only NGC 4111 gives a constrained value (0.006–1.0 solar). We could set only a lower limit on the abundance for NGC 4457 ($`>`$0.001 solar) and NGC 4569 ($`>`$0.05 solar). The X-ray fluxes obtained from the SIS and GIS are consistent with each other, within the errors, for NGC 4111 and NGC 4457. In the case of NGC 4569, the SIS and GIS fluxes differ at the level of 20%–50%; the SIS gives a $`\sim `$ 20% smaller flux in the soft band below 2 keV, while its flux in the hard band above 2 keV is $`\sim `$ 50% larger. This discrepancy is possibly due to the diffuse emission in the NGC 4569 field and to the different background extraction regions. In the spectra of NGC 4569, negative residuals are seen around 1.2 keV. This may also be due to imperfect subtraction of the surrounding diffuse emission, since the temperature of the Virgo intracluster gas in this region is $`\sim `$ 2.5 keV, and both the Fe L line complex and the He-like Mg line are expected around an energy of 1.1–1.3 keV (Böhringer et al. 1994; Matsumoto 1998). Because the SIS and GIS each have a different spatial resolution and field of view, we cannot perfectly match the source-free regions used for background subtraction. Therefore, we regard this discrepancy (up to 50%) as a systematic error inherent in the derived X-ray fluxes and luminosities of NGC 4569. The typical errors in the fluxes and luminosities of NGC 4111 and NGC 4457 are $`\sim `$25% (this value does not include the calibration uncertainty of $`\sim `$10%). ### 4.2. NGC 4192 Since NGC 4192 is very faint, we estimated its spectral slope using a hardness ratio, $`f`$(2–10 keV)/$`f`$(0.5–2 keV). The photon flux in each energy band was calculated from the fits of the projected image described in § 3.3. We obtained a hardness ratio of $`0.48\pm 0.09`$ (1$`\sigma `$ errors). If we assume a spectral shape of a power law absorbed by the Galactic hydrogen column density in the direction of this galaxy ($`N_\mathrm{H}`$ = $`2.7\times 10^{20}`$ cm<sup>-2</sup>), this hardness ratio corresponds to $`\mathrm{\Gamma }=1.70_{-0.16}^{+0.19}`$ (1$`\sigma `$ errors). ### 4.3. NGC 4117 For completeness, we mention the results of the spectral analysis for NGC 4117, a low-luminosity Seyfert 2 galaxy serendipitously observed in the GIS field of NGC 4111. It is of great interest to compare the spectral properties of LINER 2s with those of low-luminosity Seyfert 2s. The GIS spectrum of NGC 4117 is shown in Figure 4. It is clear that the soft X-rays are significantly absorbed by a large column density. We fitted the spectrum with an absorbed power-law model. The best-fit parameters are $`\mathrm{\Gamma }=0.92_{-0.81}^{+1.16}`$ and $`N_\mathrm{H}`$ = $`2.8_{-1.0}^{+1.6}\times 10^{23}`$ cm<sup>-2</sup>. Since small positive residuals are seen below 2 keV, we also tried to add a power-law component with little absorption. We assumed that the photon indices of both power laws are the same (equivalent to a partially covered power-law model) and that the absorption column for the less absorbed power-law component is equal to the Galactic value ($`N_\mathrm{H}`$ = $`1.4\times 10^{20}`$ cm<sup>-2</sup>).
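Returning to the NGC 4192 hardness ratio of § 4.2: the mapping from hardness ratio to photon index can be reproduced with a one-line integral. The sketch below neglects the (small) Galactic absorption, which is why it returns an index slightly flatter than the $`\mathrm{\Gamma }`$ = 1.70 obtained with absorption included.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def hardness_ratio(gamma):
    """Photon-flux ratio f(2-10 keV)/f(0.5-2 keV) for an unabsorbed power law
    N(E) ~ E**(-gamma)."""
    hard = quad(lambda e: e ** (-gamma), 2.0, 10.0)[0]
    soft = quad(lambda e: e ** (-gamma), 0.5, 2.0)[0]
    return hard / soft

gamma = brentq(lambda g: hardness_ratio(g) - 0.48, 0.5, 3.5)
print(f"Gamma ~ {gamma:.2f}")  # ~1.6 without absorption; ~1.7 with N_H = 2.7e20
```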
In this model, $`\chi ^2`$ improved by only $`\mathrm{\Delta }\chi ^2=2.5`$, and the resulting best-fit model parameters, as summarized in Table 6, are $`\mathrm{\Gamma }=1.11_{-1.01}^{+0.97}`$ and $`N_\mathrm{H}`$ = $`3.0_{-1.1}^{+0.9}\times 10^{23}`$ cm<sup>-2</sup>. The X-ray luminosities for this model are shown in Table 7, where we adopt a distance of 17 Mpc (Tully 1988). The intrinsic X-ray luminosity of $`1.3\times 10^{41}`$ ergs s<sup>-1</sup> is one of the lowest values ever observed for Seyfert 2 galaxies in the hard X-ray band. We added a narrow Gaussian to the above models to constrain the Fe K fluorescent line. No significant improvement of $`\chi ^2`$ was obtained. The upper limits on the equivalent width are 150 and 125 eV for the single power-law model and the partially covered power-law model, respectively. The observed X-ray spectrum, obscured by a large column density ($`N_\mathrm{H}`$ $`>10^{23}`$ cm<sup>-2</sup>), is quite similar to those of more luminous Seyfert 2 galaxies. The best-fit photon index is flat (0.9–1.1), although the error is large. It is flatter than the canonical value in Seyfert 1 galaxies (e.g., Nandra et al. 1997), and such an apparently flat spectral slope is often observed in Seyfert 2 galaxies (Awaki et al. 1991a; Smith & Done 1996; Turner et al. 1997a). Seyfert 2 galaxies usually show a fluorescent Fe K emission line. Although the upper limit on the equivalent width is slightly smaller than that expected from cold matter of $`N_\mathrm{H}`$ = $`3\times 10^{23}`$ cm<sup>-2</sup> along the line of sight (150–200 eV; Awaki et al. 1991a; Leahy & Creighton 1993; Ghisellini, Haardt, & Matt 1994), it is consistent with Seyfert 2 galaxies within the scatter in the plot of equivalent width versus absorption column density (Fig. 1 in Turner et al. 1997b). Therefore, we found no clear difference between luminous Seyfert 2 galaxies and the low-luminosity Seyfert 2 NGC 4117. ## 5. Discussion ### 5.1. Hard Component We have detected hard X-ray emission from all the objects except for NGC 404. The hard X-ray (2–7 keV) images of NGC 4111 and NGC 4569 are clearly extended on scales of several kpc, an indication that a nonstellar, active nucleus is not the primary source of the hard X-ray emission. This is also consistent with the lack of time variability. Other lines of evidence suggest that at least some of these galaxies have experienced recent star formation. The UV spectra of NGC 404 and NGC 4569, for example, show unambiguous stellar absorption lines arising from young, massive stars (Maoz et al. 1998). Starburst galaxies are also a source of hard X-rays, and their X-ray spectra can be modeled as thermal bremsstrahlung emission with a temperature of several keV (e.g., Moran & Lehnert 1997; Ptak et al. 1997; Persic et al. 1998). However, the morphologies of the hard X-ray emission in starburst galaxies tend to be either pointlike or only slightly extended (Tsuru et al. 1997; Cappi et al. 1999), significantly more compact than observed in our sample. It appears, therefore, that hard X-ray emission associated with starburst activity does not contribute significantly to the emission observed in the objects in our sample, although this conclusion remains for the moment tentative because only a small number of starburst galaxies have been studied in the hard X-rays. Note that the extended hard X-ray emission in the starburst galaxy M83 is interpreted as due to a collection of X-ray binaries in its bulge (Okada, Mitsuda, & Dotani 1997).
In normal spiral galaxies, X-ray emission comes mainly from discrete sources such as low-mass X-ray binaries (Fabbiano 1989; Makishima et al. 1989). The X-ray sizes of NGC 4111 and NGC 4569 in the hard band are similar to their optical sizes. The upper limit on the X-ray size of NGC 4457 is also consistent with its optical size. The extended hard-band images of the above objects are consistent with a discrete-source origin. Their X-ray spectra, approximated by a thermal bremsstrahlung model with a temperature of several keV, are as expected from a collection of low-mass X-ray binaries (Makishima et al. 1989). The X-ray luminosities of normal spiral galaxies are roughly proportional to their optical ($`B`$-band) light (Fabbiano 1989). Table 8 gives the $`L_\mathrm{X}`$/$`L_B`$ values for the galaxies in our sample; taking into consideration the scatter in the $`L_\mathrm{X}`$–$`L_B`$ relation, these values agree with those seen in normal galaxies. We conclude that the extended hard X-ray emission in our sample probably arises from a collection of discrete X-ray sources in the host galaxies, and we find no clear evidence for the presence of an AGN. Note, however, that our data do not rule out the presence of an AGN. If present, the nonstellar X-ray luminosity in the 2–10 keV band must be significantly smaller than a few $`\times `$ $`10^{39}`$ ergs s<sup>-1</sup>. In the case that an AGN core is heavily obscured by a large column density of $`N_\mathrm{H}`$ $`>10^{24}`$ cm<sup>-2</sup>, we expect to see only X-ray emission scattered by a warm and/or cold reflector. If the scattering fraction is $`\sim `$3% (Turner et al. 1997b; Awaki, Ueno, & Taniguchi 1999), an upper limit on the intrinsic luminosity is estimated to be $`\sim 10^{41}`$ ergs s<sup>-1</sup>. Recent BeppoSAX observations have shown that several X-ray weak Seyfert 2s are highly obscured by Compton-thick matter (Maiolino et al. 1998). Such a situation could also occur in LINER 2s. If a heavily obscured AGN is present, a strong Fe K emission line at 6.4 keV is expected, with an equivalent width larger than 1 keV (e.g., Terashima et al. 1998b). Unfortunately, this hypothesis cannot be tested with our data because the limited photon statistics do not permit us to set stringent upper limits on the equivalent width of the Fe K line (typical upper limits on the equivalent width are $`\sim `$ 2 keV). We will be able to address this issue with future high-energy observations on missions that have larger effective areas, such as XMM and ASTRO-E, and that have much finer spatial resolution, such as Chandra. Recent work on massive black holes in (nearly) normal galaxies has shown that the mass of the central black hole is about 1/100 to 1/1000 of the mass of the bulge (e.g., Magorrian et al. 1998). Using this black hole mass–bulge mass correlation and our upper limits on the X-ray luminosities of any AGN, we can estimate upper limits on the Eddington ratio of the accretion onto massive black holes in the galaxies in our sample. We calculated the black hole mass using the relations $`M_{\mathrm{BH}}`$ = 0.005$`M_{\mathrm{bulge}}`$ and $`M_{\mathrm{bulge}}`$ = $`5\times 10^9M_{\odot }(L_{\mathrm{bulge}}/10^9L_{\odot })^{1.2}`$ (Richstone et al. 1998 and references therein).
We used the upper limits on the X-ray luminosities in the 2–10 keV band, of $`1.5\times 10^{39}`$ ergs s<sup>-1</sup> for NGC 404 and $`1\times 10^{41}`$ ergs s<sup>-1</sup> for the others, and assumed a bolometric correction of a factor of 10 and a scattering fraction of $`3\%`$. The bulge luminosities are calculated using the data in Table 11 of Ho et al. (1997a). We obtained upper limits on the Eddington ratios in the range (1–5) $`\times 10^{-5}`$. Thus we find that, if supermassive black holes are present in these galaxies, as in most galaxies, mass accretion is taking place at a very low rate or with very low radiative efficiency. ### 5.2. Soft Component In NGC 4111, NGC 4457, and NGC 4569 we detected soft X-ray emission that can be represented by a Raymond-Smith thermal plasma with $`kT\sim 0.65`$ keV. Extended hot gas with a temperature $`kT<1`$ keV is generally observed in starburst galaxies, and it is interpreted as due to gas shock-heated by the collective action of supernovae (e.g., Dahlem, Weaver, & Heckman 1998). The X-ray luminosity of the hot gas component is roughly proportional to the far-infrared (FIR) luminosity: $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{FIR}}`$ $`\sim `$ –4 (Heckman, Armus, & Miley 1990; David, Forman, & Jones 1992). We calculated the $`L_\mathrm{X}`$/$`L_{\mathrm{FIR}}`$ ratios for the galaxies in our sample in order to test the starburst origin of the soft thermal component. We use the X-ray luminosities of the RS component corrected for absorption in the 0.5–4 keV band, and $`L_{\mathrm{FIR}}`$ is calculated from the flux $`1.26\times 10^{-14}(2.58S_{60}+S_{100})`$ W m<sup>-2</sup>, where $`S_{60}`$ and $`S_{100}`$ are the flux densities at 60 $`\mu `$m and 100 $`\mu `$m, respectively, in units of janskys (Table 1; see Ho et al. 1997a for details). We used the observed total luminosities in the 0.5–2 keV band for NGC 404 and NGC 4192, since we cannot separate the thermal component from the total emission. The relations between the soft X-ray and far-infrared luminosities are plotted in Figure 5, where the soft X-ray luminosities refer only to the Raymond-Smith plasma component, except for NGC 404 and NGC 4192. We compiled the X-ray luminosities of the soft thermal component in X-ray bright starburst galaxies from the ASCA archives (for detailed results, see Okada et al. 1996; Ptak et al. 1997; Dahlem, Weaver, & Heckman 1998; Ptak et al. 1999; Della Ceca et al. 1999; Zezas et al. 1999; Heckman et al. 1999; Moran, Lehnert, & Helfand 1999). We fitted their ASCA spectra with a model consisting of an RS plasma plus a thermal bremsstrahlung component and measured the intrinsic (absorption-corrected) X-ray luminosities of the RS component. These points are also shown in Figure 5. The $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{FIR}}`$ values (Table 8 and Figure 5) are distributed around $`\sim `$ –3.5 to –4, which is consistent, within the scatter, with what is seen in starburst galaxies. With the ASCA spatial resolution we can only say that the X-ray size is roughly the same as the optical size, consistent with that indicated by higher resolution ROSAT data for starburst galaxies. The radial profile of the soft X-ray image of NGC 4569 indicates the presence of a compact nuclear component in addition to the extended component ($`\sigma `$ > 1′). A comparison between the radial profiles in the soft and hard bands suggests that the compact component has a soft spectrum, which could be identified with the unresolved source seen in the ROSAT HRI image of Colbert & Mushotzky (1999).
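Returning to the Eddington-ratio estimate above: it amounts to a short chain of scalings, sketched here with the caveat that the bulge luminosity is an illustrative placeholder (the actual values come from Table 11 of Ho et al. 1997a) and that the quoted X-ray upper limit already folds in the 3% scattering fraction.

```python
def black_hole_mass_msun(l_bulge_lsun):
    """M_BH = 0.005 M_bulge, with M_bulge = 5e9 Msun (L_bulge / 1e9 Lsun)^1.2."""
    m_bulge = 5e9 * (l_bulge_lsun / 1e9) ** 1.2
    return 0.005 * m_bulge

def eddington_ratio(lx_2_10_erg_s, l_bulge_lsun, bol_corr=10.0):
    """Upper limit on L_bol / L_Edd from a 2-10 keV luminosity upper limit."""
    l_edd = 1.3e38 * black_hole_mass_msun(l_bulge_lsun)  # erg/s
    return bol_corr * lx_2_10_erg_s / l_edd

# Illustrative bulge of 1e10 Lsun with the L_X < 1e41 erg/s intrinsic limit:
print(f"L/L_Edd < {eddington_ratio(1e41, 1e10):.1e}")  # ~2e-5, in the (1-5)e-5 range
```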
A compact but resolved nuclear source is detected in a UV image of NGC 4569 taken with HST (Barth et al. 1998), and absorption features in its UV spectrum indicate that the UV source is a cluster of hot stars (Maoz et al. 1998). The X-ray spectra of O-type stars can be modeled by a thermal plasma with a temperature of $`kT\sim 1`$ keV (e.g., Corcoran et al. 1994; Kitamoto & Mukai 1996). Since individual O stars have X-ray luminosities of $`10^{32}`$–$`10^{33}`$ ergs s<sup>-1</sup> (e.g., Rosner, Golub, & Vaiana 1985), about $`10^7`$ O stars would be required to explain the X-ray source at the nucleus detected in the ROSAT HRI image and in the ASCA soft-band image. This number, however, is far larger than the number of O stars needed to explain the observed H$`\alpha `$ luminosity and the strength of the UV continuum ($`<`$1000; Maoz 1999). Moreover, we can rule out the possibility of such a giant cluster of O stars from dynamical constraints. A cluster of $`10^7`$ O5 stars, each $`\sim `$40 $`M_{\odot }`$, would amount to a total mass of $`4\times 10^8`$ $`M_{\odot }`$. Even under the unreasonable assumption that stars of lower masses are absent, this mass strongly violates the dynamical mass limit of the nucleus, which has been estimated to be $`<10^6`$–$`10^7`$ $`M_{\odot }`$ by Keel (1996). Therefore, we conclude that the hot star contribution to the observed soft X-ray luminosity is minor. This argument also applies to all the objects in our sample, except for NGC 404, for which only an upper limit on the X-ray luminosity is available. Finally, we note that scattered light from a hidden AGN is unlikely to be the source of the soft X-ray emission. If the AGN emission is blocked by a large column density along the line of sight, only emission scattered by cold and/or warm material would be observed. If we fit the X-ray spectra below 2 keV with a simple power-law model, the photon indices become steeper than 3 for NGC 4111, NGC 4457, and NGC 4569. This spectral slope is significantly steeper than normally seen in Seyfert 2s, where scattering by warm material is thought to prevail. Cold reflection produces an X-ray spectrum that is flatter than the intrinsic spectrum. Additionally, the extended morphology of the soft X-ray emission makes it unlikely that the photoionized medium can be maintained in a highly ionized state. Thus the soft component probably does not originate from scattered AGN emission. Instead, the most likely origin for the soft thermal component is supernova-heated hot gas. ### 5.3. Ionization Photon Budget We have found no clear evidence for the presence of an AGN in the galaxies in our sample. In order to compare the ionization source in LINER 2s and transition objects with that of LINER 1s and low-luminosity Seyfert galaxies, we calculated the $`L_\mathrm{X}`$(2–10 keV)/$`L_{\mathrm{H}\alpha }`$ ratios (Table 8), where we used the luminosities of the narrow component of the H$`\alpha `$ emission. The $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ values of the galaxies in our sample are systematically lower, by more than one order of magnitude, than those of LINER 1s and low-luminosity Seyferts (Terashima 1999; Terashima et al. 1999), objects where LLAGNs are almost certainly present. The mean $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ is 1.6 for LLAGNs, while the $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ values for our sample are smaller than 0.61.
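As an aside on the O-star budget earlier in this section, the energetics reduce to a two-line estimate; a minimal sketch (the 10<sup>39</sup> ergs s<sup>-1</sup> nuclear luminosity is an illustrative round number of the order of the soft-band luminosities in Table 7):

```python
def n_o_stars(l_x_nuclear_erg_s, l_x_per_star_erg_s=1e32):
    """O stars needed to supply a given X-ray luminosity, at ~1e32-1e33 erg/s
    each; the conservative low end of the range is used by default."""
    return l_x_nuclear_erg_s / l_x_per_star_erg_s

n = n_o_stars(1e39)
print(f"N(O stars) ~ {n:.0e}; cluster mass ~ {40 * n:.0e} Msun for ~40 Msun each")
# ~1e7 stars and ~4e8 Msun, far above the dynamical limit of <1e6-1e7 Msun
```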
We estimate the number of ionizing photons needed to account for the observed H$`\alpha `$ luminosities, assuming a spectral energy distribution of $`f_\nu \propto \nu ^{-1}`$, which is the typical spectral shape between the UV and X-rays observed in LLAGNs (Ho 1999b), Case B recombination (Osterbrock 1989), and a covering factor of unity for the ionized gas. A value of $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ = 1.4 is then sufficient to explain the H$`\alpha `$ luminosity by photoionization by an AGN. The observed $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ values, on the other hand, are significantly lower than this. If the AGN is not significantly obscured, its luminosity in the 2–10 keV band is estimated to be less than a few $`\times `$ $`10^{39}`$ ergs s<sup>-1</sup>. In this case, photons from an AGN account for only a very small fraction ($`\sim `$5%) of the observed H$`\alpha `$ luminosity. If the H$`\alpha `$ emission is due to ionization by an AGN, one would then have to postulate that the AGN is heavily obscured, with a column density greater than $`10^{24}`$ cm<sup>-2</sup>, and that only scattered radiation is observable. Alternatively, an ionizing source other than an AGN is required. As discussed in the next subsection, the galaxies in our sample have lower \[O I\]$`\lambda 6300`$/H$`\alpha `$ values than LINERs that are most likely to be AGNs. If the low \[O I\]/H$`\alpha `$ ratio is due to dilution by H II regions, which produce strong Balmer lines and very weak \[O I\], the difference in $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ between the two classes is reduced. Since the median \[O I\]/H$`\alpha `$ ratio for the LINER 2s is about a factor of 3 smaller than that for the LINER 1s with LLAGNs, the H$`\alpha `$ emission tracing the AGN would be 1/3 of the total measured value. Even in this case, however, the observed hard X-ray luminosities are not enough to drive the H$`\alpha `$ luminosities. ### 5.4. Optical Emission Line Ratios and Ionizing Source Our sample includes three LINERs (NGC 404, NGC 4111, and NGC 4457) and two transition objects (NGC 4192 and NGC 4569). By the definition of Ho et al. (1993, 1997a), transition objects have a smaller \[O I\] $`\lambda `$6300/H$`\alpha `$ ratio than LINERs (Table 8; data from Ho et al. 1997a). This class of emission-line nuclei has been postulated to be composite systems in which a LINER nucleus is spatially contaminated by circumnuclear star-forming regions (Ho et al. 1993; Ho 1996). On the other hand, photoionization by hot stars in environments with ionization parameters characteristically lower than in “normal” giant extragalactic H II regions also generates the spectral properties of transition objects (Filippenko & Terlevich 1992; Shields 1992). The presence of hot stars is seen directly in the UV spectrum of NGC 4569 (Maoz et al. 1998); these stars can provide the power to explain the observed emission-line luminosities if very massive stars are still present. The low X-ray output of these systems, as found in this study, lends further support to this conclusion. Thus, at least some transition objects are likely to be powered by hot stars. The nucleus of NGC 4192, however, is not detected in the UV band, possibly because of the large extinction due to the high inclination of the galaxy (83°; Barth et al. 1998). The X-ray properties and optical emission-line ratios of NGC 4192 are similar to those of NGC 4569, and it, too, might be primarily powered by hot stars.
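To make the § 5.3 photon budget explicit: for $`f_\nu \propto \nu ^{-1}`$ the required $`\mathrm{log}`$ $`L_\mathrm{X}`$/$`L_{\mathrm{H}\alpha }`$ follows in closed form; a minimal sketch (Case B at $`\sim 10^4`$ K, every ionizing photon absorbed):

```python
import math

ERG_PER_EV = 1.602e-12
HALPHA_PER_Q = 1.37e-12  # erg of H-alpha per ionizing photon (Case B, ~1e4 K)

def log_lx_over_lha(e_min_kev=2.0, e_max_kev=10.0, e0_ev=13.6):
    """Required log L_X(2-10 keV)/L_Halpha for f_nu ~ nu^-1: with this SED,
    L_X = C ln(E_max/E_min) while Q(H) = C / (h nu_0) = C / (13.6 eV)."""
    q_per_lx = 1.0 / (math.log(e_max_kev / e_min_kev) * e0_ev * ERG_PER_EV)
    return math.log10(1.0 / (q_per_lx * HALPHA_PER_Q))

print(f"log L_X/L_Ha ~ {log_lx_over_lha():.2f}")  # ~1.4, as quoted above
```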
We compared the \[O I\]/H$`\alpha `$ ratios of the LINERs in our sample with those of LINERs from which AGN-like X-ray emission has been detected. The \[O I\]/H$`\alpha `$ ratios for NGC 404, NGC 4111, and NGC 4457 (Table 8) are lower than in LINERs that are strong LLAGN candidates, and they are located at the lowest end of the distribution of \[O I\]/H$`\alpha `$ in LINERs (Fig. 7 in Ho et al. 1997a). For comparison, the \[O I\]/H$`\alpha `$ values for the few LINERs where compact hard X-ray emission has been detected are 0.71 (NGC 1052), 0.53 (NGC 3998), 1.22 (NGC 4203), 0.48 (NGC 4579), 0.18 (NGC 4594), and 0.24 (NGC 4736); the X-ray results for these objects are published in Weaver et al. (1999), Guainazzi & Antonelli (1999), Awaki et al. (1991b), Iyomoto et al. (1998), Terashima et al. (1998a), Nicholson et al. (1998), and Roberts et al. (1999). It is intriguing that the HST UV spectrum of NGC 404 also shows strong evidence for the presence of energetically significant hot stars (Maoz et al. 1998); no UV spectral information is available for NGC 4111 and NGC 4457. It is conceivable that the subset of LINERs with exceptionally weak \[O I\] emission owes its primary excitation mechanism to stellar photoionization. Obviously more observations are necessary to settle this issue. A statistical study using a large sample of objects will be presented elsewhere. ## 6. Summary We presented ASCA results for a small sample of LINERs (NGC 404, NGC 4111, and NGC 4457) and transition objects (NGC 4192 and NGC 4569). X-ray emission was detected in all objects except NGC 404. The X-ray luminosities in the 2–10 keV band range from $`4\times 10^{39}`$ to $`1\times 10^{40}`$ ergs s<sup>-1</sup>. The images of NGC 4111 and NGC 4569 are extended on scales of several kpc in both the soft ($`<`$2 keV) and hard ($`>`$2 keV) energy bands. The X-ray spectra of NGC 4111, NGC 4457, and NGC 4569 are well represented by a two-component model consisting of a soft thermal plasma of $`kT\sim 0.65`$ keV plus a hard component (power law or thermal bremsstrahlung). The soft X-ray emission probably originates from hot gas produced via recent star formation activity, because both the temperature of the gas and the $`L_\mathrm{X}`$/$`L_{\mathrm{FIR}}`$ ratios are typical of starburst galaxies. The extended morphology of the hard X-ray emission indicates that it mainly comes from discrete sources in the host galaxies, and that the AGN contribution is small, if any. The $`L_\mathrm{X}`$(2–10 keV)/$`L_{\mathrm{H}\alpha }`$ values for the galaxies in our sample are more than one order of magnitude smaller than in LINERs with bona fide LLAGNs (those with a detectable broad H$`\alpha `$ emission line and compact hard X-ray emission), and the X-ray luminosities are insufficient for driving the optical emission-line luminosities. These facts imply that, if an AGN is present, it would have to be heavily obscured, with a column density much greater than $`10^{23}`$ cm<sup>-2</sup>. We suggest that the optical emission lines in the galaxies in our sample are mainly powered by photoionization by hot, young stars rather than by an AGN. This hypothesis is supported by the detection of stellar features due to massive stars in the UV spectra of NGC 404 and NGC 4569, by the systematically lower \[O I\]/H$`\alpha `$ ratios in these objects compared to LINERs with bona fide LLAGNs, and by the low observed X-ray luminosities reported in this work.
We also analyzed the X-ray properties of NGC 4117, a low-luminosity Seyfert 2 galaxy serendipitously observed in the field of NGC 4111, and found its properties to be consistent with other Seyfert 2 galaxies with moderate absorbing columns. The authors are grateful to all the ASCA team members. We also thank an anonymous referee for useful comments. YT thanks JSPS for support. LCH acknowledges partial financial support from NASA grants GO-06837.01-95A, GO-07357.02-96A, and AR-07527.02-96A, which have been awarded by the Space Telescope Science Institute (operated by AURA, Inc., under NASA contract NAS5-26555). We made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
no-problem/9911/astro-ph9911059.html
ar5iv
text
# TESTING HOMOGENEITY ON LARGE SCALES ## 1. Introduction The Cosmological Principle was first adopted when observational cosmology was in its infancy; it was then little more than a conjecture, embodying ‘Occam’s razor’ for the simplest possible model. Observations could not then probe to significant redshifts, the ‘dark matter’ problem was not well-established, and the Cosmic Microwave Background (CMB) and the X-Ray Background (XRB) were still unknown. If the Cosmological Principle turned out to be invalid, then the consequences for our understanding of cosmology would be dramatic; for example, the conventional way of interpreting the age of the Universe, its geometry and matter content would have to be revised. Therefore it is important to revisit this underlying assumption in the light of new galaxy surveys and measurements of the background radiations. As with any other idea about the physical world, we cannot prove a model; we can only falsify it. Proving the homogeneity of the Universe is particularly difficult, as we observe the Universe from one point in space, and we can directly deduce only isotropy. The practical methodology we adopt is to assume homogeneity and to assess the level of fluctuations relative to the mean, and hence to test for consistency with the underlying hypothesis. If the assumption of homogeneity turns out to be wrong, then there are numerous possibilities for inhomogeneous models, and each of them must be tested against the observations. Despite the rapid progress in estimating the density fluctuations as a function of scale, two gaps remain: (i) it is still unclear how to relate the distributions of galaxies and mass (i.e., ‘biasing’); (ii) relatively little is known about fluctuations on intermediate scales between those of local galaxy surveys ($`\sim 100h^{-1}`$ Mpc) and the scales probed by COBE ($`\sim 1000h^{-1}`$ Mpc). Here we examine the degree of smoothness with scale by considering redshift and peculiar velocity surveys, radio sources, the XRB, the Ly-$`\alpha `$ forest, and the CMB. We discuss some inhomogeneous models and show that a fractal model on large scales is highly improbable. Assuming an FRW metric, we evaluate the ‘best-fit Universe’ by performing a joint analysis of cosmic probes. ## 2. Cosmological Principle(s) Cosmological Principles were stated over different periods in human history based on philosophical and aesthetic considerations rather than on fundamental physical laws. Rudnicki (1995) summarized some of these principles in modern-day language: • The Ancient Indian: The Universe is infinite in space and time and is infinitely heterogeneous. • The Ancient Greek: Our Earth is the natural centre of the Universe. • The Copernican CP: The Universe as observed from any planet looks much the same. • The Generalized CP: The Universe is (roughly) homogeneous and isotropic. • The Perfect CP: The Universe is (roughly) homogeneous in space and time, and is isotropic in space. • The Anthropic Principle: A human being, as he/she is, can exist only in the Universe as it is. We note that the Ancient Indian principle can be viewed as a ‘fractal model’. The Perfect CP led to the steady state model, which, although more symmetric than the Generalized CP, was rejected on observational grounds. The Anthropic Principle is becoming popular again, e.g., in explaining a non-zero cosmological constant.
Our goal here is to quantify 'roughly' in the definition of the Generalized CP, and to assess whether one may safely assume the Friedmann-Robertson-Walker (FRW) metric of space-time.

## 3. Probes of Smoothness

### 3.1. The CMB

The CMB is the strongest evidence for homogeneity. Ehlers, Geren and Sachs (1968) showed that by combining the CMB isotropy with the Copernican principle one can deduce homogeneity. More formally, the EGS theorem (based on the Liouville theorem) states that "If the fundamental observers in a dust spacetime see an isotropic radiation field, then the spacetime is locally FRW". The COBE measurements of temperature fluctuations $`\mathrm{\Delta}T/T=10^{-5}`$ on scales of $`10^{\circ}`$ give, via the Sachs-Wolfe effect ($`\mathrm{\Delta}T/T=\frac{1}{3}\mathrm{\Delta}\varphi/c^2`$) and the Poisson equation, rms density fluctuations of $`\frac{\delta\rho}{\rho}\sim 10^{-4}`$ on $`1000h^{-1}\mathrm{Mpc}`$ (e.g. Wu, Lahav & Rees 1999; see Fig. 3 here), i.e. the deviations from a smooth Universe are tiny.

### 3.2. Galaxy Redshift Surveys

Figure 1 shows the distribution of galaxies in the ORS and IRAS redshift surveys. It is apparent that the distribution is highly clumpy, with the Supergalactic Plane seen in full glory. However, deeper surveys such as LCRS show that the fluctuations decline as the length-scales increase. Peebles (1993) has shown that the angular correlation functions for the Lick and APM surveys scale with magnitude as expected in a universe which approaches homogeneity on large scales. Existing optical and IRAS (PSCz) redshift surveys contain $`\sim 10^4`$ galaxies. Multifibre technology now allows us to measure redshifts of millions of galaxies. Two major surveys are underway. The US Sloan Digital Sky Survey (SDSS) will measure redshifts of about 1 million galaxies over a quarter of the sky. The Anglo-Australian 2 degree Field (2dF) survey will measure redshifts for 250,000 galaxies selected from the APM catalogue. About 60,000 2dF redshifts have been measured so far (as of October 1999). The median redshift of both the SDSS and 2dF galaxy redshift surveys is $`\overline{z}\sim 0.1`$. While they can provide interesting estimates of the fluctuations on scales of hundreds of Mpc, the problems of biasing, evolution and $`K`$-correction would limit the ability of SDSS and 2dF to 'prove' the Cosmological Principle (cf. the analysis of the ESO slice by Scaramella et al. 1998 and Joyce et al. 1999).

### 3.3. Peculiar Velocities

Being the topic of this conference, the most recent work in this area is summarized by others in this volume. Peculiar velocities are powerful as they probe the mass distribution directly. Unfortunately, as distance errors increase with distance, the scales probed are smaller than the interesting scale of transition to homogeneity. On the other hand, the gravity tidal field can tell us about scales outside the survey volume (e.g. Lilje, Yahil & Jones 1986; Hoffman 1999). The rms bulk flow for a sphere of radius $`R`$ is $`V_{bulk}=AR^{-(n+1)/2}`$ for a power spectrum of the form $`P(k)\propto k^n`$. Conflicting results reported in this conference on both the amplitude $`A`$ and the coherence of the flow suggest that peculiar velocities cannot yet set strong constraints on the amplitude of fluctuations on scales of hundreds of Mpc. Perhaps the most promising method for the future is the kinematic Sunyaev-Zeldovich effect, which allows one to measure the peculiar velocities of clusters out to high redshift.
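To make the scaling explicit, a short numerical sketch (ours, for illustration only) evaluates $`V_{bulk}=AR^{-(n+1)/2}`$ for a few spectral indices; the normalization of 300 km/sec at $`60h^{-1}`$ Mpc echoes the bulk-flow value quoted in Sec. 3.5 but is otherwise an arbitrary placeholder.

```python
# Sketch: radius dependence of the rms bulk flow for a power-law
# spectrum P(k) ~ k^n, V_bulk = A * R**(-(n+1)/2). The normalisation
# (300 km/s at R0 = 60 h^-1 Mpc) is an illustrative placeholder.

def bulk_flow(R, n, A=300.0, R0=60.0):
    """Rms bulk flow (km/s) in a sphere of radius R (h^-1 Mpc)."""
    return A * (R / R0) ** (-(n + 1) / 2.0)

for n in (-1.0, 0.0, 1.0):
    flows = [round(bulk_flow(R, n)) for R in (60, 150, 300, 600)]
    print(f"n = {n:+.0f}:", flows, "km/s at R = 60, 150, 300, 600")
```

For $`n>-1`$ the flow decreases with radius, which is why bulk-flow constraints weaken precisely on the scales where the transition to homogeneity is expected.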
There are also conflicting claims about a 'local bubble'. Zehavi et al. (1998) found, using a SNIa sample, evidence for a bubble of radius $`70h^{-1}\mathrm{Mpc}`$ with $`\mathrm{\Delta}H/H\approx 6.5\%\pm 2\%`$ (a 20% underdensity). Giovanelli et al. (1999), using samples of clusters, claimed a smooth flow beyond $`50h^{-1}\mathrm{Mpc}`$. The agreement between the CMB dipole and the dipole anisotropy of relatively nearby galaxies argues in favour of large scale homogeneity. The IRAS dipole (Strauss et al. 1992; Webster et al. 1998; Schmoldt et al. 1999) shows an apparent convergence of the dipole, with a misalignment angle of only $`15^{\circ}`$. Schmoldt et al. (1999) claim that 2/3 of the dipole arises from within $`40h^{-1}\mathrm{Mpc}`$, but again it is difficult to 'prove' convergence from catalogues of finite depth.

### 3.4. Radio Sources

Radio sources in surveys have typical median redshift $`\overline{z}\sim 1`$, and hence are useful probes of clustering at high redshift. Unfortunately, it is difficult to obtain distance information from these surveys: the radio luminosity function is very broad, and it is difficult to measure optical redshifts of distant radio sources. Earlier studies claimed that the distribution of radio sources supports the 'Cosmological Principle'. However, the wide range in intrinsic luminosities of radio sources would dilute any clustering when projected on the sky. Recent analyses of new deep radio surveys (e.g. FIRST) suggest that radio sources are actually clustered at least as strongly as local optical galaxies (e.g. Cress et al. 1996; Magliocchetti et al. 1998). Nevertheless, on the very large scales the distribution of radio sources seems nearly isotropic. Comparison of the measured quadrupole in a radio sample in the Green Bank and Parkes-MIT-NRAO 4.85 GHz surveys to the theoretically predicted ones (Baleisis et al. 1998) offers a crude estimate of the fluctuations on scales $`\lambda\sim 600h^{-1}`$ Mpc. The derived amplitudes are shown in Figure 3 for the two assumed Cold Dark Matter (CDM) models. Given the problems of catalogue matching and shot-noise, these points should be interpreted at best as 'upper limits', not as detections.

### 3.5. The XRB

Although discovered in 1962, the origin of the X-ray Background (XRB) is still unknown, but it is likely to be due to sources at high redshift (for reviews see Boldt 1987; Fabian & Barcons 1992). Here we shall not attempt to speculate on the nature of the XRB sources. Instead, we utilise the XRB as a probe of the density fluctuations at high redshift. The XRB sources are probably located at redshifts $`z<5`$, making them convenient tracers of the mass distribution on scales intermediate between those in the CMB as probed by COBE, and those probed by optical and IRAS redshift surveys (see Figure 3). The interpretation of the results depends somewhat on the nature of the X-ray sources and their evolution. The rms dipole and higher moments of spherical harmonics can be predicted (Lahav et al. 1997) in the framework of growth of structure by gravitational instability from initial density fluctuations. By comparing the predicted multipoles to those observed by HEAO1 (Treyer et al. 1998) we estimate the amplitude of fluctuations for an assumed shape of the density fluctuations (e.g. CDM models). Figure 3 shows the amplitude of fluctuations derived at the effective scale $`\lambda\sim 600h^{-1}`$ Mpc probed by the XRB. The observed fluctuations in the XRB are roughly as expected from interpolating between the local galaxy surveys and the COBE CMB experiment.
The rms fluctuations $`\frac{\delta\rho}{\rho}`$ on a scale of $`600h^{-1}`$ Mpc are less than 0.2%. Scharf et al. (1999) have shown that by eliminating known X-ray sources out to an effective depth of $`60h^{-1}\mathrm{Mpc}`$ one can estimate the bulk flow of that sphere due to the mass represented by the remaining unresolved XRB sources. They found that under certain approximations the expected bulk flow is $`V_{bulk}\sim 1400\mathrm{\Omega}_m^{0.6}/b_x(0)`$ km/sec, where $`b_x(0)`$ is the present-epoch X-ray bias parameter. Using current estimates of the bulk flow of $`60h^{-1}\mathrm{Mpc}`$ spheres, $`\sim 300`$ km/sec (Dekel et al. 1999), this suggests $`\mathrm{\Omega}_m^{0.6}/b_x(0)\sim 1/5`$, quite low relative to other studies.

### 3.6. The Lyman-$`\alpha`$ Forest

The Lyman-$`\alpha`$ forest reflects the neutral hydrogen distribution and therefore is likely to be a more direct tracer of the mass distribution than galaxies are. Unlike galaxy surveys, which are limited to the low redshift Universe, the forest spans a large redshift interval, typically $`1.8<z<4`$, corresponding to a comoving interval of $`\sim 600h^{-1}\mathrm{Mpc}`$. Also, observations of the forest are not contaminated by complex selection effects such as those inherent in galaxy surveys. It has been suggested qualitatively by Davis (1997) that the absence of big voids in the distribution of Lyman-$`\alpha`$ absorbers is inconsistent with the fractal model. Furthermore, all lines-of-sight towards quasars look statistically similar. Nusser & Lahav (1999) predicted the distribution of the flux in Lyman-$`\alpha`$ observations in a specific truncated fractal-like model. They found that in this model there are indeed too many voids compared with the observations and with conventional (CDM-like) models for structure formation. This too supports the common view that on large scales the Universe is homogeneous.

## 4. Is the Universe Fractal?

The question of whether the Universe is isotropic and homogeneous on large scales can also be phrased in terms of the fractal structure of the Universe. A fractal is a geometric shape that is not homogeneous, yet preserves the property that each part is a reduced-scale version of the whole. If the matter in the Universe were actually distributed like a pure fractal on all scales then the Cosmological Principle would be invalid, and the standard model in trouble. As shown in Figure 3, current data already strongly constrain any non-uniformities in the galaxy distribution (as well as the overall mass distribution) on scales $`>300h^{-1}\mathrm{Mpc}`$. If we count, for each galaxy, the number of galaxies within a distance $`R`$ from it, and call the average number obtained $`N(<R)`$, then the distribution is said to be a fractal of correlation dimension $`D_2`$ if $`N(<R)\propto R^{D_2}`$. Of course $`D_2`$ may be 3, in which case the distribution is homogeneous rather than fractal. In the pure fractal model this power law holds for all scales of $`R`$. The fractal proponents (Pietronero et al. 1997) have estimated $`D_2\approx 2`$ for all scales up to $`\sim 500h^{-1}\mathrm{Mpc}`$, whereas other groups have obtained scale-dependent values (for a review see Wu et al. 1999 and references therein). These measurements can be directly compared with the popular Cold Dark Matter models of density fluctuations, which predict the increase of $`D_2`$ with $`R`$ for the hybrid fractal model. If we now assume homogeneity on large scales, then we have a direct mapping between the correlation function $`\xi(r)`$ (or the power spectrum) and $`D_2`$: for $`\xi(r)\propto r^{-\gamma}`$ it follows that $`D_2=3-\gamma`$ if $`\xi\gg 1`$, while if $`\xi(r)=0`$ then $`D_2=3`$.
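The counts-in-spheres definition of $`D_2`$ can be illustrated with a minimal estimator (our own sketch; the point number, radii and edge cut are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(4000, 3))   # homogeneous points, unit box
radii = np.array([0.02, 0.04, 0.08, 0.16])

# average neighbour count N(<R), using interior centres to limit edge effects
centres = pts[np.all((pts > 0.2) & (pts < 0.8), axis=1)]
dist = np.linalg.norm(centres[:, None, :] - pts[None, :, :], axis=-1)
N = np.array([(dist < R).sum(axis=1).mean() - 1.0 for R in radii])  # drop self

# correlation dimension from log-log slopes between successive radii
D2 = np.diff(np.log(N)) / np.diff(np.log(radii))
print("D2 estimates:", np.round(D2, 2))       # ~ 3 for a homogeneous set
```

For a homogeneous set the slopes come out close to 3; a pure fractal would instead give a roughly constant slope $`D_2<3`$ over its scaling range.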
The predicted behaviour of $`D_2`$ with $`R`$ for three different CDM models is shown in Figure 4. Above $`\sim 100h^{-1}\mathrm{Mpc}`$, $`D_2`$ is indistinguishably close to 3. We also see that it is inappropriate to quote a single crossover scale to homogeneity, for the transition is gradual. Direct estimates of $`D_2`$ are not possible on much larger scales, but we can calculate values of $`D_2`$ at the scales probed by the XRB and CMB by using CDM models normalised with the XRB and CMB as described above. The resulting values are consistent with $`D_2=3`$ to within $`10^{-4}`$ on the very large scales (Peebles 1993; Wu et al. 1999). Isotropy does not imply homogeneity, but the near-isotropy of the CMB can be combined with the Copernican principle that we are not in a preferred position. All observers would then measure the same near-isotropy, and an important result has been proven that the Universe must then be very well approximated by the FRW metric (Maartens et al. 1996). While we reject the pure fractal model in this review, the performance of CDM-like models of fluctuations on large scales has yet to be tested without assuming homogeneity a priori. On scales below, say, $`30h^{-1}\mathrm{Mpc}`$, the fractal nature of clustering implies that one has to exercise caution when using statistical methods which assume homogeneity (e.g. in deriving cosmological parameters). We emphasize that we only considered one 'alternative' here, the pure fractal model in which $`D_2`$ is constant on all scales.

## 5. More Realistic Inhomogeneous Models

As the Universe appears clumpy on small scales, it is clear that assuming the Cosmological Principle and the FRW metric is only an approximation, and one has to average the density carefully even in Newtonian cosmology (Buchert & Ehlers 1997). Several models in which the matter is clumpy (e.g. 'Swiss cheese' and voids) have been proposed (e.g. Zeldovich 1964; Krasinski 1997; Kantowski 1998; Dyer & Roeder 1973; Holz & Wald 1998; Célérier 1999; Tomita 1999). For example, if the line-of-sight to a distant object is 'empty', gravitational lensing de-magnifies the object. This modifies the FRW luminosity-distance relation, with a clumping factor as another free parameter. When applied to a sample of SNIa, the density parameter of the Universe $`\mathrm{\Omega}_m`$ could be underestimated if FRW is used (Kantowski 1998; Perlmutter et al. 1999). Metcalf and Silk (1999) pointed out that this effect can be used as a test for the nature of the dark matter, i.e. to test whether it is smooth or clumpy.

## 6. A 'Best Fit Universe': a Cosmic Harmony?

Several groups (e.g. Eisenstein, Hu & Tegmark 1998; Webster et al. 1998; Gawiser & Silk 1998; Bridle et al. 1999) have recently estimated cosmological parameters by joint analysis of data sets (e.g. CMB, SN, redshift surveys, cluster abundance and peculiar velocities) in the framework of FRW cosmology. The idea is that the cosmological parameters can be better estimated due to the complementary nature of the different probes. While this approach is promising, and we will see more of it in the next generation of galaxy and CMB surveys (2dF/SDSS/MAP/Planck), it is worth attaching a 'health warning' to it. First, the choice of parameter space is arbitrary, and in the Bayesian framework there is freedom in choosing a prior for the model.
Second, the 'topology' of the parameter space is only helpful when 'ridges' of two likelihood 'mountains' cross each other (e.g. as in the case of the CMB and the SN). It is more problematic if the joint maximum ends up in a 'valley'. Finally, there is the uncertainty that a sample does not represent a typical patch of the FRW Universe, in which case it cannot yield reliable global cosmological parameters. Webster et al. (1998) combined results from a range of CMB experiments with a likelihood analysis of the IRAS 1.2 Jy survey, performed in spherical harmonics. This method expresses the effects of the underlying mass distribution on both the CMB potential fluctuations and the IRAS redshift distortion, which breaks the degeneracy e.g. between $`\mathrm{\Omega}_m`$ and the bias parameter. The family of CDM models analysed corresponds to a spatially-flat Universe with an initially scale-invariant spectrum and a cosmological constant $`\lambda`$. Free parameters in the joint model are the mass density due to all matter ($`\mathrm{\Omega}_m`$), Hubble's parameter ($`h=H_0/(100~\mathrm{km\,s^{-1}\,Mpc^{-1}})`$), the IRAS light-to-mass bias ($`b_{iras}`$) and the variance in the mass density field measured in an $`8h^{-1}`$ Mpc radius sphere ($`\sigma_8`$). For a fixed baryon density $`\mathrm{\Omega}_b=0.02/h^2`$, the joint optimum lies at $`\mathrm{\Omega}_m=1-\lambda=0.41\pm 0.13`$, $`h=0.52\pm 0.10`$, $`\sigma_8=0.63\pm 0.15`$, $`b_{iras}=1.28\pm 0.40`$ (marginalised 1-sigma error bars). For these values of $`\mathrm{\Omega}_m`$, $`\lambda`$ and $`H_0`$ the age of the Universe is $`16.6`$ Gyr. The above parameters correspond to the combination $`\mathrm{\Omega}_m^{0.6}\sigma_8=0.4\pm 0.2`$. This is in good agreement with results from cluster abundance (Eke et al. 1998), $`\mathrm{\Omega}_m^{0.5}\sigma_8=0.5\pm 0.1`$. By combining the abundance of clusters with the CMB and IRAS, Bridle et al. (1999) found $`\mathrm{\Omega}_m=1-\lambda=0.36`$, $`h=0.54`$, $`\sigma_8=0.74`$, and $`b_{iras}=1.08`$ (with error bars similar to those above). On the other hand, results from peculiar velocities yield higher values (Zehavi & Dekel 1999 and in these proceedings), $`\mathrm{\Omega}_m^{0.6}\sigma_8=0.8\pm 0.1`$. By combining the peculiar velocities (from the SFI sample) with cluster abundance and SN Ia, one obtains likelihoods that overlap at the level of $`2\sigma`$ (Bridle et al. 2000). The best fit parameters are then $`\mathrm{\Omega}_m=1-\lambda=0.52`$, $`h=0.57`$, and $`\sigma_8=1.10`$. As the $`\mathrm{\Omega}_m`$ from peculiar velocities is higher than that from the other probes, the joint value is higher than above. The 3-D likelihoods are shown in Zehavi & Dekel in this volume. We show in Figure 5 the 2-D contours in the plane of $`\sigma_8\mathrm{\Omega}_m^{0.6}`$ (which controls the amplitude of the velocity field) and $`\mathrm{\Omega}_mh`$ (which controls the shape of a CDM power spectrum). The contours are shown for the peculiar velocities (PV) and the CMB independently, and for the combinations PV+CMB and PV+CMB+SN. We see that combining the data sets helps significantly to constrain the parameters.
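The mechanics of such a joint analysis can be sketched with a toy grid computation in the plane of Figure 5; the two Gaussian 'probes' below are stand-ins (the amplitude constraint $`\sigma_8\mathrm{\Omega}_m^{0.6}=0.8\pm 0.1`$ quoted above for the peculiar velocities, plus a hypothetical shape constraint $`\mathrm{\Omega}_mh=0.25\pm 0.05`$ with $`h`$ fixed at 0.6), not the actual likelihoods of the papers cited:

```python
import numpy as np

h = 0.6                                  # assumed Hubble parameter
Om = np.linspace(0.1, 1.0, 300)          # Omega_m grid
s8 = np.linspace(0.3, 1.5, 300)          # sigma_8 grid
OM, S8 = np.meshgrid(Om, s8)

amp = S8 * OM**0.6                       # velocity-field amplitude
shape = OM * h                           # CDM power-spectrum shape

# independent Gaussian likelihoods -> additive chi^2
chi2 = ((amp - 0.8) / 0.1) ** 2 + ((shape - 0.25) / 0.05) ** 2
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"joint optimum: Omega_m ~ {OM[i, j]:.2f}, sigma_8 ~ {S8[i, j]:.2f}")
```

The point is only that two roughly orthogonal degeneracy directions localize the joint optimum, which is exactly the 'crossing ridges' situation described above.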
## 7. Discussion

Analyses of the CMB, the XRB, radio sources and the Lyman-$`\alpha`$ forest, which probe scales of $`100-1000h^{-1}\mathrm{Mpc}`$, strongly support the Cosmological Principle of homogeneity and isotropy. They rule out a pure fractal model. However, there is a need for more realistic inhomogeneous models on small scales. This is particularly important for understanding the validity of cosmological parameters obtained within the standard FRW cosmology. Joint analyses of the CMB, IRAS, SN, cluster abundance and peculiar velocities suggest $`\mathrm{\Omega}_m=1-\lambda\approx 0.3-0.5`$. With the dramatic increase of data, we should soon be able to map the fluctuations with scale and epoch, and to analyze jointly LSS (2dF, SDSS) and CMB (MAP, Planck) data, taking into account generalized forms of biasing.

#### Acknowledgments.

I thank my collaborators for their contribution to the work presented here.

## References

Baleisis, A., Lahav, O., Loan, A.J. & Wall, J.V. 1998, MNRAS, 297, 545
Baugh, C.M. & Efstathiou, G. 1994, MNRAS, 267, 323
Boldt, E.A. 1987, Phys. Reports, 146, 215
Bridle, S.L., Eke, V.R., Lahav, O., Lasenby, A.N., Hobson, M.P., Cole, S., Frenk, C.S. & Henry, J.P. 1999, MNRAS, in press, astro-ph/9903472
Bridle, S.L., Zehavi, I., Dekel, A., Lahav, O., Hobson, M.P. & Lasenby, A.N. 2000, in preparation
Buchert, T. & Ehlers, J. 1997, A&A, 320, 1
Célérier, M.N. 1999, submitted to A&A (astro-ph/9907206)
Cress, C.M., Helfand, D.J., Becker, R.H., Gregg, M.D. & White, R.L. 1996, ApJ, 473, 7
Davis, M. 1997, in Critical Dialogues in Cosmology, World Scientific, ed. N. Turok, p. 13
Dekel, A. et al. 1999, ApJ, in press (astro-ph/9812197)
Dyer, C.C. & Roeder, R.C. 1973, ApJ, 180, L31
Ehlers, J., Geren, P. & Sachs, R.K. 1968, J. Math. Phys., 9, 1344
Eisenstein, D.J., Hu, W. & Tegmark, M. 1998 (astro-ph/9807130)
Eke, V.R., Cole, S., Frenk, C.S. & Henry, J.P. 1998, MNRAS, 298, 1145
Fabian, A.C. & Barcons, X. 1992, ARAA, 30, 429
Gawiser, E. & Silk, J. 1998, Science, 280, 1405
Giovanelli, R. et al. 1999, submitted to ApJ (astro-ph/9906362)
Hoffman, Y. 1999, in Evolution of Large Scale Structure, MPA/ESO Conference, August 1997, eds. A. Banday & R. Sheth
Holz, D.E. & Wald, R.M. 1998, Phys. Rev. D, 58, 063501
Joyce, M., Montuori, M., Sylos-Labini, F. & Pietronero, L. 1999, A&A, 344, 387
Kantowski, R. 1998, ApJ, 507, 483
Krasinski, A. 1997, Inhomogeneous Cosmological Models, Cambridge University Press, Cambridge
Lahav, O., Piran, T. & Treyer, M.A. 1997, MNRAS, 284, 499
Lahav, O., Santiago, B.X., Webster, A.M., Strauss, M.A., Davis, M., Dressler, A. & Huchra, J.P. 1999, MNRAS, in press
Lilje, P.B., Yahil, A. & Jones, B.J.T. 1986, ApJ, 307, 91
Maartens, R., Ellis, G.F.R. & Stoeger, W.R. 1996, A&A, 309, L7
Magliocchetti, M., Maddox, S.J., Lahav, O. & Wall, J.V. 1998, MNRAS, 300, 257
Metcalf, R.B. & Silk, J. 1999, ApJL, 519, L1
Nusser, A. & Lahav, O. 1999, submitted to MNRAS (astro-ph/991017)
Peebles, P.J.E. 1993, Principles of Physical Cosmology, Princeton University Press, Princeton
Perlmutter, S. et al. 1999, ApJ, 517, 565
Pietronero, L., Montuori, M. & Sylos-Labini, F. 1997, in Critical Dialogues in Cosmology, World Scientific, ed. N. Turok, p. 24
Rudnicki, K. 1995, The Cosmological Principles, Jagiellonian University, Krakow
Scaramella, R. et al. 1998, A&A, 334, 404
Scharf, C.A., Jahoda, K., Treyer, M., Lahav, O., Boldt, E. & Piran, T. 1999, submitted to ApJ (astro-ph/9908187)
Schmoldt, I. et al. 1999, MNRAS, 304, 893
Strauss, M.A. et al. 1992, ApJ, 397, 395
Tomita, K. 1999 (astro-ph/9906027)
Treyer, M., Scharf, C., Lahav, O., Jahoda, K., Boldt, E. & Piran, T. 1998, ApJ, 509, 531
Webster, M.A., Lahav, O. & Fisher, K.B. 1998, MNRAS, 287, 425
Webster, M., Hobson, M.P., Lasenby, A.N., Lahav, O., Rocha, G. & Bridle, S. 1998, ApJ, 509, L65
Wu, K.K.S., Lahav, O. & Rees, M.J. 1999, Nature, 397, 225
Zehavi, I. & Dekel, A. 1999, Nature, 401, 252
Zehavi, I., Riess, A.G., Kirshner, R.P. & Dekel, A. 1998, ApJ, 503, 483
Zeldovich, Ya.B. 1964, Soviet Astron., 8, 13
no-problem/9911/cond-mat9911124.html
ar5iv
text
# Concentration Profiles and Reaction Fronts in A+B→C Type Processes: Effect of Background Ions

## I Introduction

The reaction-diffusion process $`A+B\rightarrow C`$ has been discussed for a long time. This conceptually simple process displays a rich variety of phenomena (nonclassical reaction kinetics, clustering and segregation, front formation) and, depending on the interpretation of $`A`$ and $`B`$ (particles, quasi-particles, topological defects, chemical reagents, etc.), it provides a model for a number of phenomena in physics, chemistry, and biology. In many cases of interest, $`A`$ and $`B`$ are ions ($`A^{-}`$ and $`B^{+}`$) and these ions are initially separated from each other. An example we shall discuss below is the formation of Liesegang bands, where an electrolyte $`A^{-}\widehat{A}^{+}`$ diffuses into a gel column containing another electrolyte $`\widehat{B}^{-}B^{+}`$. The concentration of the $`A`$-s is taken to be much larger than that of the $`B`$-s, thus the reaction front $`A^{-}+B^{+}\rightarrow C`$ moves along the column. An appropriate choice of reagents then leads to quasiperiodic precipitation ($`C\rightarrow D`$) in the wake of the front (Fig. 1). In general, the background ions ($`\widehat{A}^{+}`$ and $`\widehat{B}^{-}`$) are expected to play a role in a process such as the one described above. Nevertheless, the usual approach is to neglect them and consider only a contact interaction between neutral reagents $`A`$ and $`B`$. This approximation is based on the argument that the background ions provide only screening and, furthermore, the screening length is much smaller than the scale of concentration variations relevant to the formation of a macroscopic pattern. Although the argument sounds compelling, one should note that the background ions may generate macroscopic effects even if the screening length is negligible. Indeed, if the mobility of one of the background ions ($`\widehat{A}^{+}`$ in the Liesegang case) is much smaller than the other mobilities, then the motion and the properties of the reaction front are altered. Since the properties of the reaction front are crucial in determining the pattern, one expects that the presence of background ions gives rise to macroscopic changes in the observed patterns. Our aim with this work is to verify the above expectation and to investigate how the diffusion and front formation are affected by unequal mobilities of background ions. More precisely, we shall study the time evolution of the ion concentrations in the process

$$A^{-}+\widehat{A}^{+}+B^{+}+\widehat{B}^{-}\rightarrow C+\widehat{A}^{+}+\widehat{B}^{-}$$ (1)

where the reaction product $`C=A^{-}B^{+}`$ is assumed to vanish from the system. The process starts at $`t=0`$ from an initial condition where the electrolytes $`A^{-}\widehat{A}^{+}`$ and $`B^{+}\widehat{B}^{-}`$ are separated and their concentrations ($`a^{-}`$, $`\widehat{a}^{+}`$, $`b^{+}`$, $`\widehat{b}^{-}`$) are constant in the left ($`x<0`$) and right ($`x>0`$) half-spaces, respectively:

$$a^{-}(x,t=0)=\widehat{a}^{+}(x,t=0)=a_0\,\theta(-x)$$ (2)

$$b^{+}(x,t=0)=\widehat{b}^{-}(x,t=0)=b_0\,\theta(x)$$ (3)

where $`\theta(x)`$ is the step function. Such an initial state with $`a_0\gg b_0`$ is actually used in Liesegang experiments, and this choice is also motivated by the fact that investigations of front formation from such an initial state have proved to be instrumental in understanding the $`A+B\rightarrow C`$ process. The study of the motion of ions is not an easy task and we must simplify the problem to make it tractable.
We believe, however, that our approximations listed below are appropriate at least for the description of the Liesegang experiments.

1. It is assumed that the phenomena can be described by reaction-diffusion equations. This appears to be a correct assumption for reactions taking place in a gel, where convection is absent.

2. The screening length is assumed to be negligible, and screening is taken into account by enforcing local electroneutrality. At the characteristic ion concentrations ($`10^{-3}\mathrm{M}-1\mathrm{M}`$) present in Liesegang experiments, the screening length is indeed small ($`\sim 10^{-9}\,m`$) compared both to the characteristic diffusion length ($`\sim 10^{-2}\,m`$) and to the width of the reaction zone ($`\sim 10^{-6}\,m`$). Further discussion can be found in Sec. II.

3. The concentration profiles are assumed to depend only on one spatial coordinate ($`x`$ in Fig. 1). Although a one-dimensional geometry can be set up in experiments on Liesegang phenomena (the length of the gel column can be made much larger than its width), one should note that the finite extent of the sample in the transverse direction poses nontrivial problems with edge effects. It appears, however, that these effects can be neglected, since the final pattern is usually one-dimensional to a good accuracy.

4. The mobilities of the reagents and of the background ions are, in general, different. For simplicity, we shall consider the case with one of the background ions having a significantly distinct diffusion coefficient:

$$D_a=D_b=D_{\widehat{b}}\equiv D\,,\qquad D_{\widehat{a}}\equiv\widehat{D}.$$ (4)

This is just a technical assumption to keep the number of parameters small, and it also appears to be the most interesting case for Liesegang phenomena, where $`a_0\gg b_0`$. Once the above approximations are made, one arrives at a problem that can be studied numerically and, in some limits, analytically. The process is now simple enough that the numerical analysis is not hindered by computer time and memory problems, or by difficulties arising from discretization. In order to arrive at the results, we shall proceed as follows. First we discuss how to take into account the electroneutrality constraint in the reaction-diffusion equations (Sec. II). Then the case without reaction is studied, and we show that interesting concentration profiles emerge even in the pure diffusion process (Sec. III). The effects of reactions are considered in Sec. IV, where the properties of the reaction front are calculated. Finally, the implications for understanding the Liesegang phenomena are discussed in Sec. V.

## II Equations in the electroneutrality approximation

In a medium such as a gel, the ions move by diffusion and, in the presence of an electric field $`\vec{E}=-\nabla\phi`$, the flux of ions $`\vec{j}_i`$ is given by the Nernst-Planck relation

$$\vec{j}_i=\vec{j}_{i,\mathrm{diff}}+\vec{j}_{i,\mathrm{drift}}=-D_i\left(\nabla n_i+\frac{z_i}{\phi_0}n_i\nabla\phi\right).$$ (5)

Here $`n_i`$ is the concentration of the $`i`$-th ions of integer charge $`z_i`$, $`D_i`$ is their diffusion coefficient, and $`\phi_0=RT/F`$ is a constant built from the temperature $`T`$, the gas constant $`R`$, and the Faraday number $`F`$. The potential $`\phi`$ is determined from the Poisson equation

$$\mathrm{\Delta}\phi=-\frac{F}{\epsilon_r\epsilon_0}\sum_iz_in_i$$ (6)

where $`\epsilon_0`$ is the permittivity of free space while $`\epsilon_r`$ is the dielectric constant of the system.
An important quantity in ionic diffusion is the Debye length $`r_D`$, which gives the characteristic length-scale associated with charge imbalances:

$$r_D=\sqrt{\frac{\epsilon_r\epsilon_0RT}{F^2n_0}},$$ (7)

where $`n_0`$ is the characteristic scale of the ionic concentrations. In a Liesegang experiment one usually has $`n_0\sim 10^{-3}\mathrm{M}-1\mathrm{M}`$ and the process takes place in an aqueous medium ($`\epsilon_r\approx 80`$). Thus $`r_D\sim 10^{-10}\,m-10^{-8}\,m`$, and one can see that $`r_D`$ is much smaller than the scale of the macroscopic pattern (e.g. the width of the bands, $`10^{-3}\,m-10^{-2}\,m`$). As a consequence, one can use the electroneutrality approximation, which consists of replacing Eq. (6) by the constraint

$$\sum_iz_in_i=0.$$ (8)

Denoting now the rate of reaction of the $`i`$-th ion with the others by $`R_i(\{n\})`$ and assuming that the reaction does not violate electroneutrality ($`\sum_iz_iR_i=0`$), Eq. (5) together with the constraint (8) yields the following equations for the time-evolution of the concentration fields:

$$\partial_tn_i=D_i\left[\mathrm{\Delta}n_i-z_i\nabla\left(n_i\vec{\mathcal{E}}\right)\right]-R_i(\{n\})$$ (9)

where the appropriately scaled electric field that arises from the electroneutrality constraint is given by

$$\vec{\mathcal{E}}=\frac{\sum_iz_iD_i\nabla n_i}{\sum_iz_i^2D_in_i}.$$ (10)

It should be noted that there is an extra term in $`\vec{\mathcal{E}}`$ if a steady global current flows through the system. Such a current is not present in the Liesegang problem and, trying to keep the discussion as simple as possible, we shall assume that the global current is zero. For the process of actual interest (1), reaction takes place only between the ions $`A^{-}`$ and $`B^{+}`$, and their rate of reaction is given by $`ka^{-}b^{+}`$, where $`k`$ is the rate constant. Thus the above equations in a one-dimensional geometry take the form

$$\partial_ta^{-}=D\left[\partial_x^2a^{-}+\partial_x(a^{-}\mathcal{E})\right]-ka^{-}b^{+}$$ (11)

$$\partial_tb^{+}=D\left[\partial_x^2b^{+}-\partial_x(b^{+}\mathcal{E})\right]-ka^{-}b^{+}$$ (12)

$$\partial_t\widehat{a}^{+}=\widehat{D}\left[\partial_x^2\widehat{a}^{+}-\partial_x(\widehat{a}^{+}\mathcal{E})\right]$$ (13)

$$\partial_t\widehat{b}^{-}=D\left[\partial_x^2\widehat{b}^{-}+\partial_x(\widehat{b}^{-}\mathcal{E})\right]$$ (14)

with

$$\mathcal{E}=\frac{D\,\partial_x(-a^{-}+b^{+}-\widehat{b}^{-})+\widehat{D}\,\partial_x\widehat{a}^{+}}{D\,(a^{-}+b^{+}+\widehat{b}^{-})+\widehat{D}\,\widehat{a}^{+}}.$$ (15)

Equations (11-15) together with the initial conditions (2-3) provide the mathematical formulation of our problem. Before turning to the solution of the above equations, let us mention that the diffusion-reaction problem of ions in a one-dimensional geometry can be tackled numerically without assuming the electroneutrality condition. The only difficulty is that the discretization of space must be on a finer scale than the Debye length, and so, in the range of physical parameters where $`r_D`$ is exceedingly small, the calculation becomes impractical. One expects (and we have verified it for some cases) that the solution of the full problem approaches the solution of the corresponding "electroneutral" problem as $`r_D`$ is decreased.

## III Concentration profiles with no reactions

Let us begin the analysis of Eqs. (11-15) by considering the case of no reactions ($`k=0`$), and let us further restrict our study to the case $`a_0\gg b_0`$ corresponding to the Liesegang initial conditions. The limit $`b_0=0`$ is especially simple and is treated in textbooks. In this case, the two ions $`A^{-}`$ and $`\widehat{A}^{+}`$ must move together; an electric field is thus generated that slows down the more mobile ions and accelerates the slower ions. The result is an effective diffusion with diffusion coefficient $`D_{\mathrm{eff}}=2D\widehat{D}/(D+\widehat{D})`$.
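A minimal numerical sketch of Eqs. (11)-(15) (our own illustration, not the authors' code) can be written in a few lines; here the reaction is switched off ($`k=0`$), units are dimensionless, and the grid, time step and parameter values are illustrative choices only.

```python
import numpy as np

nx, dx, dt, steps = 400, 0.5, 0.05, 20000   # dt < dx**2/2 for stability
D, Dhat = 1.0, 0.1          # D = D_a = D_b = D_bhat;  Dhat = D_ahat
a0, b0 = 1.0, 0.01

x = (np.arange(nx) - nx // 2) * dx
am   = np.where(x < 0, a0, 0.0)      # a^-   (A^- ions), Eq. (2)
ahat = am.copy()                     # \hat{a}^+
bp   = np.where(x >= 0, b0, 0.0)     # b^+, Eq. (3)
bhat = bp.copy()                     # \hat{b}^-

def ddx(f):
    """Centred first derivative; end points kept fixed."""
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
    return g

def lap(f):
    """Three-point Laplacian; end points kept fixed."""
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return g

for _ in range(steps):
    # electroneutral field, Eq. (15)
    E = (D * ddx(-am + bp - bhat) + Dhat * ddx(ahat)) \
        / (D * (am + bp + bhat) + Dhat * ahat + 1e-30)
    am   += dt * D    * (lap(am)   + ddx(am   * E))   # Eq. (11), k = 0
    bp   += dt * D    * (lap(bp)   - ddx(bp   * E))   # Eq. (12), k = 0
    ahat += dt * Dhat * (lap(ahat) - ddx(ahat * E))   # Eq. (13)
    bhat += dt * D    * (lap(bhat) + ddx(bhat * E))   # Eq. (14)

# local electroneutrality should be preserved up to discretization error
print("max charge imbalance:", np.abs(-am + ahat + bp - bhat).max())
```

Setting $`b_0=0`$ in this sketch, the $`a^{-}`$ and $`\widehat{a}^{+}`$ profiles relax as a single diffusing front with the ambipolar coefficient $`D_{\mathrm{eff}}`$ quoted above.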
The presence of a small amount of $`B`$-s ($`a_0\gg b_0`$) does not significantly change the motion of the $`A`$-s. The ions $`A^{-}`$ and $`\widehat{A}^{+}`$ can now separate, but only by a small amount that is compensated by the motion of the $`B`$-s. In Fig. 2 we can see the results for the case $`b_0/a_0=0.01`$ and $`\widehat{D}/D=0.1`$ (slow background ions $`\widehat{A}^{+}`$). The electric field is mainly generated by the motion of the majority $`A`$ ions and, in turn, this field is the determining factor in the motion of the minority $`B`$ ions. Since this field, shown in Fig. 2b, moves the $`\widehat{A}^{+}`$ ($`A^{-}`$) ions in the $`+x`$ ($`-x`$) direction, a similar effect is felt by the $`B`$ ions. Indeed, as one can see in Fig. 2c, the $`B^{+}`$-s are repelled from the region the $`A`$ ions moved into, while the $`\widehat{B}^{-}`$ ions are pulled through this region. As a result, a region emerges where the ionic and diffusive drifts of the $`B`$-s are in opposite directions. It should be noted that the profiles shown in Fig. 2 keep their shape in time: the pictures at $`t^{\prime}`$ are obtained from those at $`t=1h`$ by rescaling the $`x`$-axis by a factor $`\sqrt{t^{\prime}/t}`$. This numerical observation is a consequence of the fact that the Debye length is zero and the initial conditions (2-3) do not contain any length-scale. As a consequence, all the length-scales are diffusive lengths proportional to $`\sqrt{t}`$. The above argument can be seen to work explicitly in the limit $`\widehat{D}=0`$, where an analytic calculation gives concentration profiles that can be expressed through error functions of argument $`x/\sqrt{t}`$. The complexity of the concentration profiles shown in Figs. 2a and 2c suggests that if reactions are switched on between the ions $`A^{-}`$ and $`B^{+}`$, the emerging reaction front may be rather different from the case of neutral reagents. This is what we shall study in Sec. IV.

## IV Reaction front

The full reaction-diffusion process is described by Eqs. (11-15), and the solution of these equations with the initial conditions (2-3) provides the description of the reaction front. Indeed, once the concentration profiles are known, the location and the time evolution of the production of $`A^{-}+B^{+}\rightarrow C`$ particles is given by

$$R(x,t)=ka^{-}(x,t)b^{+}(x,t).$$ (16)

The properties of $`R(x,t)`$ are well known for the case of neutral reagents ($`\mathcal{E}=0`$). In that case, the reaction takes place in a narrow, moving region whose width is much smaller than the diffusive scales. The motion of the reaction zone is 'diffusive', characterized by a diffusion constant $`D_f`$:

$$x_f=\sqrt{2D_ft}.$$ (17)

Another important feature of the front is that it leaves behind a density of $`C`$-s,

$$c_0=\int_0^{\infty}R(x,t)\,dt,$$ (18)

that is independent of $`x`$. The parameters $`D_f`$ and $`c_0`$ can easily be determined for the neutral case by exploiting the smallness of the width of the reaction zone. The reaction zone is replaced by a point where the diffusion equations are supplemented by boundary conditions, and as a result the parameters $`D_f`$ and $`c_0`$ are given as functions of $`a_0`$, $`b_0`$, $`D_a`$ and $`D_b`$. The presence of a localized, diffusive front is an essential ingredient in the theories of Liesegang phenomena, and the parameters of the front (especially $`D_f`$ and $`c_0`$) are known to influence the properties of the patterns.
Thus the next step is to find out how the above picture is modified as a result of the ionic character of the reagents. Eqs. (11-15) with the initial conditions (2-3) can be studied by straightforward numerical methods, and one finds that the localized-diffusive-front picture does hold and, furthermore, the scaling properties (17-18) also remain valid when the ionic interactions are switched on. The actual values of the parameters $`D_f`$ and $`c_0`$, however, are affected by the presence of background ions. In order to understand how these results arise, let us begin with the numerical observation that the reaction front remains narrow even if the ionic interactions are switched on. Indeed, for characteristic values of $`a_0\approx 100\,b_0\approx 1\,M`$, $`D_a\approx D_b\approx 10^{-10}\,m^2/s`$ and $`k\approx 10^{10}\,(M\,s)^{-1}`$, we find that the width is in the mesoscopic range ($`\sim 10^{-6}\,m`$) at all times available in a Liesegang experiment. Thus, on diffusive length-scales, the reaction zone can be treated as a point (as in the neutral case) and one arrives at equations with no reaction terms:

$$\partial_tn_i=D_i\left[\partial_x^2n_i-z_i\partial_x\left(n_i\mathcal{E}\right)\right].$$ (19)

The reactions are taken into account by the following boundary conditions at the front:

$$a^{-}(x_f)=b^{+}(x_f)=0,$$ (20)

$$|j_{a^{-}}(x_f)|=|j_{b^{+}}(x_f)|.$$ (21)

The meaning of the above conditions is that the concentrations of the reagents are zero at the front and that the fluxes of the ions $`A^{-}`$ and $`B^{+}`$ into the reaction zone are equal. Let us now suppose that $`x_f(t)`$ and $`n_i(x,t)`$ are the solutions of the above equations (19-21) with the initial conditions (2-3). Then one can easily verify that the front position $`\lambda x_f(t/\lambda^2)`$ and the concentrations $`n_i(x/\lambda,t/\lambda^2)`$ also solve the same problem for an arbitrary $`\lambda>0`$ (note that the initial conditions do not contain any length-scale). Thus the functions $`n_i(x,t)`$ and $`x_f(t)`$ must satisfy the conditions $`n_i(x,t)=n_i(x/\lambda,t/\lambda^2)`$ and $`x_f(t)=\lambda x_f(t/\lambda^2)`$. As a consequence, we find that the concentration profiles obey the following scaling form:

$$n_i(x,t)=\mathrm{\Phi}_i\left(\frac{x}{\sqrt{t}}\right)$$ (22)

and the front moves diffusively even when the ionic interactions are taken into account:

$$x_f\propto\sqrt{t}.$$ (23)

The above relationship (23) defines the diffusion constant $`D_f`$ through $`x_f=\sqrt{2D_ft}`$. The scaling of the concentrations (22) together with Eq. (15) implies scaling for the electric field:

$$\mathcal{E}(x,t)=\frac{1}{\sqrt{t}}\mathrm{\Psi}\left(\frac{x}{\sqrt{t}}\right).$$ (24)

These scaling results allow us to investigate the production of the $`C`$ particles. The number of $`C`$-s arising in the reaction zone per unit time is given by the flux of one of the reagents (e.g. $`j_{a^{-}}`$) entering the front. According to Eqs. (22-24), $`j_{a^{-}}`$ at $`x_f`$ is proportional to $`1/\sqrt{t}`$, and the velocity of the front decays in time in the same way, $`\dot{x}_f\propto 1/\sqrt{t}`$. It follows then that the density of the $`C`$-s emerging in the wake of the front is a constant:

$$c=\frac{j_{a^{-}}}{\dot{x}_f}=\text{const.}=c_0.$$ (25)

The results (22-25) given by the above analytical argument have been confirmed by computer simulations. An example of such a numerical calculation can be seen in Fig. 3.
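Extracting $`D_f`$ from such a simulation amounts to a power-law fit to the front trajectory. The sketch below (ours) does this for synthetic front positions generated with $`D_f=0.3`$ in dimensionless units; in practice the positions would be read off as the location of the maximum of $`R(x,t)`$:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(10.0, 1000.0, 50)
# synthetic placeholder "data": diffusive trajectory with 1% noise
xf = np.sqrt(2 * 0.3 * t) * (1 + 0.01 * rng.standard_normal(t.size))

# linear least squares on log xf = slope * log t + 0.5 * log(2 D_f)
slope, intercept = np.polyfit(np.log(t), np.log(xf), 1)
Df = np.exp(2 * intercept) / 2
print(f"fitted exponent ~ {slope:.3f} (diffusive value 0.5), D_f ~ {Df:.3f}")
```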
Having established the same scaling properties of the front (23, 25) as in the case of neutral reagents, we now turn to the actual values of the parameters $`D_f`$ and $`c_0`$. Since the motion of the reagents is modified by the electric field (24), one expects that $`D_f`$ and $`c_0`$ will depend not only on the properties of the reagents but also on the properties of the background ions. We studied the effect of the background ions by changing the diffusion coefficient $`\widehat{D}`$ (4) and keeping all the other parameters ($`a_0,b_0,D`$) fixed. The numerical results for $`D_f`$ and $`c_0`$ as functions of $`\widehat{D}`$ are shown in Fig. 4. As one can see, $`c_0`$ does not change significantly in the physically relevant range $`0.1<\widehat{D}/D<10`$ (Fig. 4b). The reason for this insensitivity of $`c_0`$ is that the density of the reaction product for $`a_0\gg b_0`$ and $`D_a=D_b`$ is mainly determined by the concentration $`b_0`$. The parameter $`D_f`$ is much more sensitive to the mobility of the counter-ions, as shown in Fig. 4a. Although the motion of the front is determined by the interplay of all four types of ions and the process is rather complex, the result in Fig. 4a can be easily understood. For $`a_0\gg b_0`$, the main effect comes from the counter-ions $`\widehat{A}^{+}`$ slowing down or speeding up the motion of the $`A^{-}`$-s. If the diffusion coefficient $`\widehat{D}`$ is smaller than $`D`$, the $`A^{-}`$ ions are pulled back by the $`\widehat{A}^{+}`$-s (otherwise the slow $`\widehat{A}^{+}`$ ions would form a positive charge density in the left region); thus fewer $`A^{-}`$ particles enter the front, which yields a smaller value of $`D_f`$. A similar argument leads to the opposite effect in the case $`\widehat{D}>D`$. The case $`\widehat{D}=D`$ is special in the sense that the electric field (15) vanishes and the result corresponds to the case of neutral reagents. In the next section we turn to the theory of Liesegang phenomena in order to demonstrate the relevance of the above results in the description of a relatively simple pattern-forming process.

## V Implications for Liesegang theories

The Liesegang patterns described in Sec. I have been much investigated for about a century. The gross features of normal patterns in reproducible experiments are rather simple, namely the distance between consecutive bands $`x_{n+1}-x_n`$ increases with band order $`n`$, and the positions of the bands obey a spacing law:

$$\frac{x_{n+1}}{x_n}=1+p_n\underset{n\gg 1}{\longrightarrow}1+p,$$ (26)

where $`1+p`$ is called the spacing coefficient and $`p>0`$. Currently, the Liesegang phenomenon is mainly studied as a nontrivial example of pattern formation in the wake of a moving front, and the theories of normal patterns revolve around the calculation of $`p`$. The main feature of these theories is that the precipitate appears as the system goes through some nucleation, spinodal, or coagulation threshold. Most of these theories are rather complicated, however, and have been developed only recently to the level that $`p`$ can be investigated in detail and, in particular, its dependence on the initial concentrations $`a_0`$ and $`b_0`$ can be determined, and connection can be made to the experimentally established Matalon-Packter law.
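For reference, the spacing coefficient is easily measured from a list of band positions; the sketch below uses a synthetic geometric sequence with $`p=0.1`$ as placeholder data, not a real Liesegang column:

```python
import numpy as np

x = 10.0 * 1.1 ** np.arange(12)           # x_n = x_0 (1+p)^n with p = 0.1
ratios = x[1:] / x[:-1]                   # 1 + p_n of Eq. (26)
print("1 + p_n:", np.round(ratios, 3))    # -> converges to 1 + p = 1.1
print("spacing coefficient ~", round(ratios[-5:].mean(), 3))
```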
None of the above theories address the question of how the Liesegang patterns are affected by the presence of background ions, although the existence of such an effect is to be expected. Indeed, let us take, for example, the expression for $`p`$ obtained in a simple version of the nucleation and growth theory [see eq. (25) in ]:

$$p\propto\frac{D_cc^{*}}{D_f(c_0-c^{*})},$$ (27)

where $`D_c`$ is the diffusion coefficient of the $`C`$ particles while $`c^{*}`$ is the threshold concentration of the $`C`$-s. The meaning of $`D_f`$ (the diffusion coefficient of the front) and $`c_0`$ (the concentration of $`C`$-s left behind the front) is the same as defined in this paper. As one can see from (27), the spacing coefficient depends both on $`D_f`$ and $`c_0`$. Thus, on the basis of our results (see Fig. 4), we expect $`p`$ to be affected by the background ions. In order to put our expectation on a firmer basis, we calculated $`p`$ numerically employing a recent theory where the addition of the background ions is straightforward. The main ingredients in this theory are (i) a moving reaction front that leaves behind the particles $`C`$, and (ii) a Cahn-Hilliard type phase-separation dynamics for the $`C`$ particles. This theory yields the spacing law, and the results for $`p`$ are in agreement with the Matalon-Packter law. Thus it appears to be a good candidate for the description of the Liesegang process. Since the reaction front enters the description only as a source in the Cahn-Hilliard equation [see eq. (3) in ], one can study the effect of background ions by modifying the source according to what has been described in Sec. IV. The results of our numerical work for a particular case with $`b_0/a_0=0.01`$ (the parameters in the Cahn-Hilliard equation were set to unity) are displayed in Fig. 5. As can be seen from Fig. 5, $`p`$ does depend on $`\widehat{D}`$ and, in fact, $`p`$ can change by a factor of five compared to the neutral case ($`\widehat{D}=D`$) if $`\widehat{D}`$ decreases by a factor of ten. One can also observe that the ionic effect is larger when the counter-ion $`\widehat{A}^{+}`$ is slower than $`A^{-}`$. These observations and the overall picture are in agreement with the result (27) obtained in the nucleation and growth theory. Indeed, $`c_0`$ is weakly dependent on $`\widehat{D}`$, thus the main effect comes from $`D_f`$. As Fig. 4 shows, $`D_f`$ is a smooth, monotonically increasing function of $`\widehat{D}`$, and this translates through Eq. (27) into a monotonically decreasing $`p(\widehat{D})`$. We have thus shown that the background ions cannot be neglected in the description of the Liesegang phenomena unless the diffusivities of the ions are roughly equal. Although this conclusion appears to complicate the description significantly, the reassuring aspect of the result is that all the complications can be absorbed into the parameters ($`D_f`$ and $`c_0`$) of the front. As a consequence, previous ideas about the pattern formation remain intact, apart from the need to take into account the renormalization of the parameters $`D_f`$ and $`c_0`$.

## VI Final remarks

A general conclusion we can draw from the present work is that the dynamics of reaction fronts is strongly altered if the diffusivities of the reacting ions differ significantly from those of the background ions. This conclusion is based on the nontrivial density profiles found in a study of the simplest reaction scheme $`A+B\rightarrow C`$, assuming a negligible screening length (electroneutrality approximation).
We believe, however, that some aspects of our results (the reaction front can still be characterized by an effective diffusion constant, and it still leaves behind a constant density of reaction product) are robust, since they appear to follow from more general considerations, and thus they should be applicable to the more complicated cases as well.

## Acknowledgments

We thank M. Droz, P. Hantz, L. Szilágyi, and M. Zrínyi for useful discussions. This work has been supported by the Hungarian Research Funds (Grants OTKA T 029792 and FKFP-0128/1997).
no-problem/9911/hep-ph9911366.html
ar5iv
text
Figure 1: The evolution of the neutrino asymmetry at the resonance temperature for mixing parameters $`\delta m^2=-10^{-2}\,\mathrm{eV}^2`$ and $`\mathrm{sin}^22\theta=10^{-7.5}`$. The solid line corresponds to the initial value $`\eta=10^{-10}`$ and the dashed line to $`\eta=2\times 10^{-10}`$.

The recent observation of a strong zenith-angle dependence of the atmospheric neutrino deficit by the Super-Kamiokande neutrino experiment has provided strong evidence for neutrino oscillations $`\nu_\mu\rightarrow\nu_X`$, where $`\nu_X`$ is either $`\nu_\tau`$ or a new, sterile neutrino $`\nu_s`$. While the $`\nu_\mu\rightarrow\nu_s`$ solution is presently less favored by SK data, reconciling the existing data from all the neutrino experiments, including LSND, is not possible unless there exists at least one sterile neutrino mixing with the active neutrinos. Such mixing would have interesting consequences for primordial nucleosynthesis and the CMB radiation. For example, sterile neutrinos could be brought into equilibrium prior to nucleosynthesis, increasing the energy density of the universe and thereby the neutron-to-proton freeze-out temperature, leading to more helium-4 being produced. This scenario has been studied numerically and strong limits on the neutrino mixing parameters have been obtained. Under certain conditions active-sterile neutrino oscillations may also lead to an exponential growth of the neutrino asymmetry, as was first discovered by Barbieri and Dolgov. Later Foot and Volkas observed that very large asymmetries could be generated by this mechanism, which would have a significant effect on primordial nucleosynthesis by directly modifying the $`n\leftrightarrow p`$ reactions. Moreover, they showed that an asymmetry generated by $`\nu_\tau`$-$`\nu_s`$ mixing could suppress sterile neutrino production in the $`\nu_\mu`$-$`\nu_s`$ sector, loosening earlier bounds according to which the SK atmospheric deficit could not be explained by $`\nu_\mu`$-$`\nu_s`$ mixing. Later it was found that this asymmetry generation is chaotic, in the sense that the sign of the final asymmetry, sign($`L`$), does not simply follow from the initial conditions. This phenomenon was studied in detail, and it was shown that the indeterminacy is associated with a region of mixing parameters where the asymmetry is rapidly oscillating right after the resonance. As a consequence, the amount of helium-4 produced cannot be precisely determined in such a scenario. In a recent paper it was claimed, however, that sign($`L`$) is completely determined by the initial asymmetry and, moreover, that there is only a slight growth of the asymmetry after the resonance. In this article we clarify the origin of the indeterminacy in sign($`L`$) and show that it is a real physical phenomenon, not an artifact of numerical inaccuracy. Instead, we will argue that the disagreement arises from overly simplifying approximations used in that paper. Finally, we will point out a likely cause of the suppression of the final magnitude of the asymmetry observed there. In the early universe the coherent evolution of the neutrino states is interrupted by frequent decohering collisions. Therefore the evolution of the system needs to be studied using the density matrix formalism. We parameterize the reduced density matrices of the neutrino and anti-neutrino ensembles as

$$\rho_\nu\equiv\frac{1}{2}P_0(1+\mathbf{P}\cdot\boldsymbol{\sigma}),\qquad\rho_{\overline{\nu}}\equiv\frac{1}{2}\overline{P}_0(1+\overline{\mathbf{P}}\cdot\boldsymbol{\sigma}),$$ (1)

where each matrix is assumed to be diagonal in momentum space, while each momentum state has a $`2\times 2`$ mixing-matrix structure in flavour space.
Solving the full momentum-dependent kinetic equations for these density matrices numerically is a very complicated task, and all attempts published to date have used some approximations to simplify the problem. Here we use the momentum-averaged approximation, i.e. we set $`\mathbf{P}(p)\rightarrow\mathbf{P}(\langle p\rangle)`$, with $`\langle p\rangle\simeq 3.15T`$. This approach has been found to give a very good approximation of the $`\nu_s`$ equilibration, and it will be sufficient for the purposes of this letter. The coupled equations are then (in the case of $`\nu_\tau`$-$`\nu_s`$ oscillations; other cases can be obtained easily by simple redefinitions):

$$\dot{\mathbf{P}}=\mathbf{V}\times\mathbf{P}-\left(D+\frac{d}{dt}\mathrm{log}P_0\right)\mathbf{P}_T+(1-P_z)\frac{d}{dt}\mathrm{log}P_0\,\widehat{\mathbf{z}},$$

$$\dot{\overline{\mathbf{P}}}=\overline{\mathbf{V}}\times\overline{\mathbf{P}}-\left(\overline{D}+\frac{d}{dt}\mathrm{log}\overline{P}_0\right)\overline{\mathbf{P}}_T+(1-\overline{P}_z)\frac{d}{dt}\mathrm{log}\overline{P}_0\,\widehat{\mathbf{z}},$$

$$\dot{P_0}=\mathrm{\Gamma}(\nu_\tau\overline{\nu}_\tau\leftrightarrow\alpha\overline{\alpha})\left(n_{eq}^2-n_{\nu_\tau}n_{\overline{\nu}_\tau}\right),$$

$$\dot{\overline{P}_0}=\overline{\mathrm{\Gamma}}(\nu_\tau\overline{\nu}_\tau\leftrightarrow\alpha\overline{\alpha})\left(n_{eq}^2-n_{\nu_\tau}n_{\overline{\nu}_\tau}\right),$$ (2)

where $`\dot{x}\equiv dx/dt`$ and $`\mathbf{P}_T=P_x\widehat{\mathbf{x}}+P_y\widehat{\mathbf{y}}`$. The damping coefficients for particles and anti-particles satisfy $`D\simeq\overline{D}\simeq 1.8G_F^2T^5`$ to a very good accuracy. The rotation vector $`\mathbf{V}`$ is

$$\mathbf{V}=V_x\widehat{\mathbf{x}}+\left(V_0+V_L\right)\widehat{\mathbf{z}},$$ (3)

where

$$V_x=\frac{\delta m^2}{2p}\mathrm{sin}2\theta,\qquad V_0=-\frac{\delta m^2}{2p}\mathrm{cos}2\theta+\delta V_\tau,\qquad V_L=\sqrt{2}G_FN_\gamma L,$$ (4)

where $`\theta`$ is the vacuum mixing angle, $`\delta m^2=m_{\nu_s}^2-m_{\nu_\tau}^2`$, and $`N_\gamma`$ is the photon number density. In the case of an electrically neutral plasma, the effective asymmetry $`L`$ in the potential $`V_L`$ is given by

$$L=-\frac{1}{2}L_n+L_{\nu_e}+L_{\nu_\mu}+2L_{\nu_\tau}(P)\equiv\eta+2L_{\nu_\tau}(P).$$ (5)

The asymmetry $`L_{\nu_\tau}`$ is obtained from

$$L_{\nu_\tau}=\frac{3}{8}\left(P_0(1+P_z)-\overline{P}_0(1+\overline{P}_z)\right)$$ (6)

and $`L_n`$ is the neutron asymmetry. The potential term $`\delta V_\tau`$ is approximately

$$\delta V_\tau=-17.8\,G_FN_\gamma\frac{pT}{2M_Z^2}.$$ (7)

The rotation vector for anti-neutrinos is simply $`\overline{\mathbf{V}}(L)=\mathbf{V}(-L)`$. The neutrino and anti-neutrino ensembles are very strongly coupled in Eq. (2) through the effective potential term $`V_L(L)`$, which makes their numerical solution particularly difficult: as long as the neutrino asymmetry remains small, there is a large cancellation in Eq. (6), leading to a potential loss of accuracy.
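The structure of Eq. (2), precession around $`\mathbf{V}`$ combined with damping of the transverse components, can be illustrated with a toy Euler integration of a single Bloch-type equation; the repopulation ($`d\,\mathrm{log}P_0/dt`$) terms are dropped and all numbers are dimensionless placeholders.

```python
import numpy as np

Vx, V0, D = 1.0, 5.0, 0.2        # dimensionless placeholders
dt, steps = 1e-3, 20000

P = np.array([0.0, 0.0, 1.0])    # start fully in the active state
V = np.array([Vx, 0.0, V0])

for _ in range(steps):
    dP = np.cross(V, P)          # precession term V x P
    dP[:2] -= D * P[:2]          # damping acts on P_x, P_y only
    P += dt * dP

# the transverse components settle to a small quasi-static value, while
# P_z decays slowly at the damped rate ~ D*Vx**2/(V0**2 + D**2)
print("P =", np.round(P, 4))
```

The slow decay of $`P_z`$ at the rate $`DV_x^2/(V_0^2+D^2)`$ is the averaged-momentum analogue of the collision-damped oscillation rate that converts active into sterile states.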
To overcome this problem we define the new variables

$$P_\alpha^{\pm}\equiv P_\alpha\pm\overline{P}_\alpha.$$ (8)

In terms of these, Eq. (2) for $`\mathbf{P}^{\pm}`$ becomes

$$\dot{P}_x^{\pm}=-V_0P_y^{\pm}-V_LP_y^{\mp}-\stackrel{~}{D}P_x^{\pm},$$

$$\dot{P}_y^{\pm}=V_0P_x^{\pm}+V_LP_x^{\mp}-\stackrel{~}{D}P_y^{\pm}-V_xP_z^{\pm},$$

$$\dot{P}_z^{\pm}=V_xP_y^{\pm}+A_{\pm}\left(2-P_z^{+}\right)-A_{\mp}P_z^{\pm},$$ (9)

where we have defined

$$A_{+}\equiv\frac{d}{dt}\mathrm{log}P_0^{+},\qquad A_{-}=0,\qquad\stackrel{~}{D}=D+A_{+}.$$ (10)

Finally, since for the averaged interaction rates $`\mathrm{\Gamma}=\overline{\mathrm{\Gamma}}`$, the difference $`P_0^{-}`$ is not affected by collisions, and we find

$$\dot{P}_0^{+}=2\mathrm{\Gamma}\left(n_{eq}^2-n_{\nu_\tau}n_{\overline{\nu}_\tau}\right),\qquad\dot{P}_0^{-}=0.$$ (11)

Our objective is to study whether the sign of the final asymmetry, sign($`L`$), follows deterministically from the initial conditions. In our earlier work it was found to be chaotic, whereas the authors of the recent paper claim that sign($`L`$) is fully deterministic and equal to the sign of the initial neutrino asymmetry. Similar results have been reported by other groups as well. The key ingredient in the physics leading to the growth of the asymmetry is the appearance of the resonance. Indeed, if the squared mass difference $`\delta m^2<0`$, the effective potential $`V_0\pm V_L`$ goes through zero at the resonance temperature

$$T_c\simeq 16.0\,\left(|\delta m^2|\mathrm{cos}2\theta/\mathrm{eV}^2\right)^{1/6}\;\text{MeV},$$ (12)

where $`V_L\approx 0`$ is assumed, as the effective asymmetry $`L`$ is driven to zero well before the resonance. After the resonance the balance of the system changes abruptly, which leads to a rapidly growing $`L`$. We have numerically solved Eqs. (9) and (11), and examples of the results are shown in Fig. 1. The behaviour of the system can be understood through the simple analogy of a ball rolling down a valley. After the resonance temperature $`T_c`$, the originally stable valley at $`L=0`$ becomes a ridge line separating two new, degenerate valleys corresponding to the solutions of $`V_0\pm V_L=0`$. The system may first oscillate from one valley to the other, passing over the ridge line. However, because of friction (represented by the damping terms) it will eventually settle into one or the other of the new valleys. It is easy to imagine that when there are many oscillations, even a very small difference in the initial conditions may grow to a large phase difference by the time of settling down. In Fig. 1 these effects are demonstrated for two initial values, $`\eta=10^{-10}`$ (solid line) and $`\eta=2\times 10^{-10}`$ (dashed line), and oscillation parameters $`\delta m^2=-10^{-2}\,\text{eV}^2`$ and $`\mathrm{sin}^22\theta=10^{-7.5}`$. For these parameters the resonance occurs at the temperature $`T_c=7.41\,\text{MeV}`$. Let us point out that a change of sign($`L`$) can be effected either by variations in the oscillation parameters $`\delta m^2`$ and $`\mathrm{sin}^22\theta`$ or by variations in the initial asymmetry $`\eta`$, as was shown in our earlier work. These two cases are physically quite different, of course. Varying the oscillation parameters changes the shape of the valleys forming after the resonance. This is an important issue because there will always be some experimental uncertainty in the measurements of masses and mixings, leading to an unavoidable uncertainty in the SBBN predictions in this scenario.
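As a quick arithmetic check of Eq. (12) for the Fig. 1 parameters (our own check; $`\delta m^2`$ is taken in units of eV², as the quoted numbers imply):

```python
import math

dm2 = 1e-2                                # |delta m^2| in eV^2
s22 = 10.0 ** (-7.5)                      # sin^2(2 theta)
cos2t = math.sqrt(1.0 - s22)              # ~ 1 for such small mixing
Tc = 16.0 * (dm2 * cos2t) ** (1.0 / 6.0)  # Eq. (12), in MeV
print(f"T_c ~ {Tc:.2f} MeV")              # ~ 7.4 MeV
```

This reproduces the quoted $`T_c=7.41`$ MeV to within the rounding of the 16.0 prefactor.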
Variations in $`\eta`$ are physical, arising for example from local inhomogeneities in the baryon asymmetry created by early phase transitions, and they correspond to deviations in the initial conditions (the speed and direction of the ball) upon entering the resonance region. In a recent article by Dolgov et al., results were presented which are in sharp disagreement with ours. In particular, it was claimed that the sign of the neutrino asymmetry is completely deterministic. Moreover, they suggested that the chaotic behavior seen in our work would be due to accumulated errors in the numerical codes. This suggestion is not correct. The numerical errors are completely under control in our computations: we have checked that allowing local error tolerances orders of magnitude larger than those actually employed to get our results has no effect on them. Rather, we will now argue that the disagreement is due to ill-justified analytic approximations used by Dolgov et al., which lead to an artificial sign determinacy. The first approximation made there was to neglect the term $`V_xP_z^{-}`$ in the equation for $`P_y^{-}`$, under the assumption that it is small in comparison to the $`V_LP_y^{+}`$ term. However, just before the resonance the effective asymmetry $`L`$ is driven to zero, so that the effective potential $`V_L`$ is in fact very small, and $`V_xP_z^{-}`$ should not be expected to be subdominant. We have indeed found that this term is of crucial importance for the initial asymmetry growth, as it prevents $`L`$ from attaining an arbitrarily small value before the resonance, as will be discussed below. The second approximation made has even more dramatic effects. It was argued that because Eq. (9) for $`P_x^{-}`$ and $`P_y^{-}`$ can be scaled to the form $`\dot{P}_{x,y}^{-}=Q(a+b+c)`$, where $`Q\simeq 5.6\times 10^4\sqrt{|\mathrm{cos}2\theta\,\delta m^2|}`$ is a large parameter, it should be safe to set the derivatives $`\dot{P}_{x,y}^{-}`$ to zero. The rapid oscillations seen in our solutions indicate that this approximation breaks down at the resonance temperature. To study the effect of these approximations quantitatively, we introduce them into our equations. Dropping the term $`V_xP_z^{-}`$ and setting $`\dot{P}_{x,y}^{-}`$ to zero in the equations (9), we then find the constraints

$$0=-V_0P_y^{-}-V_LP_y^{+}-\stackrel{~}{D}P_x^{-},$$

$$0=V_0P_x^{-}+V_LP_x^{+}-\stackrel{~}{D}P_y^{-}.$$ (13)

From these equations one can solve for the evolution of $`P_x^{-}`$ and $`P_y^{-}`$ algebraically, with the result

$$P_x^{-}=-\frac{V_L}{V_0^2+\stackrel{~}{D}^2}\left(V_0P_x^{+}+\stackrel{~}{D}P_y^{+}\right),$$

$$P_y^{-}=\frac{V_L}{V_0^2+\stackrel{~}{D}^2}\left(\stackrel{~}{D}P_x^{+}-V_0P_y^{+}\right).$$ (14)

The remaining variables in Eqs. (9) and (11) are then solved numerically. In Fig. 2 we plot the results of a computation with (solid line) and without (dashed line) the implementation of the constraints (14), for the same parameters as in Fig. 1. The constrained solutions, which fall on top of each other in the figure, are indeed fully deterministic and display no oscillation. This is to be expected, because the solutions (14) cannot produce sign-changing oscillations in $`L`$: when $`L`$ goes to zero so does $`P_y^{-}`$, and hence $`\dot{P}_z^{-}`$ is then strongly suppressed in the constrained case. This prohibits $`L`$ from changing sign, so that sign($`L`$) is decided by the initial value of $`L`$.
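Why the constrained solution (14) cannot flip sign($`L`$) can also be seen in a toy integration: with $`P_y^{-}`$ slaved to $`V_L\propto L`$, the growth equation near the resonance takes the form $`dL/dt\propto L`$, so $`L=0`$ is a fixed point. The frozen $`P^{+}`$ values, the $`V_0`$ sweep and all other numbers below are placeholders, not the solution of the full system.

```python
Vx, D = 1.0, 0.5                  # placeholders
Px_p, Py_p = 0.3, -0.2            # frozen P^+ off-diagonals (placeholders)

def dL_dt(L, V0):
    VL = L                                                 # V_L ~ L, Eq. (4)
    Py_m = VL * (D * Px_p - V0 * Py_p) / (V0**2 + D**2)    # Eq. (14)
    return Vx * Py_m                                       # dP_z^-/dt, Eq. (9)

dt = 1e-2
for L0 in (+1e-10, -1e-10):
    L, V0 = L0, -1.0
    for _ in range(4000):
        L += dt * dL_dt(L, V0)
        V0 += dt * 0.1            # V0 sweeps through zero (the resonance)
    # L grows by orders of magnitude but never crosses zero
    print(f"L0 = {L0:+.0e}  ->  L = {L:+.3e}  (sign preserved)")
```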
When the evolution of the off-diagonal components is neglected, the mechanism of the asymmetry growth is different: instead of oscillating between the two new valleys, the system slowly rolls down to one of them after the resonance. This is in fact a common characteristic of the so-called static approximations , where the differences between the off-diagonals in the neutrino and antineutrino density matrices are ignored. Let us now consider in detail how the complicated interplay between the various terms in Eq. (9) leads to the initial exponential growth and oscillations of the asymmetry at the resonance temperature. Before the resonance the $`𝐏^+`$ components are practically independent of the $`𝐏^-`$ components, as the effective asymmetry $`L`$ is very small. However, the off-diagonal components $`P_x^+`$ and $`P_y^+`$ do grow to fairly large values. When $`V_0`$ changes sign at the resonance, $`P_x^+`$ is rapidly driven to change sign, while the evolution of $`P_y^+`$ remains unaffected due to the additional term $`V_xP_z^+`$. $`P_z^+`$ stays near its initial value until the off-diagonals begin to grow, after which it begins to diminish, signalling sterile neutrino production. Overall the evolution of the $`𝐏^+`$ components is smooth (see Fig. 3), and the most noticeable direct effect of the resonance is the sign change of $`P_x^+`$. The evolution of the $`𝐏^-`$ components is more complicated. Before the resonance the neutrino and antineutrino ensembles follow each other closely, in the sense that the off-diagonal differences $`P_{x,y}^-`$ are small. The main force keeping the difference of the neutrino and antineutrino off-diagonals small is the potential $`V_L`$, and not the effect of the damping terms. This can be confirmed by explicitly plotting the individual terms appearing in the derivatives $`\dot{P}_{x,y}^-`$ (see Fig. 4). The underlying mechanism driving $`\dot{P}_{x,y}^-`$ close to zero before the resonance is then the cancellation between the effective potential terms. The $`V_xP_z^-`$ term in Eq. (9) prevents $`L`$ from stabilizing to an arbitrarily small value, since when $`L`$ and the off-diagonal components $`P_{x,y}^-`$ are driven towards zero before the resonance, the small difference $`P_z^-`$ becomes important. The value of $`V_L`$ together with the differences $`P_{x,y}^-`$ will cancel the effect of a non-zero $`P_z^-`$. In this way the terms $`V_0P_y^-`$ and $`V_LP_y^+`$ cancel each other in the equation for $`P_x^-`$, and $`V_0P_x^-`$ together with $`V_LP_x^+`$ cancels $`V_xP_z^-`$ in the equation for $`P_y^-`$. After $`V_0`$ changes sign the cancellation of the potentials in the $`\dot{P}_x^-`$ equation no longer works, which turns around the effect of $`V_L`$ in the $`P_x^-`$ equation. Moreover, as $`P_x^+`$ changes sign due to the resonance, the effect of $`V_L`$ changes also in the $`P_y^-`$ equation. The difference between the off-diagonal components is now growing due to a non-zero value of $`L`$, leading to a self-supporting exponential growth of $`L`$. We wish to stress that the magnitudes of all the effects discussed here are well above the numerical accuracy. It is true that before the resonance $`L`$ goes very close to zero, but this is not relevant. What is relevant instead is that $`P_z^-`$ remains at a value of order $`\eta `$. It is interesting to note that the dynamics explained above will always lead to an initial growth of $`L`$ in the direction given by the sign of $`\eta `$. This result agrees with the analytical considerations in .
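The sensitivity described above can be mimicked by a deliberately crude toy model (not the QKEs themselves): a damped particle thrown into a double well with enough energy to cross the ridge several times; all numbers below are invented for illustration.

```python
import numpy as np

# Toy "ball in a valley": V(x) = (x² - 1)², valleys at x = ±1, ridge at x = 0.
# A particle launched over the ridge with slightly different energies settles,
# after several crossings, into one valley or the other in an irregular pattern.
def final_valley(v0, friction=0.05, dt=1e-3, steps=200_000):
    x, v = 0.0, v0
    for _ in range(steps):
        a = -friction * v - 4.0 * x * (x**2 - 1.0)  # damped motion in the double well
        v += a * dt                                  # semi-implicit Euler step
        x += v * dt
    return int(np.sign(x))

for v0 in np.linspace(3.0, 3.2, 9):
    print(f"{v0:.3f} -> valley {final_valley(v0)}")  # signs typically flip irregularly
```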
Nevertheless, as we have seen, this does not guarantee that the final sign is that of $`\eta `$. One should bear in mind that the beginning of the asymmetry growth is a very complicated phenomenon, where all the variables and almost all the terms in the evolution equations are important to the outcome of the resonance, and a complete account of the dynamics of the off-diagonal elements is of crucial importance. This of course makes it very difficult, if not impossible, to find any sensible analytical approximation to Eq. (9). There still remains one contradiction between the results of and the constrained solution obtained here; namely, as seen from Fig. 2, even the constrained solutions yield an $`L`$ which grows to a large value. The quantitative study of this effect is beyond the validity of the method employed here. We may however point out a further approximation made in , which appears to be the reason why they see a much weaker asymmetry growth. The collision term in for the active-active component $`\rho _{aa}`$ of the density matrix (1) is of the form $$-\mathrm{\Gamma }(p)\left(\rho _{aa}(p)-f_{eq}\right),$$ (15) where in $`f_{eq}`$ was taken to be the free Fermi distribution function $`f_{eq}=[1+\mathrm{exp}(p/T)]^{-1}`$. The authors of note themselves that $`f_{eq}`$ should actually be the distribution function which includes a chemical potential, $`f_{eq}(\mu )=[1+\mathrm{exp}(p/T-\mu /T)]^{-1}`$, but they assume the difference to be minor because $`\mu /T`$ is small. However, one can show that $`\mu /T\approx 0.7L_{\nu _\tau }`$ and expand $`f_{eq}(L_{\nu _\tau })`$ to give $$f_{eq}(L_{\nu _\tau })\approx f_{eq}(0)\pm 0.7L_{\nu _\tau }\frac{f_{eq}(0)}{1+\mathrm{exp}(-p/T)}.$$ (16) Approximating the second term of the expansion by zero in (15) then means that the system is seeking the free Fermi distribution instead of the correct equilibrium distribution, which includes an asymmetry. This gives rise to an artificial force, proportional to $`\mathrm{\Gamma }(p)L_{\nu _\tau }`$, resisting the growth of the asymmetry. Taking this into account, together with their previous assumptions leading to the constraints (14), according to which the rate of asymmetry generation itself is weak, $$\dot{P}_z^-\propto P_y^-\propto V_L\propto L,$$ (17) it appears likely that this force is able to stop the asymmetry growth before the system has reached the true bottom of the valley. We chose not to pursue this issue further, since the previous approximations have already rendered the system unphysical by denying the possibility of chaoticity. In this letter we have considered the indeterminacy, or chaoticity, in the sign of the neutrino asymmetry $`L_{\nu _\tau }`$ arising from $`\nu _\tau `$–$`\nu _s`$ mixing in the early universe. We carefully discussed the dynamics giving rise to the growth of the asymmetry, and unravelled the mechanism leading to the uncertainty in sign($`L`$). We confirmed that in the region of the parameter space identified in the system is very sensitive to small variations in the initial conditions and mixing parameters. We have carefully checked that our numerical methods are highly accurate, so that the effects of the variations leading to a sign indeterminacy in the final asymmetry are physical. As we pointed out, our results are in contradiction with the recent claims , according to which the sign of the asymmetry is completely deterministic and the asymmetry growth is small compared to the results of .
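(As a parenthetical numerical check of the expansion (16), with illustrative values of $`p/T`$ and $`L_{\nu _\tau }`$ of our own choosing:)

```python
import numpy as np

# Check of Eq. (16): f(μ) = 1/(1+exp(p/T - μ/T)) with μ/T ≈ 0.7 L, expanded to
# first order using df/d(μ/T) = f(0)(1 - f(0)) = f(0)/(1 + exp(-p/T)).
p_over_T = np.linspace(0.1, 10.0, 50)
L = 1e-3                                       # illustrative small asymmetry
f0 = 1.0 / (1.0 + np.exp(p_over_T))
exact = 1.0 / (1.0 + np.exp(p_over_T - 0.7 * L))
approx = f0 + 0.7 * L * f0 / (1.0 + np.exp(-p_over_T))
print(np.max(np.abs(exact - approx)))          # O((0.7 L)²): the expansion is accurate
```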
We have shown that the results of are in fact unphysical and that they arise because of oversimplifying approximations which artificially stabilize the dynamics responsible for the chaoticity. Indeed, we found that all the terms in the evolution equations and all the components of the density matrix are important for the dynamics of the system. Hence it appears unlikely that any simplifying analytic approximation can be found that would describe the system adequately. We have employed the momentum-averaged equations, as they are sufficient to study the chaoticity of the asymmetry growth and to resolve the validity of the approximations imposed in . Our unpublished results with a momentum-dependent code support the results of this letter, as do the results obtained by other groups working with momentum-dependent equations , although the chaotic region appears to be somewhat smaller when momentum dependence is included. Acknowledgements The author wishes to thank Kari Enqvist and Kimmo Kainulainen for enjoyable collaboration on issues related to this letter, and in particular KK for suggesting this research and for his patient help in the preparation of this manuscript. It is a pleasure to acknowledge the hospitality of NORDITA, Copenhagen, during the time this work was completed.
no-problem/9911/astro-ph9911207.html
ar5iv
text
# The Bologna submillisecond pulsar survey ## 1. Introduction The discovery of the first millisecond pulsar (MSP) PSR J1939+2134 (Backer et al, 1982), having a rotational period of $`\sim `$ 1.56 ms, raised the challenging question of the limiting spin period of neutron stars (see D'Amico, 1998, and references therein). A sensitive search for ultrafast pulsars requires sophisticated observing equipment, huge data storage devices, and supercomputing facilities, so it is not surprising that the minimum period observed so far is that of the original bright millisecond pulsar: indeed, only periodicities above this value were effectively searched by radioastronomers, resulting in the present sample of about 80 objects (in the field and in globular clusters) with periods of the order of a few milliseconds. The minimum observed period is remarkably close to the limiting spin period of a neutron star predicted by the so-called "stiff" equations of state of ultradense matter. The existence of such a pulsar, and its "clock" stability, suggests that this object must be spinning well below the break-up limit of neutron stars, implying a much lower degree of stiffness of the ultradense matter. Indeed, other realistic and equally qualified equations of state were proposed, resulting in a range of neutron star limiting spin periods. The shortest break-up periods ($`\sim `$ 0.6 ms) are those predicted by the so-called "soft" equations of state (see Burderi & D'Amico, 1997, and references therein). Possenti et al (1999) showed that Nature could provide evolutionary paths for spinning up a significant number of neutron stars to such an extremely high rotational regime. So, in principle, a search in the submillisecond period range can be used to put constraints on the equation of state of matter at nuclear densities. Triggered by these considerations, we commissioned a new pulsar search experiment at the Northern Cross radiotelescope, near Bologna, Italy. The experiment has enough time resolution to detect these objects, and it is equipped with an online data processing system. It is designed such that very narrow pulses, such as those of the original millisecond pulsar, can be easily detected, and it has a similar sensitivity level for a pulsar spinning near the limiting spin period predicted by the softest equation of state ($`\sim `$ 0.6 ms). In essence, the present experiment represents the first systematic large-scale search of the ultrashort period range (P $`<`$ 1.5 ms). ## 2. Sensitivity requirements of the experiment The detectability of MSPs is the result of a compromise between several requirements.
The minimum detectable pulsar mean flux density is given by: $$S_{min}\simeq k\frac{S_{SYS}}{\sqrt{\mathrm{\Delta }\nu \mathrm{\Delta }t}}\sqrt{\frac{w_e}{P-w_e}}$$ (1) where $`S_{SYS}`$ is the system noise equivalent flux density in Jy, $`k`$ is a factor that accounts for the adopted detection threshold and various system losses (typically $`k\sim 10`$), $`\mathrm{\Delta }\nu `$ is the observed bandwidth, $`\mathrm{\Delta }t`$ is the integration time, $`P`$ is the pulsar period, and $`w_e`$ is the effective pulse width, given by $$w_e=\sqrt{w^2+(\delta t)^2+\left(\frac{DM}{1.2\times 10^{-4}}\frac{\delta \nu }{\nu ^3}\right)^2+(\delta t_{scatt})^2}$$ (2) where $`w`$ is the intrinsic pulse width, $`\delta t`$ is the time resolution as determined by the sampling time, the post-detection time constant and the anti-aliasing filter, $`DM`$ is the dispersion measure in $`\mathrm{cm}^{-3}\mathrm{pc}`$, $`\delta \nu `$ and $`\nu `$ are respectively the frequency resolution and the centre observing frequency in MHz, and $`\delta t_{scatt}`$ is the broadening of the pulses due to multi-path scattering in the interstellar medium. The scattering term $`\delta t_{scatt}`$ is roughly proportional to the pulsar distance and scales as $`\nu ^{-4}`$, so searches at metre wavelengths are usually expected to find mainly nearby, high-latitude millisecond pulsars. The instrument used for the present experiment is the E-W arm of the Northern Cross radiotelescope, near Bologna, Italy, which operates at 408 MHz. The experiment parameters are summarized in Table 1, and the corresponding sensitivity is shown in Fig. 1. ## 3. Data acquisition and online processing In the Northern Cross pulsar system (D'Amico et al, 1996), the IF signal is split into a filterbank consisting of 128 channels, each of width 32 kHz. The outputs are detected, low-pass filtered and one-bit digitised. The data acquisition and the hardware set-up are controlled by a PC-style microcomputer using a customized external bus and a digital I/O expansion unit. The PC is used to program the various blocks and log the relevant observation parameters. A special purpose card (D'Amico & Maccaferri, 1994) is used to manage a data rate up to several megasamples per second, and to keep timing precision. Digital data are transferred to a realtime data processing subsystem using a fast link. The original feature of the realtime data processing subsystem (Fig. 2) is the dedispersing unit. This is essentially a large bank of 1-bit addressable RAM where the raw digital data of each observed beam position are stored. A special programmable circuitry allows one to pick up the 1-bit samples, sum them with a given dispersion delay, and output the corresponding dedispersed time series to a local bus. Any dedispersed time series transferred on the local bus can be read by any of the four CPUs available on the same bus and searched for periodicities. The most relevant suspect periodicities of each trial DM step are saved into a memory buffer and finally sorted. The most significant suspect periodicities detected on each beam position are saved in a permanent database. An offline program is available to selectively search the database and classify the suspects by human inspection. ## 4. Survey status and results So far, we have observed about 80% of the sky region in the declination interval $`4^{\circ}<\delta <42^{\circ}`$. We haven't discovered any pulsar with a period shorter than or similar to that of the original millisecond pulsar.
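To make the sensitivity budget of Eqs. (1) and (2) concrete, a small calculator can be sketched as follows; apart from the 408 MHz centre frequency and the 32 kHz channel width quoted above, all numerical inputs are placeholders, since Table 1 is not reproduced here:

```python
import math

# Effective pulse width, Eq. (2); DM in cm^-3 pc, frequencies in MHz, times in s.
def effective_width(w, dt, DM, dnu, nu, t_scatt=0.0):
    t_dm = DM / 1.2e-4 * dnu / nu**3          # dispersion smearing across one channel
    return math.sqrt(w**2 + dt**2 + t_dm**2 + t_scatt**2)

# Minimum detectable mean flux density, Eq. (1); bandwidth in Hz, S_sys in Jy.
def s_min(S_sys, bandwidth, t_int, P, w_e, k=10.0):
    return k * S_sys / math.sqrt(bandwidth * t_int) * math.sqrt(w_e / (P - w_e))

P = 1.5e-3                                     # a 1.5 ms test period (assumed)
w_e = effective_width(w=P / 10, dt=1e-4, DM=30.0, dnu=0.032, nu=408.0)
print(s_min(S_sys=1000.0, bandwidth=128 * 32e3, t_int=80.0, P=P, w_e=w_e))  # Jy
```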
We have detected 35 known long-period pulsars, 5 known millisecond pulsars and 1 new millisecond pulsar, PSR J0030+0451, which was also discovered at Arecibo (Somer et al, these proceedings). This new millisecond pulsar has a period P = 4.86 ms and a very low dispersion measure, DM = 4.3 $`\mathrm{cm}^{-3}\mathrm{pc}`$. After the discovery, made early in 1999, we began a series of follow-up observations using the Parkes radiotelescope. In particular, we made long integrations at 70 cm in order to measure the scintillation parameters and derive the scintillation velocity (Nigro et al, in preparation). ## References Backer, D.C., Kulkarni, S.R., Heiles, C., Davis, M.M., & Goss, W.M. 1982, Nature, 300, 615 Burderi, L., & D'Amico, N. 1997, ApJ, 490, 343 Manchester, R.N., Lyne, A.G., D'Amico, N., Bailes, M., Johnston, S., Lorimer, D.R., Harrison, P.A., Nicastro, L., & Bell, J.F. 1996, MNRAS D'Amico, N., Grueff, G., Montebugnoli, S., Maccaferri, A., Cattani, A., Bortolotti, C., Nicastro, L., Fauci, F., Tomassetti, G., Roma, M., Ambrosini, R., & Rodriguez, E. 1996, ApJS, 106, 611 D'Amico, N., & Maccaferri, A. 1994, Experimental Astronomy, 4, 189 D'Amico, N. 1998, in The Many Faces of Neutron Stars, ed. R. Buccheri, J. van Paradijs, & M. A. Alpar (Dordrecht: Kluwer Academic Publishers), 129 Possenti, A., Colpi, M., Geppert, U., Burderi, L., & D'Amico, N. 1999, these proceedings Somer, A., Backer, D.C., Zepka, A., Cordes, J.M., Arzoumanian, Z., & McLaughlin, M. 1999, these proceedings
no-problem/9911/cond-mat9911147.html
ar5iv
text
# The energy-level statistics in the core of a vortex in a p-wave superconductor. ## Abstract In the presence of strong disorder, the statistics of quasiparticle levels in the core of a vortex in a two-dimensional $`p`$-wave superconductor belongs to the universality class $`B`$ corresponding to the ensemble of orthogonal matrices in odd dimensions. This novel universality class appears as a consequence of the $`O(2)`$ spin symmetry of $`p`$-wave pairing. It is preserved in the presence of random disorder, of an electromagnetic vector potential, and of an admixture of the pairing of opposite chirality in the vortex core, but may be destroyed by spin-orbit coupling and by Zeeman splitting. The indications of $`p`$-wave superconductivity in Sr<sub>2</sub>RuO<sub>4</sub> stimulated the study of exotic properties of $`p`$-wave superconductors. The order parameter in this compound is expected to be the same as in the $`A`$ phase of <sup>3</sup>He, $`\widehat{d}_\pm (𝐤)\propto \widehat{z}(k_x\pm ik_y)`$. The direction of the vector $`\widehat{z}`$ of the triplet orientation is fixed by the anisotropy of Sr<sub>2</sub>RuO<sub>4</sub> to be perpendicular to the Ru-O planes. Because of the strong anisotropy, one may consider a two-dimensional model as a starting approximation. In two dimensions, the $`p`$-wave superconducting gap does not have nodes, which resembles conventional superconductors. However, many differences appear in inhomogeneous setups including boundaries, vortices, and impurities. In particular, impurities generate bound states with circular currents ; similar subgap states appear at the boundary and at domain walls ; a single-quantum vortex possesses a zero-energy state of topological origin . It is this last property that motivates the present work. The zero-energy state will be shown to survive a certain class of disordered perturbations, such as a random potential (modeling the impurities) or a random electromagnetic vector potential. If such a disorder is strong, the quasiparticle levels in the vortex core mix and, at energy scales much smaller than the superconducting gap, they may be described by a random-matrix ensemble. In the present paper we identify the corresponding ensemble with that of orthogonal matrices in an odd number of dimensions (class $`B`$ in Cartan's classification) . Thus this example completes the list of universality classes corresponding to the eleven families of symmetric spaces (see Table 1). The three Wigner-Dyson universality classes correspond to the $`A`$, $`A`$I, and $`A`$II series (unitary, orthogonal, and symplectic, respectively) . Three more classes ($`A`$III, $`BD`$I, and $`C`$II) appear in systems with massless Dirac fermions, as a consequence of the chiral symmetry . Finally, four more classes ($`C`$, $`D`$, $`C`$I, and $`D`$III) were shown to describe mesoscopic superconducting systems, depending on the presence of the spin-rotation and time-reversal symmetries . In the present paper we demonstrate that the remaining eleventh class $`B`$ appears in $`p`$-wave superconductors under topologically non-trivial (vortex-type) boundary conditions responsible for the zero-energy level. Let us start by briefly describing the properties of the ensemble of orthogonal matrices in odd dimensions (class $`B`$). The Lie algebra $`so(2N+1)`$ consists of real antisymmetric $`(2N+1)\times (2N+1)`$ matrices. Its dimension is $`\mathrm{dim}[so(2N+1)]=N(2N+1)`$ and its rank is $`\mathrm{rk}[so(2N+1)]=N`$.
If a matrix $`A`$ belongs to $`so(2N+1)`$, the Hermitian matrix $`H=iA`$ has one zero eigenvalue, and the remaining eigenvalues form pairs $`(\omega _i,-\omega _i)`$, $`i=1,\mathrm{},N`$. At low energies we may, without loss of generality, assume a Gaussian probability distribution for the Hamiltonian, $`dP(H)=\mathrm{exp}(-\mathrm{Tr}H^{\dagger }H/2v^2)\prod dH_{ij}`$, where $`v`$ is a large cut-off energy (in our problem, $`v`$ is of the order of the superconducting gap). Then the joint probability distribution for the (positive) eigenvalues $`\omega _i`$ is of the conventional form : $$dP\{\omega _i\}=|J\{\omega _i\}|\underset{i=1}{\overset{N}{}}e^{-\omega _i^2/v^2}d\omega _i,$$ (1) where $`J\{\omega _i\}`$ is the Jacobian of the diagonalization of the Hamiltonian, $$|J\{\omega _i\}|=\underset{i<j}{}|\omega _i^2-\omega _j^2|^\beta \underset{i=1}{\overset{N}{}}|\omega _i|^\alpha .$$ (2) At energies much less than the cut-off energy $`v`$, the correlations of the quasiparticle levels $`\omega _i`$ are determined solely by the Jacobian $`J\{\omega _i\}`$ \[the expression (2) follows from the explicit form of the roots $`𝝃_{(k)}`$ of the Lie algebra $`so(2N+1)`$ and from $`|J\{\omega _i\}|=\prod _k|\sum _i\xi _{(k)}^i\omega _i|`$; the values of $`\alpha `$ and $`\beta `$ may also be easily found from dimension counting\]. For $`so(2N+1)`$ the parameters of the level statistics are $`\beta =2`$, $`\alpha =2`$. The values of $`\alpha `$ and $`\beta `$ for the universality class $`B`$ coincide with those for class $`C`$, and only the presence or absence of the zero-energy level distinguishes the level distributions in the two classes. Next, we shall prove that a single-quantum vortex in a two-dimensional $`p`$-wave superconductor obeys the statistics of the $`so(2N+1)`$ ensemble, provided the symmetry of the Hamiltonian preserves the zero-energy level. Consider the Bogoliubov-de Gennes Hamiltonian $$H=\underset{\alpha }{}\mathrm{\Psi }_\alpha ^{\dagger }\left[\frac{(𝐩-e𝐀)^2}{2m}+V(𝐫)-\epsilon _F\right]\mathrm{\Psi }_\alpha +\mathrm{\Psi }_{\uparrow }^{\dagger }\left(\mathrm{\Delta }_x\star \frac{p_x}{k_F}+\mathrm{\Delta }_y\star \frac{p_y}{k_F}\right)\mathrm{\Psi }_{\downarrow }^{\dagger }+\mathrm{h}.\mathrm{c}.,$$ (3) where $`\mathrm{\Psi }_\alpha `$ are the electron operators ($`\alpha `$ is the spin index), $`V(𝐫)`$ is the external potential of impurities, $`𝐀(𝐫)`$ is the electromagnetic vector potential, and $`\mathrm{\Delta }_x(𝐫)`$ and $`\mathrm{\Delta }_y(𝐫)`$ are the coordinate-dependent components of the superconducting gap. \[In the bulk, the preferred superconducting order is one of the two chiral components $`\eta _\pm =\mathrm{\Delta }_x\pm i\mathrm{\Delta }_y`$, but in inhomogeneous systems, such as a vortex core, an admixture of the opposite component is self-consistently generated . We account for this effect by allowing the two independent order parameters $`\mathrm{\Delta }_x`$ and $`\mathrm{\Delta }_y`$.\] The star ($`\star `$) denotes the symmetrized ordering of the gradients $`p_\mu `$ and the order parameters $`\mathrm{\Delta }_\mu `$ \[definition: $`A\star B\equiv (AB+BA)/2`$\]. At infinity, the order parameters impose the vortex boundary conditions: $$\mathrm{\Delta }_x(r\rightarrow \mathrm{\infty },\varphi )=\mathrm{\Delta }_0e^{\pm i\varphi },\mathrm{\Delta }_y(r\rightarrow \mathrm{\infty },\varphi )=i\mathrm{\Delta }_0e^{\pm i\varphi },$$ (4) where $`r`$ and $`\varphi `$ are polar coordinates in the physical space. Plus or minus signs in the exponent correspond to a positive or a negative single-quantum vortex.
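As a quick aside, the spectral structure invoked above for the class-$`B`$ ensemble — one exact zero eigenvalue plus pairs $`(\omega ,-\omega )`$ for $`H=iA`$ with $`A`$ real antisymmetric and odd-dimensional — is easy to verify numerically; a minimal sketch:

```python
import numpy as np

# H = iA with A real antisymmetric of odd dimension 2N+1: det A = 0 forces one zero
# eigenvalue, and the rest come in pairs (ω, -ω).
rng = np.random.default_rng(0)
N = 4
M = rng.normal(size=(2 * N + 1, 2 * N + 1))
A = M - M.T                                  # real antisymmetric, dimension 9
omega = np.linalg.eigvalsh(1j * A)           # H = iA is Hermitian
print(np.round(np.sort(omega), 6))           # symmetric spectrum with an exact zero mode
```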
For an axially symmetric vortex with the chirality of the order parameter non-self-consistently fixed ($`\mathrm{\Delta }_y\equiv i\mathrm{\Delta }_x`$), without the vector potential $`𝐀(𝐫)`$ and without disorder $`V(𝐫)`$, the low-lying eigenstates of the Hamiltonian (3) have been found by Kopnin and Salomaa . The spectrum is $$E_n=n\omega _0,(p\text{-wave}),n=0,\pm 1,\pm 2,\mathrm{}$$ (5) with $`\omega _0\sim \mathrm{\Delta }^2/\epsilon _F`$. This result should be compared with the spectrum of the vortex core in an $`s`$-wave superconductor : $$E_n=\left(n+\frac{1}{2}\right)\omega _0,(s\text{-wave}),n=0,\pm 1,\pm 2,\mathrm{}.$$ (6) The common feature of the spectra in the $`s`$-wave and $`p`$-wave cases is the symmetry about zero energy. If we interpret holes in the negative-energy levels as excitations with positive energies (and with the opposite spin), then this symmetry implies that the excitations are doubly degenerate in spin: to each spin-up excitation there corresponds a spin-down excitation at the same energy. For an $`s`$-wave vortex, this degeneracy is due to the full spin-rotation $`SU(2)`$ symmetry. The $`p`$-wave Hamiltonian (3) has a reduced spin symmetry. Namely, it has the symmetry group $`O(2)`$ generated by rotations about the $`z`$-axis ($`\mathrm{\Psi }_{\uparrow }\rightarrow e^{i\alpha }\mathrm{\Psi }_{\uparrow }`$, $`\mathrm{\Psi }_{\downarrow }\rightarrow e^{-i\alpha }\mathrm{\Psi }_{\downarrow }`$) and by the spin flip $`\mathrm{\Psi }_{\uparrow }\rightarrow \mathrm{\Psi }_{\downarrow }`$, $`\mathrm{\Psi }_{\downarrow }\rightarrow \mathrm{\Psi }_{\uparrow }`$. This non-abelian group causes the two-fold degeneracy of all levels (except for the zero-energy level(s), where the symmetry $`O(2)`$ may mix the creation and annihilation operators for the same state). This symmetry is crucial for our discussion. Note that we have included in the Hamiltonian neither the spin-orbit term $`(𝐔_{SO}[𝝈\times 𝐩])`$ nor the Zeeman splitting $`𝐇(𝐫)\cdot 𝝈`$. Either of these terms would break the spin symmetry $`O(2)`$, which would eventually result in a different universality class of the disordered system (type $`D`$ with non-degenerate levels), if these terms are sufficiently strong. The difference between the $`s`$- and $`p`$-wave vortices is the zero-energy level in the $`p`$-wave case. It has been shown by Volovik that this level has a topological nature . Indeed, suppose we gradually increase disorder in the Hamiltonian (3). The levels shift and mix, but the degeneracy of the levels remains the same as long as the symmetry $`O(2)`$ is preserved. The total number of levels remains odd, and therefore the zero-energy level cannot shift if the final Hamiltonian is a continuous deformation of the original one (without disorder), i.e. if the topological class of the boundary conditions (4) remains the same. Now we proceed along the usual lines of the random-matrix approach. Let us take the most random distribution of Hamiltonians within the given symmetry class. The only symmetry of the Hamiltonian (3) is the spin symmetry $`O(2)`$. The time-reversal symmetry is already broken by the vortex and by the pairing, and therefore neither the vector potential $`𝐀(x)`$ nor local deformations of $`\mathrm{\Delta }_\mu `$ can reduce the symmetry of the Hamiltonian.
When projected onto the spin-up excitations $`\gamma _{\uparrow }^{\dagger }=\int [u(𝐫)\mathrm{\Psi }_{\uparrow }^{\dagger }(𝐫)+v(𝐫)\mathrm{\Psi }_{\downarrow }(𝐫)]d^2𝐫`$, the Hamiltonian for the two-component vector $`(u,v)`$ takes the form: $$H=\left(\begin{array}{cc}\left[\frac{(-i\nabla -e𝐀)^2}{2m}+V(𝐫)-\epsilon _F\right]& \left[\frac{\mathrm{\Delta }_x}{k_F}\star (-i\partial _x)+\frac{\mathrm{\Delta }_y}{k_F}\star (-i\partial _y)\right]\\ \left[\frac{\mathrm{\Delta }_x^{*}}{k_F}\star (-i\partial _x)+\frac{\mathrm{\Delta }_y^{*}}{k_F}\star (-i\partial _y)\right]& -\left[\frac{(-i\nabla +e𝐀)^2}{2m}+V(𝐫)-\epsilon _F\right]\end{array}\right).$$ (7) In an arbitrary orthonormal basis of electronic states, this Hamiltonian may be written as a matrix $$H=\left(\begin{array}{cc}h& \mathrm{\Delta }\\ \mathrm{\Delta }^{\dagger }& -h^T\end{array}\right).$$ (8) From the hermiticity of the Hamiltonian, it follows that $`h^{\dagger }=h`$. From the explicit form of the $`p`$-wave pairing, $`\mathrm{\Delta }=-\mathrm{\Delta }^T`$ (it is here that the $`p`$-wave structure of the pairing is important; for $`s`$-wave pairing we would have $`\mathrm{\Delta }=\mathrm{\Delta }^T`$ instead). These are the only restrictions on the Hamiltonian (8). If we define $$U_0=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1& 1\\ i& -i\end{array}\right),$$ (9) the restrictions on the Hamiltonian (8) are equivalent to the condition that the rotated matrix $`iU_0HU_0^{-1}`$ is real antisymmetric, i.e. it belongs to the Lie algebra $`so(M)`$, where $`M`$ is the dimension of the Hilbert space (the same rotation of the Hamiltonian was used in Ref. to identify the $`D`$ universality class). The last step in our argument is to note that, under the vortex boundary conditions, the dimension of the Hamiltonian (8) is odd, not even (this may be difficult to visualize from the particle-hole representation (8), but easier from the rotated Hamiltonian $`U_0HU_0^{-1}`$). Thus for a single-quantum vortex, we identify the space of the Hamiltonians with $`so(2N+1)`$. A simple consequence of this result is the level distribution (2) with $`\alpha =\beta =2`$. Besides the zero-energy level, this distribution is identical to that of class $`C`$ realized in $`s`$-wave vortices . This allows us to use the trick of mapping onto free fermions to compute any correlation function of the density of states (DOS) . In particular, the average DOS is $$\rho (\omega )=\frac{1}{\omega _0}-\frac{\mathrm{sin}(2\pi \omega /\omega _0)}{2\pi \omega }+\delta (\omega ),$$ (10) where $`\omega _0`$ is the average inter-level distance. One more lesson from our analysis is the difference in the universality classes of $`p`$-wave mesoscopic systems from their $`s`$-wave analogues. In particular, a $`p`$-wave system without a topological zero-energy level would belong to the universality class $`D`$ ($`\beta =2`$, $`\alpha =0`$), in contrast to the class $`C`$ of its $`s`$-wave counterpart. A detectable physical consequence of such a difference is an increase of the average DOS near the Fermi energy (in class $`D`$), as opposed to a suppression of the DOS near the Fermi level in class $`C`$ \[in class $`B`$, such a suppression is compensated by a $`\delta `$-function at zero\]. In the present work we do not discuss the microscopic derivation of the level mixing. We assume "strong level mixing", which allows us to use the random-matrix approach. On the other hand, the symmetry of the $`p`$-wave pairing is known to provide a certain "spectrum rigidity" suppressing the shift and mixing of the low-lying levels by impurities . Thus driving the system into the regime of "strong level mixing" may require a stronger disorder than in an $`s`$-wave vortex.
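The statement that $`h^{\dagger }=h`$ and $`\mathrm{\Delta }=-\mathrm{\Delta }^T`$ make $`iU_0HU_0^{-1}`$ real antisymmetric can be checked numerically on random finite matrices; a sketch (the block size $`n`$ is arbitrary):

```python
import numpy as np

# Random H of the form (8) with h Hermitian and Δ antisymmetric; the rotation (9)
# (with blocks of size n) should make i U0 H U0^{-1} real and antisymmetric.
rng = np.random.default_rng(1)
n = 3
h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (h + h.conj().T) / 2                     # h† = h
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = (D - D.T) / 2                            # Δ = -Δᵀ (p-wave case)
H = np.block([[h, D], [D.conj().T, -h.T]])
I = np.eye(n)
U0 = np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)
X = 1j * (U0 @ H @ np.linalg.inv(U0))
print(np.max(np.abs(X.imag)), np.max(np.abs(X + X.T)))  # both at machine precision
```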
A microscopic picture of the crossover to the disordered regime in the $`s`$-wave case was developed in , and its extension to the $`p`$-wave case will be a subject of future studies. Furthermore, disorder is known to suppress $`p`$-wave superconductivity, and the possibility of reaching the required level mixing before destroying superconductivity is not obvious. A coexistence of sufficiently strong disorder and $`p`$-wave superconductivity may possibly be achieved in alternative setups such as disordered normal-superconducting ($`p`$-wave) hybrid devices. Besides strong level mixing, the only requirements on the system to exhibit type-$`B`$ level statistics are the $`p`$-wave symmetry of the pairing and the topological zero-energy level. One more approximation made in our model is neglecting the Zeeman splitting. This is a good approximation for strong type-II superconductors (with $`\kappa =\lambda /\xi \gg 1`$). In Sr<sub>2</sub>RuO<sub>4</sub> the experiments indicate $`\kappa \approx 2.6`$ , which implies that the Zeeman level shift is of the order of the inter-level spacing $`\omega _0`$. However, in a clean vortex this shift is approximately constant for the low-lying levels, due to the large coherence length ($`k_F\xi \gg 1`$) and to a smooth magnetic-field profile $`𝐇(𝐫)`$. It is likely that this property will hold even in the limit of strong disorder; then the overall level distribution will simply be shifted by a constant energy. An important observation related to the Zeeman splitting of the energy levels is the fractional spin $`1/4`$ of the vortex in the ground state. Indeed, in the absence of the Zeeman field, the multi-particle ground state of the vortex is doubly degenerate, with the $`z`$-component of the spin in the two degenerate states differing by $`1/2`$. The Zeeman field splits the ground state as if the vortex were a particle with the $`z`$-component of the spin equal to half the spin of the electron. Thus we conclude that the vortex has the $`z`$-component of the spin $`S_z=\pm 1/4`$. It would be interesting to understand the possible physical implications of this effect. Finally, for the convenience of the reader, we have gathered the information about all eleven symmetry classes in Table 1. This table is compiled from Refs. and contains the dimensions and ranks of the symmetric spaces as well as the parameters $`\alpha `$ and $`\beta `$ of the joint probability distributions of the energy levels (they may be computed solely from the ranks and dimensions by simple power counting). The present work provides an example of a physical system belonging to class $`B`$ (the last line of the table). Besides, one more symmetry subclass has not been studied so far in the context of mesoscopics: the odd-$`N`$ subclass of $`D`$III. In the work of Altland and Zirnbauer , the even-$`N`$ subclass of $`D`$III is represented by an $`s`$-wave mesoscopic system with time-reversal symmetry and with broken spin symmetry. The novel odd-$`N`$ subclass of $`D`$III may occur as a topological modification of that construction or in a $`p`$-wave superconducting system without time-reversal-symmetry breaking, provided a zero mode is required by topology. Finding a physically plausible mesoscopic realization of the odd-$`N`$ $`D`$III subclass is an interesting problem in the framework of the symmetry classification of mesoscopic superconducting systems. TABLE 1. Symmetric spaces and universality classes of random-matrix ensembles. P.S.
At the final stage of the preparation of this manuscript, the author learned about the recent work of Bocquet, Serban, and Zirnbauer , where the type-$`B`$ level statistics in $`p`$-wave vortices has been pointed out. The author thanks M. V. Feigel'man for suggesting this problem, for fruitful discussions and for helpful comments on the manuscript. Useful discussions with G. Blatter, V. Geshkenbein, D. Gorokhov, R. Heeb, and M. Zhitomirsky are gratefully acknowledged. The author thanks the Swiss National Foundation for financial support.
no-problem/9911/hep-th9911006.html
ar5iv
text
# Soliton solutions of a gauged O(3) sigma model with interpolating potential ## I Introduction The O(3) nonlinear sigma model has long been the subject of intense research due to its theoretical and phenomenological importance. This theory describes classical (anti)ferromagnetic spin systems at their critical points in Euclidean space, while in the Minkowski one it delineates the long-wavelength limit of quantum antiferromagnets. The model exhibits solitons, Hopf instantons and novel spin and statistics in 2+1 space-time dimensions with the inclusion of the Chern–Simons term. The soliton solutions of the model exhibit scale invariance, which poses difficulty for the particle interpretation on quantization. A popular means of breaking this scale invariance is to gauge a U(1) subgroup of the O(3) symmetry of the model by coupling the sigma model fields with a gauge field through the corresponding U(1) current. <sup>1</sup><sup>1</sup>1This is different from the minimal coupling via the topological current discussed previously wil . This class of gauged O(3) sigma models in three dimensions has been studied over a long time sch ; gho ; lee ; muk1 ; muk2 ; mend ; land . Initially the gauge field dynamics was assumed to be dictated by the Maxwell term sch . Later the extension of the model with the Chern–Simons coupling was investigated gho . A particular form of self-interaction was required to be included in these models in order to saturate the Bogomol'nyi bounds bog . The form of the assumed self-interaction potential is of crucial importance. The minima of the potential determine the vacuum structure of the theory. The solutions change remarkably when the vacuum structure exhibits spontaneous breaking of the symmetry of the gauge group. Thus it was demonstrated that the observed degeneracy of the solutions of sch ; gho is lifted when potentials with symmetry-breaking minima were incorporated muk1 ; muk2 . The studies of the gauged O(3) sigma model are important due to their intrinsic interest and also due to the fact that the soliton solutions of the gauged O(3) Chern–Simons model may be relevant in planar condensed matter systems pani ; pani1 ; han . Recently, the gauged nonlinear sigma model was considered in order to obtain self-dual cosmic string solutions verb ; ham . This explains the continuing interest in such models in the literature sch ; gho ; lee ; muk1 ; muk2 ; mend ; land . A particular aspect of the gauged O(3) sigma models where the gauge field dynamics is governed by the Maxwell term can be identified by comparing the results of sch and muk2 . In sch the vacuum is symmetric and the $`n=1`$ soliton solution does not exist, $`n`$ being the topological charge. Here, solutions exist from $`n=2`$ onwards. Moreover, these soliton solutions have arbitrary magnetic flux. When a symmetry-breaking vacuum is achieved by choosing the potential appropriately muk2 , soliton solutions are obtained for $`n=1`$. <sup>2</sup><sup>2</sup>2The disappearance of the $`n=1`$ soliton has been shown to follow from a general analytical method in sch1 . These solutions have quantized magnetic flux and qualify as magnetic vortices. It will be interesting to follow the solutions from the symmetry-breaking to the symmetric phase. This is the motivation of the present paper. We will consider a generalisation of the models of sch and muk2 with an adjustable real parameter $`v`$ in the expression of the self-interaction potential, which interpolates between the symmetric and the symmetry-breaking vacua.
This will in particular allow us to investigate the soliton solutions in the entire regime of the symmetry-breaking vacuum structures and also to follow the collapse of the $`n=1`$ soliton as we move from the asymmetric to the symmetric phase. The solitons of the model are obtained as the solutions of the self-dual equations obtained by saturating the Bogomol'nyi bounds. Unfortunately, these equations fall outside the Liouville class even after assuming a rotationally symmetric ansatz. Thus exact analytical solutions are not obtainable and numerical methods are to be invoked. The organisation of the paper is as follows. In the following section we present a brief review of the O(3) nonlinear sigma model. This will be helpful in presenting our work in the proper context. In section 3 our model is introduced. The general topological classification of the soliton solutions of the model is discussed here. In section 4 the saturation of the self-dual limits is examined and the Bogomol'nyi equations are written down. Also, the analytical form of the Bogomol'nyi equations is worked out assuming a rotationally symmetric ansatz. These equations, even in the rotationally symmetric scenario, are not exactly integrable. A numerical solution has been performed to understand the details of the solutions. A fourth-order Runge–Kutta algorithm is adopted, with provision for tuning the potential appropriately. In section 5 the numerical method and some results are presented. We conclude in section 6. ## II O(3) nonlinear sigma models It will be useful to start with a brief review of the nonlinear O(3) sigma model bel . The Lagrangian of the model is given by, $$\mathcal{L}=\frac{1}{2}\partial _\mu \mathit{\varphi }\cdot \partial ^\mu \mathit{\varphi }$$ (1) Here $`\mathit{\varphi }`$ is a triplet of scalar fields constituting a vector in the internal space with unit norm $`\varphi _a=𝐧_a\cdot \mathit{\varphi },(a=1,2,3)`$ (2) $`\mathit{\varphi }\cdot \mathit{\varphi }=\varphi _a\varphi _a=1`$ (3) The vectors $`𝐧_a`$ constitute a basis of unit orthogonal vectors in the internal space. We work in Minkowskian space-time with the metric tensor diagonal, $`g_{\mu \nu }=\mathrm{diag}(1,-1,-1)`$. The finite energy solutions of the model (1) satisfy the boundary condition $$\underset{r\rightarrow \mathrm{\infty }}{\mathrm{lim}}\varphi ^a=\varphi _{(0)}^a$$ (4) at physical infinity. The condition (4) corresponds to a one-point compactification of the physical infinity. The physical space $`R_2`$ becomes topologically equivalent to $`S_2`$ due to this compactification. The static finite energy solutions of the model are then maps from this sphere to the internal sphere. Such solutions are classified by the homotopy dho $$\mathrm{\Pi }_2(S_2)=Z$$ (5) We can construct a current $$K_\mu =\frac{1}{8\pi }ϵ_{\mu \nu \lambda }\mathit{\varphi }\cdot (\partial ^\nu \mathit{\varphi }\times \partial ^\lambda \mathit{\varphi })$$ (6) which is conserved irrespective of the equation of motion. The corresponding charge $`T`$ $`=`$ $`{\displaystyle \int d^2𝐱K_0}`$ $`=`$ $`{\displaystyle \frac{1}{8\pi }}{\displaystyle \int d^2𝐱ϵ_{ij}\mathit{\varphi }\cdot (\partial ^i\mathit{\varphi }\times \partial ^j\mathit{\varphi })}`$ (7) gives the winding number of the mapping (5) raj . ## III Our model - topological classification of the soliton solutions In the class of gauged models of interest here, a U(1) subgroup of the rotation symmetry of the model (1) is gauged. We choose this to be the SO(2) \[U(1)\] subgroup of rotations about the 3-axis in the internal space.
The Lagrangian of our model is given by $$\mathcal{L}=\frac{1}{2}D_\mu \mathit{\varphi }\cdot D^\mu \mathit{\varphi }-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }-U(\mathit{\varphi })$$ (8) $`D_\mu \mathit{\varphi }`$ is the covariant derivative given by $$D_\mu \mathit{\varphi }=\partial _\mu \mathit{\varphi }+A_\mu 𝐧_3\times \mathit{\varphi }$$ (9) The SO(2) \[U(1)\] subgroup is gauged by the vector potential $`A_\mu `$, whose dynamics is dictated by the Maxwell term. Here $`F_{\mu \nu }`$ is the electromagnetic field tensor, $$F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu $$ (10) and $`U(\mathit{\varphi })`$ is the self-interaction potential required for saturating the self-dual limits. We choose $$U(\mathit{\varphi })=\frac{1}{2}(v-\varphi _3)^2$$ (11) where $`v`$ is a real parameter. Substituting $`v=0`$ we get back the model of muk2 , whereas $`v=1`$ gives the model of sch . We observe that the minima of the potential arise when, $$\varphi _3=v$$ (12) which is equivalent to the condition $$\varphi _1^2+\varphi _2^2=1-v^2$$ (13) on account of the constraint (3). The values of $`v`$ must be restricted to $$\left|v\right|\le 1$$ (14) The condition (13) denotes a latitudinal circle (i.e. a circle of fixed latitude) on the unit sphere in the internal space. By varying $`v`$ from $`-1`$ to $`+1`$ we span the sphere from the south pole to the north pole. It is clear that the finite energy solutions of the model must satisfy (13) at physical infinity. For $`v\ne 1`$ this boundary condition corresponds to the spontaneous breaking of the symmetry of the gauge group, and in the limit $$\left|v\right|\rightarrow 1$$ (15) the asymmetric phase changes to the symmetric phase. We call the potential (11) interpolating in this sense. In the asymmetric phase the soliton solutions are classified according to the homotopy $$\mathrm{\Pi }_1(S_1)=Z$$ (16) instead of (5). In the symmetric phase, however, this new topology disappears and the solitons are classified according to (5), as in the usual sigma model (1). A remarkable fallout of this change of topology is the disappearance of the soliton with unit charge. The fundamental solitonic mode $`n=1`$ ($`n`$ being the vorticity) ceases to exist in the symmetric phase. The modes corresponding to $`n=2`$ onwards still persist, but the magnetic flux associated with them ceases to remain quantised. In the asymmetric phase the vorticity is the winding number, i.e. the number of times the infinite physical circle winds over the latitudinal circle (13). Associated with this is a unique mapping of the internal sphere, where the degree of the mapping is usually fractional. By inspection we construct a current $$K_\mu =\frac{1}{8\pi }ϵ_{\mu \nu \lambda }[\mathit{\varphi }\cdot D^\nu \mathit{\varphi }\times D^\lambda \mathit{\varphi }-F^{\nu \lambda }(v-\varphi _3)]$$ (17) generalising the topological current (6). The current (17) is manifestly gauge invariant and differs from (6) by the curl of a vector field. The conservation principle $$\partial _\mu K^\mu =0$$ (18) thus automatically follows from the conservation of (6). The corresponding conserved charge is $$T=\int d^2xK_0$$ (19) Using (17) and (19) we can write $`T`$ $`=`$ $`{\displaystyle \int d^2x[\frac{1}{8\pi }ϵ_{ij}\mathit{\varphi }\cdot (\partial ^i\mathit{\varphi }\times \partial ^j\mathit{\varphi })]}`$ $`-`$ $`{\displaystyle \frac{1}{4\pi }}{\displaystyle \oint _{boundary}}(v-\varphi _3)A_\theta rd\theta `$ (20) where $`r`$, $`\theta `$ are polar coordinates in the physical space and $`A_\theta =𝐞_\theta \cdot 𝐀`$. Using the boundary condition (12) we find that $`T`$ is equal to the degree of the mapping of the internal sphere.
Note that this situation is different from lee , where the topological charge usually differs from the degree of the mapping. In this context it is interesting to observe that the current (17) is not unique, because we can always add an arbitrary multiple of $$\frac{1}{8\pi }ϵ_{\mu \nu \lambda }F^{\nu \lambda }$$ to it without affecting its conservation. We choose (17) because it generates the proper topological charge. ## IV Self dual equations in the rotationally symmetric ansatz In the previous section we have discussed the general topological classification of the solutions of the equations of motion following from (8). In the present section we will discuss the solution of the equations of motion. The Euler-Lagrange equations of the system (8) are derived subject to the constraint (3) by the Lagrange multiplier technique $`D_\nu (D^\nu \mathit{\varphi })`$ $`=`$ $`[D_\nu (D^\nu \mathit{\varphi })\cdot \mathit{\varphi }]\mathit{\varphi }+𝐧_3(v-\varphi _3)`$ $`-`$ $`(v-\varphi _3)\varphi _3\mathit{\varphi }`$ (21) $`\partial _\nu F^{\nu \mu }=j^\mu `$ (22) where $$j^\mu =𝐧_3\cdot 𝐉^\mu \mathrm{and}𝐉^\mu =\mathit{\varphi }\times D^\mu \mathit{\varphi }$$ (23) Using (21) we get $$D_\mu 𝐉^\mu =(v-\varphi _3)(𝐧_3\times \mathit{\varphi })\varphi _3$$ (24) From (22) we find, for static configurations, $$\nabla ^2A^0=A^0(1-\varphi _3^2)$$ (25) From the last equation it is evident that we can choose $$A^0=0$$ (26) As a consequence we find that the excitations of the model are electrically neutral. The equations (21) and (22) are second-order differential equations in time. As is well known, first-order equations which are solutions of the equations of motion can be derived by minimizing the energy functional in the static limit. Keeping this goal in mind we now construct the energy functional from the symmetric energy-momentum tensor following from (8). The energy is $$E=\frac{1}{2}\int d^2𝐱\left[D_0\mathit{\varphi }\cdot D_0\mathit{\varphi }-D_i\mathit{\varphi }\cdot D^i\mathit{\varphi }+(v-\varphi _3)^2-2(F_0^\sigma F_{0\sigma }-\frac{1}{4}F_{\rho \sigma }F^{\rho \sigma })\right].$$ (27) For static configurations and the choice $`A^0=0`$, $`E`$ becomes $$E=\frac{1}{2}\int d^2x[(D_i\mathit{\varphi })\cdot (D_i\mathit{\varphi })+F_{12}^2+(v-\varphi _3)^2]$$ (28) Several observations about the finite energy solutions can be made at this stage from (28). By defining $$\psi =\varphi _1+i\varphi _2$$ (29) we get $$D_i\mathit{\varphi }\cdot D_i\mathit{\varphi }=|(\partial _i+iA_i)\psi |^2+(\partial _i\varphi _3)^2$$ (30) The boundary condition (13) dictates that $$\psi \rightarrow (1-v^2)^{\frac{1}{2}}e^{in\theta }$$ (31) at infinity. From (28) we observe that for finite energy configurations we require $$𝐀\rightarrow -𝐞_\theta \frac{n}{r}$$ (32) on the boundary. This scenario is exactly identical with the observations of muk2 and leads to the quantisation of the magnetic flux $$\mathrm{\Phi }=\int Bd^2x=\oint _{boundary}A_\theta rd\theta =-2\pi n$$ (33) The basic mechanism leading to this quantisation remains operative so long as $`v`$ is less than 1. At $`v=1`$, however, the gauge field $`𝐀`$ becomes arbitrary on the boundary except for the requirement that the magnetic field $`B`$ should vanish on the boundary. Remember that not all the vortices present in the broken phase survive this demand. Specifically, the $`n=1`$ vortex becomes inadmissible. Now the search for the self-dual conditions proceeds in the usual way.
We rearrange the energy functional as $$E=\frac{1}{2}\int d^2x[\frac{1}{2}(D_i\mathit{\varphi }\pm ϵ_{ij}\mathit{\varphi }\times D_j\mathit{\varphi })^2+(F_{12}\mp (v-\varphi _3))^2]\pm 4\pi T$$ (34) Equation (34) gives the Bogomol'nyi conditions $`D_i\mathit{\varphi }\pm ϵ_{ij}\mathit{\varphi }\times D_j\mathit{\varphi }=0`$ (35) $`F_{12}\mp (v-\varphi _3)=0`$ (36) which minimize the energy functional in a particular topological sector; the upper sign corresponds to a positive and the lower sign to a negative value of the topological charge. We will now turn towards the analysis of the self-dual equations using the rotationally symmetric ansatz wu $`\varphi _1(r,\theta )=\mathrm{sin}g(r)\mathrm{cos}n\theta `$ $`\varphi _2(r,\theta )=\mathrm{sin}g(r)\mathrm{sin}n\theta `$ $`\varphi _3(r,\theta )=\mathrm{cos}g(r)`$ $`𝐀(r,\theta )=𝐞_\theta {\displaystyle \frac{na(r)}{r}}`$ (37) From (12) we observe that we require the boundary condition $$g(r)\rightarrow \mathrm{cos}^{-1}v\mathrm{as}r\rightarrow \mathrm{\infty }$$ (38) and equation (32) dictates that $$a(r)\rightarrow -1\mathrm{as}r\rightarrow \mathrm{\infty }$$ (39) Remember that equation (32) was obtained so that the solutions have finite energy. Again, for the fields to be well defined at the origin we require $$g(r)\rightarrow 0\mathrm{or}\pi \mathrm{and}a(r)\rightarrow 0\mathrm{as}r\rightarrow 0$$ (40) Substituting the ansatz (37) into (35) and (36) we find that $`g^{}(r)=\pm {\displaystyle \frac{n(a+1)}{r}}\mathrm{sin}g,`$ (41) $`a^{}(r)=\pm {\displaystyle \frac{r}{n}}(v-\mathrm{cos}g)`$ (42) where the upper sign holds for positive $`T`$ and the lower sign corresponds to negative $`T`$. Equations (41) and (42) are not exactly integrable. In the following section we will discuss the numerical solution of the boundary value problem defined by (41) and (42) with (38) to (40). Using the ansatz (37) we can explicitly compute the topological charge $`T`$ by performing the integration in (19). The result is $$T=-\frac{n}{2}[\mathrm{cos}g(\mathrm{\infty })-\mathrm{cos}g(0)]-\frac{1}{2}[v-\mathrm{cos}g(\mathrm{\infty })]$$ (43) The second term of (43) vanishes due to the boundary condition (38). Also, when $`g(0)=0`$, $$T=\frac{n}{2}(1-v)$$ (44) and, when $`g(0)=\pi `$, $$T=-\frac{n}{2}(1+v)$$ (45) It is evident that $`T`$ is in general fractional. Due to (20) it is equal to the degree of mapping of the internal sphere. This can also be checked explicitly. From the above analysis we find that $`g(0)=0`$ corresponds to positive $`T`$ and $`g(0)=\pi `$ corresponds to negative $`T`$. We shall restrict our attention to negative $`T`$, which will be useful for comparison of results with those available in the literature. The boundary value problem of interest is then $`g^{}(r)=-{\displaystyle \frac{n(a+1)}{r}}\mathrm{sin}g`$ (46) $`a^{}(r)=-{\displaystyle \frac{r}{n}}(v-\mathrm{cos}g)`$ (47) with $`g(0)=\pi ,a(0)=0`$ $`g(\mathrm{\infty })=\mathrm{cos}^{-1}v,a(\mathrm{\infty })=-1`$ (48) In addition we require $`a^{}(r)\rightarrow 0`$ as $`r\rightarrow \mathrm{\infty }`$. This condition follows from (46), (47) and (48) and should be considered as a consistency condition to be satisfied by their solutions. ## V Numerical solution The simultaneous equations (46) and (47) subject to the boundary conditions (48) are not amenable to exact solution. They can however be integrated numerically. We have already mentioned the quenching of the $`n=1`$ solution in the limit $`v\rightarrow 1`$. This is connected with the transition from the symmetry-breaking to the symmetric phase. The numerical solution is thus interesting because it will enable us to see how the solutions change as we follow them from the deep asymmetric phase $`(v=0)`$ to the symmetric phase $`(v=1)`$.
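Before describing the actual computation, the structure of the shooting problem can be sketched schematically. The integrator below seeds the solution at small $`r`$ with the leading behaviour $`g\approx \pi +Ar^n`$, $`a\approx -(1+v)r^2/2n`$ (derived as Eqs. (49)-(50) below) and bisects on $`A`$; the bracketing interval, grid and cutoff radius are chosen arbitrarily for illustration and are not taken from the paper:

```python
import numpy as np

def rhs(r, y, n, v):                 # Eqs. (46)-(47), negative-T branch
    g, a = y
    return np.array([-n * (a + 1.0) / r * np.sin(g),
                     -(r / n) * (v - np.cos(g))])

def shoot(A, n, v, r0=1e-3, r1=20.0, steps=8000):
    # seed with the small-r behaviour g ≈ π + A r^n, a ≈ -(1+v) r²/(2n)
    y = np.array([np.pi + A * r0**n, -(1.0 + v) * r0**2 / (2 * n)])
    r, dr = r0, (r1 - r0) / steps
    for _ in range(steps):           # classical fourth-order Runge-Kutta
        k1 = rhs(r, y, n, v)
        k2 = rhs(r + dr / 2, y + dr / 2 * k1, n, v)
        k3 = rhs(r + dr / 2, y + dr / 2 * k2, n, v)
        k4 = rhs(r + dr, y + dr * k3, n, v)
        y = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return y                         # (g, a) at the cutoff radius r1

n, v = 2, 0.4
lo, hi = -2.0, 0.0                   # assumed bracket for A_crit (A < 0 here)
for _ in range(40):                  # bisect on the overshoot/undershoot criterion
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid, n, v)[0] < np.arccos(v) else (lo, mid)
print("A_crit ≈", 0.5 * (lo + hi))
```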
In the following we provide the results of the numerical solution to highlight these issues. Let us note some details of the numerical method. A fourth-order Runge–Kutta method was employed. The point $`r=0`$ is a regular singular point of the equations, so it was not possible to start the code from $`r=0`$. Instead, we start it from a small value of $`r`$. The behaviour of the functions near $`r=0`$ can be easily derived from (46) and (47): $$g(r)\approx \pi +Ar^n$$ (49) $$a(r)\approx -\frac{r^2}{2n}(1+v)$$ (50) Here $`A`$ is an arbitrary constant which fixes the values of $`g`$ and $`a`$ at infinity. In the symmetry-breaking phase the numerical solution depends sensitively on the value of $`A`$. <sup>3</sup><sup>3</sup>3This should be contrasted with the symmetric vacuum solution, where $`A`$ may have arbitrary values. There is a critical value of $`A`$, $`A=A^{crit}`$, for which the boundary conditions are satisfied. If the value of $`A`$ is larger than $`A^{crit}`$ the conditions at infinity are overshot, whereas if the value is smaller than the critical value $`g(r)`$ vanishes asymptotically after reaching a maximum. The situation is comparable with similar findings elsewhere jac . The values of $`g`$ and $`a`$ were calculated at a small value of $`r`$ using (49) and (50). The parameter $`A`$ was tuned to match the boundary conditions at the other end. Interestingly, this matching is not obtainable when $`n=1`$ and $`v=1`$. This is consistent with the quenching of the $`n=1`$ mode in the symmetric vacuum situation. After this brief discussion of the numerical technique we present a summary of the results. As may be recalled, the purpose of the paper is to study the solutions throughout the asymmetric phase with an eye on the disappearance of the $`n=1`$ mode. Accordingly, profiles of $`g`$ and $`a`$ will be given for $`n=1`$, for different values of $`v`$. In figures 1 and 2 these profiles are shown for $`v=0,.2,.4,.6,.8`$. The corresponding magnetic field distributions are given in figure 3. Another interesting issue is the change of the matter and gauge profiles with the topological charge. In figures 4 and 5 this is demonstrated for different $`n`$ values at a constant $`v`$. ## VI Conclusion The O(3) sigma model in (2+1)-dimensional space-time with its U(1) subgroup gauged was mooted sch as a possible mechanism to break the scale invariance of the solutions of the original 3-dimensional O(3) sigma model. The model finds possible applications in areas as diverse as planar condensed matter physics pani ; pani1 ; han and gravitating cosmic strings verb ; han , and as such is being continuously explored in the literature sch ; gho ; sch1 ; lee ; muk1 ; muk2 ; mend ; land . An interesting aspect of the gauged O(3) sigma models is the qualitative change of the soliton modes between the symmetric and symmetry-breaking vacuum scenarios, as can be appreciated by a comparison of the solutions given in sch ; muk2 . In this paper we have considered a gauged O(3) sigma model with the gauge field dynamics determined by the Maxwell term, as in sch ; muk2 . An interpolating potential was included to investigate the solutions in the entire symmetry-breaking regime. This potential depends on a free parameter, the variation of which effects the transition from the asymmetric to the symmetric phase. We have discussed the transition of the associated topology of the soliton solutions. The Bogomol'nyi bound was saturated to give the self-dual solutions of the equations of motion. The self-dual equations are, however, not exactly solvable.
They were studied numerically to trace out the solutions in the entire asymmetric phase, with particular emphasis on the $`n=1`$ mode. Our analysis may be interesting from the point of view of applications, particularly in condensed matter physics. ## VII Acknowledgement The author would like to thank Muktish Acharyya for his assistance with the numerical solution.
no-problem/9911/hep-th9911198.html
ar5iv
text
# Bloch–Wilson Hamiltonian and a Generalization of the Gell-Mann–Low Theorem<sup>1</sup><sup>1</sup>1This work was supported in part by Conacyt grant 3298P–E9608 and the Coordinación de la Investigación Científica of the Universidad Michoacana de San Nicolás de Hidalgo. ## Abstract The effective Hamiltonian introduced many years ago by Bloch and generalized later by Wilson appears to be the ideal starting point for Hamiltonian perturbation theory in quantum field theory. The present contribution derives the Bloch–Wilson Hamiltonian from a generalization of the Gell-Mann–Low theorem, thereby enabling a diagrammatic analysis of Hamiltonian perturbation theory in this approach. The presently available techniques for calculations in quantum field theory reflect the dominance of scattering processes in the experimental exploration of the physics of elementary particles. The single most important technique is beyond doubt Lagrangian perturbation theory, the explicit covariance of which has historically played an important rôle in the implementation of the renormalization program. This in turn was the crucial ingredient for converting the formal expressions of Lagrangian perturbation theory into predictions for measurable quantities. On the other hand, the identification of physical states, defined as eigenstates of the Hamiltonian, and of the Hilbert space they span, becomes a complicated task in this approach, which is exemplified by the serious problems arising in the solution of the Bethe–Salpeter equation. In short, Lagrangian perturbation theory is primarily a theory of processes as opposed to a theory of states. This contribution is concerned with the development of a theory of states, establishing efficient techniques for Hamiltonian perturbation theory. Apart from the possibility of gaining a new perspective on the foundations of quantum field theory, this approach appears to be natural for the description of hadronic structure and of bound-state phenomena in general. In a very general setting, consider the problem of solving the Schrödinger equation $$H|\psi \rangle =E|\psi \rangle $$ (1) for the state $`|\psi \rangle `$. The Hamiltonian is supposed to be decomposable into a "free" and an "interacting" part, $`H=H_0+H_I`$, where the eigenstates of $`H_0`$ are explicitly known and span the full Hilbert space (or Fock space) $`\mathcal{H}`$, which we picture as a direct sum of free $`n`$-particle subspaces $`(n\ge 0)`$. The eigenstates of $`H`$ are expected to be representable as (infinite) linear combinations of the eigenstates of $`H_0`$, hence the Schrödinger equation (1) can be written in a Fock space basis, where in general an infinite number of $`n`$-particle subspaces are involved. The problem in this generality is obviously too difficult to be solved in practice. Restricting attention momentarily to the vacuum state, the Gell-Mann–Low theorem axel:GL states that the free (Fock space) vacuum evolves dynamically into the physical vacuum as $`H_0`$ turns adiabatically into $`H`$. Explicit expressions can then be given for the physical vacuum state and its energy in terms of the free $`n`$-particle states and their energies, in the form of a perturbative series. It is natural to ask whether it is possible to generalize the theorem to the case where the perturbative vacuum is replaced by a linear subspace $`\mathrm{\Omega }`$ of $`\mathcal{H}`$ consisting of eigenspaces of $`H_0`$, i.e. $`H_0\mathrm{\Omega }\subseteq \mathrm{\Omega }`$, the simplest non-trivial example being the free two-particle subspace of $`\mathcal{H}`$.
When the interaction $`H_I`$ is switched on adiabatically, one may expect that $`\mathrm{\Omega }`$ evolves into the subspace of interacting physical two–particle states, where now different eigenstates of $`H_0`$ are allowed to mix during the adiabatic process. If this expectation comes true, the determination of the physical two–particle states may be reduced to a problem within the free two–particle subspace, thus dramatically reducing the number of degrees of freedom to be considered and converting the problem into an (at least numerically) solvable one. Couched in mathematical jargon, what one is looking for is a map $`U_{BW}`$ from $`\mathrm{\Omega }`$ to a direct sum of eigenspaces of $`H`$, i.e. $`HU_{BW}\mathrm{\Omega }\subseteq U_{BW}\mathrm{\Omega }`$, where $`U_{BW}`$ is expected to be related to the adiabatic evolution operator. One would then hope that $`U_{BW}`$ induces a similarity transformation, so that the problem of diagonalizing $`H`$ in $`U_{BW}\mathrm{\Omega }`$ is equivalent to diagonalizing $`H_{BW}:=U_{BW}^{-1}HU_{BW}:\mathrm{\Omega }\rightarrow \mathrm{\Omega }`$, which in the example above is equivalent to a relativistic two–particle Schrödinger equation. The simplest (but not unique) choice for $`U_{BW}^{-1}:U_{BW}\mathrm{\Omega }\rightarrow \mathrm{\Omega }`$ is the orthogonal projector $`P`$ onto $`\mathrm{\Omega }`$,<sup>2</sup><sup>2</sup>2That the choice of $`P`$ for the similarity transformation is not an unreasonable simplification is suggested by phenomenology: even in the highly non–perturbative situation of low–energy QCD the physical hadrons can be associated with a specific content of constituent quarks (and thus with an element of the free two– or three–particle subspace). hence we will look for an operator $`U_{BW}`$ in $`\mathrm{\Omega }`$ with

$$PU_{BW}=P=\mathrm{𝟏}_\mathrm{\Omega }.$$ (2)

Eq. (2) implies in turn the injectivity of $`U_{BW}`$, hence also $`U_{BW}P=\mathrm{𝟏}`$ in $`U_{BW}\mathrm{\Omega }`$. Together with $`HU_{BW}\mathrm{\Omega }\subseteq U_{BW}\mathrm{\Omega }`$ one then has that

$$(\mathrm{𝟏}-U_{BW}P)HU_{BW}=0\quad \text{in }\mathrm{\Omega }.$$ (3)

Eqs. (2) and (3) together in fact characterize $`U_{BW}`$: (3) implies $`HU_{BW}\mathrm{\Omega }=U_{BW}(PHU_{BW}\mathrm{\Omega })\subseteq U_{BW}\mathrm{\Omega }`$. Consequently, $`H|_{U_{BW}\mathrm{\Omega }}`$ is diagonalizable, and by (2) it is a similarity transform of $`H_{BW}`$. Remarkably, Eqs. (2) and (3) also determine $`U_{BW}`$ uniquely, at least within the perturbative regime. To see this, rewrite (3) as

$$H_IU_{BW}-U_{BW}PH_IU_{BW}=U_{BW}PH_0U_{BW}-H_0U_{BW}=U_{BW}H_0-H_0U_{BW},$$ (4)

where I have used $`PH_0U_{BW}=H_0PU_{BW}=H_0`$. Now consider the matrix element of (4) between $`\langle u|`$ and $`|k\rangle `$, where $`|k\rangle \in \mathrm{\Omega }`$ and $`|u\rangle \in \mathrm{\Omega }^{\perp }`$ (the orthogonal complement of $`\mathrm{\Omega }`$ in $`\mathcal{H}`$) are eigenstates of $`H_0`$ with eigenvalues $`E_k`$ and $`E_u`$, respectively,

$$\langle u|H_IU_{BW}-U_{BW}PH_IU_{BW}|k\rangle =(E_k-E_u)\langle u|U_{BW}|k\rangle .$$ (5)

It then follows that

$$U_{BW}=P+(\mathrm{𝟏}-P)U_{BW}P=P+\int _\mathrm{\Omega }dk\int _{\mathrm{\Omega }^{\perp }}du\,|u\rangle \langle u|U_{BW}|k\rangle \langle k|=P+\int _\mathrm{\Omega }dk\int _{\mathrm{\Omega }^{\perp }}du\,|u\rangle \frac{\langle u|H_IU_{BW}-U_{BW}PH_IU_{BW}|k\rangle }{E_k-E_u}\langle k|,$$ (6)

where I have taken $`k`$ and $`u`$ to label the eigenstates of $`H_0`$ in $`\mathrm{\Omega }`$ and $`\mathrm{\Omega }^{\perp }`$, respectively. Eq. (6) can be solved iteratively to obtain $`U_{BW}`$ as a power series in $`H_I`$.
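Since the paper gives no numerical example, here is a minimal sketch (all matrices and parameters are invented for illustration) of how the fixed-point iteration of Eq. (6) can be carried out on a finite-dimensional toy model, together with a check of the defining properties (2) and (3):

```python
import numpy as np

# Toy model (illustrative only, not from the paper): a 6-dimensional "Fock
# space" with a diagonal free Hamiltonian H0 and a perturbation H_I = g*V.
rng = np.random.default_rng(0)
n, d, g = 6, 2, 0.1                            # total dim, dim(Omega), coupling
E = np.array([0.0, 0.3, 1.1, 1.7, 2.4, 3.0])   # nondegenerate H0 spectrum
H0 = np.diag(E)
V = rng.normal(size=(n, n)); V = (V + V.T) / 2
HI = g * V
H = H0 + HI

P = np.zeros((n, n)); P[:d, :d] = np.eye(d)    # projector onto Omega

# Fixed-point iteration of Eq. (6):
# U = P + sum_{k in Omega, u in Omega-perp}
#         |u> <u| H_I U - U P H_I U |k> / (E_k - E_u) <k|
U = P.copy()
for _ in range(500):
    R = HI @ U - U @ (P @ HI @ U)
    Unew = P.copy()
    for k in range(d):
        for u in range(d, n):
            Unew[u, k] = R[u, k] / (E[k] - E[u])
    if np.max(np.abs(Unew - U)) < 1e-13:
        U = Unew
        break
    U = Unew

print(np.allclose((P @ U)[:d, :d], np.eye(d)))                 # Eq. (2) holds
print(np.max(np.abs(((np.eye(n) - U @ P) @ H @ U)[:, :d])))    # Eq. (3), ~1e-13

# H_BW = P H U_BW acts in Omega only; its eigenvalues agree with the two
# exact eigenvalues of H that evolve out of Omega (here the two lowest).
HBW = (P @ H @ U)[:d, :d]
print(np.sort(np.linalg.eigvals(HBW).real))
print(np.sort(np.linalg.eigvalsh(H))[:d])
```

The similarity structure announced above is visible in the last two lines: $`H_{BW}`$ is a small non-Hermitian matrix acting in $`\mathrm{\Omega }`$ only, yet at this (weak) coupling it reproduces two eigenvalues of the full $`H`$ essentially to machine precision.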
It should be emphasized, however, that the individual terms in the series are not guaranteed to give convergent expressions (let alone the series as a whole). This depends, among other things, on the choice of $`\mathrm{\Omega }`$. Eqs. (2) and (3) have been used for the characterization of $`U_{BW}`$ before, first by Bloch in the context of degenerate quantum mechanical perturbation theory, and later by Wilson for the formulation of a non–perturbative renormalization group in Minkowski space. In practical applications, one will calculate $`U_{BW}`$ to a certain order in the iterative expansion of (6) and solve the Schrödinger equation for the corresponding Hamiltonian $`H_{BW}=PHU_{BW}`$. Its solution yields an approximation to the eigenvalues of $`H|_{U_{BW}\mathrm{\Omega }}`$ (the eigenvalues are invariant under similarity transformations) and also to the eigenstates via $`|\psi \rangle =U_{BW}|\varphi \rangle `$, where $`|\varphi \rangle `$ are the eigenstates of $`H_{BW}`$. The solutions will in general also include bound states (e.g., if $`\mathrm{\Omega }`$ is the free two–particle subspace), in contrast to Lagrangian perturbation theory. The reason for this difference is that although in the present formalism $`H_{BW}`$ is determined perturbatively, the corresponding Schrödinger equation can be solved exactly (at least to arbitrary precision with numerical methods). This is somewhat analogous to the Bethe–Salpeter equation, but avoids the conceptual problems associated with the latter. In this context, it is worth mentioning that the normalizability of the free two–particle component $`|\varphi \rangle =P|\psi \rangle `$ gives a natural criterion for the “boundedness” of the state $`|\psi \rangle `$, although the latter may not be normalizable in the Hilbert space sense. The formulation presented so far has two important shortcomings: first, the terms in the perturbative series following from (6) are not well–defined in the case of vanishing energy denominators, and a consistent prescription is at least not obvious from (3) or (6). Second, it is not a priori clear how to translate the terms in the perturbative series into diagrams. A diagrammatic formulation, however, is expected to be at least helpful, if not imperative, for the investigation of such important properties as renormalizability and Lorentz and gauge invariance at finite orders of the expansion, as well as for practical applications of the formalism. In search of an alternative characterization of $`U_{BW}`$, I will now return to the idea of the adiabatic evolution. Consider the adiabatic evolution operator from $`t=-\infty `$ to $`t=0`$,

$$U_ϵ=T\mathrm{exp}\left(-i\int _{-\infty }^0dt\,e^{-ϵ|t|}H_I(t)\right)=\sum _{n=0}^{\infty }\frac{(-i)^n}{n!}\int _{-\infty }^0dt_1\cdots \int _{-\infty }^0dt_n\,e^{-ϵ(|t_1|+\cdots +|t_n|)}\,T[H_I(t_1)\cdots H_I(t_n)],$$ (7)

where

$$H_I(t)=e^{iH_0t}H_Ie^{-iH_0t}$$ (8)

is the usual expression in the interaction picture, and $`T`$ stands for time ordering with times decreasing from left to right. Then the following theorem holds:

Generalized Gell-Mann–Low Theorem. With the notations introduced before, suppose that the operator $`U_{BW}:=\underset{ϵ\rightarrow 0}{lim}U_ϵ(PU_ϵP)^{-1}`$ exists in $`\mathrm{\Omega }`$. Then it has the properties $`PU_{BW}=P`$ and $`(\mathrm{𝟏}-U_{BW}P)HU_{BW}=0`$ in $`\mathrm{\Omega }`$.

Remarks. We thus have an explicit expression for $`U_{BW}`$ in terms of the adiabatic evolution operator.
Given that $`PU_ϵP`$ is always formally invertible as a power series in $`H_I`$, the implications of the theorem rest on the existence of the limit $`ϵ\rightarrow 0`$ of $`U_ϵ(PU_ϵP)^{-1}`$, which in turn depends on the choice of $`\mathrm{\Omega }`$.

Proof. The property $`PU_{BW}=P`$ follows directly from the definition of $`U_{BW}`$. The first part of the proof of $`(\mathrm{𝟏}-U_{BW}P)HU_{BW}=0`$ is identical to the original Gell-Mann–Low proof and will not be reproduced here. It establishes by manipulation of the series (7) for $`U_ϵ`$ that (before taking the limit $`ϵ\rightarrow 0`$)

$$HU_ϵ=U_ϵH_0+iϵg\frac{\partial }{\partial g}U_ϵ,$$ (9)

where $`H_I`$ is assumed to be proportional to some “coupling constant” $`g`$. Now choose any $`|\varphi \rangle \in \mathrm{\Omega }`$. Eq. (9) implies

$$HU_ϵ(PU_ϵP)^{-1}|\varphi \rangle =U_ϵH_0(PU_ϵP)^{-1}|\varphi \rangle +iϵ\left(g\frac{\partial }{\partial g}U_ϵ\right)(PU_ϵP)^{-1}|\varphi \rangle .$$ (10)

It follows that

$$HU_ϵ(PU_ϵP)^{-1}|\varphi \rangle -iϵg\frac{\partial }{\partial g}\left(U_ϵ(PU_ϵP)^{-1}\right)|\varphi \rangle =U_ϵH_0(PU_ϵP)^{-1}|\varphi \rangle +iϵU_ϵ(PU_ϵP)^{-1}\left(Pg\frac{\partial }{\partial g}U_ϵ\right)(PU_ϵP)^{-1}|\varphi \rangle $$ (11)

$$=U_ϵ(PU_ϵP)^{-1}PHU_ϵ(PU_ϵP)^{-1}|\varphi \rangle ,$$ (12)

where in going from (11) to (12) Eq. (10) has been used again, multiplied by $`U_ϵ(PU_ϵP)^{-1}P`$ from the left, and $`P`$ has been inserted to the left of $`H_0`$, which is possible due to $`H_0\mathrm{\Omega }\subseteq \mathrm{\Omega }`$. Taking the limit $`ϵ\rightarrow 0`$, we have $`HU_{BW}|\varphi \rangle =U_{BW}PHU_{BW}|\varphi \rangle `$, which proves the theorem. In taking the limit, the existence of the $`g`$–derivative of $`U_{BW}`$ in $`\mathrm{\Omega }`$ has been assumed. Incidentally, this assumption implies that the expression $`U_ϵ(g\partial /\partial g)(PU_ϵP)^{-1}|\varphi \rangle `$ is in general divergent in the limit $`ϵ\rightarrow 0`$, since $`HU_ϵ(PU_ϵP)^{-1}|\varphi \rangle `$ cannot be expected to be equal to $`U_ϵH_0(PU_ϵP)^{-1}|\varphi \rangle `$ in this limit. The theorem corroborates the expectation detailed at the beginning of this contribution. More importantly, the adiabatic formulation also has the benefit of fixing an $`iϵ`$–prescription for the energy denominators appearing in the series generated by (6). Performing the time integrations in $`U_ϵ(PU_ϵP)^{-1}`$ yields explicitly, to second order in $`H_I`$,

$$U_{BW}=\int _\mathrm{\Omega }dk\,|k\rangle \langle k|+\int _\mathrm{\Omega }dk\int _{\mathrm{\Omega }^{\perp }}du\,|u\rangle \frac{\langle u|H_I|k\rangle }{E_k-E_u+iϵ}\langle k|-\int _\mathrm{\Omega }dk\,dk^{\prime }\int _{\mathrm{\Omega }^{\perp }}du\,|u\rangle \frac{\langle u|H_I|k^{\prime }\rangle \langle k^{\prime }|H_I|k\rangle }{(E_k-E_u+2iϵ)(E_{k^{\prime }}-E_u+iϵ)}\langle k|+\int _\mathrm{\Omega }dk\int _{\mathrm{\Omega }^{\perp }}du\,du^{\prime }\,|u\rangle \frac{\langle u|H_I|u^{\prime }\rangle \langle u^{\prime }|H_I|k\rangle }{(E_k-E_u+2iϵ)(E_k-E_{u^{\prime }}+iϵ)}\langle k|+\cdots ,$$ (13)

where the limit $`ϵ\rightarrow 0`$ is understood. The same expression without the $`iϵ`$–prescription follows from iterating (6). The second important advantage of the formulation in terms of $`U_ϵ`$ is its ready translation into diagrams. The diagrams associated with the perturbative expansion of $`H_{BW}`$ turn out to be similar to Goldstone or time–ordered diagrams, but unlike the latter they do not combine into a set of Feynman diagrams. This is essentially due to the fact that the matrix elements of the effective Hamiltonian $`\langle k|HU_{BW}|k^{\prime }\rangle `$ in general do not vanish if the energies $`E_k`$ and $`E_{k^{\prime }}`$ are different.
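The statement that (13) without the $`iϵ`$'s follows from iterating (6) is easy to check numerically. The sketch below is our own toy calculation (all numbers invented; the spectrum has no vanishing denominators, so the $`iϵ`$'s can simply be dropped):

```python
import numpy as np

# Same kind of toy model as before (all numbers invented for illustration).
rng = np.random.default_rng(1)
n, d, g = 6, 2, 0.05
E = np.array([0.0, 0.3, 1.1, 1.7, 2.4, 3.0])
V = rng.normal(size=(n, n)); V = (V + V.T) / 2
HI = g * V
P = np.zeros((n, n)); P[:d, :d] = np.eye(d)

# Two fixed-point iterations of Eq. (6), starting from U = P.
U = P.copy()
for _ in range(2):
    R = HI @ U - U @ (P @ HI @ U)
    Unew = P.copy()
    for k in range(d):
        for u in range(d, n):
            Unew[u, k] = R[u, k] / (E[k] - E[u])
    U = Unew

# The explicit second-order expression (13), with epsilon -> 0.
U13 = P.copy()
for k in range(d):
    for u in range(d, n):
        U13[u, k] += HI[u, k] / (E[k] - E[u])
        for kp in range(d):        # intermediate state in Omega
            U13[u, k] -= HI[u, kp] * HI[kp, k] / ((E[k] - E[u]) * (E[kp] - E[u]))
        for up in range(d, n):     # intermediate state in Omega-perp
            U13[u, k] += HI[u, up] * HI[up, k] / ((E[k] - E[u]) * (E[k] - E[up]))

print(np.max(np.abs(U - U13)))     # small: the two differ only at O(H_I^3)
```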
no-problem/9911/astro-ph9911174.html
ar5iv
text
# IS AMINO-ACID HOMOCHIRALITY DUE TO ASYMMETRIC PHOTOLYSIS IN SPACE?

C. CERF, PhD in Biochemistry, Département de Mathématique, CP 216, and A. JORISSEN<sup>1</sup><sup>1</sup>1Author to whom all correspondence should be addressed, PhD in Astrophysics, Institut d’Astronomie et d’Astrophysique, CP 226, Université Libre de Bruxelles, Boulevard du Triomphe, B-1050 Bruxelles, Belgium. E-mail: ajorisse@astro.ulb.ac.be

Figure 1: a. L-amino acid; b. D-amino acid

Abstract. It is well known that the amino acids occurring in proteins (natural amino acids) are, with rare exceptions, exclusively of the L-configuration. Among the many scenarios put forward to explain the origin of this chiral homogeneity (i.e., homochirality), one involves the asymmetric photolysis of amino acids present in space, triggered by circularly polarized UV radiation. The recent observation of circularly polarized light (CPL) in the Orion OMC-1 star-forming region has been presented as providing a strong, or even definitive, validation of this scenario. The present paper reviews the situation and shows that it is far more complicated than usually apprehended in the astronomical literature. It is stressed, for example, that one important condition for the asymmetric photolysis by CPL to be at the origin of the terrestrial homochirality of natural amino acids is generally overlooked, namely, that the asymmetric photolysis should favour the L-enantiomer for all the primordial amino acids involved in the genesis of life (i.e., biogenic amino acids). Although this condition is probably satisfied for aliphatic amino acids, some non-aliphatic amino acids like tryptophan and proline may violate the condition and thus invalidate the asymmetric photolysis scenario, assuming they were among the primordial amino acids. Alternatively, if CPL photolysis in space is indeed the source of homochirality of amino acids, then tryptophan and proline may be crossed out from the list of biogenic amino acids. Laboratory experiments suggested in this paper could shed further light on the composition of the set of amino acids that were required for the development of the homochirality of first life.

1. The Origin of Amino Acid Homochirality: A Long-Standing Question

Proteins play a crucial role in life, taking part in all vital processes. The building blocks of proteins are the amino acids. They consist of a central carbon atom (called the $`\alpha `$-carbon) bound to four groups: an amino or basic group (NH<sub>2</sub>), an acid group (COOH), a hydrogen atom, and a variable group R, called the side chain, which gives each amino acid its specificity. Only 20 different amino acids are used as building blocks in today’s proteins; they constitute the set of so-called natural amino acids. The primordial amino acids involved in the genesis of life constitute the set of biogenic amino acids. In the following, we will assume that some (if not all) of these primordial amino acids are now part of the set of natural amino acids. Because the four chemical groups bound to the $`\alpha `$-carbon of amino acids are not in a plane but rather adopt a tetrahedral shape (Fig. 1), all amino acids are chiral (except glycine, whose R group is a hydrogen atom), i.e., they possess two non-superposable three-dimensional mirror-image structures or enantiomers. The refractive indices of (a solution of) chiral molecules for clockwise and counterclockwise circularly polarized light are different, leading to a net rotation of the plane of linearly polarized light.
By convention, molecules that rotate the polarization plane of sodium D light (at $`\lambda =589.3`$ nm) to the right or to the left are called $`(+)`$ or $`(-)`$, respectively (note that, for several amino acids, the $`+/-`$ assignment is different for acid, neutral or basic solutions; see Sect. 3). The $`+/-`$ convention thus classifies enantiomers on the grounds of their optical rotatory power. Several other classification schemes of enantiomers exist, based on their geometrical conformation. One of these compares the amino acid structure to that of a reference chiral molecule, namely glyceraldehyde. By convention, $`(+)`$-glyceraldehyde was assigned configuration D (from the Latin dexter, right), and $`(-)`$-glyceraldehyde was assigned configuration L (from the Latin laevus, left). Using some correspondence rule (see e.g., Morrison and Boyd, 1987), the considered amino acid structure can be superposed on either D- or L-glyceraldehyde, and it is classified as D or L accordingly. Since the D/L classification refers to the geometrical conformation whereas $`+/-`$ refers to the optical rotatory power, there is not a one-to-one correspondence between the two assignments. Actually, only a small majority of the 19 chiral natural L-amino acids rotate the plane of polarized sodium light to the left, i.e., belong to the L-$`(-)`$ type (in a neutral solution) (see e.g., Morrison and Boyd, 1987). It has long been known that the natural amino acids are, with rare exceptions, exclusively of the L-configuration (Davies, 1977). The origin of this chiral homogeneity (i.e., homochirality) has been a puzzle since its discovery, and remains the subject of a lively debate (Cline, 1996; Podlech, 1999, and references therein). Several mechanisms have been proposed, as reviewed e.g. by Bonner (1991) (see also the various contributions in Cline, 1996), which may be grouped into biotic and abiotic theories. The former assume that life originated on Earth through chemical evolution in a primordial racemic (i.e., containing equal amounts of the L- and D-enantiomers) milieu, and that chiral homogeneity inevitably resulted from the evolution of living matter. Gol’danskii and Kuz’min (1988) convincingly argued, however, that a biotic scenario for the origin of chiral purity is not viable in principle, since without preexisting chiral purity the self-replication characteristic of living matter could not occur. This argument thus strongly favours abiotic theories, which may be grouped into the following classes (Bonner, 1991): chance mechanisms (spontaneous symmetry breaking by stereospecific autocatalysis, spontaneous resolution on crystallization, asymmetric synthesis on chiral crystals, asymmetric adsorption) and determinate mechanisms, the latter being subdivided into regional/temporal processes (symmetry breaking induced by electric, magnetic or gravitational fields, or by circularly polarized light via asymmetric photoequilibration, photochemical asymmetric synthesis or asymmetric photolysis) and universal processes (violation of parity in the weak interaction). Only the hypothesis of symmetry breaking by the action of circularly polarized light (CPL) will be discussed in some detail in this paper. The role of CPL present in natural skylight as a chiral engine was already suggested in the nineteenth century by Le Bel (1874) and van ’t Hoff (1894).
More recently, a similar scenario invoking the asymmetric photolysis of amino acids taking place not on Earth but rather in space (probably in the organic mantles at the surface of interstellar grains) has been put forward by several authors (Norden, 1977; Rubenstein et al., 1983; Bonner and Rubenstein, 1987; Bonner, 1991; Bonner, 1992; Greenberg et al., 1994; Greenberg, 1997). After some debate regarding the possible role of CPL from pulsars (Rubenstein et al., 1983; Roberts, 1984; Bonner and Rubenstein, 1987; Greenberg et al., 1994; Engel and Macko, 1997; Mason, 1997; Bonner et al., 1999), this idea regained interest recently with the observation of CPL in the Orion OMC-1 star-forming region (Bailey et al., 1998).

2. Asymmetric Photolysis: A Cosmic Enantioselective Engine?

Asymmetric photolysis, first demonstrated successfully by Kuhn and coworkers (1929, 1930a, 1930b), involves the preferential destruction of one enantiomer during the photodegradation of a racemic mixture by CPL. CPL-mediated reactions depend on the circular dichroism (CD) of the reactant (Crabbé, 1965; Buchardt, 1974; Rau, 1983), i.e., on the difference in its molar absorption coefficients $`ϵ_{(-)}`$ and $`ϵ_{(+)}`$ for $`(-)`$-CPL (left CPL) and $`(+)`$-CPL (right CPL), respectively \[$`\mathrm{\Delta }ϵ=ϵ_{(-)}-ϵ_{(+)}\neq 0`$\]. Note that here the difference in molar absorption coefficients is involved, not the difference in refractive indices; they are nevertheless related via the Kramers-Kronig integral relation (see e.g., Cantor and Schimmel, 1980). Since the photolysis rate depends upon the amount of light absorbed by the reactant, CD leads to different reaction rates for the two enantiomers, inducing an enantiomeric asymmetry as the reaction proceeds. Asymmetric photolysis, as considered here, results in the preferential destruction of the enantiomer having the higher absorption coefficient. The efficiency with which photolysis yields an enantiomeric excess is directly related to the so-called anisotropy factor $`g=\mathrm{\Delta }ϵ/ϵ`$, where $`ϵ=(ϵ_{(-)}+ϵ_{(+)})/2`$ (Kuhn, 1930; Balavoine et al., 1974). In the spectral region where $`\mathrm{\Delta }ϵ\neq 0`$, the molecule is said to have a CD band, which corresponds to an absorption band of the pertinent chromophore in the substrate. The electronic absorption bands of amino acids occur in the UV (shortwards of 300 nm) (Donovan, 1969), and most are optically active. Successful asymmetric photolyses of amino acids were performed by Flores et al. (1977) (2.5% enantiomeric excess in leucine after 75% photolysis of a racemic mixture), Norden (1977) (0.22% excess in glutamic acid after 52% photolysis, 0.06% excess in alanine after 20% photolysis), and Greenberg et al. (1994) (3% excess after irradiating racemic tryptophan at 10 K for 50 hours with monochromatic light at 252.4 nm with about $`10^{12}`$ photons cm<sup>-2</sup> s<sup>-1</sup>).
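The magnitudes of these excesses follow from simple first-order kinetics. The sketch below is our own illustration (the value of g and the assumption of purely exponential photodestruction are ours, not taken from the papers quoted above):

```python
import numpy as np

# First-order photolysis under CPL of fixed handedness: the two enantiomers
# of a racemic mixture decay with rate constants proportional to their
# absorption coefficients, k_L/k_D = (1 + g/2)/(1 - g/2), g = delta_eps/eps.
def enantiomeric_excess(g, extent):
    """Excess |cD - cL|/(cD + cL) of the surviving substrate after a
    fraction `extent` of an initially racemic sample has been destroyed."""
    kL, kD = 1.0 + g / 2.0, 1.0 - g / 2.0
    lo, hi = 0.0, 1e3                 # bisect for the (dimensionless) exposure
    for _ in range(200):
        t = 0.5 * (lo + hi)
        destroyed = 1.0 - 0.5 * (np.exp(-kL * t) + np.exp(-kD * t))
        lo, hi = (t, hi) if destroyed < extent else (lo, t)
    cL, cD = np.exp(-kL * t), np.exp(-kD * t)
    return abs(cD - cL) / (cD + cL)

# g = 0.02 is an assumed, illustrative anisotropy factor.
for extent in (0.20, 0.52, 0.75, 0.99):
    print(f"photolysed fraction {extent:.2f}:  ee = {enantiomeric_excess(0.02, extent):.4f}")
```

For small $`g`$ the excess is well approximated by $`ee\approx (g/2)\mathrm{ln}[1/(1-x)]`$, with $`x`$ the photolysed fraction: with the assumed $`g=0.02`$, destroying 75% of the sample yields an excess of only about 1.4%, of the same order as the experimental values quoted above. Large excesses therefore require destroying nearly all of the substrate.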
In his review on the Origin and Amplification of Biomolecular Chirality, Bonner (1991) concludes that an extraterrestrial origin of the biological homochirality on Earth seems the most likely. This author suggests that an enantiomeric excess in amino acids originated in space as a result of asymmetric photolysis triggered by CPL (Norden, 1977; Rubenstein et al., 1983; Bonner and Rubenstein, 1987; Greenberg et al., 1994; Greenberg, 1996), and was somehow transported to the prebiotic Earth. This scenario received further support recently from the detection of IR CPL in the Orion OMC-1 star-forming region (Bailey et al., 1998). In order for such a scenario to work, several conditions must be met: (i) amino acids must be able to form in an extraterrestrial environment; (ii) UV CPL must be present in space to irradiate these extraterrestrial amino acids; (iii) all the biogenic L-amino acids must have a CD spectrum such that photolysis by CPL of a given sign produces an excess of the L-enantiomer for all of them; (iv) amino acids must be transported from space onto the primitive Earth, e.g. via cometary and asteroidal impacts or via accretion of interstellar grains when the Earth traverses molecular clouds, and they must survive the heat generated during the passage through the atmosphere and the impact with the surface; (v) amplification mechanisms are required to bring the small excess induced by asymmetric photolysis to complete homochirality. Most of these conditions remain speculative, though with very different levels of uncertainty. The proof of (i) would require the direct detection of amino acids in space, which has not yet been achieved with certainty, despite several attempts (Miao et al., 1994; Travis, 1994; Combes et al., 1996). Indirect arguments in favour of an exogenous synthesis of amino acids are however provided by the discovery of apparently extraterrestrial amino acids in the Cretaceous/Tertiary boundary sediments (Zhao and Bada, 1989; for critical assessments, see Cronin, 1989; Chyba et al., 1990; Chyba and Sagan, 1992), and in the Murchison meteorite (Kvenvolden et al., 1970; Engel and Nagy, 1982; Engel et al., 1990; Engel and Macko, 1997; Cronin and Pizzarello, 1997, and references therein; for a critical assessment, see Cronin and Chang, 1993, and Pizzarello and Cronin, 1998). Moreover, laboratory experiments that simulate the formation of the organic mantles on interstellar grains by the action of UV light have been able to produce amino acids (Mendoza-Gómez, 1992; Greenberg, 1997). The detection of CPL in the Orion OMC-1 star-forming region by Bailey et al. in 1998 (condition ii) gave a new impetus to asymmetric photolysis in space as a cosmic enantioselective engine. The large (17%) circular polarization reported in Orion OMC-1 was observed in the IR domain, whereas amino-acid CD bands are located shortward of 300 nm. Nevertheless, model calculations seem to indicate that, if CPL is produced by scattering on nonspherical grains aligned in a magnetic field, similar circular polarization levels should be attained in the UV and IR domains (Bailey et al., 1998). Somewhat lower circular polarization levels were reported previously in the Chamaeleon low-mass star-forming region (Gledhill et al., 1996) and around the pre-main sequence object GSS30 (Chrysostomou et al., 1997). Transport of amino acids from space to the prebiotic Earth (condition iv) and amplification of a small exogenous enantiomeric excess (condition v) seem possible as well, as shown by detailed studies (Chyba et al., 1990; Bonner, 1991; Chyba and Sagan, 1992; Greenberg et al., 1994). Thus, only condition (iii), i.e., the possibility of forming the same enantiomer for all the biogenic amino acids by asymmetric photolysis, has not yet been the subject of a critical assessment in order to validate the above scenario (although a weaker form of this condition was already expressed by Mason, 1997). In fact, the assessment of condition (iii) requires the knowledge of both the composition of the set of biogenic amino acids, and their CD spectrum in the conditions prevailing in space (e.g., solid or gas phase, temperature).
Both of these are unfortunately unknown. Nevertheless, the consideration of CD spectra of natural amino acids in liquid solution may already provide some useful information, as shown in Sect. 3.

3. CD Properties of Amino Acids

The possibility that the terrestrial homochirality of amino acids originated from asymmetric photolysis by CPL in space requires that the L-enantiomer be favoured for all the biogenic amino acids. In other words, this requirement implies that the substrate was irradiated by CPL in a spectral window where all the biogenic amino acids have a CD band of one and the same sign. As noted by Mason (1997) and Bailey et al. (1998), an enantioselective effect on amino acids is best obtained if the CPL spectrum is confined to a single CD band, because CD bands alternate in sign and sum to zero over the whole spectrum (the Kuhn-Condon rule: Kuhn, 1930; Condon, 1937). In the case of broad-band CPL, a net enantioselective effect may nevertheless result if the wavelength integral of the CD index weighted by the CPL spectrum yields a non-zero effective CD coefficient $`\mathrm{\Delta }ϵ`$ (Buchardt, 1974). To be at the origin of the biomolecular homochirality, the (effective) CD coefficient must be of the same sign for all the biogenic amino acids. CD data for amino acids may be found in Legrand and Viennet (1965, 1966), Myer and MacDonald (1967), Katzin and Gulyas (1968), Anand and Hargreaves (1968), Horwitz et al. (1969), Sakota et al. (1970), Fowden et al. (1971), as well as in the references quoted by Blout (1973). Their general properties, along with the chromophore assignments, are summarized in Donovan (1969), Crabbé (1971) and Blout (1973). As already mentioned, in principle only the CD data of biogenic amino acids should be examined. However, as the composition of the set of biogenic amino acids is not currently known, we will discuss CD data of all natural amino acids, assuming biogenic amino acids are among them. The optical activity of amino acids arises from the acid group chromophore bound to their $`\alpha `$-carbon (i.e., a carboxyl group COOH in acid medium, which deprotonates in a neutral or basic medium to give a carboxylate group COO<sup>-</sup>) and from possible supplementary chromophores located in their side chain. The CD spectrum of aliphatic amino acids (with side chains involving only C and H atoms without double bonds, i.e., alanine, valine, leucine and isoleucine) is quite simple, as it is due solely to the acid group chromophore bound to the $`\alpha `$-carbon (Crabbé, 1971). By contrast, other amino acids exhibit a more complex CD behaviour because they possess a supplementary chromophore in their side chain (an aromatic ring for phenylalanine, tyrosine and tryptophan; a sulfur-containing group for cysteine and methionine; a basic group for lysine, arginine and histidine; an acid group for aspartic and glutamic acids; a side chain closing back onto the $`\alpha `$-amino group for proline) (see Donovan, 1969; Blout, 1973). Because the pH of the medium modifies the amino acid by protonating or deprotonating the basic and acid groups (bound to the $`\alpha `$-carbon or located in the side chain), the optical properties of amino acids depend on the acidity of the medium and on the nature of the solvent (Donovan, 1969). A sensitivity to temperature (Horwitz et al., 1969) and to ionization state (Katzin and Gulyas, 1968) has also been reported.
Assuming that the acid groups of amino acids in space occur in the form of a carboxyl group COOH rather than of a carboxylate group COO<sup>-</sup> (since there is no reason for them to deprotonate as in neutral or basic solutions), CD measurements in acid medium should be considered when the optical properties of amino acids are dominated by their carboxyl chromophore. As indicated in the references quoted above, laboratory measurements show that the carboxyl group bound to the $`\alpha `$-carbon has a strong CD band centered at about 210 nm. The sign of this CD band is directly related to the stereochemistry of the $`\alpha `$-carbon and is thus the same for all L-amino acids. If this band were the only one involved in the asymmetric photolysis, the photolysis of amino acids would indeed favour the same enantiomer for all amino acids, and extraterrestrial asymmetric photolysis could indeed be considered as a viable explanation for the amino acid homochirality on Earth. However, for non-aliphatic amino acids, the side chains complicate the picture, as they introduce supplementary chromophores. The situation appears especially critical with tryptophan, whose indole chromophore exhibits a strong CD band centered at about 195 nm, with opposite sign to the carboxyl 210 nm band (Legrand and Viennet, 1965; Myer and MacDonald, 1967; Blout, 1973). Proline also has a strong CD band of opposite sign around 193 nm in a neutral solution (this band, however, disappears in acid solution; Fowden et al., 1971). At this point, it should be stressed that Greenberg et al. (1994) have obtained an enantiomeric excess starting from racemic tryptophan in a laboratory experiment simulating photolysis by CPL irradiating an interstellar dust grain. The experiment was conducted at a temperature of 10 K with monochromatic light at 252.4 nm from a high pressure mercury lamp. Although that experiment certainly demonstrates the potential of CPL to trigger asymmetric photolysis of amino acids at the surface of interstellar grains, it does not ensure that an irradiation with broad-band UV CPL, as is more likely to be the case in space as discussed by Bailey et al. (1998), would still result in an enantiomeric excess (given the presence of CD bands of opposite signs in tryptophan). Moreover, in order to ensure homochirality with the other amino acids, the asymmetric photolysis of tryptophan should be governed by the carboxyl chromophore (210 nm) rather than by the indole chromophore (195 nm). This condition might be satisfied for irradiation by UV light from main sequence stars later than about A8, but it does not necessarily hold true for irradiation by UV light from earlier stars, whose flux rises shortward of 200 nm (Bailey et al., 1998). The same problem may arise in the case of irradiation by pulsar synchrotron radiation with a constant $`\lambda F_\lambda `$ spectrum (Rubenstein et al., 1983; Bonner and Rubenstein, 1987; Greenberg et al., 1994).
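The competition between opposite-sign bands can be made concrete with a toy calculation. In the sketch below, the band positions follow the values quoted above, but the amplitudes, widths and source spectra are purely illustrative assumptions of ours:

```python
import numpy as np

# Toy illustration of the flux-weighted effective CD coefficient for
# broad-band CPL (a Buchardt-style average); band amplitudes and widths,
# and both source spectra, are assumptions made for illustration only.
lam = np.linspace(180.0, 260.0, 2001)                # wavelength [nm]
band = lambda l0, sigma: np.exp(-0.5 * ((lam - l0) / sigma) ** 2)

# Tryptophan-like CD index: a carboxyl band (~210 nm) and a somewhat
# stronger indole band (~195 nm) of opposite sign (arbitrary units).
d_eps = 1.0 * band(210.0, 8.0) - 1.5 * band(195.0, 6.0)

def effective_cd(flux):
    """Source-spectrum-weighted CD coefficient (uniform wavelength grid)."""
    return (d_eps * flux).sum() / flux.sum()

soft = band(230.0, 20.0)   # "late-type" source: little flux below ~200 nm
hard = 1.0 / lam           # "hard" source: roughly constant lambda*F_lambda

print(effective_cd(soft))  # > 0: the 210 nm carboxyl band dominates
print(effective_cd(hard))  # < 0: the 195 nm indole band flips the sign
```

With these (arbitrary) band strengths the net effective $`\mathrm{\Delta }ϵ`$ changes sign between the two source spectra, which is the quantitative content of the A8 threshold discussed above.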
4. Conclusion: Asymmetric Photolysis vs. Composition of the Set of Biogenic Amino Acids

The present paper has reviewed the conditions necessary for the asymmetric photolysis of biogenic amino acids by CPL in space to be at the origin of today’s homochirality of natural amino acids. It has shown that a critical requirement in that respect is that asymmetric photolysis should select the L-enantiomer for all the biogenic amino acids. A survey of the available CD data for amino acids has revealed that tryptophan and proline pose the most serious problem, as they exhibit CD bands of opposite signs in the UV region where early-type main sequence stars emit most of their radiation. Because the signs and intensities of CD bands depend, however, on the properties of the medium, extrapolation of laboratory data obtained in liquid solutions to infer the CD properties of amino acids in space (where they are likely to be found in solid or gas phase) is not straightforward, and prevents any firm conclusion from being drawn at this stage. Asymmetric photolysis laboratory experiments along the guidelines of Greenberg et al. (1994), extended to other amino acids and using broad-band UV CPL rather than monochromatic light, would be of great interest, as would CD data for as many amino acids as possible, obtained under experimental conditions matching as closely as possible those prevailing in space. To summarize, the consideration of the available CD properties of amino acids currently leads to two mutually exclusive possibilities regarding the possible role of asymmetric photolysis in space in the homochirality of the natural amino acids: (i) if tryptophan or proline is biogenic, then the extraterrestrial asymmetric photolysis scenario has to be rejected, or (ii) if that scenario is valid, then the CD properties of amino acids (along with the spectral properties of CPL in space) make it possible to eliminate tryptophan and proline from the set of biogenic amino acids. This reasoning could be extended to any other amino acid for which new CD measurements in conditions mimicking those in space would uncover CD bands with a sign opposite to that of the carboxyl chromophore, in the spectral region characterizing CPL in space. It is currently impossible to decide between the two alternatives, as there are experimental facts in support of each. On the one hand, proline has been found in the Murchison meteorite (Kvenvolden et al., 1970; Engel and Nagy, 1982), which may be indicative of its biogenic nature if life started from the amino acids deposited on the early Earth; on the other hand, some authors (Isoyama et al., 1984) have argued that tryptophan may have appeared quite late in the biological evolution. In conclusion, we hope to have convinced the reader that the role of extraterrestrial asymmetric photolysis in the origin of the homochirality of natural amino acids on Earth, if at all involved, is far more complicated than is usually apprehended in the astronomical literature.

Acknowledgments

We thank Prof. W. A. Bonner for sending us a paper in advance of publication. A. J. is Research Associate of the FNRS (Belgian National Fund for Scientific Research).

References

Anand, R.D. and Hargreaves, M.K.: 1968, Chem. Ind., 880.
Bailey, J., Chrysostomou, A., Hough, J.H., Gledhill, T.M., McCall, A., Clark, S., Ménard, F. and Tamura, M.: 1998, Science 281, 672.
Balavoine, G., Moradpour, A. and Kagan, H.B.: 1974, J. Am. Chem. Soc. 96, 5152.
Blout, E.R.: 1973, in F. Ciardelli and P. Salvadori (eds.), Fundamental Aspects and Recent Developments in ORD and CD, Heyden & Son, London, p. 352.
Bonner, W.A.: 1991, Origins of Life Evol. Biosph. 21, 59.
Bonner, W.A.: 1992, Origins of Life Evol. Biosph. 22, 407.
Bonner, W.A. and Rubenstein, E.: 1987, Biosystems 20, 99.
Bonner, W.A., Rubenstein, E. and Brown, G.S.: 1999, Origins of Life Evol. Biosph. 29, 329-332.
Buchardt, O.: 1974, Angew. Chem. Intern. Ed. 13, 179.
Cantor, C.R. and Schimmel, P.R.: 1980, Biophysical Chemistry, Part II, Freeman and Co., San Francisco, p. 413.
Chrysostomou, A., Ménard, F., Gledhill, T.M., Clark, S., Hough, J.H., McCall, A. and Tamura, M.: 1997, MNRAS 285, 750.
Chyba, C.F. and Sagan, C.: 1992, Nature 355, 125.
Chyba, C.F., Thomas, P.J., Brookshaw, L. and Sagan, C.: 1990, Science 249, 366.
Cline, D.B. (ed.): 1996, Physical Origin of Homochirality in Life (AIP Conf. Proc. 379), American Institute of Physics, Woodbury and New York.
Combes, F., Nguyen-Q-Rieu and Wlodarczak, G.: 1996, A&A 308, 618.
Condon, E.U.: 1937, Rev. Mod. Phys. 9, 432.
Crabbé, P.: 1965, Optical Rotatory Dispersion and Circular Dichroism in Organic Chemistry, Holden-Day, San Francisco.
Crabbé, P.: 1971, in F.C. Nachod and J.J. Zuckerman (eds.), Determination of Organic Structures by Physical Methods, Academic Press, New York, p. 133.
Cronin, J.R.: 1989, Nature 339, 423.
Cronin, J.R. and Chang, S.: 1993, in J.M. Greenberg et al. (eds.), The Chemistry of Life’s Origins, Kluwer, Dordrecht, p. 209.
Cronin, J.R. and Pizzarello, S.: 1997, Science 275, 951.
Davies, J.S.: 1977, in B. Weinstein (ed.), Chemistry and Biochemistry of Amino Acids, Peptides and Proteins, Marcel Dekker, New York, p. 1.
Donovan, J.W.: 1969, in S.J. Leach (ed.), Physical Principles and Techniques of Protein Chemistry (Part A), Academic Press, New York, p. 101.
Engel, M.H. and Macko, S.A.: 1997, Nature 389, 265.
Engel, M.H., Macko, S.A. and Silfer, J.A.: 1990, Nature 348, 47.
Engel, M.H. and Nagy, B.: 1982, Nature 296, 837.
Flores, J.J., Bonner, W.A. and Massey, G.A.: 1977, J. Am. Chem. Soc. 99, 3622.
Fowden, L., Scopes, P.M. and Thomas, R.N.: 1971, J. Chem. Soc. (C), 834.
Gledhill, T.M., Chrysostomou, A. and Hough, J.H.: 1996, MNRAS 282, 1418.
Gol’danskii, V.I. and Kuz’min, V.V.: 1988, Z. Phys. Chem. 269, 216.
Greenberg, J.M.: 1996, in D.B. Cline (ed.), Physical Origin of Homochirality in Life (AIP Conf. Proc. 379), American Institute of Physics, Woodbury and New York, p. 185.
Greenberg, J.M.: 1997, in R.B. Hoover (ed.), Instruments, Methods, and Missions for the Investigation of Extraterrestrial Microorganisms (Proc. SPIE Vol. 3111), p. 226.
Greenberg, J.M., Kouchi, A., Niessen, W., Irth, H., Van Paradijs, J., de Groot, M. and Hermsen, W.: 1994, J. Biol. Phys. 20, 61.
Horwitz, J., Hardin Strickland, E. and Billups, C.: 1969, J. Am. Chem. Soc. 91, 184.
Isoyama, M., Ohoka, H., Kikuchi, H., Shimada, A. and Yuasa, S.: 1984, Origins of Life 14, 439.
Katzin, L.I. and Gulyas, E.: 1968, J. Am. Chem. Soc. 90, 247.
Kuhn, W.: 1930, Trans. Faraday Soc. 26, 293.
Kuhn, W. and Braun, E.: 1929, Naturwissenschaften 17, 227.
Kuhn, W. and Knopf, E.: 1930a, Z. Physik. Chem. B 7, 292.
Kuhn, W. and Knopf, E.: 1930b, Naturwissenschaften 18, 183.
Kvenvolden, K.A., Lawless, J., Pering, K., Peterson, E., Flores, J., Ponnamperuma, C., Kaplan, I.R. and Moore, C.: 1970, Nature 228, 923.
Le Bel, J.A.: 1874, Bull. Soc. Chim. France 22, 337.
Legrand, M. and Viennet, R.: 1965, Bull. Soc. Chim. France, 679.
Legrand, M. and Viennet, R.: 1966, Bull. Soc. Chim. France, 2798.
Mason, S.F.: 1997, Nature 389, 804.
Mendoza-Gómez, C.X.: 1992, Complex Irradiation Products in the Interstellar Medium, Ph.D. thesis, University of Leiden.
Miao, Y., Snyder, L.E., Kuan, Y.J. and Lovas, F.J.: 1994, BAAS 26, 906.
Morrison, R.T. and Boyd, R.N.: 1987, Organic Chemistry, Allyn and Bacon, Boston.
Myer, Y.P. and MacDonald, L.H.: 1967, J. Am. Chem. Soc. 89, 7142.
Norden, B.: 1977, Nature 266, 567.
Pizzarello, S. and Cronin, J.R.: 1998, Nature 394, 236.
Podlech, J.: 1999, Angew. Chem. Int. Ed. 38, 477.
Rau, H.: 1983, Chem. Rev. 83, 535.
Roberts, J.A.: 1984, Nature 308, 318.
Rubenstein, E., Bonner, W.A., Noyes, H.P. and Brown, G.S.: 1983, Nature 306, 118.
Sakota, N., Okita, K. and Matsui, Y.: 1970, Bull. Chem. Soc. Japan 43, 1138.
Travis, J.: 1994, Science 264, 1668.
van ’t Hoff, J.H.: 1894, The Arrangement of Atoms in Space, 2nd ed., Braunschweig, p. 30.
Zhao, M. and Bada, J.L.: 1989, Nature 339, 463.
no-problem/9911/hep-th9911202.html
ar5iv
text
# Acknowledgments

This work has been partially supported by the European Union TMR program FMRX-CT96-0012 “Integrability, Non-perturbative Effects, and Symmetry in Quantum Field Theory” and by the Spanish grant AEN96-1655. The work of E.A. has also been supported by the European Union TMR program ERBFMRX-CT96-0090 “Beyond the Standard Model” and the Spanish grant AEN96-1664.
no-problem/9911/astro-ph9911397.html
ar5iv
text
# Acknowledgements

It is a pleasure to thank my supervisor, Jon Davies, for the help he continuously gave me during the three years of my PhD project, not to speak of his contagious enthusiasm. I have benefited a lot from the discussions and comments of the past and current members of our research group, Paul Alton, Lea Morshidi, Matthew Trewhella and Alexandros Kambas. Particular thanks go to Rodney Smith, for not having shouted once during my continuous reports of real (and sometimes fictitious) problems with the computer, usually followed by a request for an immediate solution; and to Judy Haynes, for having spared me months of data reduction. Among all the other people who have helped me during these three years, I would like to remember Andrea Ferrara, for suggesting new directions of investigation, and Spyros Kitsionas, Phillip Gladwin and Neil Francis, for the numerous hints they gave me. Special thanks go to my many Italian, Spanish, Latin American, Greek and Portuguese friends, and to those from many other countries, who helped me to spend these three long years in Wales happily. Finally, I would like to dedicate this thesis to my family and to my lifelong friends. Cu!

## Summary

The dust distributions observed in spiral galaxies play a major role in Astrophysics. Dust very effectively extinguishes UV and optical starlight. Therefore it may alter considerably our view of the galaxy itself and of the distant universe in its background. The dust opacity in spiral galaxies is still a debated issue. Since the energy absorbed by dust grains from starlight is re-emitted at longer wavelengths, mainly in the Far-Infrared (FIR) and Sub-millimetre ($`\lambda >60\mu `$m), observations of dust emission can help to constrain the parameters of the dust distribution. I have developed an original model for the FIR emission in spirals, starting from an existing radiative transfer code (Bianchi, Ferrara & Giovanardi 1996). The model’s main features are: a complete treatment of multiple scattering within geometries appropriate for spirals; a full coverage of the spectral range of stellar emission; the use of empirically determined dust properties (some of which are derived in the present work); the production of maps of the dust temperature distribution, together with simulated optical, FIR and sub-millimetre images. The model has been applied to observations of stellar and dust emission in the galaxy NGC 6946. It is found that optically thick models (central face-on optical depth $`\tau _\mathrm{V}\approx 5`$) are necessary to explain the observed FIR output. For such models, about 30-40% of the intrinsic starlight is absorbed. The observed ratio of FIR and optical scalelengths can be explained if the dust distribution is more extended than the stellar one. However, because of the shallower gradients of optical emission in optically thick cases, a very extended dust distribution is needed ($`\alpha _\mathrm{d}\approx 3\alpha _{\star }`$). The distribution of atomic gas in NGC 6946 has a similar extent. I discuss the approximations in the modelling (mainly the use of smooth distributions against the observed clumpiness of the interstellar medium) and the implications of the results.

## Chapter 1 Introduction

Observations of our Galaxy, as well as of other spirals (Fig. 1.1), reveal the presence of regions of sky darker than the surroundings.
This apparent decrease in the number of stars is caused by one of the constituents of the Inter-Stellar Medium (ISM): dust. Dust is made of small (mean radius 0.1 $`\mu `$m; Hildebrand 1983) solid grains, possibly made of silicates and graphite (Draine & Lee 1984). It constitutes only a tiny fraction of the ISM: for instance, in the Solar neighbourhood, the mass of dust is less than 1% of the mass of the gas (Sect. 2.2). Despite its relatively small abundance, dust plays a major role in astrophysics. Dust grains are very effective in extinguishing Ultra-Violet and Optical ($`\lambda <1\mu `$m) starlight, because the radiation wavelength is of the same order as the grain size (Sect. 2.2). Our view of the universe, being mainly based on optical observation, can therefore be severely biased by dust extinction. Dust’s ability to extinguish radiation is usually quantified by the extinction $`A_\lambda `$, i.e. the ratio between the observed and the intrinsic unextinguished luminosity, expressed on a magnitude scale. In the case of dust lying between a light source and the observer, the extinction is approximately equal to the optical depth $`\tau _\lambda `$, the path length of light through the dusty medium measured in units of the photon mean free path (see Sect. 2.1 for mathematical definitions). Extinction and optical depth depend on the wavelength $`\lambda `$, the effect of dust being larger for smaller wavelengths (Sect. 2.2). The higher transparency of a dusty medium for radiation at large wavelengths rather than at short wavelengths goes under the name of reddening. A medium is defined as optically thin for radiation of wavelength $`\lambda `$ if $`\tau _\lambda <1`$, the amount of dust not reducing drastically the source radiation, or optically thick otherwise.
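For a foreground screen the two measures are related by a constant factor, since the transmitted intensity is $`I=I_0e^{-\tau _\lambda }`$:

$$A_\lambda =-2.5\mathrm{log}_{10}(I/I_0)=2.5\,\tau _\lambda \,\mathrm{log}_{10}e\approx 1.086\,\tau _\lambda ,$$

so that extinction in magnitudes and optical depth can be read almost interchangeably (e.g. $`\tau _V=1`$ corresponds to $`A_V\approx 1.1`$ mag).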
Within the Galaxy, an extinction in the V band $`A_V=0.1-0.2`$ mag is observed in the direction of the poles, while in the direction of the Galactic centre it reaches $`A_V\approx 30`$ (Whittet 1992). While the relative transparency out of the Galactic plane has permitted us to observe extragalactic objects, the high opacity along the plane severely biased the first determinations of the shape and dimension of the Galaxy (Whittet 1992). Dust extinction in the Galaxy is directly assessed through studies of the obscuration of individual stars of known intrinsic luminosity. This is not possible in external galaxies. For these objects, estimates of the extinction of galactic light by dust in its own ISM (usually referred to as internal extinction) rely on a comparison of the observed luminosity profiles with models of radiative transfer. Realistic models are necessary, to avoid misinterpretations and mutually exclusive results (Disney, Davies & Phillipps 1989). Among the requirements of realistic models, the choice of stellar-dust geometries appropriate to galaxies and the inclusion of light scattering by dust in the radiative transfer are vital (Sect. 3.1). A brief review of extinction studies is presented in Sect. 1.2. The stellar radiation absorbed by dust is re-emitted at infrared wavelengths, mainly in the Mid-Infrared (5 $`\mu `$m-60 $`\mu `$m; MIR) and in the Far-Infrared (60 $`\mu `$m-300 $`\mu `$m; FIR) spectral ranges. Dust emission has been observed in our Galaxy as well as in other spirals (Sect. 1.3 and Sect. 1.4). In our Galaxy, 10-30% of the total Galactic bolometric luminosity is emitted by dust (Whittet 1992). The Infrared Astronomical Satellite (IRAS) has revealed that the Galactic dust emission is characterised by regions of star formation, with dust at higher temperatures because of the closeness to the radiation sources (Sect. 2.3), and by diffuse, thin clouds (often denoted as cirrus) of colder dust, heated by a diffuse Inter-Stellar Radiation Field (ISRF) (Beichman 1987). The cold diffuse cirrus dust, due to its ubiquity, is responsible for the interstellar extinction. As for external galaxies, studies of dust emission are limited by the instrumental resolution, sensitivity and spectral range observed (Sect. 1.3). However, recent technological developments (mainly in the Sub-millimetre and millimetre spectral ranges, $`\lambda >300\mu `$m) have permitted the observation of the cold dust responsible for extinction in external galaxies as well (Sect. 1.4). Because of the direct link between dust emission and extinction, it is possible in principle to derive the quantity of dust in a spiral galaxy by comparing the observed stellar luminosity with the FIR emission, if an accurate radiative transfer model is used. For this Thesis, I have modified an existing radiative transfer code for spiral galaxies (Bianchi, Ferrara & Giovanardi 1996) to model dust emission in the FIR. The observed Spectral Energy Distributions (SEDs) of stellar and dust emission, as well as their spatial distributions, will be compared to the model output to gain clues about the galaxy dust content and the star-dust relative geometry. In this Chapter I will describe observations of extinction and FIR emission, introducing the main topics that will be discussed throughout the rest of the Thesis. A plan of the Thesis is presented at the end of this Chapter. A brief discussion of the relevance of dust studies to the understanding of spiral galaxies and the distant universe is given in the next Section.

### 1.1 Relevance of dust studies to extra-galactic astronomy

Dust plays a very important role in many astrophysical processes, from the formation of molecular gas, which is believed to form on grain surfaces, to the obscuration of the distant universe. Without claiming completeness, I discuss now a few problems that may benefit from a proper knowledge of the dust distribution and amount. The study of galactic morphology depends, obviously, on the observed radiation. Galactic properties, like luminosity and dimension, may be severely biased by dust extinction. Objects with the same intrinsic properties but different dust distributions may look of different type, thus prejudicing any morphological classification based on the optical aspect. As an example, Trewhella (1998a) found that an Sc galaxy observed in the B-band, NGC 6946, is similar to an Sb when a correction for extinction, from his model, is applied. Because of the selective extinction with wavelength, ages of distant objects inferred from broad-band colours may be biased by the reddening introduced by dust (Cimatti et al. 1997). Rotation curves of spiral galaxies derived from atomic Hydrogen observations can be used to infer the galactic mass. The mass of a galaxy derived from the luminosity, assuming a constant mass-to-luminosity ratio, is always smaller than that derived from gravitational studies: large amounts of dark matter are present. At present, dark matter is unexplained.
Although it is improbable that dark matter is due to a large underestimate of the stellar content caused by extinction, dust emission may trace a possible extended halo component of cold gas that can account for some of the unseen mass (Gerhard & Silk 1996). Extinction due to dust in foreground objects may be able to explain the fall-off in the number of detected objects at large redshifts (Ostriker & Heisler 1984, Fall & Pei 1993). If extended distributions of dust are present (Sect. 1.6), the effect may be stronger than believed. The relation between the HI line width at 21 cm and the galactic luminosity (the Tully-Fisher relation) is used to derive the object distance from its apparent magnitude. Because of dust, corrections are necessary to bring the luminosities of objects with different inclinations to a common face-on value (Sect. 1.2). The Tully-Fisher relation in the optical band presents a large scatter, mainly because of dust extinction. Although the scatter is significantly reduced using luminosities in the less extinguished Near-Infrared (NIR), an extinction correction may still be necessary (Moriondo, Giovanelli & Haynes 1998). Ultraviolet and blue fluxes are used to derive the star-formation history of the Universe (Madau et al. 1996). Star-formation rates are therefore greatly dependent on correct estimates of extinction. On the other hand, if the rate is to be derived from dust FIR emission, a knowledge of the dust heating mechanism is necessary (Sect. 1.5).

### 1.2 Studies of dust extinction

A derivation of the extinction in an astrophysical object is relatively easy only in the case when dust lies between the source of radiation and the observer (a screen model; Sect. 3.1), as for stars in the Galaxy. Even in this case, a knowledge of the intrinsic luminosity of the source is necessary to assess the opacity of the dust screen. This is not the case for spiral galaxies, where the intrinsic properties of the unextinguished objects are unknown and the dust distribution is co-spatial with the stars. One method used to infer the opacity of spirals is the study of the variation of some observables, like surface brightness and magnitude, with the inclination of the object. For a simple model where dust and stars are homogeneously distributed in an infinite plane-parallel geometry (a slab model; Sect. 3.1), the surface brightness (magnitude per unit solid angle) of the object will increase with inclination in the optically thin case, because lines of sight closer to the model plane intersect a larger portion of the galaxy. In the optically thick case, instead, only the radiation coming from a region of the dimension of the mean free path of a photon is observed, and this is independent of the inclination. Opposite behaviours are expected for the total magnitude, which will be constant in the optically thin case, because all the light emitted by stars can be seen from any direction, while it will decrease in the optically thick case, as a result of the decrease of the object’s area projected on the sky. These results for the slab model constitute two limiting cases; an object with a more realistic dust geometry will behave in an intermediate way, as the sketch below illustrates.
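A minimal numerical rendering of the two limits just described (our illustration; the slab is taken to be uniformly mixed, with emissivity and extinction coefficient set to unity, so all quantities are in arbitrary units):

```python
import numpy as np

# Uniformly mixed stars and dust in an infinite plane-parallel slab with
# face-on optical depth tau.  A line of sight inclined by i (mu = cos i)
# crosses optical depth tau/mu, and the emergent intensity of a uniform
# emitting and absorbing medium is proportional to 1 - exp(-tau/mu).
def surface_brightness(tau, mu):
    return 1.0 - np.exp(-tau / mu)

def total_flux(tau, mu):
    return surface_brightness(tau, mu) * mu   # intensity x projected area

for tau in (0.05, 20.0):                  # optically thin vs optically thick
    for mu in (1.0, 0.5, 0.2):            # face-on to fairly inclined
        print(f"tau={tau:5.2f}  mu={mu:3.1f}  "
              f"SB={surface_brightness(tau, mu):.3f}  "
              f"flux={total_flux(tau, mu):.3f}")

# Thin slab:  SB grows roughly as 1/mu with inclination, total flux ~ constant.
# Thick slab: SB saturates (inclination-independent), total flux falls as mu.
```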
Statistical studies have been conducted on large samples of galaxies of supposedly similar properties as a function of the inclination with which they are observed. The first study of this kind was that conducted by Holmberg (1958), analysing the variation of the projected surface brightness with inclination for a sample of 119 spirals (53 of types Sa-Sb, 66 of type Sc). Using a model derived from the variation of the Galactic extinction with latitude (a screen model, as he recognised later; Holmberg 1975), he inferred a substantial transparency for spiral galaxies. This result was widely accepted, until Disney et al. (1989) showed how heavily it depended on the assumed model. They were able to fit Holmberg’s data with an optically thick model, provided the dust distribution was internal to the stellar one, as inferred from observations of edge-on galaxies. For an infinite opacity, such a model would behave as a transparent one, because of the unextinguished dust-free layer of stars above the dust distribution. They showed how more realistic models are necessary to ascertain the opacity of a galaxy (see Sect. 3.1 for a description of the problems involved in producing realistic models of internal extinction in a spiral galaxy). Unfortunately statistical studies, even within the framework of proper models, can be severely biased by selection effects. For instance, two works suggesting high optical depths up to large distances from the galactic centre (Valentijn 1990, Burstein et al. 1991) were shown to be affected by the object selection criteria, either because galaxies with similar surface brightness were selected independently of their inclination (Davies et al. 1993) or because the objects lay in too small a volume of space (Davies et al. 1995). Giovanelli et al. (1994) analysed the photometric properties of a sample of 1235 Sbc-Sc galaxies observed in the I band. The derived laws relating galactic photometric properties to the inclination were then compared to the results of a TRIPLEX model (Disney et al. 1989, Sect. 3.1). Observations are compatible with a galactic disk having a central face-on optical depth $`\tau _I<5`$ ($`\tau _V<10`$, using the Galactic extinction law in Table 2.1). A similar analysis was conducted by Moriondo et al. (1998) on a sample of 154 spirals observed in the Near-Infrared (1 $`\mu `$m-5 $`\mu `$m; NIR) band H (68 of which with I band data). Each galaxy was decomposed into its structural parameters, and their variation with inclination was studied. The effects of internal extinction were detected, especially the increase of the disk scalelengths and of the I-H colour with inclination. Simulations from a modified version of the TRIPLEX model, allowing for larger dust scalelengths (Xilouris et al. 1997; 1998, Sect. 1.6), lead to results compatible with the observations if the central face-on optical depth is $`0.3<\tau _H<0.5`$ ($`1.5<\tau _V<3`$). For such optical depths the galactic disk would be moderately opaque, becoming optically thin, for face-on inclinations, at about 1 disk radial scalelength from the centre. Extinction studies have also been carried out more directly on single objects. One of the dust properties frequently exploited in extinction studies is the selective extinction at different wavelengths. If intrinsic variations of starlight colour, like those due to different stellar populations being present in different parts of a disk, are not present, a reddening of the radiation will reveal the dusty regions. Therefore, comparing images in the optical, where extinction should be present, with observations in the NIR, which are far less affected by dust ($`\tau _B/\tau _K\approx 14`$; Table 2.1), a map of the extinction can be produced. Using a radiative transfer model, the optical depth can finally be retrieved.
However, even in the hypothetical case of no intrinsic colour variation, the method is not easy to apply, since it is difficult to derive the intrinsic unextinguished colour of the stellar population. Block et al. (1994) used optical-NIR colours to derive the extinction in two mid-inclined spiral galaxies, NGC 4736 and NGC 4826. They estimated the intrinsic colour from regions that look free of extinction or by using synthetic models of stellar populations. With a radiative transfer code, they retrieved a relation between the colour and the dust optical depth. In NGC 4736 they detected a dust component demarcating the spiral arms, with optical depth $`2<\tau _V<4`$, and diffuse interarm dust with $`\tau _V\approx 0.75`$. In NGC 4826 the dust is distributed in a foreground screen with $`\tau _V\approx 2`$. From the optical depth they deduced the dust mass, and found values an order of magnitude higher than those inferred from IRAS observations of dust emission (I will show in Sect. 1.3 and 1.4 how IRAS observations are not able to detect the more massive dust component in a galaxy). Regan, Vogel & Teuben (1997) applied the same technique to NGC 1530, using six optical-NIR colours. A peak face-on optical depth $`\tau _V\approx 4`$ is derived in a nuclear dust ring. The same line of reasoning is used by Peletier et al. (1995). They measured the radial scalelengths in B and K and studied their ratio as a function of inclination. Assuming that the colour radial gradient is due mainly to dust (see De Jong 1996a for an opposite view), the variation of the scalelength ratio can be described by an extended distribution of dust (Sect. 1.6) with a face-on optical depth of order unity in the V band. Beckman et al. (1996) measured radial scalelengths in B, V, R and I for three face-on galaxies, over the whole disk and over selected regions, to separate the arm contribution from the interarm one. The increase of the radial scalelength with decreasing wavelength of the observation is modelled with a radiative transfer code for face-on galaxies. Higher central face-on optical depths are found for the arm regions than for the interarm ones. When optical depths are derived from the mean scalelength of each whole galaxy, values of $`\tau _V=3-4`$ are found. Kuchinski et al. (1998) studied several optical-NIR colours of 15 highly inclined spiral galaxies. Colour-colour plots for different positions along the minor axis of each galaxy are compared with analogous data from a complex Monte Carlo radiative transfer model (Sect. 3.1) inclusive of scattering, a stellar and a dust disk, a stellar spheroidal distribution and clumping. The observed trajectories in the colour-colour plots can be explained by models with optical depths in the range $`0.5<\tau _V<2.0`$. In all the optical depth determinations from radiative transfer models presented above, assumptions were made about the parameters describing the stellar and dust distributions. In edge-on galaxies, where the gradients due to dust extinction are maximised, resulting in evident extinction lanes, it should be possible to derive both the optical depth and the distribution parameters from an image of the galaxy. This is achieved by the fitting procedure of Xilouris et al. (1999). A mean V-band face-on optical depth $`\tau _V=0.5`$ is derived from models of 7 edge-on galaxies. Alternatively, light from background objects can be used to derive the extinction, in the simple screen scheme, without any knowledge or assumption required for the relative distribution of dust and stars in the galaxy.
The first application of this technique was made by Zaritsky (1994), to detect a possible dust halo in two spiral galaxies. I will discuss this more extensively in Sect. 1.6 and in Appendix B. González et al. (1998) studied the colours and number counts of background galaxies seen through the disks of the spiral galaxy NGC 4536 and the irregular NGC 3664, using images from the Wide Field Planetary Camera 2 of the Hubble Space Telescope. They found an extinction $A_I$=0.74-1.07 for the arm region of NGC 4536 (the value depending on the method used), and $A_I<$0.5 for the interarm region. The disk of NGC 3664 shows an extinction $A_I$=1. These results convert to $\tau_V\approx 2$ for the arm region of NGC 4536 and the central part of NGC 3664. Another technique that makes use of background objects is the method of overlapping galaxies (White & Keel 1992, Berlind et al. 1997). To separate the radiation of the foreground object from the attenuated radiation of the background one, smooth and symmetric objects are required. If this is the case, it is possible to estimate each individual intrinsic flux in the region of overlap from the flux in the regions where the objects are seen separately. Subtracting the foreground flux from the overlap, only the attenuated flux of the background object is left. Comparing it to the intrinsic flux, the extinction can be derived. Both the works of White & Keel (1992) and Berlind et al. (1997), conducted on two different galactic pairs, give a higher extinction for the arm region ($\tau_V\approx 1$) than for the interarm ($\tau_V\approx 0.5$). Most of the recent work listed here seems to suggest that galaxies have a moderate extinction, with face-on central optical depth $\tau_V$ of order unity. However, the methods devised to ascertain the opacity of a spiral galaxy suffer from many uncertainties and assumptions. Further constraints on dust can be obtained by analysing its FIR emission.

### 1.3 FIR emission from spiral galaxies: missing dust?

As a consequence of the principle of energy conservation, the radiation absorbed by dust from stars must be re-emitted. Using simple calculations for a dust grain immersed in the local ISRF, Van De Hulst (1946) derived a grain equilibrium temperature $T\approx 16$ K. The peak of dust emission would therefore occur at $\lambda\approx 170\mu$m, in the FIR (Sect. 2.3). Dust in the proximity of stars would be heated to a higher temperature and therefore emit at shorter wavelengths (Whittet 1992). With the launch of IRAS in 1983 (Neugebauer et al. 1984), dust emission was revealed to span a wide wavelength range, from the MIR to the FIR, in our Galaxy (Beichman 1987, Cox & Mezger 1989) as well as in other spirals (Rice et al. 1988). IRAS observed in four broad filters centred at 12, 25, 60 and 100 $\mu$m, covering a spectral range from $5\mu$m to $120\mu$m. If dust is heated preferentially by the ISRF to temperatures lower than 30 K, the peak of dust emission is not observed. Because of its spectral range, IRAS is therefore more likely to pick up regions where dust has a higher temperature, as in the proximity of star-forming regions, where the radiation field is stronger than in the diffuse medium. Furthermore, it was discovered that emission at $\lambda\lesssim 60\mu$m is mostly due to small grains heated stochastically and not at thermal equilibrium (Sect. 2.5).
Because of this excess emission at shorter wavelengths, dust temperatures derived from IRAS flux ratios under the hypothesis of thermal equilibrium are biased towards higher values. The luminosity emitted by dust depends on a high power of the temperature ($T^{4+\beta}$, where $\beta\sim 1$; Sect. 2.3). A small amount of warm dust can therefore emit more radiation than a large amount of cold dust, which could pass undetected, unless observations cover the spectral range where the cold dust emission peaks. Using IRAS data only, the bulk of the dust in a spiral galaxy may be overlooked. This is evident in the first determinations of the gas-to-dust mass ratio in our Galaxy and in other spirals. From the correlation between the local column density of interstellar hydrogen (atomic + molecular) and the colour excess E(B-V) found by Bohlin et al. (1978) towards a sample of 96 stars (Sect. 2.2) in the Solar neighbourhood, it is straightforward to derive the local gas-to-dust mass ratio. A value of 130 is found. The first determinations based on IRAS data gave higher values than that, thus implying a substantially smaller amount of dust than that derived from the extinction in the local interstellar medium. Sodroski et al. (1987) analysed the Galactic IRAS FIR emission at 60 and 100 $\mu$m. After correcting for zodiacal light and smoothing over discrete sources, the FIR emission from the galactic plane was compared to CO, HI and 5 GHz surveys, to study the similarities between the dust emission and the three main phases of the gas: molecular, atomic and ionised. The longitude profiles at 60 and 100$\mu$m are quite similar to the CO and 5 GHz emission, while the HI one is broader. The latitude distribution suggests a significant contribution from the dust associated with the atomic gas, the 100$\mu$m emission being broader and following the HI warp more closely than the molecular gas does. Temperatures derived from the ratio of the 60 and 100$\mu$m fluxes are quite constant, with a mean value of 24 K (using an emissivity law with $\beta=2$; Sect. 2.3 and 2.4), decreasing by less than 10% from the inner to the outer Galaxy. This was unexpected, if the ISRF is the main contributor to dust heating: the dust temperature should be higher in the centre, where the ISRF is stronger. The constancy of T is ascribed to stochastically heated grains, whose apparent temperature (i.e. the temperature as measured from the flux ratios under the assumption of thermal equilibrium, which does not hold for small grains) depends very weakly on the ISRF. The derived dust masses lead to a gas-to-dust ratio that is twice the value for the Solar neighbourhood in the inner Galaxy, and 6 times higher in the outer Galaxy. The larger gas-to-dust ratio can be explained if a cold dust component from 1.5 to 6 times more massive than the warm dust is introduced. The cold component would contribute only 20% of the IRAS emission. Sodroski et al. (1989) decompose the IRAS Galactic plane emission at 60 and 100 $\mu$m into three emission components, associated with the molecular (H$_2$), neutral atomic (HI) and ionised (HII) phases. For several positions along the galactic plane they derive temperatures (under the assumption of a single temperature component along the line of sight), optical depths and gas-to-dust ratios for each of the three components. The assumption of a single T along the line of sight biases the results towards higher values of T.
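Both biases trace back to the steep $T^{4+\beta}$ dependence of the emitted power (Sect. 2.3). A minimal sketch of how much cold dust can hide behind a little warm dust, with illustrative temperatures:

```python
# At fixed emitted power, the required dust mass scales as T^-(4+beta)
# (Sect. 2.3): a little warm dust can outshine a lot of cold dust.
def hidden_mass_factor(T_cold, T_warm, beta=1.0):
    """Cold dust mass radiating the same power as unit mass of warm dust."""
    return (T_warm / T_cold) ** (4.0 + beta)

print(hidden_mass_factor(15.0, 30.0))             # beta=1: a factor ~32
print(hidden_mass_factor(15.0, 30.0, beta=2.0))   # beta=2: a factor ~64
```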
Using an emissivity law with $\beta$=2, Sodroski et al. (1989) derived T=24 K for the HI component and a warmer T=32 K for the HII component, consistent with OB stars heating the dust, while the molecular component is colder, T=18 K. As in their previous work, the small variations of the temperature with galactic longitude are not compatible with ISRF heating, but rather betray the presence of small, transiently heated grains. The gas-to-dust ratio of the HI component is higher than the Solar neighbourhood value, as already seen in Sodroski et al. (1987). The value for the HII component, instead, is closer to that expected, because the temperature of dust associated with HII regions is higher than for dust in the mean ISRF, and is therefore less affected by the IRAS bias towards high T and by the small-grain emission. The gas-to-dust ratio for the molecular gas has large uncertainties. A similar trend was observed in other spiral galaxies. Devereux & Young (1990a) derive the dust mass from 60 and 100$\mu$m IRAS fluxes, for a sample of 58 spiral galaxies with available HI and H$_2$ data. A good correlation is found between the mass of gas and the mass of dust. The dispersion in the correlation is reduced when only data for the inner disk (R$<$D$_{25}$) are used. Since molecular gas is always concentrated in the central part of the galaxy, this suggests that the outer part of the HI disk does not contribute significantly to the FIR emission. Nevertheless, the derived gas-to-dust ratio is higher than the Galactic one (a mean value of 1080). The observed value can be explained if 90% of the total dust mass has T$\approx$15 K, too cold to be detected by IRAS. It is interesting to note that cold dust at T$<30$ K was not detected even when IRAS fluxes of spiral galaxies were combined with sub-millimetre/millimetre observations (Eales, Wynn-Williams & Duncan 1989, Clements, Andreani & Chase 1993). Clements et al. (1993) explain this with the different beam sizes of the IRAS telescope (FWHM=120″) with respect to the observations at longer wavelengths ($10$-$20^{\prime\prime}$). Directly comparing the fluxes in the two spectral ranges is equivalent to assuming that both emissions come from a region smaller than the smaller beam size. If instead the dust emission is extended, the flux from mm/sub-mm observations will be underestimated. Correcting the mm/sub-mm fluxes for this effect, Clements et al. (1993) retrieve a colder dust temperature of 20 K.

### 1.4 FIR emission from spiral galaxies: cold dust

As anticipated, the picture changed with the availability of observations at wavelengths longer than the range observed by IRAS. Sodroski et al. (1994) repeat the same analysis as in Sodroski et al. (1987; 1989), but using the 140 and 240 $\mu$m observations of the Galactic plane from the Diffuse Infrared Background Experiment (DIRBE) aboard the Cosmic Background Explorer (COBE) satellite. The observations are more sensitive than IRAS to cold dust, and the contamination from small-grain emission is avoided. A mean gas-to-dust ratio of 160 is found, now compatible with the local value derived from extinction, and a mean temperature of 19 K. The longitudinal trend of T suggests that the dust temperature decreases with galactocentric distance, compatible with dust being heated by a general ISRF. The gas-to-dust ratio increases with longitude, suggesting a lower metallicity in the external part of the Galaxy, or the presence of a colder dust component, too cold to be detected even by DIRBE. As in Sodroski et al.
(1989), the FIR emission is then decomposed into the contributions of the three gas phases. The temperature of dust associated with HI is consistent with ISRF heating, and similar to the previous IRAS determinations for the other components. In Sodroski et al. (1997) a similar data set is used to produce a three-dimensional model of the Galactic FIR emission. The properties of the dust component associated with each gas phase are retrieved as a function of Galactocentric distance, for 3 rings in the inner Galaxy and for the outer Galaxy beyond the solar distance, after adopting a rotation curve. Temperatures are still derived by fitting a blackbody to the 140 and 240$\mu$m images, using a $\beta=2$ emissivity. For the HI component, T decreases with the galactocentric distance, as expected for dust heated by the ISRF. Apart from the position of the molecular ring, the main contributor to the FIR is dust associated with HI (55-65% of the Galactic FIR emission). Its temperature is T$\approx$21 K. The gas-to-dust ratio for the component associated with HI increases outward (consistent with the decrease in the metallicity gradient), with a value of 130$\pm$40 at the solar Galactocentric distance. A similar gas-to-dust ratio is retrieved for the other gas phases. Assuming an emissivity law (Sect. 2.3 and 2.4) and using the optical depth at 240$\mu$m for the HI and H$_2$ dust components, they find that the radial distribution of the face-on optical depth of the Galactic disk is quite flat, with $0.5<\tau_B<1$. If seen from a face-on direction, the Galaxy would look transparent, with a total extinction $A_B<$0.2. There is no evidence in the DIRBE data to support the idea that a large fraction of the hidden mass in spirals may be due to unseen cold gas and stars obscured by intervening dust. Reach et al. (1995) fit several models to the Galactic spectrum from 104$\mu$m to 4.5 mm, observed by the Far-Infrared Absolute Spectrophotometer (FIRAS) on board the COBE satellite. The data are best fitted by a two-temperature model, with a warm component at 16 K$<$T$<$23 K and a very cold component at 4 K$<$T$<$7 K. High signal-to-noise spectra in the inner Galactic plane need an intermediate component, with T$\approx$14 K. The warm dust produces the Galactic spectrum between 100 and 300 $\mu$m and is identified with large grains in equilibrium with the ISRF. It is identical to the dust detected by Sodroski et al. (1994). The very cold component gives an important contribution to the spectrum only for $\lambda>650\mu$m and shows very little variation with position in the Galaxy. The optical depth of the cold component correlates well with that of the warm component, and this suggests a Galactic origin. It is difficult to explain this component with dust shielded from the ISRF in the cores of very opaque clouds: the high optical depths required, and the ubiquity of the cold component, would produce a Galactic extinction much higher than that observed. Transiently heated grains would have a very small temperature between one temperature fluctuation and the next. Nevertheless, dust models (Désert et al. 1990) predict a contribution of very small grains to the spectrum in this wavelength range that is smaller than that observed: an increased amount of small grains to match the FIR-Submm spectrum would produce an excess of emission in the NIR, which is not observed.
Other possible explanations require the presence of grains with unusual optical properties: for example fractal grains with a high emissivity; emissivity enhancements, like spectral features of the grains responsible for the warm component; or very large grains, although these should show a dependence on the ISRF, while the cold component has a quite constant T in the inner and outer Galaxy. The third dust component, observed in the inner Galaxy, is associated with the molecular gas, as indicated by the rough correlation between the variations of its brightness and of the CO line. Dust in molecular clouds, shielded from the mean ISRF, would be heated to similar temperatures. It is very weak, contributing only $\sim$2% of the emission at 200$\mu$m. Boulanger et al. (1996) studied the correlation between the FIR emission from dust, as measured by DIRBE and FIRAS, and the atomic gas emission at high galactic latitude. They found a very tight correlation for atomic hydrogen column densities below $5.5\times 10^{20}$ H-atoms cm$^{-2}$. Above this threshold there is an excess of FIR emission, interpreted as the increasing contribution of dust associated with molecular clouds: the FIR emission associated with this dust is observed, while the H$_2$ is not detected in an HI survey. In the limit of zero HI column density there is a residual FIR emission, considered to be due in part to an isotropic cosmic FIR background (Puget et al. 1996) and in part to warm ionised gas uncorrelated with the atomic component. After removing the residual, a mean spectrum for the low column density regions is computed, characterised by a temperature of 17.5$\pm$0.2 K ($\beta=2$). No evidence for the Reach et al. (1995) very cold component is found, thus suggesting that it was an artifact caused by the cosmic FIR background. Lagache et al. (1998) analysed high-latitude observations from DIRBE and FIRAS, after subtracting the cosmic FIR background of Puget et al. (1996). Using the 60$\mu$m DIRBE image as a template of the diffuse Galactic emission, they isolate regions with excess emission at longer wavelengths. These regions at colder temperatures are associated with dense molecular clouds and appear as positive excesses in the FIR/HI correlation of Boulanger et al. (1996). The drop in temperature may be due to the attenuation of the radiation field in a dense cloud, but also to a change in the dust properties with the environment. Some regions have negative excesses, with dust hotter than in the mean interstellar medium because of the proximity of young stars, as in HII regions. The mean FIRAS spectrum for regions without FIR excess can be fitted with a modified blackbody ($\beta=2$) of T=17.5$\pm$2.5 K. Temperature fluctuations can be converted to variations of about 30% around the mean intensity of the radiation field. For regions of sky (3.4% of the celestial sphere) with a significant FIR excess, a two-component spectrum is required, with a warm temperature T=17.8$\pm$1.2 K, consistent with the one derived in the regions without FIR excess, and T=15.0$\pm$0.8 K for the cold dust, associated with the molecular clouds. The coldest temperature detected is 13 K. Again, no evidence for the Reach et al. (1995) very cold component is found, its detection being shown to be an artifact of the unsubtracted cosmic FIR background. Regions without FIR excess are further analysed in Lagache et al. (1999).
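Most of the temperatures quoted in this Section come from the same basic operation: solving a two-band flux ratio for T under an assumed modified blackbody. A minimal sketch for a 140/240$\mu$m pair with $\beta=2$ (the normalisation cancels in the ratio; the 17.5 K test value follows the text):

```python
import numpy as np
from scipy.optimize import brentq

HCK = 1.4388e-2   # hc/k in m K

def mod_bb(lam, T, beta=2.0):
    """Modified blackbody lam^-beta * B_lam(T), up to a constant factor."""
    return lam ** (-(5.0 + beta)) / np.expm1(HCK / (lam * T))

def T_from_ratio(ratio, lam1=140e-6, lam2=240e-6, beta=2.0):
    """Solve F(lam1)/F(lam2) = ratio for the dust temperature."""
    f = lambda T: mod_bb(lam1, T, beta) / mod_bb(lam2, T, beta) - ratio
    return brentq(f, 5.0, 100.0)

# Self-consistency check: build the ratio for T = 17.5 K, then recover T.
r = mod_bb(140e-6, 17.5) / mod_bb(240e-6, 17.5)
print(round(r, 2), round(T_from_ratio(r), 1))   # -> ~3.7, 17.5
```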
Decomposing the FIR spectrum into dust associated with the HI gas and with the Warm Ionised Medium (WIM), Lagache et al. (1999) derive a temperature T=20 K for the dust in the second gas component. Dust properties in the WIM are quite similar to those in the neutral gas (see Sect. 2.4.3) and consistent with those derived by Boulanger et al. (1996). With newly available observations in the FIR at $\lambda>100\mu$m and in the mm/sub-mm range, large amounts of cold dust have finally been observed in other spiral galaxies. Chini & Kruegel (1993) mapped the 1.3 mm emission of three galaxies. They find that the spatial extent of the dust emission is comparable to the optical size of the galaxies. Because of this spatial information, they can safely compare the new observations with IRAS data, without being affected by the beam-size problem described by Clements et al. (1993). A cold dust component with T=17 K is necessary to explain the spectra for $\lambda>100\mu$m. Guélin et al. (1993) observed NGC 891 at 1.3 mm, using a bolometer array at the IRAM telescope. The measured flux is nine times stronger than what would have been expected on the basis of the IRAS fluxes and temperature. After discarding other possible contributions to the observed emission, such as CO lines or free-free emission, they conclude that the bulk of the emission must arise from dust at T$<20$ K. Comparing IRAS fluxes with a 1.3 mm image of NGC 3627, Sievers et al. (1994) conclude that the emission at $\lambda>100\mu$m can be explained by a dust component with T=19.5 K. Chini et al. (1995) mapped 32 non-active spirals at 1.3 mm, observing 7 of them also at 450 and 800$\mu$m. Combining the data with IRAS fluxes, they find that the coldest dust component necessary to fit the spectrum at larger wavelengths has an average temperature in the range 10-20 K. Cold dust is found by Guelin et al. (1995) in M51; they combine fluxes from a 1.2 mm image with FIR observations between 55 and 320 $\mu$m from the Kuiper Airborne Observatory. The millimetre image is smoothed to the poorer resolution of the KAO observations. The spectrum at $\lambda>100\mu$m can be fitted by dust at T=18 K. Neininger et al. (1996) observed NGC 4565 at 1.2 mm. The emission is seen to follow the molecular gas in the inner part of the galaxy and the HI at the periphery. The radial gradient at 1.2 mm is shallower than those previously observed in the range 50-200$\mu$m by IRAS and the KAO. This is a clear signature of dust heated by the ISRF. The dust temperature in the centre of the galaxy is T=18 K. Colder dust temperatures (T=15 K) are observed in a plateau at a distance of 12 kpc from the galactic centre, coincident with the peak of the HI emission. Dumke et al. (1997) observed NGC 5907 at 1.2 mm. The dust emission follows the gas, but is also present at larger distances from the centre, where no CO is observed. Comparing the total flux with IRAS data, a cold dust component with a mean temperature of 18 K is necessary to fit the spectrum, the warmer dust detected by IRAS being unable to explain the observed emission at 1.2 mm. From an analysis of the radial profiles, a slight temperature gradient is inferred, with T dropping from 20 K in the centre to 16 K in the outer disk. As suggested by Chini et al. (1995), temperature estimates more precise than those from IRAS and millimetre fluxes alone are possible when data around the peak of dust emission are available. Data in this spectral range have been made available by the ISOPHOT instrument (Lemke et al.
1996) on board the Infrared Space Observatory (ISO; Kessler et al. 1996). Krügel et al. (1998) observed three quiescent and three active galaxies with ISOPHOT, obtaining data between 60 and 200$\mu$m. In combination with fluxes at 1.3 mm, they found evidence for large amounts of cold dust in the inactive galaxies, with temperatures T=10 K or smaller. Compared to estimates made without ISO data, the mass of dust is increased by a factor of three. Alton et al. (1998a, see also for NGC 6946) present resolved images of a sample of 8 nearby galaxies, observed with ISOPHOT at 200$\mu$m. Apart from considerations on the extent of the dust emission, which are reported in Sect. 1.6, they infer a mean grain temperature between 18 and 20 K, using the IRAS 100$\mu$m data together with their 200$\mu$m fluxes. Temperatures are about 10 K lower than those based on IRAS data only. Consequently, the dust mass is increased by an order of magnitude. Using literature values for the gas masses, they derived a mean gas-to-dust mass ratio of 220, much closer to the Galactic value than those derived by Devereux & Young (1990a). The results hold even when a possible error of 30% in the ISO photometric calibration is taken into account. Similar results come from ISOPHOT observations at 175$\mu$m of M31, the Andromeda galaxy (Haas et al. 1998). A dust temperature of 16$\pm$2 K is fitted to the ISO data and to the DIRBE data at 140 and 240$\mu$m reported by Odenwald et al. (1998). The dust mass is boosted by an order of magnitude with respect to the IRAS value, thus bringing the gas-to-dust ratio to 130, close to the determination of Sodroski et al. (1994). If the dust is assumed to be distributed in a thin slab over the inner 10 kpc, a mean face-on optical depth $\tau_V=0.5$ can be derived. This agrees with the mean values inferred by Xu & Helou (1996b), derived from an energy balance method (Sect. 3.2). Alton et al. (1998b) observed NGC 891 with the sub-mm camera SCUBA at 450 and 850$\mu$m. After smoothing the images to a resolution in common with the 60 and 100 $\mu$m High Resolution (HiRes) IRAS images, they find a cold dust component at 15 K, together with a hot component at 30 K necessary to fit the 60 $\mu$m flux. An approximate distribution of the cold dust is retrieved by fitting the two-temperature model to the spectra of each of 6 radial bins at different distances from the centre of the galaxy. It is found that the cold component contributes increasingly to the dust mass in each bin with galactocentric distance. Odenwald et al. (1998) searched the COBE DIRBE all-sky survey (with a beam size of 0.7°$\times$0.7°) for all the galaxies with locations listed in the IRAS Catalog of Extragalactic Objects and the Center for Astrophysics Catalog of Galaxies. They found 57 galaxies, of which only 7 had available fluxes for $\lambda>100\mu$m. Their spectra could be fitted by a cold component of T=20-25 K, and a possible weak very cold component of T=10-15 K. The very cold component usually contributes only up to 15% of the total dust mass. Only two of the seven galaxies are incompatible with a spectrum made of the cold component alone. Most of the work on spiral galaxies presented in this Section makes use of observations coming from different broad-band instruments to derive flux ratios and dust temperatures. Temperatures would be better determined from spectra of the FIR emission (as in the Reach et al. 1995 work on the FIRAS Galactic spectrum). The Long Wavelength Spectrometer (LWS; Clegg et al.
1996), on board the satellite ISO, covers the spectral range between 40 and 200 $\mu$m. Although a more extended coverage of the long-wavelength range would be desirable, dust temperatures as cold as 15 K can be derived from the shape of the spectrum. Braine & Hughes (1999) observed the centre of NGC 4414, within the 100″ LWS aperture. A temperature T=24.5 K is derived for the cold component. Comparing the LWS data with IRAS fluxes and 1.3 mm images covering a larger area, they infer a gradient in the dust temperature, with colder dust at larger radii. Trewhella et al. (1999) observed five galaxies, positioning the LWS aperture on the centre and at different positions along the galactic disk. Although work is in progress, the spectra of the centres suggest temperatures T=30-35 K, with emission peaking at 100$\mu$m, while for the outer regions the spectra are flat, or still rising out to the maximum LWS wavelength. Spectra of the outer regions are compatible with T$<20$ K. As shown in this Section, the problem of the lack of dust resulting from the use of IRAS data only is solved when observations at $\lambda>$ 100 $\mu$m are available. A cold (T$\approx 20$ K), massive ($\approx$ 90% of the total dust mass) component is necessary to explain the FIR and sub-mm emission, in the Galaxy as well as in other spirals. The measured dust temperatures and the gradients of their radial distributions (observed in the Galaxy and also in other spirals, thanks to new high-resolution, high-sensitivity sub-mm instruments) indicate that the cold component is heated by a diffuse ISRF. Since diffuse dust is the main contributor to extinction, FIR observations around the peak of dust emission (100$\mu$m-300$\mu$m) can be used to assess the opacity of a galaxy (Reach et al. 1995).

### 1.5 Heating mechanisms

While it is accepted that starlight is the major source of dust heating in normal non-active spiral galaxies, there is controversy about which stellar population is the main contributor to the process. If young, high-mass stellar objects are the main contributor, it would be possible to derive the rate of recent star formation directly from the FIR emission. This would be highly desirable, since observations of the FIR emission from spiral galaxies are more readily available than those of other tracers of star formation, like the H$\alpha$ emission (Devereux & Young 1991). If, instead, the diffuse ISRF from an older stellar population is the main source of heating, estimates of star formation rates from the FIR emission would be severely biased. Devereux & Young (1990b) claim that the correlation between the FIR and the molecular gas, as well as the correlation with the non-thermal radio emission, support the first hypothesis. They compare FIR and H$\alpha$ luminosities for a sample of 124 spiral galaxies bright in the IRAS bands. L(H$\alpha$) and L(40-120$\mu$m), the latter derived from the 60 and 100$\mu$m IRAS fluxes, correlate, with a mean L(H$\alpha$)/L(40-120$\mu$m)$\approx 5\times 10^{-3}$. If all the radiation from the star in an HII region is supposed to be absorbed and re-emitted in the FIR, the FIR luminosity is analogous to the bolometric luminosity of the heating star, L$_{\mathrm{bol}}$. The ratio L(H$\alpha$)/L$_{\mathrm{bol}}$ can therefore be computed for any spectral type of ionising star, after assuming standard conditions for the HII regions. The measured L(H$\alpha$)/L(40-120$\mu$m) in their sample of spirals is consistent with stars of spectral type O9 being the main source of dust heating.
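The logic of the comparison can be sketched with rough numbers. The stellar parameters below are not taken from the cited papers; they are order-of-magnitude literature values for an O9 V star, assumed here purely for illustration:

```python
# Order-of-magnitude version of the Devereux & Young argument: for a single
# ionising star whose radiation is fully reprocessed by dust, compare
# L(Halpha)/L_bol with the observed L(Halpha)/L(40-120um) ~ 5e-3.
Q_H = 1.0e48               # ionising photon rate, s^-1 (assumed, O9 V-like)
L_bol = 5.5e4 * 3.83e26    # bolometric luminosity, W (assumed ~5.5e4 L_sun)
E_Ha = 3.03e-19            # energy of one Halpha photon, J

L_Ha = 0.45 * Q_H * E_Ha   # case B recombination: ~0.45 Halpha per ionisation
print(L_Ha / L_bol)        # -> ~6e-3, of the order of the observed ratio
```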
In two subsequent works, Devereux & Young analyse the FIR and H$\alpha$ emission of two galaxies: M51 (Devereux & Young 1992) and NGC 6946 (Devereux & Young 1993). A good correlation is found in the two objects between the H$\alpha$ emission, the H$_2$ column density and the FIR emission (at 170$\mu$m for M51 and 160$\mu$m for NGC 6946), while the atomic hydrogen presents a central depression. Temperatures derived from the 60 and 100 $\mu$m IRAS fluxes are quite constant all over each galaxy (32-33 K and 27-28 K, for $\beta=1$ and 2, respectively). A total FIR luminosity is obtained from the IRAS fluxes and from the flux in the FIR image at $\lambda$=160-170$\mu$m (they derive the temperature of the warm component from the 60 and 100$\mu$m fluxes and assume that 90% of the dust is cold, at T=14-16 K; Devereux & Young 1990a). The FIR luminosity is dominated by the warm component. The ratio between the FIR and H$\alpha$ luminosities at several locations within the galaxies is consistent with O9-B0 stars being responsible for the heating of the dust. Devereux & Young also claim that the absence of a radial gradient in the temperature can be explained by the fact that the typical temperature of dust in HII regions is not expected to depend on the position in the galaxy. Sodroski et al. (1989), however, show that the temperature derived from the ratio of the 60 and 100 $\mu$m IRAS fluxes on the Galactic plane is constant even when only the FIR emission associated with the neutral gas is considered. This suggests that stochastically heated grains provide a better explanation for the shallow temperature gradient. For the galaxy M51, Rand et al. (1992) come to radically different conclusions about the source of dust heating. They found that the infrared excess, i.e. the ratio between the FIR flux and the flux of Ly$\alpha$ photons (derived from the free-free continuum at 21 cm), is higher than that derived in Galactic HII regions, indicating a FIR emission not due solely to dust heated by photons originating in massive star-forming regions. Furthermore, they found that the arm-interarm contrast in the IRAS images is always lower than in the H$\alpha$ images convolved to the same resolution. This second test also suggests that the FIR emission does not arise only from dust in star-forming regions. Contrasting views are likewise offered by the two works on the Andromeda galaxy by Devereux et al. (1994) and Xu & Helou (1996a). In Devereux et al. (1994) the same arguments as in Devereux & Young (1992; 1993) are brought forward in favour of the FIR luminosity originating mainly from dust in star-forming regions, i.e. the close resemblance of the H$\alpha$ and FIR images and a ratio between the FIR and H$\alpha$ luminosities similar to that observed in Galactic HII regions. The star-forming ring is supposed to contribute 70% of the FIR radiation. Xu & Helou (1996a) measure the ratio between the IRAS 60$\mu$m and H$\alpha$ fluxes of bright FIR-resolved sources in M31. Using the total H$\alpha$ luminosity and the Désert et al. (1990) dust model, they extrapolate the fraction of the total FIR luminosity that is associated with HII regions and star formation. A value of only 30$\pm$14% is found. Walterbos & Greenawalt (1996) model the FIR emission of spiral galaxies under the assumption that dust is heated by the ISRF.
The amount of dust is estimated from the HI column density, assuming a constant ratio between the two, while the intensity of the ISRF is derived from the blue profile, after correcting for the inclination and the internal extinction by dust. The ISRF model is then scaled to the Galactic local value, and the 60 and 100$\mu$m fluxes are computed using the values tabulated by Désert et al. (1990) for their dust model heated by fractions or multiples of the Galactic local ISRF. For a sample of 20 galaxies, the modelled FIR fluxes can account for, on average, half of the observed fluxes. They conclude that the role of the ISRF in heating the dust should not be ignored. From their decomposition of the Galactic FIR emission, Sodroski et al. (1997) conclude that the main contributor is dust associated with the atomic gas (55-65%). The dust associated with the molecular phase contributes 30-35% of the FIR emission. It is believed that the FIR emission within the H$_2$ gas comes in part from dust heated by the ISRF and in part from dust heated by OB stars. The HII component contributes only 5-10% of the total FIR. They conclude that, since 55-85% of the FIR emission is not associated with HII regions or with OB-star heating within the H$_2$ component, it is not safe to use the FIR to derive star-formation rates. Among the papers presented in this Section, the main evidence in favour of dust heating by high-mass stars is the correlation between the FIR and H$\alpha$ emissions. This may simply be a reflection of the local density of the interstellar medium, both star formation (and H$\alpha$ emission) and FIR emission being stronger in regions of high density. The problem could be solved by decomposing the FIR emission into the different sources of heating. Because of the lack of high-resolution observations, this is possible only for large objects. Observations of the Galaxy and M31 favour the hypothesis of a FIR emission arising mainly from ISRF-heated dust. It is then interesting to note that the works in favour of the high-mass star hypothesis presented here are mainly based on IRAS observations. As shown in Section 1.4, dust emitting at $\lambda>100\mu$m is likely to be heated by a diffuse ISRF.

### 1.6 Evidence for extended dust distributions

In recent years new evidence has emerged showing that galactic disks extend beyond the dimensions inferred from the luminous stellar distribution. Molecular clouds and associated HII regions have been observed in our Galaxy at distances between 18 and 28 kpc, more than twice the solar galactocentric distance (De Geus et al. 1993, Digel et al. 1994). HII regions beyond the optical radius ($R_{25}$, the radius corresponding to the 25 mag arcsec$^{-2}$ isophote in the B-band) have been observed in deep H$\alpha$ images of three spiral galaxies (Ferguson et al. 1998a; b). Because of the tight correlation between colour excess and gas column density (Sect. 2.2), dust might be expected to be present at large distances as well. A few studies of extinction in spiral galaxies suggest larger exponential scalelengths for the dust distribution with respect to the stellar one. Peletier et al. (1995) analyse the ratio between the B and K radial scalelengths for a sample of 37 galaxies. Assuming an intrinsic ratio of 1.2, due to stellar population gradients, a larger observed ratio can be attributed to dust extinction, which makes the B profile flatter while leaving the K scalelength substantially unaltered, because of the small opacity at longer wavelengths.
Comparing the observed ratios, 1.3 for face-on galaxies and 1.7 for edge-on ones, with the results of an absorption (without scattering) model for exponential dust and stellar distributions, they find consistent results for exponential dust scalelengths larger than twice the stellar one. A similar model is compared to J- and V-band images of edge-on spirals by Ohta & Kodaira (1995): for one of three galaxies (NGC 4565), the dust scalelength is found to be twice the stellar. A more complete radiative transfer model, including scattering and spheroidal distributions for the stellar bulges, is fitted to the surface photometry of edge-on galaxies by Xilouris et al. (1999). For a sample of six objects observed in several optical bands (two of them also in the Near-Infrared), they derive a mean dust/star scalelength ratio of $\approx$1.5. A close correlation has been found between the V-I colour of background galaxies seen in projection along the disk of M31 and the local hydrogen column density, in a field at a distance of 23 kpc from the centre, outside the optical radius of the galaxy (Lequeux & Guelin 1996). The result implies a substantial dust reddening ($A_V=0.4$ mag). The colour-magnitude diagram of the galaxy's stars in the field reveals blue stars, and therefore massive star formation. FIR observations of dust emission confirm the presence of extended dust distributions. Davies et al. (1997) use observations at 140$\mu$m and 240$\mu$m from the DIRBE instrument aboard COBE to model the dust emission. Adopting a double exponential model for the dust and a dust temperature spatial distribution inferred from observations, they produce maps of FIR emission and temperature as a function of Galactic latitude and longitude. The observed emission and temperature can be matched by the model only for a dust disk with a radial scalelength 1.5 times the stellar one. The vertical scalelength of the dust is also found to be larger than the stellar one: the Davies et al. model suggests a dust layer twice as thick as the stellar disk. The most striking evidence for a large dust distribution comes from ISO observations at 200$\mu$m (Davies et al. 1999b, Alton et al. 1998a). Alton et al. (1998a) compared the optical (B-band) scalelengths and the FIR scalelengths at 60 and 100$\mu$m, derived from high-resolution HiRes IRAS images, with 200$\mu$m images of seven resolved spiral galaxies, observed by the instrument ISOPHOT aboard ISO. To be sure to compare emission coming from the same galactic structures, all the images were smoothed to the poorer ISO resolution (FWHM=117″). It is found that the IRAS scalelengths are generally smaller than the optical ones. On the contrary, the 200$\mu$m ISO profiles are shallower, with an exponential scalelength $\approx$1.3 times the one measured in B. The dust temperature in the centre of a galaxy is higher, because of the stronger ISRF. Therefore, for a given dust scalelength, the scalelength of the FIR emission should be smaller, because of the steep dependence of the emission on the temperature (Sect. 2.3). An emission scalelength larger than the optical one can thus be produced only if the dust spatial distribution is more extended than the stellar one (as traced by the B-band emission). A proper analysis would require a detailed modelling of the heating by the ISRF at every position along the dust distribution. This is actually the purpose of this Thesis.
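The scalelength argument can be made concrete with a toy calculation: optically thin FIR emission from an exponential dust disk, $I(r)\propto \tau(r)\,B_\lambda(T(r))$ (cf. Sect. 2.3), with an assumed outward-falling temperature. All numbers below are illustrative:

```python
import numpy as np

# Toy version of the scalelength argument: optically thin 200um emission from
# an exponential dust disk with a temperature falling outward.
HCK = 1.4388e-2                      # hc/k in m K
lam = 200e-6                         # observing wavelength in m
r = np.linspace(0.1, 4.0, 200)       # radius, in units of the B-band scalelength
T = 20.0 - 1.25 * r                  # assumed: 20 K in the centre, 15 K at r = 4
B = 1.0 / (lam**5 * np.expm1(HCK / (lam * T)))   # Planck function, arbitrary units

for h_dust in (1.0, 1.5, 2.0, 2.5):  # dust scalelength, in stellar scalelengths
    I = np.exp(-r / h_dust) * B
    h_fir = -1.0 / np.polyfit(r, np.log(I), 1)[0]  # fitted emission scalelength
    print(h_dust, round(h_fir, 2))
```

With this temperature run the emission scalelength always comes out smaller than the dust scalelength, and reproducing an observed $h_{\mathrm{FIR}}\approx 1.3$ (in stellar units) requires a dust disk roughly twice as extended as the stellar one.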
A confirmation of the large extent of dust disks in spiral galaxies comes from the work of Nelson, Zaritsky & Cutri (1998). They selected isolated spiral galaxies using the 100$\mu$m IRAS Sky Survey. Three samples are defined, classifying the objects on the basis of their optical radius (10′-30′, 5′-10′, 2.5′-5′). For each sample a coadded image was produced, after rescaling each object, rotating it according to its position angle, subtracting the sky and normalising the flux. An image of the PSF is constructed by coadding images of two control samples of stars and unresolved galaxies. After subtracting the PSF, they find residuals of extended 100 $\mu$m emission in the two samples of galaxies of larger dimensions. Using literature data for the stellar disks, they derive a mean 100 $\mu$m scalelength equivalent to, or up to a factor of 2 smaller than, the stellar scalelength. A simple model shows that this implies a dust spatial distribution less concentrated than the stellar one. As well as along the disk, there are suggestions of extended dust distributions above the galactic plane. Zaritsky (1994) reports a preliminary detection of a dusty halo, through the colour excess of background objects in fields close to two galaxies, with respect to fields at larger distances. However, the measured colour excess is only twice as large as the rms arising from the intrinsic dispersion of the background object colours. A large number of objects is necessary to produce a statistically convincing result. Lequeux et al. (1995) and Lequeux et al. (1996) apply the same technique to two galaxies, but without producing statistically robust results, the largest colour difference being always smaller than 3$\sigma$. The only positive detection of a colour excess is the one reported in the already cited work of Lequeux & Guelin (1996), but for a field along the galactic plane and thanks to a corroborating correlation with the gas column density. As I have shown in this Section, detections of star formation at large distances from the galactic centre, models of dust extinction and FIR emission, and observations of FIR dust scalelengths all point towards the existence of distributions of dust in spiral galaxies more extended than the stellar disk. If this is the case, the study of the distant universe might be severely biased by dust extinction. However, the only direct detection of extended dust through the extinction of background objects is not statistically robust.

### 1.7 Plan of the Thesis

The work presented in this Thesis consists of a self-consistent model for the radiative transfer and the dust emission in spiral galaxies. I will adopt the energy balance method (Sect. 3.2), within which the FIR emission is directly compared to the stellar emission to derive the extinction. The simulation also gives the dust temperature distribution and the emission at different FIR wavelengths. Results of the models will be compared to observations, to address some of the topics presented in this Introduction, mainly: i) Are spiral galaxies optically thin or thick? ii) Can the SED of the FIR emission be explained by advocating only the ISRF as the source of dust heating? iii) What kind of extended distributions of dust are needed to explain the observed spatial distribution of the FIR emission? Apart from Chapter 1, the present Introduction, this Thesis is organised in the following Chapters:

1. *Dust properties: extinction and emission.* Basic theory of dust extinction and emission is presented, especially those formulae that are used elsewhere in the Thesis.
The adopted dust parameters are presented, together with an original determination of the dust emissivity, based on Galactic extinction and FIR emission. The fraction of absorbed energy re-emitted in the Mid-Infrared is estimated.

2. *The radiative transfer and dust emission model.* First I review the radiative transfer and FIR emission models available in the literature. I then describe the adopted distributions for the stars and the dust in the model of the galaxy. Finally, the radiative transfer and dust emission model developed for this Thesis is presented. The adopted procedure is described with the help of example program outputs.

3. *Modelling NGC 6946.* The model is finally applied to the spiral galaxy NGC 6946. Optical and FIR observations from the literature are used to derive the stellar and dust Spectral Energy Distributions (SEDs). Observations of NGC 6946, carried out at the James Clerk Maxwell Telescope (JCMT) using the Sub-millimetre Common User Bolometer Array (SCUBA) in June 1998, are presented here. Several models for the dust distribution are discussed, in the quest for a match with the observed FIR SED and spatial distribution of emission.

4. *Conclusions.*

During the period of my PhD I have also worked on two other projects, which are presented in the Appendices: Appendix A, *SCUBA imaging of the NGC 7331 dust ring*, is dedicated to sub-mm observations of the spiral galaxy NGC 7331, which I carried out in October 1997 at the JCMT using SCUBA. A dust ring, also detected in an optical-NIR colour image, has been revealed. In Appendix B, *Search for dust in the halos of spiral galaxies*, I report on the attempt to measure the colour differences of objects in the background of two nearby edge-on spiral galaxies, following the Zaritsky (1994) technique, as outlined in Sect. 1.6. Unfortunately both the observing runs I attended at the Isaac Newton Telescope (November 1996 and December 1997 - January 1998) were undermined by bad weather, and it was not possible to detect a sufficiently large number of objects to produce sound results. I did detect a faint extended luminous halo around the galaxy NGC 891 (Sect. B), similar to that observed in NGC 5907 (Sackett et al. 1994, Lequeux et al. 1996).

## Chapter 2 Dust properties: extinction and emission

In this chapter I give a brief description of dust properties: the main aim is to introduce the definitions and the parameters used later on in the radiative transfer model. Particular emphasis is given to the derivation of the dust emissivity presented in Sect. 2.4, an original contribution of this work.

### 2.1 Dust extinction

If a light ray travels through a length $ds$ in a dusty medium with a grain number density $n_\mathrm{d}$, its specific intensity $I_\lambda$ (the energy emitted per unit time, area, wavelength band and solid angle, also called surface brightness) is attenuated according to $$dI_\lambda =-n_\mathrm{d}\sigma _{\mathrm{ext}}(\lambda )I_\lambda ds,$$ (2.1) where $\sigma _{\mathrm{ext}}(\lambda )$ is the extinction cross section of dust for radiation at wavelength $\lambda$. The extinction cross section is usually written as $$\sigma _{\mathrm{ext}}(\lambda )=\pi a^2Q_{\mathrm{ext}}(\lambda ),$$ (2.2) the product between the geometric cross section of a grain of radius $a$ and the *extinction efficiency* $Q_{\mathrm{ext}}(\lambda )$.
Dust extinction involves two different processes, absorption and scattering: in the former, photons are actually absorbed by the dust grain and their energy goes into its heating (and into the related emission, Sect. 2.3); in the latter, radiation is re-directed along directions different from the incident one. Therefore, the extinction efficiency can be split into two terms describing the contributions from each of these two processes: $$Q_{\mathrm{ext}}(\lambda )=Q_{\mathrm{abs}}(\lambda )+Q_{\mathrm{sca}}(\lambda ).$$ (2.3) The fraction of the extinguished radiation that is diffused by scattering is given by the albedo $$\omega =\frac{Q_{\mathrm{sca}}(\lambda )}{Q_{\mathrm{ext}}(\lambda )}.$$ (2.4) The angular distribution of the scattered radiation is described by the phase function $\varphi$, which, for spherical grains, is a function only of the scattering angle $\theta$, i.e. the angle between the incident and the scattered direction. The directionality of the phase function is characterised by the asymmetry parameter $g$, the mean of $\mathrm{cos}\theta$ over $\varphi$, $$g=\frac{\int_0^\pi \varphi (\theta )\mathrm{cos}\theta \mathrm{sin}\theta d\theta }{\int_0^\pi \varphi (\theta )\mathrm{sin}\theta d\theta }.$$ (2.5) A general solution for the extinction efficiency, albedo and phase function can in principle be found for any grain shape and $\lambda$, if the optical properties (i.e. the refractive index) of the material are known (Van De Hulst 1957, Bohren & Huffman 1983). The solution for spherical grains goes under the name of Mie Theory (Mie 1908). For a point source, the solution of the radiative transfer equation (Eqn. 2.1) is $$I_\lambda =I_\lambda ^0e^{-\tau _\lambda },$$ (2.6) where $I_\lambda ^0$ is the intrinsic intensity of the source and $\tau _\lambda$ the optical depth of the dusty medium between the source and the observer (along the line of sight), given by $$\tau _\lambda =\int_{\mathrm{l}.\mathrm{o}.\mathrm{s}.}n_\mathrm{d}\sigma _{\mathrm{ext}}(\lambda )ds.$$ (2.7) Under the assumption that grain properties do not change along the line of sight, $$\tau _\lambda =N_\mathrm{d}\pi a^2Q_{\mathrm{ext}}(\lambda ),$$ (2.8) with $N_\mathrm{d}$ the dust column density. A common parameter used to describe the attenuation properties of dust is the *extinction* $A_\lambda$, $$A_\lambda =-2.5\mathrm{log}_{10}\frac{I_\lambda }{I_\lambda ^0}=1.086\tau _\lambda .$$ (2.9) Eqns. (2.6) and (2.9) are only valid under the assumption of a point source hidden by a layer of dust. In the case of extended sources with intermixed dust, like the common geometries used to describe galaxies, radiation can also be scattered into the line of sight, thus adding a positive term to Eqn. (2.1). In this case, the simple relation between optical depth and intrinsic and observed radiation as in (2.9) does not hold. It is difficult to solve the radiative transfer equation in spiral galaxies analytically when scattering is included, unless the geometry is simplified (Bruzual et al. 1988) or approximations are made in the treatment of the scattering (Byun et al. 1994). An exact treatment of scattering is possible, in principle, for any geometry by using Monte Carlo methods (Witt et al. 1992, Bianchi et al. 1996; see also Sect. 3.1).

### 2.2 Assumed parameters for extinction

It is possible to measure the *extinction law*, i.e. the variation of extinction with the wavelength $\lambda$, by comparing the extinction towards stars of the same spectral type. In Fig.
(2.3) (data points) the mean Galactic extinction law from Whittet (1992) is plotted in the form of $A_\lambda$ normalised to $A_V$, versus $1/\lambda$.<sup>1</sup>

<sup>1</sup> Usually the *colour excess* $E(\lambda -V)=A_\lambda -A_V$ is plotted, normalised to $E(B-V)$. For $\lambda \to \infty$ the normalised colour excess tends to the value $-R$, with $R=A_V/E(B-V)$ (for the mean Galactic extinction law $R=3.1$). The ratio $A_\lambda /A_V$ can be derived from the normalised colour excess using the measured value of $R$. Alternatively, the ratio between optical depth and hydrogen column density is plotted. Bohlin et al. (1978) found a correlation between the column density of interstellar hydrogen (atomic + molecular), as measured in UV absorption spectra towards a sample of 96 stars, and the colour excess E(B-V), $$N(\mathrm{H})=5.8\times 10^{21}\,E(B-V)\ \mathrm{H\ atoms\ cm^{-2}}.$$ (2.10) Using the mean Galactic extinction law, Eqn. (2.10) gives $$A_V/1.086=\tau _V=4.9\times 10^{-22}\,N(\mathrm{H}).$$ (2.11) The ratio $A_\lambda /A_V$ can then be converted to $\tau _\lambda /N(\mathrm{H})$. The ratio $N(\mathrm{H})/N_\mathrm{d}$ can easily be found from Eqn. 2.11 and Eqn. 2.8, and hence the gas-to-dust mass ratio, if $Q_{\mathrm{ext}}$, the grain dimension $a$ and the density $\rho$ are known. Assuming $Q_{\mathrm{ext}}=1.5$, $a=0.1\mu$m and $\rho =3$ g cm$^{-3}$ (Hildebrand 1983), the local gas-to-dust mass ratio is $\approx$130.

The mean Galactic extinction law is measured along several lines of sight through the Galaxy. Its main characteristics are the linear (versus $1/\lambda$) growth in the optical, a bump at 2175Å and a steeper rise at shorter wavelengths in the far-UV (a more detailed description can be found in Sect. 2.5). There are local variations of the mean extinction law, mainly consisting of changes in the strength of the 2175Å bump and in the far-UV slope. As for external galaxies, extinction laws have been directly measured only towards stars of the Magellanic Clouds. In the LMC the curve is quite similar to the Galactic one, apart from the star-forming region 30 Doradus, where the bump is weaker and the far-UV rise steeper. The SMC extinction law is characterised by the absence of the bump and a steep far-UV rise (Whittet 1992, Gordon et al. 1997, and references therein). The lack of the 2175Å bump has also been noted in starburst galaxies (Calzetti et al. 1994, Calzetti 1997). The weakening of the bump can be due to the differential effects of scattering and absorption (Cimatti et al. 1997), but Gordon et al. (1997) argue that its absence in starbursts reflects a real absence of bump-carrier grains. The bump has been observed in high-redshift Mg II absorbers (Malhotra 1997). Extinction laws in the optical show smaller differences. Applying a radiative transfer model to seven edge-on galaxies, Xilouris et al. (1999) find an extinction law similar to that of the Galaxy longward of the U-band. Since I am interested in modelling normal galaxies, rather than starbursts, in this work I use the Galactic extinction law, as given by Gordon et al. (1997). The scattering properties of dust, i.e. albedo, asymmetry parameter and phase function, can in principle be derived from dust extinction models (see, for example, Draine & Lee 1984, Bianchi et al. 1996).
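Coming back to the footnote above, its gas-to-dust figure follows in a few lines from Eqns. (2.8) and (2.11) with the Hildebrand (1983) grain parameters; a minimal sketch (hydrogen mass only, which is apparently what the quoted $\approx$130 assumes):

```python
import numpy as np

# Local gas-to-dust mass ratio from tau_V = 4.9e-22 N(H) (Eqn. 2.11) and
# tau_V = N_d * pi a^2 Q_ext (Eqn. 2.8), for Hildebrand (1983) grains.
Q_ext, a, rho = 1.5, 1.0e-5, 3.0    # efficiency, radius (cm), density (g cm^-3)
m_H = 1.6726e-24                    # mass of a hydrogen atom (g)

H_per_grain = np.pi * a**2 * Q_ext / 4.9e-22     # N(H)/N_d
grain_mass = (4.0 / 3.0) * np.pi * a**3 * rho    # mass of one grain (g)
print(H_per_grain * m_H / grain_mass)            # -> ~130
```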
For the sake of simplicity, and because of the uncertainties in current models, I prefer to use the empirical determinations of the albedo and asymmetry parameter for Milky Way dust in reflection nebulae given by Gordon et al. (1997). Witt & Gordon (1996) point out that the presence of clumps in the dust may bias the derived albedos towards lower values, since observational data are always analysed in the framework of homogeneous radiative transfer models. For the phase function, I use the analytical expression derived by Henyey & Greenstein (1941), relating the angular distribution of the scattering angle $\theta$ to the asymmetry parameter $g$: $$\varphi (\theta )=\frac{1}{2}\frac{1-g^2}{(1+g^2-2g\mathrm{cos}\theta )^{3/2}}.$$ (2.12) Values for the extinction law, albedo and asymmetry parameter used in this thesis are given in Tab. (2.1), for the bands defined in Sect. 3.8. Values are taken from Gordon et al. (1997), apart from the bands EUV and LMN. The extinction law for these two bands has been taken from Whittet (1992) and Rieke & Lebofsky (1985), respectively. The albedo and asymmetry parameter for the EUV band are from the Witt et al. (1993) data at 1000Å, while for the LMN band they have been assumed equal to those in the K band. The directionality of scattering for a Henyey-Greenstein phase function is shown in Fig. (2.1), for a few values of the asymmetry parameter $g$ as in Tab. (2.1) and for the isotropic case ($g$=0).

### 2.3 Dust emission

The energy absorbed from photons heats up dust grains and is subsequently re-emitted in the infrared, preferentially at $\lambda >10\mu$m. If the energy of each photon impinging on a dust grain is small compared to the internal energy of the grain itself, the radiation is emitted at thermodynamic equilibrium. This is not the case for small grains absorbing high-energy photons: I discuss this in Sect. 2.5. In this Section I will consider only processes at thermodynamic equilibrium. The power emitted by a dust grain at a temperature $T_\mathrm{d}$ can be expressed as $$W_{\mathrm{em}}=4\pi a^2\int_0^{\infty}Q_{\mathrm{em}}(\lambda )\pi B_\lambda (T_\mathrm{d})d\lambda ,$$ (2.13) where $B_\lambda (T_\mathrm{d})$ is the Planck function at the wavelength $\lambda$ and $Q_{\mathrm{em}}(\lambda )$ is the emission efficiency (or emissivity). By Kirchhoff's law, $Q_{\mathrm{em}}(\lambda )=Q_{\mathrm{abs}}(\lambda )$. At long wavelengths scattering efficiencies are negligible compared to absorption (e.g. for Mie theory $Q_{\mathrm{abs}}\propto \lambda ^{-1}$ while $Q_{\mathrm{sca}}(\lambda )\propto \lambda ^{-4}$) and therefore $Q_{\mathrm{em}}(\lambda )\approx Q_{\mathrm{ext}}(\lambda )$. The emissivity in the infrared, $Q_{\mathrm{em}}(\lambda )$, is usually described by a function of the form $$Q_{\mathrm{em}}(\lambda )=Q_{\mathrm{em}}(\lambda _0)\left(\frac{\lambda _0}{\lambda }\right)^\beta$$ (2.14) where $Q_{\mathrm{em}}(\lambda _0)$ is the value of the emissivity at the reference wavelength $\lambda _0$, and $\beta$ is the wavelength dependence index. A more detailed description, together with a new derivation of $Q_{\mathrm{em}}(\lambda )$, is presented in the next Section. Substituting Eqn. (2.14) into Eqn. (2.13), the emitted power is $$W_{\mathrm{em}}=4\pi a^2Q_{\mathrm{em}}(\lambda _0)K(\beta )T_\mathrm{d}^{4+\beta }.$$ (2.15) The function $K(\beta )$ is $$K(\beta )=2\pi \lambda _0^\beta \frac{k^{4+\beta }}{h^{3+\beta }c^{2+\beta }}\int_0^{\infty}\frac{x^{3+\beta }dx}{e^x-1}.$$ (2.16) The integral in Eqn.
(2.16) results from the substitution $x=hc/k\lambda T_\mathrm{d}$ and has the analytical solution $\mathrm{\Gamma }(4+\beta )\zeta (4+\beta )$, with $\zeta$ the Riemann function. As is evident from (2.15), the radiated power depends strongly on the temperature of the dust; colder grains thus radiate at a much smaller rate. By analogy with the Wien displacement law, the peak of the emitted radiation occurs at $$\lambda _{\mathrm{peak}}\approx \frac{3000}{T_\mathrm{d}}\frac{5}{5+\beta }\mu \mathrm{m}.$$ (2.17) The rate of absorbed energy can be computed if both the radiation field in which the grain is immersed and an expression for the absorption efficiency are known. For a dust grain in the general interstellar radiation field, $$W_{\mathrm{abs}}=4\pi a^2\int_0^{\infty}Q_{\mathrm{abs}}(\lambda )\overline{w}_{\star}\pi B_\lambda (\overline{T}_{\star})d\lambda ,$$ (2.18) where $\overline{T}_{\star}$ and $\overline{w}_{\star}$ are the mean temperature and dilution factor of the general interstellar radiation field. These parameters define the interstellar radiation field as the field produced by a collection of stars with effective temperature $\overline{T}_{\star}$ that cover a fraction $\overline{w}_{\star}$ of the celestial sphere. A typical interstellar radiation field has $\overline{T}_{\star}=10000$ K and $\overline{w}_{\star}\approx 10^{-14}$ (Disney et al. 1989). A crude approximation is to extend the validity of the expression for $Q_{\mathrm{abs}}(\lambda )=Q_{\mathrm{em}}(\lambda )$ as in Eqn. (2.14) to the wavelength range where the interstellar radiation field spectrum peaks (i.e. the optical and UV). This gives $$W_{\mathrm{abs}}=4\pi a^2\overline{w}_{\star}Q_{\mathrm{em}}(\lambda _0)K(\beta )\overline{T}_{\star}^{4+\beta }.$$ (2.19) At thermodynamic equilibrium the principle of detailed balance imposes that the rate of absorbed energy equal the rate of emitted energy: equating Eqn. (2.19) and Eqn. (2.15), the dust temperature is $$T_\mathrm{d}=\overline{w}_{\star}^{1/(4+\beta )}\overline{T}_{\star}.$$ (2.20) Using $\beta =1$ (a valid approximation for $Q_{\mathrm{abs}}(\lambda )$ in the optical and in the infrared shortward of 100 $\mu$m; see next Section), a temperature $T_\mathrm{d}\approx 16$ K is obtained (Van De Hulst 1946). Following Eqn. (2.17), it can be seen that the emission of dust at this temperature peaks at $\lambda \approx 170\mu \mathrm{m}$, i.e. in the Far-Infrared. Similar temperatures are found using more complex models for $Q_{\mathrm{abs}}(\lambda )$ (Draine & Lee 1984) and from observations of the Galaxy (Sect. 2.4). If the dust grain lies in a region denser in stars than the mean interstellar field, the dilution factor will be larger, resulting in an increased dust temperature and in an emission peaking at shorter $\lambda$. Eventually, for circumstellar dust, the temperature could rise to the sublimation temperature of the grain, causing its destruction. Dust grains can also be heated by collisions with gas atoms: this process is normally negligible and the dust temperature is almost entirely determined by radiative processes (Spitzer 1978).
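Eqns. (2.20) and (2.17) can be re-derived numerically; a minimal sketch with the values quoted in the text:

```python
# Equilibrium grain temperature in the mean ISRF (Eqn. 2.20) and the
# corresponding peak of emission (Eqn. 2.17), with the values in the text.
beta = 1.0
T_star, w_star = 1.0e4, 1.0e-14     # ISRF temperature (K) and dilution factor

T_d = w_star ** (1.0 / (4.0 + beta)) * T_star
lam_peak = (3000.0 / T_d) * 5.0 / (5.0 + beta)   # microns, from Eqn. (2.17)

print(round(T_d, 1), round(lam_peak))   # -> ~16 K, peaking near 160 um (FIR)
```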
If a region of space is occupied by grains of radius $`a`$, number density $`n_\mathrm{d}`$ and thermal equilibrium temperature $`T_\mathrm{d}`$, the emission coefficient $`j_\lambda `$ (energy emitted per unit time, volume, solid angle and wavelength) can be written as $$j_\lambda =n_\mathrm{d}\pi a^2Q_{\mathrm{em}}(\lambda )B_\lambda (T_\mathrm{d}).$$ (2.21) An external observer would see a specific intensity $`I_\lambda `$ that, assuming the region is transparent ($`\tau _\lambda \ll 1`$) to radiation in the wavelength range of dust emission, can be obtained by integrating along the line of sight $`I_\lambda `$ $`=`$ $`\int _{\mathrm{l}.\mathrm{o}.\mathrm{s}.}j_\lambda \,ds`$ (2.22) $`=`$ $`N_\mathrm{d}\pi a^2Q_{\mathrm{em}}(\lambda )B_\lambda (T_\mathrm{d}).`$ As already said, in the wavelength range of dust emission scattering is negligible and $`Q_{\mathrm{ext}}(\lambda )\approx Q_{\mathrm{abs}}(\lambda )=Q_{\mathrm{em}}(\lambda )`$. Therefore, using Eqn. (2.8), Eqn. (2.22) can be rewritten as $$I_\lambda =\tau _\lambda B_\lambda (T_\mathrm{d}).$$ (2.23) I will make use of Eqn. (2.23) in the next Section.
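In sketch form (a hypothetical illustration with constants and values of my own choosing), Eqn. (2.23) says that the emergent intensity of an optically thin dust layer is just the optical depth times the Planck function:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_lambda(lam, T):
    """Planck function B_lambda(T) in W m^-3 sr^-1, with lam in metres."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

# Eqn. (2.23): emergent intensity of an optically thin dust layer
lam, T_d, tau = 100e-6, 18.0, 1e-4        # illustrative values only
I_lam = tau * planck_lambda(lam, T_d)
print(f"{I_lam:.3e} W m^-3 sr^-1")
```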
### 2.4 A new determination of dust emissivity

I have derived the dust emissivity in the Far-Infrared (FIR) using data available in the literature. I use two wavelength dependences derived from spectra of Galactic FIR emission (Reach et al. 1995). A value for the emissivity, normalised to the extinction efficiency in the V band, has been retrieved from maps of Galactic FIR emission, dust temperature and extinction (Schlegel et al. 1998). The results presented here are similar to other measurements in the Galaxy, but only marginally consistent with the widely quoted values of Hildebrand (1983), derived from one reflection nebula. The discrepancy with measurements on other reflection nebulae (Casey 1991) is larger, and suggests a different grain composition in these environments with respect to the diffuse interstellar medium. I measure dust masses for a sample of six spiral galaxies with FIR observations and obtain gas-to-dust ratios close to the Galactic value.

#### 2.4.1 Introduction

Assessing the quantity of dust in spiral galaxies is of primary importance both in understanding the intrinsic properties of galaxies themselves and in interpreting observations of the distant universe: large quantities of dust can modify the optical appearance of galactic structures like spiral arms (Trewhella 1998a); if the distribution of dust is extended, a large fraction of the radiation from the distant universe can be blocked (Ostriker & Heisler 1984); star formation as determined from UV fluxes could be severely underestimated, thus altering our knowledge of the star formation history of the universe (Hughes et al. 1998a). Dust mass can be retrieved from extinction or from emission in the FIR. In the former case, information about the relative star-dust geometry is needed and the method can only be applied to nearby edge-on galaxies, where the dust distribution can be inferred from extinction features (Xilouris et al. 1997; 1998). In the latter case there are no such limitations, and the wealth of data in the FIR and Sub-mm from instruments like the Sub-mm camera SCUBA and the satellites ISO and COBE can be used to measure dust mass. Unfortunately, the determination of dust mass is entangled with that of dust temperature, and both rely on knowledge of the dust emissivity (Hildebrand 1983), the form of which is currently highly uncertain. Usually the emissivity $`Q_{\mathrm{em}}(\lambda )`$ is described by Eqn. (2.14), with values of $`\beta `$ between 1 and 2. While a value $`\beta =1`$ seems to be plausible for $`\lambda <100`$ $`\mu `$m (Hildebrand 1983, Rowan-Robinson 1992), there is observational evidence for a steeper emissivity at longer wavelengths. The difference in emissivity is not unexpected, since emission in the Mid-Infrared (25-60 $`\mu `$m) is dominated by transiently heated grains, while at $`\lambda >100`$ $`\mu `$m grains emit at thermal equilibrium (Whittet 1992). Sub-mm observations of spiral galaxies (Bianchi et al. 1998, Alton et al. 1998b) show that it is not possible to use an emissivity with $`\beta =1`$ to fit the 450 and 850 $`\mu `$m emission. Reach et al. (1995) came to a similar conclusion. They used the spectrum of the Galactic plane observed by the spectrophotometer FIRAS on board the satellite COBE to find that the data are well fitted by an emissivity $$Q_{\mathrm{em}}(\lambda )\propto \frac{\lambda ^{-2}}{\left[1+\left(\lambda _1/\lambda \right)^6\right]^{1/6}},$$ (2.24) for the range 100 $`\mu `$m to 1 cm. Eqn. (2.24) behaves like (2.14) with $`\beta =1`$ at small $`\lambda `$ ($`\lambda \ll \lambda _1`$) and $`\beta =2`$ at large $`\lambda `$ ($`\lambda \gg \lambda _1`$) (they set $`\lambda _1=200`$ $`\mu `$m). Masi et al. (1995) measure a value $`\beta =1.54`$ by fitting a single temperature grey-body spectrum to Galactic plane data in four bands between 0.5 and 2 mm taken by the balloon-borne telescope ARGO. Reach et al. (1995) suggest that a single temperature fit may bias towards lower values of $`\beta `$ (see also Wright et al. 1991); over the whole FIRAS spectral range, a two temperature grey-body with $`\beta =2`$ at large $`\lambda `$ provides a significantly better fit than a single temperature spectrum with $`\beta \approx 1.5`$. At long wavelengths, theoretical calculations for crystalline substances constrain $`\beta `$ to be an even integer number (Wright 1993). For amorphous materials $`\beta `$ depends on the temperature: Agladze et al. (1996) find $`1.2<\beta <2`$ for amorphous silicate grains at a temperature of 20 K. A value for the emissivity at a specific wavelength, $`Q_{\mathrm{em}}(\lambda _0)`$, normalised to the extinction efficiency in the optical, can be determined by carrying out an energy balance in a reflection nebula, comparing the energy absorbed from the central star with the FIR output from the surrounding dust. Alternatively, the extinction measured toward the star can be directly compared to the optical depth in the FIR (Whitcomb et al. 1981, Hildebrand 1983, Casey 1991). These methods are complicated by the unknown nebular geometry and by temperature gradients in the dust; as an example, Casey (1991) found that the extinction method usually retrieves higher values than the energy balance. In this section I use the extinction method, comparing the Galactic extinction to the FIR emission: in this case the same column density of dust is responsible both for emission and extinction, and a reliable result can be obtained.

#### 2.4.2 The method

Schlegel, Finkbeiner & Davis (1998, hereafter SFD) have presented a new map of Galactic extinction.
After removing emission from zodiacal light and a cosmic infrared background, they have combined the 100 $`\mu `$m map of Galactic emission taken by the DIRBE experiment on board the COBE satellite with the 100 $`\mu `$m large-area ISSA map from the IRAS satellite, to produce a map of Galactic emission with the quality calibration of DIRBE and the high resolution of IRAS. The dust temperature has been retrieved using the DIRBE maps at 100 $`\mu `$m and 240 $`\mu `$m, assuming $`\beta `$=2. Knowing the temperature, the 100 $`\mu `$m map has been converted into a dust column density map and subsequently calibrated to E(B-V) using colours and Mg<sub>2</sub>-index of elliptical galaxies. I would like to stress that the colour excess has been derived from the 100 $`\mu `$m emission without any assumption about the value of the emissivity at any wavelength. Moreover, the choice of $`\beta `$ does not affect their results significantly: when $`\beta `$=1.5 is used, the dust column density map varies by only 1%, aside from an overall multiplicative factor that is taken into account when calibrating with the colour excess. I have accessed the electronic distribution of this remarkable dataset to retrieve the 9.5 arcmin/pixel maps of the intensity at 100 $`\mu `$m, I(100 $`\mu `$m), the temperature and the colour excess E(B-V) for the north and south Galactic hemispheres. When the same dust grains are responsible for emission and extinction, the ratio between the extinction coefficient in the V-band and the emissivity at 100 $`\mu `$m is equivalent to the ratio of the optical depths $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=\frac{\tau (V)}{\tau (\text{100 }\mu \text{m})}.$$ (2.25) The above formula is correct if all of the dust grains are identical. In a mixture of grains of different sizes and materials, the ratio of emissivities in Eqn. (2.25) can still be regarded as a mean value characteristic of diffuse galactic dust, if the dust composition is assumed to be the same on any line of sight. The optical depth at 100 $`\mu `$m, in the optically thin case, is measured using $$\tau (\text{100 }\mu \text{m})=\frac{I(\text{100 }\mu \text{m})}{B(\text{100 }\mu \text{m},T_\mathrm{d})},$$ (2.26) where $`B(\text{100 }\mu \text{m},T_\mathrm{d})`$ is the value of the Planck function at $`100\mu \text{m}`$ for a dust temperature $`T_\mathrm{d}`$, both the intensity $`I(\text{100 }\mu \text{m})`$ and $`T_\mathrm{d}`$ coming from the maps of SFD<sup>2</sup><sup>2</sup>2The assumption of an optically thin medium for FIR radiation is always valid in a spiral galaxy. The mean optical depth for the 100 $`\mu `$m radiation over a region of 20° around the Galactic poles, derived from the SFD maps, is 2.5 × 10<sup>-5</sup>. The maximum on the Galactic plane is 0.14. An optically thin emission is also derived by Sodroski et al. (1994; 1997).. The optical depth in the V-band can be found from the colour excess E(B-V) maps, $$\tau (V)=\frac{A(V)}{1.086}=2.85E(B-V),$$ (2.27) where I have used the mean galactic value $`A(V)/E(B-V)`$=3.1 (Whittet 1992). Reach et al. (1995) suggest that dust emitting in the wavelength range 100-300 $`\mu `$m traces interstellar extinction. Since the FIR optical depth in Eqn. (2.26) has been measured using data at 100 and 240 $`\mu `$m, it is then justified to compare it with the extinction as in Eqn. (2.27), to find the ratio of the extinction coefficient and the emissivity. Knowing the optical depths from (2.26) and (2.27), I can compute a map of the ratio as in Eqn. (2.25).
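In sketch form, the per-pixel computation reads as below; this is a hypothetical illustration with my own function names and invented toy values, not the actual pipeline applied to the SFD maps (the constant 2.85 comes from Eqn. 2.27):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_lambda(lam, T):
    """Planck function B_lambda(T) in W m^-3 sr^-1 (lam in metres)."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

def ratio_map(I_100, T_d, ebv):
    """Q_ext(V)/Q_em(100um) per pixel, from Eqns. (2.25)-(2.27).

    I_100 : 100 micron intensity [W m^-3 sr^-1]; from MJy/sr, multiply
            by 1e-20 * c / (100e-6)**2 to convert I_nu into I_lambda
    T_d   : dust temperature [K]
    ebv   : colour excess E(B-V) [mag]
    """
    tau_100 = I_100 / planck_lambda(100e-6, T_d)   # Eqn. (2.26)
    tau_V = 2.85 * ebv                             # Eqn. (2.27)
    return tau_V / tau_100                         # Eqn. (2.25)

# One toy pixel, for illustration only (not SFD data):
print(ratio_map(np.array([3.0e-4]), np.array([18.0]), np.array([0.02])))
# -> ~770, of the same order as the mean value derived from the real maps
```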
I obtain a mean value of $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=760\pm 60$$ (2.28) for both hemispheres. This value is included, together with other multiplicative factors, in the calibration coefficient $`p`$ of Eqn. (22) in SFD. An estimate for $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}`$ can easily be derived from that equation, if the DIRBE colour correction factors, slowly depending on T, are omitted. Following this method I obtain a value of 765.5. SFD give an error of 8% for $`p`$, and this is the value quoted here. Since most ($`\approx `$ 90%) of the elliptical galaxies used to calibrate the colour excess maps have galactic latitude $`b>20^{\circ }`$, one may argue that the measured value is characteristic only of high latitude dust. Reach et al. (1995) find that the emissivity (Eqn. 2.24) is best determined by fitting the FIRAS spectrum on the Galactic plane. They note that high latitude data have a smaller signal-to-noise ratio and can be fitted satisfactorily with $`\beta =2`$ (Eqn. 2.14), although the same emissivity as on the plane cannot be excluded. Under the hypothesis that the same kind of dust is responsible for the diffuse emission in the whole Galaxy, I have corrected the SFD temperatures using the Reach et al. emissivity (Eqn. 2.24). The new temperatures are a few degrees higher than those measured with $`\beta =2`$ (as an example, the temperature passes from a mean value of 18 K in a 20° diameter region around the north pole to a new estimate of 21 K). It is interesting to note that the difference between the two estimates of temperature is of the same order as the difference between the temperatures of warm dust at high and low Galactic latitude in Reach et al. (1995), and this may only be a result of the different emissivity used to retrieve the temperature. When the correction is applied, $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=2390\pm 190.$$ (2.29) The new ratio is about three times higher, and this is a reflection of the change of temperature in the black body emission in (2.26): for a higher temperature, a lower emissivity in the FIR is required to produce the same emission. Uncertainties in $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\lambda _0)`$ are thus greatly affected by assumptions about the emissivity spectral behaviour.

#### 2.4.3 Comparison with other measurements

I now compare the emissivity for $`\beta =2`$ with literature results derived under the same hypothesis. Since, to my knowledge, no emissivity has been derived assuming Eqn. (2.24), I do not attempt any comparison with that result. All the data are scaled to $`\lambda _0=100`$ $`\mu `$m. Studying the correlation between gas and dust emission from FIRAS and DIRBE, Boulanger et al. (1996) derived an emissivity $`\tau /N_H=1.0\times 10^{-25}`$ cm<sup>2</sup> at 250 $`\mu `$m for dust at high galactic latitude; assuming the canonical $`N_H=5.8\times 10^{21}E(B-V)`$ cm<sup>-2</sup> mag<sup>-1</sup> and $`A(V)/E(B-V)`$=3.1 (Whittet 1992), this is equivalent to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=790. Lagache et al. (1999) have analysed the HI/FIR correlation again, decomposing the FIR emission into two components, associated with the neutral gas and with the Warm Ionized Medium. They found $`\tau /N_{HI}=(8.7\pm 0.9)\times 10^{-26}`$ cm<sup>2</sup> and $`\tau /N_{H^+}=(1.0\pm 0.2)\times 10^{-25}`$ cm<sup>2</sup> at 250 $`\mu `$m.
These values correspond to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$= 900$`\pm `$90 and 790$`\pm `$160, if we assume that the same $`N_H/E(B-V)`$ ratio can be used for dust associated with the ionized gas as well as for the atomic. Lagache et al. (1999) argue that the different emissivities in the two components may be due to a smaller cutoff in the large grain size distribution for dust within the hotter ionized medium. Nevertheless, the two values are consistent with each other. Quite similar values are found in the Draine & Lee (1984) dust model, which has a $`\beta =2`$ spectral dependence in this wavelength range. At 125 $`\mu `$m the optical depth is $`\tau /N_H=4.6\times 10^{-25}`$ cm<sup>2</sup>, which corresponds to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=680. Sodroski et al. (1997) find a value for the ratio at 240 $`\mu `$m, using literature data identifying a correlation between B-band extinction and 100 $`\mu `$m IRAS surface brightness in high latitude clouds, assuming a dust temperature of 18 K. Converted to my notation, using a standard extinction law, the ratio is $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=990. The measurement by Whitcomb et al. (1981) on the reflection nebula NGC 7023 is the most commonly quoted value for the emissivity (Hildebrand 1983). Their value, derived at 125 $`\mu `$m for $`\beta =2`$, is only marginally consistent with my result. In my notation, their result is equivalent to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})=`$ 220 and 800<sup>3</sup><sup>3</sup>3Whitcomb et al. (1981) and Casey (1991) originally presented values for $`Q_{\mathrm{ext}}(UV)/Q_{\mathrm{em}}(FIR)`$: I have corrected to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(FIR)`$ using the provided $`\tau (UV)=2\tau (V)`$., using the energy balance and the extinction method, respectively. The values obtained by Casey (1991) on a sample of five nebulae using the energy balance method are a factor of 3 smaller than the ones presented here (corresponding to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})=`$ 80-400). In Fig. 2.2 I show the literature data (plotted at the wavelengths at which they were derived in the original papers), together with the derived emissivity laws. I have added the value for the Draine & Lee (1984) model at 250 $`\mu `$m.

#### 2.4.4 Gas-to-dust ratio of external spiral galaxies

I now exploit the FIR emissivity derived in this work by determining dust masses for nearby spiral galaxies. Following Hildebrand (1983), dust masses can be measured from the FIR emission using $$M_{\mathrm{dust}}=\frac{F(\lambda )D^2}{B(\lambda ,T_\mathrm{d})}\frac{4a\rho }{3Q_{\mathrm{em}}(\lambda )},$$ (2.30) where $`F(\lambda )`$ is the total flux at the wavelength $`\lambda `$, $`D`$ the distance of the object, $`B(\lambda ,T_\mathrm{d})`$ the Planck function, $`a`$ the grain radius (0.1 $`\mu `$m) and $`\rho `$ the grain mass density (3 g cm<sup>-3</sup>). The emissivity $`Q_{\mathrm{em}}(\lambda )`$ is derived from the ratio $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\lambda )`$ assuming $`Q_{\mathrm{ext}}(V)`$=1.5 (Casey 1991, Whittet 1992). Alton et al. (1998a) provide total fluxes at 100 $`\mu `$m and 200 $`\mu `$m from IRAS and ISO for a sample of spiral galaxies. I have derived dust temperatures and masses using $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=760 and 2390, for the $`\beta `$=2 and Reach et al. (1995) emissivities, respectively.
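A minimal sketch of how Eqn. (2.30) converts a FIR flux into a dust mass follows; the function names and the numbers in the example call are my own illustrative assumptions, not the values of the actual sample:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
M_SUN, PC = 1.989e30, 3.086e16

def planck_lambda(lam, T):
    """Planck function B_lambda(T) in W m^-3 sr^-1 (lam in metres)."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

def dust_mass(F_lam, D_mpc, T_d, ratio, lam=100e-6,
              a=0.1e-6, rho=3000.0, Q_ext_V=1.5):
    """Dust mass from Eqn. (2.30), in solar masses.

    F_lam : total flux density at lam [W m^-3]
    D_mpc : distance [Mpc]
    ratio : Q_ext(V)/Q_em(lam), e.g. 760 for the beta = 2 emissivity
    """
    Q_em = Q_ext_V / ratio
    D = D_mpc * 1e6 * PC
    return F_lam * D**2 / planck_lambda(lam, T_d) * 4 * a * rho / (3 * Q_em) / M_SUN

# Illustrative call: 50 Jy at 100 um, D = 10 Mpc, T_d = 20 K, beta = 2 ratio
F_lam = 50.0 * 1e-26 * c / (100e-6)**2     # Jy -> W m^-3
print(f"{dust_mass(F_lam, 10.0, 20.0, 760.0):.2e} Msun")   # ~ 1.6e7 Msun
```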
Using literature values for gas masses, I have computed the gas-to-dust ratios. Values of gas masses, temperatures and gas-to-dust ratios are presented in Table 2.2. The mean value of the gas-to-dust ratio for the sample is 100 using Eqn. (2.14), 110 using Eqn. (2.24). Mean temperatures go from 18 K with the $`\beta =2`$ emissivity to 21 K when the Reach et al. (1995) behaviour is assumed (as for the north galactic pole in Sect. 2.4.2). Alton et al. (1998a) pointed out that the ISO 200 $`\mu `$m fluxes could be overestimated by about 30%; correcting for this, I obtain a mean gas-to-dust ratio of 220-240 (for the $`\beta `$=2 and Reach et al. (1995) emissivities, respectively). As shown above, dust masses obtained with the two methods are quite similar. This can be explained by substituting Eqns. (2.25) and (2.26) into (2.30). For $`\lambda =\text{100 }\mu \text{m}`$ I can derive $$M_{\mathrm{dust}}\propto \frac{B(\text{100 }\mu \text{m},T_\mathrm{d}^\mathrm{G})}{B(\text{100 }\mu \text{m},T_\mathrm{d})},$$ (2.31) where $`T_\mathrm{d}^\mathrm{G}`$ is the mean temperature of dust in the Galaxy. From the equation it is clear that the dust mass determination is insensitive to the emissivity law used, as long as the dust temperatures in external galaxies and in our own are similar. The range of values for the gas-to-dust ratio (100-230) encompasses the Galactic value of 160 (Sodroski et al. 1994). As a comparison, the mid-value of Whitcomb et al. (1981) would have given gas-to-dust ratios larger by a factor 1.5.

#### 2.4.5 Conclusion

I have derived the dust emissivity $`Q_{\mathrm{em}}`$ in the FIR using the wavelength dependences derived from the FIR Galactic spectrum (Reach et al. 1995). The emissivity has been normalised to the extinction efficiency in the V band using dust column density maps calibrated to Galactic extinction (SFD). $`Q_{\mathrm{em}}`$ depends strongly on the assumed wavelength dependence. For a $`\beta =2`$ emissivity index I obtained $$Q_{\mathrm{em}}(\lambda )=\frac{Q_{\mathrm{ext}}(V)}{760}\left(\frac{\text{100 }\mu \text{m}}{\lambda }\right)^2.$$ (2.32) This result is consistent with other values derived from FIR Galactic emission (Boulanger et al. 1996, Sodroski et al. 1997) and with the Draine & Lee (1984) dust model. The widely quoted emissivities of Whitcomb et al. (1981) and Hildebrand (1983), derived from the reflection nebula NGC 7023, are only marginally consistent with these values, while the emissivities measured by Casey (1991) on a sample of five nebulae are smaller by a factor of 3. This may suggest a different grain composition for dust in the diffuse interstellar medium compared to reflection nebulae. When the wavelength dependence derived by Reach et al. (1995) on the Galactic plane is used, I obtain $$Q_{\mathrm{em}}(\lambda )=\frac{Q_{\mathrm{ext}}(V)}{2390}\left(\frac{\text{100 }\mu \text{m}}{\lambda }\right)^2\frac{2.005}{\left[1+\left(\text{200 }\mu \text{m}/\lambda \right)^6\right]^{1/6}}.$$ (2.33) I have used the derived emissivities to measure dust masses from the 100 $`\mu `$m and 200 $`\mu `$m fluxes of a sample of six spiral galaxies (Alton et al. 1998a). I have retrieved similar dust masses with both spectral dependences. The gas-to-dust ratios of the sample (100-230) are close to the Galactic value of 160 (Sodroski et al. 1994). Since Eqn. (2.33) has been determined using the most accurate information available for the spectrum of dust emission and for the extinction in a galaxy, I will use that emissivity law for the FIR simulations of this thesis.
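For convenience, the two emissivity laws of Eqns. (2.32) and (2.33) can be coded as below (a small sketch of my own; the function names are not from any published package). Note that the factor 2.005 makes Eqn. (2.33) reduce to $`Q_{\mathrm{ext}}(V)/2390`$ at 100 $`\mu `$m:

```python
def q_em_beta2(lam_um, Q_ext_V=1.5):
    """Eqn. (2.32): beta = 2 power-law emissivity, lam in micron."""
    return Q_ext_V / 760.0 * (100.0 / lam_um)**2

def q_em_reach(lam_um, Q_ext_V=1.5):
    """Eqn. (2.33): Reach et al. (1995) wavelength dependence, lam in micron."""
    return (Q_ext_V / 2390.0 * (100.0 / lam_um)**2
            * 2.005 / (1.0 + (200.0 / lam_um)**6)**(1.0 / 6.0))

for lam in (100.0, 200.0, 450.0, 850.0):
    print(f"{lam:6.0f} um  {q_em_beta2(lam):.2e}  {q_em_reach(lam):.2e}")
```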
### 2.5 The MIR correction

As seen in Section 2.3, grains emitting at the thermodynamic equilibrium are responsible for the emission in the FIR. However, if the dust grain is small, the absorption of a single high-energy photon can substantially alter the internal energy of the grain. The dust grain thus undergoes temperature fluctuations of several degrees and cools by emission of infrared radiation, mainly in the MIR range (Whittet 1992). Since in this thesis I want to model the FIR galactic emission only, I need to exclude from the total absorbed energy the fraction that goes into non-equilibrium heating. This can be done if a model of the absorption efficiency for all the dust components is known. Désert, Boulanger & Puget (1990) built an empirical dust model to interpret both extinction and infrared emission in the solar neighbourhood and in other astrophysical situations. Analysing the features in the extinction curve and in the infrared emission, they found that three dust components are needed:

- Big Grains: classical big grains ($`0.015\mu \text{m}<a<0.11\mu \text{m}`$) are needed to explain the rise in the NIR-Optical extinction curve and the linear rise in the UV. They suggest that big grains could be made of silicate with a coating of a blacker, carbon-dominated material, since silicates alone, because of their large albedo, cannot explain extinction. Simple functions are used for the optical properties ($`Q_{\mathrm{abs}}\propto \lambda ^{-1}`$, $`Q_{\mathrm{sca}}\propto \lambda ^{-4}`$ for $`\lambda >a`$, both constant for larger radii<sup>4</sup><sup>4</sup>4These are the general trends predicted by the Mie theory for spherical grains in the limits of grains very small or very large compared to $`\lambda `$. The oscillating behaviour around these trends (Van De Hulst 1957) has been neglected.). Big grains are needed to explain the thermal equilibrium emission in the FIR.
- Very Small Grains: these grains are responsible for the bump feature in the extinction curve at 2175Å. In the model they are supposed to be small graphite grains ($`0.0012\mu \text{m}<a<0.015\mu \text{m}`$), pure absorbers, with an absorption efficiency empirically derived by fitting the bump with a Drude profile (Fitzpatrick & Massa 1988). These small grains are heated stochastically and are responsible for the infrared emission in the range $`25\mu \text{m}<\lambda <80\mu \text{m}`$.
- PAHs: Polycyclic Aromatic Hydrocarbon molecules are introduced to explain the continuum emission and the Unidentified Infra-Red emission features at $`\lambda <25\mu \text{m}`$, as well as the Far-UV non-linear rise in the extinction curve. Radii of these molecules are taken to be between $`0.0004\mu \text{m}`$ and $`0.0012\mu \text{m}`$. PAHs are pure absorbers and their absorption efficiency is derived both empirically (from a fit of the extinction in the FUV; Fitzpatrick & Massa 1988) and theoretically.

The presence of PAH features in the infrared spectra of dusty areas correlates closely with the FUV non-linear rise, but not with the presence of the 2175Å bump; this is why two small-grain components are needed. Moreover, PAHs of reasonable physical dimensions cannot produce the emission in the range $`25\mu \text{m}<\lambda <80\mu \text{m}`$. Each dust component has a size distribution described by a power law. The parameters of the model, i.e. size distributions, grain radii, albedo of big grains and relative abundances of each component, have been derived by comparing the FIR output of the model, heated by the Local Interstellar Radiation Field, with observations.
Using the information provided by Désert et al. (1990), I have been able to reproduce their model for the extinction curve (Fig. 2.3). The fraction of starlight that is absorbed by small grains (both PAHs and Very Small Grains) can be computed using the absorption efficiencies from the Désert et al. (1990) model. The extinction efficiency for all the components is $$Q_{\mathrm{ext}}=Q_{\mathrm{ext}}^{\mathrm{BG}}+Q_{\mathrm{ext}}^{\mathrm{VSG}}+Q_{\mathrm{ext}}^{\mathrm{PAH}}.$$ (2.34) Since both PAHs and Very Small Grains are pure absorbers, $$Q_{\mathrm{ext}}^{\mathrm{PAH}}=Q_{\mathrm{abs}}^{\mathrm{PAH}},\qquad Q_{\mathrm{ext}}^{\mathrm{VSG}}=Q_{\mathrm{abs}}^{\mathrm{VSG}},$$ (2.35) while $$Q_{\mathrm{ext}}^{\mathrm{BG}}=Q_{\mathrm{abs}}^{\mathrm{BG}}+Q_{\mathrm{sca}}^{\mathrm{BG}},$$ (2.36) where $`Q_{\mathrm{sca}}^{\mathrm{BG}}`$ is the scattering efficiency. The absorption efficiency of the dust model is therefore $$Q_{\mathrm{abs}}=Q_{\mathrm{ext}}-Q_{\mathrm{sca}}^{\mathrm{BG}}$$ (2.37) and the absorption efficiency of small grains (PAHs + Very Small Grains) is $$Q_{\mathrm{abs}}^{\mathrm{SG}}=Q_{\mathrm{abs}}^{\mathrm{VSG}}+Q_{\mathrm{abs}}^{\mathrm{PAH}}.$$ (2.38) Of the light impinging on a dust grain, a fraction of energy proportional to $`Q_{\mathrm{abs}}`$ is absorbed; the fraction of the total absorbed energy that is absorbed by small grains is therefore $$\frac{Q_{\mathrm{abs}}^{\mathrm{SG}}}{Q_{\mathrm{abs}}}.$$ (2.39) Values of the fraction of energy absorbed by small grains are plotted in Fig. (2.4) as a function of $`1/\lambda `$. Tab. (2.1) gives the values of the ratio for each of the wavelength bands used. In the model, after computing the energy absorbed by dust from each wavelength band, a fraction as in Eqn. (2.39) is excluded and is not converted into FIR emission (see Chapters 3 and 4).
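In sketch form, the correction applied in the model is simply the following (hypothetical code with made-up efficiency values, for illustration only):

```python
def small_grain_fraction(q_abs_bg, q_abs_vsg, q_abs_pah):
    """Eqn. (2.39): fraction of the absorbed energy taken up by small grains
    (Very Small Grains + PAHs), i.e. the part not re-emitted at equilibrium."""
    q_abs = q_abs_bg + q_abs_vsg + q_abs_pah    # total absorption, Eqns. (2.37)-(2.38)
    return (q_abs_vsg + q_abs_pah) / q_abs

# Made-up efficiencies for one band, for illustration only:
f_sg = small_grain_fraction(0.8, 0.3, 0.2)
E_abs = 1.0                     # energy absorbed by dust in the band (arbitrary units)
E_fir = E_abs * (1.0 - f_sg)    # the part that feeds the equilibrium FIR emission
print(f_sg, E_fir)
```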
### 2.6 Summary

In this Chapter I have introduced the dust properties used in this work. The mathematical expressions for dust extinction and emission will be used in the next Chapter, to model the galactic FIR emission. An original derivation of the dust emissivity has been presented, using the best available data for the Galactic dust extinction and emission. Finally, I have evaluated the contribution of the MIR radiation from transiently-heated dust grains to the total dust emission. The purpose of this Thesis is modelling the FIR emission by dust grains at the thermal equilibrium only: the fraction of energy absorbed by grains emitting in the MIR will be excluded from the model using the MIR correction derived in the last Section.

## Chapter 3 The radiative transfer and dust emission model

This chapter is dedicated to a description of the radiative transfer and dust emission models used in this Thesis. First, a review of previous models for the solution of the radiative transfer problem and for the simulation of dust emission in spiral galaxies is presented. I then describe observations of galactic dust and stellar geometrical distributions and I justify the parameters adopted in this work. A brief description is given of the Monte Carlo radiative transfer code of Bianchi, Ferrara & Giovanardi (1996, hereafter BFG), the backbone of the project. Finally, I discuss how dust temperatures and FIR emission are derived by another code, using the map of absorbed energy produced by the Monte Carlo code.

### 3.1 Radiative transfer models

Several radiative transfer models are available in the literature. I describe in this section those that have been developed to simulate the extinction properties of general galactic environments. A review of simple geometry models is given by Disney et al. (1989). They show how the early screen model, with all the dust in front of the emitting source (in analogy to the first studies of the extinction of Galactic stars through layers of dust), always gives higher extinction than models with interspersed stars and dust, like the slab model, where dust and stars have the same plane parallel geometry, and the sandwich model, where the dust is confined to a thinner distribution internal to the stellar one. They then present the Triple Exponential model (TRIPLEX), where the simple plane-parallel scheme is abandoned in favour of more realistic vertical and radial exponential distributions for both stars and dust (Sect. 3.3 and 3.6). The radial scalelength of dust and stars is assumed to be the same, while the vertical scalelengths of the two distributions are independent, to be able to simulate the thinner dust disk. Exact analytical solutions are provided for the face-on case, and approximations for larger inclinations of the line of sight. The models presented by Disney et al. (1989) include absorption as the only process leading to extinction. Because of the large observed values of the dust albedo (Sect. 2.1) and in view of possibly large optical depths inside a galaxy, any realistic model should include a treatment of multiple scattering of light by dust. Unfortunately, the scattering term makes the radiative transfer problem more complicated. A compromise between an exact treatment of scattering and a realistic galactic geometry must be adopted, if an analytical solution is to be achieved. Bruzual, Magris & Calvet (1988) chose to adopt a simple plane parallel geometry. The scattering is dealt with in a complete form, through the solution of a system of integro-differential equations. Solutions are found for a slab geometry and a forward scattering Henyey-Greenstein phase function (Eqn. 2.12), or for a sandwich geometry and isotropic scattering. For the same geometries, Di Bartolomeo, Barbaro & Perinotto (1995) use a quickly convergent approximation to the solutions, derived with the method of spherical harmonics. Corradi, Beckman & Simonneau (1996) present solutions for face-on models, allowing for dust and stellar exponential distributions but assuming a local plane parallel geometry. The solutions are therefore exact only when the mean free path of the photon is small with respect to the scale of radial variation of the stellar and dust distributions. This is not the case for optically thin regions. However, in these regions the scattering events are reduced in number and the results are not greatly affected. On the other hand, if more realistic geometries are desired, the exact treatment of multiple scattering has to be sacrificed. Kylafis & Bahcall (1987) model the surface brightness of the edge-on galaxy NGC 891 with a three-dimensional model for dust and stars. The adopted distributions are radially exponential and behave as a sech<sup>2</sup> law (Eqn. 3.3) along the vertical direction. Multiple scattering is treated correctly only for the first order term, then an approximation is introduced for the higher orders. Byun, Freeman & Kylafis (1994) have generalised the method to deal with any galactic inclination. A spheroidal stellar distribution has been included to simulate the bulge. Xilouris et al.
(1997; 1998; 1999) have successfully applied it to the fitting of the surface photometry of edge-on galaxies. Trewhella, Madore & Kuchinski (1998) are designing an analytical model with a cellular approach. The three-dimensional space is divided into cubic cells and radiation is passed from one cell to the other through the 26 directions defined by the cell faces, edges and corners. Local optical depth, albedo and dust phase function regulate the absorption of energy and its diffusion through the 26 directions. The model is able to deal with arbitrary geometries, not necessarily homogeneous, and produces results at several wavelengths simultaneously. The Monte Carlo technique, instead, does not suffer from the limitations of the analytical solutions. Within this technique, each individual photon is randomly created and its path followed, through scattering and absorption, until it escapes the dust distribution. The treatment of scattering is straightforward and no approximations are needed. Arbitrary geometries can in principle be used. With respect to the analytical methods, it has disadvantages in the lack of handy solutions to be used in fitting algorithms (as in Xilouris et al. 1997; 1998; 1999) and in the large amount of computational time needed to produce high signal-to-noise results. Witt, Thronson & Capuano (1992) present the first application of the Monte Carlo method to extended distributions of stars and dust. The adopted spherical geometry is more suitable to describe extinction in the nuclei of active galaxies or in ellipticals. Several relative distributions of dust and stars are explored. BFG have developed a Monte Carlo code for realistic geometries of spiral galaxies. Since the work of this Thesis is based on this model, a more complete description of the original code, together with the main modifications made for the present simulations, is given separately in Sect. 3.7. Other models for spiral galaxies based on the Monte Carlo method have been developed by De Jong (1996a) and Wood (1997). The cellular approach can be successfully adopted in the Monte Carlo technique to simulate inhomogeneous distributions of dust and stars. Results for the radiative transfer through clumpy dust distributions have recently been presented by Witt & Gordon (1999) for spherical geometries, and by Kuchinski et al. (1998) and Bianchi et al. (1999b) for galactic disks. The last work includes the possibility of clumpy light emission as well.

### 3.2 Models of FIR emission

The model for the FIR emission I am going to present in this Chapter is a sophisticated version of the energy balance method. Because of the conservation of energy, all the starlight absorbed by dust must be re-emitted. In Chapter 2 I showed how most of the dust emission occurs at $`\lambda >10\mu `$m, in the MIR and FIR. The intrinsic, unextinguished stellar luminosity is therefore the sum of the stellar luminosity that escapes dust and is observed in the UV, Optical and NIR, and of the infrared luminosity emitted by dust. Using a radiative transfer model it is then possible to relate the ratio between intrinsic and emitted starlight to the amount of dust and its geometrical distribution. Evans (1992) applies the energy balance to a sample of nine galaxies with available optical and FIR observations. As for the radiative transfer, he uses a TRIPLEX model with a dust vertical scalelength half that of the stellar one (Sect. 3.6).
After fitting a blackbody to the observed stellar fluxes, the integrated luminosity is computed over a few wavebands from the optical to the NIR. For a given optical depth in a reference band (the V-band), the adopted extinction law (Sect. 2.2) can be used to find the optical depth at the central wavelength of each band. The radiative transfer model is then used to compute, from the luminosity observed in each optical band, how much starlight must have been originally emitted, for the chosen dust/star geometry and the band optical depth. Constraining the absorbed energy to equal the observed energy emitted by dust, a value for the reference optical depth (and therefore for the amount of dust) can be derived. A slight modification of this method is used by Trewhella (1998a) to derive the mean absorption $`A_B`$ in several cells of the galaxy NGC 6946. A high resolution extinction map in the B band is then derived from the B-K colour using the relation $$A_B=(B-K)+\left[A_B-(B-K)\right]_{cell},$$ (3.1) where the quantity between brackets is the mean derived in each cell. Both Evans (1992) and Trewhella (1998a) have a wide range of optical and FIR observations for their small number of objects. Xu & Buat (1995) apply the energy balance to a large sample of 135 nearby spiral galaxies with available UV and B fluxes and 60 and 100 $`\mu `$m IRAS data. Since the spectral coverage is not as extended as in the other two works, a considerable extrapolation is necessary to derive the stellar SED and the total FIR emission. The non-ionising emission of the stellar component (f<sub>star</sub>) is derived for each galaxy from an extrapolation of the UV and B-band monochromatic fluxes, using synthetic spectra for specific Hubble types and/or UV-optical colours. The total FIR emission is derived from the IRAS data at 60 and 100 $`\mu `$m, using a relation derived for 13 galaxies with available sub-mm observations. In this subsample, the ratio between the total FIR flux (their f<sub>dust</sub>) and the FIR flux in the spectral range 40 $`\mu `$m-120 $`\mu `$m, as derived from the 60 and 100 $`\mu `$m observations (f<sub>fir</sub>), is found to anti-correlate with the ratio between the two IRAS fluxes. This is expected, since warmer dust (higher 60/100 flux ratio) would emit at wavelengths shorter than 100 $`\mu `$m, and most of the radiation would be detected by IRAS. For colder dust (lower 60/100 flux ratio), the emission would occur outside the IRAS spectral range, and the ratio f<sub>dust</sub>/f<sub>fir</sub> would be larger. Using this anti-correlation and the available 60 and 100 $`\mu `$m fluxes, the total dust emission is derived for each galaxy. Ionising photons are assumed to be absorbed locally, in the HII regions. The energy absorbed from ionising photons is derived from the H<sub>α</sub> fluxes of a subsample of 34 galaxies and subtracted from the total energy absorbed by dust. The remaining absorbed energy is then compared to the stellar emission, as described for the other two models. The only difference is that Xu & Buat (1995) use a sandwich model inclusive of scattering for the radiative transfer. The energy balance methods presented above have the disadvantage that the FIR radiation is treated as a whole, and all the information carried by the spectral distribution of the FIR radiation is unused. The relative geometry of stars and dust affects the ISRF and the way it heats the dust.
It would be desirable to know, from a radiative transfer code, not only how much energy is absorbed by dust, but also the distribution of the absorbed energy. After assuming dust emission properties, it would then be possible to know the temperature and the dust emission as a function of the position in the model. A comparison between the model and the observed spectrum would put firmer constraints on the dust distribution. Usually, instead, the spectrum of dust is modelled assuming an ISRF. Désert et al. (1990) and Rowan-Robinson (1992), for example, first derive the properties of their dust models by comparing the local Galactic FIR spectrum with that produced by their dust model heated by a local ISRF. When the dust model is determined, spectra of dust heated by fractions or multiples of the local ISRF are derived, to be compared with the spectra observed in other environments, internal or external to the Galaxy. Few models produce their own ISRF from the stellar model and apply it to a dust distribution. Silva et al. (1998) build a complex model for the photometric evolution of galaxies. An intrinsic SED is derived from a spectral synthesis model. Stars, dust and gas are distributed in three exponential (both in the radial and vertical direction) distributions, to describe: i) a distribution of spherical molecular clouds and associated young stellar objects; ii) free stars that have escaped the molecular clouds; and iii) diffuse gas and associated dust. Although the three distributions in the paper have identical scalelengths, independent parameters can be chosen. Spheroidal galaxies can be modelled as well. The dust mass is derived from the residual gas mass in the galaxy evolution model, assuming a fixed dust-to-gas ratio. A fraction of the total mass of dust is ascribed to the molecular clouds. A model is used for the dust properties, slightly different in the molecular clouds, where there is a decrease in the number of PAHs. As for the radiative transfer, they use a cellular approach for the smooth medium. Scattering is dealt with in an approximate way, using an effective optical depth that leads to rigorous results only for an infinite homogeneous medium and isotropic scattering. A separate treatment is implemented for the objects within molecular clouds. After computing the ISRF in the grid cells, the dust emission is derived. The stellar and dust spectral energy distributions are then followed in their evolution with time. Sauty, Gerin & Casoli (1998) present numerical simulations of the radiative transfer in the spiral galaxy NGC 6946. After dividing the galactic volume into a three-dimensional grid, they represent the interstellar medium as a two phase medium, with molecular clouds and a constant density smooth diffuse phase associated with the atomic gas. The distribution of molecular clouds is derived from models of cloud collisions and of the gravitational potential of the galaxy. OB associations are created within the more massive clouds, and local fluxes are computed accordingly. The radiative transfer through dust is carried out using a modification of the Monte Carlo method, but only for wavelengths between 912 Å and 2000 Å, since the code does not include stars of later spectral types that contribute at longer wavelengths. In fact, the authors are mainly interested in the effects of the UV ISRF on the H$`\alpha `$ and C<sup>+</sup> emission lines.
The optical and NIR ISRF is scaled from the Galactic one, using the local surface brightness in an R-band image, but is only used in regions with no UV flux, like inter-arm regions. Maps of the FIR emission are produced. It is difficult, in the two models described above, to disentangle the effects of dust from the other (numerous) parameters. In Silva et al. (1998) a complex model is adopted for the stellar SED and for the dust properties, while a correct treatment of the radiative transfer is sacrificed. Furthermore, only the global dust SED is produced, and no information on the spatial distribution of the emission is present. The model of Sauty et al. (1998) perhaps deals with the radiative transfer in a more correct way, but the ISRF is derived only for the UV. For the spectral regions where the stellar emission peaks, a Galactic ISRF is used. For this Thesis, I have built a self-consistent model for the dust emission in spiral galaxies. The radiative transfer is treated correctly. Given an input stellar SED, dust absorbs radiation from an ISRF that is consistent with the radiative transfer itself. The temperature along the dust distribution is computed, so that not only the FIR luminosity and spectrum can be retrieved, but also the surface brightness distribution of the FIR radiation at any wavelength. A Monte Carlo model with these characteristics has been presented by Wolf, Fischer & Pfau (1998), but only for star formation environments and not for galactic distributions of stars and dust.

### 3.3 Stellar disk

The luminosity density distribution of a galactic disk is usually described by $$\rho =\rho _0\,\text{exp}(-r/\alpha _{\star })\,Z(z/\beta _{\star })$$ (3.2) where r and z are the galactocentric distance and the height above the galactic plane, respectively, and $`\alpha _{\star }`$ and $`\beta _{\star }`$ the respective scalelengths. While there is a consensus on the radial exponential behaviour (De Vaucouleurs 1959, Freeman 1970), a number of expressions for the function $`Z(z/\beta _{\star })`$ have been used in the literature. Van Der Kruit & Searle (1981), in their analysis of optical images of edge-on disks, find that the vertical profile is best fitted by $$Z(z/\beta _{\star })=\text{sech}^2(z/z_0)$$ (3.3) with $`z_0=2\beta _{\star }`$. This function, the solution for a self-gravitating isothermal sheet, behaves like an exponential of scalelength $`\beta _{\star }`$ at large heights, but has a less sharp peak in the centre. Peletier & De Grijs (1997) suggested that the results of Van Der Kruit & Searle (1981) may have been influenced by high dust extinction. Wainscoat, Freeman & Hyland (1989) find the sech<sup>2</sup> function unsuitable to describe the vertical profile of the edge-on galaxy IC 2531 and prefer a sharper function, the exponential $$Z(z/\beta _{\star })=\text{exp}(-z/\beta _{\star }).$$ (3.4) This function has the advantage of mathematical simplicity, but has no firm physical justification. An alternative function $$Z(z/\beta _{\star })=\text{sech}(z/\beta _{\star })$$ (3.5) has been proposed by Van Der Kruit (1988). This function has a peak intermediate between the sech<sup>2</sup> and the exp, and it is consistent with measurements of the velocity dispersion of stars along the z-axis. Analysing the vertical profiles of a sample of 24 edge-on galaxies in the relatively dust-free K-band, De Grijs, Peletier & Van Der Kruit (1997) find that a distribution with a peak intermediate between the exp and the sech fits the central peak better (see also De Grijs & Van Der Kruit 1996). Since a small inclination from the pure edge-on case can produce a less sharp profile, these results are compatible with the exponential distribution. Therefore, in this thesis I will adopt a radial exponential disk of stars with a vertical exponential distribution as in Eqn. (3.4).
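For reference, a small sketch (illustrative code of my own) comparing the three vertical laws of Eqns. (3.3)-(3.5), normalised to unity on the plane:

```python
import numpy as np

def z_profiles(z, beta):
    """The three vertical laws of Eqns. (3.3)-(3.5), normalised to 1 at z = 0."""
    sech = lambda x: 1.0 / np.cosh(x)
    return {
        "sech2": sech(z / (2.0 * beta))**2,   # Eqn. (3.3), with z0 = 2*beta
        "exp":   np.exp(-np.abs(z) / beta),   # Eqn. (3.4)
        "sech":  sech(z / beta),              # Eqn. (3.5)
    }

z = np.linspace(0.0, 1000.0, 5)   # heights in pc
for name, prof in z_profiles(z, beta=250.0).items():
    print(name, np.round(prof, 3))
# All three decline as exp(-z/beta) at large heights (up to constant factors),
# but differ near the plane, where sech2 is the flattest and exp the sharpest.
```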
In Eqn. (3.2) it is assumed that the vertical and radial behaviours are independent: De Grijs & Peletier (1997) noticed a constant increase of $`\beta _{\star }`$ with $`\alpha _{\star }`$ in a sample of spiral galaxies. This effect is stronger for early-type galaxies, where $`\beta _{\star }/\alpha _{\star }`$ can increase by as much as a factor of 1.5 per radial scalelength. In late-type galaxies it is almost zero. This behaviour has been explained by a thick disk of stars with a larger scalelength than the ordinary one. Since dusty late-type galaxies are the main concern of this work, I use here a constant $`\beta _{\star }/\alpha _{\star }`$.

### 3.4 Observed Galactic scalelengths

In this section I give a list of references to recent determinations of the radial and vertical scalelengths of the stellar disk distribution in our Galaxy. In the two-dimensional model of De Vaucouleurs & Pence (1978) the Galactic disk has a radial scalelength $`\alpha _{\star }=3.5`$ kpc. The value is obtained by comparing the local ($`R_{\odot }`$) vertically integrated surface brightness in the B-band (derived from star counts) with the mean central surface brightness 21.65 mag arcsec<sup>-2</sup> for disks of spiral galaxies (Freeman Law; Freeman 1970). Bahcall & Soneira (1980) built a three-dimensional Galactic model to fit B and V star counts at Galactic latitudes $`b>20^{\circ }`$ (where extinction is low). The disk component is modelled with an exponential both in the radial and in the vertical direction. For $`\alpha _{\star }`$ they assume the De Vaucouleurs & Pence (1978) value; values for $`\beta _{\star }`$ are derived from the literature and summarised in a magnitude (age)-dependent relation: young stars ($`M_V<2`$) have $`\beta _{\star }=90`$ pc, old disk dwarfs ($`M_V>5`$) have $`\beta _{\star }=325`$ pc, while a linear relation is assumed for stars in the range $`2<M_V<5`$. In this formalism, giants have $`\beta _{\star }=250`$ pc. This model is commonly referred to in the literature as the standard model. In a subsequent work (Bahcall & Soneira 1984) the parameter space is explored and limits are put on the scalelengths: $`\beta _{\star }=250\pm 100`$ pc for giants and $`\beta _{\star }=350\pm 50`$ pc for old disk dwarfs. Although the results depend weakly on $`\alpha _{\star }`$ (because the model is compared to star counts at high latitude), the data can be reasonably fit only for $`\alpha _{\star }>2.5`$ kpc. A modified Bahcall & Soneira model is used by Van der Kruit (1986) to fit the integrated blue and red starlight observed by the Background Experiment on board Pioneer 10. To avoid regions with high extinction, only data for galactic latitudes $`b>20^{\circ }`$ are used: thus, as shown by the author, only the ratio $`\alpha _{\star }/\beta _{\star }`$ can be derived. Assuming $`\beta _{\star }=325`$ pc, a value of $`\alpha _{\star }=5.5`$ kpc is derived. As suggested by Kent et al. (1991), this method is sensitive to local surface brightness gradients and may only give a measure of the local exponential scalelength. Kent, Dame & Fazio (1991) derive the parameters of Galactic structure from the 2.4-$`\mu m`$ integrated light, measured from the Infrared Telescope aboard the Spacelab 2 mission in 1985. A radial exponential distribution is assumed for the disk, allowing for different vertical distributions.
A correction for extinction (small at this wavelength, although not negligible) is incorporated in the model, evaluated from the atomic hydrogen distribution, assuming a constant value for the dust/hydrogen ratio. Surface brightness profiles as a function of the galactic longitude at several latitudes are compared to the data: the standard model is unable to describe the profile on the galactic plane. Using an exponential vertical distribution they find $`\beta _{\star }=204\pm 57`$ pc and $`\alpha _{\star }=2.7\pm 0.5`$ kpc. The $`\beta _{\star }`$ value is small compared to the Bahcall & Soneira (1980) value for the giant star distribution (the main contributor to the luminosity in the observed wavelength range). For this reason they try a different fit allowing for a varying $`\beta _{\star }`$. Up to 5.3 kpc they keep $`\beta _{\star }=165`$ pc, then they assume a gradient of 30 pc/kpc, so that $`\beta _{\star }=247\pm 69`$ pc at $`R_{\odot }`$. The corresponding radial scalelength is $`\alpha _{\star }=3\pm 0.5`$ kpc. I note here that all the works cited up to this point used $`R_{\odot }`$=8 kpc, while in the next references the value 8.5 kpc is preferred. When a scalelength is derived from a model it should be scaled to the value of $`R_{\odot }`$ (Kent et al. 1991). Wainscoat et al. (1992) fit star counts at 12 $`\mu m`$ and 25 $`\mu m`$ from the IRAS Point Source Catalog with a Galactic model with a disk of radial scalelength $`\alpha _{\star }=3.5`$ kpc. Scaleheights for the vertical exponential distribution are given for a compilation of 87 types of galactic sources responsible for emission in the NIR bands. Values are similar to those in the Bahcall & Soneira (1980) model (e.g. $`\beta _{\star }=325`$ pc for old dwarfs, 270 pc for giants, etc.). A previous model to fit K-band star counts and 12 $`\mu m`$ IRAS source counts (Garwood & Jones 1987, see also Jones et al. 1981) used slightly smaller values ($`\beta _{\star }=300`$ pc for old disk dwarfs). In Robin et al. (1992b) the scalelength is measured from B and V-band star counts down to $`m_V=25`$ mag in a low absorption window in the direction of the Galactic anticentre. They use a complex structure and evolution Galaxy model (the Besançon model): the disk in this model follows an Einasto density law, which behaves similarly to an exponential in the radial direction and to a $`sech^2`$ in the vertical one. A correction for extinction is evaluated from the U-B/B-V plot of a sample of stars for which U-band data are available. They obtain $`\alpha _{\star }=2.5`$ kpc. A similar value ($`\alpha _{\star }=2.3\pm 0.1`$ kpc) is obtained by Ruphy et al. (1996) using J- and K-band star counts of two fields in the direction of the Galactic anticentre from the Deep Near-Infrared Survey of the Southern Sky. The same model as in Robin et al. (1992b) is assumed. Data are corrected for extinction (affecting the J-band) by matching the positions of the observed and model J-K colour distributions of the stars. The scalelength is obtained by minimising the difference in shape between the two distributions. As in Robin et al. (1992b), their fit is not very sensitive to the assumed value of $`\beta _{\star }`$ (because the data are very close to the Galactic plane). Porcel et al. (1998) measure $`\alpha _{\star }`$ from K-band star counts in the Two Micron Galactic Survey database. Extinction effects, low in the K-band, are further minimised by selecting stars out of the galactic plane (at a galactic latitude $`b=5^{\circ }`$).
A range of galactic longitudes ($`30^{\circ }<l<70^{\circ }`$) is chosen to minimise the contribution to the star distribution from local structures: namely, the bulge, ring and bar in the centre ($`l<30^{\circ }`$) and the Local Arm, warp and truncation ($`l>70^{\circ }`$). The contribution from other spiral arms is minimised by the choice of $`b`$. In the database, only stars with K-band apparent magnitude $`9<m_K<10`$ are selected: it can be shown that 80% of the total light emitted in this magnitude range is provided by K2-K5III stars, which have an absolute magnitude $`M_K=-2.5\pm 0.6`$. Their luminosity function can thus be conveniently approximated by a Dirac delta function. They find $`\alpha _{\star }=2.1\pm 0.3`$ kpc, assuming a vertical scalelength $`\beta _{\star }`$ of 200 pc (again, the result is not very sensitive to $`\beta _{\star }`$). These recent results based on star counts are also confirmed by the kinematical estimate of Fux & Martinet (1994), based on the asymmetric drift equation, which gives $`\alpha _{\star }=2.5_{-0.6}^{+0.8}`$ kpc, assuming a constant value for $`\beta _{\star }`$. If the gradient in $`\beta _{\star }`$ derived by Kent et al. (1991) is included, $`\alpha _{\star }=3.1`$ kpc. Recently, Haywood, Robin & Crézé (1997a; b) have questioned the validity of the exponential assumption for the vertical structure, on the basis of the Besançon model. Rather than adopting different values of $`\beta _{\star }`$ for different galactic sources, the model derives the vertical structure from the combined effect of the star formation history and of the secular heating of the disk. After tests on B, V, and I star counts (the Ruphy et al. (1996) value of $`\alpha _{\star }`$, 2.5 kpc, is used), they claim that the exponential is unsuitable to properly describe the data for $`z<500`$ pc. They find that the vertical density distribution decreases faster than the 300-350 pc scaleheight generally assumed for old disk dwarf stars. In Ruphy et al. (1996) a value of 250 pc is suggested for $`\beta _{\star }`$.

### 3.5 Adopted disk parameters

A table of vertical scalelengths, B, V, J, H, K-band absolute magnitudes, local number densities and other values for the main Galactic stellar sources is provided by Wainscoat et al. (1992). I have computed the mean values of the vertical scalelengths, averaging over the disk luminosity in each waveband. A mean value is also computed averaging over the total disk luminosity integrated from the B to the K band. Mean values for $`\beta _{\star }`$ are presented in Tab. (3.1), together with the ratios $`\alpha _{\star }/\beta _{\star }`$. I have assumed $`\alpha _{\star }=3`$ kpc, a compromise between the highest and lowest values presented in Sect. 3.4. As shown by Tab. (3.1) and by the discussion in Sect. 3.4, a model of a spiral galaxy should include different scalelength ratios for different stellar components, and therefore for emission at different wavelengths. For simplicity, in this thesis I will mainly use one scalelength ratio only, the one averaged over the total luminosity, $`\alpha _{\star }/\beta _{\star }=14.4`$. A test simulation has been run using different scaleheights for different wavelength ranges (Sect. 4.7); for that simulation I use $`\alpha _{\star }/\beta _{\star }=18`$ ($`\beta _{\star }=170`$ pc for $`\alpha _{\star }=3`$ kpc) shortward of the V-band (obtained averaging over the total integrated luminosity from B to V) and 11.3 ($`\beta _{\star }=265`$ pc) at longer wavelengths (averaging over the total integrated luminosity from J to K). Similar values of $`\alpha _{\star }/\beta _{\star }`$ can be found in the literature.
Analysing the structure of eight large edge-on galaxies in the UGC catalog, De Grijs & Van Der Kruit (1996) find a mean ratio of the radial and vertical exponential scalelengths in the I-band of $`11.8\pm 0.8`$. They argue that, looking at edge-on objects, the radial scalelength can be overestimated by at least 10% (in a transparent case) because of projection effects, thus reducing the ratio. No clear trend of variation of $`\beta _{\star }`$ from the B to the I-band is observed in their sample. Van Der Kruit & Searle (1982) obtained $`9.4\pm 3.6`$ in the J-band for eight edge-on spiral galaxies, including our own. Fitting the stellar emission in a sample of seven spiral galaxies with their radiative transfer model, Xilouris et al. (1997; 1998; 1999) measure $`\alpha _{\star }/\beta _{\star }\approx `$ 14 in B, 12 in the V-band and 11-12 in I, J, K. In the models of this thesis, only one value of the radial scalelength is used, i.e. all the stellar components have a radial exponential distribution with the same $`\alpha _{\star }`$. A few authors have investigated the ratios of radial scalelengths in different passbands. Peletier et al. (1994) have compared B and K-band disk scalelengths for a diameter limited sample of 37 Sb-Sc galaxies with a uniform distribution of orientations with respect to the sky. Their aim is to separate, in $`\alpha _B/\alpha _K`$, the effects of extinction from those of a change in stellar population. If the ratio $`\alpha _B/\alpha _K`$ is mainly due to extinction, it increases with inclination, while it is constant if the change in stellar population is the dominant contributor to the galactic colour gradient. From the metallicity gradients a ratio of 1.17 is estimated, independent of inclination, while the observed value goes from 1.3 for face-on galaxies to 1.7 for edge-on ones (Peletier et al. 1995). Moreover, they observe from previous work that in dust-free late-type galaxies the ratio between scalelengths is small: $`\alpha _B/\alpha _I=1.04\pm 0.05`$, equivalent to $`\alpha _B/\alpha _K=1.08\pm 0.1`$ for almost any stellar population model. Thus they claim that the change in scalelength with $`\lambda `$ is mainly due to dust extinction, rather than being a reflection of different stellar components. On the contrary, comparing colour-colour plots of a sample of 86 face-on galaxies with a Monte Carlo radiative transfer model, De Jong (1996a) concludes that dust cannot be responsible for the gradient in colour. He suggests that the observed ratio $`\alpha _B/\alpha _K=1.22\pm 0.23`$ is caused by a change in stellar population in the outer parts of the disk. The disk models of this thesis are truncated at 6$`\alpha _{\star }`$. To explain star counts of faint sources in our Galaxy, Wainscoat et al. (1992) introduce a truncation at 4.3$`\alpha _{\star }`$, Robin et al. (1992b) and Robin et al. (1992a) at 5.6-6$`\alpha _{\star }`$, and Ruphy et al. (1996) at $`6.5\pm 0.9\alpha _{\star }`$.

### 3.6 Dust disk

The parameters of the dust disk are far more uncertain than those of the stellar disk. Usually the same functional form is used as for the stellar distribution, with independent scalelengths (Kylafis & Bahcall 1987, Byun et al. 1994, Bianchi et al. 1996, Xilouris et al. 1997; 1998; 1999). Devereux & Young (1990a) find a very good correlation between the mass of dust derived from IRAS fluxes at 60 $`\mu `$m and 100 $`\mu `$m and the total mass of gas inside the optical radius for a sample of 58 galaxies. Xu et al.
(1997) compared the extinction in the B band derived for a sample of 79 galaxies using an energy balance model with that obtained from the gas column density using the relation measured in the Galaxy (Eqn. 2.10). A good agreement is found. When the correlation is analysed separately for each gas phase, a tighter correlation is found for the H<sub>2</sub>, suggesting that extinction is due essentially to dust associated with the molecular gas. This is because in their sample molecular gas is the dominant component in the inner galaxy. Indeed, when galaxies with a dominant HI component are selected, extinction is mainly associated with the atomic gas. Therefore, it is reasonable to use the gas distribution as a tracer of the dust distribution. In luminous, face-on, late-type spirals, H<sub>2</sub> peaks in the centre and falls off monotonically with increasing distance from the centre. This contrasts markedly with the central HI depression and the nearly constant HI surface density across the rest of the optical disk (Young & Scoville 1991). The same behaviour has been observed in some early-type galaxies, although a good fraction of them present a central depression and a flatter ring distribution for the molecular gas. Conducting an energy balance for several cells associated with diffuse FIR emission on M31, Xu & Helou (1996b) found a face-on optical depth that is quite flat with radius. Because of their selection against FIR sources associated with molecular clouds and star-forming regions, the optical depth profile is close to the HI, rather than to the H<sub>2</sub> component. A dust ring in emission has been observed by SCUBA at 450$`\mu `$m and 850$`\mu `$m in the early-type galaxy NGC 7331 (Bianchi et al. 1998). The dust emission correlates well with the observed molecular ring (see Appendix A for a complete discussion). A flat distribution for the B-band optical depth of the Galactic disk has been found by Sodroski et al. (1997) from the optical depth at $`240\mu `$m derived from DIRBE observations. Nevertheless, most galaxies exhibit a centrally peaked molecular gas component, dominant over the atomic gas phase. This is the case for the late-type spiral NGC 6946 (Tacconi & Young 1986), whose dust distribution is the main concern of this thesis (Chapter 4). Therefore, I assume the dust to be distributed in a smooth radial and vertical exponential disk, similar to the stellar one (Sect. 3.3). The number density of dust grains can be written as
$$n(r,z)=n_0\,\mathrm{exp}(-r/\alpha _d-z/\beta _d),$$ (3.6)
with $`n_0`$ the central number density. The radial and vertical scalelengths $`\alpha _d`$ and $`\beta _d`$ can be selected independently of the analogous stellar parameters. Usually it is assumed that $`\alpha _d\simeq \alpha _{}`$ and $`\beta _d\simeq 0.5\beta _{}`$. The choice of the last parameter is mainly dictated by the impossibility of simulating the extinction lanes in edge-on galaxies with a dust distribution thicker than the stellar one (Xilouris et al. 1999). Recently there have been suggestions of more extended dust distributions, both from analyses of the extinction of starlight and from the FIR emission. Peletier et al. (1995), from an analysis of the variation of scalelength ratios in different colours with inclination, concluded that $`\alpha _d/\alpha _{}>1`$. Davies et al. (1997) modelled the Galactic FIR emission at 140 $`\mu `$m and 240 $`\mu `$m observed by the satellite $`COBE`$ with an extended dust distribution with scalelengths $`\alpha _d/\alpha _{}=1.5`$ and $`\beta _d/\beta _{}=2`$.
In all seven of the edge-on spiral galaxies in their sample, Xilouris et al. (1999) are able to fit the optical and NIR starlight with their radiative transfer model only by using a larger radial scalelength for the dust with respect to the stars: the mean value is $`\alpha _d/\alpha _{}=1.4\pm 0.2`$. The dust number density in Eqn. (3.6) is normalised in the model from the optical depth. For a model with face-on central optical depth in the V-band $`\tau _V`$, the central number density is given by
$$n_0=\frac{\tau _V}{2\beta _d\sigma _{\mathrm{ext}}(V)},$$ (3.7)
where $`\sigma _{\mathrm{ext}}(V)`$ is the extinction cross section. For a model at a specific wavelength, the absorption coefficient
$$k_\lambda (r,z)=n(r,z)\sigma _{\mathrm{ext}}(\lambda )$$ (3.8)
is integrated along a path to compute the optical depth, and there is no need to specify the absolute value of $`\sigma _{\mathrm{ext}}(\lambda )`$ but only the ratio $`\sigma _{\mathrm{ext}}(\lambda )/\sigma _{\mathrm{ext}}(V)`$, given by the assumed extinction law (Sect. 2.2). The dust disk has the same truncations as the stellar one, at 6 scalelengths both in the vertical and in the radial direction. In Chapter 4, I will explore various values of $`\alpha _d/\alpha _{}`$, $`\beta _d/\beta _{}`$ and $`\tau _V`$, to better reproduce the characteristics of the dust FIR emission.

### 3.7 The Monte Carlo code

The model used in this thesis is based on the BFG Monte Carlo code for the radiative transfer (complete with scattering) in dusty spiral galaxies. The BFG code originally included the radiative transfer of all the Stokes parameters, to also model polarisation. Dust properties were computed from the Draine & Lee (1984) dust model, using Mie’s theory for spherical grains. For use in this thesis the BFG code has been simplified: the radiative transfer is carried out only for the intensity, using empirical dust properties and phase functions (Sect. 2.2). Clumping (Bianchi et al. 1999b) is not yet included in this work. Within the Monte Carlo method, the life of a photon (i.e. a unit of energy in the program) can be followed through scattering and absorption processes, until the radiation is able to escape the dusty medium. I’ll give here a brief description of the computation scheme of the Monte Carlo code, referring the interested reader to the BFG paper. The main steps are the following. (i) The position of a photon in the 3-D space is derived according to the stellar distributions described in Sect. 3.3; the photon is emitted isotropically, with unit intensity. (ii) The optical depth $`\tau _T`$ through the dust distribution is now computed from the emission position along the photon travelling direction. A fraction $`e^{-\tau _T}`$ of all the energy travelling in that direction is able to propagate through the dust. With the Monte Carlo method it is then possible to extract the optical depth $`\tau `$ at which the photon impinges on a dust grain. This optical depth can be computed by inverting
$$\int _0^\tau e^{-\xi }\,d\xi =R,$$ (3.9)
where $`R`$ is a random number in the range [0,1]. If the derived $`\tau `$ is smaller than $`\tau _T`$, the photon suffers extinction; otherwise it escapes the dusty medium. This process is quite inefficient when the optical depth of the dust distribution is small, since most of the photons leave the dust distribution unaffected. To overcome this problem, the forced scattering method is used (Cashwell & Everett 1959, Witt 1977): essentially, a fraction $`e^{-\tau _T}`$ of the photon energy is unextinguished and the remaining $`1-e^{-\tau _T}`$ is forced to scatter.
When the optical depth is small ($`\tau _T<10^{-4}`$) or the photon path is free of dust, the photon escapes the cycle. Once $`\tau `$ is known, the geometrical position of the point where the photon collides with a dust grain is computed. (iii) A fraction of the photon energy, given by the albedo $`\omega `$, is scattered, while the remaining $`(1-\omega )`$ is absorbed. The scattering polar angle $`\theta `$, i.e. the angle between the original photon path and the scattered direction, is computed using the Henyey & Greenstein (1941) scattering phase function (Eqn. 2.12), by inverting
$$\int _0^\theta \varphi (\theta ^{})\mathrm{sin}\theta ^{}\,d\theta ^{}=R,$$ (3.10)
with $`R`$ a random number. The inversion of Eqn. (3.10) is given by the analytical formula
$$\theta =\mathrm{arccos}\left[\frac{1}{2g}\left(1+g^2-\frac{(1-g^2)^2}{(1+g(1-2R))^2}\right)\right],$$ (3.11)
with $`g`$ the asymmetry parameter (Sect. 2.1). Another angle is needed to define the direction of the scattered radiation. Since for spherical grains there is no preferential direction perpendicular to the original photon path, an azimuthal angle is extracted randomly in the range $`[0,2\pi ]`$. The original BFG code has been modified to store the information about absorption, to be used in the derivation of the dust temperature. Using the model symmetries around the vertical axis and about the galactic plane to improve the signal-to-noise, a map of the absorbed energy is produced as a function of the galactocentric distance and the height above the plane. (iv) The last two steps are then repeated, using the new direction of the scattered photon, the coordinates of the scattering point and the energy reduced by absorption. The cycle is repeated until the energy of the photon falls below a threshold value ($`10^{-4}`$ of the initial intensity) or until the exit conditions on $`\tau `$ are verified. After the exit conditions are satisfied, the photon is characterised by its last scattering point, its travelling direction and its energy. The two symmetries of the model, planar and axial, are exploited to reduce the computational time: if the model is supposed to be observed from a point at infinite distance in the (x,z) plane, each photon position is rotated around the symmetry axis until its direction is parallel to that plane; then, for each photon coming from (x,y,z), another with the same direction coming from (x,-y,z) is added; two other photons are added coming from (x,y,-z) and (x,-y,-z), with a direction specular to the original one with respect to the galactic plane. A total of 4 photons are thus produced from each one. The photons are then classified according to the angle between their direction and the symmetry axis, to produce maps of the galaxy as seen from different inclinations. This is done by dividing the whole solid angle into $`N_B`$ latitudinal bands of the same solid angle. In BFG and here, $`N_B=15`$. There are therefore 8 independent images, at mean inclinations of 20°, 37°, 48°, 58°, 66°, 75°, 82° and 90°. When models are produced for a specific angle only, as in Chapter 4, it means that a band of solid angle $`4\pi /15`$ is used, with a mean angle equal to the given one. Finally, all the photons in an angle band are projected onto the plane of the sky according to their point of last scattering. For the models of this thesis, I have used maps of 201x201 pixels, to cover a region of 12x12 stellar radial scalelengths around the centre of the galaxy.
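To make the sampling steps above concrete, here is a schematic Python fragment for the draws of Eqns. (3.9)-(3.11), with forced scattering included; it is a sketch for illustration, not an excerpt from the BFG code.

```python
import numpy as np

rng = np.random.default_rng()

def sample_forced_tau(tau_T):
    """Draw the interaction optical depth with forced scattering.

    Inverting Eqn. (3.9) with the photon energy restricted to the
    interacting fraction 1 - exp(-tau_T) gives
    tau = -ln(1 - R*(1 - exp(-tau_T))), so that 0 <= tau <= tau_T.
    """
    R = rng.random()
    return -np.log(1.0 - R * (1.0 - np.exp(-tau_T)))

def sample_hg_theta(g):
    """Scattering polar angle from the Henyey-Greenstein phase function,
    using the analytic inversion of Eqn. (3.11)."""
    R = rng.random()
    if abs(g) < 1e-6:   # isotropic limit of the phase function
        return np.arccos(1.0 - 2.0 * R)
    cos_t = (1.0 + g**2 - ((1.0 - g**2) / (1.0 + g * (1.0 - 2.0 * R)))**2) / (2.0 * g)
    return np.arccos(cos_t)

# Azimuth: no preferred direction perpendicular to the path for spherical grains
phi = 2.0 * np.pi * rng.random()
```

The forced-scattering draw restricts $`\tau `$ to $`[0,\tau _T]`$ by construction, which is what makes the method efficient at low optical depths.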
Maps of absorbed energy are derived to cover 6 dust scalelengths in the radial direction and in the positive vertical direction, in 101x101 pixels.

### 3.8 Normalisation of the radiative transfer output

The Monte Carlo model described in the previous section is monochromatic, since the optical properties of dust must be specified for a particular wavelength. The geometrical distributions of stars may also depend on $`\lambda `$ (Sect. 3.5). Therefore the absorbed energy maps contain information on the energy absorbed by dust from a single wavelength only: a map of the *total* energy absorbed by dust can be produced by running several monochromatic models to cover the Spectral Energy Distribution (SED) of stellar radiation, the main source of dust heating. To describe the output of the program, I use in this chapter a model with a stellar exponential disk of radial scalelength $`\alpha _{}=2.5`$ kpc (the choice for a model of NGC 6946, see Sect. 4.5) and $`\alpha _{}/\beta _{}=14.4`$ (Sect. 3.5). The adopted dust disk has the same radial scalelength as the stars, but a vertical scalelength half the stellar one. The central face-on optical depth in this model is $`\tau _V=5`$. Images have been produced for the 15 contiguous angle bands defined in BFG (Sect. 3.7). I use a synthetic SED for a spiral galaxy, derived with the spectrophotometric evolution model PEGASE<sup>1</sup><sup>1</sup>1The spectrum has been produced using the parameters for an Sbc galaxy given in Fioc & Rocca-Volmerange (1997), *without* taking dust into consideration, the extinction being modelled by the code described in this thesis. For the spectrum, I have assumed a galaxy of mass 10<sup>11</sup> M<sub>⊙</sub>. (Fioc & Rocca-Volmerange 1997). The spectrum is shown in Fig. (3.1). In Table (3.2) the 17 wavelength bands used in the model are defined: the bands from UV1 to K have essentially the same spectral coverage as the homonymous bands described by Gordon et al. (1997), for which they provide extinction and scattering properties (Sect. 2.2). Two bands (namely EUV and LMN) have been added to extend the spectral coverage in the ultraviolet up to the ionization limit and into the near infrared. A Monte Carlo simulation is run for each of the bands. The energy emitted in each band is computed by integrating the stellar SED over the band limits (Fig. 3.1). In the example of this chapter, models are normalised to the intrinsic energy, i.e. the energy a galaxy would emit in the optical without intervening dust. For each band, I show in Fig. (3.1) the amount of energy that is able to escape the dust, the rest being absorbed. When comparing the simulations to observed SEDs, as in the next chapter, the models are normalised to the *observed* energy, i.e. after the radiation has been processed by dust. The intrinsic SED is then inferred from the radiative transfer model. Usually 10<sup>7</sup> photons are run for each simulation. Approximately 24 hours of computing on a SUN ULTRA 5 workstation are needed for a complete set of 17 simulations with $`\tau _\mathrm{V}=1`$. Optical images are produced in units of surface brightness (L<sub>⊙</sub> kpc<sup>-2</sup> $`\mu `$m<sup>-1</sup> sterad<sup>-1</sup>). An example of the images produced by the radiative transfer code in the optical (B-band) is given in Fig. (3.3).

### 3.9 The FIR code

For each waveband, a map of the absorbed energy density as a function of the galactocentric distance and of the height above the galactic plane is produced.
Of all the radiation absorbed in a specific waveband, a fraction is absorbed by grains not emitting at thermodynamic equilibrium. Since I am interested in modelling only the thermal emission, a correction is applied to the absorbed energy maps, as described in Sect. (2.5). After all the absorbed energy maps are scaled to the energy input of each band, they are summed together, to produce a single map of the absorbed energy that goes into thermal emission. The final map, $`W_{\mathrm{abs}}(r,z)`$, has units of energy per unit time per unit volume (L<sub>⊙</sub> kpc<sup>-3</sup>). Knowing the number density of dust grains, $`n(r,z)`$, the power absorbed by a single grain, $`W_{\mathrm{abs}}(r,z)/n(r,z)`$, can be derived. Equating the absorbed and emitted radiation, a relation for the temperature $`T_\mathrm{d}(r,z)`$ can then be found. Assuming that the dust grains all have the same radius, it can be derived from Eqn. (2.13) that
$$\frac{W_{\mathrm{abs}}(r,z)}{n(r,z)}=4\pi a^2\int _0^{\infty }Q_{\mathrm{em}}(\lambda )\,\pi B_\lambda (T_\mathrm{d}(r,z))\,d\lambda .$$ (3.12)
In the case of an exponential dust distribution, the number density $`n(r,z)`$ is given by Eqn. (3.6). The central number density of Eqn. (3.7) can be rewritten in terms of the grain radius $`a`$ and the extinction efficiency as
$$n_0=\frac{\tau _\mathrm{V}}{2\beta _d\pi a^2Q_{\mathrm{ext}}(V)}.$$ (3.13)
Finally, substituting Eqns. (3.6) and (3.13) into Eqn. (3.12),
$$\frac{\beta _dW_{\mathrm{abs}}(r,z)}{2\tau _\mathrm{V}\,\mathrm{exp}(-r/\alpha _d-z/\beta _d)}=\int _0^{\infty }\frac{Q_{\mathrm{em}}(\lambda )}{Q_{\mathrm{ext}}(V)}\,\pi B_\lambda (T_\mathrm{d}(r,z))\,d\lambda .$$ (3.14)
A map of the temperature can thus be derived by inverting Eqn. (3.14). I have used the emissivity law $`Q_{\mathrm{em}}(\lambda )`$ defined in Eqn. (2.33). The temperature contour map is shown in Fig. (3.2). The emission coefficient (Eqn. 2.21) can be easily derived from the temperature through the formula
$`j_\lambda (r,z)`$ $`=`$ $`n(r,z)\sigma _{\mathrm{em}}B_\lambda (T_\mathrm{d}(r,z))`$ (3.15)
$`=`$ $`{\displaystyle \frac{\tau _\mathrm{V}}{2\beta _\mathrm{d}}}\mathrm{exp}(-r/\alpha _d-z/\beta _d){\displaystyle \frac{Q_{\mathrm{em}}(\lambda )}{Q_{\mathrm{ext}}(V)}}B_\lambda (T_\mathrm{d}(r,z))`$
It is interesting to note that both the determination of the temperature and the emission coefficient are independent of the grain radius $`a`$. If a dust model were to be used for $`Q_{\mathrm{em}}(\lambda )/Q_{\mathrm{ext}}(V)`$, rather than an empirically determined value, the emissivity would depend on the dust grain radius and on its composition. Therefore, the approach used here is analogous to using a mean value over the dust distribution of sizes and materials for both the radius $`a`$ and the ratio $`Q_{\mathrm{em}}(\lambda )/Q_{\mathrm{ext}}(V)`$. FIR images are created by integrating the emission coefficient along a given line of sight through the dust distribution, under the assumption that dust is optically thin to FIR radiation. Using the emissivity as in Eqn. (2.33), this is justified for any model with reasonable $`\tau _\mathrm{V}`$. As for the optical images, the far infrared images are produced in units of surface brightness (L<sub>⊙</sub> kpc<sup>-2</sup> $`\mu `$m<sup>-1</sup> sterad<sup>-1</sup>). FIR images have the same extent and resolution as the optical images, i.e. a region of 12x12 stellar radial scalelengths around the centre of the galaxy is mapped in 201x201 pixels. As an example, FIR images at 100 $`\mu `$m are shown in Fig. 3.3.
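To illustrate the inversion of Eqn. (3.14), the sketch below solves for $`T_\mathrm{d}`$ by root finding. The power-law emissivity ratio is a stand-in assumption: the actual coefficients of the empirical law of Eqn. (2.33) are not reproduced here.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, light speed, Boltzmann

def planck_lambda(lam, T):
    """Planck function B_lambda(T) in W m^-3 sr^-1."""
    x = H * C / (lam * KB * T)
    if x > 700.0:   # deep Wien tail: negligible, avoids overflow
        return 0.0
    return 2.0 * H * C**2 / lam**5 / math.expm1(x)

def emissivity_ratio(lam, q0=1e-4, lam0=100e-6, beta=2.0):
    """Placeholder power law for Q_em(lambda)/Q_ext(V); the thesis uses the
    empirical law of Eqn. (2.33) instead."""
    return q0 * (lam0 / lam) ** beta

def emitted_power(T):
    """Right-hand side of Eqn. (3.14), integrated from 1 micron to 3 mm."""
    val, _ = quad(lambda lam: emissivity_ratio(lam) * math.pi * planck_lambda(lam, T),
                  1e-6, 3e-3, limit=200)
    return val

def dust_temperature(lhs):
    """Invert Eqn. (3.14), where
    lhs = beta_d * W_abs / (2 * tau_V * exp(-r/alpha_d - z/beta_d))."""
    return brentq(lambda T: emitted_power(T) - lhs, 2.0, 200.0)
```

The emission coefficient of Eqn. (3.15) then follows directly, by evaluating the same emissivity ratio and the Planck function at the wavelength of interest.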
Both optical and FIR images can be analysed as real images. In Fig. (3.4) I show the surface brightness measured over a circular aperture corresponding to the half-light radius (in the B-band), for both optical and FIR images at an inclination of 20° (solid line). The half-light radius is the observed one, i.e. it has been derived from the surface brightness distribution in the B-band image. The surface brightness inside the half-light radius for a transparent galaxy seen at the same inclination is also shown (dashed line). From Fig. (3.4) it is not straightforward to derive a relation between the absorbed and the emitted energy, i.e. it is not possible to compare the area between the two curves in the UV/optical/NIR (the absorbed energy) with the area under the FIR curve (the energy emitted by dust). This is because we are not looking at the total energy emitted by the model, but at the energy coming from a specific inclination: because of the reduced extinction and the influence of scattering, a model at low inclination looks more transparent than in the edge-on case. Furthermore, part of the absorbed energy has been disregarded, because it does not go into thermal equilibrium processes. The sum of the total energy emitted by dust and of the total energy observed in the optical, over all the inclination bands, does equal the total intrinsic unextinguished stellar energy.

### 3.10 Summary

In this Chapter I have described the code used to model the FIR emission. A modified version of the BFG radiative transfer code is used to produce a map of the energy absorbed by dust as a function of the position within the dust distribution, together with the usual images of the attenuated emission from the stellar distribution. The whole spectral range of non-ionising stellar radiation is used. Using the model for the dust emissivity derived in Sect. 2.3, a temperature distribution is derived from the absorbed energy map. Finally, the FIR emission is computed, integrating over the dust number density and temperature distributions. Images of FIR emission for any wavelength and galaxy inclination are produced. In the next Chapter, I will use the code to model the FIR emission of the spiral galaxy NGC 6946. For a chosen dust distribution and optical depth, the output for the stellar emission from the radiative transfer code will be normalised to the observed stellar SED. The modelled FIR emission will then be compared with the observations. Dust parameters will be changed to match the observed FIR spectrum and the spatial distribution of the FIR images (as defined by a radial exponential scalelength).

## Chapter 4 Modelling NGC 6946

NGC 6946 is a large (D<sub>25</sub>=11.5’, De Vaucouleurs et al. 1991, RC3) nearby Sc galaxy, seen at an inclination of 34° (Garcia-Gomez & Athanassoula 1991). Tully (1988) gives a distance of 5.5 Mpc, similar to those obtained by other authors with a variety of methods (De Vaucouleurs 1979, Eastman et al. 1996, Schoniger & Sofue 1994), although there are estimates as large as 10.1 Mpc (Sandage & Tammann 1974). In this work I assume a distance of 5.5 Mpc, which gives a scale on the sky of 27 pc arcsec<sup>-1</sup>. The optical appearance of the galaxy is characterised by the prominent spiral arms (Fig. 4.1). Six separate spiral arms can be seen in optical images (Tacconi & Young 1990): the three spiral arms originating from the northeast quadrant are brighter and more developed than the others in the southwest.
In his high-resolution extinction maps based on an energy balance method, Trewhella (1998a; b) finds that the interarm region between the two prominent northeast spiral arms is a site of strong extinction, rather than being intrinsically less luminous. Indeed, light polarised by dust is observed in the interarm regions as well as in the spiral arms (Fendt et al. 1998). There is evidence for recent star formation along the spiral arms and a mild starburst in the centre (Tacconi & Young 1990). The distribution of atomic and molecular gas in the inner 10’ has been studied by Tacconi & Young (1986, Sect. 4.9). The galaxy is marginally resolved in FIR observations from IRAS and KAO (Engargiola 1991, Alton et al. 1998a), with the FIR emission following the giant HII regions along the spiral arms and the bright central emission. The low-resolution 200$`\mu `$m ISO image (Alton et al. 1998a) shows a morphology similar to the 100$`\mu `$m IRAS observation (Fig. 4.1). In this chapter I will apply the model described in Chapter 3 to the optical and FIR observations of NGC 6946, to determine the spatial distribution and content of dust in the galaxy.

### 4.1 The stellar Spectral Energy Distribution

The observed Spectral Energy Distribution of the stellar radiation in NGC 6946 has been constructed using data available in the literature and extrapolating a few data points at the edges of the wavelength range, where no observations were available. The SED (in units of surface brightness) for a circular aperture of 5’ (corresponding to the half-light radius in the B-band) is presented in Table 4.1. All the data have been corrected for a Galactic extinction $`A_B`$=1.73 (RC3) using a standard extinction law (Whittet 1992). Optical and Near Infrared data come from Engargiola (1991), who provides fluxes for NGC 6946 in the bands U, B, g, V, r, I, J, H, K for the half-light radius aperture. In the NIR part of the spectrum, the emission could be due in part to small grains of dust. The dust contribution can be computed from the Désert et al. (1990) dust model for dust heated by the local interstellar radiation field. According to the model, the emission in the K filter is 0.004 times the emission in the IRAS 12 $`\mu `$m filter. This is still valid even if the radiation field is 100 times the local one. Using the NGC 6946 emission at 12 $`\mu `$m, derived from IRAS HiRes images (Sect. 4.2), it is therefore possible to compute the contribution of dust to the emission in K and compare it to the observed emission in that band. As expected, the K-band emission can be considered as purely stellar, the dust contributing only about 0.3% of the total. The stellar emission at longer wavelengths has been extrapolated using the synthetic galactic SEDs of Fioc & Rocca-Volmerange (1997). In this wavelength range, the SED of their unextinguished spiral galaxy models is almost linear when $`\lambda F_\lambda `$ is plotted versus $`\mathrm{log}\lambda `$. A value at 5 $`\mu `$m has thus been computed from the K-band flux. Fluxes for the non-ionising UV radiation shortward of the U-band have been taken from Rifatto, Longo & Capaccioli (1995a): the authors give total magnitudes for 2400 galaxies in three photometric bands centred at 1650Å (*short*-UV), 2500Å (*medium*-UV) and 3150Å (*long*-UV). Data collected by several satellites (notably IUE) and by balloon and rocket-borne experiments with different apertures and sensitivities have been homogenised to a common scale.
After dividing the sample into three morphological bins (E/S0, Sa/Sb, Sc/Sd), standard luminosity profiles in the B-band (Buta et al. 1995) have been fitted to the data in each of the three photometric bands and the total magnitudes were derived. Using the luminosity profiles for Sc/Sd galaxies, I have derived the UV magnitudes for NGC 6946 inside the aperture used for the Optical and NIR data. Fluxes have been derived from magnitudes using the calibration described in Rifatto, Longo & Capaccioli (1995b). Errors in the fluxes are quite large, mainly arising from the aperture correction. The flux at the Lyman limit (912Å) has been extrapolated after observing that in a $`\lambda F_\lambda `$ versus $`\mathrm{log}\lambda `$ plot the observed SED is flat for the *short*\- and *medium*-UV. I have assumed that the same trend is valid down to the ionization limit. I haven’t included the ionising UV in the model: this will be justified later (Sect. 4.10.4).

### 4.2 The dust Spectral Energy Distribution

The FIR output of the code is compared to the SED of dust emission as measured from the IRAS and ISO maps of Alton et al. (1998a). The SED (in units of surface brightness) for a circular aperture equivalent to the half-light radius is presented in Table 4.2, in the same units as for Table 4.1. Shortward of 100$`\mu `$m I have used IRAS High Resolution (HiRes) images. Original data from the IRAS satellite (Neugebauer et al. 1984) have a coarse resolution (FWHM $`\sim `$ 1.5’ x 5’ at 60 $`\mu `$m). HiRes images are produced using a model of the response of the IRAS detectors and a process called the Maximum Correlation Method, to restore a resolution close to the diffraction limit (Rice 1993, and references therein). As an example, the resolution of HiRes images is 45” x 60” and 80” x 100” (FWHM) at 60 $`\mu `$m and 100 $`\mu `$m, respectively (Alton et al. 1998c; a). Derived values have been colour corrected using the corrections for NGC 6946 derived by Rice et al. (1988) (18%, -14%, 7%, 4% at 12, 25, 60 and 100 $`\mu `$m). Integrated values are consistent (within a 20% error, Engargiola 1991, Alton et al. 1998a) with the analogous data provided by Engargiola (1991), derived from previous enhanced-resolution IRAS images. Data at 200$`\mu `$m come from observations with the ISOPHOT instrument (Lemke et al. 1996) aboard the ISO satellite (Kessler et al. 1996), with a resolution of 117” (FWHM). Integrated values have an error of 15%, but the present calibration of the instrument may be overestimated by about 30% (Alton et al. 1998a). The value in Table 4.2 is consistent, within the errors, with the measurements by Engargiola (1991) on 200$`\mu `$m images taken by the airborne telescope KAO. Finally, I have included in Table 4.2 data at 160$`\mu `$m derived from the KAO telescope (Engargiola 1991), with a resolution of 45” (FWHM).

### 4.3 High resolution SCUBA observations of NGC 6946

I observed NGC 6946 using the Sub-millimetre Common User Bolometer Array (SCUBA) on the 15 m James Clerk Maxwell Telescope in Hawaii. Observations were carried out on April 10, 11 and June 17, 18, 19, 20 1998, at 450$`\mu `$m and 850$`\mu `$m. SCUBA consists of two bolometer arrays, of 91 elements for the short wavelengths and 37 elements for the long wavelengths. The short-wavelength array is optimised for observing at 450 $`\mu `$m and the long-wavelength array for observing at 850 $`\mu `$m (Holland et al. 1999). The camera is mounted on the Nasmyth focus of the telescope.
The arrays have a field of view of about 2.3 arcmin in diameter and can be used simultaneously, by means of a dichroic beamsplitter. The spacing between the bolometers does not instantaneously produce fully sampled images. Therefore, it is necessary to move the secondary mirror of the telescope according to specific patterns. For sources smaller than 2.3 arcmin, the jiggle-map mode is used: the secondary mirror moves according to a 64-point pattern to fully sample the selected region of sky at both long and short wavelengths, and chops to off-source positions to remove the sky background. The telescope nods to remove slowly varying atmospheric gradients. The highly inclined galaxy NGC 7331 (Bianchi et al. 1998, see also Appendix A) and the edge-on NGC 891 (Alton et al. 1998b) have been observed using this mode<sup>1</sup><sup>1</sup>1A more extensive description of observations with the jiggle-map mode is given in Appendix A.. The dimension of the arrays is suitable to contain the extent of the two galaxies along their minor axes. Chopping to relatively distant positions perpendicular to the major axes ensures the sampling of a source-free portion of the sky. For sources larger than 2.3 arcmin, like nearby face-on galaxies, the jiggle-map mode is not suitable. The need for source-free observations would lead to very large chop throws, resulting in a degradation of the sky background subtraction and of the beam size. The scan-map mode is therefore used in this case. The telescope scans the source at a rate of 24 arcsec per second, along specific angles to ensure a fully sampled map. Meanwhile the secondary mirror chops with a frequency of 7.8 Hz within the observed field. While this ensures a correct subtraction of the sky background, the resulting maps unfortunately have the profile of the source convolved with the chop. The profile of the source is restored by deconvolving the chop from the observed map by means of Fourier Transform analysis. The scan-maps of NGC 6946 presented here are fully sampled over an area of 8’x8’. Each set of observations consisted of six scans, with different chop configurations: chop throws of 20”, 30” and 65” along RA and Dec are needed to retrieve the final image. Data have been reduced using the STARLINK package SURF (Jenness & Lightfoot 1999). Images were first flat-fielded to correct for the different sensitivities of the bolometers. Noisy bolometers were masked and spikes from transient detections removed by applying a $`5\sigma `$ clip. A correction for atmospheric extinction was applied, using measurements of the atmospheric opacity taken several times during the nights of observation. The zenith optical depth varied during the six nights, with $`\tau _{450}=0.4-2.5`$ and $`\tau _{850}=0.1-0.5`$. The 450$`\mu `$m opacity on the last night was too high ($`\tau >3`$) for the source to be detected and therefore the corresponding maps were not used for this wavelength. Because of the chopping in the source field and not along the scan direction, each bolometer sees a different background: a baseline, estimated from a linear interpolation at the edges of the scan, has been subtracted for each bolometer in each map. Images have been corrected for systematic sky variations. Sky fluctuations could be derived by observing the time sequence of observations for bolometers with negligible source signal. However, for large objects observed in scan-map mode, it is difficult to disentangle the signal due to the sky from that of the source.
This problem is overcome by subtracting from the data a model of the source, obtained from the data themselves. The sky variation for all the bolometers is then derived and subtracted. Data taken with the same chop configuration were rebinned together into a map in an equatorial coordinate frame, to increase the signal to noise. Six maps with 3” pixels were finally obtained for each wavelength, combining 33 and 25 observations at 850 and 450 $`\mu `$m, respectively. In each of the six maps the signal from the source is convolved with a different chop function. The Fourier Transform (FT) of a map is therefore the product of the FTs of the source signal and of the chop. Since the latter is a sine wave, a simple division should retrieve the FT of the source and the deconvolved image, after applying an inverse FT back into image space. In principle, an observation with a single chop configuration would be sufficient for this purpose. However, problems arise near the zeros of the sine wave of the chop function FT. At these frequencies, the noise is boosted and the signal to noise of the final image is significantly reduced. Therefore, observations taken with different chop configurations are used, with chop throws selected in such a way that the zeros of one chop FT do not coincide with those of another, except at zero frequency. The noise introduced in the source FT by the division by one chop FT is thus smoothed by coadding with another source FT with noise at different frequencies. This method is known as the Emerson II technique (Holland et al. 1999, Jenness et al. 1998). The six chop configurations described above are recommended for this technique. Unfortunately the deconvolution introduces some artifacts in the images, such as a curved sky background. This may be due to residual, uncorrected sky fluctuations at frequencies close to zero, where all the chop FTs go to zero. Work to solve this problem is ongoing (Jenness, private communication). To enhance the contrast between the sky and the source, I have modelled a curved surface from the images, masking all the regions where the signal was evidently coming from the galaxy. The surface has then been subtracted from the image. Calibration was achieved from scan-maps of Uranus, which were reduced in the same way as the galaxy. Comparing data for each night I derived a relative error in calibration of 8 per cent and 17 per cent, for 850 $`\mu `$m and 450 $`\mu `$m respectively. From the planet profile, the beam size was estimated: FWHMs of 15.2” and 8.7” were measured for the beam at 850 and 450 $`\mu `$m, respectively. To increase the signal to noise, the 850 $`\mu `$m image has been smoothed with a gaussian of 9”, thus degrading the beam to a FWHM of 17.7”. The 450 $`\mu `$m image has been smoothed to the same resolution as the 850 $`\mu `$m one, to facilitate the comparison between features present in both. The sky $`\sigma `$ in the smoothed images is 3.3 mJy beam<sup>-1</sup> at 850 $`\mu `$m and 22 mJy beam<sup>-1</sup> at 450 $`\mu `$m. The final images, after removing the curved background and smoothing, are presented in Fig. 4.2. For each wavelength, the grey scale shows all the features $`>1\sigma `$, while contours start at $`3\sigma `$ and have steps of $`3\sigma `$. The 850$`\mu `$m image shows a bright nucleus and several features that clearly trace the spiral arms (see Fig. 4.3, where a U-band image of the galaxy (Trewhella 1998b) is shown with the sub-mm contours superimposed).
As already seen in optical images (Tacconi & Young 1990), the spiral arms originating in the northeast quadrant are more pronounced than the others, where only regions with bright HII regions have detectable emission. The 850$`\mu `$m image presents a striking similarity to the CO (J=2-1) emission map of Sauty et al. (1998), observed with the IRAM 30m radiotelescope at a comparable resolution. This is hardly surprising, since the molecular gas is the dominant component of the ISM over the optical disk of NGC 6946 (Tacconi & Young 1986, Sect. 4.9). The nucleus is elongated in the north-south direction, as observed for the central bar of molecular gas (Ishizuki et al. 1990, Regan & Vogel 1995). Emission associated with a more diffuse atomic gas component cannot be detected, for several reasons. First of all, the face-on inclination of the galaxy: since dust is optically thin to its own emission, a faint component can be observed only if the dust column density is large. This is the case for the highly inclined galaxies observed with the jiggle-map mode, i.e. NGC 7331 and NGC 891, where a higher signal to noise was obtained by coadding a smaller number of observations. The large face-on galaxy M51 has been observed using the scan-map mode and confirms the necessity of long integrations (Tilanus, private communication). Furthermore, chopping inside the source field removes not only the emission from the sky but also that from possible components with a shallow gradient: this may be the case for dust associated with the flat HI distribution in NGC 6946 (Sect. 4.9). Finally, a faint diffuse emission could have been masked by the artifacts mentioned above and subtracted together with the curved background. The 450$`\mu `$m image is much noisier than the 850$`\mu `$m one, because of the larger sky emission at this wavelength. Only the nucleus can be clearly detected, although most of the features at a 3-$`\sigma `$ level correspond to regions emitting in the long-wavelength image. It is difficult to derive an integrated flux inside the half-light radius, as in Sects. 4.1 and 4.2, because the subtraction of the curved background and the chopping within the observation field are likely to remove faint emission. Assuming that the regions without evident signal have an undetected emission of about 1 $`\sigma `$, upper limits can be derived for the integrated values. The nuclear emission and most of the bright spiral arm emission at 850$`\mu `$m is included within the optical half-light radius. Regions brighter than 3$`\sigma `$ have a flux of 2 Jy at this wavelength. An upper limit of 3 Jy can be derived for the total emission inside the half-light radius. At 450$`\mu `$m, only the nucleus is detected above 3$`\sigma `$, with a flux of 9.3 Jy. Assuming a 1$`\sigma `$ emission for the rest of the half-light radius aperture, the estimated upper limit is 27 Jy. These ranges, converted into the units used in this thesis, are shown in Table 4.2. For the nucleus it is possible to derive a temperature from the two sub-mm fluxes. The 3-$`\sigma `$ central region of the 450$`\mu `$m image has a flux of 1.2 Jy at 850$`\mu `$m, resulting in a temperature of T=20$`\pm `$7 K, using the emissivity as in Eqn. (2.33). The large error comes from the calibration uncertainties. Because of the lower resolution of the IRAS and ISO images, it is not possible to derive a temperature over the same small area from the 100 and 200 $`\mu `$m data.
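As an illustration of this colour-temperature estimate, the following sketch inverts the 450/850 $`\mu `$m flux ratio for an optically thin modified blackbody; $`\beta =2`$ is used here as a placeholder for the emissivity law of Eqn. (2.33), so the resulting number is indicative only.

```python
import math
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_nu(nu, T):
    """Planck function B_nu(T)."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def colour_temperature(S450, S850, beta=2.0):
    """Dust temperature from the 450/850 micron flux ratio, for optically
    thin emission with a kappa_nu ~ nu**beta emissivity (beta is a
    placeholder for the empirical law of Eqn. 2.33)."""
    nu1, nu2 = C / 450e-6, C / 850e-6
    ratio = lambda T: (nu1 / nu2) ** beta * planck_nu(nu1, T) / planck_nu(nu2, T)
    return brentq(lambda T: ratio(T) - S450 / S850, 3.0, 300.0)

# Nuclear fluxes quoted in the text: 9.3 Jy (450um, 3-sigma region), 1.2 Jy (850um)
print(f"T = {colour_temperature(9.3, 1.2):.0f} K")   # ~ 20 K, as in the text
```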
The mean temperature of dust inside the half-light radius, derived from IRAS and ISO, is T=26$`\pm `$3 K, at the upper limit of that measured with the SCUBA data. A similar temperature can be derived by substituting one of the two FIR observations with the integrated flux of an 850$`\mu `$m image smoothed to the same resolution. The gas column density can be derived from the 850$`\mu `$m flux, if the gas-to-dust ratio is known. Assuming a mean dust grain radius $`a=`$ 0.1$`\mu `$m and mass density $`\rho `$=3 g cm<sup>-3</sup> (Hildebrand 1983), the emissivity of Eqn. (2.33)<sup>2</sup><sup>2</sup>2Using Eqn. (2.32) (i.e. $`\beta =2`$ at any wavelength) instead of Eqn. (2.33) results in values 15% smaller than the ones quoted here. with an extinction efficiency in the V-band of 1.5 (Casey 1991), a gas-to-dust ratio of 160 (Sodroski et al. 1994) and T=26 K, a hydrogen column density $`N(H)=1.5\times 10^{21}`$ cm<sup>-2</sup> can be derived for the 1-$`\sigma `$ level, $`N(H)=1.7\times 10^{23}`$ cm<sup>-2</sup> for the central peak and $`N(H)=2.5\times 10^{22}`$ cm<sup>-2</sup> for the two bright HII regions in the northeast spiral arms. Ishizuki et al. (1990) observed the CO(J=1-0) emission in the central 65” of the galaxy, using the Nobeyama Millimeter Array. They derived a total H<sub>2</sub> mass of $`(4\pm 2)\times 10^8`$ M<sub>⊙</sub>. Integrating the 850$`\mu `$m image over the same area, a column density $`N(H)=5.0\times 10^{22}`$ cm<sup>-2</sup> is retrieved, resulting in a mass of $`9\times 10^8`$ M<sub>⊙</sub>. The two values are quite close, especially if we consider that the gas-to-dust ratio is supposed to decrease towards the centre of a galaxy with respect to the mean value used here (Whittet 1992, Sodroski et al. 1994). The derived column densities support the idea of an optically thick dust distribution. For instance, the diffuse component of the northeast spiral arms at a 3-$`\sigma `$ level would correspond to a V-band optical depth $`\tau _V\sim 2.2`$ (Eqn. 2.10). The quite high optical depth corresponding to the sky noise ($`\tau _V\sim 0.7`$) shows how difficult it is to obtain sub-mm images of dust emission in the outskirts of face-on galaxies, even with the high sensitivity of instruments like SCUBA.

### 4.4 Working procedure

From the observed stellar SED of Sect. 4.1, a continuous SED has been derived and then integrated over the 17 bands used in the model (Sect. 3.8): the energy emitted in each band (inside the B-band half-light radius) is thus obtained. For each of the bands a radiative transfer simulation is run, to produce an image seen from the same inclination as NGC 6946, i.e. i=34°. The B-band image is then analysed to measure the half-light radius of the simulation. Subsequently, the intrinsic *unextinguished* stellar energy emission for each band is derived by assuming that the surface brightness measured within the half-light radius aperture in the simulation is equal to the observed one. Because of this normalisation, all the models will have the same observed stellar SED for the images at 34°. The intrinsic *unextinguished* energy emitted by stars is derived from the observed SED using the information about the fraction of energy absorbed and the distribution of the emitted light in solid angle, both provided by the Monte Carlo code. From the intrinsic *unextinguished* stellar energy it is possible to derive the observed stellar SED in a dust-free case, assuming an isotropic emission and measuring the half-light radius for a transparent model.
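A minimal sketch of this normalisation step; the function and array names are illustrative, not those of the actual code.

```python
import numpy as np

def intrinsic_band_energy(model_image, aperture_mask, pixel_area,
                          sb_observed, E_injected):
    """Rescale one waveband of a simulation so that its half-light-aperture
    surface brightness matches the observed one; returns the implied
    intrinsic (unextinguished) stellar energy for that band.

    In practice the aperture comes from the half-light radius measured
    on the simulated B-band image, as described above.
    """
    sb_model = model_image[aperture_mask].sum() / (aperture_mask.sum() * pixel_area)
    return (sb_observed / sb_model) * E_injected
```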
After the normalisation and the correction for the MIR emission (Sect. 2.5), dust temperatures are computed from the maps of absorbed energy. Knowing the temperature distribution in the galactic model, images of dust emission at FIR wavelengths are produced, seen from the same inclination as the optical ones (Sect. 3.9). FIR images at several wavelengths are integrated inside the model half-light radius and the resulting dust emission SED is compared to the observed one described in Sect. 4.2. When comparing the model results to the observations, it would be more correct to smooth the images to the resolution of the instruments involved, to be sure of sampling the same areas. This is effectively done when a single image is analysed. For the spectrum, instead, I have chosen to present integrations of images at their original resolution<sup>3</sup><sup>3</sup>3As described in Sect. 3.7, a region of 12$`\alpha _{}`$ is covered by 201 pixels, thus giving a pixel size of 5.7” (150 pc), for the adopted $`\alpha _{}`$=95” (see Sect. 4.5). Therefore the ISO beam (117” FWHM) can be modelled by a gaussian of FWHM$`\sim `$20 pixels.. Because of the large area of integration, the difference between integrating images at full resolution or smoothed is generally small, $`\sim `$10%, smaller than the errors of the observations. The spatial distribution of dust emission in the models is also compared to the observed one. Alton et al. (1998a) measured optical and FIR scalelengths for a sample of seven spiral galaxies, after smoothing all the images to the poorest resolution, that of the 200$`\mu `$m ISO images. The ratios between the scalelengths in the B-band, at 100$`\mu `$m and at 200$`\mu `$m, measured by Alton et al. (1998a) on NGC 6946, are presented in Tab. 4.3. It is interesting to note that the scalelength ratios for the seven spiral galaxies of the Alton et al. sample are quite similar to each other, thus indicating similar dust heating scenarios in different galaxies. The images used in Alton et al. (1998a) are presented in Fig. 4.1. To compare the model results with the observations, I have therefore convolved both optical and FIR simulations with a gaussian of the dimension of the ISO beam. After the smoothing, scalelengths were measured in the same range as in Alton et al. (1998a), from 1.5’ to 3.5’ from the galactic centre (see the sketch below).

### 4.5 Adopted scalelengths

For the sake of simplicity, the models presented in this chapter all have the same geometrical distribution of stars. I will study the properties of dust extinction and emission by modifying the geometrical parameters of the dust distribution only. Only for a few test cases will I adopt different stellar distributions. For the stellar component, I use an exponential disk distribution with the same radial scalelength for all the wavebands of the simulation. As for the vertical scalelength, I scale it using the ratio $`\alpha _{}/\beta _{}=14.4`$ derived in Sect. 3.5. Again, a single vertical scalelength is used for all the wavebands. The results of this chapter concerning optical and FIR surface brightnesses and temperature distributions do not depend on the absolute values of the scalelengths, as long as the quantities are plotted as functions of scaled galactocentric and vertical distances. However, absolute values are needed if we want to derive correct values for the emitted and absorbed energies in each band. Radial scalelengths have been derived from the images described in Trewhella (1998b; a) (Fig. 4.1).
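Given an azimuthally averaged profile (obtained as described next), the scalelength measurement reduces to a linear fit to the logarithm of the profile. A minimal sketch, with illustrative names:

```python
import numpy as np

def exponential_scalelength(radius, profile, r_min, r_max):
    """Scalelength of an exponential profile I(r) ~ exp(-r/alpha), from a
    linear fit to log I(r) between r_min and r_max (e.g. 1.5'-3.5' as in
    Alton et al. 1998a); radius and profile are azimuthally averaged data."""
    sel = (radius >= r_min) & (radius <= r_max) & (profile > 0)
    slope, _ = np.polyfit(radius[sel], np.log(profile[sel]), 1)
    return -1.0 / slope   # I(r) ~ exp(-r/alpha)  =>  alpha = -1/slope
```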
Profiles averaged over elliptical isophotes have been produced, using a position angle $`PA=64^{\circ }`$ and an inclination $`i=34^{\circ }`$ (Garcia-Gomez & Athanassoula 1991). Under the assumption that the stellar radial scalelength is the same for any waveband, a value measured in the NIR should be closer to the intrinsic, unextinguished one, because of the small extinction in this spectral range. From a K-band image I have obtained $`\alpha _K=95^{\prime \prime }`$, corresponding to 2.5 kpc for the assumed distance of 5.5 Mpc. As a comparison, the B-band scalelength is $`\alpha _B=125^{\prime \prime }`$. Similar values (for the B-band) can be found in the literature (Engargiola 1991, Athanassoula et al. 1993, and references therein). I therefore use $`\alpha _{}=2.5`$ kpc and $`\beta _{}=170`$ pc.

### 4.6 The standard model

In radiative transfer models of spiral galaxies, it is usually assumed that $`\beta _\mathrm{d}\simeq 0.5\beta _{}`$ and $`\alpha _\mathrm{d}=\alpha _{}`$, i.e. the dust disk is half as thick as the stellar one (Byun et al. 1994, Bianchi et al. 1996, Davies et al. 1997, and references therein). This choice of parameters is motivated by the presence of extinction lanes along the major axes of edge-on galaxies, which cannot be explained without a thin dust disk. Moreover, dust is supposed to form preferentially in a young stellar environment, the star distribution of which is thinner than that of the old stellar population (Sect. 3.5). Therefore I have first produced models with those geometrical parameters for the dust distribution, for four different values of the face-on optical depth in the V-band, $`\tau _\mathrm{V}=`$0.5, 1, 5 and 10. In Fig. 4.4 I present the SEDs of the four models, both for the intrinsic unextinguished stellar radiation and for the FIR emission. The thick solid line represents the stellar output of the galaxy as derived from the observed data, which is used as an input (Sect. 4.4). Both in these models and in the ones presented later, the spike at $`\lambda \sim `$2000Å in the unextinguished SED is produced by the extinction bump at 2175Å characteristic of the Galactic extinction law (Sect. 2.2), to which corresponds a flat observed SED. Values for the total energy emitted by the galaxy are given in Table 4.4. Obviously, models with larger optical depths have a larger intrinsic energy output, to produce the same amount of observed light. It is interesting to note that the stellar energy output changes with the optical depth as well. This depends on the fact that the model is normalised to the observed surface brightness, i.e. on the amount of stellar radiation that escapes the galaxy along a specific direction from a given aperture. In a transparent model, stellar emission would be isotropic, the amount of energy emitted per unit solid angle being independent of the inclination from which the model is observed. In extinguished models, light escapes preferentially along lines of sight at low inclination, because of the reduced path through dust and the effectiveness of scattering with respect to the more inclined cases (Bianchi et al. 1996). The anisotropy of the stellar output increases with the optical depth. Therefore, of two models that emit the same amount of energy when observed at low inclination, the optically thin (i.e. more isotropic) one emits a larger amount of energy over the whole solid angle.
The aperture used to compute the surface brightness is defined photometrically, and this makes it dependent on the optical depth of the model: in an opaque galaxy less radiation comes from the central regions with respect to more transparent cases, resulting in a larger half-light radius. For a given half-light-radius surface brightness, optically thin models will require a smaller amount of energy emitted along the given inclination. In the models with $`\tau _\mathrm{V}=`$0.5 and 1, these two competing effects determine the constancy of the emitted energy. For the larger optical depths, the increase in the half-light radius is the more important effect, and the emitted energy increases. Table 4.4 gives the total absorbed energy in each model. The percentage contribution of the energy absorbed from each band to the total absorbed energy is shown in Fig. 4.5 (solid line). I show the values for $`\tau _\mathrm{V}`$=0.5, since the results for the other three models are quite similar. For these standard models, most of the radiation is absorbed from the Optical-NIR wavebands (60% for light at $`\lambda >4000`$Å). This is quite in contrast with the results obtained by Xu & Buat (1995), who carried out an energy balance on a sample of 134 nearby spirals with available UV, B and IRAS fluxes. They found that 60$`\pm 9`$% of the absorbed radiation comes from light in the non-ionising UV (912Å$`<\lambda <`$3650Å). It is difficult to compare this work with Xu & Buat, mainly because of the simplistic sandwich geometry they use. Apart from the geometry, the different result may be due to their assumption of isotropic scattering and of a smaller albedo for UV radiation with respect to the values used here (both increase the extinction in the UV), or to a selection effect in favour of bright UV galaxies, although Buat & Xu (1996) dismiss its presence. Trewhella’s (1998b) model of the NGC 6946 emission agrees with this work. Applying the correction of Sect. 2.5, approximately 32% of the total absorbed energy is estimated to go into MIR emission, the remaining 68% being available for thermal equilibrium processes and FIR emission. The percentage contribution of each optical band to the FIR emission only is also shown in Fig. 4.5 (dotted line). Since the small grains and PAHs responsible for non-equilibrium processes have a higher absorption efficiency at shorter wavelengths, the contribution of absorption from the Optical-NIR wavebands is higher after the MIR correction: now the radiation originally emitted at $`\lambda >4000`$Å contributes 70% of the FIR emission. In the models of the next sections, the MIR corrections are quite similar to the one for the standard model and therefore they will not be discussed separately. I will devote Sect. 4.10.5 to a comparison between the estimated and observed MIR emission. The total energy emitted in the FIR is given in Table 4.4. The temperature distributions for each model are shown in Fig. 4.6, as a function of the galactocentric radius and height above the plane. Apart from the central region, the distributions are very similar. At a galactocentric distance of 3$`\alpha _{}`$ (essentially the Sun position, for $`R_{}`$=8.5 kpc and $`\alpha _{}`$=3 kpc, see Sect. 3.4), the dust temperature is $`\sim `$21 K, as observed towards the poles in our Galaxy (Sect. 2.4.2). For a dust distribution thinner than the stellar one, the stellar radiation field is expected to increase with the height above the plane in an optically thick model (Draine & Lee 1984, Rowan-Robinson 1986), because the stars closer to the plane are shielded.
This is evident in the central regions ($`R<1.5\alpha _{}`$) of the models: when the optical depth increases, dust at higher temperature is found at larger heights above the plane. In the models with higher extinction, the effect can still be followed at larger galactocentric distances, the region at higher temperature approaching the galactic plane at large distances. Vertical gradients are very shallow, because of the greater extent of the galaxy in the radial direction with respect to the vertical one and because the stellar distribution is smooth. For larger optical depths, the effect previously described contributes to making them even shallower. The FIR spectrum produced by these temperature distributions is shown in Fig. 4.4. It is evident that only models with optical depth between $`\tau _\mathrm{V}`$=5 and 10 produce enough energy to match the observational data. This is a general property of all the models we are going to discuss: a substantial extinction is necessary to produce the observed SED in the FIR. The total amount of energy emitted in the FIR is 1-2$`\times `$10<sup>10</sup> L<sub>⊙</sub>, corresponding to a fraction 0.25-0.42 of the energy emitted by the galaxy in the UV-Optical-NIR (Table 4.4). Therefore, $`\sim 1/3`$ of the bolometric luminosity of the galaxy is absorbed by dust. An analogous result is obtained by Xu & Buat (1995) for their sample of 134 nearby galaxies. The peak of the FIR emission is quite close to 100$`\mu `$m, corresponding to an effective temperature slightly smaller than 30 K. The peak temperature is thus a reflection of the temperature of the central regions of the galaxy. Optically thick models have the maximum shifted towards longer wavelengths with respect to optically thin cases, because of the smaller temperatures (Fig. 4.6). The ratios between scalelengths, measured on optical and FIR images as described in Sect. 4.4, are presented in Table 4.5. None of the models is able to reproduce the observed ratios of Table 4.3. Increasing the optical depth, the FIR scalelengths increase, because of the smaller temperature in the centre. But at the same time the optical profiles become flatter, because of extinction. The increase in the ratio B/$`200\mu \mathrm{m}`$ is dominated by the second effect. Because the amount of dust colder than 20 K is not large in any of the models, the emission at $`200\mu \mathrm{m}`$ is caused by grains at the same temperature as for the emission at $`100\mu \mathrm{m}`$. Therefore the ratio $`100\mu \mathrm{m}/200\mu \mathrm{m}`$ is not sensitive to the optical depth. To summarise the results of this section, standard models can reproduce the observed temperature and the FIR spectrum (as long as the dust disk is optically thick). On the contrary, it is not possible to reproduce the spatial distribution of the emission: the FIR scalelengths are too small compared to the optical ones, while in the observations they are of the same order (Alton et al. 1998a). The large optical depths required to match the observed FIR spectrum aggravate the problem further, because of the increase of the optical scalelengths with extinction. Larger FIR scalelengths can be obtained with dust disks more extended than the stellar one: I explore this possibility in the next Sections. But first two tests are presented, to study how the results of this work are influenced by the use of different stellar distributions.

### 4.7 Test: two different stellar distributions

As stated in Sect.
4.5, in the models of this thesis I analyse emission and extinction by changing the parameters of the dust distribution, but keeping the same disk stellar distribution in all cases. Are the results dependent on this assumption? To answer this question, I present in this Section the results obtained using two different stellar distributions. Different stellar populations have different scaleheights, the younger stars being distributed in a thinner disk than the older ones. As a result, vertical scalelengths at shorter wavelengths are smaller (Sect. 3.5). The first model therefore has a stellar distribution whose vertical scalelength changes with wavelength. From the Galactic vertical scalelengths in B and V as in Tab. 3.1, I have derived a mean value $`\alpha _{}/\beta _{}=18`$ that I use for the simulations from the EUV to the V-band. From the values for the J, H and K-bands, I derive $`\alpha _{}/\beta _{}=11.3`$, to be used for the remaining bands in the optical-NIR. As for the radial scalelength, I use again $`\alpha _{}=2.5`$ kpc, thus having $`\beta _{}^{\mathrm{EUV}\mathrm{V}}=140`$ pc and $`\beta _{}^{\mathrm{R}\mathrm{LMN}}=220`$ pc. The dust distribution of this model has the same scalelengths as the young stellar population. The second stellar distribution includes a bulge, together with the same stellar disk as the standard model. In a late-type galaxy such as NGC 6946, the contribution of the bulge to the observed optical properties is small. Nevertheless, the presence of a concentrated, almost point-like source in the middle of the dust disk could alter the temperature distribution significantly, and the dust emission as well. For the bulge I have used an exponential distribution:
$$\rho (r)=\rho _0e^{-r/h},$$ (4.1)
where $`\rho _0`$ is the central luminosity density, $`r`$ is the distance from the centre and $`h`$ the bulge scalelength. Actually, the exponential has been used to describe the surface brightness of bulges, i.e. the projection of Eqn. (4.1) on the plane of the sky (De Jong 1996b). De Jong (1996c) finds that the ratio between the observed scalelengths of bulge and disk in a sample of 86 face-on galaxies is approximately 1/10. To produce this observed $`h`$, the intrinsic $`h`$ in Eqn. (4.1) needs to be $`\sim `$1.2 times smaller, for a bulge truncated at 6$`h`$. For the stellar disk $`\alpha _{}=2.5`$ kpc, hence $`h=200`$ pc. I have assumed a bulge-to-disk ratio of 0.1, suitable for late-type galaxy bulges (De Jong 1996c, G. Moriondo, private communication). In Fig. 4.7 I present the SEDs of the two test cases, for $`\tau _\mathrm{V}=5`$. The SED of the standard model with the same optical depth is shown for comparison. The two test models have spectral energy distributions very similar to the standard one. The intrinsic, emitted and absorbed energies are quite similar as well, apart from small variations due to the different spatial distributions of dust and stars. The emitted FIR energy is $`\sim `$10<sup>10</sup> L<sub>⊙</sub>. The temperature distributions for the $`\tau _\mathrm{V}=5`$ case are shown in Fig. 4.8. For the case with a bulge, the pattern of the distribution outside the centre is very similar to the one observed for the standard case: this is not a surprise, since both have the same stellar and dust disks, with a bulge that extends only up to 0.5 $`\alpha _{}`$. In the centre, the presence of the bulge introduces a stronger gradient, because of the more concentrated light emission. Temperatures in the core region reach higher values than for the standard model.
In Fig. 4.7 I present the SEDs of the two test cases, for $`\tau _\mathrm{V}=5`$. The SED of the standard model with the same optical depth is shown for comparison. The two test models have spectral energy distributions very similar to the standard one. Intrinsic, emitted and absorbed energies are quite similar as well, apart from small variations due to the different spatial distributions of dust and stars. The emitted FIR energy is ≈ 10<sup>10</sup> L<sub>⊙</sub>.

The temperature distributions for the $`\tau _\mathrm{V}=5`$ case are shown in Fig. 4.8. For the case with a bulge, the pattern of the distribution outside of the centre is very similar to the one observed for the standard case: this is not a surprise, since both have the same stellar and dust disks, with a bulge that extends only up to 0.5$`\alpha _{}`$. In the centre, the presence of the bulge introduces a stronger gradient, because of the more concentrated light emission. Temperatures in the core region reach higher values than for the standard model. As for the standard model, the shielding caused by dust makes central regions at higher $`z`$ hotter than those in the plane. In the model with two stellar scaleheights, the shielding effect is not present, because at the wavelengths with higher optical depths (shortward of the V-band) the stellar distribution is co-spatial with the dust. The centroidal pattern, with hotter regions closer to the centre of the galaxy, is typical of dust distributions with scaleheights equal to or larger than the stellar ones (see Sect. 4.8). Temperature values are of the same order as for the previous models. I have also tried models with smaller optical depths, the results for the energy output and temperature distributions showing the same trend as for the standard model.

The temperature distributions of the test models produce a FIR emission quite similar to the standard model, and similar ratios of optical and FIR scalelengths. Again, the B/$`200\mu \text{m}`$ ratio is smaller for smaller optical depths, the change with optical depth being dominated by the increase of the B scalelength with the optical depth. For the model with two stellar scaleheights, B profiles are flatter, because in the B-band dust and stars have the same thickness, while in the model including the bulge, B profiles are steeper, the bulge affecting the profile even after smoothing to the ISO resolution. Nevertheless, these changes in the B profile are small. Similar results to the standard model are found also for the 100/200 scalelength ratio. The value for the two-scaleheight disk model is the same as for the standard one, while for the bulge model, the presence of a hotter centre reduces the ratio by only 6%, not enough to explain the observed one.

In summary, the two (arguably more realistic) stellar distributions do not produce results significantly different from those for the stellar disk described in Sect. 4.5, which is therefore used in the rest of the thesis.

### 4.8 Extended disk models

As outlined in Sect. 1.6 and 3.6 there is evidence for dust distributions more extended than the stellar ones, both from optical and FIR observations. To analyse the effect of a large dust disk on the FIR emission, I have first run models with a larger dust radial scalelength, $`\alpha _\mathrm{d}`$, keeping the other parameters as for the standard model of Sect. 4.6. The SED of an extended model with optical depth $`\tau _\mathrm{V}=5`$ and $`\alpha _\mathrm{d}=1.5\alpha _{}`$ (Davies et al. 1997, Xilouris et al. 1999) is shown in Fig. 4.9, together with the standard model of the same optical depth. As for the models of Sect. 4.6, only optically thick cases are able to match the observed SED. For the same optical depth the extended model has a higher extinction (e.g. 44% of the energy is absorbed in the V-band against the 34% of the standard model). This is evident in the SEDs of FIR and intrinsic optical radiation shown in Fig. 4.9.

The temperature distribution for the extended model is shown in the central panel of Fig. 4.10. For ease of comparison, the temperature distribution of the standard model is shown again (in the right panel), with the same scale as for the new model. Within a radius of 6 $`\alpha _{}`$ (the extent of the stellar disk) the temperature pattern of the extended model is quite similar, apart from a small difference due to the normalisation. This is reflected in the peak of the FIR SED, which is essentially the same in both models.
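As an aside, the correspondence between the position of the FIR peak and the dust temperature, used repeatedly in this chapter, follows a Wien-like displacement law: for a modified blackbody with emissivity proportional to ν<sup>β</sup>, the peak of ν<sup>β</sup>B<sub>ν</sub>(T) occurs at x = hν/kT solving x = (3+β)(1-e<sup>-x</sup>). A minimal numerical sketch (illustrative only; the two values of β bracket the emissivity law of Eqn. 2.33):

```python
import numpy as np
from scipy.optimize import brentq

h_over_k = 4.799e-11   # h/k in s K
c = 2.9979e10          # speed of light in cm/s

def peak_wavelength_um(T, beta):
    """Wavelength (micron) of the peak of nu^beta * B_nu(T)."""
    n = 3.0 + beta
    x = brentq(lambda x: x - n * (1.0 - np.exp(-x)), 0.1, 20.0)
    return 1e4 * c * h_over_k / (x * T)     # lambda = c/nu, with nu = x*T*(k/h)

for beta in (1.0, 2.0):
    print(f"beta={beta}: a 30K grey body peaks at {peak_wavelength_um(30.0, beta):.0f} um")
# beta=2 gives ~97 um, beta=1 gives ~122 um: a grey body slightly below
# 30K indeed peaks in the neighbourhood of 100 um.
```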
Outside of 6$`\alpha _{}`$, where the stellar distribution is truncated, dust is colder and it does not modify the shape of the SED. The steeper gradient in the temperature distribution that can be observed in this and the other extended models at 6$`\alpha _{}`$ is obviously due to the truncation of the stellar disk. A truncation is indeed suggested by counts of faint stellar sources in the Galaxy (Sect. 3.5). I have run a few tests with stellar distributions truncated at the same distance as the dust disks, to avoid having dust in regions without local stellar emission. The steeper gradient disappears and a larger distance is needed to reach the same temperature. However, the changes are small: the general trend in the temperature distributions is conserved and the maps of FIR emission are not modified appreciably. It is interesting to note that for $`R>6\alpha _{}`$, dust is colder on the plane than above it, because starlight is seen through higher optical depths along the plane. The temperature distribution of extended models at smaller optical depths has the same general characteristics as for the case presented here.

With regard to scalelengths, the same problem as for the standard model is present in the $`\alpha _\mathrm{d}=1.5\alpha _{}`$ extended model. Despite the increase of the FIR scalelengths in a model with an extended dust distribution, the effect is not large enough to compensate for the increase of the B scalelength in optically thick models. Therefore, optically thick models are necessary to have a FIR energy output that matches observations, while optically thin models have ratios of scalelengths closer to the observed ones. For models with $`\tau _\mathrm{V}=0.5`$, the ratio B/200 is 1.15 and it increases with the optical depth. The ratio 100/200 is 0.75, almost independent of the optical depth.

A further increase in the dust scalelength can solve the problem. The SED for an extended model with $`\alpha _\mathrm{d}=3\alpha _{}`$ and $`\tau _\mathrm{V}`$=3 is also presented in Fig. 4.9. As already seen, extending the dust scalelength results in a larger fraction of energy absorbed, for the same optical depth. Therefore a smaller optical depth, but still in the optically thick regime, can provide the right amount of absorbed energy. The new model gives a fit to the SED as good as the previous one. The assumed dust scalelength is the same as the one that can be derived from the distribution of atomic gas, although a smaller optical depth is derived from HI observations, if the local gas-to-dust mass ratio is assumed (Sect. 4.9). The temperature distribution is presented in the left panel of Fig. 4.10. The behaviour of the temperature values is analogous to that for the $`\alpha _\mathrm{d}=1.5\alpha _{}`$ model, although a larger quantity of colder dust is present. The resulting FIR scalelengths are larger. This, together with the reduction in the optical depth, leads to a B/200 scalelengths ratio of 0.98, close to the observed value. The 100/200 scalelengths ratio is again bigger than the observed one, with a value of 0.66.

Fitting the Galactic FIR emission observed by the instrument DIRBE aboard the satellite COBE, Davies et al. (1997) find that the vertical scalelength of dust should also be larger than the stellar one. Following their results, I have studied the effects of increasing the ratio $`\beta _\mathrm{d}/\beta _{}`$ from 0.5 (standard model) to 2, for the extended model with $`\alpha _\mathrm{d}=1.5\alpha _{}`$.
A model with a dust vertical scaleheight larger than that of the stars obviously suffers a greater extinction (now 60% of the energy is absorbed in the V-band), because part of the dust distribution acts as a screen in front of the stars, i.e. the configuration most effective in extinguishing radiation. Despite the increase in extinction, optically thick models are still required to match the observed FIR emission (Fig. 4.9). The temperature distribution for this model is shown in Fig. 4.11. Temperatures are generally higher than those for the previous models, and this is reflected in the small displacement of the peak of the FIR SED in Fig. 4.9. The temperature pattern is centroidal, typical of sources being less concentrated spatially than the absorbers. Beyond the stellar region, outside 6$`\alpha _{}`$, the vertical gradient reduces, becoming more similar to that of the models with a smaller dust scaleheight. This is because in this region scattered light contributes to the radiation field, and it is distributed with the dust. For optically thin models, the extended vertical scalelength does not introduce changes in the ratio between optical and FIR scalelengths. In the optically thick case, though, the B-band scalelength is increased as an effect of extinction and the ratio B/200 becomes larger.

In conclusion, models with a dust distribution more extended radially than the stars have ratios between optical and FIR scalelengths smaller than in standard models. As for standard models, optically thick cases are required to match the observed FIR SED. For the scalelengths ratio to be close to the observed value in optically thick models, it is necessary to have a dust distribution with $`\alpha _\mathrm{d}`$ ≈ 3$`\alpha _{}`$. Such a scalelength is similar to that derived from HI observations (Sect. 4.9). The model with $`\alpha _\mathrm{d}=1.5\alpha _{}`$ deduced from the works of Davies et al. (1997) and Xilouris et al. (1999), instead, can provide small values of the scalelength ratio only in the optically thin case, while the optically thick case is still required to match the observed FIR SED. Assuming $`\beta _\mathrm{d}=2.0\beta _{}`$ does not improve the modelling.

### 4.9 A model with two dust disks

Tacconi & Young observed NGC 6946 both in HI (1986) and in CO (1989) emission. From the observations they derived the column densities of atomic and molecular hydrogen as a function of the galactocentric radius (Tacconi & Young 1986). The H<sub>2</sub> column density was observed to have a steep profile, with an exponential scalelength quite close to that of the optical emission (≈ 90”, i.e. ≈ 1 $`\alpha _{}`$, according to the values adopted in Sect. 4.5). The atomic gas, instead, presents a dip in the centre, the column density reaching a maximum at ≈ 180” (≈ 2 $`\alpha _{}`$), then declining exponentially with a scalelength of 300” (≈ 3 $`\alpha _{}`$). Both molecular and atomic gas have the same column density ($`10^{21}`$ H atoms cm<sup>-2</sup>) at ≈ 250”. HI and H<sub>2</sub> have almost the same masses, but the atomic gas is spread over a much broader distribution. The gas morphology observed in NGC 6946, with a centrally peaked, monotonically decreasing H<sub>2</sub> column density and a much shallower HI profile, is typical of late type spiral galaxies (Young & Scoville 1991). Since the H<sub>2</sub> distribution follows closely the emission detected by IRAS, Davies et al.
(1999b) suggest that cold dust associated with the atomic gas component could be responsible for the broader $`200\mu `$m profile. As shown in Sect. 4.8, broad distributions of dust have colder temperatures than the standard one in the outskirts: this affects the emission at longer wavelengths, producing broader profiles. Motivated by the observations, I have therefore introduced a second dust disk in the model, to represent the dust associated with HI. To mimic the column density of atomic gas, the dust density of this disk falls off exponentially with a scalelength of 3 $`\alpha _{}`$ at R $`>2\alpha _{}`$, being flat for smaller radii. A standard disk as described in Sect. 4.6 has been used for the dust associated with the molecular component. Following the results of Sect. 4.8, a larger vertical scalelength does not improve the modelling significantly: I have thus used $`\beta _\mathrm{d}=0.5\beta _{}`$ for both disks.

As for the optical depth of the model, it has been derived from the gas column density of Tacconi & Young (1986), using the relation between E(B-V) and total hydrogen (atomic + molecular) column density of Eqn. (2.10), together with the mean Galactic extinction law (Sect. 2.1). Extrapolating from the column density at ≈ 250”, I derived a central face-on optical depth $`\tau _\mathrm{V}^m`$ ≈ 10 for the dust associated with the molecular disk and $`\tau _\mathrm{V}^a`$ ≈ 0.5 for the atomic gas disk. Using the central value given by Tacconi & Young (1986), a larger central optical depth would have been derived ($`\tau _\mathrm{V}^m`$ ≈ 18): the central region may be the site of a moderate starburst (Engargiola 1991) and have different characteristics with respect to the smooth medium. I have therefore preferred the extrapolated value. A high optical depth through the central region of the galaxy has also been derived by Engargiola (1991) and Devereux & Young (1993). Evans (1992) derived a central face-on optical depth of $`\tau _V=67`$, using a TRIPLEX model (Sect. 3.1) and the energy balance method. The optically thick central region is also confirmed by the high resolution extinction maps of Trewhella (1998b; a).
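Since Eqn. (2.10) is not reproduced in this chapter, the column-density-to-opacity conversion can be sketched using the widely adopted Galactic calibration of Bohlin et al. (1978), N<sub>H</sub>/E(B-V) = 5.8 × 10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup>, with R<sub>V</sub>=3.1, as a stand-in; the numbers it produces (≈8 and ≈0.6) are therefore only indicative of the adopted $`\tau _\mathrm{V}^m`$ ≈ 10 and $`\tau _\mathrm{V}^a`$ ≈ 0.5:

```python
import numpy as np

# Stand-in for Eqn. (2.10): Galactic calibration N_H/E(B-V) (Bohlin et al.
# 1978) and R_V = A_V/E(B-V) = 3.1.
N_H_PER_EBV = 5.8e21        # H atoms cm^-2 mag^-1
R_V = 3.1

def tau_V(N_H):
    """Face-on V-band optical depth for a hydrogen column N_H [cm^-2]."""
    A_V = R_V * N_H / N_H_PER_EBV     # extinction in magnitudes
    return A_V / 1.086                # tau = A / (2.5 * log10(e))

# Both gas phases reach N_H = 1e21 cm^-2 at R ~ 250" (Tacconi & Young 1986).
# Molecular disk: exponential, scalelength ~90", extrapolated to the centre.
N_mol = 1e21 * np.exp(250.0 / 90.0)
# Atomic disk: scalelength ~300" outside the peak at ~180", flat inside.
N_atom = 1e21 * np.exp((250.0 - 180.0) / 300.0)

print(f"tau_V (molecular disk) ~ {tau_V(N_mol):.1f}")   # ~8
print(f"tau_V (atomic disk)    ~ {tau_V(N_atom):.1f}")  # ~0.6
```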
The SED for this new model is presented in Fig. 4.12, together with the SED for the standard model with the same optical depth as the dust disk associated with the molecular component ($`\tau _\mathrm{V}=10`$). The FIR output is higher than that of the standard model, because of the extra dust disk. The temperature distribution inside the stellar disk is similar in the two models (as the peak of the FIR emission shows), while the behaviour at larger radii is close to that of the extended model of Sect. 4.8 in the same region. As seen in Sect. 4.8, an extended model with $`\alpha _\mathrm{d}=3.0\alpha _{}`$ can produce large FIR scalelengths. However, in the model of this Section, the disk associated with the molecular component dominates the FIR emission. The behaviour of the model is close to that of a high optical depth standard model rather than to an extended model, with a B/200 ratio ≈ 1.86. I have also run double-disk models reducing the optical depth of the disk associated with the molecular component. The simulations for $`\tau _\mathrm{V}^m=`$ 1 and 5 are also presented in Fig. 4.12. As in Sect. 4.8, a model with $`\tau _\mathrm{V}^m=5`$ matches the FIR emission quite well, but the scalelengths ratio is higher than for an extended disk. The same holds for the model with $`\tau _\mathrm{V}^m=1`$, which moreover is not able to fit the observed SED. Therefore, it is not possible to explain the observed ratio of FIR and optical scalelengths with a double disk model inferred from the observed distributions of molecular and atomic gas.

### 4.10 Discussion

In this section I discuss a few aspects of the models presented up to this point, to provide an overall view of how the FIR scalelengths, emission and dust temperature vary with the parameters of the dust distribution. Also discussed is the validity of the approximations of the model, namely the assumption of smooth distributions for stars and dust, the neglect of the ionising UV in the stellar SED, and the MIR correction used to derive the FIR emission from the total energy absorbed by dust.

#### 4.10.1 FIR scalelengths and SEDs

Once a model for the radiative transfer in a spiral galaxy is available, it is relatively simple to derive an optical depth by comparing the stellar SED and the FIR emission from dust. This energy balance method has been applied to NGC 6946 by Evans (1992) and Trewhella (1998b; a), using essentially the same set of data as in this work. Both authors used the TRIPLEX model of Disney et al. (1989), i.e. an analytical approximation for the radiative transfer, neglecting scattering, in a standard model. The neglect of scattering results in an underestimate of the optical depth, since the effective opacity decreases when dust is allowed to scatter radiation (Bianchi et al. 1996): Trewhella (1998b; a) corrected for this effect assuming that a model inclusive of scattering can be simulated by a pure absorption TRIPLEX with optical depth reduced by a factor (1-$`\omega `$). Their results are compatible with NGC 6946 being optically thick through its centre. When I constrain the fit to the energy balance only, I obtain analogous results for the standard model: optically thick models with face-on optical depth $`\tau _\mathrm{V}`$ ≳ 5 are necessary to match the observed SED. From a sample of 134 nearby spirals with the same ratio of bolometric luminosity absorbed by dust (≈ 1/3) as in NGC 6946, Xu & Buat (1995) derived a mean optical depth $`\tau _B=0.60`$, compatible with the sample being optically thin. This is mainly because they used a plane parallel sandwich model, so that their optical depth is more representative of a mean opacity over the whole galactic disk rather than of the central value.

However, the main aim of the model of this thesis is to describe both the SED of the FIR emission and its spatial distribution. Alton et al. (1998a) measured a value close to unity for the ratio between the scalelengths of images in the B band and at 200$`\mu `$m, in a sample of seven galaxies including NGC 6946. The FIR scalelengths increase with the optical depth, for any model and dust emission wavelength. Unfortunately, for the optically thick models necessary to match the SED, the B-band scalelength increases with the optical depth as well, because of the extinction. Therefore, the ratio B/200 is smaller in the optically thin cases. Besides, the scalelength ratio measured on standard models is always larger than the observed one, even for small values of the optical depth. Values close to unity can be explained only if the dust distribution is more extended radially than the stellar one (Davies et al. 1999b). Fits of surface brightness in edge-on galaxies (Xilouris et al. 1999) and models of FIR emission (Davies et al. 1997) suggest that the dust scalelength is ≈ 1.5 times the stellar one. When such an extended model is adopted (Sect.
4.8) the FIR scalelengths are indeed larger than those for the standard model, for any value of the optical depth and of the wavelength of emission. Values closer to the observed can now be reached, but the increase of the B-band scalelength with opacity is still large enough to offset the increase in the FIR scalelengths. We are therefore faced with two mutually exclusive situations: optically thin extended models, with both the B and 200$`\mu `$m scalelengths consistent with observations but a FIR output smaller than observed; and optically thick models with the required SED, but with the 200$`\mu `$m emission more concentrated than the optical one. To obtain a value for the B/200 scalelengths ratio close to the observed one, while still producing an adequate energy output in the FIR, it is necessary to increase the dust scalelength further. An optically thick model with $`\tau _\mathrm{V}=3`$ and $`\alpha _\mathrm{d}=3.0\alpha _{}`$ is indeed able to provide a good fit to both the B/200 scalelengths ratio and the dust emission SED. The dust distribution in this case has a scalelength similar to that observed for the atomic gas in NGC 6946, although a smaller optical depth is derived from HI observations, under the assumption of a gas-to-dust ratio as in the solar neighbourhood (Sect. 4.9). In Sect. 4.9 I produced a model with two dust distributions, associated with the two gas components. As already said, the atomic gas requires an optically thin, extended disk, while the molecular gas suggests an optically thick standard distribution. Such a model fails to reproduce the observed properties, since it is the optically thick disk associated with the molecular component that dominates the model behaviour.

For NGC 6946 the scalelength derived from the 100$`\mu `$m IRAS image is about half of that for the 200$`\mu `$m ISO observation (Alton et al. 1998a, Davies et al. 1999b). For the models presented here, the ratio 100/200 is larger, always bigger than 0.7. As already outlined, this ratio depends mainly on the geometric properties of the dust distribution; for a specific model, the ratio 100/200 is almost constant with the optical depth, both scalelengths increasing at the same rate with increasing opacity. This is because both emissions are due to dust at the same temperature. A larger scalelength for the 200$`\mu `$m emission would be possible if there were a large amount of extended cold dust emitting at this wavelength but not at 100$`\mu `$m. Indeed, when extended models are considered, the ratio 100/200 is smaller than for the standard model, because dust at temperatures lower than 15K is present. Nevertheless, the amount of colder dust is never enough to match the observed values. On the other hand, the small value of 100/200 could also be due to more concentrated hot dust, this time contributing more to the 100$`\mu `$m than to the 200$`\mu `$m emission. This would be the case if dust is heated preferentially by hot stars in HII regions (see Sect. 4.10.3). Due to this uncertainty, it is better to use the B/200 scalelength ratio, rather than the 100/200 one, to discriminate between models.

#### 4.10.2 Temperatures

In this section I compare the temperature distributions in the models with temperatures actually measured. As shown previously, the temperature distribution does not change substantially from one model to the other. In Fig. 4.13 I plot, for three representative models, the temperature along the galactic plane as a function of the galactocentric distance.
I chose a standard model (Sect. 4.6), a model with the stellar vertical scalelength changing from the UV to the NIR (Sect. 4.7) and a model with $`\beta _\mathrm{d}=2\beta _{}`$. In all three models $`\tau _\mathrm{V}=5`$ and $`\alpha _\mathrm{d}=\alpha _{}`$. Models with parameters different from the plotted ones have similar spans in the temperature distributions.

The best determinations of temperatures for dust heated by the diffuse ISRF are those obtained for the Galaxy, because of the availability of high signal-to-noise spectra and images at $`\lambda >100\mu `$m (mainly from instruments aboard the satellite COBE). Shorter wavelengths may trace hotter dust associated with star-forming regions and are certainly contaminated by non-thermal emission from very small grains. The latter seems to be the explanation of the relative constancy with longitude of the temperature along the Galactic plane derived from the use of 60$`\mu `$m and 100$`\mu `$m IRAS data only (Sodroski et al. 1989). When 140$`\mu `$m and 240$`\mu `$m images from the instrument DIRBE on board COBE are used (Sodroski et al. 1994; 1997), the derived temperatures decrease as a function of the Galactocentric distance, as predicted for the ISRF (which shows the same behaviour). The determination of temperature in the Galaxy (as well as in other edge-on galaxies) is affected by a projection effect: when a single temperature is assumed to fit a spectrum or a flux ratio along a specific line of sight through the Galaxy, results are biased towards higher values of the temperature (Sodroski et al. 1994). The bias is not strong, since the radial gradients derived for the Galaxy are very shallow: Sodroski et al. (1994) compared the observed variation with longitude and latitude of the temperature derived from 140$`\mu `$m and 240$`\mu `$m DIRBE data and concluded that it is consistent with a model where the temperature varies exponentially with a radial scalelength of 21 kpc; a scalelength of 35.7 kpc is derived by Davies et al. (1997), after fitting the temperature variation with longitude and latitude and the DIRBE fluxes at 140$`\mu `$m and 240$`\mu `$m with an extended dust model (Sect. 4.8). The slopes of these two exponentials at the Sun's distance from the centre of the Galaxy are shown in Fig. 4.13.

The models previously described present very small variations of temperature with height above the galactic plane. Therefore it is straightforward to derive a temperature for dust at the Sun's distance by observing FIR emission at high latitudes. Using the temperature maps derived by Schlegel et al. (1998) from 100 $`\mu `$m and 240 $`\mu `$m DIRBE images, the mean temperature for a region of 20° diameter around the Galactic north pole is 21K, when the data are corrected for the emissivity law used in this work (Eqn. (2.33); see also Sect. 2.4.2). After an analogous correction, this temperature is consistent with the temperature of the warm component derived by Reach et al. (1995) using high latitude spectra at $`\lambda >104\mu `$m observed by another instrument aboard COBE, the spectrophotometer FIRAS. This value is also presented in Fig. 4.13.

Sodroski et al. (1997) use three-dimensional HI and H<sub>2</sub> maps (assuming a Galactic rotation curve to convert velocities into distances) and thermal radio-continuum observations to decompose the Galactic FIR emission observed by DIRBE into three components, associated with the atomic, molecular and ionised gas phases.
Several properties, among them the temperature, are retrieved for each dust component in four annuli at different distances from the centre. The component associated with the atomic gas is supposed to be heated by the diffuse ISRF. I plot in Fig. 4.13 the temperature of the dust associated with the atomic gas for each annulus, after scaling the distances to a Galactic scalelength $`\alpha _{}=3`$ kpc and correcting the temperature for the emissivity law used in this work. The vertical error bars represent the error in the temperature determination, while the horizontal ones give the width of each annulus. From Fig. 4.13 it seems that models with a larger ratio between the vertical scalelengths of dust and stars describe the temperature profile better. However, the decomposition of Sodroski et al. has large errors. Furthermore, the mean temperatures for each annulus are biased towards the higher values and thus are more representative of a point in the inner part of the annulus, rather than of the mid-point at which they have been plotted. Therefore, on the basis of their temperature profiles, none of the models in Fig. 4.13 can be excluded, all of them being consistent with the observed data.

It is more difficult to derive temperature gradients in external galaxies, because of the lack of resolution and the low signal-to-noise. When IRAS fluxes at 60$`\mu `$m and 100$`\mu `$m are used, the temperature distributions are flat (Devereux & Young 1992; 1993), as observed in the Galaxy. This is the case also for NGC 6946: 60$`\mu `$m and 100$`\mu `$m radial profiles have the same gradient up to 3 scalelengths from the centre (Alton et al. 1998a, Davies et al. 1999b). Deriving a gradient for the temperature of the dust component in thermal equilibrium with the ISRF is even more difficult, because of the poorer resolution at 200$`\mu `$m. To compare model results with observations, I have smoothed to the ISO resolution (Sect. 4.4) both the 100$`\mu `$m and 200$`\mu `$m images for the same models as in Fig. 4.13. The temperature profiles derived from the ratio of fluxes at these two wavelengths are shown in Fig. 4.14. Davies et al. (1999b) derived the temperatures from 100$`\mu `$m IRAS and 200$`\mu `$m ISO fluxes at two different positions on NGC 6946: in the centre and on the disk, at a distance of 3 arcmin from the centre (≈ 2$`\alpha _{}`$). I have plotted in Fig. 4.14 their data derived for $`\beta =1`$; these temperatures should be close to those that would have been derived using the emissivity law of this thesis, since in this spectral range it has a slope $`\beta =1`$, turning to a steeper $`\beta =2`$ for $`\lambda >200\mu `$m (Eqn. 2.33). The errors on the temperature have been derived from the errors on the ISO and IRAS fluxes quoted in Sect. 4.2, while the error bars on the positions reflect the wide aperture (of the dimension of an ISO resolution element) for which the temperatures have been derived. Again, all the models are consistent with the observations, within their large errors.
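The conversion of a 100$`\mu `$m/200$`\mu `$m flux ratio into a single dust temperature, as done for Fig. 4.14, reduces to solving one equation. The sketch below is a minimal illustration, assuming optically thin emission with emissivity proportional to ν<sup>β</sup> (the example flux ratios are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

h, k, c = 6.626e-27, 1.381e-16, 2.9979e10   # cgs units

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def temperature_from_ratio(ratio, beta=1.0):
    """T such that F(100um)/F(200um) = ratio, for S_nu ~ nu^beta * B_nu(T)."""
    nu1, nu2 = c / 100e-4, c / 200e-4        # frequencies in Hz
    def f(T):
        return (nu1 / nu2)**beta * planck(nu1, T) / planck(nu2, T) - ratio
    return brentq(f, 5.0, 100.0)             # bracketing temperatures in K

for ratio in (1.0, 2.0, 4.0):                # hypothetical flux ratios
    print(f"F100/F200 = {ratio}: T = {temperature_from_ratio(ratio):.1f} K")
# e.g. a ratio of unity corresponds to T ~ 26 K for beta = 1.
```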
#### 4.10.3 The dust heating mechanism: ISRF vs hot stars

The present models make use of smooth distributions for stars and dust. Therefore, the model is appropriate to describe dust heated by a smooth, diffuse ISRF. Instead, dust close to hot stars in star-forming regions is heated by a more intense radiation field and reaches higher temperatures. There is a debate about which of these two heating mechanisms is dominant in a spiral galaxy. If dust heated by hot stars in star-forming regions contributes the majority of the FIR emission, then FIR fluxes can be used to measure the recent star formation in a galaxy. Devereux & Young, in a series of papers (Devereux & Young 1990b; 1991; 1992; 1993), suggest that the FIR luminosity is dominated by warm dust that absorbs radiation from OB stars. They argue that the ISRF can only heat dust to temperatures of 15-20K, rather than the 30-40K usually observed in spiral galaxies (Devereux & Young 1991). I have shown here that higher temperatures are compatible with realistic models of the ISRF. Moreover, their warm dust temperatures are derived from 60 and 100$`\mu `$m IRAS fluxes, which may be contaminated by emission from small grains (Sodroski et al. 1989). Xu & Helou (1996a) derived the fraction of the total FIR luminosity associated with star-formation, using observations of IRAS-resolved bright HII regions: a value of 30$`\pm `$14% is found. When COBE observations of the Galaxy at 140$`\mu `$m and 240$`\mu `$m are used, a different picture emerges: most of the FIR emission (70%) arises from dust associated with the atomic gas (Sodroski et al. 1994). The longitude and radial gradients of the temperature (Sodroski et al. 1997) are consistent with this dust being heated by the ISRF. 20% of the FIR is emitted by dust associated with the molecular component. Dust associated with the molecular gas is heated primarily by embedded OB stars and secondarily by the ISRF (Sodroski et al. 1997); nevertheless its temperature is similar to that of the dust associated with the atomic gas. Finally, only 10% is due to hot dust associated with the HII phase.

It is not easy to evaluate how the results of this chapter depend on the assumption of smooth distributions. If hot stars emitting mainly in the UV are subject to a larger and localised extinction, as would be the case for newly born stars still located in HII regions embedded in molecular clouds, their contribution to the FIR output would be higher than in my model, both because of the larger absorbed energy and because of the higher temperature of the dust. Therefore the optical depths derived for the smooth medium may be overestimated in this work. As a test, I can try to add the emission from hot stars to the FIR spectrum derived by the model. In Fig. 4.15 I plot the FIR spectrum for a standard model with $`\tau _V=5`$. The radiation from dust heated by OB stars can be simulated by a grey-body spectrum with a temperature of 40K, as derived from the temperature of dust associated with the HII phase in the inner Galaxy (Sodroski et al. 1997) using the emissivity of Eqn. 2.33. The hot dust spectrum has been normalised to contribute 30% of the total FIR (following the work of Xu & Helou 1996a). Adding the hot component obviously results in a stronger FIR emission, especially around the peak of the hot dust emission. Nevertheless, the difference between the new SED and the original one is within the observational errors. Thus, for the accuracy with which FIR fluxes are presently observed, the determination of the optical depth is not severely affected by the smooth distribution assumption. Moreover, the contribution of the hot dust is overestimated, since the spectrum of the standard model already includes the contribution to the FIR output of light absorbed from the UV, although absorbed by the smooth medium only.
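The grey-body test above can be written compactly; the following sketch is only a schematic version of it, with the temperature of the diffuse component (21K, the high-latitude Galactic value of Sect. 4.10.2) chosen here for illustration:

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.9979e10   # cgs units

def greybody(nu, T, beta=1.0):
    """Unnormalised grey-body spectrum, nu^beta * B_nu(T)."""
    return nu**beta * (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu = np.logspace(np.log10(c / 1000e-4), np.log10(c / 40e-4), 400)  # 40-1000 um

diffuse = greybody(nu, 21.0)   # ISRF-heated dust (illustrative temperature)
warm = greybody(nu, 40.0)      # dust heated by OB stars (Sodroski et al. 1997)

# Normalise the warm component to 30% of the total FIR luminosity
# (Xu & Helou 1996a) and the diffuse one to the remaining 70%.
diffuse *= 0.7 / np.trapz(diffuse, nu)
warm *= 0.3 / np.trapz(warm, nu)
combined = diffuse + warm

print(f"combined SED peaks at {1e4 * c / nu[np.argmax(combined)]:.0f} um")
```

The warm component mostly adds flux shortward of the peak of the diffuse emission, echoing the statement above that the increase is concentrated around the peak of the hot dust emission.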
A proper description will be possible only with the inclusion of clumping in the model, necessary both for dust and stars. The core of a dense clump of dust, in fact, is shielded from the stellar radiation, and the dust temperature there is lower than for dust in the smooth medium. On the other hand, it is inside denser regions of gas (and dust) that star formation occurs. Therefore a clumping model should take into account the presence of sources embedded in dust. In this case dust in the clumps would be at higher temperatures than in the smooth medium, and the contribution of hot stars to the FIR emission could be evaluated.

#### 4.10.4 The ionising ultraviolet

As outlined in Sect. 4.1, I do not consider the contribution of the absorption of ionising UV photons ($`\lambda <912`$ Å) to the dust emission. In this section I evaluate the impact of this assumption. Since the shape of the ionising spectrum depends critically on the assumed parameters of the evolution model (Fioc & Rocca-Volmerange 1997), I estimate its strength from H$`\alpha `$ observations, the H$`\alpha `$ flux being related to the intensity of the ionising radiation. Kennicutt & Kent (1983) measured an H$`\alpha `$ flux for NGC 6946
$$f(\text{H}\alpha )=3.2\times 10^{-11}\text{ erg cm}^{-2}\text{ s}^{-1}.$$ (4.2)
The flux is contaminated by [NII] lines, but they derived a correction from a sample of galaxies with available spectroscopy: for spiral galaxies, ≈ 75% of that emission is due to H$`\alpha `$ only. Similar values for the flux are measured by Devereux & Young (1993). Galactic extinction in the direction of NGC 6946 is $`A_\mathrm{V}=1.31`$, corresponding to $`A_{\mathrm{H}\alpha }=1.06`$. Applying these corrections, the flux is
$$f(\text{H}\alpha )=6.4\times 10^{-11}\text{ erg cm}^{-2}\text{ s}^{-1}.$$ (4.3)
The intrinsic H$`\alpha `$ flux can be derived if the internal extinction in the galaxy is known: for a standard model with $`\tau _V=5`$, 30% of the radiation is absorbed in the R-band (in whose spectral range the H$`\alpha `$ line is located), the intrinsic flux being therefore
$$f(\text{H}\alpha )=9.1\times 10^{-11}\text{ erg cm}^{-2}\text{ s}^{-1}.$$ (4.4)
From the H$`\alpha `$ flux, assuming standard ionisation conditions in HII regions, the luminosity in the ionising (Lyman) continuum can be found (Lequeux 1980). Following Xu & Buat (1995), the Lyman continuum flux can be derived as
$$f(\text{Lyc})=33.9f(\text{H}\alpha )=3.1\times 10^{-9}\text{ erg cm}^{-2}\text{ s}^{-1},$$ (4.5)
where 75% of the ionising radiation is assumed to be absorbed by gas and converted into emission lines at longer wavelengths (see also Mezger 1978, DeGioia-Eastwood 1992). If the remaining 25% is entirely absorbed by dust, the ionising flux converted into infrared radiation is
$$f^{\mathrm{abs}}(\text{Lyc})=7.7\times 10^{-10}\text{ erg cm}^{-2}\text{ s}^{-1}.$$ (4.6)
In the far UV the extinction law is dominated by small grain absorption, therefore most of the energy absorbed from the ionising flux goes into MIR radiation from stochastically heated grains (Sect. 2.5). Assuming that 28% of the absorbed energy goes into thermal radiation in the FIR (an upper limit, derived from the MIR correction of the EUV band, Table 2.1; in the ionising UV the contribution of absorbed photons to non-equilibrium heating is higher), the ionising flux contributes to the FIR emission with a luminosity
$$L^{\mathrm{FIR}}(\text{Lyc})=2.2\times 10^{8}\ \mathrm{L}_\odot ,$$ (4.7)
where a distance of 5.5 Mpc has been assumed.
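The chain of corrections of Eqns. (4.2)-(4.7) can be verified with a few lines; a sketch, with every factor taken from the text:

```python
import numpy as np

f = 3.2e-11                  # erg cm^-2 s^-1: observed H-alpha + [NII] flux
f *= 0.75                    # remove [NII] contamination (spirals)
f *= 10**(1.06 / 2.5)        # Galactic extinction, A_Halpha = 1.06 mag
f /= 0.70                    # internal extinction: 30% absorbed in the R-band
print(f"intrinsic f(Ha) = {f:.1e}")              # ~9.1e-11, Eqn. (4.4)

f_Lyc = 33.9 * f                                 # Lyman continuum, Eqn. (4.5)
f_FIR = 0.28 * 0.25 * f_Lyc                      # 25% onto dust, 28% into FIR

d_cm = 5.5 * 3.086e24                            # 5.5 Mpc in cm
L_FIR = 4 * np.pi * d_cm**2 * f_FIR / 3.846e33   # in solar luminosities
print(f"L_FIR(Lyc) ~ {L_FIR:.1e} Lsun")          # ~2e8, i.e. ~2% of 1e10 Lsun
```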
In a standard model with $`\tau _V`$=5 the total FIR luminosity is
$$L^{\mathrm{FIR}}=1.0\times 10^{10}\ \mathrm{L}_\odot ,$$ (4.8)
and the contribution from the ionising UV would then be ≈ 2%. As a comparison, the contribution to the FIR from the EUV band for the same model is 3.6%. I have used in this section the standard model because it provides the same amount of FIR energy as observed. In models with higher extinction, like those of Sect. 4.8, the contribution of the ionising UV is smaller, the FIR energy output increasing faster with extinction than the infrared emission arising from Lyc photons (Eqn. 4.6). On the contrary, models with smaller extinction have a higher ionising UV contribution. In a standard model with $`\tau _\mathrm{V}`$=1, for example, 6% of the total absorbed energy comes from ionising photons, while only 4% comes from the EUV band. In any case, the energy output of this model does not match the observed one (Sect. 4.6). Therefore, I conclude that disregarding the contribution of the ionising UV to the total absorbed energy does not modify substantially the results obtained in this chapter.

Xu & Buat (1995) argue that the ionising UV contributes as much as 20$`\pm 1`$% to the total FIR emission in a sample of 23 late type galaxies. Their UV contribution includes both direct absorption of Lyc photons and indirect absorption (via emission lines). It is difficult to compare this result to the one derived here, since in the present model the absorption of emission line photons is taken care of in the spectral band where the emission occurs (e.g. in the R-band, as in this section for the H$`\alpha `$ line), the contribution of recombination lines being summed into the stellar SED of each band. Nevertheless, the ratio between Lyc emission and total absorbed energy (their FIR, including MIR radiation) is similar to the one derived for the standard $`\tau _V`$=5 model.

#### 4.10.5 Estimated and observed MIR emission

To compute the dust temperature at thermal equilibrium (Sect. 3.9) I have excluded from the total absorbed energy the fraction that goes into non-equilibrium heating. This energy is essentially re-emitted in the MIR spectral range. In this section I compare the derived MIR energy output with the one observed in NGC 6946. The fraction of absorbed energy that goes into MIR emission depends essentially on the absorption of light from the short wavelength part of the spectrum, since the absorption efficiency of the small grains responsible for non-equilibrium processes is higher in the UV (Sect. 2.5). For models where the dust scaleheight is smaller than the stellar one, the amount of energy absorbed from the UV bands does not increase very much with the optical depth (the saturation effect; Bianchi et al. 1996). The MIR corrections for these models are therefore quite constant, ≈ 32% of the total absorbed energy being re-emitted in the MIR. In optically thin models this fraction is still constant even when more extended vertical distributions are considered. Optically thick models with vertically extended distributions have a higher efficiency in extinguishing radiation (they are closer to a screen model) and therefore the fraction of energy absorbed in bands with higher optical depth is larger. As an example, for a model with $`\tau _\mathrm{V}=5`$ and $`\alpha _\mathrm{d}=1.5\alpha _{}`$ (Sect. 4.8), the amount of energy absorbed in the band UV2 changes from 58% to 69% when the vertical scalelength is doubled, while the absorption in the J band changes only from 24% to 25%.
The MIR correction therefore increases, but not by a large amount, being 41% for the model with both radially and vertically extended dust distributions.

For a local Interstellar Radiation Field (Désert et al. 1990) the contribution of small grain emission to the 60 $`\mu `$m IRAS band is ≈ 62%, while at 100 $`\mu `$m it is only 14% and at 200 $`\mu `$m 4%. Therefore, the fraction of energy emitted in non-equilibrium heating can be roughly estimated by measuring the MIR emission shortward of 60 $`\mu `$m. After integrating a continuous SED interpolated from the data points in Table 4.2, the MIR energy is derived to be 34% of the total infrared energy emitted by dust. The same value is found when the data provided by Engargiola (1991) for the whole galaxy, rather than for the half light radius, are used. The value derived from observation is very close to the model one. This justifies the use of the Désert et al. (1990) dust model as described in Sect. 2.5. It is interesting to note that the infrared Galactic spectrum used in the Désert et al. (1990) model is different from that of NGC 6946. As an example, the ratio between fluxes at 60 $`\mu `$m and 100 $`\mu `$m is 0.2, while it is 0.5 for our NGC 6946 data. This does not necessarily mean that the dust model of Désert et al. (1990) cannot be applied to NGC 6946: the different ratio could be due to the different heating conditions in the local interstellar radiation field, with respect to the mean radiation field in NGC 6946. Larger ratios between 60 $`\mu `$m and 100 $`\mu `$m can be derived from the Désert et al. (1990) model when the ISRF is stronger than the local one.

### 4.11 A halo of dust

Yet another possible distribution for dust remains to be explored: a spherical halo. A reasonable amount of cold dust at large distances above the galactic plane can provide FIR emission at larger scalelengths than those obtained for the extended disks of Sect. 4.8 and 4.9. A significant fraction of the total mass of dust produced by a galaxy during its lifetime can be injected into the halo because of the imbalance between the radiation pressure and the galactic gravitational force (Davies et al. 1999a). Unfortunately, information about the density and distribution of a putative dusty halo is far more uncertain than that for the dusty disks. A dust halo would act as a screen distribution for the galaxy and would therefore not produce a substantial differential extinction on different parts of the galaxy, unless it had a steep gradient. Thus it would be impossible to detect it by fitting the optical appearance with a radiative transfer model, as in the works of Xilouris et al. (1997; 1998; 1999). Zaritsky (1994) analyses the difference in colours of background galaxies between fields at different distances from the centre of two nearby galaxies; he finds that fields at a projected galactocentric distance of 60 kpc have a B-I colour excess of 0.067 with respect to fields at a distance of 220 kpc. This suggests the presence of a halo of dust, although a better statistical determination is required, since there is only a 2$`\sigma `$ difference. Comparing his result with observations of the mean opacity through the centre of spiral galaxies ($`A_V`$=1.0), he derives a halo scalelength of 31$`\pm `$8 kpc. Since the halo dust component may be unrelated to the dusty disk, he argues that this leads to a lower limit for the scalelength, provided the central optical depth is not severely underestimated.
Due to the lack of reasonable constraints, the parameters with which I describe the halo dust distribution in this section do not have a physical justification. However, the models I will present can be regarded as an exercise, to show what the influence of such a distribution on the FIR emission could be. The parameters are chosen on the basis of the results of the models presented earlier. Obviously the halo cannot be the only dust component in a galaxy, since observations of edge-on galaxies clearly show the existence of a flat dust distribution that produces the extinction lane. In this section I will use a standard dust disk. As for the dusty halo, I use a constant-density spherical distribution that extends up to the boundary of the dust disk, i.e. a radius of 6$`\alpha _{}`$=15 kpc, the maximum radius of the dust disk. As seen in the previous sections, an optical depth $`\tau _\mathrm{V}`$=5 for the dusty disk is necessary to produce the same amount of FIR emission as observed. A constant-density dust halo, acting as a screen, would be very effective in extinguishing radiation; I therefore chose an optically thin halo, $`\tau _\mathrm{V}`$=0.1, so as not to alter the energy output significantly. The SED in the FIR is shown in Fig. 4.16; as predicted, the dust halo does not introduce a big difference in extinction and emission.

The temperature distribution for this model is shown in Fig. 4.17. It shows a centroidal pattern, as already seen for models with stellar emission completely inside the dust distribution (Sect. 4.7 and 4.8). For the inner part of the disk, the temperature has the same gradient as for a standard disk alone, although the actual scaling depends on the different normalisation of the model. Since the halo has a constant density, there is now a larger quantity of dust at lower temperature. However, for the chosen geometrical and optical parameters, the mass of the halo is only 1/4 of the mass of the dust in the disk, and the latter dominates the behaviour of the scalelength ratio, which is similar to that of a standard model. A possible solution would be to use a more massive dust halo, responsible for a larger part of the dust extinction and emission, and a more transparent disk. An increase in the FIR scalelengths would thus correspond to only a marginal increase in the optical scalelength, and the ratio would be closer to that observed. For a model where both halo and disk have the same optical depth $`\tau _\mathrm{V}=1`$, the mass of dust in the halo would be 12 times larger than that of the disk.
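The relative dust masses quoted here follow directly from the adopted geometries: for a double exponential disk with central face-on optical depth τ<sub>d</sub> and radial scalelength α<sub>d</sub>, the mass scales as 2πτ<sub>d</sub>α<sub>d</sub><sup>2</sup>/κ, while for a constant-density sphere of radius R the density is fixed by the optical depth through it. A sketch (assuming the halo $`\tau _\mathrm{V}`$ is measured through the full diameter of the sphere, an interpretation made here because it reproduces the quoted ratios):

```python
import numpy as np

# The extinction coefficient kappa cancels in the mass ratios.
def disk_mass(tau_d, alpha_d):
    """Double exponential disk: M proportional to 2*pi*tau_d*alpha_d^2."""
    return 2.0 * np.pi * tau_d * alpha_d**2

def halo_mass(tau_h, R):
    """Constant-density sphere: rho*kappa = tau_h/(2R), M = (4/3)*pi*R^3*rho."""
    return (2.0 / 3.0) * np.pi * R**2 * tau_h

alpha = 1.0            # stellar scalelength (units cancel in the ratio)
R = 6.0 * alpha        # halo radius = truncation radius of the dust disk

print(halo_mass(0.1, R) / disk_mass(5.0, alpha))   # 0.24: halo ~1/4 of the disk
print(halo_mass(1.0, R) / disk_mass(1.0, alpha))   # 12.0: halo 12x the disk
```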
The SED for such a model is also displayed in Fig. 4.16, while the temperature distribution has quite a similar pattern to that of the previous model and is not shown. It is interesting to note that, although the total FIR output of the model with halo is nearly the same as for a standard model with $`\tau _\mathrm{V}=5`$ (also plotted for reference in Fig. 4.16), the surface brightness is about half. This is because the FIR scalelengths are larger than for the standard model, and a larger part of the FIR radiation is emitted outside the half-light radius. Indeed, within this model the scalelength ratios are remarkably close to those observed, being 0.9 for B/200 and 0.6 for 100/200.

Would such a halo be observed in edge-on galaxies? Alton et al. (1998c) used resolution-enhanced HiRes IRAS maps to study the FIR emission in 24 edge-on galaxies, including starburst and quiescent objects. None of the objects was found to be resolved along the minor axis. HiRes images at 100$`\mu `$m have a resolution of ≈$`90^{\prime \prime }`$ and a typical 3$`\sigma `$ level of 0.75 MJy sterad<sup>-1</sup>. When the disk+halo model with $`\tau _V=1`$ is observed at 90″ resolution, almost all of the emission, extending up to 5 IRAS resolution elements from the centre, is at a flux larger than the 3$`\sigma `$ level. Therefore a dust halo model that could explain the scalelength ratio and, marginally, the FIR emission would be easily resolved in IRAS images; it is not observed.

### 4.12 Summary

The radiative transfer and dust emission model described in Chapter 3 has been applied to the spiral galaxy NGC 6946. The stellar SED for the galaxy has been derived from literature data, requiring the UV, Optical and NIR radiation, after being processed through dust, to be the same as observed, for any dust distribution. Various aspects of the dust FIR emission have been simulated, i.e. temperature distributions, FIR spectra and images at specific wavelengths. I have explored several optical and geometrical parameters for the dust distribution, to reproduce the observational results in the FIR.

It was relatively easy to find a model with a FIR spectrum able to match the observations. While the temperature distribution (and therefore the peak of the spectral emission) does not vary substantially for any dust disk parameters and optical depth, an optically thick dust distribution is required to reproduce the emitted energy ($`\tau _\mathrm{V}`$ ≳ 5, the exact value depending on the geometrical details of the dust distribution). Using a dust scalelength $`\alpha _\mathrm{d}=1.5\alpha _{}`$, as suggested by models of surface brightness in edge-on galaxies and other FIR simulations, it is not possible to produce optically thick models that simulate the observed spatial distribution of optical and FIR light. Under this assumption, only optically thin models have the observed ratio between optical and FIR scalelengths, but they do not have the required energy output. To produce a good fit, both to the FIR energy output and to the B/200 scalelengths ratio, with an optically thick model ($`\tau _\mathrm{V}`$ ≈ 3), it is necessary to extend the dust scalelength further, to $`\alpha _\mathrm{d}=3.0\alpha _{}`$. The dust distribution would then be similar to that of the HI, although the atomic gas column densities suggest a lower optical depth. A model with a dust component for each gas phase, however, fails to reproduce the observed properties. It is possible, in principle, to reproduce the scalelength ratios and (although only marginally) the energy output of the galaxy by adding a halo of dust to a disk model. However, such a halo would be easily detected in FIR images, and it is not. A further discussion will be presented in Chapter 5.

## Chapter 5 Conclusions

In the last Chapter I have applied the radiative transfer and FIR emission model described in Chapter 3 to the spiral galaxy NGC 6946. Several models for different dust geometries and optical depths were tested against the observations, to explain the fluxes and spatial distribution of the FIR emission. This Chapter presents a summary of the work done for this Thesis. A brief summary of the model characteristics and of the results on NGC 6946 is presented in Sect. 5.1 and 5.2. A discussion of the implications of the findings is given in Sect. 5.3. Finally, a summary of the Thesis can be found in the last Section.
### 5.1 Outline of the model

The model of this Thesis has been derived from the Monte Carlo radiative transfer code for spiral galaxies of Bianchi, Ferrara & Giovanardi (1996). The main quality of the Monte Carlo technique consists in the exact treatment of multiple scattering in the radiative transfer. The original code included polarisation, a distribution of sizes and materials for the dust grains, and optical properties, like albedo and phase functions, derived from the Mie theory for spherical grains. In this work, the code has been simplified, omitting the polarisation calculations and assuming albedo and phase functions derived empirically from observations of reflection nebulae (Sect. 2.2). The code has then been made able to store the amount of energy absorbed by dust, as a function of the distance from the galactic centre and of the height above the galactic plane (Sect. 3.7).

Once the geometry of the dust distribution relative to the stars and its optical depth in the reference V-band are chosen, the (monochromatic) radiative transfer code is run for 17 different photometric bands, to cover the spectral range of stellar emission. For each band, the output of the code (i.e. optical image and absorbed energy map) is scaled to the observations of the galaxy to be modelled, in such a way that the flux measured inside a specific aperture (I use the half-light radius) in the simulated image matches the observed one. As a result, the SED of the stellar radiation in the model is the same as the observed one (for the chosen aperture). Finally, the 17 maps of energy absorbed by dust from starlight in each band are summed together, to produce a map of the total energy absorbed by dust. For each position inside the galaxy I therefore know the amount of energy that is absorbed by dust illuminated by an ISRF that is consistent with the radiative transfer itself, without any other assumption (Sect. 3.8 and 3.9). This is one of the original points of the present code. To my knowledge, only the Monte Carlo radiative transfer code of Wolf et al. (1998) combines this characteristic with a proper treatment of the radiative transfer. However, their model has been implemented only for star-forming environments and not for spiral galaxies.

Absorbed radiation can go into the heating of small grains, a process not occurring at thermal equilibrium and resulting in MIR emission. Since the models of this Thesis are restricted to the FIR emission from grains emitting at thermal equilibrium, a correction to the absorbed energy maps is applied (Sect. 2.5). Using the dust emissivity derived in Sect. 2.4, the corrected map of absorbed energy is converted into a map of temperature. Hence, a final map of FIR emission can be easily obtained for any wavelength, by integrating along a specific line of sight (Sect. 3.9). The FIR scalelengths and SED are derived from the maps, and compared to the observed data. The dust distribution parameters are then modified and the procedure repeated until a match is achieved between simulated and real data.

The model devised for this work is essentially a sophisticated version of the energy balance technique (Sect. 3.2). Not only is the amount of energy emitted in the FIR compared to the stellar radiation to derive the galactic opacity, but the spatial information is also used, to check whether the chosen star-dust geometry is consistent with the FIR emission.

### 5.2 Summary of the results

Several models have been explored in Chapter 4.
Most of the models are able to fit the SED inside the half-light radius for NGC 6946, if the dust distribution has a face-on optical depth $`\tau _V`$ ≳ 5. The amount of energy absorbed by dust depends on the dust geometry. For a standard model with optical depth $`\tau _V=5`$, 27% of the total stellar radiation is absorbed, with a V-band extinction $`A_V=0.45`$ (Sect. 4.6). A model with the dust distribution more extended radially than the stellar one ($`\alpha _d=1.5\alpha _{}`$) has a higher extinction, $`A_V=0.62`$, and 36% of the intrinsic starlight is absorbed (Sect. 4.8).

However, the models of this Thesis were also required to describe the observed spatial distribution of the FIR emission. In a sample of seven galaxies, including NGC 6946, Alton et al. (1998a) find that the 200$`\mu `$m radial scalelength is larger than the B-band one, by a mean factor of 1.3 for the whole sample. For NGC 6946 the scalelength ratio 200/B is ≈ 1.1 (Sect. 4.4). As already foreseen (Alton et al. 1998a, Davies et al. 1999b), the standard model is not able to provide a FIR scalelength larger than the optical one. Alton et al. (1998a) and Davies et al. (1999b) proposed an extended dust distribution. Extended dust models derived from the surface brightness of edge-on galaxies ($`\alpha _\mathrm{d}=1.5\alpha _{}`$; Xilouris et al. 1999) were tried. But even with such extended dust distributions the scalelength ratio is different from the observed one, if the dust disk is optically thick (Sect. 4.8). An increase in the dust radial scalelength indeed increases the FIR scalelengths with respect to the standard model. The FIR scalelengths also increase with the optical depth, although by a minor amount. Unfortunately, the optical scalelengths increase with $`\tau _V`$ as well. Therefore, for the optically thick case required by the match with the observed SED, the increase in the 200$`\mu `$m scalelength is compensated by the increase in the B-band one, and the ratio 200/B is always smaller than the observed one. Only for optically thin cases does the scalelengths ratio approach unity (Sect. 4.8). When the dust scalelength is extended further, to the values observed for the atomic gas, a fit to the 200/B ratio can be provided in the optically thick case necessary to match the energy output. This model, with $`\alpha _\mathrm{d}=3.0\alpha _{}`$ and $`\tau _\mathrm{V}=`$3, has an extinction $`A_V=0.66`$, with 37% of the intrinsic stellar radiation re-processed by dust. The temperature distributions are quite similar for all of the dust disk models. Temperature values in the models are compatible with those observed in the Galaxy, and in NGC 6946 as well (Sect. 4.10.2).

Alton et al. (1998a) also measured the ratio between the scalelengths at 100 and 200$`\mu `$m. For NGC 6946 the scalelengths ratio 100/200 is ≈ 0.5 (Sect. 4.4). The 100/200 scalelengths ratio for the disk models of Chapter 4 varies less than the 200/B ratio, being always in the range 0.7-0.9 for any model, with the lower values for extended dust distributions. Extended distributions of dust could in principle decrease the ratio to the observed values, if large amounts of cold dust emitting at 200$`\mu `$m but not at 100$`\mu `$m were present in the external part of the galaxy. This does not happen in the disk models explored here, and the emission at 100$`\mu `$m and 200$`\mu `$m is essentially due to dust at the same temperature (Sect. 4.10.1).
An optically thick disk with a homogeneous spherical halo of optical depth $`\tau _V`$ ≈ 1 could fit the data, emitting enough FIR radiation to match the observed SED and having both 200/B and 100/200 scalelengths ratios similar to the observed ones (Sect. 4.11). However, such a halo would be easily detected in FIR observations of edge-on galaxies, even with the poor resolution of instruments like IRAS and ISO. It is not.

### 5.3 Discussion

As already said, an optically thick disk with $`\tau _V`$ ≳ 5 is necessary to explain the SED observed in the FIR for NGC 6946. Evans (1992) and Trewhella (1998b) apply the energy balance method to the stellar and dust emission of NGC 6946, using a TRIPLEX model with a dust scaleheight half of the stellar one. This is the same as using a standard model (Sect. 4.6) and limiting the match to the observations to the SED only. They both derived high optical depths for the disk, using the data inside the half light radius: Evans (1992) measured $`\tau _V=67`$, while Trewhella (1998b) $`\tau _V=4\pm 1`$. A high optical depth is also suggested by the high-resolution sub-mm images from SCUBA described in Sect. 4.3: the diffuse inter-arm emission in the NE spiral arms at a distance of 2’ (≈ $`\alpha _{}`$) is compatible with $`\tau _V=2.2`$. The high optical depth of NGC 6946 contrasts with the recent determination of Xilouris et al. (1999), based on fits of the surface brightness of edge-on galaxies using a suitable radiative transfer model for spiral galaxies. For a sample of seven edge-on spirals they find optically thin dust disks, with a mean central face-on optical depth $`\tau _V`$=0.5. The higher opacity of NGC 6946 may be a result of the galaxy being very gas-rich (Alton et al. 1999b), or it may be due to the clumping of the ISM, which affects FIR and optical determinations of the optical depth in different ways. While FIR observations would detect all of the dust (at least when the temperatures of the clump and inter-clump media are similar), optical observations may be affected preferentially by the extinction of the smoother, lower density (and lower optical depth) inter-clump medium.

Only two models available in the literature include clumping in a proper radiative transfer for spiral galaxies. Kuchinski et al. (1998) use preliminary results from a Monte Carlo model to derive opacities of edge-on galaxies from their colour gradients along the minor axis. After dividing the space occupied by dust into a three-dimensional grid, some cells are randomly assigned a clumping status, assuming a constant filling factor for the high density cells all over the galaxy, and adopting a ratio of 100 between the densities in clumps and in the nearby smooth medium (thus following the formalism developed by Witt & Gordon (1996) for clumping in a homogeneous sphere illuminated by a central point source). Although a few aspects of the inclusion of clumping are presented in the work, the authors defer a detailed discussion to a forthcoming paper. The other model is that of Bianchi et al. (1999b), based, like the work of this Thesis, on a simplified version of the Monte Carlo radiative transfer code of Bianchi et al. (1996) for spirals (Sect. 3.7). Simulations are conducted in the V-band, for a stellar disk similar to the one adopted here. Dust is described with two components: a smooth one, associated with the neutral gas, with a double exponential distribution and parameters as for a standard model; and a clumpy one, associated with molecular clouds. A few values of the fraction of gas (dust) mass distributed in clumps are explored. A three-dimensional grid covering the whole dust volume is first filled with the homogeneous dust distribution, then clumpy cells are randomly selected according to the radial and vertical distribution of molecular gas in the Galaxy. Cell dimensions and the mass of each clump are chosen to match those observed for Giant Molecular Clouds. As a result of the choice of the parameters, the cubic cells have a fixed optical thickness $`\tau _V=4`$ through each side.
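A toy version of such a clump assignment is sketched below; the grid size, clump probability and the (here ring-like, as in the Galaxy) molecular distribution are illustrative choices made for this sketch, not the actual parameters of Bianchi et al. (1999b):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fill a 3D grid with a smooth double exponential dust density, then promote
# randomly chosen cells to clumps following a molecular-gas-like distribution.
n, alpha_d, beta_d = 64, 1.0, 0.1          # grid size and scalelengths (toy values)
x = np.linspace(-6 * alpha_d, 6 * alpha_d, n)
z = np.linspace(-6 * beta_d, 6 * beta_d, n)
X, Y, Z = np.meshgrid(x, x, z, indexing="ij")
R = np.hypot(X, Y)

rho = np.exp(-R / alpha_d - np.abs(Z) / beta_d)     # smooth component

# Probability of hosting a clump: a Galactic-like molecular ring
# (NGC 6946 would instead be centrally peaked).
p = 0.05 * np.exp(-((R - 0.5 * alpha_d)**2) / (2 * 0.2**2) - np.abs(Z) / (0.5 * beta_d))
clumpy = rng.random(rho.shape) < p
rho[clumpy] *= 100.0       # clump/inter-clump density contrast (Witt & Gordon 1996)

print(f"fraction of dust mass in clumps: {rho[clumpy].sum() / rho.sum():.2f}")
```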
A few values of the fraction of gas (dust) mass distributed in clumps are explored. A three-dimensional grid covering the whole dust volume is first filled with the homogeneous dust distribution; clumpy cells are then randomly selected according to the radial and vertical distribution of molecular gas in the Galaxy. Cell dimensions and the mass of each clump are chosen to match those observed for Giant Molecular Clouds. As a result of this choice of parameters, the cubic cells have a fixed optical thickness $`\tau_V=4`$ through each side. As already found for clumpy models in simpler geometries, the main effect of clumping is to reduce the amount of energy absorbed by dust, with respect to a homogeneous model of the same dust mass. The increase in the fraction of energy that can escape the galaxy is moderate, resulting in surface-brightness profiles that are less than one magnitude brighter than those of homogeneous models. Minor- and major-axis profiles of the simulated disks reveal that clumping effects are larger in the edge-on case. This contrasts with the claim of Kuchinski et al. (1998) that edge-on profiles are not modified by the clumpy structure. It is shown how the difference in the models' behaviour results from the different parametrisations adopted for the dust distribution. This is unfortunate, however, as it indicates a strong dependence of the observed brightness profiles on the detailed internal and spatial distribution properties of the clumps, which makes the interpretation of the data very difficult. Since the Giant Molecular Clouds simulated by each cell host star-forming regions, it is logical to assume that part of the galactic stellar radiation comes from within the clouds. Bianchi et al. (1999b) study this possibility by allowing a fraction of the stellar radiation to be emitted from inside the clumpy cells. When embedded stellar emission is considered, extinction increases with respect to the case with only dust distributed in clumps. Extinction in a model that includes clumping of the stellar radiation as well can be even higher than that of a homogeneous case with the same dust mass.

It is instructive to compute the gas-to-dust mass ratio for the models of this Thesis. NGC 6946 has a total gas mass of $`9.0\times 10^9`$ $`M_{\odot}`$ (Devereux & Young 1990a, rescaled to the distance of 5.5 Mpc used in this Thesis). The dust mass of a homogeneous disk can easily be computed from the radial scalelength and the V-band face-on optical depth, using the formula given by Bianchi et al. (1999b). Optically thick models with $`\alpha_\mathrm{d}=1-1.5\alpha_{\ast}`$ have gas-to-dust mass ratios of the same order as the local Galactic value of 160 (Sodroski et al. 1994): the $`\tau_V=5`$ standard model has 360, while the $`\tau_V=5`$, $`\alpha_\mathrm{d}=1.5\alpha_{\ast}`$ model has 160. The optically thin model with the same scalelengths, necessary to explain the optical-FIR scalelength ratio, contains less dust, with a gas-to-dust ratio an order of magnitude higher than the local value, 1600 ($`\tau_V=0.5`$, $`\alpha_\mathrm{d}=1.5\alpha_{\ast}`$). On the other hand, the only model able to provide a simultaneous fit to the SED and the scalelength ratio, that with $`\alpha_\mathrm{d}=3.0\alpha_{\ast}`$ and $`\tau_\mathrm{V}=3`$, has a smaller gas-to-dust ratio of about 70. This may suggest that, despite the better fit, the dust quantity is overestimated.
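The arithmetic behind these ratios can be checked with the scaling $`M_\mathrm{d}\propto\tau_V\alpha_\mathrm{d}^2`$, which holds for a double-exponential disk of fixed dust opacity (a sketch of our own, normalised to the value of 360 quoted for the $`\tau_V=5`$ standard model; it is not the full formula of Bianchi et al. 1999b):

```python
# Gas-to-dust ratios from the scaling M_dust ∝ tau_V * alpha_d^2
# (double-exponential disk, fixed dust opacity kappa_V), normalised
# to the ratio of 360 quoted for the tau_V=5 standard model.
def gas_to_dust(tau_V, alpha_d, tau_ref=5.0, alpha_ref=1.0, g2d_ref=360.0):
    dust_mass_scale = (tau_V / tau_ref) * (alpha_d / alpha_ref) ** 2
    return g2d_ref / dust_mass_scale

for tau_V, alpha_d in [(5.0, 1.0), (5.0, 1.5), (0.5, 1.5), (3.0, 3.0)]:
    print(f"tau_V={tau_V}, alpha_d={alpha_d} alpha_*:"
          f" {gas_to_dust(tau_V, alpha_d):.0f}")
# prints 360, 160, 1600 and ~67: the ratios quoted in the text
```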
Therefore, if the Galactic gas-to-dust mass ratio is to be considered a common value for spirals, optically thick models with $`\alpha_\mathrm{d}=1-1.5\alpha_{\ast}`$ are not only necessary to provide a good fit to the FIR SED, but also have the right amount of dust.

A model with two disk distributions was tested in Sect. 4.9. The model included a standard, optically thick distribution of dust, derived from the radial profile of the H<sub>2</sub> column density, and an extended ($`\alpha_\mathrm{d}=3\alpha_{\ast}`$), optically thin disk associated with the atomic component. Adopting $`\tau_V=5`$ for the standard disk and $`\tau_V=0.5`$ for the extended distribution, a good fit to the FIR SED was obtained. It was hoped that such a model would provide a good fit to the scalelength ratio as well, because of the extended distribution. However, the dust emission is dominated by the dust associated with the molecular disk, the amount of colder dust at larger radii being insufficient to modify the scalelengths. For the parameters listed above, the standard and the extended distribution have similar dust masses, leading to a gas-to-dust ratio of 180.

What would happen if the dust associated with the H<sub>2</sub> gas component were distributed in clumps? Predictions are not easy. The results of Bianchi et al. (1999b) cannot be applied directly in this case, because they were derived for a different configuration, with the H<sub>2</sub> and its associated dust distributed in a ring-like structure rather than in a centrally peaked distribution, as in NGC 6946 (Tacconi & Young 1986). Furthermore, the disk of diffuse dust in Bianchi et al. (1999b) is a standard one, while the dust associated with HI in the double-disk model has a radial scalelength three times the stellar one. One may hypothesise the following scenario: the diffuse extended dust is responsible for the behaviour of the scalelengths, and for the optically thin face-on $`\tau_\mathrm{V}`$ derived from edge-on profiles, as measured by Xilouris et al. (1999); the clumpy dust associated with H<sub>2</sub> may be responsible for the bulk of the FIR emission, if stellar sources are present within the clouds. This scenario favours the hypothesis of a substantial contribution of localised sources to the dust heating, rather than a diffuse ISRF. Would this scenario be valid? In the models of Bianchi et al. (1999b) it is difficult to make a dust disk appear optically thin if the dust mass is high, since clumping does not greatly modify the shape of the profiles. In their conclusions they show that a clumpy model with optical depth unity may look the same as a homogeneous distribution with $`\tau_V=0.5`$, when seen edge-on. For the models discussed here, a dust mass corresponding to a distribution with $`\tau_V\approx 5`$ would have to look as if it had $`\tau_V=0.5`$. This is not possible for the models of Bianchi et al. (1999b), although the hypothesis cannot in principle be disregarded for NGC 6946, because of its different distributions of atomic and molecular gas. I have therefore based the predictions of the FIR emission on the assumption that the behaviour of the diffuse dust disk is unchanged when high-density clumps are interspersed. A simple test in Sect. 4.10.3 showed that, under these assumptions, it is unlikely that the dust optical depth is overestimated when carrying out the energy balance. However, the temperature distribution, the shape of the SED and the spatial distribution of the emission are likely to change in ways that are difficult to predict.
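The two-phase grids discussed above can be pictured with a toy construction (a sketch under stated assumptions: the filling factor and grid size are arbitrary choices of ours; the density contrast of 100 follows the Witt & Gordon style parameters quoted earlier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assign a fraction ff of grid cells to clumps with a density contrast
# of 100 over the smooth medium, keeping the total dust mass fixed.
shape, ff, contrast = (32, 32, 8), 0.05, 100.0
rho_smooth = 1.0 / (1.0 - ff + ff * contrast)  # normalise mean density to 1
grid = np.full(shape, rho_smooth)
grid[rng.random(shape) < ff] *= contrast

print(grid.mean())  # ~1: same dust mass as the homogeneous case
```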
Clumps with embedded stellar emission would act as hot spots in the dust distribution. How this would influence the FIR emission depends on the clump distribution. If clumps are more concentrated towards the centre of the galaxy, the FIR emission profile at shorter wavelengths may be steeper. This could explain why the observed 100/200 scalelength ratio is smaller than any value derived from the models (Sect. 4.10.1).

Two recent models include clumping of dust and embedded stellar emission to describe the radiative transfer and FIR emission of NGC 6946 (Silva et al. 1998, Sauty et al. 1998). However, it is not easy to use these works to address the problems raised in the last paragraph. The large number of parameters involved in the modelling prevents isolating the effect of the dust distribution on the FIR heating. Furthermore, the complexity of the models forces the authors to sacrifice either a correct treatment of scattering (Silva et al. 1998) or a complete description of the ISRF over the whole wavelength range of stellar radiation (Sauty et al. 1998).

The energy balance is used by Silva et al. (1998) to calibrate the results of a complex photometric evolution model for galaxies. The galactic disk is described by three exponential distributions: a distribution of spherical molecular clouds with embedded stars, a distribution of stars that have escaped the molecular clouds, and a distribution of diffuse dust and atomic gas. The stellar SED is derived from a spectral synthesis and galactic evolution model. For each evolutionary stage, the residual gas mass is used to derive the dust mass of the galaxy. The radiative transfer for the diffuse medium is carried out in an approximate way, rigorous only for an infinite homogeneous medium and isotropic scattering. A specific radiative transfer model is adopted for the molecular clouds. The dust emission is predicted from the computed ISRF. The several parameters of the model are chosen to fit the stellar and dust SEDs. The code does not produce maps, but only total integrated values for the luminosity. Among other objects, they apply the model to NGC 6946. Assuming that the radial and vertical scalelengths are the same for each component, they derive a total extinction in the B band $`A_B=0.13`$ (as derivable from their “average” optical thickness), with 60% of the dust residing in molecular clumps and the rest in the diffuse ISM. For a homogeneous medium with the same scalelengths, their B-band extinction would correspond to a face-on optical depth $`\tau_V=1`$. Since part of the emission occurs inside regions of higher opacity (the molecular clouds), the diffuse medium in the Silva et al. model may have $`\tau_V<1`$. A comparison with the results of the Bianchi et al. (1996) code, as presented in the database of Ferrara et al. (1999), shows that the extinction is likely to be overestimated by the approximate radiative transfer treatment of Silva et al. by 10-15% in the B and V bands, for inclinations close to face-on (G. L. Granato, private communication). The choice of dust vertical scalelengths equal to the stellar ones also increases the extinction, with respect to the standard model with the same dust mass. However, it is unlikely that their geometry and radiative transfer model substantially underestimate the extinction in the galaxy. Therefore, their model is consistent with an optically thin ($`\tau_V\lesssim 1`$) diffuse dust component.
They also claim that young stars soon escape from their parent clouds, thus contributing considerably to the FIR radiation from diffuse dust in spiral galaxies. The star-formation rate could then be derived from the FIR radiation.

A complex model for the radiative transfer within NGC 6946 is also presented by Sauty et al. (1998). They describe the ISM as a two-phase medium, constituted by molecular clouds and a diffuse, constant-density distribution associated with the atomic gas. The molecular clouds have a distribution in space and in size derived from models of the gravitational potential of the galaxy and of cloud formation. OB associations are created inside the molecular gas. The dust is scaled to the gas, using a constant dust-to-gas mass ratio. The radiative transfer is carried out with a Monte Carlo method, but only for wavelengths shorter than 2000 Å, to avoid the contribution of stars not included in their simulations. The ISRF is estimated in each cell of a three-dimensional grid. For cells not reached by UV radiation, as in the inter-arm regions, the ISRF at $`\lambda >2000`$ Å is used, derived from an R-band map of the galaxy scaled to the local Galactic ISRF. The UV flux is scaled to the observed one. Dust emission is computed and maps can be produced. A fit is achieved for the integrated emission in the four IRAS bands and at 200$`\mu `$m (KAO observation). It is difficult to compare their results to the present work, because they use dust distributions different from the exponential ones, both for the atomic and the molecular gas. They derive a total extinction at 2000 Å of $`A_{2000}=0.76`$. As a comparison, a $`\tau_V=5`$ standard model in the UV4 band (1900 Å$`<\lambda <`$2090 Å; Sect. 3.8) has $`A_{2000}=0.83`$, while the $`\alpha_\mathrm{d}=1.5\alpha_{\ast}`$, $`\tau_V=0.5`$ model has $`A_{2000}=0.28`$. The work also supports the hypothesis of dust heated preferentially by young stars, with UV radiation contributing 72% of the total FIR. However, the greater care devoted to the treatment of the UV radiation at $`\lambda <2000`$ Å (dictated by the desire to model the UV-excited H$`\alpha `$ and C<sup>+</sup> lines) rather than to the optical and NIR ISRF may have biased their results towards the shorter wavelengths. Unfortunately, no radial profiles are presented for wavelengths $`\lambda >100\mu `$m. It is therefore impossible to test the model output against the observations suggesting an extended dust distribution.

A proper analysis of the effects of clumping on the FIR emission needs, however, a model focussed mainly on dust extinction and heating mechanisms. A correct treatment of the radiative transfer over the whole spectral range of stellar emission is necessary. Such a model would help not only to ascertain whether the observed optical/FIR scalelength ratio can be produced by a model with the required FIR energy output, but would also answer the debated question of the main contributor to the dust heating (Sect. 1.5). If UV radiation from young objects is indeed the main contributor to the dust heating, recent star-formation rates can be derived from the FIR emission for a large number of galactic objects. Unfortunately, the results of models including clumping depend heavily on the description chosen for the clump distribution (Bianchi et al. 1999b). The advent of high-resolution FIR and sub-mm instruments will surely help to provide a better description of the dust distribution. Current instrumentation permits such observations only on large nearby objects, like the Galaxy and M31.
For those objects, the diffuse ISRF is the main contributor to the heating (Sect. 1.5). One of the points raised in favour of young stars as the major source of dust heating is the supposed inability of the diffuse ISRF to heat dust to temperatures higher than 20 K (Devereux & Young 1991). I have shown in this Thesis how temperatures higher than 20 K are compatible with models with only smooth distributions of dust and stars, i.e. with ISRF heating only. Temperatures in the galactic centre can reach values of about 30 K. When the temperature is derived from integrated FIR fluxes, the values are smaller, because of the temperature gradient with galactocentric distance; however, they are always biased towards hotter dust. As an example I can use the extended model with $`\alpha_\mathrm{d}=1.5\alpha_{\ast}`$ and $`\tau_V=5`$ to compute the temperature from fluxes integrated inside the half-light radius. The temperature distribution goes from 30 K in the centre to 18 K at 1.6$`\alpha_{\ast}`$, the half-light radius (Fig. 4.10). When the integrated 100$`\mu `$m and 200$`\mu `$m fluxes are used, the derived temperature is 26 K, still higher than 20 K. Temperatures higher than 20 K are therefore compatible with ISRF heating of dust.

The radial gradients of the temperature in the models are compatible with those observed in the Galaxy and in NGC 6946 (Sect. 4.10.2). Values of the temperature at a distance corresponding to the Solar Galactocentric distance are quite close to the values measured from Galactic high-latitude FIR emission (Reach et al. 1995, Schlegel et al. 1998). Using the 60$`\mu `$m DIRBE image as a template for the FIR emission from diffuse cirrus clouds in the Galaxy, Lagache et al. (1998) isolate high-latitude regions of the sky with excess emission at the longer wavelengths observed by DIRBE. For regions with no FIR excess, they measure a temperature of 17.5 K (using $`\beta =2`$; for the emissivity used in this work, Eqn. 2.33, the temperature would be about 21 K; see Sect 2.4.2). A significant excess is measured in regions covering 3.7% of the sky. Two temperature components are necessary to describe the emission in those regions: a warmer component with T=17.8 K, analogous to the one measured in the absence of the long-wavelength excess, and a colder component with T=15.0 K. The coldest component measured in the Galaxy has T=13 K. Again, the temperature of the warm component is similar to those of the models of this Thesis. The colder component is identified with dense molecular clouds. Regions with a negative excess are present as well, indicating hotter dust in molecular clouds with embedded massive young stars. However, the temperature variations are quite small, corresponding to local variations of the ISRF of only 30%. This may suggest that the effect of clumping on the ISRF and on the dust heating is modest.

In the smooth distributions of the current models, colder temperatures can result from the shielding of stellar radiation along the plane in optically thick models (Sect. 4.6). However, this does not produce appreciable differences in the temperature distributions. Colder temperatures can also be obtained for dust at large distances from the galactic centre ($`R>6\alpha_{\ast}`$). At such distances, in fact, dust is immersed in a reduced (or null, as in the extended models of this Thesis) local ISRF, the heating coming mostly (if not completely) from stars in the inner galaxy (Sect. 4.8).
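For reference, colour temperatures like those quoted above follow from a single modified blackbody, $`F_\nu\propto\nu^\beta B_\nu(T)`$. A minimal sketch (our own, with $`\beta =2`$ as one possible emissivity index rather than Eqn. 2.33) inverts an observed 100/200$`\mu `$m flux ratio:

```python
import numpy as np
from scipy.optimize import brentq

h, k, c = 6.626e-34, 1.381e-23, 3.0e8  # SI units

def mod_bb_ratio(T, lam1=100e-6, lam2=200e-6, beta=2.0):
    """Flux ratio F(lam1)/F(lam2) of a modified blackbody nu^beta * B_nu(T)."""
    def f(lam):
        nu = c / lam
        return nu**(3 + beta) / (np.exp(h * nu / (k * T)) - 1.0)
    return f(lam1) / f(lam2)

def colour_temperature(ratio, beta=2.0):
    """Solve mod_bb_ratio(T) = ratio for T in a bracket of 5-100 K."""
    return brentq(lambda T: mod_bb_ratio(T, beta=beta) - ratio, 5.0, 100.0)

# a 100/200 um flux ratio of ~1.9 gives T ~ 26 K for beta=2,
# comparable to the value quoted above for the integrated fluxes
print(colour_temperature(1.9))
```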
For the standard models, and for extended models in the regions where dust and stars are mixed, the minimum temperature is $`T\approx 14`$ K. In extended models the dust at larger radii is colder. This is shown also by the larger scalelengths of the FIR emission.

Another geometrical configuration with dust present in regions of scarce or null stellar emission is the one including a halo (Sect. 4.11). After reviewing the scant indirect evidence for the existence of unseen baryonic matter, Gerhard & Silk (1996) consider the possibility that part of the dark matter in spiral galaxies may be in the form of gas. They produce a model for an extended, flattened halo of cold gas clouds and define the parameter range that allows the halo to be unseen and stable. Two mechanisms are suggested to provide stability to each cloud against collapse and star formation: the self-gravity of the gas may be reduced by the presence of a minicluster of particles inside the cloud; or dust grains associated with the gas and heated by the galactic and intergalactic radiation field may provide support through collisions with the gas. They propose several empirical tests to verify this hypothesis, among which deep FIR and sub-mm observations of the dust halo emission. Such a halo of baryonic matter would play a major role in the galaxy's star-formation history. Indeed, large amounts of baryonic matter have recently been observed by Valentijn & Van Der Werf (1999), from ISO Short-Wavelength Spectrometer (SWS) observations of H<sub>2</sub> rotational line emission along the disk of NGC 891. A cool molecular component is found to dominate the emission at larger radii, outweighing the atomic gas by a factor of about 10. Such gas could account for the dark mass in the galaxy. Unfortunately, the parameters of a putative halo are poorly constrained. In Sect. 4.11 I tried a few halo configurations, together with a disk distribution of dust, choosing the parameters on the basis of the behaviours observed in the optically thin and thick models previously explored. I found a disk+halo model able to provide a SED reasonably close to the observed one, as well as the required scalelength ratio. However, such a halo would have a FIR emission easily detectable in the available FIR and sub-mm observations, where it is not seen. The limited number of models explored obviously does not rule out the presence of a dust halo of lower density. Indeed, an optically thin halo would currently be undetected; however, it would not be able to explain the observed FIR properties, which are dominated by the disk distribution (Sect. 4.11).

In this Thesis I have assumed that the scalelength of the intrinsic stellar distribution does not change as a function of wavelength, dust extinction being responsible for the shallower profiles observed in the optical bands. However, different scalelengths at different $`\lambda `$ may also be caused by an intrinsic colour variation of the stellar distribution with distance from the galactic centre. Bluer light at larger distances from the centre may trace a population of stars younger than in the central part of the disk. Peletier et al. (1995) studied the variation of the ratio of B- and K-band disk scalelengths with inclination, for a sample of 37 Sb-Sc galaxies. Larger scalelength ratios are expected in edge-on galaxies with respect to face-on objects if dust is responsible for the observed colour gradient.
On the other hand, if the colour gradient is caused by a gradient in the stellar population, the ratio should remain constant, regardless of the galaxy inclination. They find that $`\alpha _B/\alpha _K`$ goes from 1.3 for face-on galaxies to 1.7 for edge-on ones, while a change in the stellar population, estimated from the observed metallicity gradients, can produce a ratio of only 1.17. These results suggest that dust extinction is the main contributor to the observed colour gradients. An opposite conclusion is drawn by De Jong (1996a) for a sample of 86 face-on galaxies. Comparing the observed colour gradients in colour-colour plots with those derived from a Monte Carlo radiative transfer model, he concludes that, for reasonable models, the observed ratio between optical and NIR scalelengths is caused by a change in the stellar population. His definition of reasonable models, however, does not seem to include models with dust disks more extended than the stellar one, which are shown here to produce gradients as large as the observed ones. This may be considered a further hint of the existence of extended dust. Because of the uncertainty in the intrinsic stellar colour gradient, it is difficult to evaluate its effect on the present modelling. The stellar output is calibrated on the observed SED inside the half-light radius. If the shallower B profile is caused mainly by a change in the stellar population, the SED at larger distances may be stronger on the short-wavelength side than the central one. Since extinction is larger at small $`\lambda `$, a smaller optical depth would be sufficient in the external parts to produce the same FIR output. However, the effect is likely to be small, because most of the absorbed radiation comes from the higher-extinction regions inside the half-light radius, for which I use the correct stellar SED.

The mean dust properties derived for the local Galactic environment have been used for every position in the dust distribution. Dust grain composition and size distributions may, however, vary along the disk. Given the large uncertainties on the properties of the local dust distribution itself, it is not easy to assess the influence of such variations on the model. Davies et al. (1999a) constructed a numerical model to study the expulsion and retention of dust grains in galactic disks, as a result of radiation pressure, the gravitational potential and friction with the gas. They conclude that larger grains (0.1$`\mu `$m) are likely to move from the outer galaxy to the centre, for reasonable disk opacities. On the contrary, smaller grains (0.01-0.001$`\mu `$m) remain relatively close to their formation sites. A reservoir of large grains would therefore accumulate in the central regions. Smaller grains can be heated to higher temperatures for the same ISRF (Draine & Lee 1984). An overabundance of smaller grains at larger radii with respect to the centre would therefore result in shallower temperature gradients and FIR profiles. However, the effect is likely to be small, and it is unlikely to account for the observed large FIR scalelengths.

As already stated, an extended distribution of dust is needed to explain the FIR scalelengths. If this distribution is confirmed, our observations of the universe may be severely affected by dust extinction. Alton et al. (1999b) assessed the impact of this hypothesis, for a universe populated by galaxies with dust distributed as in a model of NGC 6946. The chosen model is the same as in Sect.
4.9, with a standard optically thick disk derived from the H<sub>2</sub> distribution and an extended optically thin disk associated with the atomic gas. The fraction of light emitted at redshift $`z`$ that fails to reach instruments observing in the B band, as a result of the extinction by the intervening disks, is then computed. It is found that 30-40% of the light emitted at $`z=2`$ would fail to reach us. Most of the extinction would be due to the extended optically thin disk, rather than to the one associated with the molecular gas. This is because the geometric cross-section (9 times larger for the dust disk associated with the atomic gas) is a more critical parameter than the central optical depth. The result therefore holds even if the molecular gas is distributed in clumps, which would reduce its extinction. Note, however, the authors' caveat about NGC 6946 being very gas-rich: when dust modelled on the gas distributions and gas-to-dust ratios of the two separate phases of NGC 891 is used, only 5% of the light emitted at the same redshift fails to reach us.

Clearly, the knowledge of the dust distribution in spiral galaxies is still at a preliminary stage. More work needs to be done, on both the observational and the theoretical side. From the discussion above, it emerges that the inclusion of clumping in proper radiative transfer and emission models is necessary. This will clarify the impact of ISM and stellar inhomogeneities on the FIR emission and settle the debated problem of the source of the dust heating. However, it has been shown that clumpy models depend heavily on the assumptions about the clump distribution. Future high-resolution, high-sensitivity instrumentation will therefore be essential to define the dust distribution and limit the number of parameters in the models. For this Thesis, I have tried to fit the FIR emission of one galaxy only, to assess the feasibility of the method and to test the model. However, NGC 6946 may have characteristics different from those of a ‘mean’ galaxy. Our group possesses a large set of optical and FIR data for several spirals (Trewhella 1998b). Furthermore, new FIR data are becoming available through the ISO mission Data Archive. Once the analysis of this Thesis has been carried out on other spirals, more general conclusions about the dust distribution and its extinction can be drawn.

### 5.4 Summary of the Thesis

In this Thesis I have devised a model to simulate the FIR emission of spiral galaxies, and I have applied it to observations of NGC 6946. An introduction to the influence of dust on astrophysical observations, a review of the studies of extinction and FIR emission in spiral galaxies, and a list of the observational evidence for extended dust distributions are presented in Chapter 1. The basic dust properties used in the modelling are given in Chapter 2. The model is described in Chapter 3, with details on the adopted stellar and dust geometries and on the implementation of the radiative transfer. Chapter 4 is devoted to a fit of the observed FIR properties of NGC 6946. Conclusions have been drawn in this Chapter. In the following list I summarise the major work done for this Thesis, together with a few other ancillary projects carried out during my PhD:

* Modification of a Monte Carlo radiative transfer code previously developed by myself and creation of a FIR emission model for spiral galaxies (Chapters 3 and 4).
The main features of the model are:

+ full treatment of multiple scattering in realistic geometries;
+ use of empirically derived dust properties;
+ derivation of the absorbed energy and temperature from an ISRF consistent with the radiative transfer;
+ creation of maps of the FIR emission and of the dust temperature distribution.

* Application of the model to NGC 6946 (Chapter 4). The main findings are:

+ the observed temperatures are consistent with ISRF heating;
+ optically thick disks are necessary to explain the observed SED of the FIR emission;
+ the optical-FIR scalelength ratio can be explained by extended disks;
+ a model with face-on optical depth $`\tau_\mathrm{V}=3`$ and a spatial distribution similar to that of the atomic gas provides a good fit to both the observed SED and the optical-FIR scalelength ratio.

* Original derivation of the dust emissivity from maps of Galactic extinction and FIR emission (Sect. 2.4; Bianchi et al. 1999a).
* SCUBA observations of dust associated with spiral arms and of diffuse dust in NGC 6946 (Sect. 4.3).
* SCUBA observations of the dust ring in NGC 7331 (Appendix A; Bianchi et al. 1998).
* INT observations and analysis of fields around edge-on galaxies, to detect the extinction of a possible dusty halo through the colours of background objects (Appendix B); discovery of a faint luminous halo in NGC 891 (Sect. B).

## Appendix A SCUBA imaging of NGC 7331 dust ring

To reduce the size of this file, this section has been omitted. However, its content has been published integrally in Bianchi S., Alton P. B., Davies J. I., Trewhella M., 1998, MNRAS, 298, L49.

## Appendix B Search for dust in the halos of spiral galaxies

To reduce the size of this file, this section has been omitted. The full version of this thesis, including the present appendix, can be found at http://www.arcetri.astro.it/sbianchi/tesi/thesis.ps.gz
# Evolution of low-mass metal-free stars including effects of diffusion and external pollution

## 1 Introduction

Little is known about the first generation of stars, which formed out of primordial material with a composition resulting from Big Bang nucleosynthesis. In particular, the formation process of Pop III stars is not understood and important questions remain unanswered. What was the Pop III initial mass function? Did the low- or the high-mass stars form first? Did the first Pop III supernovae pollute the still fully convective low-mass pre-main sequence stars? On the observational side, no $`Z=0`$ star has been observed, a fact which is not surprising given the interaction of potential metal-free stars with the galactic interstellar medium over more than 10 billion years. However, there is an increasing wealth of data concerning the ultra metal-poor halo stars ($`[\mathrm{Fe}/\mathrm{H}]\lesssim -2.5`$), which are definitely metal-poorer than Pop II stars, whose metallicities cluster at $`[\mathrm{Fe}/\mathrm{H}]\gtrsim -2.5`$. The metal distribution in the atmospheres of ultra metal-poor halo stars (UMPHS) displays an almost pure SNe II signature, with $`r`$-process element abundances very similar to the solar ones (Ryan et al. 1996; Sneden et al. 1998). Apparently, the envelopes of these stars contain matter processed in only one generation of massive stars. Indeed, Shigeyama & Tsujimoto (1998) argue that individual UMPHS have metal compositions which can be traced back to individual Pop III supernovae of type II. From these results one can conclude that the present UMPHS, which evidently must be the low-mass counterpart of a very early generation of stars, might have formed immediately after the first SNe. In this case, they evolved as extremely metal-poor stars, such as those investigated by Cassisi & Castellani (1993). Alternatively, they might be true Pop III stars whose envelopes were polluted by SNe ejecta after they had already reached the zero-age main sequence, and therefore were no longer fully convective. The internal, nuclear evolution in this case is that of Pop III stars, even if the outer parts of the envelope show the presence of metals. It is the second case – polluted low-mass stars of initial metallicity $`Z=0`$ (ignoring the $`10^{-10}`$ level of initial BBN <sup>7</sup>Li and <sup>7</sup>Be) – which we are investigating in this paper. Our aim is to provide a theoretical background to assess the evolutionary state of the UMPHS.

There is a significant difference, from the structural point of view, between extremely metal-poor stars and those of zero metallicity, as has been shown in all existing papers dealing with the evolution of metal-free stars (D’Antona 1982; Guenther & Demarque 1983; Eryurt-Ezer & Kiziloğlu 1985; Fujimoto et al. 1990). The reason is the absence of CNO elements. While in low-mass Pop II stars during the final stages of the main-sequence (MS) evolution the CNO cycle stabilizes the core at moderate temperatures, the sole operation of the $`pp`$-chains in Pop III stars leads to significantly hotter cores toward the end of core hydrogen burning. (For more massive Pop III stars, see Ezer & Cameron 1971; El Eid et al. 1983; Arnett 1996. Recall that for non-zero metallicity, the CNO cycle is not only contributing but dominating the energy generation.) Similarly, the hydrogen-burning shell of the post-MS stars shows much higher temperatures than in the Pop II case, where the CNO cycle provides the dominating energy source.
This implies that if the star has any source supplying enough CNO, it can convert itself into an extremely metal-poor star and might evolve – after a possible transition phase – in a more standard fashion. The onset of the CNO cycle is a rather drastic event, leading to excursions in the Hertzsprung–Russell diagram and to transient convective regions with mixing episodes. The critical mass fraction has been shown to be $`10^{-10}`$ and results from the fact that at $`T\approx 10^8`$ K about $`10^{-8}`$ of the solar CNO abundance is sufficient to lead to the same energy generation as in a star of solar metallicity. As has been demonstrated in the early works on Pop III stars, the temperatures at the end of core hydrogen burning in a $`1M_{\odot}`$ star (D’Antona 1982; Fujimoto et al. 1990) are high enough to allow the production of the critical carbon abundance through $`3\alpha `$-reactions. Results at lower masses are inconclusive (D’Antona 1982; Guenther & Demarque 1983). In § 2 we will investigate in detail whether other nuclear chains could be additional sources of carbon production, most notably those that start from $`pp`$-chain elements like <sup>7</sup>Li. This is to ensure that the in situ production of carbon is treated correctly in our stellar evolution calculations, because it is crucial for predicting whether and when Pop III stars can convert themselves into extreme Pop II stars. In § 3 we will present such up-to-date calculations of Pop III models, which are partially a repetition of the classical work cited, but also an extension to a variety of masses. In particular, we will discuss for which stellar masses in situ production of carbon is possible at all. As the second source of CNO elements we investigate that resulting from the assumed pollution by nearby supernovae. While the pollution initially affects only the outermost (possibly convective) envelope, atomic diffusion, which is known to be a significant physical process in the Sun, could transport the added metals to the hydrogen-burning regions. This external source of carbon could in particular affect the hydrogen-burning shell in evolved stars. In § 4 we will present calculations of the evolution of Pop III stars under these assumptions and discuss our results in connection with the observations of UMPHS. A section devoted to the conclusions follows.

## 2 Carbon production by nuclear reactions

The dominant reaction for primary carbon production in stars is the $`3\alpha `$-process. Hydrostatic helium burning usually takes place at $`T\approx 10^8`$ K, where enough energy is released to provide stellar luminosities. However, already at lower temperatures some $`3\alpha `$-reactions occur. Since hydrogen-burning temperatures in Pop I and II stars are below $`8\times 10^7`$ K, these two burning phases can safely be assumed to be well separated. The situation is different in Pop III stars, where the central temperature $`T_c`$ is higher and, already before the exhaustion of protons, can approach values of $`7\times 10^7`$ K or above (see § 3 and Fujimoto et al. 1990). At this temperature, and at the typical densities of late main-sequence cores ($`10^5\,\mathrm{g\,cm^{-3}}`$), the lifetime of helium against $`\alpha `$-capture is of order $`10^{12}`$ yr. Over the remaining $`10^8`$ years of core hydrogen burning, the critical carbon mass fraction of $`10^{-10}`$ can therefore easily be produced.
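As a rough consistency check (an order-of-magnitude estimate of our own, not taken from the detailed calculations): treating the quoted lifetime as if it applied over the whole remaining interval $`\Delta t\approx 10^8`$ yr, and using the initial helium abundance $`Y=0.23`$, the carbon mass fraction built up is at most $`X_{^{12}\mathrm{C}}\sim Y\,\Delta t/\tau_{3\alpha}\approx 0.23\times 10^{8}/10^{12}\approx 2\times 10^{-5}`$. Even though only the last, hottest part of that interval actually contributes, the critical value of $`10^{-10}`$ is exceeded by a comfortable margin.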
For standard Pop II stars this might happen as well, although to a smaller extent, but the additional primary carbon produced is negligible compared to the amount already present in the initial mixture. The $`3\alpha `$-process is not the only way to produce carbon at high temperatures. Mitalas (1985) investigated “unconventional <sup>12</sup>C production in Pop III stars” via $`\alpha `$-captures on the light elements <sup>7</sup>Be and <sup>8</sup>B, which are present in equilibrium abundances in the pp-II and pp-III chains. The results of these $`\alpha `$-captures would be <sup>11</sup>B or <sup>11</sup>C, which under subsequent $`p`$-captures would create <sup>12</sup>C (note that this limits the ability of these reaction chains to produce carbon to stages before the end of hydrogen burning). Since these reactions dominate the $`3\alpha `$-process at $`T\lesssim 7.6\times 10^7`$ K, they might be able to create carbon in sufficient amounts even before the latter process, or – in stars of lower mass, not reaching the critical temperatures for the $`3\alpha `$-process – might be the only path to carbon production before the end of core hydrogen burning. Alternatively, in the hot pp-chains, <sup>8</sup>B does not $`\beta `$-decay but reacts via $`{}^{8}\mathrm{B}(p,\gamma )^9\mathrm{C}`$. By further $`\alpha `$-capture, <sup>13</sup>N can be created. Wiescher et al. (1989) have investigated these hot pp-chains in detail with a nuclear reaction network. Besides the classical pp-I, -II, and -III chains they identified – as a function of $`T`$ and $`\rho `$ – additional chains dominating under hotter conditions. The sequence of reactions proposed by Mitalas (1985) was included (termed rap-II and rap-III). Wiescher et al. (1989) were interested in whether the hot pp-chains could produce CNO isotopes sufficiently fast for Pop III supermassive stars to experience a thermonuclear explosion during the fast core collapse. For this specific question they found the rap-processes to be important only for temperatures in excess of $`10^8`$ K (see their Fig. 7). In the present case, however, the problem is related to the competition between pp-chains and $`3\alpha `$-reactions in producing carbon during the final phases of core hydrogen burning, i.e. on nuclear timescales.

In order to answer this question, and to determine which reaction chains have to be included in the network part of the stellar evolution code, we used the one-zone nuclear reaction network of Weiss & Truran (1990) and included all potentially relevant reactions. (We will use the term full network to denote the one-zone extended nuclear reaction network, and call the more limited network used in the stellar evolution code the stellar evolution network.) The reaction rates were taken from the most recent update of the reaction rate library of F.-K. Thielemann (Thielemann et al. 1987; Thielemann 1996). The full network is shown in Fig. 1. In the lower right corner, standard reactions ($`(p,\gamma )`$, $`(\alpha ,p)`$, etc.), except those involving neutrons, are shown. They are included for all nuclei. In addition, several (but not all) other reactions not fitting into this standard reaction scheme are indicated. As an example, consider the $`3\alpha `$-reaction, indicated by a long arrow with a solid arrowhead; the $`\alpha (\alpha ,\gamma )^8\mathrm{Be}`$ reaction is also included as an individual reaction (open arrowhead).
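Mechanically, such a one-zone network amounts to integrating the abundance equations $`\dot{Y}_i=f_i(Y,T,\rho )`$ along a prescribed $`(T,\rho )`$ history. Below is a minimal sketch for the dominant $`3\alpha `$ channel alone; the rate prefactor and the $`(T,\rho )`$ history are illustrative placeholders, not the Thielemann library rates or the actual evolutionary tracks used here:

```python
import numpy as np
from scipy.integrate import solve_ivp

def r3a(T, rho, Y_a):
    """Triple-alpha rate in molar abundances: r = rho^2 * Y_a^3 * lambda(T).
    The prefactor is an illustrative placeholder, not a library rate;
    exp(-4.4027/T9) is the classical 3-alpha temperature dependence."""
    T9 = T / 1e9
    return 2.8e-8 * rho**2 * Y_a**3 * T9**-3 * np.exp(-4.4027 / T9)

def rhs(t, Y, T_of_t, rho_of_t):
    Ya, Yc = Y
    r = r3a(T_of_t(t), rho_of_t(t), Ya)
    return [-3.0 * r, r]                         # 3 alpha -> 12C

yr = 3.156e7
T_of_t = lambda t: 6e7 + 2e7 * t / (1e8 * yr)    # hypothetical core heating
rho_of_t = lambda t: 1e3 * (1 + 4 * t / (1e8 * yr))

# Y_alpha = X_He/4, with X_He ~ 0.98 near hydrogen exhaustion
sol = solve_ivp(rhs, [0.0, 1e8 * yr], [0.98 / 4.0, 0.0],
                args=(T_of_t, rho_of_t), rtol=1e-8, atol=1e-30)
print(f"X(12C) after 1e8 yr: {12.0 * sol.y[1, -1]:.2e}")
```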
An important reaction is $`{}^{9}\mathrm{C}\rightarrow p+2\alpha +\beta ^+`$, which inhibits the artificial build-up of $`{}^{9}\mathrm{C}`$, from which $`{}^{12}\mathrm{C}`$ would otherwise become reachable. Some arrows indicate several possible reaction paths; e.g., from $`{}^{4}\mathrm{He}`$ to $`{}^{7}\mathrm{Be}`$ both a $`{}^{3}\mathrm{He}`$ capture and an $`(\alpha ,n)`$ exchange are included. In the lower left corner we also include reactions involving $`d`$, for example $`{}^{3}\mathrm{He}(d,p)^4\mathrm{He}`$ or $`d(d,n)^3\mathrm{He}`$. Inverse reactions are always taken into account as well. The upper right corner contains the nuclides included for the CNO cycles.

The full network needs as input $`T(t)`$ and $`\rho (t)`$, which we took from selected evolutionary calculations. Since these were done with our stellar evolution network, which includes only standard pp-, CNO- and $`3\alpha `$-reactions (but treats all of them simultaneously), the full network might be considered inconsistent. However, up to a relative mass fraction of CNO elements of $`X_{\mathrm{CNO}}<10^{-10}`$ the energy production of the star, and therefore its temperature evolution, is not affected, and all other effects resulting from the production of carbon (e.g. the change of the molecular weight) are utterly negligible. In addition, the full network calculations have only exploratory character, serving to identify those reactions which must be included in the stellar evolution code for a proper treatment of carbon production.

Primordial $`{}^{7}\mathrm{Li}`$ and $`{}^{7}\mathrm{Be}`$ – both present at a level of $`10^{-10}`$ – are burnt by $`\alpha `$-captures at the beginning of the main-sequence phase or, equivalently, at the very beginning of the full network calculations. Mitalas (1985) already demonstrated that their primordial presence does not lead to carbon production during the pre-main-sequence phase. Carbon itself is produced in standard BBN only at a level of $`10^{-15}`$ or lower; in inhomogeneous BBN this can rise, under very high $`n/p`$-ratios, to at most $`10^{-12}`$ (Thomas et al. 1993). For the full network we assumed a zero initial carbon abundance.

The results of the full network calculations are summarized in Fig. 2, which displays the creation of $`{}^{12}\mathrm{C}`$ and $`{}^{14}\mathrm{N}`$ as a function of time during the last phases of the main-sequence evolution of three stars of masses 0.8, 1.0 and $`1.2M_{\odot}`$. The bottom panel shows the run of temperature as taken from the stellar evolution calculations, including the temperature rise due to the rapid conversion of carbon to nitrogen. The three lines in the upper two panels correspond to the production if all important processes – $`3\alpha `$, $`{}^{7}\mathrm{Be}(\alpha ,\gamma )^{11}\mathrm{C}`$ and $`{}^{7}\mathrm{Li}(\alpha ,\gamma )^{11}\mathrm{B}`$ – are included (dashed line), if $`3\alpha `$ is excluded (dot-dashed), and if only the last one is considered (solid). This latter chain – $`{}^{7}\mathrm{Li}(\alpha ,\gamma )^{11}\mathrm{B}(p,\gamma )^{12}\mathrm{C}`$ – turned out to be the next one in order of decreasing carbon production efficiency. For all three masses, the $`{}^{7}\mathrm{Be}(\alpha ,\gamma )^{11}\mathrm{C}`$ chain is the only relevant one at the initially lower temperatures, as is evident from the fact that the dashed and dot-dashed lines lie on top of each other (i.e.
the neglect of the $`3\alpha `$-process has no influence), while the solid line ($`{}^{7}\mathrm{Li}(\alpha ,\gamma )^{11}\mathrm{B}`$ only) falls below them, even if by less than one order of magnitude. At the increasing temperatures towards the end of core hydrogen burning, the $`3\alpha `$-process quickly becomes the most important source of carbon, which is partially processed to nitrogen, in particular in the more massive stars. The other chains’ contributions decline, such that the corresponding lines almost level off. This is due to the exhaustion of the protons needed for carbon production in these chains. Note that at earlier times, when enough protons are still available, the conversion of $`{}^{12}\mathrm{C}`$ to $`{}^{14}\mathrm{N}`$ – the CN subcycle – is possible, such that $`{}^{12}\mathrm{C}/^{14}\mathrm{N}`$ is in equilibrium at $`10^{-2}`$. With the exhaustion of protons and the additional $`{}^{12}\mathrm{C}`$ source through $`3\alpha `$, carbon finally exceeds nitrogen in abundance. While the $`{}^{7}\mathrm{Be}(\alpha ,\gamma )^{11}\mathrm{C}`$ chain is the major path to $`{}^{12}\mathrm{C}`$ and $`{}^{14}\mathrm{N}`$ for some time, in no case is this process able to create the critical abundance of $`10^{-10}`$. Ignoring all processes except $`3\alpha `$ therefore introduces only insignificant errors. The relevant carbon production will happen (if it does so at all) through $`3\alpha `$ at the end of core hydrogen burning, and the carbon missing at that time due to the neglect of the other processes (at a level of $`10^{-12}`$ or even less) will quickly be added. Note that the $`0.8M_{\odot}`$ star just manages to reach the critical $`{}^{12}\mathrm{C}`$ abundance. Due to their lower core temperatures, stars of even lower mass will fail to create enough carbon before the exhaustion of protons, such that they will not be able to initiate the CNO cycles. Also, their main-sequence lifetimes are longer than the age of the universe. From the detailed network computations we have performed we therefore conclude that it is indeed safe to include only the $`3\alpha `$-process in the stellar evolution code, if one accepts errors of 1% or less in the detailed $`{}^{12}\mathrm{C}`$ and $`{}^{14}\mathrm{N}`$ production history (which is well contained within the uncertainty of the $`3\alpha `$ rate). However, it is also evident that both H- and He-burning reactions must be treated simultaneously in the network.

## 3 The evolutionary computations: “standard” models

The evolutionary properties of extremely metal-poor stellar structures have been the subject of accurate investigations since the early 1970s, thanks to the pioneering work by Ezer & Cameron (1971). The early investigations were usually devoted (see the Introduction) only to selected evolutionary masses or quite narrow mass ranges; the first complete survey of the evolutionary behavior of extremely metal-poor objects over a quite large range of mass (from low-mass to intermediate and massive stars) was carried out by Cassisi & Castellani (1993), and was later completed by Cassisi et al. (1996), who extended the numerical computations for low-mass stars to more advanced evolutionary stages, such as the central He-burning and the double-shell H- and He-burning phases. Following Applegate et al. (1988), the metallicity employed in those computations was $`Z=10^{-10}`$, adopted as an upper limit for the cosmological production of heavy elements in inhomogeneous Big Bang nucleosynthesis.
In the latter two works a considerable effort was devoted to investigating all the peculiar evolutionary features related to the paucity of heavy elements in the stellar matter; for this reason a detailed description of the main evolutionary properties of Population III objects will not be repeated here, and we refer the interested reader to the quoted papers and references therein. In the present work, we decided to adopt a “true” metal-free ($`Z=0`$) chemical composition. As noted above, the only elements present at a level of $`10^{-10}`$ in the Pop III primordial material are $`{}^{7}\mathrm{Li}`$ and $`{}^{7}\mathrm{Be}`$, which, however, are burnt to helium throughout most of the star already during the pre-main-sequence phase.

The evolutionary models have been computed with an updated version of the FRANEC evolutionary code (Cassisi & Salaris 1997; Salaris & Cassisi 1998). As for the input physics, OPAL radiative opacity tables (Iglesias et al. 1992; Rogers & Iglesias 1992) were adopted for temperatures higher than 10,000 K, while for lower temperatures the molecular opacities provided by Alexander & Ferguson (1994) have been used. Both the high- and low-temperature opacity tables assume a scaled solar heavy-element distribution (Grevesse 1991) when $`Z>0`$. The equation of state has been taken from Straniero (1988), supplemented by a Saha EOS at lower temperatures. The outer boundary conditions for the stellar models have been evaluated by adopting the $`T(\tau )`$ relation provided by Krishna-Swamy (1966). In the superadiabatic region of the stellar envelope a mixing length of 1.6 pressure scale heights has been adopted. We emphasize that for some selected evolutionary sequences the numerical computations have also been performed with the Garching evolutionary code (e.g. Schlattl & Weiss 1999), in order to verify the reliability of the present results. In all cases we have obtained good agreement between the two sets of “parallel” computations; small differences are due to some differences in the adopted physical inputs. For example, the OPAL EOS (Rogers et al. 1996) is used in the Garching code.

Following the suggestions by Ezer & Cameron (1971) and by Cassisi & Castellani (1993), local equilibrium values have been adopted for the $`{}^{3}\mathrm{He}`$ abundance, independent of the occurrence of convective mixing. If and when the CNO cycle becomes operative, equilibrium abundances among the various nuclei have been assumed, as derived by solving – for any given temperature and density – the set of equations describing the equilibrium between $`p`$-captures and $`\beta `$-decays. In both cases, the approach adopted for the treatment of chemical equilibrium follows from the evidence that – under the peculiar physical conditions existing in extremely metal-poor stars – the equilibrium timescales are much shorter than the characteristic mixing times (for a detailed discussion of this point we refer to Castellani & Sacchetti 1978). Taking into account the evidence presented in the previous section that the $`3\alpha `$-reaction is by far the major contributor to $`{}^{12}\mathrm{C}`$ formation, the evolutionary computations have been carried out with a nuclear network accounting for both H- and He-burning, but neglecting any possible unconventional nuclear reaction branches. For all models an initial helium abundance of $`Y=0.23`$ – in agreement with current estimates of the primordial helium abundance – has been adopted.
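The equilibrium conditions referred to above take a simple form (a standard textbook relation, written here for the CN subcycle rather than quoted from the source): in a steady state the $`p`$-capture flows through the cycle are all equal,

$`n_p\,n(^{12}\mathrm{C})\,\langle\sigma v\rangle_{12}=n_p\,n(^{13}\mathrm{C})\,\langle\sigma v\rangle_{13}=n_p\,n(^{14}\mathrm{N})\,\langle\sigma v\rangle_{14},`$

so that the number ratio of any two nuclei equals the inverse ratio of their $`p`$-capture rates, e.g. $`n(^{14}\mathrm{N})/n(^{12}\mathrm{C})=\langle\sigma v\rangle_{12}/\langle\sigma v\rangle_{14}`$. Since $`{}^{14}\mathrm{N}(p,\gamma )`$ is the slowest reaction of the cycle, nitrogen accumulates, in agreement with the $`{}^{12}\mathrm{C}/^{14}\mathrm{N}`$ equilibrium value of $`10^{-2}`$ quoted in the previous section.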
We have considered stellar masses ranging from $`0.8M_{\odot}`$ to $`1.2M_{\odot}`$ (in steps of $`0.1M_{\odot}`$), and all the evolutionary sequences have been followed from the initial Zero-Age Main Sequence (ZAMS) until He-burning ignition at the tip of the Red Giant Branch (RGB); in the following, we will describe the properties of the $`1M_{\odot}`$ stellar models, taken as representative of this mass range. The evolution of the stars in the considered mass range is quite similar, and this particular choice allows us to make useful comparisons with the results of previous investigations. For each mass we have computed two different cases without external pollution: one without atomic diffusion, which we denote as ‘canonical’, and one with atomic diffusion, treated according to the formalism of Thoul et al. (1994). The second case we call ‘standard’, following the contemporary definition of the physics of the standard solar model. In addition (see the next section), we have computed some evolutionary sequences taking into account both atomic diffusion and external He and heavy-element pollution; this case we will call ‘pollution’ in the following. It is worth noticing that this is the first theoretical investigation of Pop III stars accounting in a self-consistent way for the effect of atomic diffusion alone, and of external pollution plus atomic diffusion.

The evolutionary paths of both the $`1M_{\odot}`$ canonical model and the one computed accounting for helium diffusion (standard) are plotted in Fig. 3. For comparison, we have also plotted the evolutionary tracks for an extremely metal-poor chemical composition ($`Z=10^{-10}`$), as given by Cassisi & Castellani (1993), and for a solar composition, calculated by us with the same program. Comparing our canonical model with the one by Cassisi & Castellani (1993), it is worth noticing the fine agreement between the two computations as far as the MS evolution is concerned: the two tracks overlap perfectly until the early evolution along the sub-giant branch (SGB). The age of the two models at the Turn-Off (TO) is almost the same: $`t_\mathrm{H}=6.43`$ Gyr for the model computed by Cassisi & Castellani (1993) and $`t_\mathrm{H}=6.31`$ Gyr for our model, the difference being a negligible 2%. This result is not surprising, since the main difference between the present computations and the ones performed by Cassisi & Castellani (1993) is only the use of different opacity tables. More in detail, Cassisi & Castellani (1993) employed the Cox & Tabor (1976) opacities for $`T<12000`$ K, and those by Huebner et al. (1977) at higher temperatures. The evolutionary timescales on the MS and the MS location may be affected by the high-temperature opacities employed in the computations, but the difference between the OPAL and Huebner et al. (1977) data is negligible at such low metallicities (see also the discussion in Cassisi & Castellani 1993). On the other hand, the difference in the location of the RGB is due to the higher values of the low-temperature opacities employed in the present calculations, since the low-temperature opacities entirely determine the position of the RGB in the Hertzsprung–Russell diagram (HRD).

Fig. 3 highlights an evolutionary event peculiar to true Pop III stars, which occurs during the transition to the RGB: a flash in the H-burning region, which leads to a blue loop in the HRD. The same result was also obtained by Fujimoto et al. (1990) for a $`1M_{\odot}`$ metal-free model.
The physical reasons for the occurrence of the flash have been discussed by Fujimoto et al. (1990) and can be summarized as follows. Close to the exhaustion of hydrogen at the center of the star, the core strongly contracts, increasing temperature and density and thereby counterbalancing the effect of the continuous hydrogen depletion on the energy production. Due to this increase in both central density and temperature, the efficiency of the triple-$`\alpha `$ reactions increases and, in turn, so does the abundance of carbon nuclei (cf. Fig. 2). As a consequence, the energy delivered by the CNO cycle strongly increases during this phase. A thermal runaway is produced, and a flash occurs which produces the loop of the stellar track in the HRD. In this phase, the star develops a convective core. The ensuing expansion of the inner stellar regions produces a significant decrease in both density and temperature, which reduces the $`3\alpha `$ nuclear rate and, as a consequence, also the energy produced by the CN conversion. When the central abundance of hydrogen drops to zero, the convective core disappears.

Even if the general characteristics of this phenomenon are quite similar to the ones obtained by Fujimoto et al. (1990), there are also remarkable differences, which are perhaps due to differences in the physical inputs and in the numerical treatments. In both models, the flash along the SGB occurs when the central abundance of hydrogen is of the order of $`X_c\approx 0.0006`$. However, the maximum size of the convective core during the thermal runaway is about 50% smaller in our computations than in the model computed by Fujimoto et al. (1990) ($`0.11M_{\odot}`$ in comparison with $`0.2M_{\odot}`$). This means that in our model the energy flux produced by the flash is significantly lower in comparison with previous results. Another significant difference between the two sets of computations is that during the runaway the hydrogen-burning rate via the CN cycle exceeds the contribution provided by the pp-chain in the model of Fujimoto et al. (1990), whereas in our computations the energy delivered by the CNO cycle at its maximum is about a factor of 3 lower than the pp-chain contribution. Also the maximum helium-burning rate in our models ($`L_{\mathrm{He}}\approx 25.7\times 10^{-8}L_{\odot}`$) is about two orders of magnitude smaller than the value obtained by Fujimoto et al. (1990). Given the general properties of this phenomenon, we suggest that it does not really resemble a He flash like the one occurring at the RGB tip (as suggested by Fujimoto et al. 1990), but that it is more similar to a CNO flash. It is interesting to note that the chemical abundances of the CNO elements at the maximum flash energy production are $`X_{^{12}\mathrm{C}}=6.50\times 10^{-12}`$, $`X_{^{14}\mathrm{N}}=2.19\times 10^{-10}`$ and $`X_{^{16}\mathrm{O}}=2.77\times 10^{-12}`$.

Comparing this result with the models provided by Cassisi & Castellani (1993), one notices that the CNO flash along the SGB is missing in the latter computations. A quick comparison between the two models shows that both density and temperature in the inner core of the star are significantly lower in the models of Cassisi & Castellani (1993).
As we verified with a specific evolutionary computation, this is a result of the fact that Cassisi & Castellani (1993) have assumed a non-zero primordial abundance of heavy elements: even this extremely low metal abundance ($`Z=10^{-10}`$) is able to allow the CNO cycle to operate, such that – at the exhaustion of the central hydrogen abundance – the central temperature remains low enough to avoid the thermal runaway. The initial chemical abundances of the CNO elements in the Cassisi & Castellani (1993) model are quite similar to those of our zero-metallicity model after the thermal runaway; in this sense, a metal abundance of the order of $`Z=10^{-10}`$ seems to be a “critical” metallicity for getting the thermal runaway along the SGB. In any case, one has to bear in mind that the CNO flash along the SGB is a secondary evolutionary feature which only slightly increases the duration of the central hydrogen burning phase (by of order $`(10-20)\times 10^6`$ yr). Therefore, for general purposes as, for instance, population synthesis, the theoretical framework developed by Cassisi & Castellani (1993) and Cassisi et al. (1996) would be applicable also to canonical $`Z=0`$ models. The comparison between the canonical model and the standard model accounting for atomic diffusion shows the expected changes, i.e., a moderate decrease of the TO luminosity and of the central hydrogen burning lifetime, and a slight shift of the MS toward lower effective temperatures. It is evident that helium diffusion has no effect on the occurrence of the CNO flash at the end of the central hydrogen burning phase. It is also worth noticing that one has to expect the efficiency of atomic diffusion during the MS evolution of low-mass stars to increase with decreasing stellar metallicity, since diffusion in the stellar envelopes is more efficient due to the thinner convective envelopes. Therefore, for fixed diffusion coefficients the effect of atomic diffusion is largest in metal-free stars. This is a quite important point to bear in mind when we discuss the effect of external pollution on metal-free stars. As far as the evolution along the RGB is concerned, we did not find any thermal oscillations or instabilities (thermal runaway) in the hydrogen burning shell as discussed by Fujimoto et al. (1990), who emphasized that the occurrence of this phenomenon is strongly related to the choice of the time steps in the numerical computations. However, all the numerical experiments we performed have provided negative results. Overall, there are no significant differences between the canonical and standard models along the RGB. This is a well-known fact, since at the base of the RGB the surface convection reaches its maximum extension and mixes back into the convective envelope basically all the chemical elements previously diffused toward the center. The luminosity at the RGB tip ($`L_{\mathrm{tip}}`$) and the size of the He core at the He-burning ignition ($`M_{\mathrm{cHe}}`$) show only minor differences: $`M_{\mathrm{cHe}}=0.497M_{\odot }`$ and $`\mathrm{log}(L_{\mathrm{tip}}/L_{\odot })=2.357`$ for the canonical model, and $`M_{\mathrm{cHe}}=0.498M_{\odot }`$ and $`\mathrm{log}(L_{\mathrm{tip}}/L_{\odot })=2.361`$ for the standard one. The values of $`M_{\mathrm{cHe}}`$ and $`\mathrm{log}(L_{\mathrm{tip}}/L_{\odot })`$ appear in fair agreement with the values given by Cassisi & Castellani (1993) in spite of the different assumptions about the primordial metallicity. Some significant differences, however, exist in comparison with the data provided by Fujimoto et al.
(1990) and Fujimoto et al. (1995): $`\mathrm{\Delta }M_{\mathrm{cHe}}\simeq 0.015M_{\odot }`$ and $`\mathrm{\Delta }\mathrm{log}(L_{\mathrm{tip}}/L_{\odot })\simeq 0.06`$, in the sense that the present models have a fainter luminosity and a smaller He core. However, these differences might be understood in terms of changes in the adopted physical scenario. To close this section, we briefly comment on the helium flash, which terminates the RGB evolution. Fujimoto et al. (1990) and Hollowell et al. (1990) have strongly claimed that during the He flash in a $`Z=0`$ model the outer edge of the convective shell formed in the helium zone can extend into H-rich layers, thereby mixing hydrogen back into the helium core. This could produce relevant changes in the surface chemical abundances of the star. In our present computations (but see also the extended survey made by Cassisi & Castellani (1993)) we did not find any evidence for this phenomenon, which depends strongly on the competition between the convective and nuclear timescales. The former is not known in the simple mixing-length convection theory, but can only be estimated. Therefore, no firm conclusions about the occurrence of mixing events during the helium flash can be drawn without a more extensive investigation.

## 4 The non-standard evolutionary models: the effect of external pollution

### 4.1 Evolutionary properties

The absence of metal-free stars and the relative paucity of extremely metal-poor stars has sometimes been explained by invoking the hypothesis of enrichment of the surface metallicity due to accretion of metal-rich material through encounters with interstellar gas clouds. Even if this hypothesis seems to be the most promising one for explaining the observational evidence, as far as we know it has not been fully investigated until now. The evolutionary consequences and observational implications of such a process have been estimated by Yoshii (1981) and Fujimoto et al. (1995). In this respect, the work by Yoshii (1981) is quite important as it provides some reasonable estimates of the amount of accreted matter and its heavy-element abundance. The amount of material accreted onto a star, due to encounters with gas clouds during its travel through the Galaxy, depends on several parameters such as, for instance, the relative velocity between the star and the gas cloud, the cloud parameters, and so on. Making some realistic assumptions about the values of these different quantities and the parameters of the stellar orbits, Yoshii (1981) estimated that the global amount of material accreted onto a star with a mass of the order of $`0.8M_{\odot }`$ in a timespan of the order of $`10^{10}`$ yr has to be in the range $`(10^{-3}-10^{-2})M_{\odot }`$. As far as the metal enrichment is concerned, it strongly depends on the details of the chemical evolution of the Galactic matter. Yoshii (1981) estimated that after $`10^{10}`$ yr, due to external pollution, the surface metallicity of an extremely metal-deficient star should be in the range $`0.0006\lesssim Z\lesssim 0.01`$ (depending on the stellar orbit properties). The effects on the stellar structures due to atomic diffusion or convective mixing (effectively dilution), however, were taken into account by means of simple considerations based on the known physical properties of standard stellar models for very metal-poor objects. Also in the paper by Fujimoto et al. (1995) the accretion of metals was not treated in a self-consistent way.
For instance, both analyses do not take into account the possible changes in the thermodynamical properties of the envelope due to the change in the opacity of the stellar matter. In addition, if one is interested in investigating in detail the possible changes of the evolutionary behavior of Pop III stars due to the accretion of metal-rich matter onto the stellar surface, it is necessary to verify if and when atomic diffusion is able to bring CNO elements below the edge of the convective envelope and into the H-burning region, which can either be the core or, in later phases, a shell. To test this last point, we have decided to use the simplest accretion model possible. We assume that instantaneous accretion of all the metal-rich matter happened immediately before the star reached the ZAMS. This means that we simply modify the chemical composition of a certain fraction of the external stellar layers before starting the MS computations. This approach has the benefit of maximizing the “efficiency” of the combination of both processes (accretion + diffusion). In the experiments carried out, we have modified the chemical composition of the outermost $`0.01M_{\odot }`$ of a model, by adopting $`Z=0.01`$ and $`Y=0.25`$ as the chemical composition of the “accreted” matter. In the following, we are going to describe the resulting evolution of the $`0.9M_{\odot }`$ and the $`1M_{\odot }`$ models. In Fig. 4, we have plotted the evolutionary tracks in the HRD and, for comparison, repeated the evolutionary track of the standard metal-free model with atomic diffusion (see § 3). The main result is that the accreted carbon never reaches the nuclear burning regions. Some interesting features can easily be recognized in the HRD, i.e., the shift toward lower effective temperatures of the models with external pollution. This is due to the increase of the opacities in part of the stellar structure as a consequence of the accreted metals. To be more specific, the lower boundary of the polluted region of the star is located at temperatures higher than $`10^6`$ K, well below the edge of the convective envelope. It is also worth noticing that the occurrence of the CNO flash along the SGB – an event originating in the deep interior – is not affected at all by the accretion of metals. This is indirect proof that atomic diffusion is not able to bring CNO elements into the nuclear burning region. In fact, had that been the case, one would have witnessed an increase in the CNO-cycle burning rate, eventually developing into a thermal runaway, during the previous MS evolution. The fact that during the central hydrogen burning phase the evolutionary properties of the polluted stars are not changed significantly by the diffusion of heavy elements (one can easily notice that the chemical pollution of the outermost layers has the effect of slightly increasing the evolutionary lifetime, see Fig. 5; this has to be related to the evidence that the polluted models are moderately fainter than the standard ones) – i.e., that atomic diffusion is not efficient enough to bring down the CNO elements needed to start the CNO cycle – is also confirmed by the behavior of the various energy sources all along the evolution. During the SGB evolution, before the CNO flash occurs along the way to the RGB, a thin convective region appears in the envelope (see Fig. 5, lower panel, at $`t\simeq 9.6`$ Gyr in the case of pollution of $`0.01M_{\odot }`$).
Even if the thickness of this region is very small ($`\simeq 0.02M_{\odot }`$), basically all the metals diffused during the MS evolution are dredged back to the surface. This is easily noticeable by looking at the top panel of the same figure (which displays the run of \[Fe/H\] with time) and also at Fig. 6. The increase in envelope metallicity is accompanied by a decrease in effective temperature. Then the convective envelope goes even deeper, thereby mixing only matter with the original zero metallicity; this results in a slight decrease of the surface abundances of the individual heavy elements at $`t\simeq 9.7`$ Gyr. The maximum abundances reached before this point are never exactly the same as the initial ones of the polluting matter, since the diffused metals were already diluted within a metal-free environment. Then the star experiences the thermal runaway; $`T_{\mathrm{eff}}`$ suddenly increases and the convective envelope disappears, leaving the surface abundances unchanged for a short while. When the runaway stops and the track goes back to the normal evolution toward the RGB, envelope convection sets in again, this time reaching much deeper ($`\simeq 0.25M_{\odot }`$). At this point the surface metal abundances decrease even more (more and more metal-free matter is mixed into the convective envelope), while the stellar track moves toward lower effective temperatures because of the progressively larger convective region. When the star is settling on its Hayashi track, the metal abundance in the convective envelope is still decreasing; this produces the “kink” which appears at the base of the RGB in Fig. 4. It is due to the fact that the star is trying to settle on the Hayashi track corresponding to the metallicity of its convective envelope, but the metallicity is still changing due to the deepening of the convective region; therefore the track has to move toward larger $`T_{\mathrm{eff}}`$, since the RGB location moves to larger $`T_{\mathrm{eff}}`$ for decreasing surface metallicity. Eventually this process ends when the convective envelope has almost reached its maximum depth, so that a small change in the extension of the convective region does not appreciably change the surface metallicity, and the star starts its standard RGB evolution. Up to this point our calculations indicate that diffusion is not able to transport the accreted metals into the nuclear burning regions. The last possibility for this occurrence arises during the RGB evolution, where the hydrogen-burning shell could pass through regions previously reached by the envelope convection at its maximum extension. This effect is present in Pop II stars and is the physical reason for the so-called RGB bump. Since the envelope metallicity, even if very diluted with respect to the beginning of the MS phase (by a factor of about 100; a simple mass-budget estimate of this dilution is sketched below), is significantly larger than zero, a substantial amount of CNO nuclei could be ingested into the H-burning shell; this could cause a dramatic change in the efficiency of the H-burning, of the same kind as the one experienced on the SGB. The outcome of our evolutionary computations rules out this possibility, too, even in the very extreme case of our assumptions about the pollution mechanism.
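The dilution factor quoted above can be checked with a simple mass-budget estimate. The short sketch below mixes the accreted layer into convective envelopes of the depths mentioned in the text; it is a minimal illustration under stated assumptions, not the diffusion calculation performed by the evolution code, and the solar metal fraction used for the logarithmic scale is our own reference value:

```python
import math

Z_SUN = 0.017  # assumed solar metal mass fraction (our choice)

def surface_metallicity(m_acc, z_acc, m_conv, z_star=0.0):
    """Metal mass fraction after a convective envelope of mass m_conv
    (initial metallicity z_star) homogenizes the accreted layer of mass
    m_acc and metallicity z_acc.  Masses in solar units."""
    return (m_acc * z_acc + (m_conv - m_acc) * z_star) / m_conv

# Pollution experiment from the text: 0.01 Msun of Z = 0.01 matter on a
# metal-free star.  Envelope depths: ~0.02 Msun for the thin SGB
# convective shell, ~0.25 Msun once the star approaches the RGB.
for m_conv in (0.02, 0.25):
    z = surface_metallicity(0.01, 0.01, m_conv)
    print(f"M_conv = {m_conv:.2f} Msun -> Z = {z:.1e}, "
          f"[M/H] = {math.log10(z / Z_SUN):+.2f}")
```

Mixing alone gives a dilution factor of about 25 for the deep envelope; the larger factor quoted above presumably also reflects the metals already transported inward by diffusion and the somewhat deeper maximum penetration of the convective envelope.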
In all the models we have computed, the distance in mass between the innermost point reached by the convective envelope during its maximum penetration and the location of the hydrogen-burning shell at the He-flash is never less than $`0.1M_{\odot }`$ (we note, in passing, that the absence of the RGB bump constitutes another difference between Pop III and extreme Pop II evolution). Moreover, we recall that the evolutionary timescale along the RGB is too short to allow atomic diffusion to bring down CNO elements from the point of deepest extent of the convective envelope prior to the $`3\alpha `$ ignition in the helium core. We have also checked whether a larger amount of metal-rich accreted matter could produce some significant change in the described evolutionary properties of the models. To this end we have doubled the amount of accreted matter ($`M_{\mathrm{acc}}=0.02M_{\odot }`$), which is a factor of $`\simeq 2`$ larger than the maximum value suggested by Yoshii (1981). One can easily notice from the data plotted in Figs. 4 and 5 that the effect is very small and that the CNO elements in the accreted matter never reach the H-burning regions. Taking into account the fact that we have maximized the effects of external pollution plus diffusion, we can safely conclude that the accretion of a significant amount of metal-rich matter onto the surface of a metal-free star is not able to change its evolutionary behavior significantly, except for a decrease of the effective temperatures (due to the higher metallicity of the externally polluted regions) and a small increase in the MS lifetimes. This is mainly due to the fact that atomic diffusion is never efficient enough to bring CNO elements into the inner stellar layers.

### 4.2 Temporal evolution of the surface metal abundances in polluted metal-deficient stars

Since we have investigated the evolutionary effects of external pollution plus atomic diffusion on metal-free stars by computing self-consistent evolutionary models, we are able to show the run with time of the surface abundances of the most relevant chemical elements. This is a quite important issue since, indeed, it is potentially the only method available for discriminating between polluted Pop III and extreme Pop II stars when observing isolated field stars. In Fig. 6 we have plotted the mass fractions of the CNO elements, helium and iron as a function of time all along the evolution from the ZAMS until He ignition at the tip of the RGB. One can easily notice that as a consequence of atomic diffusion the surface abundances of all these elements are monotonically decreasing during the MS phase. However, as largely discussed in the previous section, when the star approaches the RGB, the convective envelope goes deeper inside the star, dredging up the elements which diffusion has previously carried down in the structure. This has the effect of producing the sharp increase of the displayed chemical abundances; as the outer convection continues to deepen, it reaches the layers consisting of primordial matter, mixes them with all the outer layers, and the surface chemical abundances sharply decrease again. The surface abundance evolution of a polluted Pop III star is therefore initially reflecting the pollution process, which we have concentrated into a singular event at the beginning of the star’s MS evolution. Alternatively, continuous accretion or individual pollution events at any time during the MS phase can be envisaged.
The pollution effect is modulated by that of diffusion, which will lead to declining metal abundances similar to those shown in Fig. 6. As soon as the envelope becomes convective – which, in turn, depends on the surface metallicity as well – the previous diffusion history will be obliterated and only the mass ratio between the accreted material and the stellar convective envelope will determine the observable abundances along the RGB. Since the abundances of the individual heavy elements cannot be obtained directly from spectroscopic measurements, we plot in Fig. 7 the expected behavior with time of observable quantities, namely the abundance ratios \[Fe/H\], \[C/Fe\], \[O/Fe\], \[C/N\] and \[O/N\] (a sketch of the conversion from mass fractions to these ratios is given below). From the data plotted in this figure one expects that, when accounting for atomic diffusion and external pollution, Pop III stars should show, during a significant portion of their evolution before reaching the RGB, a slight overabundance of (O/N) and an underabundance of (C/N), (O/Fe) and (C/Fe) in comparison with the Sun (we recall that a scaled solar abundance distribution has been assumed for the polluting matter), and more generally with respect to the same element ratios in the polluting material. However, one has to take into account that the values of these under- and overabundances are quite small (lower than the commonly adopted uncertainty of $`\pm 0.10-0.15`$ dex in spectroscopic measurements), since the diffusion velocities for C, N, O and Fe are quite similar and, moreover, in our experiments we have maximized the effect of pollution plus diffusion. The element ratio showing the largest difference with respect to its value in the polluting matter appears to be \[C/Fe\] (again, of course, only during the MS phase). Can the measurement of these abundance ratios in MS field UMPHS – in particular \[C/Fe\] – be of some help for discriminating between a very metal-poor Pop II MS star and a polluted Pop III one? Probably not very much, for two kinds of reasons. The first one is that, as repeatedly stressed before, in our experiments we have maximized the effect of pollution plus diffusion. A steady slow accretion of metals, or an impulsive accretion in discrete amounts (which added together should amount at most to about 0.01$`M_{\odot }`$), would have a smaller effect on the surface abundance ratios, since diffusion has less time to work. Moreover, diffusion should be effective also in Pop II stars, and therefore a similar value for, e.g., the \[C/Fe\] ratio would be observed at the surface of a true Pop II star and of a polluted Pop III one, at least under the simplest hypothesis that the element ratios in the matter from which Pop II stars originated and in the matter accreted by Pop III objects are basically the same. Before concluding this section we want to recall that, as already emphasized previously, in our computations we have found no evidence for any deep mixing phenomenon at the He ignition, as suggested earlier by Fujimoto et al. (1990) and Hollowell et al. (1990). Under the hypothesis that this mechanism is efficient in the cores of Pop III stars, the surface metal abundances (in particular C and N) would be enhanced at the tip of the RGB, in contrast to the effect of external pollution, which is washed out during the RGB phase. According to Fujimoto et al. (1995) a nitrogen-rich carbon star with $`-4.5\lesssim [\mathrm{Fe}/\mathrm{H}]\lesssim -2.0`$ must be a Pop III star.
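The conversion from the mass fractions of Fig. 6 to the observable ratios of Fig. 7 is worth making explicit. A minimal sketch follows; the solar mass fractions are approximate reference values of our own choosing, not necessarily the ones adopted for the figures:

```python
import math

# Assumed (approximate) solar mass fractions -- our reference values:
X_SUN = {"H": 0.70, "C": 2.9e-3, "N": 1.1e-3, "O": 7.9e-3, "Fe": 1.3e-3}

def x_over_h(x_elem, x_h, elem):
    """[elem/H]: the logarithmic abundance relative to the assumed Sun."""
    return math.log10(x_elem / x_h) - math.log10(X_SUN[elem] / X_SUN["H"])

def x_over_y(x_a, x_b, a, b):
    """[A/B] = [A/H] - [B/H]; the hydrogen mass fraction cancels."""
    return math.log10(x_a / x_b) - math.log10(X_SUN[a] / X_SUN[b])

# Example: a surface polluted with scaled-solar matter, diluted 25-fold.
x_h, x_c, x_fe = 0.75, 2.9e-3 / 25, 1.3e-3 / 25
print("[Fe/H] =", round(x_over_h(x_fe, x_h, "Fe"), 2))
print("[C/Fe] =", round(x_over_y(x_c, x_fe, "C", "Fe"), 2))
```

For scaled-solar polluting matter the element ratios start at zero by construction; the small departures discussed above are then produced by the slightly different diffusion velocities of the individual elements.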
Given the relevance of this issue for the problem of discriminating between metal-poor Pop II and polluted Pop III stars, it is our intention to present a detailed investigation of the evolution through the He-flash in a forthcoming paper.

## 5 Conclusions

The main aim of this work was to investigate the possibility that a low-mass Pop III star could increase its original amount of CNO elements in the core to a level which allows the ignition of the CN cycle, thus modifying its evolutionary behavior from the one characteristic of a metal-free object to the one typical of extremely metal-poor stars. The only two channels allowing – in principle – for this occurrence are the production of carbon by means of unconventional nuclear reactions and the accretion of metal-rich matter through encounters with molecular clouds. Both scenarios have been fully explored by means of self-consistent evolutionary computations. As for the possibility that non-standard nuclear reactions – such as the ones suggested by Mitalas (1985) and Wiescher et al. (1989) – could significantly contribute, together with the canonical 3$`\alpha `$ reactions, to the carbon production, it has been found that the most important reaction is $`{}^{7}\mathrm{Be}(\alpha ,\gamma ){}^{11}\mathrm{C}`$, which dominates the carbon synthesis during the early MS phase. However, all non-standard reactions are unable to increase the carbon abundance to the level of $`10^{-10}`$ needed to ignite the CN cycle. Therefore we conclude that, at least in the explored mass range, it is necessary to account only for the 3$`\alpha `$ reactions in the evolutionary models in order to obtain a reliable estimate of the carbon production. The uncertainty on the final abundance by mass of $`{}^{12}\mathrm{C}`$ or $`{}^{14}\mathrm{N}`$ is of the order of a quite negligible 1%. This can be considered as plain evidence for the reliability of current stellar models for metal-free objects. The scenario in which a metal-free object accretes metal-rich matter through encounters with molecular clouds has been investigated by computing evolutionary models accounting for both atomic diffusion and chemical pollution, and by using reasonable estimates of the amount of accreted matter. The numerical computations have clearly shown that neither atomic diffusion during the MS phase nor outer convection during the RGB evolution is able to bring CNO elements into the nuclear burning region; in spite of the remarkable changes of the heavy-element abundances of the outer layers, the evolutionary behavior of a polluted Pop III star is always regulated by the original chemical composition. Since we have investigated only comparably well-known particle-transport effects (convection and diffusion), one could envisage that non-standard effects (for example, mixing induced by rotation) help in transporting metals from the stellar surface to the nuclear processing regions. Indeed, in globular cluster stars there is strong evidence for such an additional mixing mechanism, whose signature is evident in surface anomalies of the CNO elements as well as Na and Mg (see Kraft (1994) for a review). This mixing is simulated in theoretical models (e.g. Denissenkov & Weiss (1996)) by an additional diffusion process. The intention and result of these simulations is to transport material from the hydrogen shell to the bottom of the convective envelope. In our case, the direction would be opposite, but the mechanism could be the same.
While we cannot completely exclude such an additional effect, there is a strong argument against its occurrence in metal-free stars: there is observational evidence (Suntzeff (1981); Gilroy & Brown (1991)), supported by theoretical modelling and considerations (Sweigart & Mengel (1979); Charbonnel (1995)), that the process starts only after the red giant bump, i.e., when there is no molecular weight gradient (barrier) between the outer shell and the rest of the envelope. As demonstrated, true Pop III stars, however, never reach this phase because helium ignition sets in long before, at very low luminosities compared to those of Pop II RGB tips. This, in turn, is a direct consequence of the hotter cores, the temperature of which is determined by the shell temperatures. As we know, these are higher for shells burning hydrogen via the $`pp`$-chains. We therefore consider it unlikely that an additional mixing as in globular cluster red giants would appear in Pop III giants. From the point of view of an ‘external’ observer it is extremely difficult to discriminate between a polluted Pop III field star and a very metal-poor Pop II one; in spite of substantial differences in the core physical conditions and energy production mechanisms, the evolution in the HRD is qualitatively quite similar. The effective temperature of the star is basically regulated by the surface metallicity, and the only feature peculiar to the Pop III object is the CNO flash along the SGB, a very fast and unobservable phase. The study of the surface abundance ratios also does not appear to be of very much help. This means that, if the chemical pollution of the stellar surface is effective, the still surviving Pop III stars could all be disguised as extremely metal-poor Pop II stars, with no chance to discriminate between “true” extremely metal-poor stars and polluted Pop III objects. The only possibility, as mentioned in the previous section, is the occurrence of deep mixing phenomena at the He ignition, which would produce a nitrogen-rich, metal-poor carbon star; this is a subject on which we will present a detailed investigation in a forthcoming paper. Our models also predict a general trend for the surface metal abundance of polluted Pop III stars: it should steadily decrease with evolutionary phase, with the exception of a brief episode during the subgiant phase, when it is higher by a factor of ten (Fig. 5). Given that enough UMPHS are observed and that all of them had been polluted by single events during their earliest evolution (Shigeyama & Tsujimoto (1998)), this might be observable. This work was supported by a DAAD/VIGONI grant. All authors are grateful for the warm hospitality they received during their visits at the institutes involved in the project. They also thank the organizers and participants of the 1999 MPA/ESO conference on “The First Stars” for a very stimulating meeting. Helpful discussions with V. Castellani and P. Marigo are acknowledged. F.-K. Thielemann kindly provided his reaction library and added helpful discussions about hot pp-chains. This paper made use of the NASA ADS system at its mirror sites at CDS, Strasbourg and ESO, Garching.
# RELATIVISTIC JETS FROM COLLAPSARS

## 1 INTRODUCTION

Catastrophic collapse events have been proposed to explain the energies released in gamma-ray bursts (GRBs), including mergers of compact binaries (Paczyński 1986; Goodman 1986; Eichler et al. 1989; Mochkovitch et al. 1993), collapsars (Woosley 1993) and hypernovae (Paczyński 1998). According to the current view these models require a stellar-mass black hole (BH) which accretes up to several solar masses of matter, powering a pair fireball. If the baryon load of the fireball is not too large, baryons are accelerated together with $`e^{+}e^{-}`$ pairs to Lorentz factors $`>10^2`$ (Cavallo & Rees 1978). Such relativistic flows are supported by radio observations of GRB 980425 (Kulkarni et al. 1998b). Spherically symmetric fireballs have been studied by several authors by means of 1D Lagrangian hydrodynamic simulations (e.g., Panaitescu et al. 1997; Panaitescu & Mészáros 1998; Kobayashi, Piran & Sari 1999). Recently, it has been argued that the rapid temporal decay of several GRB afterglows is more consistent with the evolution of a relativistic jet after it slows down and spreads laterally than with a spherical blast wave (Sari, Piran & Halpern 1999; Halpern et al. 1999; Kulkarni et al. 1999; Rhoads 1999). The lack of a radio afterglow in GRB 990123 provides independent evidence for a jet-like geometry (Kulkarni et al. 1999b).

## 2 INITIAL MODEL AND NUMERICAL SETUP

MacFadyen & Woosley (1999; MW99) have explored the evolution of rotating helium stars ($`M_\alpha \gtrsim 10M_{\odot }`$) whose iron core collapse does not produce a successful outgoing shock, but instead forms a BH surrounded by a compact accretion torus. Assuming an enhanced efficiency of energy deposition in the polar regions, MW99 obtain relativistic jets along the rotation axis which are highly focused and seem to be capable of penetrating the star. However, as their simulations are Newtonian, they obtain flow speeds which are superluminal. We have performed axisymmetric relativistic simulations using a $`14M_{\odot }`$ collapsar model from MW99. When the central BH has acquired a mass of $`3.762M_{\odot }`$ we map the model to our computational grid. In a consistent collapsar model a jet will be launched by any process which gives rise to a local deposition of energy and/or momentum, as e.g., $`\nu \overline{\nu }`$-annihilation, or magneto-hydrodynamic processes. We mimic such a process by depositing energy at a prescribed rate homogeneously within a $`30^{\circ }`$ cone around the rotation axis. In the radial direction the deposition region extends from the inner grid boundary located at 200 km to a radius of 600 km. We have investigated constant energy deposition rates $`\dot{E}=10^{50}\ \mathrm{erg\ s}^{-1}`$ and $`\dot{E}=10^{51}\ \mathrm{erg\ s}^{-1}`$, and a varying deposition rate with a mean value of $`10^{50}\ \mathrm{erg\ s}^{-1}`$. The constant rates roughly bracket the expected $`\dot{E}`$ of collapsar models, while the varying rate mimics, e.g., time-dependent mass accretion rates resulting in time-dependent $`\nu \overline{\nu }`$-annihilation (MW99). The simulations were performed with the multidimensional relativistic hydrodynamic code GENESIS (Aloy et al. 1999) using a 2D spherical grid with 200 radial zones spaced logarithmically between the inner boundary and the surface of the helium star at $`R_{\ast }=2.98\times 10^{10}`$ cm. Assuming equatorial symmetry, we performed simulations with $`2^{\circ }`$, $`1^{\circ }`$ and $`0.5^{\circ }`$ angular resolution.
In the latter case the grid consists of 60 uniform zones covering the polar region ($`0^{\circ }\le \theta \le 30^{\circ }`$) and 40 nonuniform zones logarithmically distributed between $`30^{\circ }\le \theta \le 90^{\circ }`$. The gravitational field of the BH is described by the Schwarzschild metric. Effects due to the self-gravity of the star are neglected, i.e., we consider only the gravitational potential of the BH. The equation of state includes non-relativistic nucleons treated as a mixture of Boltzmann gases, radiation, and an approximate correction due to $`e^{+}e^{-}`$ pairs as described in Witti, Janka & Takahashi (1994). Complete ionization is assumed, and the effects due to degeneracy are neglected. We advect nine non-reacting nuclear species which are present in the initial model: $`{}^{12}\mathrm{C}`$, $`{}^{16}\mathrm{O}`$, $`{}^{20}\mathrm{Ne}`$, $`{}^{24}\mathrm{Mg}`$, $`{}^{28}\mathrm{Si}`$, $`{}^{56}\mathrm{Ni}`$, $`{}^{4}\mathrm{He}`$, neutrons and protons.

## 3 RESULTS

### 3.1 Constant small energy deposition rate (Model C50)

For a constant $`\dot{E}=10^{50}\ \mathrm{erg\ s}^{-1}`$ a relativistic jet forms within a fraction of a second and starts to propagate along the rotation axis with a mean speed of $`7.8\times 10^9\ \mathrm{cm\ s}^{-1}`$ (Fig. 1). The jet exhibits all the morphological elements of the Blandford & Rees (1974) jet model in the context of classical double radio sources: a terminal bow shock, a narrow cocoon, a contact discontinuity separating stellar and jet matter, and a hot spot. Fig. 1 shows that the density structure of the star does not change noticeably during the whole evolution. This, a posteriori, justifies our neglect of the self-gravity of the star. The propagation of the jet is unsteady because of density inhomogeneities in the star. The Lorentz factor of the jet, $`\mathrm{\Gamma }`$, increases non-monotonically with time, while the density drops to $`10^{-6}\ \mathrm{g\ cm}^{-3}`$ (Fig. 2). The density profile shows large variations (up to a factor of 100) due to internal shock waves. The mean density in the jet is $`10^{-1}\ \mathrm{g\ cm}^{-3}`$. Some of the internal biconical shocks which develop during the jet’s propagation recollimate the beam. They may provide the “internal shocks” proposed to explain the observed gamma-ray emission. A particularly strong recollimation shock wave (hardly evident at low resolution) forms early in the evolution. A very strong rarefaction wave behind this recollimation shock causes the largest local acceleration of the beam material, giving rise to a maximum in the Lorentz factor. When the jet encounters a region along the axis where the density gradient is positive (at $`\mathrm{log}r\simeq 8.1`$ and $`\mathrm{log}r\simeq 8.6`$), the jet’s head is decelerated, while a central channel in the beam is cleaned by outflow into the cocoon through the head, which accelerates the beam. The combination of both effects (deceleration of the head and beam acceleration) increases the strength of the internal shocks. Within the jet the mean value of the specific internal energy is $`10^{20}-10^{21}\ \mathrm{erg\ g}^{-1}`$, i.e., $`O(c^2)`$. The mean temperature is $`5\times 10^8`$ K (well below the pair creation threshold), implying that the pressure is radiation dominated in accordance with our simplified EOS. The relativistic treatment of the hydrodynamics leads to a qualitatively similar (formation of a jet), but quantitatively very different evolution than in MW99. According to their Fig. 27 the jet propagates 7 000 km within the first 0.82 s.
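As an aside, the hybrid grid described in § 2 is easy to reproduce. The following is a minimal sketch (our construction, not the actual GENESIS grid generator; in particular, the exact meaning of "logarithmically distributed" angular zones is our assumption):

```python
import numpy as np

# 200 logarithmically spaced radial zones between the inner boundary at
# 200 km and the stellar surface, plus a hybrid angular grid: 60 uniform
# zones in [0, 30] deg and 40 log-distributed zones in [30, 90] deg.
R_IN, R_STAR = 2.0e7, 2.98e10  # cm
r_edges = np.logspace(np.log10(R_IN), np.log10(R_STAR), 201)

theta_inner = np.linspace(0.0, 30.0, 61)  # uniform: 0.5 deg per zone
theta_outer = np.logspace(np.log10(30.0), np.log10(90.0), 41)
theta_edges = np.concatenate([theta_inner, theta_outer[1:]])

# For a log grid the zone-size ratio dr_(i+1)/dr_i is constant:
print(f"radial zoning ratio = {r_edges[2] / r_edges[1]:.4f}")
print(f"polar zones: {theta_inner[1] - theta_inner[0]:.2f} deg; "
      f"coarsest zone: {theta_edges[-1] - theta_edges[-2]:.2f} deg")
```

With these choices the polar zones are exactly $`0.5^{\circ }`$ wide, matching the finest angular resolution quoted above, and the radial zone width grows by a constant factor of about 1.04 per zone.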
Continuing the comparison with MW99: they furthermore infer an asymptotic $`\mathrm{\Gamma }\simeq 10`$, and find a half opening angle, $`\mathrm{\Omega }`$, for their jet of $`\simeq 10^{\circ }`$. In our simulation, at the same time, for the same angular resolution ($`2^{\circ }`$) and the same $`\dot{E}`$, the head reaches a radius of 30 000 km, but the maximum Lorentz factor ($`\mathrm{\Gamma }_{\mathrm{max}}`$) is only 4.62 at 12 200 km. Such a quantitative difference is expected, in part, due to the different mapping time and inner boundary radius of the two calculations. Initially, in our simulations $`\mathrm{\Omega }`$ is between $`6^{\circ }`$ and $`8^{\circ }`$, depending on the angular resolution. At $`t\simeq 1.5`$ s the strong recollimation shock reduces $`\mathrm{\Omega }`$ to $`<1^{\circ }`$. We find that some results depend strongly on the angular resolution, the minimum acceptable one being $`0.5^{\circ }`$ (at least near the axis). The morphology of the jet is richer at higher resolution. At $`0.5^{\circ }`$ angular resolution $`\mathrm{\Gamma }_{\mathrm{max}}\simeq 15-20`$ at a radius of $`\simeq 8\times 10^9`$ cm at jet breakout. Within the uncertainties of the jet mass determination due to the finite zoning and the lack of a precise numerical criterion to identify jet matter, the baryon load ($`\eta \equiv Mc^2/E_{\mathrm{depos}}`$ with $`E_{\mathrm{depos}}=\int \dot{E}\,dt`$) decreases with increasing resolution. In the highest resolution run we find an average baryon load of $`\eta \simeq 1.3`$ at jet breakout (see also Sect. 4).

### 3.2 Constant large energy deposition rate (Model C51)

Enhancing $`\dot{E}`$ by a factor of ten (to $`10^{51}\ \mathrm{erg\ s}^{-1}`$), the jet flow reaches larger Lorentz factors. We observe transients during which the Lorentz factor becomes as large as 40. After 1.2 s the Lorentz factor steadily increases from 22 to 33. The jet propagates faster than in model C50. The time required to reach the surface of the star is 2.27 s instead of 3.35 s. At breakout the jet is less collimated ($`\mathrm{\Omega }\simeq 10^{\circ }`$). The strong recollimation shock present in model C50 is not so evident here. Instead, several biconical shocks are observed within a very knotty beam, and the Lorentz factor near the head of the jet is larger ($`\simeq 22`$ in the final model) because, due to the larger $`\dot{E}`$, the central funnel is evacuated faster, and because the mean density of the jet is 5 times smaller than in model C50 ($`\eta `$ being twice as large).

### 3.3 Varying energy deposition rate (Model V50)

We have computed a model where the mean energy deposition rate ($`10^{50}\ \mathrm{erg\ s}^{-1}`$) varies randomly on time scales of a few milliseconds and in amplitude by a factor of ten. Compared to model C50 the jet structure is more knotty and also richer in shocks, particularly inside the first $`10^9`$ cm (where the radial resolution is large enough to capture the finest structures imprinted on the flow by the time variability of the deposition rate). At breakout $`\mathrm{\Gamma }_{\mathrm{max}}=26.81`$, which is almost twice as large as the value found in model C50. Thus, a variable $`\dot{E}`$ is more efficient in converting internal energy into kinetic energy, and in this case the internal shocks are stronger and more numerous. The mean propagation speed is similar in both models, although the instantaneous velocity of the jet’s head is clearly different. Behind the strongest recollimation shock $`\mathrm{\Omega }<1^{\circ }`$ in both models.

### 3.4 Evolution after jet breakout

The structure of the circumstellar medium will influence the characteristics of the GRB and of the subsequent afterglow.
Thus, a continuation of the simulations beyond jet breakout is necessary. In order to satisfy the conditions for accelerating shocks (Shapiro 1979) we endowed the star with a Gaussian atmosphere, which at $`R_\mathrm{a}=1.8R_{\ast }`$ passes over into an external uniform medium with a density of $`10^{-5}\ \mathrm{g\ cm}^{-3}`$ and a pressure of $`10^{-8}p(R_{\ast })`$. The computational domain is extended to $`R_t=2.54R_{\ast }`$ with 70 additional zones. The evolution after jet breakout has been computed for models C50 and C51. In both cases the jet reaches $`R_t`$ after $`\simeq 1.8`$ s (measured from breakout). Its mean propagation velocity is $`\simeq 0.85c`$, which is almost three times faster than the velocity of the head inside the star ($`0.30c`$ in model C50; $`0.44c`$ in model C51). The evolution after jet breakout consists of three distinct epochs (Fig. 3). The first one, lasting 0.35 s, is characterized by a head velocity of $`0.48c`$ and a small sideways expansion. During the second phase (of 0.3 s) the jet head accelerates to $`0.91c`$ because of the steep external density gradient, and because the flux of axial momentum is still important compared to the pressure. The sideways expansion is still sub-relativistic ($`\simeq 0.008c`$), and the $`\mathrm{\Omega }`$ of the beam increases to $`\simeq 10^{\circ }`$. During the final 1.2 s the bow shock propagates within the uniform part of the ambient medium, leading to a rapid ($`\mathrm{\Gamma }\simeq 5`$) lateral spreading (Fig. 3). The shape of the expanding bubble is prolate (Figs. 1 and 3) during the post-breakout evolution. However, when the jet reaches the uniform part of the circumstellar environment, the bubble widens due to the faster sideways expansion. We expect a more isotropic expansion when most of the bubble is inside the uniform medium and when it is pressure driven (in particular if the energy deposition is switched off). The Lorentz factor near the boundary of the cavity blown by the jet grows from $`\simeq 1`$ (at jet breakout) to $`\simeq 3`$ in both models, decreasing with latitude. At the end of the simulation $`\mathrm{\Gamma }_{\mathrm{max}}`$ is 29.35 (44.17) for model C50 (C51), which is still smaller than the values required for the fireball model (Piran 1999).

## 4 DISCUSSION AND CONCLUSIONS

Energy deposition in the polar regions of a collapsar model gives rise to both the formation and propagation of a relativistic jet through the mantle and envelope of the star, and to a supernova explosion. The jet has a small opening angle ($`\simeq 8^{\circ }`$) and possesses a highly collimated ($`\simeq 1^{\circ }`$), ultra-relativistic core in which the Lorentz factor reaches a value of $`\mathrm{\Gamma }_{\mathrm{max}}=44`$ (model C51) at the end of the simulation, about 2 s after shock breakout. The equivalent isotropic kinetic energy (see MacFadyen, Woosley & Heger 1999) slightly exceeds $`10^{54}`$ erg for model C51 ($`10^{53}`$ erg for model C50) within $`2^{\circ }`$ ($`5^{\circ }`$) of the rotation axis, dropping by a factor of 10 within $`17^{\circ }`$ ($`10^{\circ }`$). The inner region contains $`8\times 10^{-4}M_{\odot }`$ with $`\langle \mathrm{\Gamma }\rangle \simeq 4`$ (Fig. 1). For a larger $`\dot{E}`$, the jet and in particular the cocoon are less collimated, because a harder driven jet also expands more strongly in the lateral direction. The rest-mass density and the internal energy vary strongly in space and time within the jet, giving rise to a very inhomogeneous baryon load, i.e., the concept of $`\eta `$ as a global parameter is useless.
Instead, it is more appropriate to discuss the efficiency of energy conversion in terms of incremental baryon loads, considering only matter within a given range of $`\mathrm{\Gamma }`$-values. Although we find an average baryon load of the jet of $`\overline{\eta }\simeq 1`$, some parts of the flow have a baryon load as low as $`10^{-5}`$ or even less. After jet breakout $`\overline{\eta }`$ decreases by a factor of 4 in less than 1.8 s. If this trend continues, even $`\overline{\eta }\simeq 10^{-3}`$ could be reached within 9 s. In model C51, at the end of the simulation, $`2M_{\odot }`$ have a Lorentz factor of less than three, $`3\times 10^{-4}M_{\odot }`$ move with $`3\le \mathrm{\Gamma }<10`$, and for $`2\times 10^{-6}M_{\odot }`$ the Lorentz factor is $`\mathrm{\Gamma }\ge 10`$ (Fig. 4). The latter two masses reduce to $`2\times 10^{-5}M_{\odot }`$ and $`2\times 10^{-7}M_{\odot }`$ for model C50. Except for the very early evolution ($`t<1`$ s) the amount of matter moving at moderately ($`3\le \mathrm{\Gamma }<10`$) and highly ($`\mathrm{\Gamma }\ge 10`$) relativistic velocities increases by a factor of $`\simeq 3`$ every second, i.e., if the central engine is active for another 5 s at the assumed energy deposition rate, $`10^{-4}M_{\odot }`$ will move with $`\mathrm{\Gamma }\ge 10`$. As $`\mathrm{\Gamma }_{\mathrm{max}}`$ is also rapidly increasing, Lorentz factors of several hundreds might be reached before the central engine is switched off. For models where the released total energy is equal (C50 and V50), $`\mathrm{\Gamma }_{\mathrm{max}}`$ is higher (by a factor of two) for a time-dependent $`\dot{E}`$. Determining the efficiency of energy conversion, $`\epsilon =E_k/E_{\mathrm{depos}}`$, is hampered by the fact that the kinetic energy $`E_k`$ of a relativistic fluid is not a well defined quantity. If $`E_k\equiv \int \rho \mathrm{\Gamma }(\mathrm{\Gamma }-1)c^2\,dV`$, we find $`\epsilon =1.6`$ for model C50 (and $`\epsilon =2.8`$ for model C51) at the end of the simulation. These efficiencies are obtained considering only matter with radial velocities $`>0.3c`$ and specific internal energy densities $`>5\times 10^{19}\ \mathrm{erg\ g}^{-1}`$, i.e., matter in the jet. Note that efficiencies larger than one can arise because (i) there are also other large sources of energy (e.g., gravitational, internal) available, and (ii) matter is entrained into the jet which does not originate from the deposition region. Thus, efficiencies larger than one suggest that the local energy deposition is efficient in triggering the conversion of other forms of energy into kinetic energy. In our simulations the jet reaches the stellar surface intact (propagating over three decades in radius). This result may also hold for other, less specific initial conditions. A more spherical density stratification might decrease the collimation of the jet, but the outflow might also be initiated mostly by momentum deposition instead of pure energy deposition (e.g., by MHD effects). The propagation of the jet after breakout will depend on the density stratification of the circumstellar medium. Thus, further simulations with different environments are planned. We note in this respect that the post-breakout propagation is similar in models C50 and C51, suggesting that a lower value of $`\rho _{\mathrm{ext}}`$ will not change the dynamics quantitatively. In our models the jet has only reached a radius of $`7.5\times 10^{10}`$ cm at the end of our simulations, which is $`10^2`$ to $`10^4`$ times smaller than the distance at which the fireball becomes optically thin. Determining whether a GRB will eventually be produced requires computing the further evolution of the jet.
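The diagnostics used in this section are straightforward to evaluate on simulation output. The following is a minimal, self-contained sketch (ours, not the actual post-processing code) of the kinetic energy integral, the efficiency, the baryon load, and the conversion of a beamed energy into an equivalent isotropic energy; the two-zone input values are invented for illustration:

```python
import numpy as np

C = 2.99792458e10  # speed of light [cm/s]

def jet_diagnostics(rho, gamma, dV, e_depos):
    """E_k = sum(rho*Gamma*(Gamma-1)*c^2*dV) over the selected zones,
    the efficiency epsilon = E_k/E_depos, and the mean baryon load
    eta = M c^2/E_depos.  rho is the proper rest-mass density, so
    rho*Gamma is the lab-frame density entering the mass sum; dV is
    the zone volume."""
    e_k = np.sum(rho * gamma * (gamma - 1.0) * dV) * C**2
    mass = np.sum(rho * gamma * dV)
    return e_k, e_k / e_depos, mass * C**2 / e_depos

def isotropic_equivalent(e_cone, half_angle_deg):
    """Equivalent isotropic energy for an energy e_cone contained in a
    single polar cone of the given half-opening angle."""
    omega = 2.0 * np.pi * (1.0 - np.cos(np.radians(half_angle_deg)))
    return e_cone * 4.0 * np.pi / omega

# Invented two-zone example of 'jet' matter (zones with v_r > 0.3c and
# high specific internal energy would be selected beforehand):
rho = np.array([1e-2, 1e-4])  # g/cm^3
gam = np.array([3.0, 20.0])
dV = np.array([1e27, 1e27])   # cm^3
e_k, eff, eta = jet_diagnostics(rho, gam, dV, e_depos=3e50)
print(f"E_k = {e_k:.2e} erg, efficiency = {eff:.2e}, eta = {eta:.2e}")

# An energy of 1e51 erg within a 2 deg cone corresponds to an
# equivalent isotropic energy of order 1e54 erg, the scale quoted above:
print(f"E_iso = {isotropic_equivalent(1e51, 2.0):.2e} erg")
```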
As the jet has stayed collimated inside the star, it might remain focused over the next three decades in radius, too. This work has been supported in part by the Spanish DGES (grant PB97-1432) and the CSIC. MAA expresses his gratitude to the Conselleria d’Educació i Ciència de la Generalitat Valenciana for a fellowship. We would like to thank Stan Woosley for his enthusiasm in promoting this work, and Thomas Janka for many helpful remarks. The calculations were performed on two SGI Origin 2000 machines at the CEPBA and at the SIUV.
# A catalog of helium abundance indicators from globular cluster photometry

## 1 Introduction

The determination of helium abundances in stars has been a long-standing problem in astronomy. For the old stars in globular clusters and in the Galactic halo, the helium abundance can only be measured spectroscopically in a direct way for hot horizontal branch (HB) stars. Studies of this kind (e.g. Moehler, Heber, & Durell 1997; Moehler, Heber, & Rupprecht 1997) have found abundances significantly higher and lower than the primordial helium abundance determined from observations of low-metallicity extragalactic H II regions (e.g. $`Y_P=0.234\pm 0.002`$ from Olive, Skillman, & Steigman 1997). The interpretation typically given for the low helium abundances is that gravitational settling has acted preferentially on helium atoms in the atmospheres of these stars, rendering the measurements unusable for determining the initial helium abundance of the stars. High helium abundances may be the result of mass loss before the HB phase, which could remove most of the hydrogen-rich envelope of the stars. Indirect methods must be used to measure the helium abundance in earlier phases of stellar evolution. The most useful methods are applied to globular clusters because individual clusters provide us with large samples of stars with seemingly identical compositions and ages. Globular cluster stars also seem to be among the oldest that can be found in the Galaxy, so that if the helium abundance could be determined, it would give us a hint of the primordial helium abundance. There are two other reasons for examining the helium abundances of globular clusters. First, it is important to look at the overall trend in the helium abundance as a function of the cluster metal content. This provides a check of our understanding of nucleosynthesis and Galactic evolution. In addition, previous studies of RR Lyrae stars have indicated that an anticorrelation of helium abundance with metal content can explain the Sandage period-shift effect (Sandage 1982). Second, it is important to see if it is possible to detect helium variations among clusters of similar metallicity, since helium remains a plausible candidate for the “second parameter” determining the morphology of the HB. While age seems to play a role in some clusters, it appears to be inadequate to explain all of the variations seen. In the past, clusters have been marked as possibly having high helium abundances (e.g., Dickens 1972), but to date there has been no definitive evidence of a relative helium abundance difference. We will examine three different helium abundance indicators that can be measured from cluster photometry. We do this in the hope that anomalous clusters will appear with unusual values in two or more of the parameters, so as to provide better evidence for variations. The three indicators have been discussed to varying degrees in earlier studies: $`R`$ (Iben 1968; Buzzoni et al. 1983; Caputo, Martinez Roger, & Paez 1987), $`A`$ (Caputo, Cayrel, & Cayrel de Strobel 1983; hereafter, CCC), and $`\mathrm{\Delta }`$ (Carney 1980; CCC).
Photometric studies of globular clusters have become rather numerous in the past decade, so that the size and quality of the dataset for each parameter can be considerably increased. Where it has been possible, we have also analyzed data for old globular clusters in the Magellanic Clouds, and for stars in Local Group dwarf spheroidals with old populations. In the following sections, we discuss each parameter in turn – their definitions, the datasets, potential errors in measurement, and calibration. In the final two sections, we compare helium abundance values derived from the different indicators, and examine the evidence for trends as a function of \[Fe/H\].

## 2 Helium Abundance Indicators

For all three of the indicators discussed below, higher values imply higher helium abundances.

### 2.1 The Population Ratio $`R`$

This indicator is simply defined as the ratio $`R=N_{HB}/N_{RGB}`$, where $`N_{HB}`$ is the number of horizontal branch stars, and $`N_{RGB}`$ is the number of RGB stars brighter than the luminosity level of the HB. This primarily reflects the dependence of the hydrogen-burning shell’s progress on the hydrogen content of the envelope material being fed into it (and, to a lesser extent, the change of the helium core mass at the helium flash, which affects the HB luminosity and, hence, the HB lifetime). Because the ratio is computed using the brightest stars in each cluster, it can be calculated for any cluster having deep enough photometry. However, because these stars are relatively scarce in globular clusters, it usually requires wide-field data or accurate photometry of the cluster core. We have restricted our sample to those clusters with photometric samples of at least a total of 100 HB and RGB stars. In addition, we require photometry of sufficient quality to separate the HB, RGB, and AGB populations readily, and to eliminate field stars from the sample when proper motion data are not available. We have chosen to use the ratio $`R`$ rather than the similar ratio $`R^{\prime }=N_{HB}/(N_{RGB}+N_{AGB})`$, which would be more easily measured in most cases, because $`R`$ is more straightforward to calculate theoretically. For this reason, the stellar sample was sometimes restricted in radius in order to remove portions of the cluster center where image blending made identification of HB, RGB, and AGB stars difficult. In re-examining the determinations of $`R`$, we found that the vast majority of earlier calculations used an “average” differential bolometric correction $`\mathrm{\Delta }V_{BC}\equiv V_{RGB}-V_{HB}`$ of 0.15 mag to determine the faint end of the RGB sample. This value was used in the original Buzzoni et al. (1983) paper, but only because uncertainties of various sorts made the correction unimportant. However, the correction is actually a function of metallicity, and it is applied to the faint end of the RGB sample, which means it can be the source of significant systematic error. To update $`\mathrm{\Delta }V_{BC}`$, we have examined the HB models of Dorman (1992) in conjunction with the isochrones of Bergbusch & VandenBerg (1992; hereafter BV92). These stellar models were computed with a consistent set of physics and compositions. Although the composition is somewhat out of date (it does not include full $`\alpha `$-element enhancement), the differential bolometric corrections should be satisfactory because they are differential in nature, and the oxygen abundance does not affect the color of the giant branch relative to the instability strip.
We have determined the corrections as a function of \[Fe/H\], and have derived the following fitting formula: $$\mathrm{\Delta }V_{BC}=0.709+0.548\text{[M/H]}+0.229\text{[M/H]}^2+0.034\text{[M/H]}^3.$$ Because $`\alpha `$-element enhancements influence the position of the HB and RGB in the CMD like a change in \[Fe/H\] (Salaris, Chieffi, & Straniero 1993), they must be taken into account when computing \[M/H\]. To do so, we assumed a constant $`\alpha `$-element enhancement of 0.3 dex for all of the globular clusters examined (Carney 1996). \[The halo field population is believed to show a different abundance pattern as a function of metallicity (Wheeler, Sneden, & Truran 1989).\] The $`\alpha `$-element abundance was taken to contribute to the effective metal content of the cluster as 0.7 \[$`\alpha `$/Fe\] (Salaris, Chieffi, & Straniero 1993). The contribution does not go directly as \[$`\alpha `$/Fe\] because oxygen, the most abundant $`\alpha `$ element, has a relatively high ionization potential. As a result, it does not contribute as significantly to the opacity in the envelopes of RGB stars as it does for higher temperature HB stars. Error in observed $`R`$ values comes from two sources: error in the numbers of HB and RGB stars due to misidentification or Poisson fluctuations, and error in the determination of the faint limit for the RGB sample (resulting from the determination of the magnitude level of the HB, or from the metallicity uncertainty that affects the size of the differential bolometric correction). The magnitude level of the HB was taken to be the average magnitude of stars in the instability strip. For clusters with many well-studied RR Lyrae stars, this could be taken from the average magnitude of the variables. Often, though, the RR Lyrae stars were not observed frequently enough to derive good average magnitudes. If the cluster had populated red and blue edges of the instability strip, a linear interpolation between the edges was made. In cases where the cluster had only a populated red or blue HB, a theoretical correction to $`\mathrm{log}_{10}T_{eff}=3.85`$ was made using the models of Dorman (1992), the cluster reddening, and a well-determined point on the populated portion of the HB. Dorman’s scaled-solar abundance HB models were used with a correction for enhanced $`\alpha `$-element abundances as described above. An examination of the table indicates that the quoted errors for the HB magnitudes of the clusters are in general less than 0.05 mag. This is because we are not interested in the absolute value of $`V_{HB}`$ for any of the clusters – only in the value for the sample used. It is clearly unnecessary for the photometric calibration of the sample to be perfect, since we only require that the relative photometry of the HB and RGB be good enough that the faint limit of the RGB sample can be determined well. For this reason, the tabulated values of $`V_{HB}`$ should not be taken as good absolute values. This column is provided to make verification of the results easier. The error due to the differential bolometric correction is illustrative, so we present a semi-analytic model for its contribution to the error. The important quantity to compute is $`\partial N_{RGB}/\partial V_{RGB}`$. This can be derived by assuming that i) the differential luminosity function (LF) of the RGB has a constant slope $`\alpha =d(\mathrm{log}_{10}N)/dV`$, and ii) the number of red giant branch stars goes to zero at the tip of the RGB (TRGB).
So, we use $`\mathrm{log}_{10}(N)=\alpha (V-V_{TRGB})`$, and find $$N_{RGB}=\frac{N_o}{\mathrm{ln}10}\left(10^{\alpha (V_{RGB}-V_{TRGB})}-1\right),$$ where $`N_o`$ is the normalization of the luminosity function. From this we find $$\frac{\partial N_{RGB}}{\partial V_{RGB}}=\alpha \mathrm{ln}10\left[\frac{10^{\alpha (V_{RGB}-V_{TRGB})}}{10^{\alpha (V_{RGB}-V_{TRGB})}-1}\right]N_{RGB}.$$ The fraction in brackets is close to one, and from the Bergbusch & VandenBerg (1992) LFs we find $`\alpha =0.33`$, nearly independent of composition. From this we find that for each tenth of a magnitude added to the faint limit of the RGB sample, the number of RGB stars increases by about 9%. (Alternatively, making the faint limit fainter by one magnitude for a given cluster area increases the total number of red giants by approximately 85%.) This demonstrates the importance of accurate values for the differential bolometric correction $`\mathrm{\Delta }V_{BC}`$. The above derivative affects the error in $`N_{RGB}`$ through errors in metallicity and in the determination of the HB magnitude. Because of the metallicity dependence of the differential bolometric correction, helium determinations for high-metallicity clusters will be relatively more uncertain. A polynomial fit to the derivative gives $$\frac{\partial \mathrm{ln}N_{RGB}}{\partial V_{RGB}}=0.9469+0.1084\text{[M/H]}+0.0257\text{[M/H]}^2.$$ We note that the error from this source goes as $`N_{RGB}`$, whereas the Poisson counting errors go as $`N_{RGB}^{1/2}`$. We have used \[Fe/H\] values from the tabulation of Djorgovski (1993) in computing the values of $`\mathrm{\Delta }V_{BC}`$ for the clusters. Djorgovski’s values fairly closely follow the values of Zinn & West (1984; hereafter, ZW). We have also considered an alternative metallicity scale, since this affects the differential bolometric corrections. Using metallicities from Carretta & Gratton (1996), we recomputed the $`R`$ values for 16 clusters from their sample. The results are shown in Fig. 1. For the most part, the values were little affected by the change in scale. Computed $`R`$ values are insensitive to factors that merely shift the RGB in temperature or color (age, for example). Factors that cause changes in the absolute brightness of the HB (such as the mean mass of the helium cores of stars, and the CNO abundances) will cause systematic errors in the measured values for individual clusters. Our sample is composed of 42 Galactic globular clusters (GGCs) and 5 globulars in the Large Magellanic Cloud (LMC), and is presented in Table 1. The listings are grouped in bins of approximately 0.2 dex in \[Fe/H\]. References carrying a “(PM)” designation were used to remove field stars from the sample using proper motion information. There tend to be few clusters at the very metal-rich end due to a combination of substantial field star contamination and confusion between RGB and red HB cluster stars. The variation of the $`R`$ values as a function of HB type is shown in Fig. 2. There may be a slight decrease in the average $`R`$ for clusters with bluer HB type. Several clusters with the bluest morphologies have unusually high $`R`$ values: NGC 6752, NGC 7099 (M30), NGC 6341 (M92), NGC 6218 (M12), and NGC 6681 (M70). However, this is not universal – there are more clusters in the same range of $`R_{HB}`$ having $`R`$ values that are closer to the Galactic average. There is not a clear reason for this difference: well-measured clusters are found in both groups, and a metallicity difference does not appear to be present.
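The quantitative ingredients of this section – the ratio itself, its counting error, the metallicity-dependent $`\mathrm{\Delta }V_{BC}`$, and the sensitivity of $`N_{RGB}`$ to the faint limit – can be collected into a short sketch. The functions below implement the fitting formulas quoted above; the error combination and the example star counts are our own illustrative choices:

```python
import math

def m_over_h(fe_h, alpha_fe=0.3):
    # Effective metal content: alpha elements contribute as
    # 0.7*[alpha/Fe] (Salaris, Chieffi, & Straniero 1993).
    return fe_h + 0.7 * alpha_fe

def delta_v_bc(fe_h):
    # Fitting formula for V_RGB - V_HB from the Dorman (1992) HB models
    # and the BV92 isochrones.
    m = m_over_h(fe_h)
    return 0.709 + 0.548 * m + 0.229 * m**2 + 0.034 * m**3

def dlnN_dV(fe_h):
    # Polynomial fit for the derivative d(ln N_RGB)/dV_RGB of the
    # constant-slope (alpha = 0.33) luminosity-function model.
    m = m_over_h(fe_h)
    return 0.9469 + 0.1084 * m + 0.0257 * m**2

def population_ratio(n_hb, n_rgb, fe_h, sigma_vhb=0.05, sigma_feh=0.2):
    # R = N_HB/N_RGB with Poisson errors, plus the faint-limit term: an
    # error in V_HB, or in Delta V_BC through [Fe/H], shifts the faint
    # end of the RGB sample and rescales N_RGB via d(ln N)/dV.
    r = n_hb / n_rgb
    poisson = 1.0 / n_hb + 1.0 / n_rgb
    dv = math.hypot(sigma_vhb,
                    delta_v_bc(fe_h + sigma_feh) - delta_v_bc(fe_h))
    faint = (dlnN_dV(fe_h) * dv) ** 2
    return r, r * math.sqrt(poisson + faint)

r, s = population_ratio(150, 110, fe_h=-1.6)
print(f"R = {r:.2f} +/- {s:.2f}")
```

In this example the counting term dominates, but since the faint-limit error scales as $`N_{RGB}`$ rather than $`N_{RGB}^{1/2}`$, as noted above, it becomes relatively more important for larger samples and for metal-rich clusters.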
An important effect of the inclusion of the metallicity dependence of $`\mathrm{\Delta }V_{BC}`$ is the improved agreement between the $`R`$ values of several of the most metal-rich clusters and the metal-poor clusters. In a similar fashion, the increased $`\mathrm{\Delta }V_{BC}`$ values will also significantly reduce the estimates of the helium abundances of Galactic bulge fields ($`Y=0.28\pm 0.02`$ according to Minniti 1995) because of the high mean metallicity of those stars.

### 2.2 The MS–HB Magnitude Difference $`\mathrm{\Delta }`$

The indicator $`\mathrm{\Delta }`$ (CCC) is defined simply as the magnitude difference between the MS at $`(B-V)_0=0.7`$ and the HB at the instability strip. CCC originally defined the HB point to be at the blue edge of the instability strip, but we have chosen to revise this definition to make it more easily calculable theoretically and observationally. This also reduces the color difference between the HB and MS points, reducing possible systematic effects from photometric calibration. The various sensitivities of the indicator are not significantly changed by the revision to the definition. Increases in envelope helium abundance influence this indicator by increasing the luminosity of the HB (via the strength of hydrogen shell-burning during that phase) and by increasing the effective temperature and the luminosity of the MS through a decrease in the envelope opacity. The net effect of the luminosity and temperature changes on the MS is to make it fainter at a given color. As an indicator, $`\mathrm{\Delta }`$ has the advantage of being sensitive to the helium abundance ($`\partial \mathrm{\Delta }/\partial Y=5.8`$ mag; CCC), and the disadvantages of having a definite metallicity dependence ($`\partial \mathrm{\Delta }/\partial \text{[Fe/H]}\approx -0.5`$ mag/dex) and of requiring photometry from the HB to well below the MS turnoff. There is an additional disadvantage in requiring knowledge of the cluster’s reddening to determine $`V_{0.7}`$. However, $`\mathrm{\Delta }`$ has no dependence on age, since the chosen MS point reaches unevolved stars. We can calculate theoretical values of $`\mathrm{\Delta }`$ using a self-consistent set of isochrones (BV92) and HB models (Dorman 1992) having $`Y\approx 0.236`$. We have fitted the following polynomial to the data after making small corrections for different initial helium content in the theoretical models: $$\mathrm{\Delta }=4.268-2.1295\text{[M/H]}-0.7938\text{[M/H]}^2-0.1173\text{[M/H]}^3.$$ As can be seen from Table 3, there are only 20 clusters with enough information to compute a value of $`\mathrm{\Delta }`$. We included clusters for which $`V_{HB}`$ and $`V_{0.7}`$ were derived from different studies, in spite of the possibility of zero point differences. The error budget for $`\mathrm{\Delta }`$ was computed using: $$\sigma ^2(\mathrm{\Delta })=\sigma ^2(V_{HB})+\sigma ^2(V_{0.7})+\left(\frac{dV_{MS}}{d(B-V)}\right)^2\sigma ^2(E(B-V))+\sigma ^2(\mathrm{zp}),$$ where the last two terms account for reddening and zero point uncertainties respectively. As was discussed in § 2.1, we do not need to worry about how well calibrated the data are in an absolute sense, as long as there are not significant systematic errors in the relative photometry for the MS and HB (for instance, a nonlinearity in one of the CCDs). In a global sense, the degree of agreement between the observed values and theoretical expectations is primarily a function of the metallicity scale chosen.
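The theoretical calibration and error budget for $`\mathrm{\Delta }`$ can be written compactly in code. The sketch below is ours; the MS slope of 5 mag per mag of color near $`(B-V)_0=0.7`$ is an assumed round number used only as an example input, not a value from the text.

```python
import math

def delta_theory(m_h):
    """Theoretical Delta (mag) from the cubic fit above."""
    return 4.268 - 2.1295 * m_h - 0.7938 * m_h**2 - 0.1173 * m_h**3

def sigma_delta(sig_vhb, sig_v07, dvms_dbv, sig_ebv, sig_zp=0.0):
    """Quadrature error budget for Delta, following the expression above."""
    return math.sqrt(sig_vhb**2 + sig_v07**2
                     + (dvms_dbv * sig_ebv)**2 + sig_zp**2)

print(delta_theory(-1.3))                    # ~5.95 mag at [M/H] = -1.3
print(sigma_delta(0.05, 0.03, 5.0, 0.01))    # ~0.08 mag for typical errors
```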
In Fig. 3, we show the $`\mathrm{\Delta }`$ values using metallicities from three different studies: ZW, Carretta & Gratton (1996; hereafter CG), and Rutledge, Hesser, & Stetson (1997; hereafter RHS). The ZW comparison agrees well with the theory, although the scatter in values at a given metallicity is large. The high-resolution spectroscopy study of CG reduces the scatter considerably, although there seems to be a constant offset between the observed and theoretical values. RHS calibrated their Ca II triplet measurements to both the ZW and CG scales. In Fig. 3, we plot the values using RHS’s CG scale. Again, the scatter is fairly low, but the shape of the curve traced by the observed values is very different from theoretical expectations. The situation is almost identical for their ZW-calibrated values. How are we to judge the merits of the different comparisons? First we must keep in mind that the absolute metallicity scale is uncertain at the 0.2 dex level (RHS), as has been assumed throughout the paper. The relative rankings are to be trusted more ($`\sigma \approx 0.1`$ dex), making the shape of the $`\mathrm{\Delta }`$ comparison most reliable. So, horizontal shifts of the curve by 0.2 dex should not be considered unreasonable. This means that the comparisons using both the ZW and CG metallicity data are consistent with the theoretical values. The RHS scales result in large discrepancies, particularly at the metal-rich end. The fact that the shape is different in both their ZW and CG calibrations implies that it probably results from details or assumptions of their technique. RHS discuss the issue of their Ca measurements as indicators of \[Fe/H\] at length, and we refer readers to their § 6. Because the absolute metallicity scale remains uncertain at about the 0.2 dex level, it is currently impossible to determine low-error absolute helium abundances using $`\mathrm{\Delta }`$. At the metal-rich end in particular, the absolute metallicity uncertainty causes large uncertainties in absolute helium abundance. The relative rankings are to be trusted more if the relative metal abundances for the globular clusters are good. This is not as much the case for the ZW scale (since it was a weighted average of determinations by different methods) as it is for the CG and RHS studies, since they both applied a single method in a uniform way. We have chosen to use the CG scale because of the general agreement of the observational and theoretical curve shapes. The helium abundance for each of the clusters was computed relative to the theoretical values given the derivative $`\partial \mathrm{\Delta }/\partial Y`$ (CCC). In Figs. 10 and 4, we plot the calculated $`\delta Y`$ values as a function of metallicity and HB morphology, respectively. It is unsettling that the clusters with $`\left|R_{HB}\right|<0.8`$ (and well-defined blue and red edges to the instability strip) tend to have higher $`\delta Y`$ values. NGC 288 and M30 stand out slightly as having high values among the clusters with the bluest morphologies, and NGC 362, M71, and 47 Tuc stand out as high for clusters with red morphologies. This may indicate that the level of the horizontal branch is not being measured properly. However, the magnitude error would have to be between about 0.1 and 0.25 mag to bring them back into agreement, which seems overly large. If the reddest HB clusters can be explained via errors in their metallicities, there might be slight evidence for increased helium abundance with bluer HB type. We will discuss discrepant individual clusters in § 3.
### 2.3 The RR Lyrae Mass-Luminosity Exponent $`A`$

The indicator $`A`$ (CCC) is related to the mass-luminosity relationship for stars inside the instability strip: $$A=\mathrm{log}(L/L_{\odot })-0.81\mathrm{log}(M/M_{\odot }).$$ $`A`$ is dependent on helium because increased helium can increase both the luminosity of an RR Lyrae and the mean mass of stars occupying the instability strip. While $`A`$ has a relatively small sensitivity to helium abundance ($`\partial A/\partial Y=1.4`$; Sandage 1990b, Bencivenni et al. 1991), statistical errors are generally small for clusters with a fair number of RR Lyraes ($`\sigma _A\approx 0.01`$). Potentially $`A`$ could provide helium abundances with the best precision ($`\sigma _Y\approx 0.007`$) of the three indicators we have considered. $`A`$ can be computed using the period relation of van Albada & Baker (1971): $$\mathrm{log}P=11.497+0.84A-3.481\mathrm{log}T_{eff}.$$ One of the difficulties in using this relation is the computation of realistic effective temperatures for the cluster variables from readily observable quantities. As will be shown below, the uncertainty in the absolute temperature values results primarily in systematic shifts in $`A`$ values. As a result, $`A`$ can only be realistically considered a relative indicator of helium abundance at this time. One source of uncertainty in the temperature scale relates to the calibration samples of field RR Lyraes. Aside from uncertainty in the model atmospheres used to calibrate the colors, the optimal choice of color continues to be debated. $`(V-K)`$ has been recommended by many authors (Liu & Janes 1990; Jones et al. 1992; Carney, Storm, & Jones 1992a; Fernley 1993) over $`(B-V)`$ due to evidence of shock-wave effects on $`B`$ magnitudes near maximum light. However, McNamara (1997) showed that the temperature calibration from $`(V-K)`$ disagreed systematically as a function of period with those from $`(B-V)`$, $`(b-y)`$, $`(V-R)`$, and $`(V-I)`$ when 1994 Kurucz model atmospheres were used. In recent years, studies of globular cluster RR Lyraes have turned to the use of quantities like the period $`\mathrm{log}P`$ and blue amplitude $`A_B`$ to derive temperatures so as to avoid systematic errors resulting from reddenings. However, there is still uncertainty in the temperature zero-point from the model atmospheres used to calibrate the temperature of field RR Lyraes, and from the differences in temperatures derived from colors using different filter combinations. We have chosen to recalibrate the temperature relation of Catelan (1998) for RR Lyrae stars of type ab (with $`A_B`$ and \[Fe/H\] the only variables) to temperatures derived using $`(B-V)`$ colors. Catelan, Sweigart, & Borissova (1998a) point out that the use of a $`\mathrm{log}P`$ term in the determination of effective temperatures (as in equation 16 of Carney et al. (1992a)) tends to cause luminosity differences among RR Lyraes to be translated into temperature differences, erroneously reducing the scatter in the $`P`$–$`T_{eq}`$ plane. Our decision to use $`(B-V)`$ color temperatures for the calibrating sample of field RR Lyraes is based on the findings of McNamara (1997) and Kovács & Jurcsik (1997). As mentioned above, McNamara found that temperatures derived from $`(V-K)`$ deviated systematically from several other commonly-used colors as a function of period. It has been known for a long time that $`M_K`$ correlates strongly with $`\mathrm{log}P`$ (e.g. Longmore et al. 1990).
Because the $`K`$ band is on the Rayleigh-Jeans portion of the blackbody curve for RR Lyraes, it is not very sensitive to temperature ($`B`$ and $`V`$ fall near the maximum). Kovács & Jurcsik’s examination of $`M_V`$, $`M_{I_c}`$, and $`M_K`$ as a function of Fourier light curve parameters for globular cluster variables indicates that the period dependence is important for each of the filters, but is larger for redder filters. As a result, filter combinations with longer wavelength baselines have larger period dependences. In particular, $`(V-K)`$ is predicted to have a period dependence that can explain the trend McNamara sees, while $`(V-I_c)`$ has a dependence less than half as large, and the dependence for $`(B-V)`$ is more than ten times smaller. Because period depends significantly on the stellar luminosity in addition to temperature (see van Albada & Baker’s pulsation equation), a large period dependence in the color is likely to be a systematic problem, in agreement with the assertion of Catelan et al. (1998a). So, using temperatures from McNamara (1997), pulsational amplitudes from Blanco (1992), and metallicities from Layden et al. (1996) for field RR Lyraes, we found the relation: $$\mathrm{\Theta }_{eq}=(0.776\pm 0.012)-(0.030\pm 0.008)A_B-(0.008\pm 0.003)\text{[Fe/H]}.$$ The fit has a multiple correlation coefficient $`r=0.925`$, and an rms deviation of 40 K from the fit. In addition, the residuals show no correlation with $`\mathrm{log}P`$. To check the effect that different compositions for field and cluster RR Lyraes could have, we redid the fit using only variables with \[Fe/H\] $`<-1.0`$. The fit (using 15 stars) was $$\mathrm{\Theta }_{eq}=(0.786\pm 0.020)-(0.039\pm 0.011)A_B-(0.008\pm 0.009)\text{[Fe/H]}.$$ We have chosen to use this second calculation for the calculations in this paper, though it makes only small changes to the $`A`$ values. In computing $`A`$ values for cluster variables, we have chosen not to include variables which are known to exhibit the Blazhko effect, as this causes changes in pulsation amplitude (and in our computed temperatures) from cycle to cycle. Those variables were included in calculations of the average period though, since the period of pulsation is not affected. Error analysis was carried out on $`A`$ values for each variable. However, we find that the error in individual $`A`$ values is not very significant compared with the scatter in $`A`$ values for variables within a cluster. From an examination of histograms of $`A`$ values for the most populated clusters, the distributions appear to be Gaussian. We have found 50 Milky Way globular clusters having good data on at least 2 RRab variables, for a total of 974 stars. We have also analyzed 8 old Magellanic Cloud clusters (108 variables), and 4 Local Group dwarf spheroidal galaxies (214 variables), as shown in Table 4. Only 25 of the Milky Way clusters (and 5 in the Magellanic Clouds) have 10 or more RRab stars with good data. The table includes only those clusters for which there is at least one RRab star with period and amplitude measurements. For the clusters examined, we give the mean period of the RRab stars in column (3), along with the error in the mean. The average mass-luminosity exponent $`A`$ is computed from the average of the values for the stars that have $`A_B`$ values. When this is different from the number that have periods, the number is given in parentheses in column (2).
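Chaining the relations above gives a complete recipe for $`A`$: the amplitude-metallicity fit yields $`\mathrm{\Theta }_{eq}`$, which gives $`T_{eff}`$, which is substituted into the van Albada & Baker relation. The sketch below is ours; in particular, we assume $`\mathrm{\Theta }_{eq}=5040/T_{eff}`$ (the conventional definition of $`\theta `$), and the example star's parameters are invented.

```python
import math

def theta_eq(a_b, fe_h):
    """Metal-poor calibration adopted above (the 15-star fit)."""
    return 0.786 - 0.039 * a_b - 0.008 * fe_h

def mass_luminosity_exponent(period_days, t_eff):
    """Invert van Albada & Baker (1971): log P = 11.497 + 0.84 A - 3.481 log Teff."""
    return (math.log10(period_days) - 11.497
            + 3.481 * math.log10(t_eff)) / 0.84

# Example RRab star: P = 0.55 d, A_B = 1.0, [Fe/H] = -1.5 (invented values).
t_eff = 5040.0 / theta_eq(1.0, -1.5)   # assumes Theta_eq = 5040 / T_eff
print(t_eff)                                   # ~6640 K
print(mass_luminosity_exponent(0.55, t_eff))   # A ~ 1.84, near the Oo I mean
```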
Fig. 5 plots our data against those calculated according to the method of Caputo & De Santis (1992), which employed a different (but also reddening-independent) method for computing $`A`$. The primary difference between the studies is in zero-points, which is probably due to the differences in the temperature calibrations. There may also be a slope difference at high values of $`A`$. Caputo & De Santis found that their $`A`$ values showed sensitivity to the HB type — clusters with blue HBs ($`(B-R)/(B+V+R)>0.7`$) tended to have significantly higher values for $`A`$. Synthetic HB models predict that only very evolved HB stars are found in the instability strip for clusters with very blue HB populations. In Fig. 6, we plot our $`A`$ values versus horizontal branch type $`R_{HB}`$. As Caputo & De Santis found, there is little scatter among Oosterhoff group I clusters with a few exceptions (most notably NGC 6229 on the low end). None of the Oosterhoff group II clusters have $`A`$ values consistent with the overwhelming majority of Oo I clusters. (Although Ruprecht 106 has a $`P_{ab}`$ closer to those of the Oosterhoff II group, its lack of RRc variables and its low $`A`$ value indicate that it could be an Oo I cluster.) M14 and M28, the two Oo I clusters with the bluest HB morphologies, have $`A`$ values that place them at the low end of the range populated by Oosterhoff II clusters. There are a number of systems that have $`A`$ values that are consistent with being in the Oosterhoff II group and HB types that are redder than the majority of Oo II clusters. This is partly an incarnation of the second parameter effect in HB morphology. However, it seems to rule out the possibility that Oo II RR Lyraes are simply the result of the instability strip being populated solely by stars that are evolving toward the AGB. The instability strip is too well populated in systems like the Ursa Minor dSph and Galactic globular clusters like M68 and NGC 5466, all of which have populations of red HB stars in addition to the variables. For this reason, we have re-examined the dependence of $`A`$ on \[Fe/H\] in Fig. 7. Many previous studies (e.g. Sandage 1990b, Caputo & De Santis 1992) have noted the apparent linear correlation within the total globular cluster sample. If this is truly the case, then the two Oosterhoff groups should also show the same linear relation separately. We derive the following relations: $$A=(-0.032\pm 0.008)\text{[Fe/H]}+(1.821\pm 0.012)$$ for the 21 Oo I clusters having 7 or more RRab stars (excluding NGC 6229 as an extreme outlier) having $`-1.69\le \text{[Fe/H]}\le -0.99`$, and $$A=(-0.001\pm 0.018)\text{[Fe/H]}+(1.929\pm 0.036)$$ for 12 Oo II clusters with 7 or more RRab stars (excluding $`\omega `$ Cen due to its metallicity spread) having $`-2.22<\text{[Fe/H]}<-1.58`$. If Ruprecht 106 is removed from the Oo I sample (it appears to be a young cluster, and it has an unusually low dispersion in the $`A`$ values of its variables), we find $$A=(-0.003\pm 0.010)\text{[Fe/H]}+(1.868\pm 0.015).$$ The two groups individually have slopes that are significantly shallower than that derived from the union of the two samples ($`-0.088\pm 0.006`$), and both are consistent with zero. Both slopes are also significantly smaller than predictions from synthetic HB computations ($`-0.027`$; Lee, Demarque, & Zinn 1990). For the Oo I clusters the small slope is not surprising since the $`A`$ values are not predicted to have a dependence on HB type for $`R_{HB}<0.7`$.
The Oo I clusters M14 and M28 are the only Galactic globular clusters (with 40 and 9 RRab variables respectively) that can be said to fall in the gap between the two groups. Both have very blue HB morphologies, indicating that the relatively high temperatures of the variables are affecting the $`A`$ values (Caputo et al. 1993). More unexpected is the essentially constant value found for the Oo II clusters, which are expected to be much more sensitive to HB type (Caputo & De Santis 1992). An examination of the average periods in Fig. 7 also indicates that while there does seem to be a linear relation in the total sample, if the Oosterhoff groups are considered separately the slope is significantly smaller. We have included only those clusters having 7 or more RRab stars. We find $$\mathrm{log}P=(-0.019\pm 0.010)\text{[Fe/H]}-(0.281\pm 0.015)$$ for 20 Oosterhoff I clusters, and $$\mathrm{log}P=(-0.028\pm 0.015)\text{[Fe/H]}-(0.251\pm 0.031)$$ for 13 Oosterhoff II clusters. The slopes are consistent with each other, and somewhat shallower than predicted by evolutionary models ($`-0.05`$; Lee, Demarque, & Zinn 1990). For the two groups, we find average periods of 0.556 and 0.643 days, respectively. The offset in periods between the two Oosterhoff groups almost exactly corresponds to the difference between the average $`A`$ values, and so is probably not due to differences in mean temperature of the variables. From van Albada & Baker’s relation, this indicates that there is likely to be a difference in the mean mass and/or luminosity of the RR Lyraes in the two groups. While the Oo II clusters with the bluest HBs show large scatter in $`A`$ as expected for small samples of RR Lyraes, the clusters with redder HBs (and generally, with larger numbers of HB stars) – most notably M68, but also M15, M53, NGC 5053, NGC 5466, and possibly NGC 2419 – have the same value to within the errors. What also makes these clusters unusual is that if they indeed had high helium abundances, as their higher $`A`$ values might indicate (whether primordial or due to the action of a deep mixing mechanism), all of their HB star distributions would be expected to be bluer. For these clusters we are left with the possibilities that either high helium is acting against an even stronger third parameter, or that helium does not vary and there is a different second parameter. In short, it would be wise to discard most of the Oo II clusters, since they are subject to small-number statistics and so are overly sensitive to the HB type, making them useless as helium abundance tracers. However, the variable-rich Oo II clusters stand out in the constancy of their $`A`$ values as a group. Based on these data alone, a high helium abundance could be a possible explanation. However, the data from other indicators show no evidence to support this. It is clear that some other factor is necessary to completely explain the large numbers (a total of 104 known for the 6 clusters) of RR Lyraes in these clusters, but the question is beyond the scope of this paper. One result of this discussion is that there appears to be no relationship between the Oosterhoff dichotomy and the second parameter problem, given the lack of variation in $`A`$ with HB type. In the range $`-1.8<\text{[Fe/H]}<-1.55`$, there are clusters of both Oosterhoff types with at least moderate numbers of variables. Three of the Oo I clusters (M3, NGC 3201, and NGC 7006) are among the most populated with RR Lyraes, and have moderately blue HB types.
The Oo II clusters (M2, M9, M22, and NGC 5286) are among the bluest HB types in the group. The four Magellanic Cloud clusters that have $`A`$ between the two Oosterhoff groups also have \[Fe/H\] $`\approx -1.8`$ and have HB types falling between the majority of the clusters in the two groups. The difference of 0.06 in $`A`$ between Oosterhoff I and II clusters would require a difference of 0.05 in initial helium abundance. A sudden change of that magnitude is unlikely to have occurred in Galactic chemical evolution. Enrichment of the envelopes of the RR Lyraes by deep mixing on the upper RGB could potentially be invoked to explain the $`A`$ measurements. In order to explain observed variations in the abundances of species of aluminum and magnesium in several clusters, surface material must have mixed through regions where hydrogen has at least partly burned to helium in the shell source. Sweigart (1997) finds that an increase in envelope helium abundance by 0.04 could explain the period shift difference between M3 and M15. (Period shifts go like 0.84 times the difference in $`A`$.) If so, the Oosterhoff dichotomy would imply that the driving mechanism only existed in one group or the other (in the case of stellar rotation driving circulation, Oo II clusters would be required to have higher average rotation). However, significant variations in oxygen abundance are seen in giants in clusters of both Oo I (M3, Kraft et al. 1992; M5, Sneden et al. 1992) and Oo II (M15, Sneden et al. 1997; M92, Sneden et al. 1991) groups, making this explanation unlikely. Among the Oo I clusters, the $`A`$ indicator shows no evidence of significant helium abundance variations. We have also examined old Magellanic Cloud globular clusters and Local Group dwarf spheroidals (dSph) having RR Lyrae observations in the literature. Several of the dSph galaxies have populations that are old enough to make a comparison with globular clusters useful, though composition variations have been seen in Draco (Lehnert et al. 1992; Shetrone, Bolte, & Stetson 1998) and Sextans (Suntzeff et al. 1993). In spite of the composition variations, Draco’s variables have $`A`$ values that are strongly peaked near its mean $`A`$, giving additional evidence for the lack of a significant metallicity dependence in $`A`$. Ursa Minor, the dSph with the largest mean period of those we have data for, falls unequivocally among Oo II clusters. Carina, Draco, and Sextans have mean periods that put them just on the Oo II side of the period gap between the Oosterhoff groups. However, only Carina and Ursa Minor have $`A`$ values consistent with Oo II clusters. (Our estimated $`R_{HB}`$ value for Carina is uncertain to probably $`\pm 0.2`$, but this does not impact the analysis. The value was estimated from the appearance of the old population HB in the CMD of Smecker-Hane et al. 1994.) Draco and Sextans have $`A`$ values slightly higher than the average for Oo I clusters. Draco is unusual in being very metal-poor with an unusually red morphology. To check if this affected the derived $`A`$ values, we derived the following relation from the sample of 39 globular clusters (Milky Way and LMC) having more than 7 RR Lyrae stars: $$\mathrm{log}T_{eff}=(3.8394\pm 0.0012)+(0.0100\pm 0.0008)\text{[Fe/H]}+(0.0032\pm 0.0005)R_{HB}.$$ The rms residual for the globular clusters was 0.0025. The dwarf spheroidals do not show good evidence for deviating from this rough relation, as shown in Fig. 8.
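The arithmetic behind several of the statements above is simple enough to spell out. The sketch below (ours; the Draco-like inputs are assumed round values) evaluates the mean-temperature relation just derived, the period offset implied by the difference in mean $`A`$ at fixed temperature, and the helium difference that the $`A`$ offset would require.

```python
import math

def log_teff_rr(fe_h, r_hb):
    """Mean RR Lyrae temperature relation fitted above from 39 clusters."""
    return 3.8394 + 0.0100 * fe_h + 0.0032 * r_hb

# A very metal-poor cluster with a red HB, loosely Draco-like:
print(10 ** log_teff_rr(-2.0, -0.9))    # ~6560 K: low, as the text describes

# Period offset at fixed T_eff implied by the mean A values (0.84 * dA),
# versus the observed offset between the group mean periods:
dA = 1.9271 - 1.8629
print(0.84 * dA)                                  # ~0.054 in log P
print(math.log10(0.643) - math.log10(0.556))      # ~0.063 observed

# Helium difference if the ~0.06 offset in A were helium alone (dA/dY = 1.4):
print(0.06 / 1.4)                                 # ~0.043, i.e. roughly 0.05
```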
Thus, the metallicity and HB morphology seem to be able to account for the low temperatures of the RR Lyraes, with no obvious distinction between Oosterhoff groups. As was found for the reddest Galactic Oo II clusters, helium cannot be the main factor determining the HB morphology, since increased envelope helium would drive the morphology much bluer. The slope derived from the 5 RR Lyrae-rich Magellanic Cloud clusters is $$\partial A/\partial \text{[Fe/H]}=-0.141\pm 0.019.$$ This is significantly steeper than the slope of the (Oo I + Oo II) Galactic samples, but these clusters might also be more profitably put into separate Oosterhoff classes. According to the mean periods, the SMC cluster NGC 121 and the LMC cluster Reticulum are Oo I clusters, and NGC 1786 and 1841 are Oo II. (We have estimated $`R_{HB}=0.9\pm 0.1`$ for NGC 1786 from the statistically subtracted CMD of Brocato et al. 1996c.) The remaining four LMC clusters (NGC 1466, 1835, 2210, and 2257) fall between the two groups in Fig. 7, while having mean periods at the high end of the Oo I group. It may be suggestive that all four of these clusters have \[Fe/H\] $`\approx -1.8`$, which appears to be at the metal-poor end of the metallicity distribution for Oo I clusters in the Milky Way. A high helium abundance could be the cause only if helium enrichment occurs by a process that did not enrich the heavy elements in these clusters.

## 3 Comparisons

As stated previously, we have examined these three helium indicators in an attempt to see if we could determine with greater confidence whether any of the Galactic globular clusters have abnormal helium abundances. It is also clear that we are unlikely to derive useful absolute helium abundances. We will now see if we can find anomalous relative abundances under the assumption that abnormal indicator values reflect abnormal helium abundances (which may not be the case, given the factors that can affect each indicator). For $`R`$, helium abundances can be computed from equation 11 of Buzzoni et al. (1983). For $`\mathrm{\Delta }`$, we compute a relative helium abundance value $`\delta Y(\mathrm{\Delta })`$ using the theoretical values in § 2.2 and the derivative $`\partial \mathrm{\Delta }/\partial Y=5.8`$ mag (CCC). Here, the uncertainties in the absolute metallicity scale prevent the computation of good absolute helium abundances. We have chosen to use CG abundances from high-dispersion spectroscopy, supplementing with abundances from RHS for NGC 1261, 1851, and 6218 (M12). For $`A`$, we compute relative helium abundances $`\delta Y(A)`$ by computing the difference from the average $`A`$ value for the corresponding Oosterhoff group, and then applying the derivative $`\partial A/\partial Y=1.4`$ (Sandage 1990b; Bencivenni et al. 1991). Because the $`A`$ values do not show significant dependences on metallicity, we believe this is the best choice. We have chosen not to compute $`\delta Y`$ for clusters having fewer than 7 RRab stars with $`B`$ amplitudes. For the Oo I clusters, the average value is $`1.8629\pm 0.0017`$, and $`1.9271\pm 0.0028`$ for Oo II clusters. The helium values are given in Table 6, and the comparison plots are shown in Fig. 9. There is no obvious correlation in the comparison of helium abundances from the $`R`$ and $`\mathrm{\Delta }`$ or $`A`$ indicators. Comparison between abundances from $`\mathrm{\Delta }`$ and $`A`$ is hampered by the small overlap between the two samples. We can alternatively look for clusters whose values may indicate anomalous helium abundances.
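The $`\delta Y(A)`$ prescription just described amounts to a one-line calculation once a cluster is assigned to an Oosterhoff group. A minimal sketch (ours; the example cluster value is invented):

```python
# Group averages quoted above, and dA/dY = 1.4 from Sandage (1990b) and
# Bencivenni et al. (1991).
A_MEAN = {"OoI": 1.8629, "OoII": 1.9271}

def delta_y_from_A(a_value, group):
    """Relative helium abundance from the A indicator."""
    return (a_value - A_MEAN[group]) / 1.4

print(delta_y_from_A(1.90, "OoI"))   # ~ +0.027 for a hypothetical Oo I cluster
```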
In the following paragraphs, we briefly discuss clusters with unusual values for some or all of the helium indicators.

NGC 1851: The value derived from $`\mathrm{\Delta }`$ is below average, and the value from $`R`$ is about average. From previous studies of its RR Lyraes (Wehlau et al. 1978, Wehlau et al. 1982), NGC 1851 appeared to have an $`A`$ value closer to those of Oosterhoff II clusters. In fact, both the mean period of the variables and the ratio of RR Lyrae numbers $`N_c/N_{ab}`$ are high for an Oosterhoff I cluster, and the bimodality of the HB (with a sparsely populated instability strip) has been difficult to model using canonical HB models (see Catelan et al. 1998b). However, Walker’s (1998) re-examination of the RR Lyraes using CCD photometry indicates that the $`B`$-amplitudes in the photographic studies were systematically too large. The new data indicate that NGC 1851’s $`A`$ value is completely consistent with the Oo I average.

NGC 5927: see NGC 6624.

M4 (NGC 6121): All three indicator values for M4 are below the averages, although the $`A`$ value is only slightly so. The value derived from $`\mathrm{\Delta }`$ can be questioned due to indications that there is differential reddening across the cluster (Cudworth & Rees 1990), and that $`R_V=A_V/\text{E}(B-V)`$ differs from the most frequently used value (e.g., Dixon & Longmore 1993). Because the magnitudes of the HB and MS points were taken from studies of different portions of the cluster, we might expect a systematic error of over 0.01 in $`Y`$. The reddening quoted in Table 3 is a weighted mean of previous studies (Dixon & Longmore 1993), although the error of $`\pm 0.01`$ is probably still an underestimate. The values from $`R`$ and $`A`$ should be more reliable.

NGC 6171 (M107): M107 has very low $`Y(R)`$ and $`\delta Y(\mathrm{\Delta })`$ values, and an average $`\delta Y(A)`$ value. To completely explain the low $`\mathrm{\Delta }`$ value by a reddening error, E$`(B-V)`$ would have to be underestimated by 0.06, which is possible given that E$`(B-V)\approx 0.3`$. Similarly, to explain the low $`R`$ value, one would have to invoke an excessively large metallicity error.

NGC 6229: Like NGC 1851, NGC 6229 also has a bimodal HB with a sparsely populated instability strip. The $`A`$ value derived for this cluster is considerably lower than the average. As was previously the case with NGC 1851 (see above), the RR Lyrae data are based on fairly old photometry, which may be the source of systematic error in the pulsational amplitudes. The $`R`$ value also indicates a low helium abundance for a fair-sized sample (about 100 each of HB and RGB stars). NGC 6229 has a rather extended blue HB morphology for an outer halo cluster, contrary to what would be expected from the $`A`$ and $`R`$ values (Borissova et al. 1997).

M62 (NGC 6266): Like other post-core-collapse clusters (see NGC 6752 and M30), M62 has a rather large $`R`$ value, although its $`A`$ value is fairly close to the average. M62 has heavy differential reddening, which may affect the determination of the HB magnitude.

M92 (NGC 6341): M92’s $`R`$ value seems to be unusually large, although examination of more recent wide-field CCD photometry indicates that our value is probably too high (Bolte & Roman 1999; in preparation). The $`\mathrm{\Delta }`$ value falls a little lower than the average.

NGC 6624: This cluster has the lowest $`R`$ value of any cluster examined here.
Because $`\mathrm{\Delta }V_{BC}`$ is sensitive to \[M/H\] at the metal-rich end, and because NGC 6624 is one of the most metal-rich clusters in our sample, the metal abundance is a natural suspect. NGC 6624 appears to be approximately 0.2–0.4 dex more metal-rich than 47 Tuc according to Ca II triplet measurements (RHS; Idiart, Thevenin, & de Freitas Pacheco 1997; Armandroff & Zinn 1988), but less than 0.1 dex more metal-rich according to spectrophotometry (Gregg 1994). If we optimistically take NGC 6624’s metallicity to equal that of 47 Tuc, the $`R`$ value increases to $`0.83\pm 0.10`$, which does not alleviate the problem. It was suggested by Richtler et al. (1994) that the total metal abundance \[M/H\] for the cluster is considerably lower than its iron abundance (\[Fe/H\] $`=-0.37`$; ZW) would indicate, based on comparisons between photometry and BV92 models. This could be the case if Type I supernovae became the primary source of heavy element enrichment. If the iron abundance did not actually trace the total metallicity for this cluster (and it is in fact more metal-poor), this would mean that the differential bolometric correction we used was too large, which would help explain the low $`R`$ value. The cluster NGC 5927 also shows an extremely low $`R`$ value consistent with this idea (Ca II triplet measurements indicate that NGC 5927 is about 0.4 dex more metal-rich than 47 Tuc). However, M69 and NGC 6496, which are only 0.1–0.2 dex more metal-poor according to \[Fe/H\] measures, have $`R`$ values which are closer to the average for our sample.

NGC 6752: This cluster presents a rather large value for $`R`$ from a large sample of stars. The $`\mathrm{\Delta }`$ value is more consistent with the average of the globular cluster system, although the HB magnitude is notoriously difficult to determine due to a complete lack of stars on the instability strip and red HB.

M30 (NGC 7099): In its high $`R`$ and $`\mathrm{\Delta }`$ values, M30 presents evidence for a helium abundance enhancement. The cluster’s reddening is under some dispute though. Reducing the reddening to E$`(B-V)=0.02`$ would lower the enhancement computed from $`\mathrm{\Delta }`$ to 0.033, which is consistent with the bulk of the other clusters. The $`R`$ measurement is more certain to be unusually high given the relatively large bright star samples.

Ruprecht 106: One concern in photometric analyses of Ruprecht 106 has been the uncertainty in its metallicity. Francois et al. (1997) have found \[Fe/H\] $`=-1.6\pm 0.25`$ from high-resolution spectroscopy. The metallicity uncertainty plays a small role here though, because of the small dependences of $`\mathrm{\Delta }V_{BC}`$ (at this low metallicity) and $`A`$ on \[Fe/H\]. The sample of stars for $`R`$ is relatively small, so $`Y(R)`$ is consistent with the values for other clusters to within the errors. Its RR Lyraes have low $`A`$ values for the metallicity of the cluster though, which may be a result of the apparent youth of the cluster (Buonanno et al. 1993), and relatively high masses of the variables.

## 4 Discussion

In Fig. 10, we plot the helium abundances derived from the three indicators as a function of metallicity. With the current dataset, we find that the three helium indicators $`R`$, $`\mathrm{\Delta }`$, and $`A`$ now yield trends of helium abundance as a function of metallicity that are consistent with zero to within the errors over the range of \[Fe/H\] sampled.
There does not seem to be an obvious trend in the $`Y(R)`$ values as a function of metallicity, although this could be masked by the considerable scatter in the points. The measurement error is primarily the result of Poisson errors. The situation will probably improve with the careful examination of the HB and RGB populations of clusters with the largest evolved star populations. A total RGB and HB star sample in excess of 1000 stars is necessary to reduce the error bars for individual clusters to $`\sigma (Y)\approx 0.01`$. By separating the clusters into three subsamples according to the total numbers of HB and RGB stars they have, we derive the mean values listed in Table 8. The standard deviations in each case are larger than the average measurement error for the clusters in each subsample, indicating that the scatter in the measurements may be real. To test this idea, we did Monte Carlo simulations using the measurement errors to set $`Y(R)`$ for each cluster (with the average value for each cluster always centered on the mean of the sample), and determined the probability of getting a standard deviation as large as what is observed. For the 17 clusters with $`N_{RGB}+N_{HB}>250`$, we find a standard deviation of 0.033 in $`Y`$, and an expected standard deviation of 0.018, which results in a probability less than $`10^{-4}`$ that the measurement errors can explain the standard deviation. For the sample with $`N_{RGB}+N_{HB}<200`$ (having larger Poisson errors), the corresponding probability is 0.06. The derived mean abundance is considerably lower than the expected primordial value $`Y_P=0.23`$. Because we believe we have removed the most important (known) systematic errors in the $`R`$ measurements, we should ask what might cause the $`Y(R)`$ values to be low. Brocato, Castellani, & Villante (1998) suggest that the uncertainties in the rate of the $`{}^{12}\mathrm{C}(\alpha ,\gamma ){}^{16}\mathrm{O}`$ reaction lead to an uncertainty in the derived helium abundance of about 0.02. Another possibility is that there is a systematic effect throwing the $`\mathrm{\Delta }V_{BC}`$ values off. Setting aside systematic errors in $`\mathrm{\Delta }V_{BC}`$, we can ask what physical processes could slow RGB evolution. Langer, Bolte, & Sandquist (1999) have suggested that deep mixing processes on the upper RGB may result in lengthened or reduced evolutionary times for those stars, which could modulate the resulting $`R`$ values. This kind of scenario would mean that the helium abundances measured by this method (as well as those using HB stars in some way) would be affected, since some of the helium produced by the hydrogen-burning shell would be mixed into the envelope of the stars that undergo the process.
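A minimal sketch of the kind of Monte Carlo significance test described above (the exact procedure used for Table 8 may differ; the per-cluster errors below are invented stand-ins matching the quoted expected scatter):

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_scatter_probability(sigmas, observed_std, n_trials=100_000):
    """Probability that measurement errors alone produce a standard
    deviation in Y(R) at least as large as the observed one."""
    sigmas = np.asarray(sigmas)
    # Fake Y(R) values centered on a common mean (taken as 0, since only
    # the scatter matters), each drawn with its cluster's quoted error.
    fake = rng.normal(0.0, sigmas, size=(n_trials, sigmas.size))
    return np.mean(fake.std(axis=1, ddof=1) >= observed_std)

# 17 clusters with errors ~0.018 against an observed scatter of 0.033:
print(excess_scatter_probability(np.full(17, 0.018), 0.033))  # well below 1e-3
```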
We can also attempt to measure the trend in $`\delta Y(\mathrm{\Delta })`$ as a function of \[Fe/H\]. The primary difficulty is the clusters at \[Fe/H\] $`\approx -1.3`$ (essentially on the ZW scale) with abnormally low values. There are good reasons for removing M4 and M12 from consideration due to differential reddening and large reddening uncertainty. Along with M4 (which has a relatively low $`\mathrm{\Delta }`$ value also), M12, M107, and NGC 1851 were the clusters for which we had to use metallicities from the RHS study since they were not observed by CG. We find that the set of clusters with $`\mathrm{\Delta }`$ measurements (excepting M4, M12, M107, and NGC 1851) shows a linear trend: $$\delta Y(\mathrm{\Delta })=(0.016\pm 0.007)\text{[Fe/H]}+(0.049\pm 0.011).$$ The slope has marginal significance, but systematic error in the zeropoint of the metallicity scale affects the slope of the best-fit line, since the $`\delta Y(\mathrm{\Delta })`$ values of high metallicity clusters are modified to a larger degree by metallicity shifts than low metallicity clusters. If we reduce the metallicity of all clusters by 0.21 dex (equivalent to ignoring $`\alpha `$ elements), we derive a best-fit slope of $`-0.010\pm 0.007`$ — a change in sign. Thus, the dependence of helium abundance on \[Fe/H\] (in addition to the absolute helium abundance) as derived from $`\mathrm{\Delta }`$ will not be certain until the absolute metallicity scale is improved. The data from the $`A`$ indicator for the two Oosterhoff groups are individually consistent with constant helium abundance as a function of \[Fe/H\]. It is unlikely that the difference in $`A`$ between the two groups is due to a difference in helium abundance, since a similar jump does not appear in the other indicators. To explain the apparent difference in $`A`$ values between Oosterhoff groups, there must be some combination of a mean luminosity and a mean mass difference between the RR Lyrae stars in the two groups. If the difference were entirely due to a difference in mean mass, there would be no observable effect in either the $`\mathrm{\Delta }`$ or $`R`$ indicators. If there was only a difference in the mean luminosity (the Oo II clusters’ RR Lyraes would have to be about 15% more luminous than Oo I variables), Oo II clusters should have $`R`$ values that are 13% smaller (since $`R`$ requires the HB magnitude to compute the faint limit of the RGB sample), and $`\mathrm{\Delta }`$ values that are higher by about 0.15 mag. From 15 Oo I clusters having more than 5 variables as well as $`R`$ values, we find $`\overline{R}=1.100\pm 0.033`$. For the 7 Oo II clusters meeting the same criteria, we find $`\overline{R}=1.139\pm 0.060`$. The values for the two groups are consistent to within the errors. For $`\mathrm{\Delta }`$, we find $`\overline{\delta Y(\mathrm{\Delta })}=0.0063\pm 0.0054`$ and $`0.0153\pm 0.005`$ for 7 Oo I and 3 Oo II clusters respectively. So again, we have no evidence for helium abundance differences between the groups, but the small number of clusters with $`\mathrm{\Delta }`$ values makes this a weak comparison. The difference in mean $`A`$ value between the groups corresponds to a difference in the mean RR Lyrae mass of approximately 20%. This is not so large as to make it unreasonable that there might be differences in the amount of mass loss at the tip of the RGB between the two groups. The observations require the Oosterhoff II clusters to have variables of lower mass. This goes in a direction opposite to what is needed to explain the HB morphologies of at least some of the clusters. Clearly this does not speak to the exact cause of the Oosterhoff dichotomy, but it can give a little guidance on the details of how the dichotomy is brought about.
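The luminosity and mass alternatives quoted above follow directly from the definition $`A=\mathrm{log}(L/L_{\odot })-0.81\mathrm{log}(M/M_{\odot })`$. A sketch of the arithmetic (ours), using the group means:

```python
dA = 1.9271 - 1.8629   # offset in mean A between the Oosterhoff groups

# If the offset is pure luminosity, dA = dlog L:
print(10 ** dA)           # ~1.16: Oo II variables ~15% more luminous

# If the offset is pure mass, dA = -0.81 dlog M, so Oo II masses are lower:
print(10 ** (dA / 0.81))  # ~1.20: Oo I masses ~20% higher than Oo II
```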
## 5 Conclusions

We begin this section by summarizing what we consider to be the most important results of the surveys tabulated here. With the corrected differential bolometric corrections (larger for more metal-rich populations), we now see no significant evidence for variation in the indicator $`R`$ as a function of metallicity or horizontal branch type. In particular, metal-rich (\[Fe/H\] $`\gtrsim -1`$) Galactic globular clusters now are more consistent with the mean, and Galactic bulge fields (Minniti 1995) are also likely to show lower helium abundances. However, only a handful of clusters have helium values $`Y(R)`$ consistent with the favored primordial value $`Y_P\approx 0.23`$. There is evidence that there is real scatter in the $`R`$ values, over and above what can be attributed to measurement errors. Clusters with anomalously high (M30, M62, and NGC 6752) and low (M68, M107, NGC 5927, NGC 6229, and NGC 6624) values appear over a range of metallicity. For $`A`$, we find that virtually all of the Oosterhoff I clusters (and the LMC cluster Reticulum) have values consistent with a constant helium abundance. The Oosterhoff II clusters with the largest number of RR Lyrae variables also appear to have a constant $`A`$ value (as do the Ursa Minor and Carina dwarf spheroidals, and the LMC globular cluster NGC 1841). The remaining LMC globular clusters, and the dwarf spheroidals Draco and Sextans, have $`A`$ values slightly above the average for the Oo I group. For the richest Oo II systems, evolution is unable to explain the numbers of RR Lyrae variables, and a systematic offset in helium abundance also seems unlikely given the evidence from the other two indicators. The lack of any correlation between HB type and $`A`$ is an indication that the second parameter problem is completely independent of the Oosterhoff dichotomy. The $`A`$ measurements give us the best constraint on the chemical evolution parameter $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$. Using the $`1\sigma `$ error bar on the slope of the $`A`$ – \[Fe/H\] relation, we get a limit $`\left|\mathrm{\Delta }Y/\mathrm{\Delta }Z\right|\lesssim 10`$, which is consistent with measurements from extragalactic HII regions (Pagel et al. 1992). For $`\mathrm{\Delta }`$, our results are consistent with constant helium abundance across the range of metallicity sampled. The absolute value of the helium abundance as well as the exact value of $`dY/d(\text{[Fe/H]})`$ depend on the metallicity scale used. The most discrepant clusters can potentially be explained by errors in the cluster reddenings. Comparing data from different indicators, we find that the mean trends are all consistent with constant helium abundance for metallicities \[Fe/H\] $`\lesssim -0.7`$. We have examined the three indicators so as to use the information to either bolster or dispute claims of unusual helium abundance from just one indicator. In examining the clusters with helium abundances from more than one indicator, we have not found convincing evidence that any have abnormal helium abundances. Systematic effects clearly appear to varying degrees in the data for all three indicators, but we have not been able to determine the cause in all cases. We must note that none of the three indicators we have tabulated has much data covering the most metal-rich clusters where evidence of helium enrichment may still reside. In general, the photometry for these clusters is most subject to field star contamination and heavy reddening, making interpretation difficult. For $`R`$, there is the additional problem that the red HB begins to overlap the RGB in the CMD, making clean star counts impossible.
The redness of the HB also tends to preclude the possibility of RR Lyrae stars, and hence the possibility of computing $`A`$. Reddening and difficulties in finding the true HB level make $`\mathrm{\Delta }`$ values unlikely without considerable work. Other means must be devised to determine good helium abundances for these clusters.

## 6 Acknowledgements

I would like to thank the many people who supplied electronic copies of their datasets: E. Brocato, F. Ferraro, P. Guhathakurta, D. Martins, S.-C. Rey, T. Richtler, N. Samus, A. Sarajedini, V. Testa, and A. Walker. I would also like to thank M. Catelan, G. E. Langer, R. Taam, and the anonymous referee for very helpful conversations, and M. Bolte and R. Taam for their support (under NSF grants AST-9420204 and AST-9415423, respectively) while this work was in progress.
# Knot Concordance and Torsion

Charles Livingston and Swatee Naik

November 30, 1999

## 1 Introduction

The classical knot concordance group, $`𝒞_1`$, was defined in 1961 by Fox \[F\]. He proved that it is nontrivial by finding elements of order two; details were presented in \[FM\]. Since then one of the most vexing questions concerning the concordance group has been whether it contains elements of finite order other than 2–torsion. Interest in this question was heightened by Levine’s proof \[L1, L2\] that in all higher odd dimensions the knot concordance group contains an infinite summand generated by elements of order 4. In our earlier work studying this problem we proved the following \[LN\]:

1.1 Theorem. Let $`K`$ be a knot in $`S^3`$ with 2–fold branched cover $`M_K`$. If the order of the first homology with integer coefficients satisfies $`|H_1(M_K)|=pm`$ with $`p`$ a prime congruent to 3 mod 4 and gcd(p, m) = 1, then $`K`$ is of infinite order in the classical knot concordance group, $`𝒞_1`$.

An immediate corollary was that all of the prime knots of fewer than 11 crossings that are of order 4 in the algebraic concordance group are of infinite order in the concordance group. There are 11 such knots \[M\]. One simple case of a much deeper corollary states that if the Alexander polynomial of a knot satisfies $`\mathrm{\Delta }_K(t)=5t^2-11t+5`$ then $`K`$ is of infinite order in $`𝒞_1`$. According to Levine \[L2\], any higher dimensional knot with this polynomial is of order 4 in concordance. Here our goal is to prove the following enhancement of the theorem stated above:

1.2 Theorem. Let $`K`$ be a knot in $`S^3`$ with 2–fold branched cover $`M_K`$. If $`H_1(M_K)=𝐙_{p^n}\oplus G`$ with $`p`$ a prime congruent to 3 mod 4, $`n`$ odd, and $`p`$ not dividing the order of $`G`$, then $`K`$ is of infinite order in $`𝒞_1`$.

As we will describe below, the significance of this result goes beyond its apparent technical merit; however, even in terms of computations it is an important improvement. Let $`H_p`$ denote the $`p`$–primary summand of $`H_1(M_K)`$.

1.3 Corollary. Let $`n`$ be a positive integer such that some prime $`p`$ congruent to 3 mod 4 has odd exponent in the prime power factorization of $`4n+1`$. Then a knot $`K`$ with Alexander polynomial $`nt^2-(2n+1)t+n`$ and $`H_p`$ cyclic is of infinite order in the concordance group.

Note that according to Levine \[L2\], any such knot represents an order 4 class in the algebraic concordance group. The $`n`$–twisted doubles of knots provide infinitely many examples of knots with Alexander polynomial $`nt^2-(2n+1)t+n`$ and $`H_1(M_K)`$ cyclic. Further details and examples will be provided in the last section. Casson and Gordon’s first examples of algebraically slice knots that are not slice \[CG1, CG2\] were taken from the set of twisted doubles of the unknot. Our analysis extends theirs to a much larger class of knots and is not restricted to doubles of the unknot. Notice here the rather remarkable fact that an abelian invariant of a knot is being used to obstruct an algebraically slice knot from being slice!
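The hypothesis of Corollary 1.3 is an elementary arithmetic condition on $`4n+1`$, so candidate twist parameters are easy to enumerate. The helper below is our own illustration, not code from the paper.

```python
def satisfies_corollary_1_3(n):
    """True if some prime p = 3 (mod 4) has odd exponent in 4n + 1,
    the hypothesis of Corollary 1.3 for twist parameter n."""
    m = 4 * n + 1
    p = 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return True
        p += 1
    # Any factor left over is a prime occurring with exponent 1.
    return m > 1 and m % 4 == 3

# n = 2 gives 9 = 3^2 (even exponent: fails); n = 3 gives 13 = 1 mod 4 (fails);
# n = 5 gives 21 = 3 * 7, with 3 and 7 both 3 mod 4 to odd exponent (passes).
for n in (2, 3, 5):
    print(n, satisfies_corollary_1_3(n))
```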
Theorem 1.2 is relevant to deeper questions concerning the concordance group. The underlying conjecture is that the only torsion in the knot concordance group is 2–torsion, arising from amphicheiral knots (see related questions in \[G, K1, K2\]); a positive solution to this conjecture seems far beyond the tools now available to study concordance, in either the smooth or topological locally flat category. However, two weaker conjectures are possible.

1.4 Conjecture. If a knot $`K`$ represents 4–torsion in the algebraic concordance group, then it is of infinite order in concordance.

A simpler conjecture is:

1.5 Conjecture. There exists a class of order 4 in the algebraic concordance group that cannot be represented by a knot of order 4. In particular, Levine’s homomorphism does not split.

Theorem 1.1 provided candidates for verifying Conjecture 1.5, but there are two difficult steps to extending that result from a representative of an algebraic concordance class to the entire class. It is a consequence of Witt theory (that we won’t be using elsewhere in this paper) that such an extension will have two parts: one must be able to handle the case where $`H_p=𝐙_{p^n}`$, with $`n>1`$, and also the case where $`H_p`$ is a direct sum of such factors. The results of this paper deal with the first part of the extension problem. A number of special cases of direct sums have been successfully addressed by us in unpublished work, but the necessary general result for sums has not yet been achieved. When it is, that result along with Theorem 1.2 should provide a proof of Conjecture 1.5 and perhaps 1.4.

The work of this paper is largely algebraic. In the next section we will summarize the topological results that we will be using. All the work that appears here applies in both the topological locally flat and the smooth category. In Section 3 we give a proof of Theorem 1.2. The proof is fairly technical and extends the techniques used in proving Theorem 1.1 in \[LN\]. Section 4 discusses examples.

## 2 Background and notation

### 2.1 Knots and the concordance group

We will work in the smooth category, but as just mentioned, all results carry over to the topological locally flat setting. Homology groups will always be with $`𝐙`$ coefficients unless otherwise mentioned. A knot is formally defined to be a smooth oriented pair, $`(S^3,K)`$, with $`K`$ diffeomorphic to $`S^1`$. We will denote such a pair simply by $`K`$. A knot $`K`$ is called slice if $`(S^3,K)=\partial (B^4,D)`$, where $`D`$ is a smooth 2–disk properly embedded in the 4–ball $`B^4`$. Knots $`K_1`$ and $`K_2`$ are called concordant if $`K_1\mathrm{\#}-K_2`$ is slice, where $`-K`$ represents the mirror image of $`K`$, formally $`(-S^3,-K)`$. The set of concordance classes of knots forms an abelian group under connected sum, denoted $`𝒞_1`$. The order of $`K`$ in the knot concordance group is hence the least positive $`n`$ for which the connected sum of $`n`$ copies of $`K`$, $`nK`$, is slice. Levine defined a homomorphism of $`𝒞_1`$ onto a group, $`𝒢`$, that is isomorphic to the infinite direct sum, $`𝒢\cong 𝐙^{\infty }\oplus 𝐙_2^{\infty }\oplus 𝐙_4^{\infty }`$. For higher dimensions the corresponding homomorphism is an isomorphism, but in dimension 3 there is a nontrivial kernel, as first proved by Casson and Gordon \[CG1, CG2\]. For details concerning $`𝒢`$, the algebraic concordance group, see \[L1, L2\].

### 2.2 Casson-Gordon invariants and linking forms

Let $`M_K`$ denote the 2–fold branched cover of $`S^3`$ branched over $`K`$, and let $`\chi `$ denote a homomorphism from $`H_1(M_K)`$ to $`𝐙_{p^k}`$ for some prime $`p`$. The Casson-Gordon invariant, $`\sigma (K,\chi )`$, is then a well-defined rational invariant of the pair $`(K,\chi )`$. (In Casson and Gordon’s original paper, \[CG1\], this invariant is denoted $`\sigma _1\tau (K,\chi )`$, and $`\sigma `$ is used for a closely related invariant.)
On any rational homology sphere, such as $`M_K`$, there is a nonsingular symmetric linking form, $`\beta :H_1(M_K)\times H_1(M_K)\to 𝐐/𝐙`$. As before, let $`H_p`$ be the $`p`$–primary summand of $`H_1(M_K)`$. The main result in \[CG1\] concerning Casson-Gordon invariants and slice knots that we will be using is the following:

2.3 Theorem. If $`K`$ is slice there is a subgroup (or metabolizer) $`L_p\subset H_p`$ with $`|L_p|^2=|H_p|`$, $`\beta (L_p,L_p)=0`$, and $`\sigma (K,\chi )=0`$ for all $`\chi `$ vanishing on $`L_p`$.

We will also need the additivity theorem proved by Gilmer \[Gi\].

2.4 Theorem. If $`\chi _1`$ and $`\chi _2`$ are defined on $`M_{K_1}`$ and $`M_{K_2}`$, respectively, then we have $`\sigma (K_1\mathrm{\#}K_2,\chi _1\oplus \chi _2)=\sigma (K_1,\chi _1)+\sigma (K_2,\chi _2)`$.

Any homomorphism $`\chi `$ from $`H_p`$ to $`𝐙_{p^r}`$ is given by linking with some element $`x\in H_p`$. In this situation we have the following (see Section 4 of \[LN\]).

2.5 Theorem. If $`\chi :H_p\to 𝐙_{p^r}`$ is a character obtained by linking with the element $`x\in H_p`$, then $`\sigma (K,\chi )\equiv \beta (x,x)`$ modulo $`𝐙`$.

A simple corollary, using the nonsingularity of $`\beta `$, is:

2.6 Corollary. If $`H_p=𝐙_{p^n}`$ and $`\chi `$ maps onto $`𝐙_{p^k}`$ with $`k>n/2`$ then $`\sigma (K,\chi )\ne 0`$.

Finally, we will use the result below, which is a consequence of the fact that the linking form $`\beta `$ gives a map from $`H_p`$ onto Hom$`(L_p,𝐐/𝐙)\cong L_p`$, with kernel equal to $`L_p`$.

2.7 Theorem. With $`H_p`$ and $`L_p`$ as in Theorem 2.3, we have $`H_p/L_p\cong L_p`$.

## 3 Proof of Theorem 1.2

Let $`K`$ be a knot in $`S^3`$ with the 2–fold branched cover $`M_K`$. Suppose that $`H_1(M_K)=𝐙_{p^n}\oplus G`$ with $`p`$ a prime congruent to 3 mod 4, $`n`$ odd, and $`p`$ not dividing the order of $`G`$. We want to show that $`K`$ is of infinite order in $`𝒞_1`$. The linking form of $`H_1(M_K)`$ represents an element of order 4 in the Witt group of $`𝐙_p`$ linking forms. (See Corollary 23 (c) in \[L2\].) If $`K`$ is of concordance order $`d`$, since Levine’s homomorphism maps the concordance class of $`K`$ to an order 4 class, we have $`d=4k`$ for some positive integer $`k`$. We must analyze the possible metabolizers $`L_p`$ for $`(𝐙_{p^n})^{4k}`$. A vector in $`L_p`$ can be written as $`x=(x_i)_{i=1,\mathrm{},d}\in (𝐙_{p^n})^d`$. Applying the Gauss-Jordan algorithm to a basis for $`L_p`$, and perhaps reordering, we can find a generating set of a particularly simple form. The next example illustrates a possible form for one such generating set, where the generators appear as the rows of the matrix.

3.1 Example. Let $`H_p=\left(𝐙_{p^3}\right)^8`$. A generating set for some metabolizer $`L_p`$ of the standard nonsingular $`𝐐/𝐙`$ linking form can be written as follows: $$\left(\begin{array}{cccccccc}1& & & & & & & \\ 0& p& 0& 0& & & & \\ 0& 0& p& 0& & & & \\ 0& 0& 0& p& & & & \\ 0& 0& 0& 0& p^2& 0& 0& \\ 0& 0& 0& 0& 0& p^2& 0& \\ 0& 0& 0& 0& 0& 0& p^2& \end{array}\right)$$ In the above matrix, there is 1 row corresponding to $`p^0`$, and 3 rows each for $`p^1`$ and $`p^2`$. (Blank entries are unspecified; those in a row with diagonal entry $`p^i`$ are divisible by $`p^i`$.) We will denote the number of rows corresponding to $`p^i`$ by $`k_i`$, the vectors in these $`k_i`$ rows by $`v_{i,1},\mathrm{},v_{i,k_i}`$, and $`\sum _{j=0}^{i}k_j`$ by $`S_i`$. For notational purposes, let $`S_{-1}=0`$. Then, in general, the generating set consists of $`\{v_{i,j}\}_{i=0,\mathrm{},n-1,j=1,\mathrm{},k_i}`$ where $`0\le k_i\le 2k`$, such that the first $`S_i`$ entries of $`v_{i,j}`$ are 0, except for the $`S_{i-1}+j`$ entry, which is $`p^i`$, and each of the remaining entries is divisible by $`p^i`$.
From 2.7 it follows that $`k_i=k_{n-i}`$ for $`i>0`$, and $`S_{(n-1)/2}=2k`$.

3.2 Definition. If $`a\in H_p`$, let $`\chi _a:H_1(M_K)\to 𝐐/𝐙`$ be the character given by linking with $`a`$. In the case that $`H_p`$ is cyclic, isomorphic to $`𝐙_{p^n}`$, we can fix a generator of $`H_p`$ and write $`\chi _a`$ where $`a`$ is an integer representing an element in $`𝐙_{p^n}`$.

With this notation, we now see that our goal is to show that $`\sigma (K,\chi _{p^{(n-1)/2}})=0`$. Since $`\chi _{p^{(n-1)/2}}`$ maps onto $`𝐙_{p^{(n+1)/2}}`$ this will contradict 2.6, and it will follow that $`K`$ cannot be of finite order. As in Example 3.1, arrange the $`\{v_{i,j}\}`$ as rows of a $`(4k-k_0)\times 4k`$ matrix following the dictionary order on $`(i,j)`$. We multiply the first $`k_0`$ vectors by $`p^{n-1}`$, the next $`k_1`$ vectors by $`p^{n-2}`$, and so on, to obtain $`p^{n-1}`$ on the diagonal. Clear the off-diagonal entries in the left $`(4k-k_0)\times (4k-k_0)`$ block. Now, adding all the rows gives us a vector in $`L_p`$ with the first $`4k-k_0`$ entries equal to $`p^{n-1}`$. This vector corresponds to a character $`\chi `$ to $`𝐙_p`$ (given by linking an element with it) on which the Casson-Gordon invariants should vanish. That is, $`\sigma ((4k)K,\chi )=0`$. By 2.4 this leads to a relation of the form $`(4k-k_0)\sigma (K,\chi _{p^{n-1}})+\sum _{x_i\ne 0}\sigma (K,\chi _{x_i})=0`$, where the $`x_i`$ are the remaining $`k_0`$ entries, each of which is divisible by $`p^{n-1}`$. The set of nonzero characters from $`𝐙_{p^n}`$ to $`𝐙_p`$ is isomorphic to the multiplicative group of units in $`𝐙_p`$, which is a cyclic group of order $`p-1`$. Denoting a generator for this group by $`g`$, each nonzero $`\chi _{x_i}`$ corresponds to $`g^{\alpha _i}`$ for some $`\alpha _i`$. The correspondence can be given by $`\chi _{x_i}\mapsto g^{x_i/p^{n-1}}`$. As in \[LN\] we use further shorthand, setting $`t^{\alpha _i}=\sigma (K,\chi _{x_i})`$. Each metabolizing vector leads to a relation $`\sum _{x_i\ne 0}t^{\alpha _i}=0`$. Note that at this point the symbol $`t^\alpha `$ does not represent a power of any element “$`t`$”; it is purely symbolic. However, it does permit us to view the relations as being elements in the ring $`𝐙[𝐙_{p-1}]`$. Furthermore, since $`\sigma (K,\chi _{x_i})=\sigma (K,\chi _{p^n-x_i})`$, we have that $`t^j=t^{j+(p-1)/2}`$. (Recall that $`g^{(p-1)/2}=-1`$.) Hence, we can view the relations as sitting in $`𝐙[𝐙_q]`$, where $`q=(p-1)/2`$. If a metabolizing vector $`x`$ corresponds to the relation $`f=0`$, where $`f`$ is represented by an element in $`𝐙[𝐙_q]`$, then $`ax`$ corresponds to the relation $`t^\alpha f`$ where $`g^\alpha =a`$. It follows that the relations between Casson-Gordon invariants generated by the element $`x\in L_p`$ together with its multiples form an ideal in $`𝐙[𝐙_q]`$ generated by the polynomial $`f`$. With this in mind our relation can be written as $`f=(4k-k_0)+\sum _{i=1}^{k^{\prime }}t^{\alpha _i}=0`$, where $`k^{\prime }\le k_0`$. (Note that $`4k-k_0=S_{n-1}`$.) We show that the ideal generated by $`f`$ in $`𝐙[𝐙_q]`$ contains a nonzero integer. This will follow from the fact that $`f`$ and $`t^q-1`$ are relatively prime, which will be the case unless $`f`$ vanishes at some $`q`$–th root of unity, say $`\omega `$; however, by considering norms and the triangle inequality we see that this will be the case only if $`k^{\prime }=2k`$ and $`\omega ^{\alpha _i}=-1`$ for all $`i`$. But since $`q`$ is odd, no power of $`\omega `$ can equal $`-1`$. It follows that $`N\sigma (K,\chi _{p^{n-1}})=0`$ for some nonzero integer $`N`$. This implies that $`\sigma (K,\chi _{p^{n-1}})=0`$. Similarly we can show that $`\sigma (K,\chi _{ap^{n-1}})=0`$, for $`0<a<p`$.
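The two number-theoretic facts driving this step are easy to verify computationally for a small prime. The snippet below is our own illustration for $`p=7`$ (so $`q=3`$); it is not part of the proof.

```python
import cmath

p = 7               # a prime congruent to 3 mod 4
q = (p - 1) // 2    # q = 3, odd exactly because p = 3 (mod 4)

# g^((p-1)/2) = -1 mod p for a generator g of the units mod p (here g = 3):
print(pow(3, q, p) == p - 1)                         # True

# Since q is odd, no power of a primitive q-th root of unity equals -1,
# so terms of modulus one evaluated at w cannot cancel the positive
# integer 4k - k0, and f(w) cannot vanish:
w = cmath.exp(2j * cmath.pi / q)
print(any(abs(w**j + 1) < 1e-9 for j in range(q)))   # False
```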
Similarly we can show that $\sigma(K, \chi_{ap^{n-1}}) = 0$ for $0 < a < p$.

Next, let $l$ be a nonnegative integer, and assume that $\sigma(K,\chi_{ap^s}) = 0$ for all $a \in \mathbf{Z}$ and all $s$ such that $l < s \le n-1$. We will show that $\sigma(K,\chi_{p^l}) = 0$. For $0 \le i \le S_l$, we multiply the vectors from the $(S_{i-1}+1)$st to the $S_i$th vector by $p^{l-i}$, clear off-diagonal entries in the upper left $S_l \times S_l$ square block, and add the first $S_l$ rows to get a vector in $L_p$ with first $S_l$ entries equal to $p^l$, and the remaining entries divisible by $p^l$. Since we have assumed that $\sigma(K,\chi_{ap^s}) = 0$ for $l < s \le n-1$, we can ignore the entries which are of the form $ap^s$, with $s > l$. Then we have a character to the multiplicative group of units in $\mathbf{Z}_{p^{n-l}}$. Since $p$ is odd, this is a cyclic group of order $p^{n-l-1}(p-1)$ (see [D]). Again, since $\sigma(K,\chi_{x_i}) = \sigma(K,\chi_{p^n - x_i})$, we can view the relations as sitting in $\mathbf{Z}[\mathbf{Z}_q]$, where $q = p^{n-l-1}(p-1)/2$. As $p^{n-l-1}(p-1)/2$ is odd, it follows, as above, that the relation $f = S_l + \sum_{i=1}^{k'} t^{\alpha_i} = 0$, where $0 \le k' \le 4k - S_l$, is relatively prime to $t^q - 1$. It follows that $\sigma(K,\chi_{p^l}) = 0$. Similarly, $\sigma(K,\chi_{ap^l}) = 0$ for $0 < a < p$.

Thus, we have $\sigma(K,\chi_{p^{(n-1)/2}}) = 0$, which contradicts 2.6, and proves that $K$ cannot be of finite order in the concordance group.

4 Examples

Basic examples illustrating the applicability of Theorem 1.2 are easily constructed. For instance, since the 2–fold branched cover of $S^3$ over an unknotting number one knot has cyclic homology, to apply Theorem 1.2 we only need to check the order of $H_1(M_K)$, which equals the Alexander polynomial evaluated at $-1$. We have the following.

4.1 Corollary. Let $K$ be an unknotting number one knot with Alexander polynomial $\Delta$. If a prime $p$ which is congruent to 3 mod 4 appears in the prime power factorization of $\Delta(-1)$ with an odd exponent, then $K$ is of infinite order in the concordance group.

More generally, suppose that there is a 3–ball $B \subset S^3$ intersecting the knot $K$ in two arcs so that the pair $(B, B \cap K)$ is trivial and so that removing $(B, B \cap K)$ from $S^3$ and gluing it back in via a homeomorphism of the boundary yields the unknot. Since the 2–fold branched cover of the ball over two trivial arcs is a solid torus, the 2–fold branched cover of $S^3$ over $K$ is formed from $S^3$ (the 2–fold branched cover of $S^3$ over the unknot) by removing a solid torus and sewing it back in via some homeomorphism. In particular, the 2–fold branched cover has cyclic homology. Such knots include all unknotting number one knots and all 2–bridge knots. In the case of a 2–bridge knot $K(p,q)$, we have $H_1(M_K) = \mathbf{Z}_p$.

4.2 Corollary. The 2–bridge knot $K(p,q)$ has infinite order in the knot concordance group if some prime congruent to 3 mod 4 has odd exponent in $p$.

The following theorem, an immediate consequence of a result of Levine (Corollary 23 in [L2]), provides us with more examples of knots which represent torsion in the algebraic concordance group.

4.3 Theorem. If a knot $K$ has quadratic Alexander polynomial $\Delta(t)$ then: (a) $K$ is of finite order in the algebraic concordance group if and only if $\Delta(1)\Delta(-1) < 0$, in which case $K$ is of order 1, 2 or 4. (b) $K$ is of order 1 if and only if $\Delta(t)$ is reducible.
(c) if $K$ is of finite order, and $\Delta(t)$ is irreducible, then $K$ is of order 4 in the algebraic concordance group if and only if for some prime $p > 0$ with $p \equiv 3 \bmod 4$, $\Delta(1)\Delta(-1) = -p^a q$ where $a$ is odd and $q > 0$ is relatively prime to $p$.

Consider the knot $K_a$, the $a$–twisted double of some knot $K$. The Seifert form for this knot is

$$V=\left(\begin{array}{cc}a& 1\\ 0& -1\end{array}\right),$$

it has Alexander polynomial $\Delta(t) = at^2 - (2a+1)t + a$, and the homology of the 2–fold cyclic branched cover is $\mathbf{Z}_{|4a+1|}$. Levine's result, Theorem 4.3, applies to determine the algebraic order of all of these knots. (In the case that $K$ is unknotted, $K_a$ can be described as the 2–bridge knot $K(4a+1, 2a)$.)

4.4 Corollary. The $a$–twisted double of a knot $K$: (a) is of infinite order in the algebraic concordance group, $\mathcal{G}$, if $a < 0$. (b) is algebraically slice if $a > 0$ and $4a+1$ is a perfect square. (c) is of order 2 in $\mathcal{G}$ if $a > 0$, $4a+1$ is not a perfect square, and every prime congruent to 3 mod 4 has even exponent in the prime power factorization of $4a+1$. (d) is of order 4 if $a > 0$ and some prime congruent to 3 mod 4 has odd exponent in $4a+1$.

Casson and Gordon [CG1, CG2] proved that if $K$ is unknotted, then all knots covered by case (b) above are actually of infinite order in concordance, except if $a = 2$ in which case $K_2$ is slice. An immediate consequence of Theorem 1.2 is:

4.5 Corollary. If $K_a$ is of order 4 in $\mathcal{G}$ then it is of infinite order in the knot concordance group.

As in Corollary 9.5 of [LN], a simple argument using Corollary 4.5 gives an infinitely generated free subgroup of $\mathcal{C}_1$ which consists of algebraically slice knots. (It was first shown by Jiang in [J] that the kernel of Levine's homomorphism is infinitely generated.) The extensive calculations of [CG1, CG2] are here replaced with a trivial homology calculation. Moreover, the results of [CG1, CG2] apply only in the case that $K$ is unknotted, a restriction that does not appear in Corollary 4.5. Recently, Tamulis [T] has proved that in the case that $K$ is unknotted, if $K_a$ is of order 2 in $\mathcal{G}$ and $4a+1$ is prime, then $K_a$ is of infinite order in concordance.

Counterexamples. Given these previous examples, it is a bit unexpected that Theorem 1.2 does not apply in all cases of order 4 knots with quadratic Alexander polynomial. The difficulty is that the conditions of Theorem 4.3 do not assure that the homology of the 2–fold cover is cyclic. The next example demonstrates this. It is the simplest possible example in terms of the coefficients of the Alexander polynomial; its complexity illustrates the strength of Theorem 1.2. The example is obtained by letting $K$ be a knot with Seifert form:

$$V=\left(\begin{array}{cc}21& 53\\ 52& 21\end{array}\right).$$

The Alexander polynomial for $K$ is $\Delta(t) = 2315 - 4631t + 2315t^2$. We have that $\Delta(1) = -1$, $\Delta(-1) = 9261 = 3^3 \cdot 7^3$, and hence by Theorem 4.3, $K$ is of order 4 in the algebraic concordance group. The homology of $M_K$ is presented by $V + V^t$:

$$V+V^t=\left(\begin{array}{cc}42& 105\\ 105& 42\end{array}\right).$$

A simple manipulation shows that this presents the group $\mathbf{Z}_3 \oplus \mathbf{Z}_9 \oplus \mathbf{Z}_7 \oplus \mathbf{Z}_{49}$. Because this is not cyclic, Theorem 1.2 does not apply.
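Both computations in this example are easy to verify by machine. The sketch below (using sympy; the helper function name is ours) computes the Smith normal form of $V + V^t$ to recover the homology group, and runs the odd-exponent test on $\Delta(-1) = 9261$ that underlies Corollaries 4.1, 4.2 and 4.4(d).

```python
from sympy import Matrix, ZZ, factorint
from sympy.matrices.normalforms import smith_normal_form

# Homology of the 2-fold branched cover, presented by V + V^t.
A = Matrix([[42, 105], [105, 42]])
print(smith_normal_form(A, domain=ZZ))   # diag(21, 441): Z_21 + Z_441,
                                         # i.e. Z_3 + Z_9 + Z_7 + Z_49 -- not cyclic

def has_odd_3mod4_prime(m):
    """True if some prime p = 3 (mod 4) has odd exponent in |m|."""
    return any(p % 4 == 3 and e % 2 == 1 for p, e in factorint(abs(m)).items())

print(factorint(9261))                   # {3: 3, 7: 3}
print(has_odd_3mod4_prime(9261))         # True: 3 (and 7) appear with odd exponent
```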
As mentioned in the introduction, we have been able to extend our results to special cases of direct sums of cyclic groups, and one of those extensions applies to the group $\mathbf{Z}_3 \oplus \mathbf{Z}_9$. Hence it can actually be shown that any knot with this Seifert form is not of order 4 in concordance.

References

[CG1] A. Casson and C. Gordon, Cobordism of classical knots, Preprint, Orsay (1975). (Reprinted in "A la recherche de la Topologie perdue", ed. by Guillou and Marin, Progress in Mathematics, Volume 62, Birkhauser, 1986.)
[CG2] A. Casson and C. Gordon, On slice knots in dimension three, Proc. Symp. Pure Math. 32 (1978), 39–54.
[D] D. Dummit and R. Foote, Abstract Algebra, 2nd ed., Prentice Hall, New Jersey, 1999.
[F] R. Fox, A quick trip through knot theory, Topology of 3–manifolds and related topics (Proc. The Univ. of Georgia Institute, 1961), 120–167, Prentice-Hall, Englewood Cliffs, N.J., 1962.
[FM] R. Fox and J. Milnor, Singularities of 2–spheres in 4–space and cobordism of knots, Osaka J. Math. 3 (1966), 257–267.
[G] C. Gordon, Problems, in Knot Theory, ed. J.-C. Hausmann, Springer Lecture Notes no. 685, 1977.
[Gi] P. Gilmer, Slice knots in $S^3$, Quart. J. Math. Oxford Ser. (2) 34 (1983), no. 135, 305–322.
[J] B. Jiang, A simple proof that the concordance group of algebraically slice knots is infinitely generated, Proc. Amer. Math. Soc. 83 (1981), 189–192.
[K1] R. Kirby, Problems in low dimensional manifold theory, in Algebraic and Geometric Topology (Stanford, 1976), Proc. Sympos. Pure Math. 32, part II, 273–312.
[K2] R. Kirby, Problems in low dimensional manifold theory, Geometric Topology, AMS/IP Studies in Advanced Mathematics, ed. W. Kazez, 1997.
[L1] J. Levine, Knot cobordism groups in codimension two, Comment. Math. Helv. 44 (1969), 229–244.
[L2] J. Levine, Invariants of knot cobordism, Invent. Math. 8 (1969), 98–110.
[LN] C. Livingston and S. Naik, Obstructions to 4–torsion in the classical knot concordance group, J. Diff. Geom.
[M] T. Morita, Orders of knots in the algebraic knot cobordism group, Osaka J. Math. 25 (1988), no. 4, 859–864.
[R] D. Rolfsen, Knots and Links, Publish or Perish 7, Berkeley, CA (1976).
[T] A. Tamulis, Concordance of classical knots, thesis, Indiana University Department of Mathematics, 1999.
no-problem/9911/math9911113.html
ar5iv
text
# The number of edges in critical strongly connected graphs

## 1. Introduction

A directed graph (or digraph) without loops or multiple edges is called strongly connected if each vertex in it is reachable from every other vertex. It is called (vertex) critical strongly connected if, in addition to being strongly connected, it has the property that the removal of any vertex from it results in a non-strongly connected graph. We denote by $M(n)$ the maximal number of edges in a critical strongly connected digraph on $n$ vertices. Schwarz conjectured (and proved for $n \le 5$) that $M(n) \le \binom{n}{2}$. This conjecture was proved by London in . In this paper we determine the precise value of $M(n)$, showing that it is $\binom{n}{2} - n + 4$. (The corresponding number for edge-critical strongly connected graphs is $2n-2$, see e.g. , pp. 65–66.)

Here is some notation we shall use. Given a digraph $D$ we denote by $V(D)$ the set of its vertices, and by $E(D)$ the set of edges. Throughout the paper the notation $n$ will be reserved for the number of vertices in the digraph named $D$. For a vertex $v$ of $D$ we write $E_D^+(v)$ for the set of vertices $u$ for which $(v,u) \in E(D)$ and $E_D^-(v)$ for the set of vertices $u$ for which $(u,v) \in E(D)$. We write $d_D(v)$ for the degree of $v$, namely $|E_D^+(v)| + |E_D^-(v)|$. For a subset $A$ of $V(D)$ we write $D-A$ for the graph obtained from $D$ by removing all vertices in $A$, together with all edges incident with them. If $A$ consists of a single vertex $a$, we write $D-a$ for $D-\{a\}$. By $D/A$ we denote the digraph obtained from $D$ by contracting $A$, namely replacing all vertices of $A$ by a single vertex $a$, and defining $E_{D/A}^+(a) = \bigcup\{E_D^+(v) : v \in A\} \setminus A$ and $E_{D/A}^-(a) = \bigcup\{E_D^-(v) : v \in A\} \setminus A$.

## 2. The number of edges in vertex-critical graphs

###### Theorem 2.1. For $n \ge 4$
$$M(n) = \binom{n}{2} - n + 4$$

A vertex-critical graph with $\binom{n}{2} - n + 4$ edges is the following. Take a directed cycle $(v_1, v_2, \dots, v_n)$, and add the directed edges $(v_i, v_j)$, $3 \le j < i \le n$, and the edge $(v_2, v_1)$. Thus, what remains to be proved is that in a vertex-critical graph the number of edges does not exceed $\binom{n}{2} - n + 4$. The proof will be based on two lemmas.

###### Lemma 2.2. Let $D$ be a strongly connected digraph and $v \in V(D)$ a vertex satisfying $d(v) \ge n$. Then there exists a vertex $z \in V(D) \setminus \{v\}$ such that $D-z$ is strongly connected.

Proof The proof is by induction on $n$. For $n < 2$ the lemma is vacuously true, since its conditions are impossible to fulfil. For $n = 2$ take $z$ to be the vertex of the graph different from $v$. Let now $n > 2$ and suppose that the lemma is true for all graphs with fewer than $n$ vertices. Let $v$ be as in the lemma. There exists then a vertex $u$ such that between $u$ and $v$ there is a double-arc (that is, two oppositely directed edges). Let $C = D/\{u,v\}$, and name $w$ the vertex of $C$ replacing the shrunk pair $\{u,v\}$. By a negation hypothesis, we may assume that $D-u$ is not strongly connected. We claim then that $d_C(w) \ge n-1$. This will prove the lemma, since by the induction hypothesis it will follow that $C$ has a vertex $z$ different from $w$ whose removal leaves $C$ strongly connected. But then, clearly, also $D-z$ is strongly connected.
To prove the claim note, first, that $d_C(w) \ge n-2$. This follows from the fact that each edge in $D$ incident with $v$ and different from the two edges joining $v$ with $u$ has its copy in $C$. Since by our assumption $D-u$ is not strongly connected, there are two edges in $D$, say $(x,u)$ and $(u,y)$, such that $y$ is not reachable from $x$ in $D-u$. If $x = v$ then the edge $(w,y)$ is an edge in $C$ not having a copy $(v,y)$ in $D$, and thus can be added to the $n-2$ edges incident with $v$ counted above, and thus $d_C(w) \ge n-1$, as desired. Similarly, if $y = v$ then the edge $(x,w)$ shows that $d_C(w) \ge n-1$. If, on the other hand, $x \ne v$ and $y \ne v$, then one of $(x,w)$ or $(w,y)$ is an edge in $C$ not counted above. $\square$

Note that the lemma proves the original conjecture of Schwarz, namely that $M(n) \le \binom{n}{2}$.

###### Lemma 2.3. Let $D$ be a critical digraph and $C$ a chordless cycle in it, such that $V(C) \ne V(D)$. Then $d(v) \le n - |V(C)| + 2$ for all $v \in V(C)$, with strict inequality holding for at least two vertices.

Proof Let $J = D/V(C)$, and denote by $c$ the vertex of $J$ obtained from the contraction of $C$. Write $k$ for $|V(C)|$. The graph $J$ has $n-k+1$ vertices, and therefore, by Lemma 2.2, $d_J(c) \le n-k$. This implies that $d(v) \le n-k+2$ for every $v \in V(C)$. Suppose now that, for some vertex $w$ of $C$, there obtains $d(v) = n-k+2$ for all vertices $v \in V(C) \setminus \{w\}$. Then $d_J(c) = n-k$, and all sets $E_D^+(v) \setminus V(C)$ ($v \in V(C) \setminus \{w\}$) are equal, and the same goes for the sets $E_D^-(v) \setminus V(C)$. Moreover, $(E_D^+(w) \setminus V(C)) \subseteq E_D^+(v)$ and $(E_D^-(w) \setminus V(C)) \subseteq E_D^-(v)$ for all $v \in V(C)$. But then $D-w$ must be strongly connected, since if $(x,w)$ and $(w,y)$ are edges in $D$, then $y$ is reachable from $x$ in $D-w$ through vertices of $V(C) \setminus \{w\}$. $\square$

Proof of Theorem 2.1 The proof is by induction on $n$. Write $s_n$ for the value claimed by Theorem 2.1 for $M(n)$, namely
$$s_n = \binom{n}{2} - n + 4$$
Since $D$ is critically strongly connected, it contains a chordless cycle $C$. Let $|V(C)| = k$. If $V(C) = V(D)$ then we are done because then $|E(D)| = n \le s_n$. Thus we may assume that $V(C) \ne V(D)$, and since $D$ is critical, this implies that $n \ge k+2$. Let $J = D/V(C)$, and denote by $c$ the vertex of $J$ obtained from the contraction of $C$.

###### Assertion 2.4.
$$|E(D)| - |E(J)| \le s_n - s_{n-k+1}$$

Consider first the case $k = 2$. Let $v$ be one of the two vertices of $C$. The graph $J$, being the contraction of a strongly connected graph, is itself strongly connected, and since $D-v$ is not strongly connected, we have $J \ne D-v$. This implies that $|E(J)| > |E(D-v)|$, and hence
$$|E(D)| - |E(J)| \le d_D(v) - 1 \le n-2 = s_n - s_{n-1}$$
Assume now that $k \ge 3$. Let $v$ be a vertex of $C$ having maximal degree, namely $d_D(v) \ge d_D(u)$ for all $u \in V(C)$. Let $r$ be the number of edges in $D$ not incident with any vertex of $C$. Then
$$|E(D)| = r - k + \sum_{u \in V(C)} d_D(u)$$
and
$$|E(J)| = d_J(c) + r \ge d_D(v) - 2 + r$$
and therefore
$$|E(D)| - |E(J)| \le 2 - k + \sum_{u \in V(C) \setminus \{v\}} d_D(u) \le 2 - k + 2(n-k+1) + (k-3)(n-k+2) = (k-1)n - (k^2 - 2k + 2) \le s_n - s_{n-k+1}$$
which proves the assertion.

If $J$ is critical, then the theorem follows from Assertion 2.4 and the induction hypothesis. So, we may assume that $J$ is not critical. But, for every vertex $u$ different from $c$, the graph $J-u$ is not strongly connected, since the graph $D-u$ is not strongly connected.
Hence, by Lemma 2.2, we have
$$d_J(c) \le n-k \qquad (1)$$
On the other hand, the fact that $J$ is not critical means that $J-c = D-V(C)$ is strongly connected. We next show:

###### Assertion 2.5.
$$\sum_{v \in V(C)} d_D(v) \le (n-1)k - n + 4$$

Proof of the assertion By Lemma 2.2, $d(v) < n$ for all vertices $v$. Hence, if $d_D(v) = 2$ for some $v \in V(C)$ then the assertion is true. So, we may assume that $d_D(v) > 2$ for all $v \in V(C)$. This means that $(E_D^+(v) \cup E_D^-(v)) \setminus V(C) \ne \emptyset$ for every $v \in V(C)$. Let $V(C) = \{v_1, v_2, \dots, v_k\}$ and $E(C) = \{(v_i, v_{i+1}) : 1 \le i \le k\}$ (where, as usual, the indices are taken modulo $k$). Without loss of generality we may assume that $E_D^+(v_1) \setminus V(C) \ne \emptyset$. If $E_D^-(v_3) \setminus V(C) \ne \emptyset$ then $D - v_2$ is strongly connected. Thus we may assume that $E_D^-(v_3) \setminus V(C) = \emptyset$ and $E_D^+(v_3) \setminus V(C) \ne \emptyset$. Applying this argument again and again, we conclude that $k$ is even and that $E_D^-(v_i) \setminus V(C) = \emptyset$ for all odd $i$ and $E_D^+(v_i) \setminus V(C) = \emptyset$ for all even $i$. By (1) it follows that for every two adjacent vertices on $C$ the total number of edges incident with them and not belonging to $C$ does not exceed $n-k$. This implies that:
$$\sum_{v \in V(C)} d_D(v) \le \frac{k}{2}(n-k) + 2k \le (n-1)k - n + 4$$
proving the assertion.

Recall now that $n \ge k+2$ and that $D-V(C)$ is strongly connected. Hence $D-V(C)$ contains a chordless cycle $C'$. Let $k' = |V(C')|$. The same arguments as above hold when $C$ is replaced by $C'$, and thus we may assume that
$$\sum_{v \in V(C')} d_D(v) \le (n-1)k' - n + 4$$
This, together with Lemma 2.2, yields:
$$\sum_{v \in V(D)} d_D(v) \le (n-1)k - n + 4 + (n-1)k' - n + 4 + (n-1)(n-k-k') = 2s_n$$
which means that
$$|E(D)| \le s_n$$
$\square$
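The extremal construction described after Theorem 2.1 is easy to verify numerically. The sketch below (Python with networkx; the function name is ours) builds that graph and checks that it is strongly connected, vertex-critical, and has exactly $\binom{n}{2} - n + 4$ edges.

```python
import networkx as nx
from math import comb

def extremal_critical_digraph(n):
    """Directed n-cycle v_1 -> ... -> v_n -> v_1, plus the edges (v_i, v_j)
    for 3 <= j < i <= n, plus the edge (v_2, v_1)."""
    G = nx.DiGraph()
    G.add_edges_from((i, i % n + 1) for i in range(1, n + 1))     # the cycle
    G.add_edges_from((i, j) for i in range(4, n + 1) for j in range(3, i))
    G.add_edge(2, 1)
    return G

n = 8
G = extremal_critical_digraph(n)
assert G.number_of_edges() == comb(n, 2) - n + 4                  # s_n edges
assert nx.is_strongly_connected(G)
# criticality: removing any single vertex destroys strong connectivity
assert all(not nx.is_strongly_connected(G.subgraph(set(G) - {v}).copy())
           for v in G)
print("critical strongly connected with", G.number_of_edges(), "edges")
```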
no-problem/9911/cond-mat9911454.html
ar5iv
text
# Depinning of kinks in a Josephson-junction ratchet array

## I Introduction

Disorder and noise are not always undesirable in physical systems. Inhomogeneity has been shown to control certain types of spatiotemporal chaos, while noise can lead to an enhancement of the signal-to-noise ratio because of stochastic resonance . Another more recent counterintuitive result is that of transport of a Brownian particle in a ratchet potential . Though initially proposed as a model for molecular motors in biological organisms , ratchets can also serve as a model to study dissipative and stochastic processes in nanoscale devices. A ratchet potential is a periodic potential which lacks reflection symmetry (in 1D, $V(x) \ne V(-x)$; see Figure 1). A consequence of this symmetry breaking is the possibility of rectifying non-thermal, or time-correlated, fluctuations . This can be understood intuitively. In Fig. 1, it takes a smaller dc driving force to move a particle from a well to the right than to the left. In other words, the spatial symmetry of the dc force is broken. Under an ac drive (so-called "rocking ratchets") or time-correlated noise, particles show net directional motion in the smallest slope direction. This effect can be used in devices in which selection of particle motion is desired. Because of this effect, ratchet engines have been proposed as devices for phase separation , and very recently as a method of flux cleaning in superconducting thin films . A ratchet mechanism has also been proposed as a method to prevent mound formation in epitaxial film growth .

Josephson junctions are solid state realizations of a simple pendulum. By coupling them, it is possible to make a physical realization of model systems such as the Frenkel-Kontorova model for dislocations or the 2-D X-Y model for phase transitions. In particular, a parallel Josephson array (see Fig. 2) is a discrete version of the sine-Gordon equation and it has been used to experimentally study soliton (usually referred to as kink, vortex or fluxon) dynamics on a discrete lattice . In parallel arrays, kinks behave as particles to which the idea of Brownian rectification can apply. The applied current is the driving force. If the kink experiences a ratchet potential, then the current needed to move the kink in one direction is different from the current needed to move it in the opposite direction.

In this paper, we will show that we can design almost any type of 1D pinning potential in a parallel Josephson array by choosing an appropriate combination of junction critical currents and plaquette areas. Indeed, it has been shown that two alternating critical currents and plaquette areas are enough to provide a ratchet potential for fluxons. As we will show below, this is not the only possible design for a ratchet potential. With only an ac driving current these arrays show stable dc voltage steps at multiples of the external ac drive amplitude. This occurs when the equivalent ac driving force becomes commensurate with the period of the ratchet potential. This behavior could open the possibility of using these arrays for a voltage standard device or a microwave detector without a dc bias current. Moreover, the same ideas of flux cleaning underlying reference could be applied to 2D arrays using the designs described here.

The paper is organized into 5 sections. Section II introduces the theoretical framework for the study of an inhomogeneous parallel Josephson array.
We find that inhomogeneous arrays present a long periodicity with respect to the number of kinks in the array. To test the theory, we have designed four different Josephson junction rings and measured the depinning current of the array versus the applied magnetic field. The experimental results are shown in section III. In section IV we discuss some of the properties of the model and show that they agree well with the experimental results. We also show that a combination of three different critical current junctions is sufficient to design a ratchet potential. In section V we present the conclusions of our work and propose a number of new experiments.

## II Theoretical framework

### A Circuit model

Figure 2 shows the circuit diagram for an array of Josephson junctions. Each junction is marked by an "$\times$" and we will connect $N$ junctions in parallel with short wires as shown. Coupling of the junctions occurs through the geometrical inductances of the cells. We will neglect all mutual inductances and consider only the self-inductance of each cell $L_j$. The induced flux in each cell is then $L_j$ times the mesh current of the cell which in this simple geometry can be easily seen to equal the current through the top horizontal link $I_b^j$. We will use $I_{ext}$ for the uniformly applied external bias current per junction as shown in Fig. 2. We then define the mesh current as the current passing through this top horizontal wire. With this definition we can place the loop self-inductance $L_j$ on the top horizontal link. We emphasize that this inductance is not the wire inductance, but the self-inductance of the cell so that only one such element is needed per cell. The junctions will be modeled by the parallel combination of an ideal Josephson junction with a critical current of $I_c^j$, a capacitor $C_j$, and a resistance $R_j$. The ideal Josephson junction has a constitutive relation of $I_j = I_c^j \sin\phi_j$ where $\phi_j$ is the gauge-invariant phase difference of the junction. When there is a voltage across the junction, $v_j$, then $v_j = (\Phi_0/2\pi)\, d\phi_j/dt$. Since we will have $N$ parallel junctions, in our array $j = 1$ to $N$.

The circuit equations result from applying current conservation and flux quantization. Current conservation at the top node of junction $j$ yields
$$C_j\dot{v}_j + \frac{v_j}{R_j} + I_c^j \sin\phi_j = I_{ext} + I_b^j - I_b^{j-1} \qquad (1)$$
Flux quantization of cell $j$ yields
$$\frac{\Phi_0}{2\pi}(\phi_{j+1} - \phi_j) = \Phi_j, \qquad (2)$$
where $\Phi_j$ is the total flux in cell $j$. Due to the linearity of Maxwell's equations, $\Phi_j$ can be decomposed into two parts: the induced flux $\Phi_{ind}^j$, and the external flux $\Phi_{ext}^j$ which is the applied field $B_{ext}$ times the cell area $A_j$. The induced flux is simply $L_j$ times the mesh current of the cell, which has been defined to equal $I_b^j$. Then,
$$C_j\dot{v}_j + \frac{v_j}{R_j} + I_c^j \sin\phi_j = I_{ext} + F_j + \frac{\Phi_0}{2\pi}\left[\frac{1}{L_j}(\phi_{j+1} - \phi_j) + \frac{1}{L_{j-1}}(\phi_{j-1} - \phi_j)\right] \qquad (4)$$
with $F_j = (\Phi_{ext}^{j-1}/L_{j-1} - \Phi_{ext}^j/L_j)$. This circuit is realizable by varying cell and junction areas. The cell area $A_j$ will determine the self-inductance.
If $W$ is the width of the cell and $\Delta x_j$ is its length then $L_j \approx \mu_0 \Delta x_j$ as long as $W \gg \Delta x_j$. Since $\Phi_{ext}^j = W \Delta x_j B_{ext}$, we see that $\Phi_{ext}^j/L_j \approx W B_{ext}/\mu_0$ and is approximately constant for all $j$. The junction area determines $I_c^j$, $C_j$, and $R_j$ but they are not independent since the capacitance and critical current are linearly proportional to the junction area and the resistance is inversely proportional to the junction area. The $I_c^j R_j$ product and the $I_c^j/C_j$ ratio of each junction are constant for every junction.

We will normalize all the currents by $I_c^\ast = \max(I_c^j)$ and time by $\tau = \sqrt{\Phi_0 C_\ast/2\pi I_c^\ast}$ where $C_\ast = \max(C_j)$. Then,
$$h_j \mathcal{N}(\phi_j) = i_{ext} + f_j + \lambda_j(\phi_{j+1} - \phi_j) + \lambda_{j-1}(\phi_{j-1} - \phi_j) \qquad (6)$$
where $\mathcal{N}(\phi_j) = \ddot{\phi}_j + \Gamma\dot{\phi}_j + \sin\phi_j$. The ratio of critical currents is $h_j = I_c^j/I_c^\ast$ and the inductances are normalized as $\lambda_j = \Phi_0/2\pi I_c^\ast L_j$. Finally, $f_j = 2\pi f(\lambda_{j-1}A_{j-1}/A_\ast - \lambda_j A_j/A_\ast)$, where $f$ is the frustration $B_{ext}A_\ast/\Phi_0$. We have used $A_\ast = \max(A_j)$.

To complete the system we need to specify the boundary conditions. There are two types: open, if the junctions form a linear row; and periodic, if the junctions form a closed ring. For the open boundary condition we set $\lambda_0 = A_0 = 0$ in Eq. 6 for junction $j = 1$. At the other end of the array, $j = N$, we set $\lambda_N = A_N = 0$. For the periodic boundary conditions we let $\lambda_0 = \lambda_N$ and $A_0 = A_N$. Furthermore, a circular system poses a topological constraint on $\phi_j$ since they are angular variables and have $2\pi$ periodicity: $\phi_{j+N} = \phi_j + 2\pi M$. In particular $\phi_0 = \phi_N - 2\pi M$ and $\phi_{N+1} = \phi_1 + 2\pi M$. Here $M$ is referred to as the winding number and represents the number of kinks in the system. In this paper we will discuss systems with periodic boundary conditions. Since the product $\lambda_j A_j$ is roughly constant throughout the array we consider $f_j = 0$ in the simulations of the rings we present . We have checked numerically that for the experiments reported here, these terms do not significantly alter our results.

### B Symmetries

The system of equations (6) presents an odd inversion symmetry under the change $M \to -M$, $\phi_i \to -\phi_i$, and $i_{ext} \to -i_{ext}$, as is expected from Maxwell's equations. The response of the array to an external current will reflect this symmetry. In particular, $I_{dep}(M) = -I_{dep}(-M)$. Here, $I_{dep}$ is the maximum value of the applied current for which a solution $\dot{\phi}_j = 0$ can be sustained in the presence of a positive or negative external current. To refer to this odd inversion symmetry we will use the notation $I_{dep-}(-M) = I_{dep+}(M)$, where $I_{dep\pm}$ refers to the absolute value of the depinning current as the external current is increased or decreased from zero.

Another symmetry of the equations refers to the periodicity of the system when varying the number of kinks in the array. In the case of a regular ring (all the cells and junctions are equal) this period $T$ is equal to the number of junctions, $N$ .
A method of calculating the periodicity in $M$ for the general case studied here is to use the simple transformation
$$\psi_j = \phi_j + 2\pi m_j \qquad (7)$$
where the $m_j$ are integers. The equations of motion in the new variables are the following
$$h_j\mathcal{N}(\psi_j - 2\pi m_j) = \lambda_j(\psi_{j+1} - \psi_j) + \lambda_{j-1}(\psi_{j-1} - \psi_j) - 2\pi\lambda_j(m_{j+1} - m_j) - 2\pi\lambda_{j-1}(m_{j-1} - m_j) + i_{ext} + f_j \qquad (9)$$
where $\mathcal{N}(\psi_j - 2\pi m_j) = \mathcal{N}(\psi_j)$. The new boundary conditions are
$$\psi_{j+N} = \psi_j + 2\pi(M+T) \qquad (10)$$
where $T = m_{j+N} - m_j$. Thus after the transformation (7) we recover the same equations as (6) but with the number of kinks equal to $M+T$, so that the equations are periodic in the number of kinks in the array with a period $T$.

To calculate $T$ we take out the $m_j$ dependence on the right hand side of Eq. 9 by choosing $m_j$ such that $\lambda_j(m_{j+1} - m_j) + \lambda_{j-1}(m_{j-1} - m_j) = 0$. Remarkably, the resulting period is independent of $h_j$ and only depends on the ratios between the consecutive $\lambda$'s. In the appendix we find a formula for the periodicity in the number of kinks for the general system. Here we are going to develop the case of a ring that was measured: a ring with an even number of junctions and with alternating cell areas. In this case there are only two $\lambda$'s involved. Let $\lambda_j = \lambda_1\ (\lambda_2)$ for $j$ odd (even) and $\lambda_1/\lambda_2 = p/q$. If we let $(m_{j-1} - m_j) = -q$ and $(m_{j+1} - m_j) = p$ (for even $j$, for instance), we satisfy the above condition. The period is calculated from the new boundary conditions:
$$T = m_{N+1} - m_1 = (p+q)N/2. \qquad (11)$$
For the regular array $p = q = 1$ and we recover the expected result of $T = N$. Also, we note that in order to have a finite period we need the ratios between $\lambda$'s to be rational numbers. This condition will almost never be satisfied in a real experiment. Thus we see that a simple design of alternating cell areas can result in an arbitrarily long period (that could be equal to $\infty$) when varying the number of kinks in the array.

A similar calculation can be made for the case of an open array. As no topological constraint for the phases can be imposed, the number of kinks in the array does not appear in our equations. We consider instead the periodicity of the system with the external field. In this case, the periodicity depends on the ratio between the cell areas instead of the ratio between the inductances. It can be shown that the period in $f = B_{ext}A_\ast/\Phi_0$ is equal to $q$, where $A_2/A_1 = p/q$ and $A_1 = A_\ast$.
## III Experimental results

We have designed and fabricated the four different rings (a), (b), (c) and (d) schematically shown in Fig. 3. The rings are fabricated with a Nb-Al$_2$O$_x$-Nb tri-layer technology with a junction critical current density of 1 kA/cm$^2$. The current is injected through bias resistors in order to be distributed as uniformly as possible. We measure the dc voltage across a single junction and each ring consists of $N = 8$ junctions. Fig. 3(a) is a regular ring with equal critical currents and plaquette areas. Fig. 3(b) has alternating critical currents with a ratio of 0.43. Fig. 3(c) has alternating plaquette areas with a ratio of $\lambda$'s of 1.8. Finally, Fig. 3(d) has both alternating critical currents and alternating plaquette areas. It will be shown experimentally that only (d) has a ratchet pinning potential.

The outer diameter of each ring is 36 $\mu$m with an area of 4070 $\mu$m$^2$. The inner diameter is 18 $\mu$m and it consists of an island of niobium that is used to extract the applied current. The rings also have either small junctions ($3\times 3\ \mu$m$^2$) or alternating small and large junctions ($4.25\times 4.25\ \mu$m$^2$). The designed $I_c$ ratio is 0.5, but in practice the junction areas have rounded corners and experimentally we find the $I_c$ ratio to be 0.43. We vary the cell inductance by alternating the cell area. In this case, the angles of the cells are $60^\circ$ and $30^\circ$.

Both $\Gamma$ and $\lambda$ are mostly determined from material properties of the samples and the junction $I_c$. Since $I_c$ varies with temperature, both parameters can be experimentally controlled to some extent. In general $\Gamma$ and $\lambda$ can be made larger by up to a factor of 10 by raising the sample temperature. As the temperature reaches $T_c$, however, most of the measured features become too smeared to be distinguished. The temperature dependence of $I_c$ is modeled well by the standard Ambegaokar-Baratoff relation with $I_c(0)R_n = 1.9$ mV . We find that $I_c(0) = 95\ \mu$A for the small junctions and $I_c(0) = 224\ \mu$A for the larger junctions. We will normalize all our parameters with the largest $I_c$ of a given ring. From the above values, we can estimate $\Gamma(0) = 0.17$ which, due to the constant $I_cR_n$ product, is independent of junction area. The inductances are estimated from a numerical package that extracts inductances from complex 3-D geometries of conductors . In this sample the loop inductance is $L = 23.5$ pH for the small cells and $L = 42.6$ pH for the large ones [arrays (c) and (d)]. For the cells in rings (a) and (b), $L = 33.5$ pH. To calculate the dimensionless penetration depth $\lambda(0) = \Phi_0/2\pi L I_c(0)$ we use $I_c = 95\ \mu$A if the ring only has small junctions [(a) and (c)] and for those rings that also have large junctions [(b) and (d)] we use $224\ \mu$A.
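Evaluating the definition just quoted gives the zero-temperature $\lambda$ values for each cell type. A minimal computation (the numerical outputs are ours, obtained from the quoted inductances and critical currents):

```python
from math import pi

Phi0 = 2.068e-15                      # flux quantum (Wb)

def lam0(L_pH, Ic_uA):
    """Dimensionless penetration depth lambda(0) = Phi_0 / (2*pi*L*Ic)."""
    return Phi0 / (2 * pi * L_pH * 1e-12 * Ic_uA * 1e-6)

print("rings (a),(b): L = 33.5 pH, Ic = 95 or 224 uA:", lam0(33.5, 95), lam0(33.5, 224))
print("rings (c),(d), small cell, L = 23.5 pH:",        lam0(23.5, 95), lam0(23.5, 224))
print("rings (c),(d), large cell, L = 42.6 pH:",        lam0(42.6, 95), lam0(42.6, 224))
```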
We have verified that the voltage positions correspond to these two mechanisms. Figure 4(b) is a ring with alternating critical currents when $`M=1`$. We again see that the IV is symmetric with respect to current direction and that there are voltage steps. These steps are of the same origin as in the regular ring. However, in this ring the linear dispersion relation that determines the resonance condition is split into two branches. This splitting is analogous to the optical and acoustic branches of a crystal with a two atom basis. Fig. (c) is a ring with alternating areas. The characteristics are similar to that of ring (b) including a splitting of the linear dispersion relation. Since for these three rings $`I_{dep+}=I_{dep}`$, we can infer that the kink is traveling in a symmetric pinning potential as theoretically expected. Figure 4(d) shows an IV for the ring with both alternating critical currents and areas. The IV of this ring is qualitatively different from the other rings due to the ratchet nature of the pinning potential. We see that $`I_{dep}`$ in the positive direction is $`65\%`$ of the depinning current in the negative direction. We also note that there are different voltage steps excited in the up and down direction. The steps are of the same nature as the explained resonances above and there is also a splitting of the dispersion relation. In the rest of the article we will focus on $`I_{dep}`$ measurements as a signature for ratchet behavior in our arrays. Figure 5 shows a measurement of the depinning current vs. applied flux for the regular ring shown in Fig. 3(a). The temperature is $`8.8\mathrm{K}`$, $`\mathrm{\Gamma }=0.5`$ while $`\lambda =0.9`$. Each plateau represents a different number of kinks trapped in the ring. This is a direct result of flux quantization: The ring only allows integer number of flux quanta even if we have applied slightly more or less flux. Since $`N=8`$ and this ring has a symmetric pinning potential, we expect $`I_{dep+}=I_{dep}`$ (no ratchet effect), and a period of 8 as can be seen in the measurements. We also see that $`I_{dep}`$ has a reflection symmetry about $`M=T/2`$. When we alternate the critical currents in our ring we expect the same qualitative features of $`I_{dep}`$ as in the regular ring. Figure 6 shows a measurement of the depinning current vs. applied flux for the ring shown in Fig. 3(b) which has alternating critical currents. There are plateaus corresponding to different values of $`M`$ just as in the regular ring and there is up-down symmetry and periodicity with $`M=8`$ as expected, and a reflection symmetry about $`M=4`$. If we make all the critical currents constant and vary only the cell area as in Fig. 3(c), then we alternate the values of $`\lambda `$ but the pinning potential remains symmetric. At $`T=9\mathrm{K}`$, $`\lambda _l`$ for the large cell is $`0.7`$ and $`\lambda _s`$ for the small cell is $`1.3`$. The result of measuring $`I_{dep}`$ is shown in Fig. 7. As expected the data is symmetric with respect to current direction so kinks are not traveling in a ratchet pinning potential. However, unlike in the previous rings, $`I_{dep}`$ is no longer periodic with $`M=8`$. As shown in section II B, the period will depend on the ratio of the inductances. For our geometry $`L_1/L_21.8`$ or $`9/5`$ which implies a period of 56. However, in any physical array the inductance ratio is rarely going to be exactly a ratio of small numbers. Just on physical grounds we expect a very large period, if any, in the experiments. 
In Fig. 7 we have measured the depinning current from $`M=15`$ to $`M=15`$ and though there is some apparent self-similarity in the data, it is not periodic. Though there is no period, we can still prepare our ring systematically with $`M=1`$, 2, 3, etc. by counting the plateaus. But instead of $`M=1`$ and $`M=1+N`$ yielding the same dynamical system as in the regular ring, they are now distinguishable. When we alternate both the critical current and the cell inductances as in Fig. 3(d), it is possible to form a ratchet pinning potential (see Figure 1). Fig. 8 shows an experiment on such a ring. Since the period depends on the inductance ratio, we experimentally expect a very long period. This is borne out by the data as there is no sign of a period in the range from $`M=15`$ to 15. We also expect that $`I_{dep+}I_{dep}`$ since the kink is traveling in a ratchet pinning potential. The line shown in the center of the figure varying about $`I_{dep}=0`$ is the difference between the $`I_{dep+}`$ and $`I_{dep}`$. Clearly, the force to move kinks in one direction is different than the force to move it in the opposite direction. The magnitude and direction of this ratchet effect depends on the number of kinks in the system. As a further test of the symmetries and periods of the experiments, we have numerically integrated Eq. 6 using a variable step size explicit 4th order Runge-Kutta method. The kink number $`M`$ is set in the boundary junctions. The initial conditions are $`\phi _j=2\pi Mj/N`$. That is, we stretch the kinks across the full array at the start of the simulation. We then sweep the applied current in the positive direction until a voltage develops in the array and calculate $`I_{dep+}`$. We repeat the procedure while sweeping the current in the negative direction to calculate $`I_{dep}`$. Figure 9 shows the simulations with parameters similar to those of the experiments. Both Fig. 9(a) and Fig. 9(b) have alternating $`\lambda ^{}s`$ with $`\lambda _j=0.3`$ and $`\lambda _{j+1}=0.54`$ for $`j`$ odd. The inductance ratio is $`0.54/0.3=9/5`$ so using Eq. 11 the expected period is $`T=56`$. We find this period in the simulations. Fig. 9(a) has $`h_j=1`$ so we expect the depinning current to be up-down symmetric, i.e. no ratchet effect, as can be seen in the data. Since we always have an odd inversion symmetry \[$`I_{dep+}(M)=I_{dep}(M)=I_{dep}(TM)`$\], $`I_{dep}`$ is symmetric about $`M=56/2=28`$. This reflection symmetry of $`I_{dep}`$ about $`T/2`$ is generic for any array that is not ratchet since it is a direct consequence of the up-down symmetry of the currents. We also find this symmetry in the experiments of non-ratchet arrays. Fig. 9(b) has junctions with two alternating critical currents ($`h_j=1`$ and $`h_{j+1}=0.43`$ for $`j`$ odd) as well as two alternating $`\lambda `$’s ($`\lambda _j=0.3`$ and $`\lambda _{j+1}=0.54`$ for $`j`$ odd). We now expect the kinks to travel in a ratchet pinning potential so that $`I_{dep+}`$ does not equal $`I_{dep}`$, though $`I_{dep}`$ still has an odd inversion symmetry. Just as in the experiments we see that the effect of the ratchet, and rectification direction, depends on the number of kinks. Also, $`I_{dep}`$ does not have the expected reflection symmetry about $`T/2`$. In summary, the simulations show the same features as the experiments and also agree quantitatively with our predictions. ## IV Discussion The equations developed in section II describe kink propagation through a discrete inhomogeneous medium. 
In this section we will try to get a better understanding of the system by briefly analyzing the continuous limit of our discrete equations. We will then go back to our discrete equations and approximate the pinning potential for a single kink. With the analysis, it will become apparent how it is possible to construct many types of pinning potentials, including ratchet ones, in the inhomogeneous array. To derive the continuous limit of the equations, let $`2\overline{\lambda }_j=\lambda _j+\lambda _{j1}`$ and $`\delta \lambda _j=\lambda _j\lambda _{j1}`$. Substituting in Eq. 6, we get $$h_j𝒩(\phi _j)=\overline{\lambda }_j_{xx}\phi _j+\delta \lambda _j_x\phi _j+f_j+i_{ext}$$ (12) where $`_{xx}\phi _j=\phi _{j+1}2\phi _j+\phi _{j1}`$ represents a discrete Laplacian while $`_x\phi _j=(\phi _{j+1}\phi _{j1})/2`$ is just the center difference of the first order derivative. To arrive at a continuous limit we expand our variables as Taylor series in $`\mathrm{\Delta }x_j`$. The cell area is $`W\mathrm{\Delta }x_j`$ while the cell inductance $`L_j=G\mathrm{\Delta }x_j`$ as $`\mathrm{\Delta }x_j0`$ where $`G`$ is a geometric constant. Therefore $`f_j=0`$ as $`\mathrm{\Delta }x_j0`$ and the discrete operators are replaced by their continuous derivatives $`h(x)𝒩(\phi )`$ $`=`$ $`\lambda (x)_{xx}\phi +_x\lambda (x)_x\phi +i_{ext}`$ (13) $`=`$ $`_x\left(\lambda (x)_x\phi \right)+i_{ext}`$ (14) If $`\lambda `$ and $`h`$ are constant then we have the usual sine-Gordon equation. In this case the equations have a reflection symmetry and it is not possible to have a ratchet pinning potential. If $`\lambda `$ is dependent on position, the spatial coupling is analogous to inhomogeneous diffusion, anisotropic heat conduction, or waves traveling in an anisotropic medium. We also note that $`f_j`$ in the discrete equations is essentially a perturbation to the continuous model that is dependent on the exact discretization employed and is usually small. Thus, in order to get a ratchet pinning potential, there are three ways to break the reflection symmetry of the equations: with an appropriate $`h(x)`$, $`\lambda (x)`$, or a combination of both. To calculate how the parameters $`h_j`$ and $`\lambda _j`$ determine the pinning potential, we will use a perturbative approach. In the limit where all $`\lambda _j0`$ the kink will approach a step function . A stable kink configuration will have the kink sitting in a potential well in the middle of a plaquette. Let the kink lie between junction $`j`$ and $`j+1`$. The nearest phases to $`j`$ and $`j+1`$ will be small in this limit. As an approximation we let $`\phi _j=\alpha `$ and $`\phi _{j+1}=2\pi \beta `$ and set all the other phases to 0 or $`2\pi `$. We can solve for $`\alpha `$ and $`\beta `$ by minimizing the static energy of the system, $$H=\underset{j}{}\left[\frac{\lambda _j}{2}(\phi _{j+1}\phi _j)^2+h_j(1\mathrm{cos}\phi _j)\right].$$ (15) Here we have ignored the kinetic energy since we are only concerned with kink depinning . Substituting we are left with $`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}(\lambda _j+\lambda _{j+1})\beta ^2+{\displaystyle \frac{1}{2}}(\lambda _{j1}+\lambda _j)\alpha ^2`$ (18) $`2\pi \lambda _j(\alpha +\beta )+\lambda _j\alpha \beta +2\pi ^2\lambda _j`$ $`+h_j(1\mathrm{cos}\alpha )+h_{j+1}(1\mathrm{cos}\beta ).`$ To solve for $`\alpha `$ and $`\beta `$ we minimize the energy: $`H/\alpha =0`$ and $`H/\beta =0`$. 
The resulting equation is transcendental because it depends on the sine of $\alpha$ and $\beta$ and would in general have to be solved numerically. However, for the systems of small $\lambda$'s studied here, the corrections are small and we can linearize the sine terms ($\sin(x) \approx x$) to solve for $\alpha$ and $\beta$. We have found that for the parameters used in this paper the linear approximation is sufficiently accurate to describe the numerically calculated pinning potentials. After linearizing the sine term we are left with
$$\alpha = 2\lambda_j\pi(h_{j+1} + \lambda_{j+1})/D \qquad (19)$$
$$\beta = 2\lambda_j\pi(h_j + \lambda_{j-1})/D, \qquad (20)$$
where $D = (h_j + \lambda_j + \lambda_{j-1})(h_{j+1} + \lambda_{j+1} + \lambda_j) - \lambda_j^2$. To get an idea of how the energy depends on the parameters, we can substitute back into Eq. 15 and expand the energy as a series with respect to $\lambda_j$. The result is
$$H = 2\pi^2\lambda_j + O(\lambda_j^2) \qquad (21)$$
For small $\lambda_j$, the height of the pinning potential when the kink is in the middle of a plaquette is determined by $\lambda_j$. The second order term has corrections due to $h_j$, $h_{j+1}$, $\lambda_{j-1}$ and $\lambda_{j+1}$.

As the kink moves through the pinning potential it will reach a point of maximum energy which, in the limit where all $\lambda_j \to 0$, occurs when the kink is on the top of a junction. In this limit the nearest phases can have small corrections. We let $\phi_{j-1} = \alpha$, $\phi_j = \pi - \beta$, and $\phi_{j+1} = 2\pi - \gamma$. Again we substitute the corrections and set all the other phases to 0 or $2\pi$. Minimizing the energy with respect to $\alpha$, $\beta$, and $\gamma$ and linearizing the sine terms yields: $\alpha = \lambda_{j-1}(\pi - \beta)/(h_{j-1} + \lambda_{j-2} + \lambda_{j-1})$, $\gamma = \lambda_j(\pi + \beta)/(h_{j+1} + \lambda_j + \lambda_{j+1})$, and $\beta$ that can be calculated from $\lambda_{j-1}(\pi - \beta - \alpha) - \lambda_j(\pi - \gamma + \beta) + h_j\beta = 0$. If we let every $\lambda_j$ be of $O(\lambda)$ and $O(\lambda) \ll O(h_j)$, then we can expand the energy as a series
$$H = 2h_j + O(\lambda). \qquad (22)$$
For small $\lambda$, $h_j$ determines the pinning potential height when the kink is on top of a junction.

The above calculation gives some intuition on the different ways of designing a ratchet pinning potential. For instance, alternating critical currents in the array will not produce a ratchet pinning potential since the potential will still have reflection symmetry. In this paper we have experimentally studied one possible way of breaking this reflection symmetry by using alternate critical currents and plaquette areas. However, another possibility corresponds to having three critical currents while maintaining equal areas for all the cells.

To test these ideas, we have numerically integrated Eq. 6 for the case of a 9 junction array. We let $h_{j-1} = 1$, $h_j = 0.5$, and $h_{j+1} = 0.25$ (with $h_{j+3} = h_j$) and use the experimentally realizable value of $\lambda_j = 0.25$ for all $j$. We set the kink number $M$ and the initial conditions as described in the previous section. We then sweep the applied current in both the positive and negative directions to calculate the depinning current. Fig. 10(a) shows the result of the simulation. There are three features in the depinning current vs. $M$ graph. First, the kink is traveling in a ratchet pinning potential.
For $M = 1$, as the current is swept in the positive direction the depinning current is different than when it is swept in the negative direction. Second, the depinning current has the expected odd inversion symmetry; that is, $I_{dep+}(M) = I_{dep-}(T-M)$. Thirdly, the depinning current is periodic with period $T = 9$. All these features were predicted by the theory developed above.

The observation that the kink is traveling in a ratchet pinning potential can be directly verified by calculating the pinning potential. We will use both the analysis described above and the numerical method used in , which allows us to compute the energy of the kink as it moves from a maximum to a minimum. The position of the kink in the array is calculated with
$$X_{cm} = \frac{1}{2} + \frac{1}{2\pi}\sum_{j=1}^{N} j(\phi_{j+1} - \phi_j). \qquad (23)$$
In Fig. 10(b) we have plotted the numerically calculated pinning potential. We place the kink on the energy maximum and perturb it along the unstable direction and calculate the energy using Eq. (15) and the kink position using Eq. (23). We have also superimposed the values of the kink pinning potential calculated from the above analysis. We have used the linearized results to calculate the phases and Eq. (15) to calculate the energy. The circles represent the energy when the kink is approximately on a junction while the squares are the energy when the kink is approximately in a plaquette center. We see that the pinning potential is indeed asymmetric and that the analysis agrees well with the numerical result.

## V Summary

We have shown that an inhomogeneous parallel Josephson-junction array provides an ideal experimental system to study kink motion in different potentials. In particular, we have designed a ratchet potential in an array with a ring geometry. One way of designing a ratchet potential is by varying cell inductances and junction areas. We have verified experimentally and numerically that a kink, and even a train of kinks, requires a different amount of force to depin in the positive and negative directions. One interesting result for the inhomogeneous rings is that the periodicity in $M$ of the system depends only on the inductance ratios of consecutive cells. As a consequence, it is possible to design a small ring, e.g. $N = 8$, such that one can distinguish between hundreds of states with different numbers of trapped kinks. We have also shown that a ratchet kink potential can be obtained by using junctions with three different critical currents. In this case, the inductances of all cells are equal and the array has a period in $M$ equal to the number of junctions.

We expect to investigate a kink in our ratchet potential with an ac bias to show that there is a rectifying effect: the ac force leads to kink drift in a preferred direction. This Brownian rectifier has the added technical benefit that the dc voltage response is quantized . This opens up the possibility of designing electronic detectors that can directly measure the amplitude (instead of just the frequency) of an applied signal very accurately. The ideas studied in this paper can be extended to the study of vortex depinning, vortex motion and flux flow in ratchet 2D Josephson-junction arrays. We just need to design a 2D array with an appropriate combination of critical currents and cell areas in the direction of vortex motion, which is perpendicular to the current injection direction.
Another way of designing a ratchet effect is by controlling the critical current of the individual junctions of a regular homogeneous array with the application of an external magnetic field. In this way, we can make a physical realization of a "flashing ratchet". The mechanics of motion is well understood . The pinning potential is removed periodically. In the interval in which the potential is off, particles can diffuse freely. After restoration of the pinning potential, most of the particles localize again in the minimum of the next lattice site, giving a net motion (in the opposite direction of the "rocking ratchet"). However, as we have seen, temperature (i.e., diffusion) does not play an important role in the motion of the kink. Nevertheless, one can devise a new mechanism for the kink motion in this context. After the removal of the pinning potential, kinks delocalize in an asymmetric way and localize again (when the pinning potential appears) in the next plaquette. Preliminary numerical simulations confirm this scenario.

The study of inhomogeneous 1D arrays of Josephson junctions can also help to elucidate pinning mechanisms in both 2D Josephson-junction arrays and superconducting thin films. Also, systems in which critical currents are modulated can show complex and interesting dynamical behavior. In these systems, and mainly in the presence of ac driving, we expect the appearance of new collective coherent vortex motion which can give a mode-locking response. Thus, these ratchet arrays may be used as inspiration for devices that take advantage of the properties of directional transport, rectification, and quantized response to ac driving. An interesting application of directional motion of vortices has already been proposed in . An appropriate ratchet potential (via the modulation of the thickness of the superconductor) is used to eliminate vortices from the thin film. This "cleaning" is also convenient in 1D and 2D Josephson-junction arrays in which the presence of trapped flux breaks the phase coherence of, for instance, arrays used as radiation sources or complex rapid single flux quantum (RSFQ) circuits. It appears that our ratchet pinning potential could be used to "clean" this trapped flux.

In summary, we have shown that inhomogeneous parallel arrays of Josephson junctions are ideal model systems for the study of flux pinning. We have also shown that there are different ways to build a ratchet pinning potential, and have found an excellent agreement between experiments and theory.

## Acknowledgments

We thank S. Watanabe, J.E. Mooij, S. Cilla, L.M. Floría and P.J. Martínez for insightful discussions. This work was supported by NSF grant DMR-9610042 and DGES (PB95-0797 and PB98-1592). JJM thanks the Fulbright Commission and the MEC (Spain) for financial support.

## Appendix

In this appendix we calculate the periodicity in the number of kinks, $M$, of Eq. (6) for a general inhomogeneous ring array. Importantly, this period depends only on the ratios between consecutive $\lambda$'s and it is independent of the order of such ratios and the values of the critical currents. As in the main text, we will use the following transformation for the phases:
$$\psi_j = \phi_j + 2\pi m_j, \qquad (24)$$
where $m_j$ is an integer. Eq. (9) is the new equation of motion in the new variables. The new boundary condition for the transformed variables becomes $\psi_{j+N} = \psi_j + 2\pi(M+T)$ with $T = m_{j+N} - m_j$.
The strategy to calculate the period $T$ will be to find a set of integers that eliminates the $m_j$ dependence in the right hand side of Eq. (9). We will look for solutions where $m_j - m_{j-1} + (m_j - m_{j+1})\lambda_j/\lambda_{j-1} = 0$. Clearly, this condition is independent of $h_j$ and only depends on the ratios $\lambda_j/\lambda_{j-1}$.

First we let $\lambda_j/\lambda_{j-1} = p_j/q_j$ with $p_j$ and $q_j$ coprime. Since only differences of $m_j$ are needed, we let $m_1 = 0$ without loss of generality. Then we solve for $m_3$ in terms of $m_2$,
$$m_3 = \frac{p_2 + q_2}{p_2} m_2. \qquad (25)$$
Similarly, $m_4$ in terms of $m_3$ is
$$m_4 = \frac{q_2 q_3 + p_3(p_2 + q_2)}{p_3(p_2 + q_2)} m_3. \qquad (26)$$
After some algebra we find the following recursive formula for $m_{j+1}/m_j$:
$$m_{j+1}/m_j = \xi_{j+1}/p_j\xi_j, \qquad (27)$$
with
$$\xi_j = \prod_{k=2}^{j-1} q_k + p_{j-1}\xi_{j-1}. \qquad (28)$$
Here $\xi_2 = 1$ and $j = 3$ to $N+1$. We have now derived that the ratio $m_{j+1}/m_j$ is a ratio of integers. So in principle, we can find an integer for every $m_j$. To find a set of integers for the $m_j$ we start at the most complex ratio: $m_{N+1}/m_N$. We take $m_{N+1} = \xi_{N+1}$ and $m_N = p_N\xi_N$. By back substituting, we find
$$m_j = \xi_j \prod_{k=j}^{N} p_k \qquad (29)$$
for $j = 2$ to $N$ and with $m_1 = 0$. It is straightforward to find the period. Since we have taken $m_1 = 0$ the period can be most easily expressed as $T = m_{N+1}$,
$$T = \prod_{k=2}^{N} q_k + p_N\xi_N. \qquad (30)$$
For consistency we also check that the equation at $j = 1$ is satisfied: $m_0 + m_2\lambda_1/\lambda_N = 0$. It is relatively easy to find that $m_0 = -\prod_{k=2}^{N} q_k$. The period calculated using $T = m_N - m_0$ also yields Eq. (30). This completes the proof that an inhomogeneous parallel array whose ratios of consecutive $\lambda$'s are rational numbers has a period in $M$.

This procedure, however, will not necessarily yield the minimum period. To calculate the minimum period we need to find the smallest $m_{N+1}$. For each ratio of $m_j$, we can make the numerator and denominator of $\xi_{j+1}/p_j\xi_j$ relatively prime by dividing by their common factors. We start with the last ratio $m_{N+1}/m_N = \xi_{N+1}/p_N\xi_N$. If we let $y = \gcd(\xi_{N+1}, p_N\xi_N)$ then $m_{N+1} = \xi_{N+1}/y$ and $m_N = p_N\xi_N/y$. However, we also need to be able to consistently change $m_N$. That is, the ratio $m_N/m_{N-1} = p_N\xi_N/p_N p_{N-1}\xi_{N-1}$ should still be valid. This implies that $y$ has to be a divisor of $m_{N-1}$ as well. By iterating, we see that $y$ has to be a divisor of all the $m_j$. Therefore, let $x = \gcd(m_{N+1}, m_N, \dots, m_2)$. The minimum integer period is then
$$T = \left(\prod_{k=2}^{N} q_k + p_N\xi_N\right)/x. \qquad (31)$$
As an example, let us consider the regular ring with $\lambda_j = \lambda$. Here $\xi_{N+1} = N$, $\gcd(N, N-1, N-2, \dots, 1) = 1$ and $T = N$ as expected from the homogeneous sine-Gordon equation. This explains the observation in Fig. 10 that $T = 9$. As another example, we consider the ring with alternating areas. Let $\lambda_j/\lambda_{j-1} = p/q$ for $j$ even and $\lambda_j/\lambda_{j-1} = q/p$ for $j$ odd. Then $\xi_3 = p+q$, $\xi_4 = 2pq + q^2$, and
$$\xi_N = \frac{N}{2}p^{N/2-1}q^{N/2-1} + \left(\frac{N}{2} - 1\right)p^{N/2-2}q^{N/2} \qquad (32)$$
for $N$ even.
Also $$\prod _{k=2}^{N}q_k=p^{N/2-1}q^{N/2}.$$ (33) Then, $`x=\mathrm{gcd}(m_{N+1},m_N,\mathrm{\dots },m_2)=p^{N/2-1}q^{N/2-1}`$ and $`T`$ $`=`$ $`(N/2)p+(N/2-1)q+q`$ (34) $`=`$ $`(p+q)N/2.`$ (35) We have recovered the same result derived in the main text.
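The recursion (28)–(31) is straightforward to evaluate numerically. The following short sketch (ours, not part of the original paper; the inputs are illustrative) computes the minimum period directly from the list of $`\lambda `$ values and reproduces both examples above.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def minimum_period(lams):
    """Minimum period T in the number of kinks M, following Eqs. (28)-(31).

    lams: the N cell parameters lambda_1 ... lambda_N (rational numbers);
    only the ratios of consecutive lambdas matter.
    """
    N = len(lams)
    p, q = {}, {}
    for j in range(2, N + 1):                 # lambda_j/lambda_{j-1} = p_j/q_j, coprime
        ratio = Fraction(lams[j - 1]) / Fraction(lams[j - 2])
        p[j], q[j] = ratio.numerator, ratio.denominator
    xi, qprod = {2: 1}, 1
    for j in range(3, N + 2):                 # Eq. (28)
        qprod *= q[j - 1]
        xi[j] = qprod + p[j - 1] * xi[j - 1]
    m, pprod = {N + 1: xi[N + 1]}, 1          # Eq. (29): m_j = xi_j * prod_{k=j}^N p_k
    for j in range(N, 1, -1):
        pprod *= p[j]
        m[j] = xi[j] * pprod
    x = reduce(gcd, m.values())               # largest common factor of all m_j
    return xi[N + 1] // x                     # Eq. (31)

# the two worked examples above, for a ring of N = 10 cells:
print(minimum_period([1] * 10))       # regular ring: T = N = 10
print(minimum_period([2, 3] * 5))     # alternating areas, p/q = 3/2: T = (p+q)N/2 = 25
```

Only the ratios of consecutive $`\lambda `$’s enter, through the coprime pairs $`(p_j,q_j)`$, exactly as stated above.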
no-problem/9911/quant-ph9911067.html
ar5iv
text
# Quantum theory of incompatible observations ## Abstract The maximum likelihood principle is shown to be the best measure for relating experimental data to the predictions of quantum theory. Quantum theory describes events on the most fundamental level currently available. The synthesis of information from mutually incompatible quantum measurements plays the key role in testing the structure of the theory. The purpose of this Letter is to show a unique relationship between quantum theory and the mathematical statistics used to obtain optimal information from incompatible observations: quantum theory prefers the relative entropy (maximum likelihood principle) as the proper measure for evaluating the distance between measured data and the probabilities defined by quantum theory. In the standard textbooks , a quantum measurement is represented by a hermitian operator $`\widehat{A}`$, whose spectrum determines the possible results of the measurement $$\widehat{A}|a\rangle =a|a\rangle .$$ (1) In the following, the Dirac notation will be used and, for the sake of simplicity, a discrete spectrum will be assumed. Eigenstates are orthogonal, $`\langle a|a^{}\rangle =\delta _{aa^{}}`$, and the corresponding projectors provide the closure relation $$\sum _a|a\rangle \langle a|=\widehat{1}.$$ (2) Projectors predict the probability for detecting a particular value of the q–variable $`a`$ represented by the operator $`\widehat{A}`$ as $`p_a=\langle a|\rho |a\rangle ,`$ provided that the system has been prepared in a quantum state $`\rho .`$ This mathematical picture corresponds to the experimental reality in the following sense: when the measurement represented by the operator $`\widehat{A}`$ is repeated N times on identical copies of the system, a particular output $`a`$ is collected $`N_a`$ times. The relative frequencies $`f_a=\frac{N_a}{N}`$ will sample the true probabilities, $`f_a\approx p_a`$, fluctuating around them. The exact values are reproduced only in the asymptotic limit $`N\to \mathrm{\infty }.`$ The experimentalist’s knowledge may be expressed in the form of a diagonal density matrix $$\widehat{\rho }_{est}=\sum _af_a|a\rangle \langle a|,$$ (3) provided that error bars of the order $`1/\sqrt{N}`$ are associated with the sampled relative frequencies. This should be understood as a mere rewriting of the experimental data $`\{N,N_a\}`$. Similar knowledge may be obtained by observations which can be parameterized by operators diagonal in the $`|a\rangle `$ basis, i.e. by operators commuting with the operator $`\widehat{A}.`$ But the measurement of non–commuting operators yields new information, which cannot be derived from the measurement of $`\widehat{A}`$. From this viewpoint it seems advantageous to consider the sequential synthesis of various non–commuting observables. In this case several operators $`\widehat{A}_j,`$ $`j=1,2,\mathrm{\dots }`$ will be measured by probing the system $`N`$ times in total. Now, one expects to gain more than just the knowledge of the diagonal elements of the density matrix in some a priori given basis. This sequential measurement of non–commuting observables should be distinguished from the similar problem of approximate simultaneous measurement of non–commuting observables . 
As in the case of the measurement of a hermitian operator, the result of sequential measurements of non–commuting operators may be represented by a series of projectors $`|y_i\rangle \langle y_i|.`$ This should be accompanied by relative frequencies $`f_i`$ indicating how many times a particular output $`i`$ has been registered, $`\sum _if_i=1.`$ The various states need not be orthogonal, $`\langle y_i|y_j\rangle \ne \delta _{ij}`$, in contrast to the previous case of a hermitian operator. This substantial difference has deep consequences. The result of the measurement cannot be meaningfully represented in the same manner as previously. For example, the direct linking of probabilities with relative frequencies used in standard reconstructions, $`\rho _{ii}=f_i,`$ $`\rho _{ii}=\langle y_i|\widehat{\rho }|y_i\rangle ,`$ may prove inconsistent, since the system of linear equations is overdetermined, in general. A novel approach will be suggested here. Let us assume the existence of a quantum measure $`F(\rho _{ii}|f_i)`$, which parameterizes the distance between the measured data and the probabilities predicted by quantum theory. We will search for the state(s) located in the closest neighborhood of the data. A general state may be parameterized in its diagonal basis as $$\widehat{\rho }=\sum _ir_i|\phi _i\rangle \langle \phi _i|.$$ (4) The equation for the extremal states may be found analogously to the treatment developed in for maximum likelihood estimation as $$\sum _i\frac{\partial F}{\partial \rho _{ii}}|y_i\rangle \langle y_i|\widehat{\rho }=\lambda \widehat{\rho },$$ (5) where $`\lambda `$ is a Lagrange multiplier. The normalization condition $`\mathrm{Tr}\widehat{\rho }=1`$ sets its value to $$\lambda =\sum _i\frac{\partial F}{\partial \rho _{ii}}\rho _{ii}.$$ Any composed function $`G(F(\rho _{ii}|f_i))`$ fulfills the same extremal equation (5) with the Lagrange multiplier rescaled as $`\lambda \to \lambda \frac{dG}{dF}.`$ Without loss of generality it is therefore enough to consider the normalization condition $`\lambda =1.`$ The extremal equation (5) has the form of a decomposition of the identity operator on the subspace where the density matrix is defined, $$\sum _i\frac{\partial F}{\partial \rho _{ii}}|y_i\rangle \langle y_i|=\widehat{1}_\rho .$$ (6) This resembles the definition of a positive operator-valued measure (POVM) characterizing a generalized measurement . To link the above extremalization with quantum theory, let us postulate the natural condition for the quantum expectation value $$\mathrm{Tr}\left(\frac{\partial F}{\partial \rho _{ii}}|y_i\rangle \langle y_i|\widehat{\rho }\right)=f_i.$$ (7) This assumption seems reasonable. The synthesis of sequential non–compatible observations may be regarded as a new measurement scheme, namely the measurement of the quantum state. The quantum measure $`F`$ then fulfills the differential equation $$\frac{\partial F}{\partial \rho _{ii}}\rho _{ii}=f_i$$ (8) and singles out the solution in the form $$F(\rho _{ii}|f_i)=\sum _if_i\mathrm{ln}\rho _{ii}.$$ (9) This is nothing else than the log likelihood, or Kullback–Leibler relative information . Formal requirements of quantum theory, namely the interpretation of the extremal equation as a POVM, result in the concept of maximum likelihood in mathematical statistics. The analogy between the standard quantum measurement associated with a single hermitian operator and a series of sequential measurements associated with many non–commuting operators is apparent now. The former determines the diagonal elements in the basis of orthonormal eigenvectors, whereas the latter estimates not only the diagonal elements, but the diagonalizing basis itself. 
This is the difference between the measurement of a quantum observable $`\widehat{A}`$ and the measurement of the quantum state. In this sense maximum likelihood estimation may be considered as a new quantum measurement. The observed quantum state is given by the solution of the nonlinear operator equation $$\sum _i\frac{f_i}{\rho _{ii}}|y_i\rangle \langle y_i|\widehat{\rho }=\widehat{\rho },$$ (10) which is, in fact, the completeness relation of a POVM with measured outputs $`\{f_i\}.`$ Special cases of the solution (10) have been discussed recently for the phase , the diagonal elements of the density matrix and the reconstruction of the spin-$`1/2`$ state . Quantum interpretation offers a new viewpoint on maximum likelihood estimation. This method is customarily considered as just one of many estimation methods, unfortunately one of the most complicated ones. It is often considered as a subjective method, since likelihood quantifies the degree of belief in a certain hypothesis. Any physicist, an experimentalist above all, would perhaps choose as the first option the least–squares method for fitting theory to data . Let us evaluate this as an illustrative counterexample. In this case $`F(\rho _{ii}|f_i)=\sum _i(\rho _{ii}-f_i)^2`$ and the extremal equation reads $`2\sum _i(\rho _{ii}-f_i)|y_i\rangle \langle y_i|\widehat{\rho }=\lambda \widehat{\rho },`$ (11) $`\lambda =2\sum _i(\rho _{ii}-f_i)\rho _{ii}.`$ (12) Equation (11) may again be interpreted as a completeness relation for the POVM $$\widehat{E}_i=2\frac{(\rho _{ii}-f_i)}{\lambda }|y_i\rangle \langle y_i|.$$ However, the expectation value is a rather complicated implicit function of the measured data, since $$\mathrm{Tr}(\widehat{\rho }\widehat{E}_i)=2\frac{(\rho _{ii}-f_i)\rho _{ii}}{\lambda }.$$ (13) This does not mean that the least–squares method is incorrect; it only means that such fitting does not reveal the structure of quantum measurement. In this sense maximum likelihood seems to be unique and exceptional. There are several fundamental consequences of this result. According to Fisher’s theorem , maximum likelihood estimation is unbiased and achieves the Cramér–Rao bound asymptotically for $`N\to \mathrm{\infty }.`$ As demonstrated here, for any finite $`N`$ maximum likelihood may be interpreted as a quantum measurement. When seen this way, bias and the noise above the Cramér–Rao bound seem to be unpleasant but natural properties of quantum systems. Maximum likelihood may set new bounds on distinguishability, currently related to the Fisher information . Fisher information corresponds to the Riemannian distinguishability metric and may be naturally interpreted as a distance in Hilbert space. Besides this, fundamental equations of quantum theory, such as the Schrödinger, Klein–Gordon and Pauli equations, as well as other physical laws, may be derived from the principle of minimum Fisher information . Since Fisher information only approximates the behaviour of the likelihood in the asymptotic limit, all these features seem to be involved in the maximum likelihood principle as well. However, the latter is obviously stronger since, as shown here, only maximum likelihood is able to reproduce the structure of quantum measurement for finite observations. 
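In practice, Eq. (10) can be solved by fixed-point iteration: writing $`\widehat{R}(\widehat{\rho })=\sum _i(f_i/\rho _{ii})|y_i\rangle \langle y_i|`$, one repeatedly applies $`\widehat{\rho }\to \widehat{R}\widehat{\rho }\widehat{R}`$ followed by renormalization. The sketch below is ours, not part of the Letter; the symmetrized iteration and the regularization of small probabilities are standard numerical choices.

```python
import numpy as np

def ml_estimate(kets, freqs, n_iter=300, eps=1e-12):
    """Fixed-point iteration rho -> R rho R (normalized) for Eq. (10).

    kets : complex array (n_outcomes, dim), the measured states |y_i>
    freqs: relative frequencies f_i, summing to one
    """
    dim = kets.shape[1]
    proj = np.einsum('ia,ib->iab', kets, kets.conj())      # |y_i><y_i|
    rho = np.eye(dim, dtype=complex) / dim                 # maximally mixed seed
    for _ in range(n_iter):
        p = np.einsum('iab,ba->i', proj, rho).real         # rho_ii = <y_i|rho|y_i>
        R = np.einsum('i,iab->ab', freqs / np.maximum(p, eps), proj)
        rho = R @ rho @ R
        rho /= np.trace(rho).real                          # keep Tr(rho) = 1
    return rho
```

At a fixed point, $`\widehat{R}`$ acts as the identity on the support of $`\widehat{\rho }`$, which is precisely the completeness relation discussed above.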
Notice also that maximum likelihood generalizes the notion of a POVM in the following sense. Actual measurements may be (and usually are) incomplete. However, the synthesis of any incomplete measurements, namely of the original observations represented here by $`|y_i\rangle \langle y_i|,`$ is complete in the subspace where the resolution of the identity (6) holds. The POVM and the estimated quantum state are mutually connected, depending on the type of measurement and on its results. In particular, it is not necessary to consider only special schemes for quantum state observation such as, for example, the mutually complementary eigenbases . We conclude with a remark that may shed light on why maximum likelihood is peculiar: maximum likelihood is perhaps singled out by Nature because the non-symmetric fluctuations of multinomially distributed data, which are the results of quantum measurements, are compensated, so to speak, by an equally non-symmetric attribution of degrees of belief to the various test states. Maximum likelihood takes into account that, in finite observations, improbable events tend to appear more frequently than they should and, conversely, very probable events tend to appear somewhat less often. This work was supported by TMR Network ERB FMRXCT 96-0057 “Perfect Crystal Neutron Optics” of the European Union and by the grants of the Czech Ministry of Education VS 96028 and CEZ J14/98.
no-problem/9911/gr-qc9911098.html
ar5iv
text
# Removing Instrumental Artifacts: Suspension Violin Modes ## I Motivation The wire suspensions of the interferometer test masses are a conduit for environmental noise to enter the gravitational wave data channel; additionally, they are a source of noise themselves. Both the pendulum mode (whose frequency is out of the interferometer’s band) and the suspension wire violin modes (whose fundamental mode frequencies are in-band) are energized by their contact with the thermal bath. Owing to their weak damping, this energy is strongly concentrated about the mode resonant frequencies. At the fundamental violin mode frequencies the thermal noise dominates the other noise sources by approximately 50 dB. In addition to this thermal noise component, non-thermal excitations of the suspension wires (e.g., sudden creep events) can lead to excitations in the interferometer output. These narrow band features are instrumental artifacts, not gravitational waves: along with other artifacts they should be removed from the data before it is studied for the presence of other signals. As instrumental artifacts, however, they carry important information about the instrument’s state. Thermal and technical noise that disturbs the suspension excites these modes and moves the mirrors, leading to an artifact in the gravitational wave channel. Gravitational waves give rise to a signal by changing the distance between the mirrors, but do not move or otherwise excite the suspension modes. Correspondingly, if we can determine the mode state as a function of time we have a way of eliminating a broad class of technical noise sources that might otherwise masquerade as gravitational wave bursts. How can we identify the state of the suspension violin modes, given only the gravitational wave channel? This is a classic problem in data analysis, generally addressed by a Kalman filter. We have developed such a filter for use in LIGO and shown, using data taken in November 1994 at the LIGO 40M prototype, that it enables us to follow the state of the violin modes independently of the other noise sources — both technical and fundamental — that contribute to the gravitational wave channel. ## II The Kalman Filter The Kalman filter is a mechanism for predicting the multi-dimensional state of a dynamical system from a multi-dimensional observable. The system is assumed to evolve linearly and the observable is assumed to be linearly related to the state. Denoting the system state $`𝐱`$, we have (for a discrete time series): $$𝐱[k]=𝐀𝐱[k-1]+𝐰[k-1].$$ (1) We assume that the system is driven by a stochastic force, referred to as the process noise and denoted $`𝐰`$. The state dynamics determine the linear operator $`𝐀`$. The state contributes to the observation $`𝐲`$, which also includes a stochastic, additive measurement noise $`𝐯`$: $$𝐲[k]=𝐂𝐱[k]+𝐯[k].$$ (2) In the classic Kalman filter the process and measurement noises are assumed to be Normal processes with known covariances $`𝐖`$ and $`𝐕`$.<sup>1</sup><sup>1</sup>1Even when $`𝐰`$ and $`𝐯`$ are not Normal, the Kalman filter estimates of the state $`𝐱[k]`$ can be shown to have the smallest mean-square error of all linear state estimators that depend only on the covariances. Now suppose that we have an estimate $`\widehat{𝐱}[k-1]`$ of the state, and also an estimate of the error covariance $`𝐏[k-1]`$ in the estimate, at sample $`k-1`$. 
The Kalman filter uses these estimates, the observation $`𝐲[k]`$ at sample $`k`$, and $`𝐀`$, $`𝐂`$, $`𝐖`$ and $`𝐕`$ to form an estimate of the state and its error covariance at sample $`k`$: $`\widehat{𝐱}[k]`$ $`:=`$ $`𝐊[k]\left(𝐲[k]-\widehat{𝐲}[k]\right)`$ (3) $`\widehat{𝐏}[k]`$ $`:=`$ $`\left(𝐈-𝐊[k]𝐂\right)\stackrel{~}{𝐏}[k]\left(𝐈-𝐊[k]𝐂\right)^T`$ (4) where $`\widehat{𝐲}[k]`$ $`:=`$ $`𝐂𝐀\widehat{𝐱}[k-1]`$ (5) $`𝐊[k]`$ $`:=`$ $`\stackrel{~}{𝐏}[k]𝐂^T/\left(𝐕+𝐂\stackrel{~}{𝐏}[k]𝐂^T\right)`$ (6) $`\stackrel{~}{𝐏}[k]`$ $`:=`$ $`𝐀\widehat{𝐏}[k-1]𝐀^T+𝐖.`$ (7) The estimated system state $`𝐱[k]`$ (e.g., the generalized coordinate and conjugate momentum of the violin normal mode of the wire) is thus completely determined by the observation $`𝐲[k]`$, the estimated state at sample $`k-1`$, the wire dynamics, and the statistical properties of the process and measurement noise. The error in the estimate $`𝐱[k]`$ falls with $`k`$, converging upon a limiting error covariance that is fully determined by $`𝐀`$, $`𝐂`$, $`𝐖`$ and $`𝐕`$; correspondingly, we can choose any initial estimate of $`𝐱`$ and $`𝐏`$ and the filter will, after several iterations, adjust the state estimate and error accordingly. From the state estimate at each sample we can, through the measurement equation, estimate the contribution of the system to the actual observation. This estimated contribution can be subtractively removed from the actual observation, leaving a residual that is as free from the contaminating influence of the process as we can make it. ## III Modeling the violin modes To describe a Kalman filter for our system we need a model for the wire state dynamics, the relationship between the wire state and the appearance of the mode in the detector data channel, and estimates of the process and measurement noise. The wire motion contributes significantly to the data channel only in a narrow band about the violin mode resonant frequency; correspondingly, we can focus attention on this narrow band and model the wire dynamics as a viscously damped harmonic oscillator driven by white noise<sup>2</sup><sup>2</sup>2Over the relevant, narrow band near the resonant peak there is no distinction between viscous and structural damping.: $$\ddot{\psi }=-\omega _0^2\psi -\gamma \dot{\psi }+N(t).$$ (8) We assume that the measurement is of the state variable $`\psi `$ plus white measurement noise. There are many violin modes, corresponding to the many wires that are used to suspend the interferometer mirrors: for each wire there are separate state variables and a separate equation describing the dynamics of that mode. Using standard lock-in techniques, we mix both $`\psi `$ and the data channel with a local oscillator whose frequency is near that of the violin modes, and band-limit the output of the lock-in to the narrow band over which our model is accurate. The in-phase and quadrature-phase components of the mixed-down $`\psi `$ become the state $`𝐱`$ used in the Kalman filter, and the in-phase and quadrature-phase components of the data channel become the observation $`𝐲`$. The matrices $`𝐀`$ and $`𝐂`$, describing the state dynamics and the relationship between the state $`𝐱`$ and the observation $`𝐲`$, are derived from our model in light of the lock-in, band-limiting, and discrete time sampling operations. 
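For concreteness, the predict-correct cycle behind Eqs. (3)–(7) can be transcribed in a few lines. The sketch below is ours (Python rather than the Matlab implementation mentioned in Sec. V); it uses the textbook form of the update, in which the gain-weighted innovation corrects the predicted state, and the numerically robust Joseph form of the covariance update.

```python
import numpy as np

def kalman_step(x_hat, P_hat, y, A, C, W, V):
    """One predict/correct cycle of the discrete Kalman filter, cf. Eqs. (3)-(7)."""
    # predict
    x_pred = A @ x_hat
    P_tilde = A @ P_hat @ A.T + W                                # Eq. (7)
    y_pred = C @ x_pred                                          # Eq. (5)
    # correct
    K = P_tilde @ C.T @ np.linalg.inv(V + C @ P_tilde @ C.T)     # Eq. (6)
    x_new = x_pred + K @ (y - y_pred)                            # gain-weighted innovation
    I = np.eye(P_tilde.shape[0])
    P_new = (I - K @ C) @ P_tilde @ (I - K @ C).T + K @ V @ K.T  # Joseph form
    return x_new, P_new
```

For the violin-mode model, $`𝐱`$ is the two-vector of in-phase and quadrature components of the mixed-down mode, and $`𝐀`$ is the corresponding discretized damped-oscillator propagator.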
## IV Results and Discussion ### IV.1 Removing the artifact To explore the effectiveness of the Kalman filter in identifying the contribution of the violin modes to the detector output, we have applied it to data taken in November 1994 at the LIGO 40M prototype detector. In this instrument the fundamental violin mode resonances are all in the (571.6, 605.425) Hz band. The upper and lower panels of figure 1 show the power spectra of the interferometer output in a 45 Hz band between 565.0 and 610.0 Hz before and after the subtractive removal of the Kalman filter estimate of the violin mode contribution to the detector output. The filter identifies the contribution of the mode to the detector output, allowing us to suppress this artifact by 40 dB. The residual bumps positioned in the wings of the removed lines are non-linear artifacts: the violin mode amplitudes are so large that they modulate the detector transfer function, up-converting other detector noise and frequency-modulating the violin mode signal itself. ### IV.2 Statistics of the artifact and residual Some simple exploratory statistics show the value of identifying and removing the known instrumental artifacts from the data stream. Figure 2 shows a histogram of the sample amplitude relative to the RMS sample amplitude for the data channel before (top) and after (middle) removal of the Kalman filter estimate of the violin mode contribution, and (bottom) for the estimated violin mode contribution itself. In an ideal world the measurement noise and the process noise are Normal; correspondingly, each of these distributions should be Rayleigh and appear, in these figures, as straight lines. Departures from a straight line thus imply non-Gaussian noise statistics. Comparing the three panels in figure 2 shows that the violin mode artifact contributes significantly to the non-Gaussian component of the noise in the detector data channel. Since gravitational waves do not excite the violin modes, this excess noise component is strictly technical. Generalizing to the full-scale detector, removing artifacts like these, together with their associated excess noise, from the data stream before analysis thus strengthens our ability to make significant statements regarding the detection of gravitational waves in the residual. ### IV.3 Violin modes vs. gravitational waves The Kalman filter estimates the violin mode contribution to the gravitational wave channel. If that estimate is influenced by the presence of a gravitational wave signal, then removing the estimated contribution may distort evidence of the wave in the output. The Kalman filter identifies the violin mode contribution through its dynamics. Since the evolution with time of expected gravitational wave signals is different from the dynamics of the mode contribution to the detector output, we expect that the Kalman estimates of the violin mode contribution will not be influenced by the presence of a gravitational wave signal. It is useful to consider two different kinds of sources: burst sources, such as inspiraling neutron star binary systems, and periodic sources, such as a pulsar at a frequency near to but not identical with the violin mode frequency. Nearly all the signal-to-noise ratio from an inspiraling binary is contributed when the signal is in the band from about 70 Hz to 250 Hz : what happens in the band near the violin mode is inconsequential. 
Similarly, the filter output is entirely unaffected by what happens in the band where the S/N is deposited. We have verified this by forming estimates of the violin mode state from the LIGO 40M prototype data, and from the same data set but with an added, simulated gravitational wave signal corresponding to a coalescing neutron star binary. Even for a very strong signal there is no difference in the predicted mode amplitudes when the signal is in the relevant band. In separate experiments we have looked at how well the Kalman filter rejects nearby monochromatic signals, such as might arise from a pulsar. As long as the signal is greater than a linewidth away from the violin mode, the estimated state is unaffected by the periodic signal; correspondingly, the periodic signal is unaffected by the subtractive removal of the estimated contribution of the mode to the output. ### IV.4 Monitoring the wire state The Kalman filter estimates separately the state of each violin mode; correspondingly, we can monitor the state of each of these wires separately. It is convenient to represent the state in terms of its amplitude and phase (as opposed to generalized coordinate and momentum); additionally, it is convenient to express the amplitude as an instantaneous measure of the energy in the mode, in terms of a temperature. Excess noise will raise the effective temperature of the mode, while the projection of the corresponding mirror motion normal to the optical axis will lower the effective temperature. ## V Summary Thermal and technical noise that disturbs the violin modes of the substrate suspension moves the mirrors, leading to a strong, narrow band artifact in the gravitational wave channel. Gravitational waves, on the other hand, also change the distance between the mirrors, but without moving or otherwise exciting these modes. The Kalman filter described here distinguishes between excitations due to gravitational waves and those due to thermal or other excitations of the violin modes, allowing us to eliminate a broad class of technical noise sources that might otherwise masquerade as gravitational wave bursts. A Kalman filter uses the known dynamics of the modes to distinguish between the mode “signal” and other contributions to the measured detector output: i.e., it detects the violin modes. This distinguishes it from other methods (e.g., multitaper methods, linear notch filters ) which purport to characterize or remove artifacts, but which in fact simply suppress all contributions to the noise within a narrow band, not distinguishing the violin mode from other contributions. The computational cost of identifying and removing the violin modes using the Kalman filter described here is negligible: an interpreted Matlab implementation on a low-end workstation runs at greater than 20$`\times `$ the detector’s real-time sample rate. With a compiled implementation and attention paid to optimization, an additional speed-up of 10 or more can be expected. We thank Albert Lazzarini for drawing our attention to Kalman Filtering and the LIGO Laboratory for its hospitality during the 1997/8 academic year and for the use of the 40 meter prototype data. SM thanks S. Mohanty for many valuable insights and LSF thanks P. Fritschel for valuable discussions. This work was supported by NSF awards PHY 98-00111 and 99-96213.
no-problem/9911/hep-ex9911006.html
ar5iv
text
# Measurement of the trailing edge of cosmic-ray track signals from a round-tube drift chamber ## 1 Introduction Tube drift chambers employing round tubes as the cathode electrode are frequently used as an important device for charged-particle tracking at large experiments, such as the Monitored Drift Tubes (MDT) in the muon detection system of the ATLAS experiment at the Large Hadron Collider (LHC) to be built at CERN. Because of their simple structure, tube drift chambers provide ease of construction and calibration. This is a great advantage for constructing a large detector system. Along with such an advantage, tube drift chambers have an undesirable property in applications to very high-rate experiments, such as those at LHC. They require relatively long time intervals to collect all meaningful signals. For example, this time interval (i.e., the maximum drift time) is expected to be about 500 ns in the case of ATLAS-MDT. This is appreciably longer than the planned beam-crossing interval of LHC (25 ns). Hence, the event data will be contaminated by garbage signals produced by particles from neighboring beam-crossings. In addition, since the environmental radiation tends to be severe at high-rate experiments, the data may suffer from continuous garbage produced by the radiation. These contaminations may deteriorate the track-reconstruction capability of the detector. The source of signals from drift chambers is ionization electrons produced by charged particles passing through the chamber gas volume. The produced electrons drift towards an anode wire placed at the center to induce electrical signals via an avalanche process around the wire. The first arriving electrons, produced near the closest approach to the anode wire, form the leading edge of the signals. The leading edge, therefore, provides track position information. On the other hand, if the response to single electrons is sufficiently narrow, the trailing edge of the signals is determined by those electrons produced near the tube wall. Thus, in the case of round-tube chambers, trailing edges appear at an approximately identical time with respect to the charged-particle passage, irrespective of the leading-edge time and the incident angle, as schematically shown in Fig. 1. This argument leads to the prospect that, if we can measure the trailing edge with a sufficient time resolution simultaneously with the leading edge, we may be able to distinguish the beam crossing relevant to each signal. In other words, we may be able to select only those signals relevant to an interesting beam crossing before applying reconstruction analyses . In order to investigate the feasibility of this idea, we carried out a cosmic-ray test using a small tube drift chamber. The leading and trailing-edge times of the signals were simultaneously measured using a multi-hit TDC module employing the Time Memory Cell (TMC) LSI . We discuss the observed properties by comparing the results with predictions from a Monte Carlo simulation. ## 2 Setup for the cosmic-ray test The tube chamber used for the test is made of a thin aluminum tube having an inner diameter of 15.4 mm, a wall thickness of 0.1 mm, and a length of 20 cm. A 3 mm-diameter window made on the tube wall near the center allows us to feed X rays into the tube. The window is covered with a thin aluminum foil and sealed with Kapton adhesive tape. A gold-plated tungsten wire of 30 $`\mu `$m in diameter is strung along the center axis of the tube. 
A positive high voltage was applied to the wire, with the tube wall grounded. A gas mixture of argon (50%) and ethane (50%) at atmospheric pressure was made to flow inside the tube. The signals from the tube chamber were amplified and discriminated using circuits made for the central drift chamber (CDC) of the VENUS experiment at KEK-TRISTAN, where a preamplifier board was attached to one end of the tube. The amplified signals were transferred to a discriminator board through a 30 m-long shielded twisted-pair cable. We added a pulse shaping circuit to the discriminator board, because no intentional shaping was applied in the original circuit. The timing of the discriminated signals was measured using a 32-ch TMC-VME module installed in a VME crate. This TDC module employs the TMC-TEG3 LSI , allowing us to measure both the leading and trailing edges simultaneously with a time resolution of 0.37 ns. The module also has an essentially unlimited multi-hit capability. The TDC module was operated in a common-stop mode. Stop signals synchronized with the passage of cosmic rays were formed by a coincidence between signals from two scintillation counters, 10 cm by 25 cm each, vertically sandwiching the tube chamber with a separation of 15 cm. The coverage of this counter telescope was sufficiently larger than the tube chamber, providing a uniform track distribution in the sampled data. As a trade-off, tube-chamber signals were observed in only about 20% of the data. The data taking was controlled by a board computer installed in the VME crate. The computer collected the digitized data and stored them in a local disk. After completing the data taking, the stored data were transferred to a workstation through a network and analyzed there. ## 3 Pulse shaping Before starting the test, we naively thought that appropriate pulse shaping would be necessary for trailing-edge measurements with a good time resolution, because of the presence of a long $`1/t`$ tail in drift chamber signals. The existence of a long tail, which may also be produced by the readout circuits, would enhance the time-walk effect due to the large fluctuation in the gas-amplification process. In order to eliminate the tail, we added a pulse shaping circuit between the receiver circuit and the comparator on the discriminator board. A diagram of the added circuit is shown in Fig. 2. The circuit is a double pole-zero filter, capable of converting two poles in the input signal into two new poles. The parameters of the circuit were determined by using signals produced by the 5.9-keV X rays from <sup>55</sup>Fe. The gas volume of the tube chamber was irradiated through the window on the tube wall, and the pulse shape at the discriminator input was investigated using a digital oscilloscope. First of all, the pulse shape was sampled without adding the shaping circuit, and the observed signal tail was approximated by two exponentials. The circuit parameters were calculated so that the two zeros of the circuit should cancel the two time constants (poles) of the approximating function. Another constraint that we applied was that the amplitude corresponding to one of the newly produced poles, the one having a longer time constant, should become zero. Ideally, a circuit thus determined should replace a long tail in the input signal with one relatively short exponential tail. However, since the approximation with two exponentials has some ambiguity, fine-tuning was necessary in order to achieve satisfactory performance. 
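The design logic can be illustrated numerically. In the sketch below (ours; the time constants and amplitudes are illustrative stand-ins, not the measured circuit values) the input tail is modeled by two exponentials, the filter zeros are placed on the corresponding poles, and two faster poles take their place.

```python
import numpy as np
from scipy import signal

# input tail modeled by two exponentials (illustrative values)
tau1, tau2, A1, A2 = 1.0e-6, 8.0e-6, 0.7, 0.3
t = np.linspace(0.0, 40e-6, 4000)
x = A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

# double pole-zero filter: zeros cancel the input poles, two faster poles replace them
tau_a, tau_b = 0.3e-6, 0.6e-6
zpk = ([-1 / tau1, -1 / tau2], [-1 / tau_a, -1 / tau_b], 1.0)  # unity high-frequency gain

_, y, _ = signal.lsim(zpk, U=x, T=t)

# the long tau2 tail is gone: compare amplitudes well after the peak
i = np.searchsorted(t, 5e-6)
print(x[i], y[i])   # y[i] is strongly suppressed relative to x[i]
```

In the real circuit of Fig. 2 the pole and zero positions are set by the R and C values, and the additional constraint that the slower of the two new poles carries zero amplitude fixes the remaining freedom.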
Looking at the resulting pulse shape, we adjusted the values of the two resistors (R1 and R2), with the capacitors (C1 and C2) and the other resistor (RL) fixed to the calculated values. The optimized parameter values are shown in Fig. 2. The pulse shapes measured before and after the shaping circuit are shown in Fig. 3. ## 4 Results Due to the large fluctuation in the ionization and avalanche processes, discriminated signals for cosmic-ray tracks are sometimes separated into several fragments. In the offline analysis, successive signals were merged and considered to be one signal if the time interval between them (the interval between the trailing edge of the preceding signal and the leading edge of the following signal) was shorter than 40 ns. Figure 4 shows the relation between the trailing-edge time and the leading-edge time of the recorded data. The data were obtained for an anode voltage of 2.0 kV and a discriminator threshold of 10 mV. The average pulse height for the <sup>55</sup>Fe X rays was 750 mV for this anode voltage. Since about 200 electrons are expected to be produced by this X ray, the threshold corresponds to about three times the average pulse height for one ionization electron. We can see that the trailing edge of the signals exhibits a nearly equal time, irrespective of the leading-edge time, as expected. We can also find other interesting and unexpected properties in this result: as the leading-edge time becomes larger, the trailing-edge time resolution becomes better and the average time shifts towards larger values. Such properties are expected to emerge as the result of a geometrical focusing effect; i.e., the ionization electron density in the time domain becomes higher as the track distance becomes larger. In a real situation with a finite pulse width, this leads to a larger pulse height for signals having larger leading-edge times, resulting in a better time resolution and a longer delay of the trailing edge. However, as is shown later, this effect is not enough to explain the observed swing-up behavior at large leading-edge times. Along with the dominant data points having a nearly equal trailing-edge time, we can see some data points distributed diagonally in the plot. Since the pulse-merging process is applied, they are not fragments of wider signals, but are isolated narrow signals. These data points gradually vanish as the threshold voltage is raised. They are apparently synchronized with the scintillator trigger. In addition, the frequency of these signals increases as the leading-edge time becomes larger. This suggests a uniform occurrence of the causal process over the chamber gas volume. These facts indicate that these signals were produced by soft X rays coming in association with the triggering cosmic-ray muons. Therefore, they must have nothing to do with the chamber performance that we are interested in here. Projections of the scatter plot are shown in Fig. 5, where the data points corresponding to the narrow signals described above are excluded. A flat distribution of the leading-edge time confirms a uniform distribution of the cosmic-ray tracks. The trailing-edge data are concentrated around about 40 ns after the maximum leading-edge time (the maximum drift time in the ordinary definition). From a straightforward numerical evaluation, we obtained an rms resolution of 12 ns for the trailing-edge measurement. The measurement was repeated by varying the threshold voltage. Figure 6 shows the obtained rms resolution as a function of the threshold. 
We can observe that the improvement of the resolution by lowering the threshold is not significant. The improvement is limited by the swing-up behavior seen in Fig. 4. ## 5 Comparison with a simulation We developed a simple Monte Carlo simulation, aiming at understanding the observed properties. In the simulation, a muon track passing through the chamber gas volume leaves ionization electron clusters along its path, according to a Poisson distribution with an average frequency of 3.0 clusters/mm. The number of electrons composing each cluster is subject to a Poisson distribution with an average of 3.0. The drift time of each electron is determined by the distance from the anode wire, assuming a constant drift velocity. The diffusion during the drift is not taken into account, since it is expected to be negligible compared to the measured time resolution. The signal shape is simulated by convolving the drift time distribution with a single-electron response, determined from the average pulse shape for <sup>55</sup>Fe X rays. In the convolution, the pulse height for each electron is varied in order to simulate the gas-gain fluctuation. A Gaussian distribution was assumed for the fluctuation in the first version of the simulation. The leading and trailing-edge times are then determined by applying a threshold to the simulated signal shape. A scatter plot obtained from the simulation, which should be compared with Fig. 4, is shown in Fig. 7, where the gain fluctuation was assumed to be 30% in the standard deviation. We can see in this result an improvement of the resolution and a shift of the average at larger leading-edge times. However, these variations are apparently less significant than those observed in the measurement result. Namely, the geometrical focusing effect, which is automatically included in the simulation, is not enough to reproduce the observation. Aiming at a better reproduction, we applied several modifications to the simulation. The Polya distribution was applied to make the gas-gain fluctuation more realistic. The threshold was made to fluctuate according to a Gaussian distribution in order to simulate a noise contribution. The diffusion of drift electrons was taken into account. Although these modifications could smear the overall time resolution, they could never enhance the swing-up behavior. As a result of further studies, we found that a small overshoot in the shaped signal may produce an appreciable swing-up structure in the leading-trailing relation. The signal shape that we originally used in the simulation does not have any overshoot, because no significant overshoot was observed in the <sup>55</sup>Fe X-ray response. However, any signal-shape measurement more or less affects the circuit properties, and there may be a small, but finite, overshoot in the real situation. Figure 8 shows the result of a modified simulation in which the one-electron response is assumed to have an overshoot amounting to 5% of the peak pulse height. The overshoot is assumed to produce a baseline shift after the signal; i.e., the overshoot is assumed to have a very long time constant. The modifications concerning the signal fluctuation mentioned above are all applied in this simulation. The Polya distribution with a 100% fluctuation is used for the gain fluctuation. The standard deviation of the threshold fluctuation is equal to the average pulse height for one ionization electron. 
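A stripped-down version of this modified simulation is sketched below (ours). It keeps the ingredients described in this section: Poisson-distributed clusters at 3.0/mm, Poisson cluster sizes with mean 3.0, an exponential (Polya-like, 100% rms) gain per electron, a single-electron response with a 5% long-lived overshoot, and a fixed threshold. The drift velocity and the response time constants are illustrative assumptions, not the measured ones.

```python
import numpy as np

rng = np.random.default_rng(0)
R, V_DRIFT = 7.7, 0.05            # tube inner radius [mm], drift velocity [mm/ns] (assumed)
t = np.arange(0.0, 500.0, 1.0)    # time grid [ns]
# single-electron response: fast pulse plus ~5% long-lived overshoot (baseline shift)
resp = np.exp(-t / 10.0) - 0.05 * np.exp(-t / 2000.0)

def edge_times(b, threshold):
    """Leading/trailing edge times for a track at impact parameter b [mm], or None."""
    half = np.sqrt(R**2 - b**2)                                   # half chord length
    s = rng.uniform(-half, half, rng.poisson(2.0 * half * 3.0))   # 3.0 clusters/mm
    pulse = np.zeros_like(t)
    for t0, n_e in zip(np.hypot(b, s) / V_DRIFT, rng.poisson(3.0, s.size)):
        gain = rng.gamma(1.0, 1.0, n_e).sum()   # exponential gain: 100% rms per electron
        pulse += gain * np.interp(t - t0, t, resp, left=0.0, right=0.0)
    hit = np.flatnonzero(pulse > threshold)
    return (t[hit[0]], t[hit[-1]]) if hit.size else None

# threshold ~ 3x the average one-electron pulse height, as in the measurement
edges = [e for e in (edge_times(rng.uniform(0, R), 3.0) for _ in range(2000)) if e]
lead, trail = np.array(edges).T
```

A scatter of `trail` versus `lead` can then be compared with Figs. 7 and 8; removing the overshoot term from `resp` should remove most of the swing-up.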
The swing-up behavior that we can see in Fig. 8 looks quite similar to that which we observed in the measurement. Projections of the plot are shown in Fig. 9. The trailing-edge time distribution has an rms resolution of 8.7 ns. This is still significantly smaller than that of the measurement result, suggesting that a finer structure of the signal tail, or certain phenomena not taken into account in the simulation (e.g., $`\delta `$-ray emission from the tube wall ), may play a role. Note that non-proportional effects, such as the space-charge effect, cannot be significant, because a measurement with the anode HV lowered to 1.9 kV, for which the gas gain is reduced to about one half, gives a comparable result, as shown in Fig. 6. ## 6 Conclusions A cosmic-ray test was carried out to investigate the feasibility of an idea for filtering tube drift-chamber signals at high-rate experiments, based on a trailing-edge time measurement. A small tube chamber filled with a popular chamber gas, Argon/Ethane (50/50), at atmospheric pressure was used for the test. The leading and trailing-edge times of the signals for cosmic-ray tracks were measured simultaneously, using a TDC module employing the TMC LSI. Applying a simple pole-zero filter circuit for signal shaping, we achieved a trailing-edge time resolution of 12 ns in rms with a realistic, or rather moderate, setting of the discriminator threshold. Measurements with a resolution of this level will be very useful at future high-rate experiments. In the case of LHC, such a measurement will allow bunch-crossing identification with a tolerance of two or three crossings without any significant loss of signals. In the measurement data, we observed an unexpected correlation between the leading and trailing edges at large leading-edge times. This correlation limited the achieved trailing-edge time resolution. From a simulation study, we found that a small overshoot in the signal tail can produce a correlation quite similar to that which we observed. If this is truly the reason, the resolution may be further improved with better signal shaping. Meanwhile, in the case of long tube chambers, this effect may seriously limit the achievable resolution, since the signal shape varies according to the signal transmission length. Further studies are necessary to confirm these arguments. The authors wish to thank Yasuo Arai and Masahiro Ikeno for their help in preparing and maintaining the data-acquisition system. Nobuhiro Sato and Takeo Konno are acknowledged for their contribution in preparing the test setup.
no-problem/9911/cond-mat9911298.html
ar5iv
text
# Spin-Peierls transition in NaV<sub>2</sub>O<sub>5</sub> in high magnetic fields ## Abstract We investigate the magnetic field dependence of the spin-Peierls transition in NaV<sub>2</sub>O<sub>5</sub> in the field range 16T-30T. The transition temperature exhibits a very weak variation with the field, suggesting a novel mechanism for the formation of the spin-Peierls state. We argue that a charge ordering transition accompanied by singlet formation is consistent with our observations. PACS number(s): 04.70.Dy, 04.20.Gz, 04.62.+v. The Peierls instability takes place in one-dimensional systems and can give rise to complex and fascinating behavior. In itinerant electronic systems the instability is driven by the coupling of electrons to the phonons of the lattice . Any coupling at $`T=0`$ leads to the formation of the Peierls state, which is characterized by charge ordering (a gap in the electronic spectrum) and a finite lattice distortion. Similar phenomena occur in purely insulating spin systems, where the spin-phonon coupling is responsible for the formation of a singlet ground state with neighboring spins pairwise bound into singlets . The spin-Peierls ground state shows a characteristic gap in the excitation spectrum and has been observed in a variety of organic compounds, such as (TTF)[CuS<sub>4</sub>C<sub>4</sub>(CF<sub>3</sub>)<sub>4</sub>] . In 1993 the first inorganic spin-Peierls compound, CuGeO<sub>3</sub>, was discovered, with T<sub>c</sub> $`\approx 14`$K . This material, like its organic predecessors, shows a characteristic 1d Heisenberg (Bonner-Fisher)-like magnetization at high T with a sharp drop at T<sub>c</sub>, indicating a non-magnetic ground state. Very recently, a second inorganic compound, NaV<sub>2</sub>O<sub>5</sub>, was shown to behave as a spin-Peierls material with T<sub>c</sub> $`\approx 34`$K . The properties of NaV<sub>2</sub>O<sub>5</sub>, however, have proven to be quite controversial, thus stimulating the research reported in this letter. Magnetic susceptibility measurements of NaV<sub>2</sub>O<sub>5</sub> indicate a transition to a non-magnetic phase at T<sub>c</sub> . This can be understood within the framework of a spin-phonon coupling driven transition on a Heisenberg chain . The antiferromagnetic exchange, J, was estimated to be J$`\approx `$560K. The low-temperature structure, which is assumed in this interpretation of the data, is that of magnetic chains formed by the spin-1/2 V<sup>4+</sup> ions along the crystalline b-axis, separated by spinless V<sup>5+</sup> chains. This scenario implies a lattice distortion in one direction only. However, recent experiments have shown that the above picture is not satisfactory. X-ray diffraction measurements indicated that the system should be viewed as a quarter-filled ladder made of V<sup>4.5+</sup> chains , meaning that a spin of 1/2 is not attached to a single V ion, but rather to a rung of the ladder, i.e. a V-O-V orbital. Subsequent NMR analysis revealed that below T<sub>c</sub> two inequivalent types of V sites, V<sup>4+</sup> and V<sup>5+</sup>, appear, suggesting that charge ordering occurs in the spin-Peierls phase. Charge disproportionation leaves room for period doubling in more than one crystallographic direction, consistent with additional X-ray and NMR studies. 
These works suggest that the lattice distortion takes place in the (a,b) plane (where b is the direction along the chains and a is perpendicular to the chains). A number of theoretical studies have addressed the possibility of charge ordering in 1/4-filled systems, where both electron-lattice and electron-electron interactions are included. The most probable scenario at present seems to be the “zig-zag” order proposed in Ref. , where the charge density (i.e., the sites V<sup>4.5±δ</sup> with deviation $`\delta `$ from the average valence) is distributed in a zig-zag fashion along the ladder direction. As emphasized in Ref. , the Coulomb repulsion in combination with the electron-lattice interaction can drive such a transition, while the formation of a spin singlet ground state “follows” the charge order. Charge modulation is consistent with the analysis of the observed magnetic excitation spectra , Raman spectra , as well as the anomalies in the thermal conductivity and the dielectric constant at T<sub>c</sub>. The present work attempts to gain further insight into the nature of the spin-Peierls transition in NaV<sub>2</sub>O<sub>5</sub> by addressing the magnetic field dependence of the transition temperature in very high fields. Previous studies in fields up to 5.5T have found behavior consistent with the theoretical predictions and similar to the previously known spin-Peierls compounds . However, subsequent measurements in higher fields, up to 14T and 16T , have found a much weaker field dependence. These experiments were based on a determination of T<sub>c</sub> from the changes in the elastic constants and the specific heat , unlike the measurement in Ref. , which determined T<sub>c</sub> from the drop in the magnetization. In this work we have measured the magnetization of two NaV<sub>2</sub>O<sub>5</sub> single crystals in magnetic fields from 16T to 30T. The crystals were grown by high temperature solution growth from a vanadate mixture flux. The masses of the samples investigated were 1.9 and 3.1 mg, respectively, and they had irregular parallelepiped shapes with smooth, faceted faces. The single crystals were characterized with an Enraf-Nonius CAD4 single crystal diffractometer using Mo radiation. The results of the structure refinement were the same as reported earlier in Ref. . Magnetization was measured using a standard metal foil cantilever beam magnetometer. The “T” shaped flexible cantilever beam was made from a 7.62 $`\mu `$m thick heat treated MP35N alloy. The dimensions of the “T” were approximately 8 mm on a side. The gap between the “T” and the parallel fixed reference electrode was approximately 800 $`\mu `$m. The sample was mounted using a small amount of vacuum grease. In the presence of a DC magnetic field the interaction of the magnetic moment of the sample with the field results in a force and/or torque, deflecting the beam and changing the capacitance between the electrodes. A capacitance bridge was used to monitor the changes in force (magnetization) for temperature sweeps in fixed field. Since MP35N is magnetic (typically 13.5 $`\mu `$emu/g at 78 K), the same bare cantilever was measured under identical conditions (sweep direction and sweep rate) as the cantilever+sample combination to provide a background reference. The temperature dependence of the cantilever capacitance was compensated for in the same way. A room temperature measurement of the cantilever’s sensitivity showed that a force of 3 nN could be resolved. 
Cantilever displacement can arise from either a torque or a force on a sample with a magnetic moment. When the sample is at field center, where the field gradient is zero, then torque ($`𝐦\times 𝐁`$) will dominate. Strictly speaking, if the sample is isotropic and there are no shape factors, then there is no torque on the cantilever for fields applied along the direction of displacement (perpendicular to the sample). On the other hand, when the sample is raised (or lowered) away from field center, the force term ($`F=mdB/dz\propto \chi BdB/dz`$) will usually dominate, although torques can still be present. Figure 1 shows temperature sweeps taken for the 1.9 mg sample at the three fields indicated in the legend. The cantilever was located in a position where the field gradient was maximum. The maximum field at this position (24T) is 80$`\%`$ of the field center maximum (30T). The change in capacitance, $`\mathrm{\Delta }C`$, which is proportional to the change in magnetization, is calculated at each field by subtracting the background trace (cantilever alone) from the sample trace (sample+cantilever). This quantity is divided by B<sup>2</sup> and plotted as the ordinate in Fig. 1. As seen from the figure, the data scale reasonably well for the three different fields, confirming the B<sup>2</sup> dependence expected from both torque and force contributions. To accentuate the small shift in the transition temperature, we plot in the inset the derivative of $`\mathrm{\Delta }C/B^2`$ with respect to the temperature. From the position of the peaks we can determine the field-dependent transition temperature. To measure $`T_c`$ at the maximum field of 30T, the sample was placed at field center (sensitive to the torque only) and the data collected and analyzed as described above. A similar scaling with B<sup>2</sup> was observed. Figure 2 shows the derivative of $`\mathrm{\Delta }C`$ with respect to temperature at field center. Plotted in this way, the shift in $`T_c`$ can be clearly seen. Data similar to those plotted in Figs. 1 and 2 were obtained for a second sample with mass 3.1 mg, and for reversed fields. In all cases the shifts in $`T_c`$ were equal to or less than the shifts shown in Fig. 2. In Fig. 2 (inset) we also show the results of magnetization measurements at low fields using a commercial SQUID magnetometer (MPMS7). The singlet formation at $`T_c`$ is clearly observable, but no shift of $`T_c`$ can be observed within measurement accuracy in fields up to 5T, in agreement with previous work . In Fig. 3 we present our high field data for the variation of $`T_c`$ in terms of $`\mathrm{\Delta }T_c/T_c(0)=T_c(H)/T_c(0)-1`$ and the square of the scaled magnetic field $`h=g\mu _BH/2kT_c(0)`$ . This scaling is expected in spin-Peierls systems, and for small fields $`h\ll 1`$, the relative variation of $`T_c`$ should be quadratic : $$\mathrm{\Delta }T_c/T_c(0)=-\alpha h^2.$$ (1) The data of Fig. 2 follow this dependence quite well, and we estimate $`\alpha _{exp}\approx 0.072(8)`$. The value $`T_c(0)`$ was not measured directly but was estimated, from an extrapolation to zero field of the quadratic dependence of $`T_c`$ vs. $`H`$, to be $`T_c(0)=34.2`$K. This value is close to published values and to the $`T_c(0)`$ measured by us using SQUID magnetometer measurements of the magnetization of a 40mg polycrystalline sample. The combination of our high field data and the lower field data of previous measurements gives the variation of $`T_c`$ over a large range of magnetic field and shows a very weak dependence. 
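The reduction of the $`T_c(H)`$ data to $`\alpha `$ is compactly expressed as a one-parameter least-squares fit of Eq. (1). The sketch below is ours; the arrays are placeholders for the measured points, and the single rounded point quoted below is used only to illustrate the call.

```python
import numpy as np

def fit_alpha(H, Tc, Tc0=34.2, g=2.0):
    """One-parameter fit of Eq. (1): Tc(H)/Tc(0) - 1 = -alpha h^2."""
    muB_over_kB = 0.6717                        # Bohr magneton over k_B [K/T]
    h = g * muB_over_kB * H / (2.0 * Tc0)       # scaled field h = g mu_B H / 2 k Tc(0)
    dTc = Tc / Tc0 - 1.0
    return -np.sum(h**2 * dTc) / np.sum(h**4)   # least squares through the origin

# the rounded 30 T point quoted below, Tc(30 T)/Tc(0) = 0.97:
print(fit_alpha(np.array([30.0]), np.array([0.97 * 34.2])))   # ~0.09, cf. 0.072(8)
```

With g=2 this scaling gives h=0.59 at 30T, the value quoted below; the single rounded point yields an $`\alpha `$ of the right magnitude, while the quoted 0.072(8) comes from the full data set.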
In contrast, the “conventional” inorganic spin-Peierls compound CuGeO<sub>3</sub> exhibits a much stronger field dependence, with $`\alpha =0.39`$ , in good agreement with the theory. The theoretical values of $`\alpha _{SP}`$ predicted for the spin-Peierls transition are $`\alpha _{SP}=0.44`$ or $`0.36`$, depending on the way interaction effects are taken into account . The first, larger number corresponds to the Hartree approximation for the interactions between the Jordan-Wigner fermions representing the localized spins . The value $`0.36`$ is obtained by an exact treatment of the correlation effects , which is possible in the Luttinger liquid framework in one dimension . In both cases the characteristic scaling $`H/T_c(0)`$ which appears in Eq. (1) is due to the commensurate nature of the dimerized phase. For large fields, corresponding to a reduction of $`T_c`$ by a factor of $`T_c/T_c(0)=0.77`$, a transition into an incommensurate phase is expected to take place . Such a transition is less sensitive to magnetic field and has been observed in a variety of spin-Peierls materials . In NaV<sub>2</sub>O<sub>5</sub>, however, a transition into such a modulated phase does not seem to take place, since even in the highest field (30T), $`T_c(30T)/T_c(0)=0.97`$, which is very far from the expected incommensurate boundary. Notice that even in a field as high as $`30T`$ the scaled ratio $`h=0.59`$ is quite small, due to the large $`T_c(0)`$. We now discuss the possible sources of the difference between the measured value $`\alpha _{exp}`$ and the theoretically predicted one, $`\alpha _{SP}=0.36\approx 5\alpha _{exp}`$, for spin-Peierls systems. In addition to this discrepancy, any theory of NaV<sub>2</sub>O<sub>5</sub> should also be able to explain the large value of the ratio $`2\mathrm{\Delta }/T_c(0)\approx 6`$ ($`\mathrm{\Delta }\approx 100K`$ being the spin gap), where a mean-field value of 3.52 might be expected. As discussed in the introduction, a transition into a charge ordered state in a 1/4-filled system is consistent with a number of recent experiments. Although it is not clear whether the charge density wave (CDW) precedes or forms simultaneously with the magnetically dimerized spin-Peierls state, it seems certain that the physics of charge ordering must be taken into account. Recent numerical work has shown that CDW and spin-Peierls order can co-exist in quasi-one-dimensional 1/4-filled electronic systems. If we assume that the CDW formation is the driving force behind the opening of a spin gap, as argued in Ref. , then the “charge” part of the transition will be mainly responsible for the $`T_c(H)`$ dependence. In a system of non-interacting electrons undergoing a Peierls transition into a (commensurate) CDW state, the decrease of $`T_c`$ for small magnetic field (coupled to the electron spin via a Zeeman term) is also described by Eq. (1), but with $`\alpha _{CDW}=0.21`$ . Two effects, orbital coupling and electron-electron interactions, could further modify this result. Orbital effects are known to be present when nesting is imperfect, and generally compete with the Pauli terms, producing a flatter dependence of $`T_c`$ on $`H`$, i.e. a further reduction of $`\alpha _{CDW}`$ . However, spin-orbit interactions lead to an anisotropic variation of $`T_c`$ with respect to the magnetic field direction. 
In NaV<sub>2</sub>O<sub>5</sub> this variation has been found to be extremely weak , which is also confirmed in this work, and consequently the orbital effects can be ruled out as a source of the weak $`T_c(H)`$ dependence. On the other hand, electron-electron interaction effects do not reflect anisotropies and are important in the formation and stabilization of a CDW state . In general, the stability of the CDW depends on the strength of the electron-phonon coupling (which drives the transition) and on the on-site and nearest-neighbor Coulomb correlations . To demonstrate this latter point concerning strong correlation effects, we consider the simplified model of a Hubbard chain with an on-site repulsion $`U`$. We treat the phonons adiabatically, as in Ref. , but take into account the electron-electron interaction following Ref. , i.e. we calculate the polarization bubble exactly for the Luttinger liquid. In this case it is known that $`T_c(H=0)`$ increases with respect to its value at $`U=0`$ . For finite magnetic field we find, at $`U\approx 2t`$ (where $`t`$ is the bandwidth), that the coefficient $`\alpha `$ drops to $`\alpha _{CDW}\approx 0.15`$, i.e. below the non-interacting value of 0.21. This is not surprising and is in fact quite similar to the difference between the mean-field and the exact treatment in the spin-Peierls case ($`\alpha _{SP}=0.44`$ and $`0.36`$, respectively). The essence of the effect is in the different type of divergence of the polarization bubble with and without interactions. While in the free case the polarization diverges logarithmically at small frequencies, in a Luttinger liquid a stronger, power-law dependence sets in , and the Peierls instability is effectively enhanced. Thus the interaction effects, being naturally more important for the CDW formation (compared to the spin-Peierls case), can produce a weaker $`T_c(H)`$ dependence. A more realistic calculation based on a Hamiltonian appropriate for NaV<sub>2</sub>O<sub>5</sub> would be very desirable. The orbital and interaction effects discussed above are, strictly speaking, valid only for an isolated chain. It was assumed that inter-chain interactions are sufficiently strong to suppress the fluctuation effects typically important in one-dimensional systems . The fluctuations are known to reduce $`T_c(0)`$ below the mean-field value and to cause a specific heat jump at the transition, $`\mathrm{\Delta }c_P`$, several times the mean-field one. The large observed ratio $`\mathrm{\Delta }/T_c(0)`$ (twice the mean-field value), in combination with a $`\mathrm{\Delta }c_P`$ about ten times the mean-field value, suggests that fluctuations could indeed be important in NaV<sub>2</sub>O<sub>5</sub>. At the same time one should bear in mind that, due to the specific structure of NaV<sub>2</sub>O<sub>5</sub>, transverse interchain interactions are expected to play a crucial role in the stabilization of the ordered phase, in particular in the formation of the spin gap and the doubling of the period in the (a,b) plane . The vanadium displacements are nearly absent along the ladder direction (b-axis), and largest perpendicular to the ladder direction, along both the a- and c-axes . Thus it is not clear whether fluctuation effects necessarily have to be invoked to explain the large $`\mathrm{\Delta }/T_c(0)`$ ratio in this material, as is traditionally done, or whether the large $`\mathrm{\Delta }/T_c(0)`$ ratio is intimately related to the anomalously weak variation of $`T_c`$ with field reported in this work. We are grateful to A. 
Dorsey, S. Hershfield, D. Maslov, C. Biagini and S. Arnason for many stimulating discussions, and M. Meisel for critical reading of the manuscript. We are also appreciative of experimental advice given by V. Shvarts and by the Users Support Group at the National High Magnetic Field Laboratory (NHMFL). G.M. acknowledges the support of the Netherlands Foundation for Fundamental Research on Matter with financial aid from NWO. A.F.H. and S.G.B. were supported by the NSF funded In-House Research Program of the NHMFL in Tallahassee and V.N.K. was supported by NSF Grant DMR9357474.
# Friedmann Equation and Stability of Inflationary Higher Derivative Gravity

## I Introduction

Inflationary theory provides an appealing resolution for the flatness, monopole, and horizon problems of our present universe described by the standard big bang cosmology . It is known that our universe is homogeneous and isotropic to a high degree of precision . Such a universe can be described by the well known Friedmann-Robertson-Walker (FRW) metric . There are only three classes of FRW spaces characterized by their topological structure: the universe is either closed, open or flat, to be determined by observations at large scales. It is also known that gravitational physics should be different from the standard Einstein models near the Planck scale . For example, quantum gravity or string corrections could lead to interesting cosmological consequences . Moreover, some investigations have addressed the possibility of deriving inflation from higher order gravitational corrections . A general analysis of the stability condition for a variety of pure higher derivative gravity theories is very useful in many respects. It was shown that a stability condition should hold for any potential candidate of inflationary universe in the flat FRW space . We will first briefly review the approach of reference based on a redundant field equation. The proof will be shown to be incomplete. We will also show how to complete the proof with the help of the Bianchi identity for some models where the redundant equation can be recast in a form similar to the Bianchi identity in a FRW background. In addition, the derivation of the Einstein equations in the presence of higher derivative couplings is known to be very complicated. The presence of a scalar field in induced gravity models and dilaton-gravity models makes the field equations even more difficult to derive. We have developed a simpler derivation by imposing the FRW symmetry before varying the action, while keeping the proper time lapse function . We try to generalize the work in in order to obtain a general formula for the non-redundant Friedmann equation. It can be applied to provide an alternative and simplified method to prove the validity of the stability conditions in pure gravity theories. In fact, this general formula for the Friedmann equation is very useful in many areas of interest.

## II Friedmann Equation and Stability of Pure Gravity Theories

The generalized Friedmann-Robertson-Walker (GFRW) metric can be read off directly from the following equation: $$ds^2\equiv g_{\mu \nu }^{\mathrm{GFRW}}dx^\mu dx^\nu =-b(t)^2dt^2+a^2(t)\left(\frac{dr^2}{1-kr^2}+r^2d\mathrm{\Omega }\right).$$ (1) Here $`d\mathrm{\Omega }`$ is the solid angle $`d\mathrm{\Omega }=d\theta ^2+\mathrm{sin}^2\theta d\chi ^2,`$ and $`k=0,\pm 1`$ stand for a flat, closed or open universe respectively. Note also that the FRW metric can be obtained from the GFRW metric by setting the lapse function $`b(t)`$ equal to one, i.e. $`b=1`$, in equation (1). One can list all non-vanishing components of the curvature tensor as $`R_{tj}^{ti}`$ $`=`$ $`{\displaystyle \frac{1}{2}}[H\dot{B}+2B(\dot{H}+H^2)]\delta _j^i,`$ (2) $`R_{kl}^{ij}`$ $`=`$ $`(H^2B+{\displaystyle \frac{k}{a^2}})C_{kl}^{ij}.`$ (3) Here $`C_{kl}^{ij}\equiv ϵ^{ijm}ϵ_{mkl}`$ with $`ϵ^{ijk}`$ denoting the three space Levi-Civita tensor . Here $`\dot{}`$ denotes differentiation with respect to $`t`$ and $`H=\dot{a}/a`$ is the Hubble constant. We have written $`B\equiv 1/b^2`$ for later convenience.
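As a quick consistency check (ours, using the conventions just stated), contracting the components (2)-(3) gives the scalar curvature,

$$R=2R_{ti}^{ti}+R_{ij}^{ij}=3\left[H\dot{B}+2B(\dot{H}+H^2)\right]+6\left(H^2B+\frac{k}{a^2}\right)\stackrel{B=1}{\longrightarrow }6\left(\dot{H}+2H^2+\frac{k}{a^2}\right),$$

which, after dropping a total time derivative in $`\int dt\,a^3R`$, reproduces the effective Lagrangian $`L=6[k/a^2-H^2]`$ used as an example later in this section.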
Given a pure gravity model one can cast the action of the system as $`S=\int d^4x\sqrt{g}\mathcal{L}=N\int dt\,a^3L(H,\dot{H},k/a^2)`$ in the FRW spaces. Here $`N`$ is a time independent integration constant. If we take $`L`$ as an effective Lagrangian, one can show that the variation with respect to $`a`$ gives $`3L-H{\displaystyle \frac{\delta L}{\delta H}}+(H^2-\dot{H}){\displaystyle \frac{\delta L}{\delta \dot{H}}}`$ $`=`$ $`(2H+{\displaystyle \frac{d}{dt}})[-(4H+{\displaystyle \frac{d}{dt}}){\displaystyle \frac{\delta L}{\delta \dot{H}}}+{\displaystyle \frac{\delta L}{\delta H}}]+2k{\displaystyle \frac{\delta L}{\delta k}}.`$ (4) Note that $`a^3L`$ is normally referred to as the effective Lagrangian. We will also call $`L`$ the effective Lagrangian unless confusion occurs. The above equation is the space-like $`ij`$ component of the Einstein equation $$G_{\mu \nu }=t_{\mu \nu }$$ (5) with $`t_{\mu \nu }`$ denoting the generalized energy momentum tensor associated with the system. It is known that this equation is in fact a redundant equation. Indeed, one can define $`H_{\mu \nu }\equiv G_{\mu \nu }-t_{\mu \nu }`$ and write the field equation as $`H_{\mu \nu }=0`$. Hence one has $$D_\mu H^{\mu \nu }=0$$ (6) from the energy conservation ($`D_\mu t^{\mu \nu }=0`$) and the Bianchi identity ($`D_\mu G^{\mu \nu }=0`$). Indeed, the extended Bianchi identity (6) can be shown to give $$(\partial _t+3H)H_{tt}+3a^{-2}HH_3=0,$$ (7) as soon as the FRW metric is substituted into equation (6). Here $`H_3\equiv \frac{1}{3}h^{ij}H_{ij}`$ and $`g_{ij}\equiv a^2h_{ij}`$. It is now straightforward to show that $`H_{ij}=H_3h_{ij}`$. In fact, equation (7) indicates that: “$`H_{tt}=0`$ implies $`H_3=0`$” as long as $`a^{-2}H\ne 0`$. On the other hand, $`H_3=0`$ implies instead $`(\partial _t+3H)H_{tt}=0.`$ This implies $`a^3H_{tt}=\mathrm{constant}.`$ Hence the $`H_{ij}`$ equation can not imply the Friedmann equation $`H_{tt}=0`$. Hence any conclusion derived without the Friedmann equation is known to be incomplete. We will briefly review the stability analysis based on the redundant equation (4) here and show how to close the loop-hole in this approach. Suppose that we are given a pure gravity theory; the stability of the background inflationary solution with Hubble constant $`H=H_0`$ of the redundant field equation (4) can be obtained by perturbing $`H=H_0+\delta H`$. The leading order perturbation equation can be shown to be $$3H_0F+\dot{F}=0$$ (8) along with the zeroth order equation that vanishes according to the field equation. This in fact takes a little argument, as shown in reference . One can show that the zeroth order perturbation equation from the perturbed Friedmann equation leads directly to the field equation for the background field. For simplicity the parameter $`k`$ is set as $`k=0`$ in reference .
Here $`F`$ is defined as $$F\equiv L_{22}(0)\delta \ddot{H}+3H_0L_{22}(0)\delta \dot{H}+(6L_2(0)+3H_0L_{21}(0)-L_{11})\delta H$$ (9) In addition, the coefficients of the expansions are defined by $`L(H,\dot{H})`$ $`=`$ $`L(H_0,0)+{\displaystyle \frac{\delta L}{\delta H}}(H_0,0)\delta H+{\displaystyle \frac{\delta L}{\delta \dot{H}}}(H_0,0)\delta \dot{H}\equiv L(0)+L_1(0)\delta H+L_2(0)\delta \dot{H}`$ (10) $`{\displaystyle \frac{\delta L}{\delta H}}(H,\dot{H})`$ $`=`$ $`{\displaystyle \frac{\delta L}{\delta H}}(H_0,0)+{\displaystyle \frac{\delta ^2L}{(\delta H)^2}}(H_0,0)\delta H+{\displaystyle \frac{\delta ^2L}{\delta H\delta \dot{H}}}(H_0,0)\delta \dot{H}\equiv L_1(0)+L_{11}(0)\delta H+L_{12}(0)\delta \dot{H}`$ (11) $`{\displaystyle \frac{\delta L}{\delta \dot{H}}}(H,\dot{H})`$ $`=`$ $`{\displaystyle \frac{\delta L}{\delta \dot{H}}}(H_0,0)+{\displaystyle \frac{\delta ^2L}{\delta H\delta \dot{H}}}(H_0,0)\delta H+{\displaystyle \frac{\delta ^2L}{(\delta \dot{H})^2}}(H_0,0)\delta \dot{H}\equiv L_2(0)+L_{21}(0)\delta H+L_{22}(0)\delta \dot{H}`$ (12) If we focus on the solution $`F=0`$ , one has $$\delta H=A_+e^{B_+t}+A_{-}e^{B_{-}t}.$$ (13) Here $`A_\pm `$ denotes arbitrary constants and $$B_\pm =-\frac{3}{2}H_0\pm \frac{\sqrt{\mathrm{\Delta }}}{2L_{22}}$$ (14) denotes the characteristic roots of the characteristic equation $$L_{22}(0)x^2+3H_0L_{22}(0)x+6L_2(0)+3H_0L_{21}(0)-L_{11}=0$$ (15) of the ODE (9). Here $`\mathrm{\Delta }\equiv 9H_0^2L_{22}^2-4L_{22}(6L_2+3H_0L_{21}-L_{11})`$ denotes the discriminant of the characteristic equation of (9). One can integrate $`\delta H`$ to obtain $$a(t)=a_0\mathrm{exp}\left(H_0t+\frac{A_+}{B_+}e^{B_+t}+\frac{A_{-}}{B_{-}}e^{B_{-}t}\right).$$ (16) Therefore, one finds that stability of the de Sitter type inflationary solution will require both characteristic roots $`B_\pm `$ to be negative. If one of the roots is positive and the other one is negative, then there may exist a limited period of inflation. This sort of inflation will come to an end in a time duration of the order of $`1/B_p`$ with $`B_p`$ denoting the positive root. Choosing a sufficiently small value of $`1/B_p`$ allows inflation to exit naturally . Therefore the sign of the roots of the characteristic equation (15) can be checked to see if the system supports a stable inflationary de Sitter solution. If the discriminant is negative, the solution for $`B_\pm `$ will contain an oscillating phase; since the common real part of the roots, $`-3H_0/2`$, is negative, the system is stable again. (A small numerical illustration of these roots is sketched below.) Since this argument is based on the redundant field equation, this stability analysis is not complete. In other words, the redundant $`G_{ij}`$ equation will normally take the form of $`\partial _t(a^3G_{tt})=0`$. Hence analysis based on the $`G_{ij}`$ equation will be quite indirect and incomplete. There are two problems with this stability condition. First of all, this condition is obtained from the redundant equation. One does not know the validity of the field equation, not to mention the stability condition derived from it. Secondly, there are homogeneous terms in equation (8) in addition to $`F=0`$, i.e. $`F=k_1\mathrm{exp}[-3H_0t]`$ with an arbitrary constant $`k_1`$. The first problem is not easy to answer for the moment. The second problem can be resolved immediately. One notes that the complete solution to the redundant equation (8) is in fact $`\delta H=A_+e^{B_+t}+A_{-}e^{B_{-}t}+k_1/[(6L_2(0)+3H_0L_{21}(0)-L_{11})a_0^3]`$. Here $`a_0(t)=\mathrm{exp}(H_0t)`$.
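To make the role of the roots (14) concrete, consider the hypothetical toy model $`L=-6H^2-2\mathrm{\Lambda }+\beta \dot{H}^2`$ (our own illustration, not a model from the reference; the $`\beta `$ term mimics a higher derivative correction). The background equation fixes $`H_0^2=\mathrm{\Lambda }/3`$, and the expansion coefficients are $`L_2(0)=L_{21}(0)=0`$, $`L_{22}(0)=2\beta `$, $`L_{11}=-12`$, so that (15) becomes $`2\beta x^2+6\beta H_0x+12=0`$. A minimal numerical sketch:

```python
# Toy check (ours) of the stability criterion for the hypothetical Lagrangian
# L = -6 H^2 - 2*Lambda + beta*Hdot^2, whose characteristic equation (15) is
#   2*beta*x^2 + 6*beta*H0*x + 12 = 0.
import numpy as np

H0 = 1.0
for beta in (+0.5, -0.5):
    roots = np.roots([2*beta, 6*beta*H0, 12.0])
    print(f"beta={beta:+.1f}: roots B_+/- = {roots}")
# beta = +0.5 -> complex pair with Re = -3*H0/2 < 0 : stable, oscillating phase
# beta = -0.5 -> one positive real root : limited inflation with a natural exit
```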
This extra piece obviously will not affect the stability analysis as long as we are interested in the inflationary universe, where the particular solution is negligible in the above equation unless the denominator of the $`k_1`$-term happens to vanish. In fact, we are going to show that $`F=0`$ is not only a lucky guess; it can be derived from perturbing the Friedmann equation. But one can not be sure about this unless a closed form expression for the Friedmann equation is available, so that a model independent analysis is applicable. Nevertheless, one can still resolve this problem by looking into the details of the Bianchi identity. As to the first problem with this condition, one notes that in most cases the redundant equation can be rearranged as $$\partial _t(a^3H_{tt})=0$$ (17) using the Bianchi identity. The solution to the above equation is $`H_{tt}=\mathrm{constant}\times a^{-3}`$. Hence one can show that the Friedmann equation has to be of the form $$H_{tt}=\stackrel{~}{F}+k_1a^{-3}=0$$ (18) if the redundant equation can be written as the combination $`\partial _t(a^3\stackrel{~}{F})=0`$ with $`\stackrel{~}{F}=0`$ the corresponding equation leading to the first order equation $`F=0`$ shown in Ref. . To be more specific, $`\delta \stackrel{~}{F}=F+\cdots `$ to the leading order in $`\delta H`$ and its derivatives. Here $`H=H_0+\delta H`$. This follows from the fact that $`\partial _t[a^3(H_{tt}-\stackrel{~}{F})]=0`$ implies that the difference $`H_{tt}-\stackrel{~}{F}`$ has to be proportional to $`a^{-3}`$ with some arbitrary constant $`k_1`$. Therefore, one can effectively work with the $`F=0`$ solution if we are working on an inflationary background de Sitter solution. This is because $`H_{tt}\simeq \stackrel{~}{F}`$ in the de Sitter background. Therefore, any analysis based on the ansatz $`F=0`$ can only be justified in the de Sitter background. In particular, stability conditions derived from $`F=0`$ adopted in Ref. can not be justified from the above analysis in anti-de Sitter space. This is because the undetermined part $`k_1a^{-3}`$ will affect the result significantly. While we suspect that $`F=0`$ should probably be the first order Friedmann equation we are looking for, we are not sure if the redundant equation can always be cast into the familiar form shown above. Moreover, the true Friedmann equation can look like $`\stackrel{~}{F}+k_1/a^3=0`$ even if we can write the redundant equation in the above familiar form. Fortunately, one can in fact derive a closed form for the Friedmann equation similar to equation (4). The Friedmann equation can be recast as $$L+(H\frac{d}{dt}+3H^2-\dot{H})\frac{\delta L}{\delta \dot{H}}-\frac{\delta L}{\delta H}H=0$$ (19) after some algebra. This is done by a variation of $`L^{\mathrm{GFRW}}`$ with respect to $`b`$ (or equivalently with respect to $`g_{tt}`$) and setting $`b=1`$ afterwards. Here $`L^{\mathrm{GFRW}}\equiv \int d^3x\,\mathcal{L}(g_{\mu \nu }=g_{\mu \nu }^{\mathrm{GFRW}})`$. One notes that the crucial point in the derivation is the observation that any variation of $`L`$ with respect to $`H\dot{B}`$ has to be equivalent to the variation of $`L`$ with respect to $`2B\dot{H}`$. This is because the term $`H\dot{B}`$ always shows up together with $`2B\dot{H}`$, as indicated in the explicit formulae listed in equations (2-3). Note that equation (19) is known as the minisuperspace Hamiltonian constraint $`\pi \dot{a}-L(a,\dot{a})=0`$ in the case where $`\delta L/\delta \dot{H}=0`$. For example, one can write $`L=R=6[k/a^2-H^2]`$ after proper integration by parts.
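This example can be verified symbolically; here is a minimal sketch (ours), treating $`H`$, $`\dot{H}`$ and $`k/a^2`$ as the independent arguments of $`L`$, as the derivation above allows:

```python
# Minimal sympy check (ours): the Friedmann equation (19) applied to the
# effective Lagrangian L = 6*(k/a^2 - H^2) of pure Einstein gravity.
import sympy as sp

H, Hd, k, a = sp.symbols('H Hdot k a')
L = 6*(k/a**2 - H**2)

# Eq. (19): L + (H*d/dt + 3H^2 - Hdot)*dL/dHdot - H*dL/dH = 0.
# Here dL/dHdot = 0 identically, so its time derivative drops out as well.
eq19 = L + (3*H**2 - Hd)*sp.diff(L, Hd) - H*sp.diff(L, H)
print(sp.simplify(eq19))   # -> 6*H**2 + 6*k/a**2, i.e. H^2 + k/a^2 = 0 in vacuum
```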
Hence the Hamiltonian constraint is identical to the Friedmann equation (19) in this model. Note in particular that even though the term containing $`k`$ does not appear explicitly in the Friedmann equation, equation (19) remains valid for arbitrary $`k`$. Our derivation leading to equation (19) is based on a pure gravitational action. The derivation of the Friedmann equation in the presence of other sources of interactions is straightforward. In addition, the Friedmann equation in $`D`$-dimensional FRW space can also be derived following similar arguments. One can then apply the same perturbation, $`H=H_0+\delta H`$, to the Friedmann equation. The zeroth order perturbation equation gives exactly the field equation for the background field $`H=H_0`$ while the leading order in $`\delta H`$ gives $`F=0`$ identically. Indeed, perturbing equation (19) gives $$L_{22}(0)\delta \ddot{H}+3H_0L_{22}(0)\delta \dot{H}+(6L_2(0)+3H_0L_{21}(0)-L_{11})\delta H=0$$ (20) to the leading order in $`\delta H`$ and its derivatives. Note that $`a`$-dependent terms always appear in the combination $`H^2+k/a^2`$ in $`R_{kl}^{ij}`$ as given by equation (3). Hence one can ignore the $`\delta a`$-dependent terms during the inflationary phase when $`H^2\gg 1/a^2`$. This follows from the fact that $`\delta a\simeq a\delta H\mathrm{\Delta }t`$ and hence $`|\delta (1/a^2)/\delta H^2|\simeq \mathrm{\Delta }t/(Ha^2)\ll 1`$ during the inflationary phase. Here $`\mathrm{\Delta }t`$ is the time duration of the inflationary phase. Therefore, one is able to show that the stability conditions in the inflationary phase are indeed given by the result obtained in equation (16). Hence, one would never need to worry about any complication that could possibly weaken the validity of the stability condition obtained in reference . This stability condition hence serves as a screening device for any possible candidates for a realistic cosmological universe without any ambiguity. In addition, the stability condition obtained earlier works also for curved FRW spaces in the inflationary phase where $`H^2\gg 1/a^2`$. In short, our result states clearly without ambiguity that physically acceptable inflationary models need to meet the stability conditions shown earlier in this section. The perturbative stability analysis indicates that a solution with a stable mode and an unstable mode can possibly exit the inflationary phase in due time. Our result based on the non-redundant Friedmann equation is complete and remains valid for all FRW models. Most of all, working directly with the Friedmann equation (19) we just derived can save us a lot of trouble in the complete analysis.

## III Friedmann Equation for Scalar-Gravity Theory

The derivation of the Friedmann equation in the presence of a scalar field is in fact rather straightforward since complications only arise from complicated curvature interactions. Indeed, the inclusion of scalar interactions introduces a kinetic term $`T_\varphi =-\frac{1}{2}\partial _\mu \varphi \partial _\nu \varphi g^{\mu \nu }.`$ It will take the form $`T_\varphi =\frac{1}{2}\dot{\varphi }^2B(t)`$ if $`\varphi (𝐱,t)=\varphi (t)`$. Hence the complete effective Lagrangian in the GFRW spaces will take the following form $$ba^3L(g^{GFRW},\varphi )=ba^3L_0+\frac{1}{2}a^3\sqrt{B}\dot{\varphi }^2,$$ (21) with $`L_0`$ denoting the graviton Lagrangian plus everything else except the kinetic term of the scalar field $`T_\varphi `$. Or equivalently, $`L_0=L-T_\varphi `$ with $`L`$ denoting the complete effective Lagrangian of the theory.
Hence, one can show that the Friedmann equation becomes $$L_0-T_\varphi +(H\frac{d}{dt}+3H^2-\dot{H})\frac{\delta L_0}{\delta \dot{H}}-\frac{\delta L_0}{\delta H}H=0.$$ (22) Note that the minus sign in front of $`T_\varphi `$ is due to the $`a^3b^{-1}`$ combination arising from $`\sqrt{g}`$ and the $`g^{tt}`$ component. In addition, the variational equation for the $`\varphi `$ field can be directly obtained from the variation of the effective Lagrangian $`L`$ with respect to $`\varphi `$. The method for deriving the Friedmann equation described here can be extended to theories with any form of simple gravitational interactions in a straightforward way. For example, one can study the following action with Gauss-Bonnet coupling $$L=\frac{1}{2}R-\frac{1}{2}\partial _\mu \varphi \partial ^\mu \varphi +f(\varphi )R_{GB}^2$$ (23) with $`R_{GB}^2=R_{\mu \nu \alpha \beta }R^{\mu \nu \alpha \beta }-4R_{\mu \nu }R^{\mu \nu }+R^2`$ denoting the Gauss-Bonnet term. The effective Lagrangian is then $$L=3(\dot{H}+2H^2+\frac{k}{a^2})+\frac{1}{2}\dot{\varphi }^2+24(\dot{H}+H^2)(H^2+\frac{k}{a^2})f(\varphi )$$ (24) once the FRW metric is applied. The Friedmann equation (22) becomes $$3(H^2+\frac{k}{a^2})(1+8H\dot{f})=\frac{\dot{\varphi }^2}{2}$$ (25) Furthermore, the variational equation for $`\varphi `$ is also straightforward. The result is $$\ddot{\varphi }+3\dot{\varphi }H-24\frac{df}{d\varphi }(\dot{H}+H^2)(H^2+\frac{k}{a^2})=0.$$ (26) This agrees with the result in while the derivation is much more straightforward. In fact this simple formula for the Friedmann equation can also be generalized to any scalar-gravity theory. It helps to reduce the labor in deriving gravitational field equations. It is especially helpful when complicated interactions are present and higher derivative terms become important.

## IV Conclusions and Acknowledgments

A general analysis of the stability condition for a variety of pure higher derivative gravity theories is very useful in many respects. It is known that a stability condition should hold for any potential candidate of inflationary universe in the flat FRW space . We first briefly reviewed the approach of reference based on a redundant field equation. The proof is shown to be incomplete in this paper. We also showed in this paper how to complete the proof with the help of the Bianchi identity for some models where the redundant equation can be recast in a form similar to the Bianchi identity in a FRW background. In addition, the derivation of the Einstein equations in the presence of higher derivative couplings is known to be very complicated. For example, the presence of a scalar field in induced gravity models and dilaton-gravity models makes the field equations even more difficult to derive. We have developed, in this paper, a simpler derivation by imposing the FRW symmetry before varying the action, while keeping the lapse function. We also tried to generalize the work in in order to obtain a general formula for the non-redundant Friedmann equation in this paper. This result was applied to provide an alternative and simplified method to prove the validity of the stability conditions in pure gravity theories. This general formula for the Friedmann equation is also very useful in many areas of interest. This work was supported in part under the contract number NSC88-2112-M009-001 and NSERC grant 72013704.
# DFPD 99/TH/51, TRI-PP-99-34 Isoscalar off-shell effects in threshold pion production from $`pd`$ collisions

## I Introduction

Pion production from nucleon-induced reactions has the potential to probe nuclear phenomena at short distance, since it involves processes transferring large momenta to the target nucleus. But the pion also mediates the nuclear force; hence meson production (or absorption) plays a fundamental role in hadron dynamics because it may reveal facets of meson-baryon couplings, and of meson-exchange processes in general, which would remain hidden otherwise. In the absence of reliable calculations on meson dynamics within an interacting multinucleon context, one has to rely on the determination of the reaction mechanisms which dominate the process. Even so, the analysis of the process is complicated by the fact that a general treatment of the reaction mechanisms reveals the occurrence of many terms, and one is forced to introduce further assumptions in order to reduce the number of terms to a few, tractable ones. This reduction clearly introduces ambiguities, making it more difficult to extract information about the nuclear wave function at short distances, or about the modifications of the hadron interactions because of the presence of other nucleons. The situation is somewhat simplified if we consider nucleon-induced production close to the pion threshold, since there the $`s`$-wave mechanisms of the elementary $`NN\leftrightarrow \pi NN`$ inelasticities dominate, while the $`p`$-wave mechanisms (including the isobar degrees of freedom) can be treated as corrections. In the past decade, with the aim to clarify the nature of the elementary $`NN\leftrightarrow \pi NN`$ $`s`$-wave inelasticities, a great deal of experimental and theoretical activity has been devoted to pion production from $`NN`$ collisions at energies close to threshold. The situation has been recently reviewed by Meyer . Advances in experimental techniques made it possible to measure in particular the $`pp\to pp\pi ^o`$ cross section very close to threshold. This reaction filters the $`s`$-wave $`\pi N`$ coupling in the isoscalar channel. Standard theory including the one-body term and isoscalar rescattering constrained by the $`\pi N`$ scattering lengths underestimated the cross section by a factor of five. Unexpectedly, there have been two theoretical explanations for this discrepancy, not just one. The enhancement in the cross section can be explained by invoking short-range nucleon-nucleon effects , where the important effects can be simulated by $`\sigma `$ and $`\omega `$ exchanges . The explanation is appealing, since in this case the pion field couples with the two-nucleon axial charge operators, and this coupling provides an explicit link to the inner part of the nucleon-nucleon interaction, which is notoriously difficult to disclose. But the results have been entirely explained by resorting also to an off-shell enhancement of the isoscalar $`\pi N`$ amplitude . The calculation by Hernández and Oset employs an $`s`$-wave rescattering diagram where the isoscalar amplitude is described by the combined effect of a strong short-range repulsion and a medium range attraction of similar strength, where the repulsion is represented by a contact term and the attractive part is parameterized by means of a $`\sigma `$-exchange diagram. Originally, this representation has been derived with dispersion theoretic methods by analyzing the experimental data on pion-nucleon scattering with discrepancy functions .
Subsequently, the fit in Ref. has been reinterpreted as being generated by a sigma-exchange term plus a $`u`$-channel term including $`\overline{N}`$ exchange (or virtual $`N\overline{N}`$ creation) and other short-range contributions . That $`s`$-wave pion production/absorption is governed by off-shell effects has been known for quite a few years. Hachenberg and Pirner used a field theoretical description for $`\pi N`$ scattering in the $`s`$ wave based on the linear $`\sigma `$ model and on a pseudoscalar $`\pi NN`$ interaction. This combination results in a large cancellation between the $`\sigma `$-exchange diagram and the nucleon Born terms. The cancellation however breaks down, producing an enhancement, when one pion leg is off-mass shell, as happens when the pion rescatters before being absorbed. On the contrary, in the isovector channel the cancellation occurs more efficiently off-shell, producing a suppression of the charge-exchange interaction. Thus, the relative importance of the isospin odd and even channels is reversed because of the off-shell extrapolation. Yet another off-shell extrapolation has been considered in Ref., derived from a field theoretic model of the $`\pi N`$ interaction which has been developed at Jülich . This approach is similar in spirit to the model developed by Hachenberg and Pirner, but here the meson exchanges in the scalar and vector channels are derived from correlated $`2\pi `$ exchanges. The rescattering diagram of Ref. has a less pronounced off-shell enhancement with respect to both $`\pi N`$ models of Refs. and , and one must introduce here two-nucleon short-range mechanisms such as those mediated by $`\sigma `$ and $`\omega `$ exchange in order to reproduce the total cross section for $`pp\to pp\pi ^o`$ at threshold. Finally, the problem of the off-shell extrapolation for the pion rescattering mechanism has been considered also in the more systematic framework of chiral-perturbation theory ($`\chi `$PT) . The main feature here is that the rescattering diagram, once extrapolated off-shell, produces a negative interference with the one-body term, thus yielding a cross section substantially smaller than the measured ones. In this case the inclusion of heavy meson-exchange effects does not solve the discrepancy. However, in another chiral-perturbation approach based on a full momentum-space treatment, the rescattering diagram was shown to be larger by a factor of 3, thus leading to a substantial increase at threshold. More recently, the $`\chi `$PT approach has been carried further by considering pion loop diagrams which might simulate $`\sigma `$-exchange phenomena, with the finding that these higher order contributions provide important improvements, but questioning at the same time the convergence properties of the power counting expansion for the specific reaction under consideration . The three-nucleon system is a richer testing ground for studies of pion production and scattering. The addition of just one nucleon increases the complexity of the reaction, which involves now the simplest nontrivial multinucleon system where it is possible to test the fundamental $`NN\leftrightarrow \pi NN`$ process and, at the same time, to solve the nuclear dynamics accurately with Faddeev methods. Applications of Faddeev methods to pion production/absorption started very recently with studies centered around the $`\mathrm{\Delta }`$ resonance, and herein we apply the same technique of Ref. to study pion production at threshold.
Besides the obvious difficulty of performing calculations with three nucleons instead of two, one encounters here the additional difficulty that for the $`pd\to \pi ^+t`$ reaction one must include from the start the effect of $`p`$-wave mechanisms, on top of the $`s`$-wave ones. This contrasts with the findings for the two-nucleon case, where the effect of the $`p`$-wave mechanisms (including the $`\mathrm{\Delta }`$) tends gradually to zero in approaching the threshold limit. The effect of this difference can be immediately perceived in the differential production cross section, since for $`NN`$ collisions the angular dependence evolves gradually with energy, while in the case of $`pd`$ collisions it exhibits a remarkable $`s`$\- and $`p`$-wave interference in the threshold region, with strong forward-backward asymmetry. In this work, we have centered our study on the effects due to the $`s`$-wave rescattering processes, taking into account both isoscalar and isovector components and their interference with the $`p`$-wave mechanism (containing also the $`\mathrm{\Delta }`$ degrees of freedom). We have in particular taken into account the off-shell effects in the isoscalar channel by following the same prescription suggested in Ref. to explain the size of the excitation function for the $`pp\to pp\pi ^o`$ process. It is important to stress that there are still large uncertainties inherent to this off-shell extrapolation, and such calculations should be repeated possibly also with other off-shell extensions. Moreover, other production mechanisms here omitted should possibly be included in the calculation, at least those mechanisms which proved to have relevant interference effects in $`NN`$ collisions. But to implement the production mechanisms in a three-nucleon process is not a trivial task and needs to be done gradually. At the present stage, where we believe that the importance of the off-shell effects in $`s`$-wave pion production from $`NN`$ collisions has been fairly well established by various groups, it is clearly of great relevance to consider the consequences of such effects on more complex reactions. Herein we provide the results obtained when calculating off-shell effects in $`pd`$ collisions.

## II Theory

The production mechanisms are constructed starting from the following effective pion-nucleon couplings: $`\mathcal{L}_{\mathrm{int}}`$ $`=`$ $`{\displaystyle \frac{f_{\pi NN}}{m_\pi }}\overline{\mathrm{\Psi }}\gamma ^\mu \gamma ^5\vec{\tau }\mathrm{\Psi }\cdot \partial _\mu \vec{\mathrm{\Phi }}`$ (2) $`-4\pi {\displaystyle \frac{\lambda _I}{m_\pi ^2}}\overline{\mathrm{\Psi }}\gamma ^\mu \vec{\tau }\mathrm{\Psi }\cdot \left[\vec{\mathrm{\Phi }}\times \partial _\mu \vec{\mathrm{\Phi }}\right]-4\pi {\displaystyle \frac{\lambda _O}{m_\pi }}\overline{\mathrm{\Psi }}\mathrm{\Psi }\left[\vec{\mathrm{\Phi }}\cdot \vec{\mathrm{\Phi }}\right].`$ The first term represents the gradient coupling to the isotopic axial current, while the second denotes the coupling to the isovector nucleonic current, and the last is the pion-nucleon coupling in the isoscalar channel. As is well known, a good guess for the coupling constants can be obtained from chiral symmetry and PCAC, which constrain the three constants to be of the order $$f_{\pi NN}/m_\pi \simeq g_A/(2f_\pi ),$$ (3) $$4\pi \lambda _I/m_\pi ^2\simeq 1/(4f_\pi ^2),$$ (4) $$4\pi \lambda _O/m_\pi \simeq 0,$$ (5) where $`g_A`$ ($`\simeq 1.255`$) is the axial nucleonic normalization, and $`f_\pi `$ is the pion decay constant ($`93.2`$ MeV).
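Numerically, the chiral-symmetry estimates (3)-(4) land close to the phenomenological ranges quoted in the following; a minimal sketch (ours; the choice of the charged-pion mass is our assumption):

```python
# Numerical sketch (ours) of the chiral-symmetry estimates (3)-(4),
# using m_pi = 139.57 MeV (charged-pion mass; an assumption on our part).
import math

g_A, f_pi, m_pi = 1.255, 93.2, 139.57    # MeV for the dimensionful quantities

f_piNN = g_A * m_pi / (2 * f_pi)         # from Eq. (3)
print(f"f_piNN ~ {f_piNN:.3f},  f_piNN^2/4pi ~ {f_piNN**2/(4*math.pi):.4f}")
# ~0.070, to be compared with the phenomenological 0.0735-0.081 quoted below

lam_I = m_pi**2 / (16 * math.pi * f_pi**2)   # from Eq. (4)
print(f"lambda_I ~ {lam_I:.4f}")
# ~0.045, at the lower edge of the quoted range 0.045-0.055
```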
The first condition follows directly from the Goldberger-Treiman relation, while the last two are implied by the Weinberg-Tomozawa ones. Current phenomenological values for $`f_{\pi NN}^2/(4\pi )`$ can reach values as low as 0.0735 , 0.0749 , or 0.076 , but also values around 0.081 have been considered acceptable. Some years ago common values were centered around 0.078–0.079 . Similarly, from the pion-nucleon scattering lengths, $`\lambda _I`$ is determined within the range 0.055–0.045, while the weaker isoscalar coupling has typically a larger indetermination, ranging from 0.007 to $`-0.0013`$ . The isovector and isoscalar couplings, when combined with the axial $`\pi NN`$ vertex, are the basic ingredients for the two-nucleon $`s`$-wave rescattering mechanisms, while the axial-current coupling alone forms the basis for the one-body production process. The matrix elements for the rescattering process require an off-mass-shell extrapolation of the two constants $`\lambda _I`$ and $`\lambda _O`$, since the rescattered pion can travel off-mass-shell. For $`\lambda _O`$ we consider the off-shell structure previously employed in the $`pp\to pp\pi ^o`$ process by Hernández and Oset (Ref. ), which is based on a parametrization due to Hamilton , namely $`\lambda _O(\stackrel{~}{q},\stackrel{~}{p})`$ $`=`$ $`\lambda _O^{\mathrm{on}}g_O(\stackrel{~}{q},\stackrel{~}{p})`$ (6) $`=`$ $`{\displaystyle \frac{1}{2}}(1+ϵ)m_\pi \left(a_{sr}+a_\sigma {\displaystyle \frac{m_\sigma ^2}{m_\sigma ^2-(\stackrel{~}{q}-\stackrel{~}{p})^2}}\right),`$ (7) with $`m_\sigma =550`$ MeV, $`a_\sigma =0.22m_\pi ^{-1}`$, $`a_{sr}=-0.233m_\pi ^{-1}`$, and $`ϵ=m_\pi /M`$. In the threshold limit, $`(\stackrel{~}{q}-\stackrel{~}{p})^2\simeq (\stackrel{~}{q}_o-m_\pi )^2-\stackrel{~}{𝐪}^2`$, where $`\stackrel{~}{q}`$ is the transferred 4-momentum between the two active nucleons. According to previous treatments of the $`2N`$ $`\pi `$-production amplitude, the time-component $`\stackrel{~}{q}_o`$ is fixed around $`\stackrel{~}{q}_o\simeq m_\pi /2`$, while the space component $`\stackrel{~}{𝐪}`$ represents a loop variable and is integrated over. This form leads to an on-shell value of the order of 0.007. The on-shell value derives from a cancellation between the short-range term, $`a_{sr}`$, and the $`\sigma `$-exchange term, $`a_\sigma `$, while off shell the cancellation occurs only partially and thus leads to the off-shell enhancement. The use of a fictitious sigma-exchange model should not be considered a crucial aspect of the model, since similar forms (possibly summed over a “distribution” of masses $`m_\sigma `$) can easily be obtained also in theoretical approaches based on subtracted dispersion relations. The approaches based on subtracted dispersion relations, such as the $`\pi N`$ model of Ref. , lead indeed to similar off-shell enhancement.
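To visualize this enhancement, the following sketch (ours) evaluates Eq. (7) along the threshold kinematics; note that the relative sign between $`a_{sr}`$ and $`a_\sigma `$ (needed for the on-shell cancellation described above) and the representative loop momentum $`|\stackrel{~}{𝐪}|\simeq \sqrt{Mm_\pi }`$ are our assumptions:

```python
# Sketch (ours) of the off-shell behaviour of lambda_O from Eqs. (6)-(7).
# ASSUMED: a_sr = -0.233/m_pi against a_sigma = +0.22/m_pi, so that the two
# terms nearly cancel near the on-shell point, and q0 = m_pi/2 at threshold.
m_pi = 1.0                                   # work in units of m_pi
m_sig, M = 550.0/139.57, 938.9/139.57
a_sr, a_sig = -0.233, 0.22
eps = m_pi / M

def lam_O(q):                                # q = |q-vector| in units of m_pi
    t = (0.5*m_pi - m_pi)**2 - q**2          # (q - p)^2 in the threshold limit
    return 0.5*(1 + eps)*m_pi*(a_sr + a_sig*m_sig**2/(m_sig**2 - t))

print(f"|lambda_O| at q = 0       : {abs(lam_O(0.0)):.4f}")   # ~0.005, on-shell size
q_loop = (M*m_pi)**0.5                       # typical loop momentum, assumed
print(f"|lambda_O| at q ~ 2.6 m_pi: {abs(lam_O(q_loop)):.4f}") # ~0.045: enhancement
```

With these inputs the magnitude grows by roughly an order of magnitude at spacelike momenta, which is the qualitative off-shell enhancement invoked in the text.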
For $`\lambda _I`$ the extrapolation can be accomplished by invoking $`\rho `$-meson dominance and the related Riazuddin-Fayyazuddin-Kawarabayashi-Suzuki identity, which implies (on shell, for $`\omega _\pi \simeq m_\pi `$) $$\frac{\lambda _I^{\mathrm{on}}}{m_\pi ^2}=\frac{f_\rho ^2}{8\pi m_\rho ^2},$$ (8) with the corresponding off-shell extrapolation $`\lambda _I(\stackrel{~}{q},\stackrel{~}{p})`$ $`=`$ $`\lambda _I^{\mathrm{on}}g_I(\stackrel{~}{q},\stackrel{~}{p})`$ (9) $`=`$ $`\lambda _I^{\mathrm{on}}{\displaystyle \frac{m_\rho ^2}{m_\rho ^2-(\stackrel{~}{q}-\stackrel{~}{p})^2}}\left({\displaystyle \frac{\mathrm{\Lambda }_\rho ^2}{\mathrm{\Lambda }_\rho ^2-(\stackrel{~}{q}-\stackrel{~}{p})^2}}\right)^2,`$ (10) where again we use $`(\stackrel{~}{q}-\stackrel{~}{p})^2=(\stackrel{~}{q}_o-m_\pi )^2-\stackrel{~}{𝐪}^2`$ in the threshold limit. The production matrix elements in the nonrelativistic $`3N`$ space with such couplings are the following: $`\langle 3N|A^{1\mathrm{B}}|3N,\pi \rangle ={\displaystyle \frac{if_{\pi NN}}{m_\pi }}𝝈_2\cdot \stackrel{~}{𝐩}\,[𝝉_2]_1^{z_\pi }`$ (12) $`\times \delta (𝐩^{\prime }-𝐩-{\displaystyle \frac{6+2ϵ}{6(2+ϵ)}}𝐏_\pi )\delta (𝐪^{\prime }-𝐪+{\displaystyle \frac{1}{3}}𝐏_\pi )`$ for the one-body process, $`\langle 3N|A_O^{2\mathrm{B}}|3N,\pi \rangle =2i{\displaystyle \frac{f_{\pi NN}4\pi \lambda _O}{m_\pi ^2}}`$ (14) $`\times 𝝈_3\cdot \stackrel{~}{𝐪}\,[𝝉_3]_1^{z_\pi }{\displaystyle \frac{v_{\pi NN}(\stackrel{~}{q})g_O(\stackrel{~}{q},\stackrel{~}{p})}{m_\pi ^2+\stackrel{~}{𝐪}^2-\stackrel{~}{q_0}^2}}\delta (𝐪^{\prime }-𝐪+{\displaystyle \frac{1}{3}}𝐏_\pi )`$ and $`\langle 3N|A_I^{2\mathrm{B}}|3N,\pi \rangle =\sqrt{2}i{\displaystyle \frac{f_{\pi NN}4\pi \lambda _I}{m_\pi ^3}}`$ (17) $`\times 𝝈_3\cdot \stackrel{~}{𝐪}\,[𝝉_3\times 𝝉_2]_1^{z_\pi }{\displaystyle \frac{v_{\pi NN}(\stackrel{~}{q})g_I(\stackrel{~}{q},\stackrel{~}{p})}{m_\pi ^2+\stackrel{~}{𝐪}^2-\stackrel{~}{q_0}^2}}(\stackrel{~}{q}_0+\omega _\pi )`$ $`\times \delta (𝐪^{\prime }-𝐪+{\displaystyle \frac{1}{3}}𝐏_\pi )`$ for the two-body isoscalar and isovector rescattering, respectively. $`v_{\pi NN}(\stackrel{~}{q})`$ is the hadronic form factor of the $`\pi NN`$ vertex, whose structure is governed by the monopole-type cutoff $`\mathrm{\Lambda }_\pi `$. The momenta $`𝐩`$ and $`𝐪`$ are Jacobi momenta for nucleon 2 in the (2+3) center-of-mass (c.m.), and nucleon 1 in the (1+2+3) c.m., respectively, while $`𝐏_\pi `$ is the pion momentum in the total c.m. Similarly, $`𝐩^{\prime }`$ and $`𝐪^{\prime }`$ are the Jacobi momenta for the three nucleons in the no-pion case. Other relevant pion momenta are $$\stackrel{~}{𝐩}\equiv \frac{(3+ϵ)}{3(1+ϵ)}𝐏_\pi $$ (18) and $$\stackrel{~}{𝐪}\equiv 𝐩+\frac{(6+2ϵ)}{6(2+ϵ)}𝐏_\pi -𝐩^{\prime }.$$ (19) In the actual calculation, ranging from threshold up to the $`\mathrm{\Delta }`$ resonance, the on-shell couplings are further corrected by means of a Heitler-type (or $`K`$-matrix) form ($`\eta `$ is the pion momentum in pion-mass units): $$\widehat{\lambda }_O\equiv \frac{2}{3}\frac{\lambda _O+\lambda _I}{1+2i\eta (\lambda _O+\lambda _I)}+\frac{1}{3}\frac{\lambda _O-2\lambda _I}{1+2i\eta (\lambda _O-2\lambda _I)},$$ (20) $$\widehat{\lambda }_I\equiv \frac{1}{3}\frac{\lambda _O+\lambda _I}{1+2i\eta (\lambda _O+\lambda _I)}-\frac{1}{3}\frac{\lambda _O-2\lambda _I}{1+2i\eta (\lambda _O-2\lambda _I)}.$$ (21) This reduces the rescattering contributions at higher energies but the correction is uninfluential in the threshold limit.
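A quick numerical look (ours) at the $`K`$-matrix correction (20)-(21), with representative couplings from the ranges quoted above, confirms that it disappears at threshold:

```python
# Numerical sketch (ours) of the Heitler / K-matrix corrections (20)-(21),
# with representative on-shell couplings from the ranges quoted in the text.
lam_O, lam_I = 0.007, 0.045

def heitler(eta):
    s, d = lam_O + lam_I, lam_O - 2*lam_I          # the two isospin combinations
    ts = s / (1 + 2j*eta*s)
    td = d / (1 + 2j*eta*d)
    return (2*ts + td)/3, (ts - td)/3              # lambda_O-hat, lambda_I-hat

for eta in (0.0, 0.5, 1.0):
    lo, li = heitler(eta)
    print(f"eta={eta}: lambda_O-hat = {lo:.4f}, lambda_I-hat = {li:.4f}")
# eta -> 0 recovers the input couplings exactly; at finite eta the couplings
# acquire imaginary parts and the dominant isovector strength is reduced.
```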
On top of these processes, we have included also the two-body mechanism mediated by $`\mathrm{\Delta }`$ rescattering, $`\langle 3N|A_\mathrm{\Delta }^{2\mathrm{B}}|3N,\pi \rangle ={\displaystyle \frac{if_{\pi N\mathrm{\Delta }}}{m_\pi }}`$ (24) $`\times {\displaystyle \frac{V_{N\mathrm{\Delta }}(𝐩^{\prime },𝐩_\mathrm{\Delta })\,𝑺_2^{\dagger }\cdot \stackrel{~}{𝐩}\,[𝑻_2^{\dagger }]_1^{z_\pi }}{T_{\mathrm{cm}}+M-\mathcal{M}_\mathrm{\Delta }-p_\mathrm{\Delta }^2/2\mu _\mathrm{\Delta }-q^{\prime 2}/2\nu _\mathrm{\Delta }}}`$ $`\times \delta (𝐪^{\prime }-𝐪+{\displaystyle \frac{1}{3}}𝐏_\pi ),`$ since its contribution becomes soon important as the energy increases. The intermediate $`\mathrm{\Delta }`$ momentum is defined as $$𝐩_\mathrm{\Delta }\equiv 𝐩+\frac{(6+2ϵ)}{6(2+ϵ)}𝐏_\pi .$$ (25) In Eq. (24) $`\mu _\mathrm{\Delta }`$ is the reduced mass of the $`\mathrm{\Delta }`$-$`N`$ system, $`\nu _\mathrm{\Delta }`$ is the reduced mass for the $`N`$-($`\mathrm{\Delta }`$$`N`$) partition, and $`M`$ is the nucleon mass. $`T_{\text{c.m.}}`$ is the c.m. kinetic energy of the three nucleons in the initial state. The $`\mathrm{\Delta }`$$`N`$ transition potential is determined by the $`\pi `$ plus $`\rho `$ exchange diagrams, where the pseudoscalar meson provides the typical longitudinal structure to the $`\mathrm{\Delta }`$$`N`$ transition, i.e., $`(𝝈_3\cdot \stackrel{~}{𝐪})(𝑺_2^{\dagger }\cdot \stackrel{~}{𝐪})(𝝉_3\cdot 𝑻_2^{\dagger })`$, while the vector meson generates the transversal contribution $`(𝝈_3\times \stackrel{~}{𝐪})\cdot (𝑺_2^{\dagger }\times \stackrel{~}{𝐪})(𝝉_3\cdot 𝑻_2^{\dagger })`$. $`𝐒_2`$ ($`𝐓_2`$) is the spin (isospin) operator for the $`\mathrm{\Delta }`$-$`N`$ transition. For complete details on the employed transition potential, and for other aspects connected with the isobar mechanism, such as, e.g., the detailed expression for the complex $`\mathrm{\Delta }`$ mass $`\mathcal{M}_\mathrm{\Delta }`$, we refer to a set of studies performed around the resonance region . All amplitudes, Eqs. (12), (14), (17), and (24), must be multiplied in addition by the common factor $`1/\sqrt{(2\pi )^3\,2\omega _\pi }`$. Moreover, taking into account the Pauli identity, the full one-body mechanism results from multiplying Eq. (12) by the multiplicity factor $`\sqrt{3}(1+P)`$, while the remaining two-body mechanisms are multiplied by $`2\sqrt{3}(1+P)`$, where $`P`$ is the ternary permutator which permutes the $`3N`$ coordinates cyclically/anticyclically. Combining the $`P`$ operator with the given mechanisms in $`3N`$ partial waves is not a trivial task, and the numerical treatment of the resulting amplitudes has been a challenge. The matrix elements are calculated between in and out nuclear states, where the out-state is specified by the three-nucleon bound-state wave function (plus a free pion wave), and the incoming state is determined by the deuteron-nucleon asymptotic channel. For the $`3N`$ bound state we have taken the triton wave function as has been developed, tested and calculated in Ref.. As two-body input for the three-nucleon equations we used a separable representation of the Paris interaction. This form represents an analytic version of the PEST interaction, originally constructed and applied by the Graz group. We have in addition calculated the relevance of the three-nucleon dynamics in the initial state (ISI) by solving the Faddeev-type Alt-Grassberger-Sandhas (AGS) equations . The AGS equations for neutron-deuteron scattering go over into effective two-body Lippmann-Schwinger equations when representing the input two-body T-operators in separable form.
The T-operators are represented in separable form using again the above mentioned EST method. Applying the same technique to the $`\pi `$ absorption process, an integral equation of rather similar structure is obtained for the corresponding amplitude. The only difference is that the driving term of the n-d scattering equation (i.e., the particle-exchange diagram, the so-called “Z” diagram) is replaced by the off-shell extension of the plane-wave pion-absorption amplitude. More details can be found in and references therein.

## III Results

To exhibit the relevance of isoscalar off-shell effects for $`pd\to \pi ^+t`$ we have calculated the integral cross section from near threshold up to the $`\mathrm{\Delta }`$ region. The calculated results are compared with a collection of data from Refs. and others as explained in Fig. 1. Practically all the experimental data near threshold refer in fact to $`\pi ^o`$ production from $`pd`$ collisions, and the comparison has been made assuming isospin invariance and hence implying a factor of 2 between the two cross sections. In so doing we have avoided the need to include the effects of Coulomb distortions in the exit channel. Given the complexity of the reaction dynamics, which depends upon several contributions, the isoscalar effects have been calculated on top of the other mechanisms we had considered. As explained previously, the model includes also $`p`$-wave $`\mathrm{\Delta }`$ rescattering, the one-body $`p`$-wave term, and the isovector $`\rho `$-exchange mechanisms. The relevant parameters employed herein (cutoffs, coupling constants) have been tested previously against the $`pp\to \pi ^+d`$ process in Ref. . For the $`\rho `$-exchange model we use $`\lambda _I=0.045`$ in Eq. (8). The results indicate that the modifications introduced by the isoscalar contributions are significant over the whole range considered, and that the effect is one of the most pronounced at threshold. Further evidence comes from the results exhibited in Fig. 2, where the deuteron tensor analyzing power $`T_{20}`$ is shown. Details on the formalism for the calculation of polarization observables can be found in Ref. . The points represent a best fit to experimental data as given in Ref. . The trend of the data is reproduced once both the isovector and isoscalar terms are taken into account. It is clearly important to assess the role of the initial state correlations, since the three-body dynamics between the nucleon and the deuteron could modify the whole picture and undermine the conclusions of this work. For this reason, we have calculated the ISI effects by solving the AGS equations for the $`3N`$ system using as input a separable representation of the Paris interaction. The same Faddeev-like technique herein employed has been applied previously to pion production at the $`\mathrm{\Delta }`$ resonance in Ref. . In Fig. 3 one can examine the role of the $`3N`$ dynamics in the initial state from the angular dependence of the unpolarized production cross section and from $`T_{20}`$, for $`\eta =0.25`$. The modifications introduced by the $`3N`$ dynamics are sizable, but the overall picture does not change drastically. In addition, the $`3N`$ effects improve the angular dependence of both observables, thus possibly confirming our findings about the importance of the isoscalar off-shell effects.

###### Acknowledgements.

The work of L.C.
was supported by the “Ministero dell’Universitá e della Ricerca Scientifica e Tecnologica” under the PRIN Project “Fisica Teorica del Nucleo e dei Sistemi a Piú Corpi”. The work of W. Sch. was supported by the Natural Science and Engineering Research Council of Canada.
## 1 Introduction

Recent lattice calculations have given strong evidence for two confinement scenarios: 1. the dual Meissner effect , which is based on a condensate of magnetic monopoles in the QCD vacuum, and 2. the center vortex picture , where the vacuum consists of a condensate of magnetic flux tubes which are closed due to the Bianchi identity. There are also lattice calculations which indicate that the spontaneous breaking of chiral symmetry, which can be related to the topology of gauge fields, is caused by these objects, i.e. by either magnetic monopoles or center vortices . In this talk we would like to discuss the confinement and topological properties of magnetic monopoles and center vortices. We will first discuss the two confinement scenarios based on magnetic monopoles and vortices, respectively, and subsequently investigate the topological properties of these objects. In particular, we will study the nature of the deconfinement phase transition in the center vortex picture. We will also show that in Polyakov gauge the magnetic monopoles completely account for the non-trivial topology of the gauge fields. Subsequently, we will extend the notion of center vortices to the continuum. We will present the continuum analog of the maximum center gauge fixing and the Pontryagin index of center vortices.

## 2 Confinement

The magnetic monopoles arise in Yang-Mills theories in the so-called Abelian gauges . Recent lattice calculations have shown that below a critical temperature $`T_C`$ these monopoles are condensed and give rise to the dual Meissner effect. In particular in the so-called maximal Abelian gauge, where all links are made as diagonal as possible, one observes Abelian and monopole dominance in the string tension . However, very recent lattice calculations also show that the Yang-Mills ground state does not look like a Coulombic monopole gas but rather indicate a collimation of magnetic flux, which is consistent with the center vortex picture of confinement, proposed in refs. , , , . Center vortices represent closed magnetic flux lines in three space dimensions, describing closed two-dimensional world-sheets in four space-time dimensions. The magnetic flux represented by the vortices is furthermore quantized such that a Wilson loop linking vortex flux takes a value corresponding to a nontrivial center element of the gauge group. In the case of $`SU(2)`$ colour the only such element is $`(-1)`$. For $`N`$ colours, there are $`N-1`$ different possible vortex fluxes corresponding to the $`N-1`$ nontrivial center elements of $`SU(N)`$. Center vortices can be regarded as a fraction of a Dirac string: $`N`$ superimposed center vortices form an unobservable Dirac string. Consider an ensemble of center vortex configurations in which the vortices are distributed randomly, specifically such that intersection points of vortices with a given two-dimensional plane in space-time are found at random, uncorrelated locations. In such an ensemble, confinement results in a very simple manner. Let the universe be a cube of length $`L`$, and consider a two-dimensional slice of this universe of area $`L^2`$, with a Wilson loop embedded into it, circumscribing an area $`A`$. On this plane, distribute $`N`$ vortex intersection points at random, cf. Fig. 1 (left).
According to the specification above, each of these points contributes a factor $`(-1)`$ to the value of the Wilson loop if it falls within the area $`A`$ spanned by the loop; the probability for this to occur for any given point is $`A/L^2`$. The expectation value of the Wilson loop is readily evaluated in this simple model. The probability that $`n`$ of the $`N`$ vortex intersection points fall within the area $`A`$ is binomial, and, since the Wilson loop takes the value $`(-1)^n`$ in the presence of $`n`$ intersection points within the area $`A`$, its expectation value is $`<W>`$ $`=`$ $`{\displaystyle \sum _{n=0}^{N}}(-1)^n\left(\begin{array}{c}N\\ n\end{array}\right)\left({\displaystyle \frac{A}{L^2}}\right)^n\left(1-{\displaystyle \frac{A}{L^2}}\right)^{N-n}`$ (3) $`=`$ $`\left(1-{\displaystyle \frac{2\rho A}{N}}\right)^N\stackrel{N\to \mathrm{\infty }}{\longrightarrow }\mathrm{exp}(-2\rho A),`$ (4) where in the last step, the size of the universe $`L`$ has been sent to infinity while leaving the planar density $`\rho =N/L^2`$ of vortex intersection points constant. Thus, one obtains an area law for the Wilson loop, with the string tension $`\sigma _{rvm}=2\rho `$. In fact, in lattice calculations the vortex area density $`\rho `$ has been shown to obey the proper scaling behaviour as dictated by the renormalization group and thus represents a physical observable. Using a string tension of $`\sigma \simeq (440\,\mathrm{MeV})^2`$ as input one finds $`\rho \simeq 3.4\,\mathrm{fm}^2`$ corresponding to a string tension $`\sigma _{rvm}=(521\,\mathrm{MeV})^2`$ in the random model above, which overestimates the input value. This overabundance of string tension can be easily understood by noticing that there are both dynamical and kinematical correlations between the vortex intersection points, which have been discarded in the random vortex model considered above, which assumes that all intersection points are completely random, i.e. uncorrelated. This is, however, not true since the vortices are closed loops in $`D=3`$ or closed surfaces in $`D=4`$. Therefore the intersection points in the plane of the Wilson loop come in pairs. But a pair of intersection points does not (non-trivially) contribute to the Wilson loop. Only for large vortices exceeding the size of the Wilson loop are the intersection points inside the Wilson loop uncorrelated, so that they can contribute $`(-1)`$. On the other hand, all vortices contribute to the area vortex density $`\rho `$ measured on the lattice. This effect leads to a lower value of the string tension than the value $`\sigma _{rvm}=2\rho `$ resulting from the random vortex model.
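The area law (3)-(4) is easy to reproduce with a few lines of Monte Carlo (our sketch; the box size, loop area and sample count are arbitrary choices of ours):

```python
# Monte-Carlo sketch (ours) of the random vortex model described above:
# N intersection points on an L^2 plane, each inside the loop of area A
# with probability A/L^2 and contributing a factor (-1).
import numpy as np
rng = np.random.default_rng(1)

rho, Lbox, A = 3.4, 20.0, 0.25      # density (fm^-2), box side (fm), loop area (fm^2)
N = int(rho * Lbox**2)              # number of random intersection points
p = A / Lbox**2                     # chance for a point to fall inside the loop

n_inside = rng.binomial(N, p, size=200000)   # points inside, per configuration
W = ((-1.0) ** n_inside).mean()

print(f"<W> (Monte Carlo) = {W:.4f}")
print(f"exp(-2*rho*A)     = {np.exp(-2*rho*A):.4f}")   # area law of Eqs. (3)-(4)
```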
## 3 Deconfinement

The above presented vortex picture of confinement naturally explains also the deconfinement transition as a transition from a phase of large vortices percolating throughout space-time to a phase of small vortices, in a sense to be specified more precisely below. Indeed, assume that all vortices have a maximal size $`d`$. Then only the intersection points in a strip of width $`d`$ along the perimeter of the Wilson loop can randomly contribute $`(-1)`$ (while other intersection points come in pairs and hence do not contribute). The expectation value of the Wilson loop is then still given by eq. (3), however, with the full area $`A`$ of the Wilson loop replaced by the area of the strip of width $`d`$ along the perimeter $`P`$ of the Wilson loop, $`dP`$, resulting in a perimeter law $`<W>=\mathrm{exp}(-2\rho dP)`$ (5) implying deconfinement. This picture of the deconfinement phase transition arising in the random vortex model as a transition from a phase of percolating vortices to a phase of small vortices is supported by the lattice calculations. Fig. 2 shows the vortex matter distribution as a function of the vortex cluster extension at various temperatures for a 3-dimensional slice resulting from the 4-dim. lattice by omitting one spatial direction, see also ref. . Far below the critical temperature $`T_C`$ of the deconfinement phase transition a dominant portion of the vortex matter is contained in a big cluster extending over the whole lattice universe. As the temperature rises, smaller clusters are more and more formed, and well above the deconfinement phase transition large vortices have ceased to exist, the connectivity of the clusters is lost and all vortex matter is contained in small clusters. If one analyzes the small vortex clusters dominating the deconfined phase in more detail, one finds that a large part of these vortices wind in the (Euclidean) temporal direction, i.e. the space-time direction whose extension is identified with the inverse temperature. Therefore, one finds that the typical configurations in the two phases can be characterized as displayed in Fig. 3 in a three-dimensional slice of space-time, where one space direction has been left away. Note that Fig. 3 also furnishes an explanation of the spatial string tension in the deconfined phase. A spatial Wilson loop embedded into Fig. 3 (right) can exhibit an area law since intersection points of winding vortices with the minimal area spanned by the loop can occur in an uncorrelated fashion despite those vortices having small extension. Note also the dual nature of this (magnetic) picture as compared with electric flux models . In such models, electric flux percolates in the deconfined phase, while it does not percolate in the confining phase.

## 4 Magnetic monopoles and topology

Spontaneous breaking of chiral symmetry can be triggered by topologically non-trivial gauge fields, which give rise to zero modes of the quarks . Magnetic monopoles and percolated vortices are long range fields and should hence be relevant for the global topological properties. Topological properties of gauge configurations as measured by the Pontryagin index $$\nu =\frac{1}{16\pi ^2}\int \mathrm{Tr}\,F_{\mu \nu }\stackrel{~}{F}_{\mu \nu }=\frac{1}{4\pi ^2}\int d^4x\,\vec{E}(x)\cdot \vec{B}(x)$$ (6) are preferably studied in the continuum theory. For the study of the topological properties of magnetic monopoles in the continuum theory the Polyakov gauge is particularly convenient. In this gauge one diagonalizes the Polyakov loop $$\mathrm{\Omega }(\vec{x})=Pe^{\int _0^T𝑑x_0A_0(x_0,\vec{x})}=V^{\dagger }\omega V$$ (7) which fixes $`V\in SU(2)/U(1)`$, i.e. the coset part of the gauge group, which we assume, for simplicity, to be $`SU(2)`$. Magnetic monopoles arise as defects of the gauge fixing, which occur when at isolated points in space $`\vec{x}_i`$ the Polyakov loop becomes a center element $$\mathrm{\Omega }(\vec{x}_i)=(-1)^{n_i},n_i:\text{integer}$$ (8) The field $`A^V=VAV^{\dagger }+V\partial V^{\dagger }`$ then develops magnetic monopoles located at these points. These monopoles have topologically quantized magnetic charge given by the winding number $$m[V]\in \mathrm{\Pi }_2(SU(2)/U(1))$$ (9) of the mapping $`V(\vec{x})`$ from a sphere $`S_2`$ around the magnetic monopole into the coset $`SU(2)/U(1)`$ of the gauge group.
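As an illustration of the winding number (9) (our example, not from the talk): for a hedgehog configuration, where $`V(\vec{x})`$ rotates the colour direction into the radial unit vector, the degree of the associated map $`S_2\to S_2`$, i.e. the magnetic charge, follows from the standard surface integral:

```python
# Winding number of Eq. (9) for a hedgehog map n(theta,phi) = r-hat, computed
# from the degree formula m = (1/4pi) Int n . (d_theta n x d_phi n) dtheta dphi.
# (Illustrative sketch of ours; the hedgehog is the standard unit-charge example.)
import numpy as np

def n(th, ph):   # S^2 -> S^2, the identity ('hedgehog') map
    return np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])

Nt, Np = 200, 200
dth, dph = np.pi/Nt, 2*np.pi/Np
m = 0.0
for i in range(Nt):
    t = (i + 0.5)*dth
    for j in range(Np):
        p = (j + 0.5)*dph
        dn_dt = (n(t+1e-6, p) - n(t-1e-6, p)) / 2e-6    # numerical derivatives
        dn_dp = (n(t, p+1e-6) - n(t, p-1e-6)) / 2e-6
        m += float(np.dot(n(t, p), np.cross(dn_dt, dn_dp))) * dth * dph
print(f"m = {m/(4*np.pi):.3f}")   # -> 1.000, i.e. unit topological charge
```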
In the Polyakov gauge the Pontryagin index can be exactly expressed in terms of magnetic charges , , . If we assume a compact space-time manifold and that there are only point-like defects of gauge fixing, i.e. magnetic monopoles are the only magnetically charged objects arising after gauge fixing, the Pontryagin index is given by $$\nu =\sum _in_im_i$$ (10) The summation runs here over all magnetic monopoles, with $`m_i`$ being the magnetic charge of the monopole defined by equation (9), and the integer $`n_i`$ is defined by the value of the Polyakov loop at the monopole position (8). This relation shows that the magnetic monopoles completely account for the non-trivial topology of gauge fields, at least in the Polyakov gauge. Unfortunately, in other Abelian gauges like the maximum Abelian gauge, such a simple relation between Pontryagin index and magnetic charges is not yet known and perhaps does not exist . However, in the maximum Abelian gauge correlations between instantons and monopoles have been found, in both analytical and lattice studies .

## 5 Center vortices in the continuum

On the lattice center vortices are detected by going to the maximum center gauge and subsequently projecting the links onto center elements . In the maximum center gauge $$\sum _{x,\mu }(\mathrm{Tr}\,U_\mu (x))^2\to \mathrm{max},$$ (11) which is obviously insensitive to center gauge transformations, one exploits the gauge freedom to rotate a link variable as close as possible to a center element. Once the maximum center gauge has been implemented, center projection implies replacing all links by their closest center element. One obtains then a $`Z(2)`$ lattice which contains $`D-1`$ dimensional hypersurfaces $`\mathrm{\Sigma }`$ on which all links take a non-trivial center element, that is $`U=-1`$ in the case of $`SU(2)`$. The $`D-2`$ dimensional boundaries $`\partial \mathrm{\Sigma }`$ of the hypersurfaces $`\mathrm{\Sigma }`$ represent the center vortices, which, when non-trivially linked to a Wilson loop, yield a center element for the latter. The notion of center vortices can be extended to the continuum theory by putting a given smooth gauge field $`A_\mu (x)`$ on a lattice in the standard fashion, i.e. by introducing the link variables $`U_\mu (x)=\mathrm{exp}(aA_\mu (x))`$. A careful analysis shows that the continuum analogs of the center vortices are defined by the gauge potential , $$𝒜_\mu (x,\mathrm{\Sigma })=E\int _\mathrm{\Sigma }d^{D-1}\stackrel{~}{\sigma }_\mu \,\delta ^D(x-\overline{x}(\sigma ))$$ (12) where $`d^{D-1}\stackrel{~}{\sigma }_\mu `$ is the dual of the $`D-1`$ dimensional volume element. Furthermore, the quantity $`E=E_aH_a`$ with $`H_a`$ being the generators of the Cartan algebra represents (up to a factor of $`2\pi `$) the so-called co-weights, which satisfy $`\mathrm{exp}(E)=Z\in Z(N)`$. Due to this fact the Wilson loop calculated from the gauge potential (12) becomes $$W[𝒜](C)=\mathrm{exp}\left(\oint _C𝒜\right)=Z^{I(C,\mathrm{\Sigma })}$$ (13) where $`I(C,\mathrm{\Sigma })`$ is the intersection number between the Wilson loop $`C`$ and the hypersurface $`\mathrm{\Sigma }`$. The representation (12) is referred to as the ideal center vortex. One should emphasize that the hypersurface $`\mathrm{\Sigma }`$ can be arbitrarily deformed by a center gauge transformation keeping, however, its boundary $`\partial \mathrm{\Sigma }`$, i.e. the position of the center vortex, fixed. Thus for fixed $`\partial \mathrm{\Sigma }`$ the dependence of the gauge potential (12) on the hypersurface itself is a gauge artifact.
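For orientation (our illustration, in one common normalization): in the case of $`SU(2)`$ one may take

$$E=i\pi \sigma _3,\qquad \mathrm{exp}(E)=\mathrm{diag}\left(e^{i\pi },e^{-i\pi }\right)=-\mathrm{𝟏}\in Z(2),$$

so that the Wilson loop (13) indeed picks up the nontrivial center element, $`W=(-1)^{I(C,\mathrm{\Sigma })}`$.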
The dependence on the hypersurface $`\mathrm{\Sigma }`$ can be removed by performing the gauge transformation $`\phi (x,\mathrm{\Sigma })=exp(E\mathrm{\Omega }(x,\mathrm{\Sigma }))`$ (14) where $`\mathrm{\Omega }(x,\mathrm{\Sigma })`$ is the solid angle subtended by the hypersurface $`\mathrm{\Sigma }`$ as seen from the point $`x`$. One finds then $`𝒜_\mu (x,\mathrm{\Sigma })=\phi (x,\mathrm{\Sigma })\partial _\mu \phi ^{}(x,\mathrm{\Sigma })+a_\mu (x,\partial \mathrm{\Sigma })`$ (15) where $`a_\mu (x,\partial \mathrm{\Sigma })=E{\displaystyle \int _{\partial \mathrm{\Sigma }}}d^{D-2}\stackrel{~}{\sigma }_{\mu \nu }\partial _\nu D(x-\overline{x}(\sigma ))`$ (16) depends only on the vortex position $`\partial \mathrm{\Sigma }`$ and is referred to as the “thin vortex”. Here $`D(x-\overline{x}(\sigma ))`$ represents the Green function of the $`D`$ dimensional Laplacian. In fact, one can show that the thin vortex represents the transversal part of the ideal vortex, $`a_\mu (x,\partial \mathrm{\Sigma })=P_{\mu \nu }𝒜_\nu (x,\mathrm{\Sigma })`$, where $`P_{\mu \nu }=\delta _{\mu \nu }-\frac{\partial _\mu \partial _\nu }{\partial ^2}`$ is the usual transversal projector. A careful and lengthy analysis yields that the continuum analog of the maximum center gauge fixing is defined by $`\underset{\mathrm{\Sigma }}{\mathrm{min}}\underset{g}{\mathrm{min}}{\displaystyle \int \text{Tr}(A^g-a(\mathrm{\Sigma }))^2}`$ (17) where the minimization is performed with respect to all (continuum) gauge transformations $`g\in SU(2)/Z(2)`$ (which represent per se coset gauge transformations) and with respect to all vortex surfaces $`\mathrm{\Sigma }`$. For a fixed thin center vortex field configuration $`a(\mathrm{\Sigma })`$ minimization with respect to the continuum gauge transformation $`g`$ yields the background gauge condition $`[\partial _\mu +a_\mu (\mathrm{\Sigma }),A_\mu ]=0`$ (18) where the thin vortex field $`a_\mu (x,\mathrm{\Sigma })`$ figures as a background gauge field. One should emphasize, however, that the background field has to be dynamically determined for each given gauge field $`A_\mu (x)`$ and thus depends on the latter. Obviously, in the absence of a vortex structure in the considered gauge field $`A_\mu (x)`$ the background gauge condition reduces to the Lorentz gauge $`\partial _\mu A_\mu =0`$. ## 6 Topology of center vortices Once the center vortex configurations in the continuum are at our disposal, it is straightforward to calculate their Pontryagin index. In the continuum formulation, where center vortices live in the Abelian subgroup by construction, the direction of the magnetic flux of the vortices is fully kept. The explicit calculation shows that the Pontryagin index $`\nu `$ of the center vortices is given by $`\nu ={\displaystyle \frac{1}{4}}I(\partial \mathrm{\Sigma },\partial \mathrm{\Sigma })`$ (19) where $`I(\partial \mathrm{\Sigma },\partial \mathrm{\Sigma })`$ represents the self-intersection number of the closed vortex sheet $`\partial \mathrm{\Sigma }`$ defined by $`I(\mathrm{\Sigma }_1,\mathrm{\Sigma }_2)={\displaystyle \frac{1}{2}}{\displaystyle \int _{\mathrm{\Sigma }_1}}𝑑\sigma _{\mu \nu }{\displaystyle \int _{\mathrm{\Sigma }_2}}𝑑\stackrel{~}{\sigma }_{\mu \nu }\delta ^4\left(\overline{x}(\sigma )-\overline{x}(\sigma ^{})\right).`$ (20) A careful analysis shows that for closed oriented surfaces the self-intersection number vanishes. In order to have a non-trivial Pontryagin index the vortex surfaces have to be not globally oriented, i.e., they have to consist of oriented pieces. One can further show that at the border between oriented vortex patches magnetic monopole currents flow. 
It is these monopole currents which make the vortex sheet non-oriented, since they change the orientation of the magnetic flux. Thus we find that even for the center vortices the non-trivial topology is generated by magnetic monopole currents flowing on the vortex sheets. This is consistent with our finding in the Polyakov gauge (see eq. (10)), where the Pontryagin index was exclusively expressed in terms of magnetic monopoles . In fact, for a compact space-time manifold one can show that under certain mild assumptions the Pontryagin index can be expressed as $`\nu ={\displaystyle \frac{1}{4}}L(\partial \mathrm{\Sigma },C),`$ (21) where $`L(\partial \mathrm{\Sigma },C)`$ is the linking number between the vortex $`\partial \mathrm{\Sigma }`$ and the monopole loop $`C`$ on the vortex. By implementing the maximum center gauge condition in the continuum one can derive, in an approximate fashion, an effective vortex theory , where the vortex action can be calculated in a gradient expansion. The leading order gives the Nambu-Goto action, while in higher orders curvature terms appear. A model based on such an effective vortex action in fact reproduces the gross features of the center vortex picture found in numerical Yang-Mills lattice simulations. ## Acknowledgment: This work was supported in part by the DFG-Re 856/4-1 and DFG-En 415/1-1.
no-problem/9911/hep-th9911181.html
ar5iv
text
# Introduction ## Introduction Over the last few years there has been a great deal of interest in the possibility that we live on a three dimensional brane embedded in a higher dimensional space. Horava and Witten showed that in the strongly coupled limit $`E_8\times E_8`$ heterotic string theory can be viewed as an 11 dimensional theory on the orbifold $`R^{10}\times S^1/Z_2`$ with gravitons propagating in the bulk and super Yang-Mills fields confined to the two ten-branes that form the boundary of the spacetime. Recently a novel solution to the hierarchy problem was proposed by considering the possibility that $`n`$ of the compactified dimensions are ‘large’. It was shown that the effective 4 dimensional Planck mass $`M_{pl}`$ is related to the 5 dimensional Planck mass $`M`$ via $`M_{pl}^2\sim M^{2+n}V_{(n)}`$, where $`V_{(n)}`$ is the volume of the compactified space. Thus, if the extra dimensions are large enough it is possible to have a small $`M`$ (even on the order of a TeV) with $`M_{pl}\sim 10^{19}`$ GeV. Therefore, the hierarchy problem can be solved by reducing the 5 dimensional Planck mass. For $`M\sim 1`$ TeV they found that $`n>2`$ to avoid conflicts with observations if the standard model fields are confined to a three brane while gravity propagates in the bulk. For $`n=2`$ astrophysical constraints force $`M\stackrel{>}{}100`$ TeV and the size of the dimensions is constrained to be $`\stackrel{<}{}5\times 10^{-5}`$ mm. Two other interesting possibilities were recently suggested by Randall and Sundrum . In their first model they considered a five dimensional spacetime with the fifth dimension, labeled by $`w`$, compactified on $`S^1`$ with $`-w_c\le w\le w_c`$ and with the orbifold symmetry $`w\to -w`$. The brane at $`w=0`$ is a domain wall with positive tension and the brane at $`w=w_c`$ is a domain wall with negative tension. They showed that mass scales on the negative tension brane can be severely suppressed, leading to a solution of the hierarchy problem. Of course, this assumes that we live on the negative tension brane. It was shown by Shiromizu, Maeda, and Sasaki that the effective Einstein field equations on the negative tension brane involve a negative gravitational constant, which means that gravity would be repulsive instead of attractive. However, they showed that one does recover the correct Einstein equations in the low energy limit on the positive tension brane. More recently, it has been shown that the problem with the negative tension brane may disappear if the extra dimension is stabilized by a radion field. In the second scenario of Randall and Sundrum we live on the positive tension brane and the negative tension brane is moved off to infinity. Thus, in this scenario the extra dimension is infinite in extent. As usual the fields of the standard model live on the brane and gravity lives in the bulk. They showed that there is a single gravitational bound state confined to the brane that corresponds to the graviton. They also showed that, even though the extra dimension is infinite, the effective gravitational interaction on the brane is that of a four dimensional spacetime with some very small corrections. Some aspects of the cosmology of branes embedded in higher dimensional spacetimes have been examined. Binétruy, Deffayet, and Langlois (BDL) showed that the scale factor $`a_0`$ on a brane with flat spatial sections satisfies $$\frac{\ddot{a_0}}{a_0}+\frac{\dot{a_0}^2}{a_0^2}=-\frac{k_{(5)}^4}{36}\rho (\rho +3p)$$ (1) where $`k_{(5)}`$ is the five dimensional Einstein constant. 
I have dropped a term in this equation proportional to $`T_{55}`$ by taking the bulk to be empty. As they pointed out, this equation is similar to one of the equations from standard cosmology except that the source terms are nonlinear. This leads to nonstandard cosmological evolution. Csáki, Graesser, Kolda, and Terning then showed that one can get standard evolution at late times if the brane has a cosmological constant $`\mathrm{\Lambda }_b`$. Note that $`\mathrm{\Lambda }_b`$ is different from the bulk cosmological constant, as it is confined to the brane. If $`\mathrm{\Lambda }_b\gg \rho ,p`$ the right hand side of (1) gives the usual source terms plus nonlinear corrections that will become small at late times. One remaining difference between this scenario and the standard one is that equation (1) is second order. In the standard scenario the evolution is governed by a first order equation and the initial condition $`a(t=0)=a_0`$ (if there is an initial singularity $`a_0=0`$) is sufficient to determine the solution for a given equation of state in a $`k=0`$ universe. On the other hand, the $`k=0`$ cosmology on a brane depends on $`\dot{a}(t=0)`$ as well. Specific five dimensional and three-brane cosmologies have also been discussed by various authors . In this paper a general solution is found for a five dimensional orbifold spacetime that gives a $`k=0`$ cosmology on a three-brane. The energy density and pressure on the brane are written in terms of the metric on the brane. It is shown that it is possible to find five dimensional spacetimes that contain a brane with a given metric. This procedure is carried out for an inflationary universe and for a metric that corresponds to a radiation dominated universe in standard cosmology. It is also shown that any $`k=0`$ cosmology can be embedded in a five dimensional flat orbifold spacetime and the embedding equation is found. For an inflationary universe it is shown that the surface is the usual hyperboloid representation of de Sitter space, although it is embedded in an orbifold spacetime. ## Cosmology in the Five Dimensional Space-time Here we consider a five dimensional spacetime with $`w`$ labeling the extra dimension. The fifth dimension is taken to be compactified on $`S^1`$ with $`-w_c\le w\le w_c`$. An orbifold $`S^1/Z_2`$ is produced by identifying $`w`$ and $`-w`$, so that the range of $`w`$ can be taken to be $`0\le w\le w_c`$. The spacetime contains a three-brane at $`w=0`$ and at $`w=w_c`$. These branes will, in general, have non-zero surface energies and pressures. We will take the brane at $`w=0`$ to be the brane we live on and the brane at $`w=w_c`$ will be a hidden brane. To simplify the form of the metric we can use a Gaussian normal coordinate system based on the brane at $`w=0`$. In this coordinate system $`g_{55}=1`$ and $`g_{5\mu }=0`$ ($`\mu =0,1,2,3`$). 
Assuming that the three dimensional spatial sections are flat gives $$ds^2=-b^2(t,w)dt^2+a^2(t,w)[dx^2+dy^2+dz^2]+dw^2.$$ (2) Since it is always possible to write a two dimensional metric in conformally flat form we can take the metric to be $$ds^2=c(t,w)[dw^2-dt^2]+a^2(t,w)[dx^2+dy^2+dz^2].$$ (3) Transforming to retarded and advanced coordinates $`u=t-w`$ and $`v=t+w`$ gives $$ds^2=-c(u,v)dudv+a^2(u,v)[dx^2+dy^2+dz^2]$$ (4) The Einstein field equations $`R_{kl}=\lambda g_{kl}`$ in the bulk give $$c(\partial _u^2a)-(\partial _uc)(\partial _ua)=0,$$ (5) $$a(\partial _u\partial _va)+2(\partial _ua)(\partial _va)=-\frac{1}{4}\lambda ca^2,$$ (6) $$c(\partial _v^2a)-(\partial _va)(\partial _vc)=0,$$ (7) and, $$3c^2(\partial _u\partial _va)+ac(\partial _u\partial _vc)-a(\partial _uc)(\partial _vc)=-\frac{1}{2}\lambda ac^3,$$ (8) where $`\lambda =2/3\mathrm{\Lambda }_{(5)}`$ and $`\mathrm{\Lambda }_{(5)}`$ is the five dimensional cosmological constant. First consider the case $`\lambda =0`$. The above equations are trivial to integrate. The general solution for $`\partial _ua,\partial _va\ne 0`$ is $$a(u,v)=[f(u)+g(v)]^{1/3},$$ (9) and $$c(u,v)=c_1f^{}(u)g^{}(v)[f(u)+g(v)]^{-2/3},$$ (10) where $`f(u)`$ and $`g(v)`$ are arbitrary functions of $`u`$ and $`v`$ respectively and $`c_1`$ is an arbitrary constant. If $`\partial _ua=\partial _va=0`$ then $$a=a_1,\qquad c(u,v)=f(u)g(v).$$ (11) where $`a_1`$ is a constant. Since all of these solutions correspond to Minkowski spacetime on the brane and in the bulk they will not be discussed further. If $`\partial _va=0`$ and $`\partial _ua\ne 0`$ then $$a(u)=h(u),\qquad c(u,v)=c_2h^{}(u)k(v).$$ (12) A similar result holds if $`\partial _ua=0`$ and $`\partial _va\ne 0`$. Transforming (9) and (10) back to $`(t,w)`$ coordinates and imposing the orbifold condition $`w\to -w`$ gives $$ds^2=|\dot{f}(u)\dot{g}(v)|[f(u)+g(v)]^{-2/3}(dw^2-dt^2)+[f(u)+g(v)]^{2/3}(dx^2+dy^2+dz^2)$$ (13) where $`u=t-|w|`$ and $`v=t+|w|`$ and I have absorbed some constants into the definition of $`t`$ and $`w`$. Since the sign of $`c_1`$ cannot be absorbed the coefficients $`g_{tt}`$ and $`g_{ww}`$ should have a $`\pm `$ sign. I have forced the correct spacetime signature by using absolute values instead and by restricting $`f`$ and $`g`$ to be positive. Note that $`f`$ and $`g`$ interchange roles as we cross the brane at $`w=0`$. Transforming (12) back to $`(t,w)`$ coordinates gives $$ds^2=|\dot{h}(u)|k(v)(dw^2-dt^2)+h^2(u)(dx^2+dy^2+dz^2)$$ (14) where $`k`$ has been taken to be positive. This metric corresponds to a flat five dimensional spacetime since the Riemann tensor vanishes. However, the geometry induced on the brane will not, in general, be flat. For notational simplicity the metric in both cases will be written as $$ds^2=n^2(t,|w|)[dw^2-dt^2]+a^2(t,|w|)[dx^2+dy^2+dz^2]$$ (15) where $`n`$ and $`a`$ are given in (13) or in (14). Now consider $`\lambda \ne 0`$. Here $`\lambda `$ will be taken to be positive, as in the Randall-Sundrum model. Eliminating $`c(u,v)`$ from (5) and (7) and integrating gives $$a(u,v)=H[f(u)+g(v)]$$ (16) where $`H`$, $`f(u)`$, and $`g(v)`$ are arbitrary functions. Solving for $`c(u,v)`$ from (5) and (7) gives $$c(u,v)=\alpha f^{}(u)g^{}(v)H^{}[f(u)+g(v)]$$ (17) where $`\alpha `$ is an arbitrary constant. Substituting $`a(u,v)`$ and $`c(u,v)`$ into (6) gives $$HH^{\prime \prime }+2(H^{})^2=-\frac{1}{4}\lambda \alpha H^2H^{}.$$ (18) A first integral of this equation is $$H^2H^{}+\frac{1}{16}\lambda \alpha H^4=c_1$$ (19) where $`c_1`$ is an arbitrary constant. The solution is $$\mathrm{tan}^{-1}[\beta H]-\frac{1}{2}\mathrm{ln}\left[\frac{\beta H+1}{\beta H-1}\right]=c_2-2(f(u)+g(v))$$ (20) where $`\beta =(\frac{\alpha \lambda }{16c_1})^{1/4}`$ and $`c_2`$ is an arbitrary constant. 
Unfortunately $`H`$ cannot be found explicitly from the above solution. To simplify the problem consider the case $`c_1=0`$. The solution then simplifies to $$H=\left[c_2+\frac{\alpha \lambda }{16}(f(u)+g(v))\right]^{-1}.$$ (21) Absorbing $`\alpha \lambda /16`$ and $`c_2`$ into the functions $`f(u)`$ and $`g(v)`$ gives $$ds^2=\frac{16}{\lambda }\frac{f^{}(u)g^{}(v)}{[f(u)+g(v)]^2}[dt^2-dw^2]+\frac{1}{[f(u)+g(v)]^2}[dx^2+dy^2+dz^2]$$ (22) where $`u=t-|w|`$ and $`v=t+|w|`$ as before. ## Cosmology on the Brane I: $`\lambda =0`$ The four dimensional spacetime on our brane will be described by the metric $$ds_{(4)}^2=-|\dot{f}\dot{g}|[f+g]^{-2/3}dt^2+[f+g]^{2/3}(dx^2+dy^2+dz^2)$$ (23) or by $$ds_{(4)}^2=-|\dot{h}|kdt^2+h^2(dx^2+dy^2+dz^2)$$ (24) where $`f=f(t),g=g(t),h=h(t),`$ and $`k=k(t)`$. The energy-momentum tensor on the brane will have the form $$T_n^m=\frac{\delta (w)}{n_0}diag(-\rho ,p,p,p,0)$$ (25) where $`n_0=n(t,0)`$. It can easily be calculated using the jump conditions that follow from the Einstein field equations . BDL have found that $$\frac{[\partial _wa]}{a_0n_0}=-\frac{k_{(5)}^2}{3}\rho $$ (26) and $$\frac{[\partial _wn]}{n_0^2}=\frac{k_{(5)}^2}{3}(3p+2\rho )$$ (27) where $`k_{(5)}`$ is the five dimensional Einstein constant, and $`[]`$ indicates the jump in a quantity, i.e. $$[\partial _wa]=\partial _wa(0^+)-\partial _wa(0^{-}).$$ (28) For the metric given in (13) the energy density and pressure on our brane are given by $$\rho =\frac{2[f+g]^{-2/3}[\dot{f}-\dot{g}]}{k_{(5)}^2\sqrt{|\dot{f}\dot{g}|}},$$ (29) and $$p=\frac{[f+g]^{1/3}}{k_{(5)}^2\sqrt{|\dot{f}\dot{g}|}}\left[\frac{\ddot{g}}{\dot{g}}-\frac{\ddot{f}}{\dot{f}}\right]-\frac{1}{3}\rho .$$ (30) For the metric given in (14) we find that $$\rho =\frac{6}{k_{(5)}^2h}\sqrt{\frac{|\dot{h}|}{k}}sign(\dot{h})$$ (31) and $$p=\frac{1}{k_{(5)}^2}[k|\dot{h}|]^{-1/2}\left[\frac{\dot{k}}{k}-\frac{\ddot{h}}{\dot{h}}\right]-\frac{2}{3}\rho $$ (32) Thus given $`f`$ and $`g`$ or $`h`$ and $`k`$ we can find the metric and the energy-momentum tensor on the brane. For example if $`f(t)=g(t)`$ we find that $$ds_{(4)}^2=-d\tau ^2+\tau (dx^2+dy^2+dz^2)$$ (33) where $`d\tau =|\dot{f}|(2f)^{-1/3}dt`$. This is the spacetime for a radiation dominated universe in the standard scenario. However, as can be seen from (29) and (30) the pressure and energy density on the brane vanish! Note that this is consistent with equation (1) found by BDL. It is also important to note that Minkowski space is also a solution for the spacetime on the brane if $`\rho =p=0`$. Surprisingly, flat spacetime also arises with $`\rho =-3p=(4/k_{(5)}^2)`$sign($`\dot{h}`$) if $`f+g`$=constant. It is important to note that the energy densities and pressures above arise in the five dimensional Einstein equations. If an observer on the brane defined $`T_{\mu \nu }^{(4)}=k_{(4)}^{-2}G_{\mu \nu }^{(4)}`$ the energy densities and pressures obtained would differ from the above expressions. Once we have $`f(t)`$ and $`g(t)`$, which specify the cosmology on the brane, we can extend the solution into the bulk by using (23) or (24). For example if $`f+g=`$constant, which gives flat spacetime on the brane, then $$ds^2=\dot{f}(t-|w|)^2(dw^2-dt^2)+dx^2+dy^2+dz^2.$$ (34) This is flat five dimensional spacetime. If $`f(t)=g(t)=t`$ on the brane the bulk metric is given by $$ds^2=-d\tau ^2+\tau (dx^2+dy^2+dz^2)+\frac{1}{\tau }dw^2,$$ (35) which corresponds to a five dimensional anisotropic Kasner cosmology. Of course, for other choices of $`f`$ and $`g`$ the bulk spacetime metric will depend on $`w`$. 
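As a quick symbolic cross-check of the claim below eq. (33) (my own check, not the paper's, using the reconstructed expressions (29)–(30)), one can verify with sympy that the choice $`f=g`$ indeed gives a vanishing energy density and pressure:

```python
# My consistency check of eqs. (29)-(30): for f(t) = g(t) the brane energy
# density and pressure vanish identically, as claimed for the radiation-form
# metric (33).
import sympy as sp

t, k5 = sp.symbols('t k5', positive=True)
F = sp.Function('F')
f = F(t)
g = F(t)                                      # the case f = g

absfg = sp.sqrt(sp.Abs(sp.diff(f, t) * sp.diff(g, t)))
rho = 2 * (f + g)**sp.Rational(-2, 3) * (sp.diff(f, t) - sp.diff(g, t)) \
      / (k5**2 * absfg)
p = ((f + g)**sp.Rational(1, 3) / (k5**2 * absfg)
     * (sp.diff(g, t, 2) / sp.diff(g, t) - sp.diff(f, t, 2) / sp.diff(f, t))
     - sp.Rational(1, 3) * rho)

print(sp.simplify(rho), sp.simplify(p))       # -> 0 0
```

The same few lines, with an arbitrary second function for $`g`$, can be reused to generate the density and pressure for any other choice of brane data.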
For the remainder of this section I will work with metrics of the form (14). As discussed earlier all metrics of this form correspond to a flat five dimensional spacetime. To see this let $`\overline{u}=h(u)`$ and $`\overline{v}=k_1(v)=\int k(v)𝑑v`$ (for simplicity I have taken $`\dot{h}>0`$). In these coordinates the metric is given by $$ds^2=-d\overline{u}d\overline{v}+\overline{u}^2(dx^2+dy^2+dz^2).$$ (36) The coordinate transformation that takes this to the flat spacetime metric $$ds^2=-d\stackrel{~}{u}d\stackrel{~}{v}+d\stackrel{~}{x}^2+d\stackrel{~}{y}^2+d\stackrel{~}{z}^2$$ (37) is $$\begin{array}{cc}\stackrel{~}{u}=\overline{u}& \\ & \\ \stackrel{~}{v}=\overline{v}+\overline{u}x^2& \\ & \\ \stackrel{~}{x}^k=\overline{u}x^k& \end{array}$$ (38) where $`x^2=x^kx_k`$. This transformation can easily be found by using the null geodesics as coordinate lines. Thus, the transformation from $`(t,w,x^k)`$ to $`(\stackrel{~}{t},\stackrel{~}{w},\stackrel{~}{x}^k)`$ is given by $$\begin{array}{cc}\stackrel{~}{t}=\frac{1}{2\lambda }[k_1(t+w)+(x^2+\lambda ^2)h(t-w)],& \\ & \\ \stackrel{~}{w}=\frac{1}{2\lambda }[k_1(t+w)+(x^2-\lambda ^2)h(t-w)],& \\ & \\ \stackrel{~}{x}^k=h(t-w)x^k,& \end{array}$$ (39) where $`\stackrel{~}{u}=\frac{1}{\lambda }(\stackrel{~}{t}-\stackrel{~}{w})`$, $`\stackrel{~}{v}=\lambda (\stackrel{~}{t}+\stackrel{~}{w})`$, and $`\lambda `$ is a constant with dimensions of length. The orbifold in five dimensions is produced by identifying the points $`\stackrel{~}{x}^\mu (t,x^k,w)`$ and $`\stackrel{~}{x}^\mu (t,x^k,-w)`$, where $`\mu `$ labels the five dimensional spacetime. The surface of the brane can therefore be parameterized by $$\begin{array}{cc}\stackrel{~}{t}=\frac{1}{2\lambda }[k_1(t)+(x^2+\lambda ^2)h(t)],& \\ & \\ \stackrel{~}{w}=\frac{1}{2\lambda }[k_1(t)+(x^2-\lambda ^2)h(t)],& \\ & \\ \stackrel{~}{x}^k=h(t)x^k,& \end{array}$$ (40) where $`(t,x^k)`$ are the coordinates on the surface. This surface can also be described by the equation $$\stackrel{~}{t}^2-\stackrel{~}{w}^2-\stackrel{~}{x}^2-\stackrel{~}{y}^2-\stackrel{~}{z}^2=\left(\frac{\stackrel{~}{t}-\stackrel{~}{w}}{\lambda }\right)k_1\left[h^{-1}\left(\frac{\stackrel{~}{t}-\stackrel{~}{w}}{\lambda }\right)\right].$$ (41) Now given any metric of the form $$ds_{(4)}^2=-dt^2+a^2(t)(dx^2+dy^2+dz^2)$$ (42) it is easy to see that $`h(t)=a(t)`$ and $`k_1(t)=\int \frac{dt}{\dot{a}(t)}`$. Thus, any $`k=0`$ cosmology can be embedded in a flat five dimensional orbifold spacetime. For example consider the inflationary cosmology with $`a(t)=e^{Ht}`$. The equation of the surface is $$\stackrel{~}{t}^2-\stackrel{~}{w}^2-\stackrel{~}{x}^2-\stackrel{~}{y}^2-\stackrel{~}{z}^2=-\frac{1}{H^2}.$$ (43) This is the usual hyperboloid representation of de Sitter space except that it is embedded in an orbifold spacetime. Finally, we will show that it is possible to find a three brane metric and the equation of its surface given the energy density on the brane. Instead of working with the metric in the form (24) it is convenient to work in a coordinate system in which the metric takes the form $$ds_{(4)}^2=-d\tau ^2+a^2(\tau )[dx^2+dy^2+dz^2]$$ (44) where $`d\tau =\sqrt{k|\dot{h}|}dt`$. Equations (31) and (32) become $$\rho =\frac{6}{k_{(5)}^2a}\frac{da}{d\tau },$$ (45) and $$p=-\frac{2}{k_{(5)}^2}\left(\frac{da}{d\tau }\right)^{-1}\frac{d^2a}{d\tau ^2}-\frac{2}{3}\rho .$$ (46) Note that the above are independent of $`k`$ so that $`k`$ will remain arbitrary. 
Inverting (45) gives $$a(\tau )=h_0\mathrm{exp}\left[\frac{k_{(5)}^2}{6}\int \rho (\tau )𝑑\tau \right].$$ (47) Note that $`p`$ is fixed once $`h`$ is known. For example if $`\rho =\rho _0`$= constant then $`a(\tau )=\mathrm{exp}(H\tau )`$ and $`p=-\rho `$ where $`H=k_{(5)}^2\rho _0/6`$. The equation of the surface is given in (43). ## Cosmology on the Brane II: $`\lambda >0`$ From (22) the induced metric on the brane is $$ds_{(4)}^2=\frac{16}{\lambda }\frac{\dot{f}\dot{g}}{(f+g)^2}dt^2+\frac{1}{(f+g)^2}(dx^2+dy^2+dz^2)$$ (48) and the energy density and pressure on the brane for $`f+g>0`$ are given by $$\rho =\frac{3\sqrt{\lambda }}{2k_{(5)}^2}\left[\frac{\dot{g}-\dot{f}}{\sqrt{-\dot{f}\dot{g}}}\right]$$ (49) and $$P=\frac{\sqrt{\lambda }}{2k_{(5)}^2\sqrt{-\dot{f}\dot{g}}}\left[3(\dot{f}-\dot{g})+\frac{1}{2}(f+g)\left(\frac{\ddot{g}}{\dot{g}}-\frac{\ddot{f}}{\dot{f}}\right)\right].$$ (50) Note that $`\dot{f}\dot{g}<0`$ for the metric on the brane to have the correct signature and for the energy density and pressure to be real. As an example consider the spacetime with $`f(t)=1/t`$, $`g(t)=t`$, and $`t>0`$. The metric on the brane is $$ds_{(4)}^2=-d\tau ^2+\frac{1}{4}\mathrm{sin}^2\left(\frac{\sqrt{\lambda }\tau }{2}\right)[dx^2+dy^2+dz^2]$$ (51) where $`\tau =\frac{4}{\sqrt{\lambda }}\mathrm{tan}^{-1}(t)`$. This describes a cosmology in which the universe initially expands and then collapses. The energy density and pressure on the brane are given by $$\rho =\frac{3\sqrt{\lambda }}{2k_{(5)}^2}\left[\mathrm{sin}\left(\frac{\sqrt{\lambda }}{2}\tau \right)\right]^{-1}$$ (52) and $$P=-\frac{2}{3}\rho .$$ (53) It is important to note that the energy density and pressure found above arise in the five dimensional Einstein equations. If an observer on the brane defined $`T_{\mu \nu }^{(4)}=k_{(4)}^{-2}G_{\mu \nu }^{(4)}`$ different results would be obtained. ## Conclusion A general solution was found for a five dimensional orbifold spacetime that induces a $`k=0`$ cosmology on a three-brane. Expressions for the energy density and pressure on the brane were found in terms of the metric on the brane. It was shown that it is possible to find five dimensional spacetimes that contain the brane given the brane metric. This procedure was carried out for scale factors $`a(\tau )=e^{H\tau }`$ and $`a(\tau )=\tau ^{1/2}`$. It was also shown that any $`k=0`$ cosmology can be embedded in a flat five dimensional orbifold spacetime and the embedding equation was found. For an inflationary universe it was shown that the surface is the usual hyperboloid representation of de Sitter space, although it is embedded in an orbifold spacetime.
no-problem/9911/math9911040.html
ar5iv
text
# Representative dynamics ## 1. Representative dynamics ###### Definition Definition 1 Let $`𝕏=𝕏(t)=(X_1(t),\mathrm{\dots },X_m(t))`$ ($`X_i(t)\in \mathrm{Mat}_n(ℂ)`$) be the time-dependent vector of $`m`$ complex $`n\times n`$ matrices. The representative dynamics is a controlled system (with constraints) of the form $$\dot{𝕏}(t)=F(𝕏(t),a(t))$$ $`1`$ with the fixed initial data $`𝕏(t_0)`$, where the control parameter $`a(t)=(𝔄(t),𝕖(t))`$ is a pair consisting of an associative algebra $`𝔄(t)`$ from the fixed class of such algebras $`𝔸`$ and a set $`𝕖(t)=(e_1(t),\mathrm{\dots },e_m(t))`$ of algebraic generators of the algebra $`𝔄`$ (one may require $`𝕖(t)`$ to be an algebraic basis in $`𝔄`$) such that the mapping $`e_i(t)\mapsto X_i(t)`$ may be extended to a representation $`T(t):𝔄(t)\to \mathrm{Mat}_n(ℂ)`$ of the algebra $`𝔄(t)`$ in the matrix algebra $`\mathrm{Mat}_n(ℂ)`$ (this is a constraint on the control $`a(t)`$). Certainly, the claim that (1) is a representative dynamics restricts the choice of the function $`F`$ and of the initial data $`𝕏(t_0)`$, because at each moment $`t`$ any admissible choice of the pair $`(𝔄(t),𝕖(t))`$ should guarantee that the set of admissible pairs remains non-empty in the future. ###### Remark Exercise Describe all representative dynamics with $`m=3`$, $`n=2`$ and the class $`𝔸`$ of all associative algebras $`𝔄`$ with quadratic relations, which are isomorphic as linear spaces to the symmetric algebra $`S^{\bullet }(V)`$ over the linear space $`V`$ spanned by the fixed elements $`e_i`$ of $`𝔄`$ under the Weyl symmetrization mapping. ###### Remark Remark 1 Let us consider the following equivalence on the set $`𝒜`$ of all admissible $`a=(𝔄,𝕖)`$. The pairs $`a_1=(𝔄_1,𝕖_1)`$ and $`a_2=(𝔄_2,𝕖_2)`$ will be equivalent iff the algebras $`𝔄_1`$ and $`𝔄_2`$ are isomorphic under an isomorphism which maps the linear space $`V_1`$ spanned by the elements of $`𝕖_1`$ onto the linear space $`V_2`$ spanned by the elements of $`𝕖_2`$. Then the equivalence divides the time interval $`[t_0,t_1]`$, on which the representative dynamics is considered, into subsets on which the pairs $`a(t)`$ are equivalent. ###### Remark Remark 2 Representative dynamics combines structural and functional features. The former are accumulated in the class $`𝔸`$, and the latter are expressed by the function $`F`$. Both are interrelated. The situation is similar to that in the group theory of special functions . However, the difference is essential: in the representative dynamics the functional aspects are not derived from the structural ones and have an independent origin. ## 2. Dynamical inverse problem of representation theory for controlled systems In the article the dynamical inverse problem of representation theory was considered (see also the review on the general ideology of inverse problems of representation theory). Below this concept will be adapted to controlled systems. The representative dynamics will play a crucial role. ###### Definition Definition 2 Let $$\dot{x}=\phi (x,u),$$ $`2`$ be a controlled system, where $`x`$ is a time-dependent $`m`$-dimensional complex vector and $`u`$ is the control parameter. The dynamical inverse problem of representation theory for the controlled system (2) is to construct a representative dynamics $$\dot{𝕏}=F(𝕏,a)$$ and a function $$a=a(u,x)\text{such that}\phi (x,u)=f(x,a(u,x)),$$ where the operator function $`F`$ is defined by the Weyl (symmetric) symbol $`f`$ as a function of $`m`$ non-commuting variables $`X_1,\mathrm{\dots },X_m`$ \[5:App.1\]. 
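As a concrete toy instance of Definition 1 (my illustration; the algebra, the generators and the flow are assumptions, not taken from the text), one can take $`𝔄`$ generated by three elements with the $`su(2)`$-type quadratic relations $`[e_i,e_j]=2i\epsilon _{ijk}e_k`$, represented at $`t=0`$ by the Pauli matrices, together with the conjugation flow $`\dot{𝕏}=[A,𝕏]`$ for a fixed anti-Hermitian $`A`$. The flow integrates to $`X_i(t)=e^{tA}X_i(0)e^{-tA}`$, so the relations are preserved and $`e_i\mapsto X_i(t)`$ remains a representation for every $`t`$; the constant control $`a(t)=(𝔄,𝕖)`$ is therefore admissible:

```python
# Toy representative dynamics: conjugation flow preserves the su(2)-type
# quadratic relations, so the representation constraint holds for all t.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
X0 = [sx, sy, sz]                    # X_i(0): Pauli matrices

A = 0.7j * sz                        # fixed anti-Hermitian generator
t = 1.3
U = expm(t * A)                      # e^{tA} is unitary, U^{-1} = U^dagger
Xt = [U @ X @ U.conj().T for X in X0]

# the defining relations [X1, X2] = 2i X3 survive the evolution:
comm = Xt[0] @ Xt[1] - Xt[1] @ Xt[0]
print(np.allclose(comm, 2j * Xt[2]))   # True
```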
###### Remark Remark 3 If the controls are absent and the pair $`a(t)=(𝔄(t),𝕖(t))`$ is time-independent, Def. 2 reduces to the definition of the dynamical inverse problem of representation theory of the article . ###### Remark Remark 4 One may consider the dynamical inverse problem of representation theory for games, interactively controlled systems and interactive games (introduced by the author in ). ###### Remark Remark 5 One is able to interpret the correspondence $$\text{controlled system}\to \text{representative dynamics}$$ as a quantization of the former. Such an interpretation is very important for the second quantization of intention fields in the interactive games . ###### Remark Remark 6 If the function $`\phi `$ contains some constants $`c_\alpha `$ then one may interpret them as time-independent variables and include the matrices $`C_\alpha \in \mathrm{Mat}_n(ℂ)`$ instead of them in the operator function $`F`$ (compare with the quantization of constants ).
no-problem/9911/astro-ph9911505.html
ar5iv
text
# Intrinsic AGN Absorption Lines ## Intrinsic AGN Absorption Lines Any gaseous material along our line of sight to distant quasars or, more generally, active galactic nuclei (AGNs) will absorb light according to the amounts and range of ions present. Strong absorption lines are common in rest-frame UV spectra of AGNs due to a variety of resonant transitions, for example the HI Lyman series lines (most notably Ly$`\alpha `$ $`\lambda `$1216) and high-ionization doublets like CIV $`\lambda `$$`\lambda `$1549,1551. The lines are called intrinsic if the absorbing gas is physically related to the AGN, e.g. if the absorber resides broadly within the radius of the AGN’s surrounding “host” galaxy. Intrinsic absorption lines are thus valuable probes of the kinematics, physical conditions and elemental abundances in the gas near AGNs. Studies of intrinsic absorbers have historically emphasized quasar broad absorption lines (BALs), which clearly identify energetic outflows from the central engines. Today we recognize a wider variety of intrinsic lines in a wider range of objects. For example, we now know that Seyfert 1 galaxies (the less luminous cousins of quasars) have intrinsic absorption. We also realize that intrinsic lines can form in a range of AGN environments — from the dynamic inner regions like the BALs, to the more quiescent outer host galaxies $`>`$10 kpc away. One complicating factor is that most AGNs also have absorption lines due to unrelated gas or galaxies far from the AGN (see quasars: absorption lines). Part of the effort, therefore, is to identify the intrinsic lines and locate their absorbing regions relative to the AGNs. Empirical line classifications are a good starting point for this work. ### Empirical Line Types AGN absorption lines are usually classified according to the widths of their profiles. These classes separate the clearly intrinsic broad lines from the many others of uncertain origin. The main empirical classes are 1) the BALs, 2) the narrow absorption lines (NALs), and 3) the intermediate “mini-BALs.” Broad absorption lines (Fig. 1) are blueshifted relative to the AGN emission lines, implying outflow velocities from near 0 km s⁻¹ to as much as $`\sim `$60,000 km s⁻¹ ($`\sim `$0.2$`c`$). A representative line width is $`\sim `$10,000 km s⁻¹, although there is considerable diversity among BAL profiles. Velocity widths $`>`$3000 km s⁻¹ and blueshifted velocity extrema $`>`$5000 km s⁻¹ are usually considered minimum requirements for classification as a BAL. Some BALs have several distinct absorption troughs, while others are strictly “detached” from the emission lines — such that the absorption appears only at blueshifts exceeding several thousand km s⁻¹. Narrow absorption lines (Fig. 2) have widths less than a few hundred km s⁻¹. NALs with absorption redshifts, $`z_a`$, within $`\pm `$5000 km s⁻¹ of the emission redshift, $`z_e`$, are called “associated” (or $`z_a\approx z_e`$) lines because of their likely physical connection to the AGN. In general, NALs can appear at redshifts from $`z_a\approx z_e`$ to $`z_a\approx 0`$. Many of these systems are not related to the AGNs. Figure 1. — Detached BALs in the quasar PG 1254+047. The BALs are labeled just above the spectrum. The wavelengths of prominent broad emission lines are marked across the top. The Flux has units 10⁻¹⁵ ergs s⁻¹ cm⁻² Å⁻¹. Figure 2. — Intrinsic absorption in the quasar PG 0935+417. 
A system of associated NALs is labeled above the spectrum, with open brackets showing the doublet separations. A mini-BAL due to CIV blueshifted by $`\sim `$51,000 km s⁻¹ is labeled below. The Flux has units 10⁻¹⁵ ergs s⁻¹ cm⁻² Å⁻¹. Absorption lines with intermediate widths between the BALs and NALs are increasingly called mini-BALs (Fig. 2). Their strictly blueshifted profiles appear smooth and BAL-like in high resolution spectra, and their centroid velocities span the same range as the BAL troughs (from near 0 km s⁻¹ to almost 0.2$`c`$). Mini-BALs evidently form in outflows similar to the BALs. ### Identifying Intrinsic NALs The first evidence that some NALs are intrinsic came from statistical tendencies, namely, 1) quasar NALs appear with greater frequency near the emission-line redshift, and 2) the strengths of these $`z_a\approx z_e`$ systems correlate with the quasars’ radio properties. The first tendency might be explained by external galaxies clustering around quasars, but the second clearly demonstrates a physical relationship to the quasars themselves. Figure 3. — High resolution blow-up of the CIV NALs in PG 0935+417, revealing multiple components (cf. Fig. 2). The open brackets mark the strongest doublet pairs. The outflow velocities are appropriate for the short-wavelength doublet members. The absorption complex at 1535–1540 Å is intrinsic based on the broad line profiles and doublet ratios that imply partial line-of-sight coverage. Direct evidence for the intrinsic origin of specific NALs has come from spectroscopic indicators, such as 1) time-variable line strengths, 2) well-resolved line profiles that are smooth and broad compared to the thermal velocity, 3) multiplet ratios that imply partial coverage of the background light source(s), and 4) high space densities ($`>`$100 cm⁻³) inferred from the presence of excited-state absorption lines. These properties signal intrinsic absorption because they are most easily understood in terms of the dense and dynamic environments near AGNs. Unrelated intervening absorbers — typically inter-galactic gas clouds or extended galactic halos — should generally be larger, more quiescent, and less ionized for a given gas density. The link between the first 3 properties and the near-quasar environment is further strengthened by the fact that they are common in BALs and mini-BALs. NALs with these characteristics probably also form in outflows; they have been measured at blueshifted (ejection) velocities from $`\sim `$0 to $`\sim `$24,000 km s⁻¹ (e.g. Fig. 3). ### Global Covering Factors and the Ubiquity of Intrinsic Absorbers All varieties of quasars and Seyfert 1 galaxies show some type of intrinsic absorption, but different objects favor different types of absorbers. For example, BALs and other high-velocity ($`>`$3000 km s⁻¹) intrinsic lines appear only in quasars and never in the Seyfert galaxies. BALs also tend to avoid quasars with strong radio emission, while associated NALs seem to favor them. Preferences like these are probably tied to the unique physical conditions, whereby different AGNs drive different types of outflows. Intrinsic lines also do not appear in every individual spectrum. For example, BALs are detected in only $`\sim `$12% of quasars. The detection rate of associated (and so probably intrinsic) NALs in quasars is not well known, but it appears to be roughly similar to that of the BALs. 
Mini-BALs may be somewhat rarer than both BALs and associated NALs in quasars, but they can appear in either “radio-loud” or “radio-quiet” objects. Mini-BALs and intrinsic NALs are both common in Seyfert 1 galaxies, with at least one of these features appearing in 50–70% of the low-luminosity sources. These detection rates (within object classes) depend on viewing angle effects and on the global covering factors of the absorbing gas. If the objects are randomly oriented, the absorption-line detection frequencies should approximately equal the average value of the global covering factor, $`Q\equiv \mathrm{\Omega }/4\pi `$, where $`\mathrm{\Omega }`$ is the solid angle subtended by the absorber as seen from the central light source. In practice, the situation can be more complicated. For example, attenuation by BAL gas might bias quasar samples against the inclusion of sources with BALs. Thus the true global covering factor of BAL regions could be 20–30%, instead of the $`\sim `$12% implied by the detection rate. Nonetheless, the overall conclusion is that intrinsic absorbers are present in all AGNs, with global covering factors similar to the line detection frequencies in each type of object. ### Absorber Geometry A consistent picture of the geometry has emerged in which the absorbing gas resides primarily near the equatorial plane of the AGN accretion disk (Fig. 4). In particular, spectropolarimetric observations show that the continuum light from BAL quasars is more polarized than that from non-BAL quasars. Also, the percent polarization is typically much greater inside BAL troughs compared to the adjacent continuum. These results are understood in terms of light scattering in the disk geometry. Quasars viewed close to edge-on exhibit BALs because our sight-line intersects the absorbing gas. These objects are also more polarized because the direct (unpolarized) light through the BAL region is slightly attenuated; thus the scattered (polarized) light makes a relatively larger contribution to their measured flux. BAL troughs have higher percent polarizations because the direct (unpolarized) light in the troughs is more attenuated than the continuum. Quasars without BALs, viewed from above or below the disk plane, have low polarizations because their direct (unpolarized) light dominates the measured flux. Figure 4. — Schematic absorber geometry. The dotted lines are representative light rays from the continuum source (central dot). The open rectangles mark the broad emission line region (BELR) near the accretion disk plane. The teardrop lobes above and below are putative scattering regions near the disk axis. Extended radio jets, when present, would lie along this axis. This picture is also supported by observations showing that quasars with reddened spectra, presumably caused by dust near the disk plane, are more likely to have BALs and high polarizations. Similarly, studies of radio-loud quasars show that associated NALs are stronger and more common in sources with their radio jets aligned perpendicular to our line of sight (such that their unresolved inner disks are viewed nearly edge-on). ### Basic Physical Properties #### Kinematics One surprising property of intrinsic absorbers is that none of them have changed velocity between observations that now span 10–20 yrs. In one well-studied case, distinct features in a quasar BAL show $`<`$30 km s⁻¹ of movement over 5 yrs in the quasar rest frame, implying an acceleration of $`<`$0.02 cm s⁻².
The outflow speeds of 10,000–20,000 km s⁻¹ are therefore stable to $`<`$0.2% on this time scale. Another general property is the kinematic complexity (e.g. Figs. 1–3). Many intrinsic absorbers have multiple distinct absorption troughs. Sometimes BALs, mini-BALs and/or NALs appear together in the same spectrum, either blended together or at different (non-overlapping) outflow velocities. The narrower lines, i.e. the NALs and mini-BALs, generally have small velocity dispersions compared to the outflow speeds. Evidently, the outflows producing these lines are not continuously accelerated from rest along our line of sight. These properties all present challenges for the physical models discussed below. #### Column Densities and Partial Coverage Estimates of the column densities in each absorbing ion are complicated by the fact that many (most?) intrinsic absorbers only partially cover the background light source(s) along our line(s) of sight. Partial coverage might occur if the absorbing regions are porous or they are smaller than the emission sources in overall projected area (see Fig. 4). In any case, partial coverage leads to unabsorbed flux filling in the bottoms of the measured troughs. The observed line intensities thus depend on both the line-of-sight coverage fraction, $`C_f`$, and the line optical depths as, $$I_v=(1-C_f)I_o+C_fI_oe^{-\tau _v}$$ (1) where $`0\le C_f\le 1`$, $`I_v`$ and $`I_o`$ are the observed and emitted (unabsorbed) intensities, and $`\tau _v`$ is the line optical depth, at each line velocity $`v`$. The first term on the right side represents the unabsorbed light that fills in the troughs. In the limit $`\tau _v\gg 1`$ we have simply, $$C_f=1-\frac{I_v}{I_o}$$ Outside of that limit, we can compare lines whose true optical depth ratios are fixed by atomic constants, such as the HI Lyman lines or doublets like CIV $`\lambda `$$`\lambda `$1548,1550, to determine uniquely both the coverage fractions and the true optical depths across the line profiles. For the commonly measured doublets (like CIV), where the true optical depth ratios are $`\sim `$2, the coverage fraction at each velocity follows from Eqn. 1 such that, $$C_f=\frac{I_1^2-2I_1+1}{I_2-2I_1+1}$$ where $`I_1`$ and $`I_2`$ are the observed line intensities, normalized by $`I_o`$, at the same velocity in the weaker and stronger line troughs, respectively. The column densities, $`N`$, follow from the line optical depths via, $$N=\frac{m_ec}{\pi e^2f\lambda _o}\int \tau _v𝑑v$$ where $`f`$ and $`\lambda _o`$ are the line’s oscillator strength and laboratory wavelength. The integral is performed over all or part of the line profile. This analysis has been applied to well-resolved multiplets in NALs and mini-BALs. The derived coverage fractions range from $`\sim `$10% to 100%. The column densities in commonly measured ions (e.g. HI, CIV, NV, SiIV, etc.) are usually in the range $`10^{13}<N<10^{16}`$ cm⁻², consistent with the usual absence of Lyman edge absorption at 912 Å. Most BALs are too broad and blended for this analysis. They probably have $`C_f<1`$ in general, but the exact values are rarely known. Crude estimates suggest that the range of BAL column densities is perhaps an order of magnitude larger than that of the well-measured NALs. #### Ionization and Total Column Densities Intrinsic absorbers are in general highly ionized. Their lowest levels of ionization are usually characterized by ions such as CIII, NIII, SiIV or CIV. Lower ionization species like SiII, FeII, MgII or CII are rarely present. 
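Returning briefly to the doublet analysis above, it is compact enough to sketch in code (my illustration, not the article's; the forward-model numbers at the end are made up):

```python
# Recover the coverage fraction C_f and the weak-line optical depth from
# the normalized residual intensities I1, I2 of a doublet whose true
# optical depth ratio is ~2, using Eqn. (1).
import numpy as np

def coverage_fraction(I1, I2):
    """C_f = (I1^2 - 2 I1 + 1) / (I2 - 2 I1 + 1)."""
    return (I1**2 - 2.0 * I1 + 1.0) / (I2 - 2.0 * I1 + 1.0)

def weak_line_tau(I1, I2):
    """tau of the weaker line, from I1 = 1 - C_f + C_f exp(-tau)."""
    cf = coverage_fraction(I1, I2)
    return -np.log((I1 - 1.0 + cf) / cf)

# forward check: choose C_f = 0.6, tau = 1.0, synthesize I1, I2, invert
cf_true, tau = 0.6, 1.0
I1 = 1 - cf_true + cf_true * np.exp(-tau)
I2 = 1 - cf_true + cf_true * np.exp(-2 * tau)
print(coverage_fraction(I1, I2), weak_line_tau(I1, I2))   # ~0.6, ~1.0
```

With $`C_f`$ and $`\tau _v`$ in hand, the column density then follows from the integral expression given above.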
The upper limits of ionization are generally not known because higher ions have their resonance lines at shorter (and more difficult-to-measure) wavelengths. Existing data suggest that ionizations up to at least NeVIII are common. The strong radiative flux of the AGN provides a natural ionization source. Theoretical simulations of photoionized plasmas in equilibrium with the AGN radiation field are used to match the measured column densities, and thus derive the overall ionization state(s) and total column densities (in HI + HII). The ionization state is described by an ionization parameter, $`U`$, which is the dimensionless ratio of hydrogen-ionizing photon to hydrogen particle densities in the absorber: $$U\equiv \frac{1}{4\pi R^2cn_H}\int _{\nu _{LL}}^{\mathrm{\infty }}\frac{L_\nu }{h\nu }𝑑\nu $$ where $`n_H`$ is the absorber’s hydrogen density, $`R`$ is its radial distance from the central quasar, $`L_\nu `$ is the quasar luminosity distribution and $`\nu _{LL}`$ is the frequency at the HI Lyman limit. For a “typical” AGN spectral shape we have, $$U\approx 0.3n_{10}^{-1}R_{0.1}^{-2}L_{46}$$ (2) where $`n_{10}`$ is the density in units of 10¹⁰ cm⁻³, $`R_{0.1}`$ is the radial distance in units of 0.1 pc, and $`L_{46}`$ is the AGN luminosity relative to $`10^{46}`$ ergs s⁻¹. Typical $`U`$ values are between $`\sim `$0.01 and $`\sim `$1. Absorbers at the same velocity, or at different velocities in the same spectrum, often have a range of $`U`$ values implying a range of densities or radii in the overall absorbing region. Derived total column densities are typically $`N_\mathrm{H}\sim 10^{19}`$ to $`10^{21}`$ cm⁻² for NAL regions and $`>`$$`10^{22}`$ cm⁻² for the BAL gas. #### Correlated X-Ray Absorption Recent studies have shown that the presence of intrinsic lines in the UV correlates with continuous (bound-free) absorption in soft X-rays. Evidently, the UV lines are just one aspect of the intrinsic absorber phenomenon. The X-ray results are important because they reveal generally higher ionizations, $`U>1`$, and much higher total column densities, by 1–2 orders of magnitude, compared to the UV lines. A critical issue, therefore, is the physical relationship between the UV and X-ray absorbers. The data clearly show that different absorbers (of either type) can have a wide range of physical conditions. Some well-measured UV absorbers, in particular, have ionizations and total column densities that should produce only minimal X-ray absorption. Moreover, the best ensemble data are often inconsistent with a one-zone medium producing all of the measured features. Spatially distinct zones with different ionizations and column densities are at least sometimes present. It is therefore likely that the UV and X-ray absorbers are physically related but not, in general, identical. #### Space Densities and Radial Distances Most constraints on the radial distance come indirectly from estimates of the space density and ionization parameter (via Eqn. 2). The densities are derived in several ways. For example, the absence of broad or blueshifted forbidden emission lines, e.g. \[OIII\] $`\lambda `$5007, suggests (for BAL regions only) that this emission is suppressed by collisional deexcitation at high densities. Some AGNs have absorption lines from excited energy states, implying significant densities to support the upper level populations. 
In other cases, time-variable absorption lines require minimum densities to allow the gas to adjust its ionization structure within the variability time. This last method assumes that the variability time exceeds the recombination time, $`t_{recomb}\approx (n_e\alpha )^{-1}`$, where $`n_e`$ is the electron density and $`\alpha `$ is the recombination rate coefficient. Even if the line variability is not caused by changes in the ionization, it turns out that dynamical limits on clouds moving across our line of sight lead to similar $`R`$ constraints. The overall results imply densities from $`<`$7 cm⁻³ to $`>`$10⁶ cm⁻³ and corresponding distances from $`>`$300 kpc to $`<`$10 pc in different sources. The largest confirmed distances apply only to narrow and non-variable NALs. Lines with clear dynamic signatures, e.g. the BALs, mini-BALs and some NALs, probably form closer to the AGN than even the smallest of these upper limits. The minimum radial distance, $`R_{min}`$, is set by the fact that many intrinsic absorbers, e.g. most BALs, suppress both the continuum and broad line emissions. The absorber distances therefore cannot be much less than the broad emission line region radius, which is known independently to scale with the AGN luminosity, such that, $$R_{min}\approx 0.1\left(L_{46}\right)^{\frac{1}{2}}\mathrm{pc}$$ (3) where $`L_{46}\approx 1`$ is “typical” for quasars. (Values of $`L_{46}`$ can actually range from $`<`$0.001 in Seyfert nuclei to $`>`$100 in the most luminous quasars.) If there is absorbing gas near this minimum radius, the densities could reach $`\sim `$$`10^{10}`$ cm⁻³ (see Eqn. 2). ### Physical Models Most physical models of intrinsic absorbers feature a wind-disk geometry similar to Fig. 4. The outflowing gas stays close to the accretion-disk plane, and we observe it (via blueshifted absorption lines) only if our sight line(s) to the emission source(s) intersect wind material. The disk provides a likely source for the wind material, and its acceleration to high speeds probably occurs via radiation pressure from the central emission source. The outward transfer of angular momentum in these winds might facilitate the inward flow of matter in the accretion disk, thus promoting the growth and fueling of the central black hole. 
Radiative acceleration therefore requires small radii — very roughly similar to the radius of the broad emission line region. The radial scale is important for defining the mass and kinetic energy of outflowing gas. The total wind mass at any instant is, $$M_w1.1Q_{0.1}N_{22}R_{0.1}^2\text{M}\text{}$$ where $`Q_{0.1}`$ is the global covering factor relative to 10%. The mass loss rate, $`\dot{M}_w`$, depends further on the characteristic flow time, $`v/\mathrm{\Delta }R`$, such that, $$\dot{M}_w0.11Q_{0.1}N_{22}\frac{R_{0.1}^2}{\mathrm{\Delta }R_{0.1}}v_4\text{M}\text{}\mathrm{yr}^1$$ where $`v_4`$ is the flow velocity in units of 10<sup>4</sup> km s<sup>-1</sup> and $`\mathrm{\Delta }R_{0.1}`$ is its radial thickness in units of 0.1 pc. The kinetic energy luminosity is $`L_K\frac{1}{2}\dot{M}_wv^2`$, or, $$L_K4\times 10^{42}Q_{0.1}N_{22}\frac{R_{0.1}^2}{\mathrm{\Delta }R_{0.1}}v_4^3\mathrm{ergs}\mathrm{s}^1$$ During a quasar’s lifetime, perhaps $``$10<sup>8</sup> yrs, outflows with these parameter values will eject a total of $``$10<sup>7</sup> M of gas with kinetic energy $``$$`10^{58}`$ ergs. BAL winds might often have total masses, kinetic energies, etc., that are an order of magnitude larger, based on our best estimates of $`Q_{0.1}1`$–3 and $`N_{22}>1`$ for those outflows. One unresolved issue is whether the outflowing gas is smoothly distributed or residing in discrete clouds. This seemingly simple question goes to the heart of the wind physics. It was long believed that the flows consist of many discrete clouds because, for example, a flow with $`N_{22}1`$ and $`\mathrm{\Delta }R_{0.1}R_{0.1}1`$ would have a mean density of only $`n3\times 10^4`$ cm<sup>-3</sup>. This density would lead to $`U`$ values several orders of magnitude larger than expected from the data (Eqn. 2). Moreover, a gas with this high ionization cannot be radiatively accelerated because the ions would be too stripped of electrons to absorb enough incident flux. The flows must therefore be distributed in much denser clouds that fill only part of the wind volume. If these clouds individually have velocity dispersions close to the sound/thermal speed (roughly 15 km s<sup>-1</sup> for a nominal 15,000 K gas), then a smooth BAL profile of width $`10^4`$ km s<sup>-1</sup> requires $`>`$1000 clouds along the line of sight. The main objection to this scenario is that the clouds must be very small, $`<`$10<sup>9</sup> cm across, and they cannot survive as discrete entities without some ad hoc external pressure. An alternative model has emerged in which the flows are, in fact, smoothly distributed with high ionization parameters. The high $`U`$ values, $`>`$100, are reconciled with the data (and with radiative acceleration) by invoking a large column density ($`N_\mathrm{H}>10^{23}`$ cm<sup>-2</sup> for BAL winds) of highly ionized gas at the base of the flow. This gas is not radiatively accelerated because it is too ionized. But its bound-free absorption (in soft X-rays and the extreme UV) greatly “softens” the spectrum seen by the wind material downstream, thereby lowering the wind’s ionization level and facilitating its acceleration. The most compelling aspect of this model is that it provides a physical basis for the observed correlation between UV and X-ray absorption: the winds revealed by the UV lines cannot exist without the shielding X-ray absorber. 
The leading alternative explanation, which simply equates the 2 absorbing regions, presents a serious problem because the high column densities implied by the X-ray absorption might be impossible to radiatively accelerate (Eqn. 4). A serious challenge to all models is the kinematic complexity. It is surprising, for example, that no intrinsic absorbers have shown line-of-sight acceleration. The crossing time for a flow with $`v_4=2`$ and $`\mathrm{\Delta }R_{0.1}=1`$ is only $``$5 yrs, yet repeated observations over this time frame reveal no velocity changes. Just as challenging are the multi-component line troughs and the frequent lack of absorption near zero velocity. These characteristics might be caused by the episodic ejection of discrete “blobs,” or by well-collimated flows that cross our line of sight and thus reveal only part of their full velocity extent. The latter hypothesis seems more likely for at least the broader lines, because their super-sonic velocity dispersions ($`>`$1000 km s<sup>-1</sup>) should quickly dissipate discrete blobs. Collimated accretion disk winds might cross our line of sight if they are driven at first vertically off the disk, before being bent into fully radial motion by the AGN’s radiative force. The collimation and vertical ejection might both be facilitated by magnetic fields running perpendicular through the disk plane. However, multiple line troughs would require multiple collimated outflows. A major problem with this picture is that these intricate flow structures, which are tied to the accretion disk, must remain stable while the disk rotates. A more general problem is determining how much, and what aspects, of the diversity among intrinsic absorbers results simply from orientation effects. Do the various outflows identified by BALs, mini-BALs and intrinsic NALs coexist generally in AGNs? Orientation is clearly not the only factor. For example, the complete lack of high-velocity lines in Seyfert galaxies suggests a luminosity dependence, probably related to the requirements for radiative acceleration. Similarly, the relationships to AGN radio emissions suggest that there is some (unknown) physical connection between the disk-wind properties and the formation of radio jets. ### Element Abundances and Host Galaxy Evolution Our understanding of the element abundances near AGNs is based on general principles of stellar nucleosynthesis and galactic chemical evolution. All of the heavy elements from carbon on up are synthesized from primordial H and He in stars. The amounts of these elements are thus revealing of both the amount of local star formation and the evolutionary status of the galactic environment. The elements near AGNs specifically probe these properties in the centers of big galaxies. For distant quasars, the local abundance evolution might involve some of the first stars forming in collapsed structures after the Big Bang. Abundance measurements from absorption lines are, in principle, quite simple; one has only to apply appropriate correction factors for the ionization to convert the ionic column densities into relative abundances. For example, the abundance ratio of any two elements $`a`$ and $`b`$ can be written simply as, $$\frac{a}{b}=\frac{N(a_i)}{N(b_j)}\frac{F(b_j)}{F(a_i)}$$ where $`N`$ and $`F`$ are respectively the column densities and number fractions of element $`a`$ in ion state $`i`$, etc., in the absorbing gas. The $`F`$ values are generally adopted from photoionization calculations. 
The results from intrinsic NALs, and independently from the broad emission lines, indicate that quasar environments have roughly solar or higher heavy-element abundances out to redshifts $`>`$4. (Results based on the BALs are so far unreliable because of the problems with partial line-of-sight coverage mentioned earlier.) The local star formation must be both rapid and extensive to achieve these high abundances at early cosmic times. In particular, much of the gas must have already collapsed into stars by the time the quasars “turned on” or became observable — just a few billion years after the Big Bang. These findings support prior expectations for the rapid, early-epoch evolution of massive galactic nuclei. ### Bibliography Studies of intrinsic AGN absorption lines began in the 1960s with measurements of BALs and associated NALs in quasars. The status of the field circa 1997 is summarized in numerous articles and reviews in the conference proceedings, Arav A, Shlosman I and Weymann R J (eds.) Mass Ejection from AGN (ASP Conference Series: San Francisco). More recent results on the BALs appear, with further references, in Hamann F 1998 Broad PV Absorption in the QSO PG 1254+047: Column Densities, Ionizations and Metal Abundances in Broad Absorption Line Winds Astrophys. J. 500 798–809. The latest results on intrinsic X-ray absorption are discussed by Brandt W N, Laor A, & Wills B J 1999 On the Nature of Soft X-Ray Weak Quasi-Stellar Objects Astrophys. J. 528 637. The status of the polarization work on BAL quasars is described by Schmidt G D & Hines D C 1999 The Polarization of Broad Absorption Line QSOs Astrophys. J. 512 125, and by Ogle P M, et al. 1999 Polarization of Broad Absorption Line QSOs. I. A Spectropolarimetric Atlas Astrophys. J. Suppl. 125 1–34. A recent summary of intrinsic UV absorption in Seyfert galaxies appears in Crenshaw D M, Kraemer S B, Boggess A, Maran S P, Mushotzky R F, & Wu C-C 1999 Intrinsic Absorption Lines in Seyfert 1 Galaxies: I. Ultraviolet Spectra from the Hubble Space Telescope Astrophys. J. 516 750–768. The work on AGN element abundances is reviewed by Hamann F H & Ferland G J 1999 Elemental Abundances in QSOs: Star Formation and Galactic Nuclear Evolution at High Redshifts Ann. Rev. Astr. Astrophys. 37 487. ### Author’s Credit Frederick Hamann University of Florida
no-problem/9911/hep-ph9911274.html
ar5iv
text
# hep-ph/9911274 UCCHEP/5-99 FSU-HEP-991020 Stop Decays with R–Parity Violation and the Neutrino Mass ## Acknowledgments I am thankful to my collaborators A. Akeroyd, J. Ferrandis, M.A. Garcia-Jareño, M. Hirsch, W. Porod, J.C. Romão, E. Torrente-Lujan, J.W.F. Valle, and especially D.A. Restrepo, without whom this work would not have been possible.
no-problem/9911/cond-mat9911122.html
ar5iv
text
# Orbitally Driven Spin Pairing in the 3D Non-Magnetic Mott Insulator BaVS3: Evidence from Single Crystal Studies ## Abstract Static electrical and magnetic properties of single crystal BaVS<sub>3</sub> were measured over the structural ($`T_S=240`$K), metal–insulator ($`T_{\mathrm{MI}}=69`$K), and suspected orbital ordering ($`T_X=30`$K) transitions. The resistivity is almost isotropic both in the metallic and insulating states. An anomaly in the magnetic anisotropy at $`T_X`$ signals a phase transition to an ordered low-T state. The results are interpreted in terms of orbital ordering and spin pairing within the lowest crystal field quasi-doublet. The disordered insulator at $`T_X<T<T_{\mathrm{MI}}`$ is described as a classical liquid of non-magnetic pairs. Spatial ordering of the occupancy of degenerate electronic orbitals plays an important role in the diverse magnetic phenomena of transition metal compounds. To cite a well-known example: the interplay of magnetic and orbital long range ordering, and strong coupling to the lattice, account for the metal–insulator transitions of the V<sub>2</sub>O<sub>3</sub> system. In contrast, the metal–insulator transition of the $`S=1/2`$, $`3d^1`$ electron system BaVS<sub>3</sub> is associated neither with magnetic long range order nor with any detectable static spin pairing. As an alternative, the possibility of an orbitally ordered ground state was discussed, while other proposals emphasized the quasi-one-dimensional character of the material. The crystal structure is certainly suggestive of a linear chain compound since along the $`c`$ axis, the intrachain V–V distance is only 2.81 Å, while in the $`a`$–$`b`$ plane the interchain separation is 6.73 Å. It is thus somewhat surprising that our present studies show that electrically BaVS<sub>3</sub> is nearly isotropic. This means that BaVS<sub>3</sub> provides one of the few realizations of a Mott transition within the non-magnetic phase of a three-dimensional system. Since this case (or rather its $`D=\mathrm{\infty }`$ counterpart) is much studied theoretically, but scarcely investigated experimentally, a good understanding of BaVS<sub>3</sub> should be valuable for strong correlation physics in general. BaVS<sub>3</sub> has a metal-insulator transition at $`T_{\mathrm{MI}}=69`$K, accompanied by a sharp spike in the magnetic susceptibility. The high temperature phase is a strongly correlated metal with a mean free path of the order of the lattice constant. There is no sign of a sharp Fermi-edge in the UPS/XPS spectra and, instead of a Pauli susceptibility, it exhibits Curie-Weiss-like behavior. Though the magnetic susceptibility is similar to that of an antiferromagnet, no long-range magnetic order develops at the transition. The transition is clearly seen in the thermal expansion anomaly, and in the specific heat. The $`d`$-electron entropy right above $`T_{\mathrm{MI}}`$ is estimated as $`0.8R\mathrm{ln}2`$, and it seems that a considerable fraction of the electronic degrees of freedom is frozen even at room temperature. It appears that the 69K transition is not symmetry breaking: it is a pure Mott transition which does not involve either magnetic order or any static displacement of the atoms. Hints of long range order were found well below $`T_{\mathrm{MI}}`$, at $`T_X=30`$K, in recent NMR and NQR experiments. It was suggested that orbital order may develop here, but it could not be decided whether the $`T_X`$ phenomenon is a true phase transition, or purely dynamical.
In any case, the associated entropy change must be very small since it escaped detection. In this work we prove that there is a phase transition, by finding its signature in static magnetic properties. In order to clarify the nature of BaVS<sub>3</sub>, we performed single crystal experiments, searching for macroscopic anisotropy in the electrical and the magnetic properties. Our results exclude the quasi-1D interpretations and supply direct evidence for the dominant role of $`e(t_{2g})`$ orbitals tilted out from the chain direction. The static magnetic susceptibility $`\chi `$, and more markedly, its anisotropy $`\chi _c-\chi _a`$, show clear anomalies both at $`T_{\mathrm{MI}}`$ and $`T_X`$. Single crystals of BaVS<sub>3</sub> were grown by the tellurium flux method. Powders of BaVS<sub>3</sub> and sublimated tellurium (99.99%, Ventron) were mixed in a molar ratio 1 : 50 and heated up to 1050 <sup>o</sup>C in an evacuated silica ampoule. Then it was slowly cooled down to $`55^o`$C at the rate of $`1^o`$C/hour. The crystals, obtained from the flux by sublimation, have typical dimensions of $`3\times 0.5\times 0.5`$ mm<sup>3</sup>. For the longitudinal ($`c`$-direction) resistance measurement a standard four-probe contact arrangement was applied. The conduction anisotropy was determined by the Montgomery method. Experiments performed on several crystals from various preparation batches showed deviations only at low temperatures, which we attribute to the different purity of the samples (see later). The magnetic susceptibility was measured by a Faraday balance; the anisotropy measurements were carried out on a sensitive torque magnetometer. Figures 1 and 2 show the temperature dependence of the resistivity and the conduction anisotropy of single crystals. BaVS<sub>3</sub> is a “bad metal” with $`\rho _{\mathrm{RT}}=0.7`$m$`\mathrm{\Omega }`$cm. With decreasing $`T`$ a slight change of slope reflects the structural transition at $`T_S=240`$K (Fig. 1, upper panel). The resistivity has a minimum at $`125`$K. A sharp metal-insulator transition sets in at $`T_{\mathrm{MI}}=69`$K (Fig. 2 inset). Below $`T_{\mathrm{MI}}`$ the resistivity increases steeply and it varies 9 orders of magnitude down to about 20K. Crossing $`T_X`$ does not influence $`\rho (T)`$, as is obvious also from the Arrhenius plot (Fig. 2). We deduce a gap $`\mathrm{\Delta }\approx 600`$K for the insulator ($`\mathrm{\Delta }`$ is not well defined due to a slight curvature in the $`\mathrm{ln}\rho `$ vs. $`1/T`$ plot). The overall behavior of $`\rho (T)`$ agrees well with that of high purity polycrystalline samples. The conduction anisotropy, defined as the ratio of conductivities measured along and perpendicular to the chain direction, is surprisingly low, $`\sigma _c/\sigma _a\approx 3`$. It is temperature independent in the metallic phase and there is only a small jump at the metal–insulator transition. Below $`T_{\mathrm{MI}}`$ the anisotropy has the same small value over a broad temperature range down to about 30–40K. Note that in this range of $`T`$ the resistivity increases about 6 orders of magnitude in both directions. The low temperature upturn is related to impurities; the two most different results obtained on different samples are plotted in Fig. 2 (lower panel). It seems that impurity-donated carriers enhance the $`c`$-direction conduction, and for this process the activation energy is about $`70`$K. Figure 3 summarizes the results of the magnetic susceptibility measurements.
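Before turning to the susceptibility data, the Arrhenius estimate quoted above can be illustrated numerically. This is a minimal sketch with synthetic data standing in for the measured resistivity; only the activation form and the $`\mathrm{\Delta }\approx 600`$K scale are taken from the text.

```python
import numpy as np

# Sketch of the Arrhenius analysis: rho(T) ~ rho0 * exp(Delta/T), so Delta
# is the slope of ln(rho) versus 1/T. Synthetic data stand in for the
# measured single-crystal resistivity.
T = np.linspace(25.0, 65.0, 40)          # K, insulating phase below T_MI
rho = 0.7e-3 * np.exp(600.0 / T)         # Ohm*cm, synthetic, Delta = 600 K

slope, _ = np.polyfit(1.0 / T, np.log(rho), 1)
print(f"Delta ~ {slope:.0f} K")          # recovers the input gap, ~600 K
```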
Along the chain direction, $`\chi _c(T)`$ agrees well with previous data on stoichiometric samples; the Curie–Weiss behavior in the metallic phase ($`\mu _{\mathrm{eff}}=1.2\mu _B`$; the small $`\mathrm{\Theta }<10`$K may vary slightly with the range of fit) is followed by a steep decrease in the susceptibility below $`T_{\mathrm{MI}}`$. $`\chi _c`$ and $`\chi _a`$ look very similar (therefore we show only $`\chi _c`$), and both are fairly smooth at $`T_X`$. However, we find a sharp peak in the temperature derivative of the $`a`$-axis susceptibility $`d\chi _a/dT`$, and a sudden break in the anisotropy $`\chi _c-\chi _a`$ at $`T_X\approx 30`$K. These give convincing evidence of a phase transition within the insulating phase. We suggest that the transition temperature $`T_X`$ be defined by the sharp peak in $`d\chi _a/dT`$. Discussing the results, we first interpret the low value of the conduction anisotropy. For the present purposes, we assume that the simple ionic picture holds (Ba<sup>2+</sup>V<sup>4+</sup>S<sub>3</sub><sup>2-</sup> ionic state, $`3d^1`$ configuration). Figure 4a shows the crystal field splitting of the vanadium $`d`$-levels. The sulphur octahedra surrounding nearest neighbor vanadium ions are face-sharing along the $`c`$-axis. Above $`T_S=240`$K, there is a trigonal distortion along the $`c`$-axis; for $`T<T_S`$, an additional orthorhombic component appears. The main effect is the trigonal splitting of the $`t_{2g}`$ level, which lifts the $`d_{z^2}`$ level (the $`z`$-axis being now the trigonal axis) above the two degenerate $`e(t_{2g})`$ orbitals. The remaining degeneracy is lifted by the orthorhombic distortion: below $`T_S`$, the low-energy crystal field states can be thought of as a quasi-doublet with a small splitting. In Fig. 4 the lobes of the low-lying orbitals are shown on the background of the crystal structure.
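As an aside to the Curie–Weiss analysis quoted at the beginning of this discussion, the following sketch shows how $`\mu _{\mathrm{eff}}`$ and $`\mathrm{\Theta }`$ are extracted from a linear fit of $`1/\chi `$ versus $`T`$; the data are synthetic and the CGS molar units are an assumption.

```python
import numpy as np

# Curie-Weiss sketch: chi(T) = C/(T - Theta), with mu_eff [mu_B] ~ sqrt(8*C)
# for C in emu K/mol (CGS molar convention, assumed here). The inputs
# mu_eff = 1.2 mu_B and a small Theta follow the values quoted in the text.
mu_eff, Theta = 1.2, 8.0
C = mu_eff**2 / 8.0                      # emu K/mol
T = np.linspace(80.0, 300.0, 60)         # K, metallic phase above T_MI
chi = C / (T - Theta)                    # synthetic susceptibility

slope, intercept = np.polyfit(T, 1.0 / chi, 1)   # 1/chi = T/C - Theta/C
C_fit, Theta_fit = 1.0 / slope, -intercept / slope
print(f"mu_eff ~ {np.sqrt(8.0 * C_fit):.2f} mu_B, Theta ~ {Theta_fit:.1f} K")
```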
The volume change and the onset of an $`ab`$-plane rearrangement of the V sublattice at the transition indicate that the state of the system changes profoundly, and the effective spin–spin interaction may be completely different from that in the metallic phase. It seems that the insulator phase does not arise from a magnetic instability of the metallic phase. Let us recall the case of V<sub>2</sub>O<sub>3</sub> where the onset of orbital order leads to an insulating state whose magnetic ordering pattern cannot be anticipated from the short range order found in either of the neighboring phases. We propose a similar picture for BaVS<sub>3</sub>: the metal–insulator transition involves both the spin and orbital degrees of freedom, and considering only the effect on spins, it amounts to a change of the spin Hamiltonian. It is the consequence of the frustrated structure (a triangular array of V chains), and of the form of the relevant crystal field states, that the intermediate ($`T_X<T<T_{\mathrm{MI}}`$) phase is not an ordered magnet, but an overall non-magnetic state with peculiar spin and orbital short range order. In order to investigate the electronic structure of the insulating phase, we derived a Kugel-Khomskii-type model starting from the atomic limit, including also the orbital index dependence of the hopping matrix elements, and the spin–orbit coupling. There is a broad range of parameters for which the ground state of an isolated pair of sites is non-magnetic. Further numerical and variational calculations for clusters of up to 24 sites revealed the presence of a large number of ordered low-energy states. Figure 5 gives examples of these various long-period structures which obey characteristic short range rules for the relative orientation of the pairs. Due to the small energy difference between the various configurations, the singlet pair formation at $`T_{\mathrm{MI}}`$ is not accompanied by true long range order, but instead a spin–orbital liquid develops. Thermal averaging over the (exponentially large number of) accessible configurations gives rise to a homogeneous state with low susceptibility. Moreover, in accordance with NMR/NQR results, no static pairing is expected over a broad temperature interval below $`T_{\mathrm{MI}}`$. Long range static order is reached only at the much lower temperature $`T_X`$, as shown by the present susceptibility data, and also by anomalies in microscopic measurements. Due to the pre-existing short range order, this transition is not accompanied by any significant entropy change. The pictures in Fig. 5 may remind us of the resonating valence bond state of frustrated Heisenberg models. We emphasize that the situation is completely different here. Within each pair, those orbitals are occupied which give rise to a strong intrapair exchange, and thus each cluster state belongs to a different effective spin Hamiltonian. Considering the shape of the $`e(t_{2g})`$ orbitals one finds that pair formation does not quite saturate the exchange interaction, but there are residual interactions which govern farther-neighbor correlations, and cause a weak resonance. However, this resonance is much weaker than in the pure $`S=1/2`$ Heisenberg models, and the $`T_X<T<T_{\mathrm{MI}}`$ phase of BaVS<sub>3</sub> is better described as a thermal average over valence bond solids than as a resonating valence bond liquid. The finding of a $`T`$-dependent spin gap, which vanishes at $`T_{\mathrm{MI}}`$, is consistent with our scenario.
The metal–insulator transition affects mainly the spin degrees of freedom: the electronic entropy present at $`T`$ slightly above $`T_{\mathrm{MI}}`$ is almost exhausted by the spin entropy required by the measured $`T>T_{\mathrm{MI}}`$ susceptibility, which can be ascribed to about $`70`$% of the V sites carrying nearly independent localized spins. The primary order parameter of the low temperature phase is the density of the non-magnetic pairs, and the driving force of the transition is the gain of spin entropy. Sizeable orbital order must exist even above the metal–insulator transition (below $`T_{\mathrm{MI}}`$, it becomes additionally stabilized by the singlet formation, as shown by the appearance of an extra component of the orthorhombic distortion). We emphasize, however, that the entropy which would correspond to the complete absence of short range orbital order is certainly not present even at $`T=300`$K. In conclusion, we have shown that electron propagation in BaVS<sub>3</sub> occurs via nearest neighbor interchain hops involving the $`e(t_{2g})`$ orbitals only. The metal–insulator transition was described as a transition to a classical liquid of non-magnetic pairs, which shows spin and orbital short range order. We have proposed that though farther-neighbor inter-pair interactions prefer ordered structures, due to the frustrated triangular structure long range order can develop only well below the metal–insulator transition, at $`T_X`$. Our explanation is in overall agreement with the results of transport, magnetic and thermodynamic experiments. This work was supported by the Swiss National Foundation for Scientific Research and by the Hungarian Research Funds OTKA T025505, FKFP 0355, B10, and AKP 98-66.
no-problem/9911/cond-mat9911343.html
ar5iv
text
# Wetting at Non-Planar and Heterogeneous Substrates ## 1 Introduction The interaction of fluids with solid substrates is attracting new interest as experimental methods allow increasing control over the shape and chemical composition of solid surfaces. The theoretical description of fluid adsorption necessarily involves breaking the fluid translational invariance due to the presence of a wall, representing the solid substrate. Due to the intrinsic difficulty of this, theoretical studies have concentrated mainly on planar and homogeneous substrates, producing a deep understanding of the rich behaviour of those systems. However, non-planar and chemically heterogeneous surfaces exhibit adsorption properties which differ from those of planar and homogeneous systems and require further study. Here, we use an effective interfacial Hamiltonian model to examine the wetting properties of a corrugated substrate, and a planar substrate with a stripe of a chemically different material. ## 2 The Model For simplicity, we only break the symmetry along one of the directions of the wall and, therefore, one coordinate ($`x`$ say) will describe any point on the surface. The free energy of an interfacial configuration is given by the standard effective Hamiltonian $$H[\mathrm{\ell }]=\int 𝑑x\left[\frac{\mathrm{\Sigma }}{2}\left(\frac{d\mathrm{\ell }}{dx}\right)^2+𝒲(\mathrm{\ell };x)\right]$$ (1) where $`\mathrm{\ell }(x)`$ represents the height of the fluid interface, $`\mathrm{\Sigma }`$ is the interface stiffness and $`𝒲(\mathrm{\ell };x)`$ accounts for the (effective) interaction with the substrate. We can now anticipate that the new phenomena occurring for non-planar or heterogeneous systems take place due to the competition of the two terms in the Hamiltonian; whilst the first term forces the interface to minimize its extent, the second is constrained by the intermolecular forces between the particles. The character of these interactions is qualitatively captured by the following expression for the potential $`𝒲`$, $$𝒲(\mathrm{\ell };x)=W_\gamma \left(\mathrm{\ell }-\psi (x)\right)\text{for }x\in \mathrm{\Lambda }_\gamma $$ (2) where $`\psi (x)`$ is the height of the wall at the point $`x`$ and $`W_\gamma `$ is the effective binding potential of a fluid interface on a planar and homogeneous substrate (which extends along $`\mathrm{\Lambda }_\gamma `$). A planar wall corresponds to $`\psi (x)=0`$ whilst, for a chemically homogeneous wall, there is only one region $`\mathrm{\Lambda }`$ and consequently only one binding potential $`W`$. The approximation (2) is appropriate for walls whose non-planarity is not too severe (see later for further quantification). Although effective binding potentials are well described in the literature, we want to outline some of their features for a subsequent discussion. In figure 1, two different types are plotted for different values of the temperature (at bulk liquid-vapour coexistence). For second-order wetting, (a), the potential has a single minimum, located at $`\mathrm{\ell }=\mathrm{\ell }_\pi `$, for $`T<T_W`$. The coverage of the planar system, $`\mathrm{\ell }_\pi `$, diverges at the wetting temperature $`T_W`$. In contrast, for first-order wetting, (b), the potential shows a minimum at $`\mathrm{\ell }=\mathrm{\ell }_\pi `$ and a maximum, the activation barrier, at $`\mathrm{\ell }=\mathrm{\ell }_{\ast }`$. In this second case, the thickness of the adsorbed layer remains finite at the wetting temperature, coexisting with an infinitely thick layer, $`W(\mathrm{\ell }_\pi ^W)=W(\mathrm{\infty })=0`$ (See figure 1).
Furthermore, for a range of temperatures $`T_W<T<T_S`$, where $`T_S`$ is the spinodal temperature, the first-order effective potential still shows a minimum which represents a thin layer metastable with respect to the infinitely thick one, $`W(\mathrm{\ell }_\pi )>W(\mathrm{\infty })=0`$. ## 3 Results We restrict ourselves to a mean-field description of the system and, to calculate the equilibrium profile, we minimise the Hamiltonian (1), which is equivalent to solving the Euler-Lagrange equation: $$\mathrm{\Sigma }\frac{d^2\mathrm{\ell }}{dx^2}=W_\gamma ^{\prime }\left(\mathrm{\ell }-\psi (x)\right)\text{for }x\in \mathrm{\Lambda }_\gamma .$$ (3) The results depend sensitively on the character of the wetting transition on the planar substrate and, therefore, we organize the representative results for non-planar surfaces according to the order of the planar wetting transition. ### 3.1 Non-Planar Walls #### 3.1.1 Second-order wetting binding potentials. First, we consider a homogeneous corrugated wall, with $`\psi (x)=a\mathrm{cos}(2\pi x/L)`$, where $`a`$ represents the corrugation amplitude and $`L`$ the period of the corrugation. We assume that the wavelength $`L\gg a`$, so that the corrugation is relatively weak. In this limit, scaling properties emerge which are correctly captured by the assumption of a vertical height interaction in the effective binding potential (see (2)). Details of this problem for a second-order binding potential have been given elsewhere but we report the results here for the sake of comparison. In this case, the interface undergoes a first-order unbending transition at a temperature $`T`$ below the wetting transition $`T_w`$ provided the corrugation exceeds a certain threshold (See figure 2 (left)). At low temperatures, the interface closely follows the corrugations so that both the interface and the wall have a similar shape. Above the transition temperature, however, the interface is significantly flatter (unbent). The difference between these coexisting states reduces with the corrugation and it disappears at a critical point (●). As pointed out, this transition takes place due to the competition of the two terms of the Hamiltonian (1). Whilst the system minimises the functional at low temperatures by following the shape of the surface (with a large negative contribution of the second term of the Hamiltonian), the unbent configuration reduces the energy by decreasing the (positive) contribution of the first term. Interestingly, the period of the corrugation does not change the structure of the surface phase diagram but acts as a scaling parameter. Within this model, neither the location (at $`T=T_w`$) nor the character of the wetting (unbinding) transition is modified. #### 3.1.2 First-order wetting binding potentials. If the binding potential is first-order, the surface phase diagram is richer and the effect of the corrugation is threefold. First, an unbending transition can also take place as the above-mentioned competition between both terms of the Hamiltonian is still present. However, first-order effective potentials have an activation barrier which allows the interface to adopt a variety of shapes to minimize the energy. These shapes can be characterised by the number of minima of the interface $`\mathrm{\ell }(x)`$ and the change of this number gives rise to a number of non-thermodynamic singularities. These singularities have been studied for a corrugated wall in a related system (in the context of confinement).
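As an illustration of how profiles satisfying Eq. (3) can be obtained in practice, the following is a minimal relaxation sketch for the corrugated wall. The exponential form of the binding potential and all parameter values are illustrative assumptions, not the inputs used for the figures.

```python
import numpy as np

# Damped relaxation toward Sigma * l''(x) = W'(l - psi(x)) on a periodic
# corrugated wall psi(x) = a*cos(2*pi*x/L), with a model binding potential
# W(h) = A*exp(-2h) - B*exp(-h). All parameters are illustrative.
Sigma, A, B = 1.0, 1.0, 0.8
a, L, n = 0.5, 40.0, 400

x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
psi = a * np.cos(2.0 * np.pi * x / L)

def W_prime(h):
    return -2.0 * A * np.exp(-2.0 * h) + B * np.exp(-h)

l = psi + 1.0                     # initial guess: interface follows the wall
for _ in range(50000):            # explicit gradient descent on H[l]
    lap = (np.roll(l, 1) - 2.0 * l + np.roll(l, -1)) / dx**2
    l += 0.1 * dx**2 * (lap - W_prime(l - psi) / Sigma)

# Small residual amplitude signals the "unbent" branch; an amplitude close
# to the wall's signals the "bent" branch.
print(l.max() - l.min())
```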
Figure 3 (left) shows schematically the possible configurations of the interface. Note that, in this case, the interface shape can deviate significantly from the wall shape and adopt configurations which are not found with a second-order effective potential. The third effect of the corrugation on this type of surface comes from the fact that the energy of the unbound state (i.e., the wet configuration, $`\mathrm{\ell }=\mathrm{\infty }`$) is always zero. Therefore, the wettability is favoured by any positive contribution to the Hamiltonian and the wetting temperature is reduced. This effect is absent in the second-order wetting potential because the interface always finds a configuration whose positive contribution (first term of the Hamiltonian) is lower than the negative (second term) and the balance remains negative for any corrugation. The presence of the activation barrier makes this compromise impossible and the wetting transition takes place at a lower temperature in the corrugated system than in the planar one, although the nature of the transition is still first-order. Figure 3 (right) shows a typical variation of the wetting temperature as a function of the corrugation amplitude (in units of that of the planar system). The periodicity of the corrugation, as in the previous case, only acts as a scaling parameter (this is a property of the Hamiltonian (1)). Note that the temperature drops as a function of the corrugation amplitude, $`a`$, but it presents a minimum for $`a\approx 1.5`$ (in units of the liquid bulk correlation length). For larger values of $`a`$, the tendency is inverted and the wetting temperature increases slightly and tends to a certain constant value. This behaviour can be traced back to the varying shape the interface can adopt to minimize the energy. As a result of this non-monotonic variation of the wetting temperature with corrugation, the shape of the interface at the wetting transition itself shows similar sensitivity. ### 3.2 Heterogeneous Walls We now focus our attention on (planar) heterogeneous walls. As mentioned, we study an infinite homogeneous and planar substrate (called 2) with a stripe of constant width $`L`$ of a chemically different material (called 1), represented by two different effective potentials. This geometry allows us to concentrate only on the structural changes of the interface due to the heterogeneity since the infinite substrate 2 governs the wetting behaviour of the whole system. The influence of heterogeneity on the wetting properties of a substrate (for instance, due to a periodic array of stripes) is a more complex problem which requires the prior understanding of this simpler system. Without loss of generality, we can consider that homogeneous substrates 1 and 2 undergo first and second-order wetting transitions respectively. The structure of the phase diagram of this system does not depend on the order of the wetting transition of the infinite system. In fact it depends mainly on just two quantities: the stripe width, $`L`$, and the mismatch between the effective potentials of both substrates (at fixed temperature), i.e., the difference between the thickness of the adsorbed layers in the infinitely homogeneous systems, $`\mathrm{\Delta }\equiv \mathrm{\ell }_\pi ^{(2)}-\mathrm{\ell }_\pi ^{(1)}`$. Figure 2 (right) shows a schematic phase diagram as a function of these variables. As expected, the interface is flat if the mismatch is zero ($`\mathrm{\Delta }=0`$,— — —).
For small values of $`\mathrm{\Delta }`$, positive or negative, the thickness of the interface above the heterogeneity roughly behaves as in the infinite system. For narrow stripes ($`L\to 0`$), however, we found that the interface also flattens when $`\mathrm{\Delta }`$ reaches the value $`\mathrm{\Delta }_{\ast }\equiv \mathrm{\ell }_{\ast }^{(1)}-\mathrm{\ell }_\pi ^{(1)}`$ (— — —). At that point, the minimum of potential 2 matches the maximum of potential 1, $`\mathrm{\ell }_\pi ^{(2)}=\mathrm{\ell }_{\ast }^{(1)}`$. Surprisingly, the flat configuration is stable even though it would be unstable for an infinite system. Nevertheless, if the stripe width exceeds a certain value, the flat configuration becomes metastable with respect to a non-flat one. This is due to the crossing of a line of generalized unbending transitions (——) which arises from the ubiquitous balance between the two terms in Hamiltonian (1). This line ends at a critical point $`(\mathrm{\Delta }_c,L_c)`$ (●). Note that the configurations along the coexistence line correspond to those of an unbending transition as mentioned above if $`\mathrm{\Delta }_c<\mathrm{\Delta }<\mathrm{\Delta }_{\ast }`$ but, for $`\mathrm{\Delta }>\mathrm{\Delta }_{\ast }`$, the coexisting states are interfaces bent in opposite directions. This second part of the line has its origin in the existence of an activation barrier of the effective potential of the heterogeneity. At this point, we note that the structure of the phase diagram when the stripe effective potential is second order (no activation barrier) can be intuitively obtained from that in figure 2 (right) by considering $`\mathrm{\Delta }_{\ast }\to \mathrm{\infty }`$, thus recovering a usual unbending transition. As a last remark, we want to discuss the twofold role of the temperature in this description. On the one hand, the phase diagram deforms (although the general features are conserved). On the other, the value of $`\mathrm{\Delta }`$ varies. We can anticipate that from these two competing tendencies the phase diagram can show a complex behaviour including reentrant phases. CR acknowledges financial support from the European Commission under contract ERBFMBICT983229.
no-problem/9911/nucl-ex9911001.html
ar5iv
text
# A Compact Solid State Detector for Small Angle Particle Tracking ## 1 Introduction An experimental program has been started at the MAMI tagged photon beam facility to verify the Gerasimov-Drell-Hearn (GDH) sum rule. A detailed description of the physical motivations is reported in refs. The experiment requires the measurement of the total absorption cross section of circularly polarized photons on longitudinally polarized nucleons over a wide range of energy. The GDH sum rule is related to the difference of the two helicity dependent cross sections, considering target and beam spins parallel and antiparallel. As a consequence a large angular and momentum acceptance detector is needed to reduce extrapolations and thus minimize systematic errors in the total cross section evaluation. The DAPHNE detector already meets most of the above requirements, covering 94% of the total solid angle ($`21^{\circ }\le \vartheta \le 159^{\circ }`$; $`0^{\circ }\le \phi \le 360^{\circ }`$). However, in order to increase the acceptance in the angular region $`\vartheta <21^{\circ }`$ an upgrade of the detector was necessary. Since the DAPHNE structure includes a mechanical frame which shadows particle transmission in the region $`5^{\circ }<\vartheta <21^{\circ }`$, the extension of the acceptance into this range required a set-up inside the frame. For angles $`\vartheta <5^{\circ }`$ beam-halo interference is a problem unless the detector is placed far downstream of the target. Thus the forward-angle system has two components: MIDAS, described here and in refs., covers the region $`7^{\circ }<\vartheta <16^{\circ }`$ and is placed deep inside the DAPHNE frame, close to the target, while the STAR detector and a scintillator-lead device are placed externally to the frame covering the region $`2^{\circ }<\vartheta <5^{\circ }`$. Between the two devices an aerogel and N<sub>2</sub> gas Čerenkov detector enables electron detection with a threshold of about 15 MeV and an efficiency $`>`$99%. Its anticoincidence signal is used to suppress electromagnetic background from the trigger. The full experimental set-up is shown in Figure 1. ## 2 Detector design The volume around the target was very limited, so we decided to use silicon detectors in order to obtain a compact geometry for MIDAS. MIDAS includes two parts (see Figure 2): a tracking section consisting of two annular double-sided silicon detectors (V<sub>1</sub> and V<sub>2</sub>) and an annular silicon/lead sandwich (Q<sub>1</sub>, Pb, Q<sub>2</sub>, Pb, Q<sub>3</sub>) for energy measurement. A central hole allows the primary $`\gamma `$ beam to go through. The p-side of the tracking detectors is divided into 48 concentric rings, while the n-side is segmented into 16 radial sectors. The signals are carried out by a flexi-rigid cable consisting of 5 overlapping layers, each screened by a grounded wire-netting structure. The remaining detectors Q<sub>1</sub>, Q<sub>2</sub> and Q<sub>3</sub> are single-sided with the p-side segmented into 4 radial sectors (quadrants). All detectors were manufactured by Micron Semiconductor Ltd, UK. Their main geometrical parameters are listed in table 1. The detectors and the lead absorbers were mounted inside an aluminum tube fitted into the forward hole of the DAPHNE detector. The detector signal cables exit from the downstream opening of the tube, leaving the $`\vartheta <5^{\circ }`$ region free for the particle transmission towards the other forward devices.
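The quoted solid-angle figures are easy to verify: for full azimuthal coverage, the fraction of $`4\pi `$ between two polar angles is $`(\mathrm{cos}\vartheta _1-\mathrm{cos}\vartheta _2)/2`$. A quick numerical check, using only the angles quoted above:

```python
import numpy as np

# Solid-angle fraction covered between polar angles t1 and t2, assuming
# full azimuthal coverage: Omega/(4*pi) = (cos t1 - cos t2) / 2.
def coverage(t1_deg, t2_deg):
    t1, t2 = np.radians([t1_deg, t2_deg])
    return 0.5 * (np.cos(t1) - np.cos(t2))

print(coverage(21.0, 159.0))   # ~0.934, consistent with the ~94% quoted
print(coverage(7.0, 16.0))     # ~0.016, the extra forward coverage of MIDAS
```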
The tube was closed at both ends by 25 $`\mu `$m thick aluminum windows and supplied with an argon flow of a few $`\mathrm{\ell }`$/h to ensure that the detectors' current is stable and does not depend on atmospheric impurities or humidity. ## 3 Principle of operations GEANT simulations were used to optimize the geometrical set-up by taking into account the experimental conditions at MAMI energy. MIDAS provides the following functions: 1. charged-hadron triggering; 2. charged-particle tracking; 3. discrimination between e<sup>±</sup>, $`\pi ^\pm `$ and p. ### 3.1 Hadron trigger Two trigger conditions were implemented to detect protons and pions with the minimum background contamination. The most important source of background is due to electrons (and positrons) from pair production and Compton scattering processes occurring in the experimental target. The double lead sandwich, whose total thickness corresponds to $`\sim 3`$ radiation lengths, was designed to absorb most of these electrons. On the contrary, almost all the pions and high energy protons at forward angles are completely transmitted, giving a signal from all detectors. Therefore a good trigger for high energy hadrons was obtained by the coincidence of the three quadrant detectors Q<sub>1</sub>, Q<sub>2</sub>, Q<sub>3</sub>. The low energy protons stopped inside MIDAS produce a Q<sub>1</sub>-Q<sub>2</sub> coincidence. These particles have a large energy loss rate, so that in this case detection thresholds could be set high enough to cut most of the electromagnetic background. Simulations showed that the above triggers allow the detection of protons and pions with energies $`T_\mathrm{p}>`$60 MeV and $`T_\pi >`$50 MeV respectively, while electromagnetic background is suppressed with an efficiency of about 99%. The remaining background comes mainly from pair production processes. In most of the cases only one of the two electrons enters MIDAS, while the other is emitted at a very small angle and then detected by the Čerenkov counter (see Figure 1). These events were eliminated by requiring an anticoincidence between the Čerenkov signal and the MIDAS trigger. With this configuration the background rate was reduced by about 3 orders of magnitude and kept comparable to the pion rate. Further suppression was performed off-line as discussed in Section 5.2. ### 3.2 Tracking Particle tracking is performed by the V<sub>1</sub> and V<sub>2</sub> detectors. A charged particle passing through one of these detectors produces a signal from a single ring and a single sector. The intersection between the hit ring and the hit sector determines the impact point of the particle on the detector. Then two detectors allow the complete reconstruction of the particle trajectory. The polar and azimuthal angular resolutions depend on the ring pitch and on the sector opening angle respectively, which were chosen to give $`\sigma (\vartheta )\approx 1^{\circ }`$ and $`\sigma (\phi )\approx 10^{\circ }`$. ### 3.3 Particle identification Most of the protons and pions entering MIDAS have momenta $`p\lesssim 1`$ GeV/c, so they can be identified by measuring their $`\mathrm{\Delta }E/\mathrm{\Delta }x`$. In particular we apply the Range Method, previously developed for DAPHNE and adapted to the MIDAS geometry. A particle entering V<sub>1</sub> can cross several layers before stopping, or go through Q<sub>3</sub> if its energy is high enough, with the signal from each detector giving the particle's $`\mathrm{\Delta }E/\mathrm{\Delta }x`$ along its path.
In the Range Method we use the Bethe-Bloch equation to calculate the mean energy loss of a given particle type (proton or pion) in the different MIDAS layers, as a function of the particle kinetic energy. Then a Least Squares Fit is used to determine the kinetic energy value which minimizes the difference between the measured and calculated values of the $`\mathrm{\Delta }E/\mathrm{\Delta }x`$ samples. The analysis of the $`\chi ^2`$ distribution of the fit allows the identification of the particle. ## 4 The electronics A block diagram of the electronics is shown in Figure 3. A detailed description can be found in ref. The silicon detectors are biased by a CAEN SY527 system, remotely controlled via VME. The readout electronics is based on the Fastbus module FEMC described in refs. These modules receive the differential signals from the preamplifiers and provide the shaping, the track & hold, the multiplexing and the digitization of the signals. There are four FEMC modules used in the electronics set-up; each module has 80 channels divided into 5 groups (rows), and each channel is equipped with a hybrid shaper, consisting of a (RC)<sup>2</sup>CR filter. It is followed by a hold circuit allowing the 80 signals to be multiplexed and serially digitized by a single 12-bit ADC. After the first shaping stage a signal for trigger logic purposes is output. A double discriminator circuit allows two different threshold levels to be set. Data from the ADCs are read and decoded by an interface, which contains the control logic, and then stored in a buffer memory under the control of a digital signal processor (DSP), Motorola type DSP56001. These data can be transferred to a host computer either via the VME bus or via the serial port of the DSP. The first method, being faster, was used in the experimental data taking, in which a master VMEbus computer handles the readout of the different sections of the whole apparatus. The second method was adopted to test MIDAS independently from the rest of the experimental apparatus. ## 5 Tests and performances A series of runs at the MAMI tagged-photon beam facility allowed the set-up of the entire apparatus. Photoreactions induced on a hydrogen target were used to test the response of the different detector sections. In particular the response of MIDAS to pions and protons was studied using the double pion photoproduction reaction $`\gamma \mathrm{p}\to \mathrm{p}\pi ^+\pi ^-`$. A clean sample of events with negligible background was obtained by a triple coincidence, detecting two charged particles in DAPHNE and one particle in MIDAS. Then two cases are possible: 1. the two pions are detected in DAPHNE and the proton in MIDAS; 2. the proton and one pion are detected in DAPHNE and the other pion in MIDAS. The reaction kinematics is completely defined when the emission angles $`\vartheta `$ and $`\phi `$ of both particles detected in DAPHNE are measured, together with one of their energies. In both of the above cases it was possible to calculate precisely the energy and direction of the third particle by using DAPHNE only, since this detector has very good angular and momentum resolutions.
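A schematic version of this fit procedure is sketched below. The true analysis evaluates the Bethe-Bloch formula for each silicon and lead layer; here a toy $`1/\beta ^2`$ energy loss, updated layer by layer, stands in for it, so the layer constants and the energy grid are purely illustrative.

```python
import numpy as np

# Toy Range Method: scan the kinetic energy T that best reproduces the
# measured per-layer energy losses, once per mass hypothesis, and compare
# the resulting chi^2 values. A 1/beta^2 loss replaces Bethe-Bloch here.
M_P, M_PI = 938.3, 139.6                 # proton, pion masses (MeV/c^2)
LAYERS, K, THICK = 5, 60.0, 0.01         # illustrative layer constants

def expected_dE(T, mass):
    out = []
    for _ in range(LAYERS):              # sequential losses: slow particles
        beta2 = 1.0 - 1.0 / (1.0 + T / mass) ** 2 if T > 0 else 1.0
        dE = min(K / beta2 * THICK, T) if T > 0 else 0.0
        out.append(dE)                   # lose more per layer, and can stop
        T -= dE
    return np.array(out)

def best_fit(measured, mass, sigma=0.3):
    Ts = np.linspace(5.0, 600.0, 600)
    chi2 = [np.sum(((measured - expected_dE(T, mass)) / sigma) ** 2) for T in Ts]
    i = int(np.argmin(chi2))
    return chi2[i], Ts[i]

measured = expected_dE(105.0, M_P)       # fake proton sample, T = 105 MeV
chi2_p, T_p = best_fit(measured, M_P)
chi2_pi, _ = best_fit(measured, M_PI)
print("proton" if chi2_p < chi2_pi else "pion", f"T ~ {T_p:.0f} MeV")
```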
The contributions of the uncertainties induced by the DAPHNE resolutions to the distribution widths are $`\sigma (\vartheta )\approx 0.6^{\circ }`$ and $`\sigma (\phi )\approx 3^{\circ }`$. They can be easily subtracted to give the overall MIDAS resolutions of $`\sigma (\vartheta )\approx 1.4^{\circ }`$ and $`\sigma (\phi )\approx 12^{\circ }`$. ### 5.2 Energy calibration and particle identification Most of the selected pions entering MIDAS are relativistic and release almost the same energy ($`E\approx 0.4`$ MeV) in each silicon detector. On the contrary, proton energy losses range from about 0.5 MeV to several MeV, allowing the study of the response of the detectors at different energies. The Bethe-Bloch formula was used to calculate the energy released by protons, of a given incident energy, in each detector. The correlation between the pulse height and the calculated energy loss determined the detector calibration. The response of the Q<sub>1</sub> detector to $`T\approx 125`$ MeV monoenergetic protons is shown in Figure 5. The mean energy loss is $`E\approx 1.25`$ MeV, so its measured distribution width is $`\mathrm{\Delta }E/E\approx 34`$% (FWHM). The energy loss width of the same detector Q<sub>1</sub> as a function of the proton energy is shown in Figure 6. The distribution width is completely dominated by straggling, due to the small detector thickness. Straggling is the most serious limitation in particle identification using $`\mathrm{\Delta }E/\mathrm{\Delta }x`$ techniques. However MIDAS provides $`\mathrm{\Delta }E/\mathrm{\Delta }x`$ measurements with enough resolution to discriminate protons and pions over most of the energy range covered in the experiment. As an example, the stopping power of particles measured by Q<sub>3</sub> is shown in Figure 7 as a function of the particle momentum. The pion and proton regions are clearly separated and discrimination is possible up to a momentum $`p\approx 700`$ MeV/c. The Range Method (RM) was used to obtain particle discrimination and energy reconstruction, as described in Section 3.3. The fit procedure was applied to the selected particles and a cut on the $`\chi ^2`$ distribution was established in order to have good proton identification efficiency and minimum contamination of pions. Pion contamination, estimated by counting pions fulfilling the fit condition for protons, is significant only at high energy (3% for protons above $`p=500`$ MeV/c). The proton energy resolution was obtained by comparing the value calculated by kinematics with that reconstructed by the RM. The difference of the two for protons with kinetic energy around $`T\approx 105`$ MeV is shown in Figure 8. The curve is a gaussian fit whose parameters are reported in the upper box of the plot. The width of the distribution is $`\sigma \approx 6`$ MeV, which corresponds to a kinetic energy resolution $`\mathrm{\Delta }T/T=14`$% (FWHM). The pion energy cannot be reconstructed with the RM because the stopping power of these particles is nearly independent of energy. In addition, the separation between pions and the residual electromagnetic background, discussed in Section 3.1, cannot be fully achieved by $`\mathrm{\Delta }E/\mathrm{\Delta }x`$. However electrons and positrons, producing a shower in the lead absorbers, exhibit an energy loss spectrum different from pions, as shown in Figure 9. In this case the electron sample was obtained by requiring a coincidence between the Čerenkov signal and MIDAS, in order to detect e<sup>+</sup>e<sup>-</sup> events. It turns out that we can reject about 40% of electrons while only losing 10% of pions by selecting particles with an energy loss E$`<`$0.6 MeV in a single detector.
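The subtraction mentioned at the start of this section is performed in quadrature, assuming independent Gaussian contributions. A one-line check; the combined widths used here are back-computed for illustration, not quoted in the text:

```python
import numpy as np

# Quadrature subtraction: sigma_MIDAS = sqrt(sigma_total^2 - sigma_DAPHNE^2).
for s_tot, s_daphne in [(1.52, 0.6), (12.4, 3.0)]:   # degrees, illustrative
    print(np.sqrt(s_tot**2 - s_daphne**2))           # ~1.4 and ~12 degrees
```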
The same condition, applied to all the detectors Q<sub>1</sub>, Q<sub>2</sub> and Q<sub>3</sub>, produces the suppression of 70% of electrons while losing 20% of pions. ## 6 Conclusions The MIDAS device meets the main needs of the GDH experiment for which it was built. The measured angular resolutions ($`\sigma (\vartheta )\approx 1.4^{\circ }`$ and $`\sigma (\phi )\approx 12^{\circ }`$) are in reasonable agreement with the expected values (see Section 3.2), with discrepancies attributable to multiple scattering. The Range Method analysis provides a powerful tool to identify protons with $`<`$3% pion contamination and to measure their energy with good resolution ($`\mathrm{\Delta }T/T=14`$% at $`T=105`$ MeV) by the simultaneous use of all the available measurements of the particle energy losses. The e<sup>±</sup> background was suppressed in the trigger logic by 3 orders of magnitude and further reduced off-line by 70% by applying a cut on the energy loss spectra. Under these conditions the background contamination of the pion count rate is still about 50%. This value is acceptable, since the main goal of our experiment is the measurement of the hadronic reaction asymmetry, and electromagnetic processes give a net null contribution to it. On the other hand, the background contamination in the small angular region of MIDAS does not significantly increase the statistical error of the measurement. We acknowledge the careful work of the electronic and mechanical workshops of INFN-sezione di Pavia in the realization and assembling of the MIDAS set-up. In particular we are indebted to V. Arena, G. Bestiani, E.G. Bonaschi, D. Calabrò, A. Freddi, G. Iuvino, P. Ventura, F. Vercellati.
no-problem/9911/physics9911061.html
ar5iv
text
# About Charge Density Wave for Electromagnetic Field-Drive ## I Introduction Within the framework of classical electrodynamics, it has been shown how an electromagnetic propulsive force and, in particular, an electric (conservative) propulsive force can be generated without propellent mass and external couplings by using two confined, time-varying, neutral and macroscopic charge density waves (CDW). These CDW own a same symmetry axis, are adequately separated in space and have a relative temporal phase-shift. This last one controls the propulsive force’s intensity. From far fields point of view, these CDW are able to induce an asymmetry into the space distribution of the far fields momentum variation rate along the symmetry axis. They can do that because the relative temporal phase-shift controls the space distribution of constructive and destructive interferences of far fields produced by the two CDW . So, this relative temporal phase-shift controls the asymmetry. When this last one is created, an electromagnetic propulsive force along the symmetry axis is generated and applied on both CDW in a same direction. Such propulsive effect is impossible in statics because fields’ interferences can be produced only with time-varying fields. Because this propulsive force is generated by a spatial asymmetry in the (electromagnetic) field, it is a propulsion driven by the electromagnetic field or more simply an electromagnetic field-drive (EFD). In our first paper we have used the CDW concept in a theoretical way. Actually, nothing has been said about the material or the conductive fluid needed to sustain a neutral macroscopic charge density wave. The only thing we have mentioned was this CDW is a longitudinal (i.e. $`\varphi `$ direction in cylindrical coordinates) charge oscillation mode, it has a wave number “n”, it oscillates at frequency $`\omega `$ and it is pinned (circular standing wave) inside a ring made with an electrical conductor. In this simplified model, we have used two identical planar filiform rings with radii R’, placed in vacuum and separated by a distance D along the z axis. Planes of rings were perpendicular to the z axis; the symmetry axis, the thrust axis. In a more realistic way, rings have a cross section $`R_o`$ smaller than D and R’ according to section 4 in . However, we have never mentioned that a relation must exist (dispersion relation) between n and $`\omega `$ and what is this relation. Furthermore, what are needed conditions to support and maintain a time-varying CDW able to create the desired propulsive effect? Is it possible to use solid rings? Metallic ones? Or what else? In this work, we would like to give preliminary and partial answers to some of those questions. ## II A Longitudinal Plasma Mode A time-varying longitudinal CDW involves a time-varying longitudinal charge separation among opposite charges. In that case, there must be a restoring force among these charges and consequently, this creates a collective oscillation mode (i.e. longitudinal plasma mode) at plasma frequency $`\omega _p`$ . So, to sustain a large amplitude of charge separations in a neutral conductor or, more generally, in a conductive “fluid” and then support and maintain sources of large electric fields, our frequency $`\omega `$ must be close to (at least equal or greater than) the “resonant” frequency $`\omega _p`$. Thus, we will get an appropriate CDW (n$``$0) if each neutral conductive fluid of our two rings is a neutral plasma. 
The other reason to use $`\omega >\omega _p`$ is this. In a sense $`\omega _p`$ can be considered as a cut-off. So, if $`\omega `$ is greater than this cut-off, fields created by the conductive fluid in a given ring will penetrate deeply inside the conductive fluid of the other ring to create a propulsive effect throughout the ring’s cross section for non-filiform rings (i.e. a torus, for instance). Actually, if $`\omega <\omega _p`$, fields generated by one ring will remain near the surface of the other; they will be mostly reflected by this one and they will be nearly zero inside of it except at its surface. In such a case, the thrust’s amplitude will be limited and restricted to the rings’ surface. In addition, this will increase the probability of cold emission like in a metal (see below) because fields must be relatively strong (i.e. at least about 100 kV) to get a good thrust. So, things like that can reduce the propulsive effect. According to the model in , the value of $`\omega `$ must be in the radio frequency or TV range. Consequently, our plasma must have a $`\omega _p`$ in these ranges too. However, if n$`=`$0 there are no charge separations at all; we have only a uniform longitudinal current on each ring. In this last case, we don’t need a longitudinal plasma mode; a neutral conductive fluid with a $`\omega _p`$ much larger than $`\omega `$ can be used. But let’s remember this: if n$`=`$0 the propulsive force has no electric contribution (i.e. no conservative part), only a magnetic one (i.e. a dissipative part, because radiative), and we know that this last contribution has a poor efficiency according to section 6 in . For our purpose and for now, at least four limits or conditions must be considered in a neutral plasma. The first is related to the collision rate $`f`$. Our macroscopic time-varying CDW is a collective oscillation mode; a longitudinal plasma mode. Collisions break “coherence” among charges’ motions and then break the collective oscillation, and the CDW itself can be destroyed. To get collective oscillations, we must have $`f\ll \omega _p=(e^2n_o/m_eϵ_o)^{1/2}`$ (SI units); $`m_e`$ is the electron’s effective mass, e the electron’s charge, $`n_o`$ the electron density when n$`=`$0 (for an electronic plasma with heavy positive ions as a uniform background) and $`ϵ_o`$ is the vacuum electrical permittivity. We have to mention that $`f`$ increases with $`n_o`$ and temperature (see below). The second limit is associated with the wave number or the wavelength of the charge density wave in a conductive fluid. For us, this is related to “n”. There is an upper limit for this wave number. Above this limit, the CDW cannot oscillate; the damping (i.e. Landau damping) is too strong and thus the CDW does not exist (it’s too “viscous”). In a nondegenerate conductive fluid, like an ionized gas with relatively small density of electrons and ions for instance, this upper wave number is the Debye wave number $`k_D`$, given by $`k_D^{-1}\approx (T_e/n_o)^{1/2}`$ cm ($`n_o`$ in cm<sup>-3</sup>). $`T_e`$ is the electrons’ temperature (in Kelvin); a measure of their mean kinetic energy. An electron within $`k_D^{-1}`$ cannot move easily (“viscous” area) but outside, it can. So, if the wavelength of our CDW is larger than $`k_D^{-1}`$, the damping will be weak or absent, and this CDW will survive and be able to oscillate.
In a degenerate neutral conductive fluid, like an electron gas in a solid metal at low temperature (i.e. low compared to the Fermi energy $`E_F`$), $`k_D`$ is replaced by the Fermi wave number $`k_F`$. In that case, the typical kinetic energy is $`E_F`$, not $`T_e`$. In our situation, we need a neutral plasma with $`\omega _p\approx 100`$ MHz (radio frequency as order of magnitude) and a $`k_D^{-1}`$ (or $`k_F^{-1}`$) smaller than about $`10^{-2}`$ cm. $`10^{-2}`$ cm is a lower limit for the wavelength of our CDW; a macroscopic length scale for which our classical approach in is certainly correct. With solid alkali metals like Li, Na, etc. or solid noble metals like Cu, Ag, Au, $`k_F^{-1}`$ respects the above condition. For example, for solid copper (Cu) at room temperature ($`\approx `$ 300K), $`k_F^{-1}\approx 10^{-8}`$ cm. But the problem with solid metals like the alkali (or noble) ones is that their $`\omega _p`$ belongs to the ultraviolet frequency range ($`\approx 10^{15}`$Hz). The reason for such a big value is a large $`n_o`$ ($`\approx 10^{22}`$ cm<sup>-3</sup>) and a very small effective mass of the charge carrier (i.e. the electron). So, solid metals can be used only if n $`=`$ 0 (i.e. uniform currents on rings) according to the above discussion ($`\omega <\omega _p`$). For instance, if n $`=`$ 0, we could use two metallic solid tori (i.e. planar rings with cross section $`R_o`$ smaller than D and R’ according to above), fixed apart at distance D by some adequate insulators and placed in good vacuum at “room temperature”. However, one possible problem with metals is cold emission; when fields applied over a metallic crystal become relatively strong, electrons (carriers) can be expelled outside the crystal by “quantum tunneling”. In that case, the charge carriers’ momentum won’t be given to the whole crystal along the thrust axis, so the momentum transfer efficiency will be diminished and then the propulsion too. Furthermore, with metals we will have $`\omega <\omega _p`$ and, as mentioned above, this is limiting. With n $`\ne `$ 0, we need something else. For instance an ionized gas; a neutral conductive gas formed by electrons and ions with a smaller electron density: $`n_o\approx 10^8`$ cm<sup>-3</sup>. In that case this neutral plasma has a $`\omega _p`$ in the range that we want, according to its expression given above. On the other hand, we want a relatively “cold” plasma because we wish to satisfy the condition $`f\ll \omega _p`$ and also because we want to avoid any complications about plasma confinement (“walls”). For example, let’s consider a temperature $`T_e`$ between 1000K and 10000K. Such values for $`T_e`$ and $`n_o`$ give us a $`k_D^{-1}\approx 10^{-3}`$ to $`10^{-2}`$ cm according to the above expression, so they give us a classical plasma (i.e. a nondegenerate electron gas where classical statistics can be applied) quite similar to the ionosphere’s one. Actually, at 90km into the ionosphere, the collision rate is $`f\approx 10^6`$Hz and at 300km, $`f\approx 10^3`$Hz. Such values respect the preceding inequality between $`f`$ and $`\omega _p`$. This doesn’t mean the ion species we need must be the same as the ionosphere’s ones. The best ion species to use is another issue. But it shows that such a kind of plasma exists. So, a priori, a neutral ionized gas with relatively “low” temperature, $`10^3`$ to $`10^4`$K, and low electron density, $`n_o\approx 10^8`$ cm<sup>-3</sup> (i.e. a cold plasma), could be a good candidate for our purpose when n $`\ne `$ 0.
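These orders of magnitude are easy to check numerically. A short sketch in SI units, using the text's approximation for $`k_D^{-1}`$ in cm:

```python
import numpy as np

# Check of the electronic cold-plasma numbers above. Note that the text
# quotes omega_p in angular-frequency units (rad/s read as "MHz").
e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12

def omega_p(n_cm3, m=m_e, q=e):
    """Plasma frequency (rad/s) for a carrier density n in cm^-3."""
    return np.sqrt(q**2 * n_cm3 * 1e6 / (m * eps0))

def debye_cm(T_K, n_cm3):
    """Debye screening length in cm, in the text's approximation."""
    return np.sqrt(T_K / n_cm3)

print(omega_p(1e8))                            # ~5.6e8 rad/s, the "564 MHz"
print(debye_cm(1e3, 1e8), debye_cm(1e4, 1e8))  # ~3e-3 to 1e-2 cm
```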
Let’s take an example to get an order of magnitude of the propulsive force when a cold plasma gas is under consideration. Let us consider a lithium gas with electron density $`n_o\approx 10^8`$ cm<sup>-3</sup> and electron temperature $`T_e\approx 5000`$K. According to the above expressions, $`\omega _p\approx 564`$MHz and $`k_D^{-1}\approx 7\times 10^{-3}`$ cm. By simplicity, let’s imagine all atoms of lithium are ionized such as Li $`\to `$ Li<sup>+</sup> $`+`$ e<sup>-</sup>. The atomic weight of Li is about 6.9 a.m.u., so the lithium mass density is about: $`n_o\times 6.9\times 1.66\times 10^{-27}`$kg $`\approx 10^{-18}`$kg/cm<sup>3</sup>. (Of course, this doesn’t take into account the mass of the confining “walls”). The mass of Li<sup>+</sup> is about $`10^4`$ times larger than that of $`e^{-}`$. So, the Li<sup>+</sup> ion is at rest compared to e<sup>-</sup>; only electrons move at frequency $`\omega `$ along the $`\varphi `$ direction. Now, to get an order of magnitude of the propulsive force, we can use the Coulomb force expression. The Coulomb force is one of the main contributions (the conservative part) to the thrust in . So, in these conditions, if we consider a small volume of 1cm<sup>3</sup> of charges on each ring (or torus), the force we can get between these small volumes if D $`=`$ 0.1m (the same order of magnitude as the one used in ) is given approximately by (1cm<sup>3</sup>)<sup>2</sup>$`\times (n_o^2e^2/4\pi ϵ_oD^2)\approx 10^{-10}`$N. This evaluation is a maximum one because it doesn’t take into account destructive interferences among fields produced by positive and negative charges in a same CDW and applied over charges in the other CDW. The reason for such a small force is the relatively small value of $`n_o`$. If we increase $`n_o`$, the condition $`k_D^{-1}<10^{-2}`$ cm will always be satisfied but certainly not $`\omega _p\approx 100`$MHz. However, if we use an “ionic plasma” instead of an “electronic” one as in the above example, we will have $`\omega _p=(q^2n_o/m_iϵ_o)^{1/2}`$ and $`k_D^{-1}\approx (T_i/n_o)^{1/2}`$ cm, where $`n_o`$ is now the ion density, $`T_i`$ the ion temperature, $`m_i`$ the reduced mass of the ions and $`q`$ their charge. Consequently, if $`n_o`$ is increased, we can keep $`\omega _p`$ fixed if we take an appropriate reduced mass $`m_i`$ larger than $`m_e`$. Let’s give an example. Let’s take Li $`+`$ Cl $`\to `$ Li<sup>+</sup> $`+`$ Cl<sup>-</sup>. The chlorine ion Cl<sup>-</sup> is about 5 times heavier than the Li<sup>+</sup> ion, so $`m_i\approx m_{Li}=11.4\times 10^{-27}`$kg, $`q=e`$ and $`T_i\approx T_{Li}`$. As before we take the same temperature $`T_{Li}\approx 5000`$K. Now, to get the same plasma frequency, $`\omega _p\approx 564`$MHz, we must take $`n_o\approx 1.3\times 10^{12}`$cm<sup>-3</sup>. In that case, $`k_D^{-1}\approx 6.2\times 10^{-5}`$cm and (1cm<sup>3</sup>)<sup>2</sup>$`\times (n_o^2e^2/4\pi ϵ_oD^2)\approx 3.9\times 10^{-2}`$N with the same D as before. However, the condition $`f\ll \omega _p`$ is not respected. We can evaluate $`f`$ by using its expression for an ideal gas (i.e. low density and pressure). One has $`f\approx n_o\overline{v}\sigma _{Cl}=n_o(8k_BT_{Li}/\pi m_{Li})^{1/2}\sigma _{Cl}\approx 6.2`$GHz.
$`k_B`$ is Boltzmann's constant, $`\sigma _{Cl}`$ $``$ $`\pi `$($`k_{D}^{-1}`$)<sup>2</sup> is the scattering cross section of the screened chlorine ion, and $`\overline{v}`$ is the mean speed of the lithium ion; this velocity is close to the relative velocity between the lithium and chlorine ions. Finally, the mass density is $`n_o`$($`6.9`$$`+`$$`35.4`$)$`\times `$$`1.66`$$`\times `$ $`10^{-27}`$ kg $``$ $`9.1`$$`\times `$$`10^{-14}`$ kg/cm<sup>3</sup>. So, as we can see, the choice of ion species is quite important. The neutral plasma gas must be ionized by some external source (at the beginning at least), but because the temperature is relatively small, after some time recombination occurs between electrons and ions (or between ions), and radiation (named secondary here) is emitted. The primary radiation is the one emitted by the longitudinal plasma oscillations of both CDWs at frequency $`\omega `$ $`\genfrac{}{}{0pt}{}{>}{}`$ $`\omega _p`$. Other kinds of secondary radiation can also be emitted, like braking radiation (bremsstrahlung) and spectral radiation coming from excited (not ionized) atoms. Recombination among charges implies that a third limit has to be considered in our neutral plasma. This limit is given by $`f_r`$ $`\ll `$ $`\omega _p`$, where $`f_r`$ is the recombination rate between negative and positive charges. Clearly, this quantity depends on the electron (or ion) density $`n_o`$ and on the electron (or ion) temperature $`T_e`$ (or $`T_i`$). $`f_r`$ increases when the temperature decreases, because the kinetic energy of opposite charges (i.e. their thermal energy) becomes smaller than their potential energy (i.e. mutual attraction). This is why the temperature, on the other hand, cannot be too small.

## III Anisotropic Conductive Gas

According to the model given in our first work, charges must be well confined along the z direction (i.e. the thrust direction) and, in some restricted regions, along the $`\rho `$ direction (i.e. “filiform” rings). So, constraints have to exist to maintain the charges in these limited areas along those directions. These constraints also have to ensure the momentum transfer from the charges to the confining “walls”, especially along z. In that sense, the conductive fluid (or gas) must be strongly anisotropic: charges can move easily along $`\varphi `$ but should be nearly “at rest” along the z and $`\rho `$ directions. Now, to get an appropriately anisotropic conductive gas (an ionic, cold plasma gas), the cross-section radius $`R_o`$ of a ring (or torus) must be smaller than or equal to $`k_{D}^{-1}`$; the fourth limit is therefore $`R_o`$ $`\genfrac{}{}{0pt}{}{<}{}`$ $`k_{D}^{-1}`$ $``$ $`6.2`$ $`\times `$ $`10^{-5}`$ cm (using the preceding value for the chlorine-lithium gas), i.e. a “micro-torus” with a relatively large radius R'. The reason is this: any charge within $`k_{D}^{-1}`$ of the heavier ion (the Cl<sup>-</sup> in our previous example) is in a “viscous” region. This is true for the Li<sup>+</sup> ions and for the induced dipoles of the dielectric “walls” (see below). Consequently, with the above limit, relative motion between Cl<sup>-</sup> and Li<sup>+</sup> along z and $`\rho `$ is quite well suppressed, and the same holds between Cl<sup>-</sup> and the dipoles it induces in the internal surface of the dielectric walls along those directions. In addition, the wall of this micro-torus must be a good dielectric. The neutral ionized gas will fill the micro-torus. The dielectric wall must be transparent to both the primary and the secondary radiation.
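Collecting the operating limits discussed so far in one place may be useful; the helper below is ours, and the factor used to read “$`\ll `$” is an arbitrary illustrative choice.

```python
def check_limits(omega, omega_p, f_coll, f_rec, R_o_cm, kD_inv_cm, lam_cm=1e-2):
    """The four operating limits for a CDW sustained in a cold plasma:
    drive frequency omega at or above omega_p (for an n != 0 CDW),
    collision rate f_coll << omega_p, recombination rate f_rec << omega_p,
    torus cross-section R_o <= k_D^-1, with k_D^-1 below the CDW
    wavelength scale lam_cm. The 0.1 factor is an arbitrary reading of '<<'.
    """
    return {
        "omega >~ omega_p":     omega >= omega_p,
        "f_coll << omega_p":    f_coll < 0.1 * omega_p,
        "f_rec << omega_p":     f_rec < 0.1 * omega_p,
        "R_o <= k_D^-1":        R_o_cm <= kD_inv_cm,
        "k_D^-1 <= lambda_CDW": kD_inv_cm <= lam_cm,
    }

# Chlorine-lithium example of the text (the recombination rate is invented):
print(check_limits(omega=6e8, omega_p=5.6e8, f_coll=6.2e9,
                   f_rec=1e6, R_o_cm=5e-5, kD_inv_cm=6.2e-5))
# -> the collision-rate condition fails, as found above.
```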
This is obvious for the primary fields, according to the above: the fields must reach the gas. But it is also important for the secondary radiation, in order to maintain a fixed temperature and to reach and sustain an equilibrium between ionization and recombination. Furthermore, this dielectric wall must be able to withstand high mechanical stress and relatively high temperature.

## IV Conclusion

In this paper, a well-confined neutral ionized gas at relatively low density and temperature (i.e. a nondegenerate conductive gas; a “cold plasma”) is proposed as a substrate in which a CDW (n $`\ne `$ 0) can be sustained; the CDW is needed to produce a conservative propulsive force, according to the model given in our first work. At present, a cold plasma is probably the most appropriate material able to create a conservative propulsive force and meet the conditions given in this paper. But plasma stability, plasma confinement, momentum transfer from the accelerated charges to the confining “walls” along the thrust axis, the choice of the best ion species, and the dispersion relation are certainly complicated issues to deal with in the near term. In addition, the fourth condition is difficult to satisfy from a technological point of view today. On the other hand, as shown in our first work, this model (i.e. rings with the specific charge and current density distributions used; the CDW) has a poor efficiency. For all of these reasons, modifications to this model (i.e. to the charge distributions) are needed to get a more efficient and realistic near-term EFD.
# GRLite and GRTensorJ: Graphical user interfaces to the computer algebra system GRTensorII ## Introduction GRLite and GRTensorJ are first- and second-generation graphical user interfaces to the computer algebra system GRTensorII. They allow students and researchers in the area of General Relativity and related fields to perform symbolic calculations either locally or through the Web. GRLite is a calculator-style tool for evaluating more common tensors and scalars, reducing them to elementary functions. GRTensorJ provides fully customizable symbolic procedures without recompilation. These interfaces, which are open source, are written in Java and will run on any platform with browser support for JDK1.1. ## I Description Any user can initiate a GRLite or GRTensorJ session remotely on the Web by logging onto the GRTensorII home page and following the links. All that is needed is a browser with support for JDK1.1. The user initiates a session by clicking on the Java applet that opens the GRLite or GRTensorJ graphical user interface. They appear as in figures 1 and 2. Behind the scenes, a computer algebra session in GRTensorII is started automatically. GRLite and GRTensorJ use MapleV as the algebraic engine. They are in fact compatible with any engine that can output an ASCII stream. ### I.1 GRLite GRLite is a proof-of-concept study. It is perhaps most appropriate for beginning students, as it is restricted to classical tensor analysis. The commands are performed through a predefined menu and selection of buttons. The first step is to select a spacetime. This choice automatically displays the components of the metric tensor. The second step is to simply click on the object to be calculated. GRLite includes the 35 pre-defined functions shown in figure 1. After the calculation is displayed the user can apply to it eight simplification procedures from the menu. There is also some additional support, such as buffer clearing and a help system. GRLite comes with a set of pre-defined spacetimes, but also offers a graphical sub-interface for entering new spacetimes defined for a given session. The development of GRLite is complete, but it will be maintained in the interim. Our current development effort concentrates on GRTensorJ. Figure 1 GRLite Graphical User Interface. Calculation of the differential invariant $`R_{abcd;e}R^{abcd;e}`$ for the Kerr metric. At the time of writing, this calculation executes in about one second on a contemporary PC with MapleV Release 5.1 and GRTensorII 1.76. ### I.2 GRTensorJ GRTensorJ provides all the functionality of GRLite and much more. It has a different architecture that allows it to be expanded and programmed by the user. The commands are accessed through menu and sub-menu selections. The first step is to select a spacetime in coordinates or in tetrads. Then, the user can select the object(s) to be calculated from the corresponding menus. After the result of the calculation is displayed the user can apply to it any simplification procedure supported by the engine. The help system is built into the menu system as shown in figure 2. GRTensorJ comes with a selection of spacetimes and a set of pre-defined commands. Further, the user can define new metrics, tetrads and procedures through the GRTensorII definition facilities. These are entered through a sub-interface with the option of saving them on the user's disk space (locally) or in temporary space when used over the internet.
When a session begins, GRTensorJ reads a directory on the server named TextSheets (not to be confused with “worksheets”) and builds the menu and sub-menus for the interface from the underlying structure. All sub-directories of TextSheets will appear as primary menu bar items. The names of the ASCII files contained within these sub-directories will be displayed as menu selections. These files contain a sequence of commands written in the syntax of the computer algebra engine being used. By selecting a menu item the user sends these commands to the engine. Items to be displayed in the interface window are distinguished simply by an asterisk in the file. In other words, creating new menu items and calculation commands is as simple as creating and editing very simple ASCII files. Yet a file, and the resulting menu item, can be the equivalent of an entire worksheet with only the result chosen for display. Figure 2 GRTensorJ Graphical User Interface. An example of the embedded help system is shown. By the nature of the interface, the “help” system is really more of an information system. ## II Internal Design overview GRLite and GRTensorJ are written in Java and have been designed using an object-oriented approach. In addition, all communication between the user and the server is object-based. GRTensorJ has a multi-layer architecture that allows the generic functionality described above. This architecture is outlined below. | GUI (Graphical User Interface) | | --- | | User Functional Interface | | User ICM Handler | | Interchange Modules (ICMs) | | Server ICM Handler | | Server Functional Interface | | Algebraic Engine - Server Structure | ## III Future development A number of useful features are easily added to GRTensorJ. For example, automatic LaTeX, Fortran and C output has recently been added. More involved is the current development of a dynamic database of solutions to Einstein's field equations.
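The TextSheets mechanism described above is essentially a directory-to-menu mapping with asterisk-marked display lines. As a language-agnostic illustration (the actual interface is written in Java, and the directory layout and function names below are invented for the sketch), the logic amounts to something like:

```python
from pathlib import Path

def build_menus(textsheets_dir):
    """Map each sub-directory of TextSheets to a menu of command files."""
    menus = {}
    for sub in sorted(Path(textsheets_dir).iterdir()):
        if sub.is_dir():
            menus[sub.name] = {f.name: f.read_text()
                               for f in sorted(sub.iterdir()) if f.is_file()}
    return menus

def run_item(commands, send_to_engine):
    """Send a menu item's commands to the algebra engine line by line;
    only lines marked with an asterisk have their output displayed."""
    for line in commands.splitlines():
        shown = line.startswith("*")
        out = send_to_engine(line.lstrip("*"))
        if shown:
            print(out)

# Hypothetical use: a sub-directory "Invariants" containing a file
# "RicciScalar" becomes the menu item Invariants -> RicciScalar, whose
# lines are piped to, e.g., a Maple session held by send_to_engine.
```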
# One-electron bands, quantum Monte Carlo, and real superconductors ## 1 Introduction The sheer size of the many-body Hilbert space makes treating strongly correlated systems adequately extremely difficult or even impossible. Examples of considerable interest are the superconducting doped Fullerenes. Even for a single C<sub>60</sub> molecule a full many-body calculation is still a challenge, and calculations for solids made of Fullerenes are simply out of the question. In this situation we are forced to restrict our attention to only the most relevant degrees of freedom. For the doped Fullerenes these are the electrons in the $`t_{1u}`$-band. Starting from ab initio density functional calculations we set up a tight-binding Hamiltonian that describes the electrons in the $`t_{1u}`$-band only. Including correlation effects we arrive at a generalized Hubbard Hamiltonian that can be treated by quantum Monte Carlo (QMC). Using the fixed-node approximation we study the screening of a point charge by the electrons in the $`t_{1u}`$-band. We find that the screening is surprisingly efficient even for strong correlations, almost up to the Mott transition. This has important implications for superconductivity in the doped Fullerenes. Given that molecular vibration energies are of the same order as electronic energies, retardation effects are inefficient at reducing the electron-electron repulsion. It is therefore not clear how the weak electron-phonon attraction can lead to superconductivity. Efficient metallic screening, as found in our calculations, can, however, reduce the electron-electron repulsion enough to allow for an electron-phonon driven superconductivity. But the screening does, of course, also affect the coupling to the phonons. It turns out that, due to screening, the alkali and $`A_g`$ modes couple only weakly, while the coupling to the $`H_g`$ modes is not affected. Therefore, although being electron-phonon driven, superconductivity in the doped Fullerenes differs in important ways from the conventional picture of superconductivity. ## 2 Model Hamiltonian Treating correlations in the Fullerenes is quite difficult. Even for a single C<sub>60</sub> molecule a full many-body calculation is still a challenge, and simulations of Fullerenes, which are solids made of C<sub>60</sub> molecules, are well beyond current computational capabilities. Solid C<sub>60</sub> is characterized by weak inter-molecular interactions. Hence the molecular levels merely broaden into narrow, well separated bands. Doping the solid with alkali metals has the effect of filling the band originating from the molecular $`t_{1u}`$-level with the weakly bound valence electrons of the alkali atoms. Since the $`t_{1u}`$-level is three-fold degenerate, the corresponding band can hold up to six electrons per molecule; for A<sub>3</sub>C<sub>60</sub>, e.g., the band is half-filled. When we are interested in the low-energy properties of the $`t_{1u}`$ electrons, it is a good approximation to focus only on the region around the Fermi level, i.e. on the $`t_{1u}`$-band, projecting out all the other bands. That way we arrive at a tight-binding Hamiltonian comprising only the $`t_{1u}`$ orbitals, which reproduces the ab initio band structure remarkably well. To obtain a realistic description we have to include the correlation arising from the Coulomb repulsion among the electrons on the same molecule.
The resulting Hamiltonian describes the interplay between the hopping of electrons between different molecules and the Coulomb repulsion among the electrons on the same molecule:
$$H=\sum_{\langle ij\rangle}\sum_{nn'\sigma}t_{in,jn'}\,c^{\dagger}_{in\sigma}c_{jn'\sigma}+U\sum_i\sum_{(n\sigma)<(n'\sigma')}n_{in\sigma}n_{in'\sigma'}.$$ (1)
Here $`c^{\dagger}_{in\sigma}`$ creates an electron of spin $`\sigma `$ in the orbital with index $`n\{1,\mathrm{},3\}`$ on molecule $`i`$, the $`t_{in,jn^{}}`$ are hopping matrix elements between orbitals $`n`$ and $`n^{}`$ on neighboring molecules, and $`n_{in\sigma }=c^{\dagger}_{in\sigma}c_{in\sigma }`$ are occupation operators. Varying the value of the interaction term $`U`$, we can study the effect of correlations. The physical value for the doped Fullerenes is $`U1.21.4`$ eV, which has to be compared to the width of the $`t_{1u}`$-band, $`W0.50.85`$ eV. ## 3 Quantum Monte Carlo We now give a very brief outline of the quantum Monte Carlo method for determining ground states. The basic idea is to use the Hamiltonian to project out the ground state from some trial function $`|\mathrm{\Psi }_T\rangle`$ that we have guessed:
$$e^{-\tau H}|\Psi_T\rangle\stackrel{\tau\to\infty}{\longrightarrow}|\Psi_0\rangle.$$ (2)
To see how this works, let us assume we knew the expansion of the trial function in eigenfunctions of $`H`$:
$$|\Psi_T\rangle=\sum_n c_n|\Psi_n\rangle\qquad\Longrightarrow\qquad e^{-\tau H}|\Psi_T\rangle=\sum_n c_n e^{-\tau E_n}|\Psi_n\rangle.$$ (3)
Thus the component with the lowest eigenenergy is damped least during the projection, i.e., if $`c_00`$, in the limit of large $`\tau `$ the ground-state component will dominate. In practice we use, for Hamiltonians $`H`$ with a spectrum that is bounded both from below and from above, an iterative projection of the form
$$|\Psi^{(0)}\rangle=|\Psi_T\rangle,\qquad |\Psi^{(n)}\rangle=[1-\tau(H-w)]\,|\Psi^{(n-1)}\rangle,$$ (4)
which, for small but finite $`\tau `$, also exactly projects to the ground state. We see that the basic operation in (4) is a matrix-vector product. Since we are working in a many-body Hilbert space, the dimension of the vectors is, however, in general enormous; see Table 1 for an illustration. To understand the Monte Carlo method for doing the iteration we first rewrite (4) in configuration space. Here $`R`$ denotes a configuration of the electrons in real space:
$$|\Psi^{(n)}\rangle=\sum_{R'}|R'\rangle\langle R'|\Psi^{(n)}\rangle=\sum_{R,R'}|R'\rangle\underbrace{\langle R'|1-\tau(H-E_0)|R\rangle}_{=:F(R',R)}\langle R|\Psi^{(n-1)}\rangle.$$ (5)
We see that the matrix $`F(R^{},R)`$ maps configuration $`R`$ into configurations $`R^{}`$. We clearly cannot follow every possible new configuration $`R^{}`$, since that would lead to an exponential growth in the number of configurations as we iterate. The idea of Monte Carlo is then to sample only one of the configurations $`R^{}`$, with a probability $`p(R^{},R)`$. To do that we want to interpret the matrix elements of $`F(R^{},R)`$ as probabilities. They are, however, in general not normalized and can even be negative:
$$F(R',R)=\underbrace{p(R',R)}_{\mathrm{prob.}}\;\underbrace{m(R',R)}_{\mathrm{norm.\ \&\ sign}}.$$ (6)
Normalization introduces the need for population control, while negative matrix elements introduce the sign problem. ## 4 Screening of a Point Charge In conventional superconductors the electron-phonon interaction leads to an effective electron-electron attraction. This attraction is, of course, counteracted by the Coulomb repulsion between the electrons.
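As a toy illustration of the projection (4) and of the decomposition (6) described in the previous section, the sketch below applies the iteration to a small random matrix, first exactly and then stochastically with walkers. The matrix, the value of $`\tau `$ and the walker number are illustrative choices, not taken from the actual calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 40

# Toy "Hamiltonian": symmetric, with non-positive off-diagonals so that the
# projector F below is non-negative (no sign problem in this demonstration).
A = rng.random((dim, dim))
H = -(A + A.T)
E, V = np.linalg.eigh(H)            # reference spectrum and ground state

tau = 0.5 / (E[-1] - E[0])          # small enough for Eq. (4) to converge
F = np.eye(dim) - tau * (H - E[0])  # the matrix F(R',R) of Eq. (5)

# Deterministic iteration, Eq. (4): repeated matrix-vector products.
psi = rng.random(dim)
for _ in range(2000):
    psi = F @ psi
    psi /= np.linalg.norm(psi)
print("deterministic overlap:", abs(psi @ V[:, 0]))   # -> approaches 1

# Stochastic iteration, Eq. (6): F = p * m, sampled column by column.
p = F / F.sum(axis=0)               # transition probabilities p(R'|R)
m = F.sum(axis=0)                   # leftover weight; here >= 0 and R-dependent
walkers = rng.integers(dim, size=2000)
weights = np.ones(walkers.size)
for _ in range(100):
    weights *= m[walkers]
    weights /= weights.mean()       # population control mentioned in the text
    walkers = np.array([rng.choice(dim, p=p[:, R]) for R in walkers])
est = np.bincount(walkers, weights=weights, minlength=dim)
print("stochastic overlap:", abs(est @ V[:, 0]) / np.linalg.norm(est))
# For a generic H the weights m change sign and cancellations set in:
# that is the sign problem.
```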
In the conventional picture this repulsion is, however, strongly reduced by retardation effects. The resulting effective Coulomb repulsion is described by the dimensionless Coulomb pseudopotential $`\mu ^{}`$, which is believed to be of the order of $`0.1`$. For the doped Fullerenes the situation is different: retardation effects are inefficient. Therefore the screening of the Coulomb interaction becomes important for reducing the electron-electron repulsion. Assuming that the random phase approximation (RPA) is valid for the electrons within the $`t_{1u}`$-band, it was found that efficient metallic screening significantly reduces the Coulomb pseudopotential. In this scenario the Coulomb pseudopotential $`\mu ^{}0.3`$ is substantially larger than that for conventional superconductors, but it is not too large to prevent superconductivity from being driven by the electron-phonon interaction. For strongly correlated systems like the doped Fullerenes the use of the RPA seems, however, highly questionable. To address this question, we have investigated how well the RPA describes the screening within the $`t_{1u}`$-band. It is clear that the RPA properly describes the screening when the kinetic energy is much larger than the interaction energy, i.e. the RPA works well in the weakly correlated limit. For strong correlations, where the Coulomb energy dominates, the RPA gives qualitatively wrong results: introducing a test charge $`q`$ on a molecule, the RPA predicts that almost the same amount of electronic charge moves away from that molecule, since for a Coulomb integral $`U`$ much larger than the band width $`W`$ the gain in potential energy dominates the cost in kinetic energy. The RPA neglects, however, that in this limit, when an electron leaves a molecule, it has to find another molecule with a missing electron, or there will be a large increase in Coulomb energy. It is not clear at what value of the Coulomb interaction $`U`$ this qualitative breakdown of the RPA starts, and up to which values of $`U`$ the RPA still gives accurate results. To find out we have performed quantum Monte Carlo calculations. To study the screening of a test charge $`q`$ on the molecule with index $`c`$ we consider the Hamiltonian
$$H=\sum_{\langle ij\rangle}\sum_{nn'\sigma}t_{in,jn'}\,c^{\dagger}_{in\sigma}c_{jn'\sigma}+U\sum_i\sum_{(n\sigma)<(n'\sigma')}n_{in\sigma}n_{in'\sigma'}+qU\sum_{n\sigma}n_{cn\sigma},$$ (7)
which differs from (1) only by the last term, describing the interaction with the test charge. As a trial function we use a generalized Gutzwiller function of the form
$$|\Psi_T\rangle=g^{D}\,g_0^{\,n_c}\,|\Phi_0\rangle,$$ (8)
where $`|\mathrm{\Phi }_0\rangle`$ is a Slater determinant, $`D`$ is the number of double occupancies in the system, and $`n_c`$ is the number of electrons on the molecule with the test charge. Calculating the expectation value $`n_c(VMC)=\langle\mathrm{\Psi }_T|n_c|\mathrm{\Psi }_T\rangle`$ by variational Monte Carlo and the mixed estimator $`n_c(DMC)=\langle\mathrm{\Psi }_T|n_c|\mathrm{\Psi }_0\rangle`$ by fixed-node diffusion Monte Carlo, we obtain the ground-state expectation value $`\langle n_c\rangle=\langle\mathrm{\Psi }_0|n_c|\mathrm{\Psi }_0\rangle`$ from the extrapolated estimator $`\langle n_c\rangle2n_c(DMC)n_c(VMC)`$. To estimate the accuracy of our approach we have compared the results of the quantum Monte Carlo (QMC) calculations with the results from exact diagonalization for a cluster of 4 molecules. We find that the QMC calculations are accurate up to very large values of the Coulomb interaction $`U`$.
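The competition that the RPA misses can already be seen in the smallest possible caricature of Eq. (7): a two-site, single-orbital Hubbard model at half filling, with the level of one site shifted by $`qU`$. The sketch below diagonalizes this 4x4 problem exactly; it is far simpler than the three-orbital cluster calculation of the text and serves only to show the trend.

```python
import numpy as np

def screening_charge(U, q, t=1.0):
    """Two-site Hubbard model with one up and one down electron.

    Basis: |ud,0>, |u,d>, |d,u>, |0,ud> (occupations of site 1, site 2).
    The test charge shifts the site-1 level by eps = q*U, as in Eq. (7).
    Returns the charge pushed off site 1, n_1(0) - n_1(q).
    """
    def n1_ground(eps):
        H = np.array([[U + 2 * eps, -t, -t, 0.0],
                      [-t, eps, 0.0, -t],
                      [-t, 0.0, eps, -t],
                      [0.0, -t, -t, U]])
        w, v = np.linalg.eigh(H)
        g = v[:, 0]
        n1 = np.array([2.0, 1.0, 1.0, 0.0])  # electrons on site 1 per state
        return g @ (n1 * g)
    return n1_ground(0.0) - n1_ground(q * U)

for U in (0.5, 2.0, 8.0, 32.0):
    print(f"U/t = {U:5.1f} -> screening charge {screening_charge(U, q=0.25):.3f}")
# The screening charge first grows with U (the perturbation eps = qU itself
# grows) and is then quenched in the strongly correlated regime, where moving
# charge off the site costs Coulomb energy: the breakdown described above.
```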
Performing QMC calculations for clusters of sizes $`N_{\mathrm{mol}}=`$ 32, 48, 64, 72, and 108 molecules, where exact diagonalization is not possible (cf. Table 1), we have extrapolated the screening charge $`\mathrm{\Delta }n_c=n_c(0)n_c(q)`$ to infinite cluster size, assuming a finite-size scaling of the form $`\mathrm{\Delta }n_c(N_{\mathrm{mol}})=\mathrm{\Delta }n_c+\alpha /N_{\mathrm{mol}}`$. The finite-size extrapolation gives only a small correction to the screening charge found for the larger clusters. Results for $`q=0.25e`$ are shown in Figure 1. For small values of $`U`$ the RPA somewhat underestimates the screening, a behavior also found in the electron gas. For intermediate values of $`U`$ ($`U/W1.02.0`$) the RPA still gives surprisingly accurate results, while for larger $`U`$ it rapidly becomes qualitatively wrong, as discussed above. We thus find efficient, RPA-like screening even for quite strong correlations, close to the Mott transition. ## 5 Screening and Electron-Phonon Coupling As pointed out in the preceding discussion, efficient metallic screening helps to reduce the effective electron-electron repulsion, i.e. the Coulomb pseudopotential. But the screening also affects the electron-phonon coupling. At first it might appear that efficient screening is not really helpful for superconductivity. Phonons couple to the electrons by perturbing the potential seen by the electrons. An example is the longitudinal mode of a jellium. Efficient screening tends to weaken the coupling to such phonons, since it reduces the perturbation considerably: the bare coupling constant $`g`$ is reduced to $`g/\epsilon `$, where $`\epsilon `$ is the dielectric constant. To some extent, such a reduction is also at work in the Fullerenes. An example are the alkali phonons. Each C<sub>60</sub> molecule is surrounded by 14 alkali ions that are bound with quite weak force constants. They should therefore respond strongly when an electron arrives on a C<sub>60</sub> molecule. This was, however, not confirmed experimentally; an alkali isotope effect could, e.g., not be observed. Given the efficient metallic screening, this finding can be naturally understood. When an electron arrives on a C<sub>60</sub> molecule, other electrons leave the molecule, which thus stays almost neutral. The alkali ions then see only a small change in the net charge and therefore couple weakly. To analyze the situation more closely, we consider the change in the energy of the molecular orbital $`\alpha `$ under a deformation of the molecule with amplitude $`Q`$,
$$\Delta\epsilon^{0}_{\alpha}=g_{\alpha}\,Q,$$ (9)
where $`g_\alpha `$ is the electron-phonon coupling. An illustration is given in Figure 2. The change of the on-site energy $`ϵ_\alpha ^0`$ will induce a response of the $`t_{1u}`$ electrons. Since the effect of a point charge $`q`$ is just to shift the on-site energy by $`qU`$ (cf. (7)), the screening charge induced by the change $`\mathrm{\Delta }ϵ_\alpha ^0`$ is given by
$$\Delta n_{\alpha}=-\gamma\,\frac{\Delta\epsilon^{0}_{\alpha}}{U},$$ (10)
where $`\gamma >0`$ measures the efficiency of the screening: $`\mathrm{\Delta }n=\gamma q`$.
The total screening charge induced by the molecular deformation $`Q`$ is then given by
$$\Delta n=\sum_{\alpha}\Delta n_{\alpha}=-\frac{\gamma}{U}\Big(\sum_{\alpha}g_{\alpha}\Big)\,Q.$$ (11)
Including screening, the shift in the molecular levels is then given by
$$\Delta\epsilon_{\alpha}=\Delta\epsilon^{0}_{\alpha}+U\sum_{\beta}\Delta n_{\beta}=\Big(g_{\alpha}-\gamma\sum_{\beta}g_{\beta}\Big)\,Q.$$ (12)
For molecular solids like the doped Fullerenes the electron-phonon coupling is given by the dimensionless constant
$$\lambda\propto\sum_{\alpha}\Big(\frac{\Delta\epsilon_{\alpha}}{Q}\Big)^{2}=\sum_{\alpha}\Big(g_{\alpha}-\gamma\sum_{\beta}g_{\beta}\Big)^{2}.$$ (13)
Given efficient metallic screening ($`\gamma 1`$), the coupling to phonons that cause a net shift of the molecular levels ($`g_\beta 0`$) will be reduced, while modes that leave the center of gravity of the molecular levels unchanged ($`\sum_{\beta}g_\beta =0`$) will not be affected. Such modes are the $`H_g`$ modes in C<sub>60</sub>. For these modes efficient screening serves to reduce the electron-electron repulsion without affecting the electron-phonon coupling. ## 6 Conclusion We have described how to construct a model for the $`t_{1u}`$ electrons in the doped Fullerenes that can be analyzed by many-body techniques. Using quantum Monte Carlo we have calculated the static screening of a point charge. We find that the RPA works surprisingly well, almost up to the Mott transition. The metallic screening helps to reduce the electron-electron repulsion in the doped Fullerenes, where retardation effects are inefficient. But the screening in general also reduces the electron-phonon coupling. In a molecular solid there can be, however, intra-molecular modes that are not screened, examples being the $`H_g`$ modes in the Fullerenes. We thus find that, although superconductivity in the Fullerenes is driven by the electron-phonon coupling, it differs in important points from the textbook picture of superconductivity. This work has been supported by the Alexander-von-Humboldt-Stiftung under the Feodor-Lynen-Program and the Max-Planck-Forschungspreis.
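Returning to Eqs. (11)-(13), a small numerical illustration may be useful: the sketch below evaluates the screened coupling for a threefold-degenerate level, once for a symmetric (A<sub>g</sub>-like) deformation and once for a traceless (H<sub>g</sub>-like) one. The value of $`\gamma `$ and the couplings are arbitrary illustrative numbers.

```python
import numpy as np

def lam(g, gamma):
    """Dimensionless coupling of Eq. (13) for couplings g_alpha."""
    g = np.asarray(g, dtype=float)
    screened = g - gamma * g.sum()   # Eq. (12): Delta eps_alpha / Q
    return np.sum(screened ** 2)

gamma = 0.3                          # illustrative screening efficiency
g_Ag = [1.0, 1.0, 1.0]               # symmetric mode: shifts the center of gravity
g_Hg = [1.0, -1.0, 0.0]              # traceless mode: sum(g) = 0

for name, g in (("Ag-like", g_Ag), ("Hg-like", g_Hg)):
    print(name, " bare:", lam(g, 0.0), " screened:", lam(g, gamma))
# The Ag-like coupling is suppressed (for gamma -> 1/3 it would vanish in
# this three-orbital example), while the Hg-like coupling is untouched,
# exactly as stated after Eq. (13).
```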
## 1 Introduction

Integrable field theory emerged in the last years as an elegant and effective tool for the study of many two-dimensional statistical models directly in their scaling limit. The approach relies on the fact that a large class of quantum field theories in (1+1) dimensions admits an infinite number of integrals of motion (i.e. they are 'integrable') and can be completely solved on-shell. The matrix elements of local operators on the asymptotic states are also exactly computable and lead to spectral series for the correlation functions whose quantitative effectiveness is remarkable. Among the statistical models that have been studied in this framework we mention the Ising model in a magnetic field, the $`q`$-state Potts model, the $`O(n)`$ model, and the Ashkin-Teller model. It is the purpose of this paper to illustrate how the programme outlined above applies to the “restricted solid-on-solid” (RSOS) models introduced by Andrews, Baxter and Forrester. For any integer $`p3`$ they are defined on the square lattice in terms of a spin or “height” variable $`h_i`$ located at each site $`i`$ and taking the integer values from $`1`$ to $`p`$. The heights of two nearest-neighbour sites $`i`$ and $`j`$ are constrained by the condition
$$|h_i-h_j|=1,$$ (1.1)
which, in particular, leads to a natural splitting of the lattice into two sublattices on which the heights are all even or all odd. The Hamiltonian is further specified by a one-site and a diagonal interaction term. Their precise form does not need to be reproduced here, but it is important that the energy of a configuration is invariant under the global transformation
$$h_i\rightarrow p+1-h_i,$$ (1.2)
which is the basic symmetry of the models. In their general formulation the RSOS models contain a number of parameters which grows linearly with $`p`$. The models were solved on two two-dimensional manifolds of the parameter space, which can be parameterised by a temperature-like variable $`t`$ together with a second coordinate $`v`$ measuring the spatial anisotropy of the lattice interaction. The scaling limit, however, is isotropic and $`v`$ will be ignored in the following. Hence, for each of the two solutions and for any $`p`$, the phase diagram reduces to a line and exhibits a critical point separating two phases known as regimes I and II (for the first solution) and regimes III and IV (for the second solution). Here, we will only be interested in the second case, and more specifically in regime III. By comparison of critical exponents, Huse showed that the critical points separating regimes III and IV for the different values of $`p`$ correspond to the minimal unitary series of conformal field theories characterised by the values of the central charge
$$C=1-\frac{6}{p(p+1)},\qquad p=3,4,\mathrm{}.$$ (1.3)
These models contain a finite number of primary operators $`\phi _{m,n}(x)`$ ($`m=1,\mathrm{},p1`$; $`n=1,\mathrm{},p`$) with scaling dimensions
$$X_{m,n}=\frac{[(p+1)m-pn]^2-1}{2p(p+1)}.$$ (1.4)
It was shown in Ref. that the unitary minimal models admit the Landau-Ginzburg description
$$S=\int d^2x\left[(\partial\phi)^2+\sum_{k=1}^{p-1}g_k\phi^{2k}\right],$$ (1.5)
with $`g_1=g_2=\mathrm{}=g_{p-2}=0`$. The scalar field $`\phi (x)`$ is then the continuous version of the shifted height variable $`h_i-(p+1)/2`$, in such a way that the reflection symmetry (1.2) is mapped into $`\phi \rightarrow\phi `$.
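The conformal data (1.3)-(1.4) are elementary to evaluate; the helper below tabulates them and is included only as a numerical convenience for the discussion that follows (the function names are ours).

```python
from fractions import Fraction

def central_charge(p):
    """Central charge of Eq. (1.3)."""
    return 1 - Fraction(6, p * (p + 1))

def X(m, n, p):
    """Scaling dimension X_{m,n} of Eq. (1.4)."""
    return Fraction(((p + 1) * m - p * n) ** 2 - 1, 2 * p * (p + 1))

# A few values used later in the text:
for p in (3, 4):
    print(f"p={p}: C={central_charge(p)}, X13={X(1,3,p)}, "
          f"X12={X(1,2,p)}, X22={X(2,2,p)}")
# p=3 (Ising): C=1/2, X22=1/8;  p=4 (tricritical Ising): X22=3/40.
```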
The following identifications between normal-ordered powers of $`\phi `$ and conformal operators hold:
$$\phi^k\sim\phi_{k+1,k+1},\qquad k=0,\mathrm{},p-2.$$ (1.6)
The effective action (1.5) makes transparent that the considered series of critical points corresponds to the $`(p1)`$-critical behaviour of a system with a scalar order parameter and $`Z_2`$ symmetry, $`p=3`$ being the ordinary Ising universality class, $`p=4`$ the tricritical Ising, and so on. The RSOS models in regimes III and IV possess $`p1`$ and $`p2`$ degenerate ground states, respectively, and can be described by the action (1.5) with suitable fine tunings of the couplings $`g_k`$ leading to the appropriate number of degenerate minima in the effective potential. For $`p=3`$ the two regimes correspond to the low- and high-temperature phases of the Ising model in zero magnetic field and are both massive in the scaling limit. For $`p4`$, while regime III is still massive, regime IV becomes massless and corresponds to the crossover between the critical points labelled by $`p`$ and $`p1`$. Solvability on the lattice naturally suggests integrability of the field theory describing the scaling limit. In fact, the scaling dimension of the 'thermal' operator (conjugated to $`t`$) is known from the lattice solution and coincides with $`X_{1,3}`$, so that the scaling limit of the RSOS models in regimes III and IV is described by the action
$$\mathcal{A}=\mathcal{A}^{(p)}_{CFT}+\lambda\int d^2x\,\phi_{1,3}(x),$$ (1.7)
where $`𝒜_{CFT}^{(p)}`$ is the action of the conformal theory with central charge (1.3), and $`\lambda `$ is a coupling with dimensions $`m^{2-X_{1,3}}`$. This $`\phi _{1,3}`$ perturbation of conformal field theory is known to be massive (regime III) or massless (regime IV) depending on the sign of $`\lambda `$, and to be integrable in both directions. The associated scattering theories are also known. The paper is organised as follows. In the next section we briefly review the exact scattering description of regime III and use it in Section 3 for the computation of form factors of the operators $`\phi _{1,3}`$, $`\phi _{1,2}`$ and (for $`p=3,4`$) $`\phi _{2,2}`$. In Section 4 we write down the two-particle approximation for the correlation functions of these operators and analyse its accuracy by computing the central charge and the scaling dimensions. Section 5 is devoted to a discussion of the perturbation of the RSOS critical points by both the operators $`\phi _{1,3}`$ and $`\phi _{1,2}`$. In the final section we briefly discuss the fact that the perturbed conformal field theory (1.7) also describes the scaling limit of the dilute $`q`$-state Potts model along its first-order phase transition lines for $`0q4`$, as well as the $`O(n)`$ vector model for $`2n2`$. ## 2 Scattering theory In a (1+1)-dimensional theory with degenerate vacua the elementary excitations are kinks interpolating among these vacua. It is known from the lattice solution that the $`j`$-th ground state in regime III ($`j=1,\mathrm{},p1`$) is such that all the sites on one sublattice have height $`j`$ and all the sites on the other sublattice have height $`j+1`$ (Fig. 1). The space-time trajectory of a kink is a domain wall separating two different ground states. Since the pairing of two different ground states $`i`$ and $`j`$ can give an admissible configuration only if $`|i-j|`$ equals $`1`$ (Fig.
2), the elementary excitations of the scattering theory are kinks<sup>1</sup><sup>1</sup>1The rapidity variable $`\theta `$ parameterises the on-shell momenta of a kink of mass $`m`$ as $`(p^0,p^1)=(m\mathrm{cosh}\theta ,m\mathrm{sinh}\theta )`$. $`K_{ij}(\theta )`$ interpolating between two vacua $`i`$ and $`j=i\pm 1`$. It follows from the precise form of the lattice interaction that the interfacial tension between two ground states $`i`$ and $`i+1`$ does not depend on $`i`$, which amounts to saying that the kinks $`K_{i,i\pm 1}`$ all have the same mass. Multikink excitations of the type
$$\mathrm{}K_{i\pm 1,i}(\theta _1)K_{i,i\pm 1}(\theta _2)\mathrm{}$$ (2.1)
will connect ground states with arbitrary indices. In an integrable field theory the scattering is completely elastic (no production processes allowed) and multiparticle processes factorise into the product of the two-body subprocesses, so that the problem of the determination of the $`S`$-matrix is reduced to the computation of the two-kink amplitudes. Taking into account the kink composition rules together with invariance under time reversal and spatial inversion, the allowed two-kink processes are those depicted in Fig. 3 and associated to the commutation relations
$$K_{j,j\pm 1}(\theta_1)K_{j\pm 1,j}(\theta_2)=A_j^{\pm}(\theta_1-\theta_2)\,K_{j,j\pm 1}(\theta_2)K_{j\pm 1,j}(\theta_1)$$ (2.2)
$$\phantom{K_{j,j\pm 1}(\theta_1)K_{j\pm 1,j}(\theta_2)=}\;+\,B_j(\theta_1-\theta_2)\,K_{j,j\mp 1}(\theta_2)K_{j\mp 1,j}(\theta_1),$$ (2.3)
$$K_{j\pm 1,j}(\theta_1)K_{j,j\mp 1}(\theta_2)=C_j(\theta_1-\theta_2)\,K_{j\pm 1,j}(\theta_2)K_{j,j\mp 1}(\theta_1).$$ (2.4)
The scattering amplitudes are subject to a series of constraints. Invariance under the reflection $`j\rightarrow pj`$ requires
$$A_j^{\pm}(\theta)=A_{p-j}^{\mp}(\theta),$$ (2.5)
$$B_j(\theta)=B_{p-j}(\theta),$$ (2.6)
$$C_j(\theta)=C_{p-j}(\theta),$$ (2.7)
while crossing symmetry implies
$$A_j^{\pm}(\theta)=A_{j\pm 1}^{\mp}(i\pi-\theta),$$ (2.8)
$$B_j(\theta)=C_j(i\pi-\theta).$$ (2.9)
Commuting once again the r.h.s. of Eqs. (2.3) and (2.4) leads to the unitarity equations
$$A_j^{\pm}(\theta)A_j^{\pm}(-\theta)+B_j(\theta)B_j(-\theta)=1,$$ (2.10)
$$A_j^{\pm}(\theta)B_j(-\theta)+B_j(\theta)A_j^{\mp}(-\theta)=0,$$ (2.11)
$$C_j(\theta)C_j(-\theta)=1.$$ (2.12)
A three-kink process can be factorised in two ways differing by the ordering of the two-body collisions. Equating the results leads to the factorisation equation
$$A_j^{\pm}A_{j\pm 1}^{\mp}A_j^{\pm}+B_jC_jB_j=A_{j\pm 1}^{\mp}A_j^{\pm}A_{j\pm 1}^{\mp}+B_{j\pm 1}C_{j\pm 1}B_{j\pm 1},$$ (2.13)
and similar others (the arguments of the three factors in each product are $`\theta `$, $`\theta +\theta ^{}`$ and $`\theta ^{}`$, respectively).
The minimal solution to all these requirements is well known and reads
$$A_j^{\pm}(\theta)=\left(\frac{s_{j\pm 1}}{s_j}\right)^{i\theta/\pi}\frac{s_1}{s_j}\,\frac{\sinh\frac{1}{p}(ij\pi\pm\theta)}{\sinh\frac{1}{p}(i\pi-\theta)}\,S_0(\theta),$$ (2.14)
$$B_j(\theta)=\left(\frac{\sqrt{s_{j+1}s_{j-1}}}{s_j}\right)^{1+i\theta/\pi}\frac{\sinh\frac{\theta}{p}}{\sinh\frac{1}{p}(i\pi-\theta)}\,S_0(\theta),$$ (2.15)
$$C_j(\theta)=\left(\frac{\sqrt{s_{j+1}s_{j-1}}}{s_j}\right)^{i\theta/\pi}\frac{\sinh\frac{1}{p}(i\pi+\theta)}{\sinh\frac{1}{p}(i\pi-\theta)}\,S_0(\theta),$$ (2.16)
where
$$s_j\equiv\sin\frac{j\pi}{p},$$ (2.17)
$$S_0(\theta)=\prod_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{2}{p}(n+\frac{1}{2})+\frac{\theta}{i\pi p}\right)\Gamma\left(1+\frac{2}{p}n-\frac{\theta}{i\pi p}\right)}{\Gamma\left(1+\frac{2}{p}(n+\frac{1}{2})-\frac{\theta}{i\pi p}\right)\Gamma\left(1+\frac{2}{p}n+\frac{\theta}{i\pi p}\right)}\times\frac{\Gamma\left(\frac{2}{p}(n+1)-\frac{\theta}{i\pi p}\right)\Gamma\left(\frac{2}{p}(n+\frac{1}{2})+\frac{\theta}{i\pi p}\right)}{\Gamma\left(\frac{2}{p}(n+1)+\frac{\theta}{i\pi p}\right)\Gamma\left(\frac{2}{p}(n+\frac{1}{2})-\frac{\theta}{i\pi p}\right)}=\exp\left\{i\int_0^{\infty}\frac{dx}{x}\,\frac{\sinh(p-1)\frac{x}{2}}{\sinh\frac{px}{2}\cosh\frac{x}{2}}\,\sin\frac{x\theta}{\pi}\right\}.$$ (2.18)
It can be checked that the amplitudes do not possess poles in the physical strip $`\text{Im}\theta (0,\pi )`$, which ensures that there are no bound states and that the amplitudes given above entirely determine the scattering theory. ## 3 Form factors Let us denote by $`\mathrm{\Phi }(x)`$ a local scalar operator of the theory with zero topological charge, namely such that its action on the vacuum $`|0_j\rangle`$ only produces excitations beginning and ending on this vacuum. All the operators we will consider in the following share this property. We are interested in the two-particle form factors (Fig. 4)
$$F_{j,\pm}^{\Phi}(\theta_1-\theta_2)=\langle 0_j|\Phi(0)|K_{j,j\pm 1}(\theta_1)K_{j\pm 1,j}(\theta_2)\rangle.$$ (3.1)
Eq. (2.3) implies the relation
$$F_{j,\pm}^{\Phi}(\theta)=A_j^{\pm}(\theta)F_{j,\pm}^{\Phi}(-\theta)+B_j(\theta)F_{j,\mp}^{\Phi}(-\theta),$$ (3.2)
while crossing leads to the equations
$$F_{j,\pm}^{\Phi}(\theta+2i\pi)=F_{j\pm 1,\mp}^{\Phi}(-\theta),$$ (3.3)
$$i\,\text{Res}_{\theta=i\pi}F_{j,\pm}^{\Phi}(\theta)=-i\,\text{Res}_{\theta=i\pi}F_{j\pm 1,\mp}^{\Phi}(\theta)=\langle 0_j|\Phi|0_j\rangle-\langle 0_{j\pm 1}|\Phi|0_{j\pm 1}\rangle.$$ (3.4)
As a last necessary condition, the two-kink form factors are subject to the asymptotic bound
$$\lim_{\theta\to+\infty}F_{j,\pm}^{\Phi}(\theta)\leq\text{constant}\;e^{X_{\Phi}\theta/2},$$ (3.5)
where $`X_\mathrm{\Phi }`$ denotes the scaling dimension of the operator $`\mathrm{\Phi }(x)`$. It is easily checked that a class of solutions of Eqs.
(3.2) and (3.3) is given by
$$F_{j,\pm}^{\varphi}=\frac{2i}{p}\left(\frac{s_{j\pm 1}}{s_j}\right)^{(1+i\theta/\pi)/2}\frac{F_0(\theta)}{\sinh\frac{1}{p}(\theta-i\pi)}\,\Omega_j^{\varphi}(\theta),$$ (3.6)
where $`\{\varphi \}\{\mathrm{\Phi }\}`$ and
$$F_0(\theta)=i\sinh\frac{\theta}{2}\,\exp\left\{\int_0^{\infty}\frac{dx}{x}\,\frac{\sinh(1-p)\frac{x}{2}}{\sinh\frac{px}{2}\cosh\frac{x}{2}}\,\frac{\sin^2(i\pi-\theta)\frac{x}{2\pi}}{\sinh x}\right\}$$ (3.7)
is a solution of the equations
$$F_0(\theta)=S_0(\theta)F_0(-\theta),$$ (3.8)
$$F_0(\theta+2i\pi)=F_0(-\theta),$$ (3.9)
and behaves as
$$F_0(\theta)\sim\exp[(1+1/p)\theta/4],\qquad\theta\to+\infty.$$ (3.10)
The functions $`\mathrm{\Omega }_j^\varphi (\theta )`$ are free of poles and satisfy
$$\Omega_j^{\varphi}(\theta)=\Omega_j^{\varphi}(-\theta),$$ (3.11)
$$\Omega_j^{\varphi}(\theta+2i\pi)=-\Omega_{j\pm 1}^{\varphi}(-\theta).$$ (3.12)
These requirements, together with (3.5), imply that the $`\mathrm{\Omega }_j^\varphi (\theta )`$ are polynomials in $`\mathrm{cosh}(\theta /2)`$. Let us consider those operators $`\varphi (x)`$ which are relevant in the renormalisation-group sense ($`X_\varphi <2`$) for all values of $`p3`$. Then the bound (3.5) implies that these polynomials are at most of degree one, which means that the operator subspace $`\{\varphi \}`$ contains only two independent relevant operators. The trace of the stress-energy tensor $`\mathrm{\Theta }(x)`$ is a relevant operator and is even under the reflection symmetry, so that $`F_{j,+}^\mathrm{\Theta }(\theta )=F_{p-j,-}^\mathrm{\Theta }(\theta )`$. Moreover $`\langle 0_j|\mathrm{\Theta }|0_j\rangle`$ does not depend on $`j`$ and, according to (3.4), the two-kink matrix elements have no pole at $`\theta =i\pi `$. All these requirements are fulfilled if we take
$$\Omega_j^{\Theta}(\theta)=2\pi m^2\cosh\frac{\theta}{2},$$ (3.13)
with the normalisation constant fixed by
$$F_{j,\pm}^{\Theta}(i\pi)=2\pi m^2.$$ (3.14)
The other independent relevant operator in $`\{\varphi \}`$ (let us denote it $`(x)`$) corresponds to the constant solution
$$\Omega_j^{\mathcal{E}}(\theta)=(-1)^j\langle 0_1|\mathcal{E}|0_1\rangle.$$ (3.15)
The order parameter $`\phi (x)`$ is the most relevant operator which changes sign under reflection, and this means in particular
$$\langle 0_j|\phi|0_j\rangle=-\langle 0_{p-j}|\phi|0_{p-j}\rangle,$$ (3.16)
$$F_{j,+}^{\phi}(\theta)=-F_{p-j,-}^{\phi}(\theta).$$ (3.17)
For generic values of $`p`$, these properties are incompatible with those of the space of solutions spanned by (3.6), (3.13), (3.15) and we conclude that $`\phi \notin \{\varphi \}`$.
We do not dispose of the solution for $`F_{j,+}^\phi (\theta )`$ for generic $`p`$ and just quote the result for the two simplest cases:
$$F_{1,+}^{\phi}(\theta)=i\langle 0_1|\phi|0_1\rangle\tanh\frac{\theta}{2},\qquad p=3,$$ (3.18)
$$F_{j,+}^{\phi}(\theta)=\frac{(-1)^j}{2\Upsilon_j(i\pi)}\langle 0_1|\phi|0_1\rangle\left(\frac{s_{j+1}}{s_j}\right)^{(1+i\theta/\pi)/2}\frac{F_0(\theta)}{\cosh\frac{\theta}{2}}\,\Upsilon_j(\theta),\qquad p=4.$$ (3.19)
The functions
$$\Upsilon_j(\theta)=\exp\left\{\int_0^{\infty}\frac{dx}{x}\,\frac{\sin^2\left[(2i\pi|2-j|-\theta)\frac{x}{2\pi}\right]}{\cosh x\,\sinh 2x}\right\}$$ (3.20)
satisfy the equations
$$\Upsilon_1(\theta)=\Upsilon_3(\theta)=\frac{\sinh\frac{1}{4}(i\pi+\theta)}{\sinh\frac{1}{4}(i\pi-\theta)}\,\Upsilon_1(-\theta),$$ (3.21)
$$\Upsilon_1(\theta+2i\pi)=\Upsilon_2(-\theta),$$ (3.22)
and behave as $`\mathrm{exp}(\theta /8)`$ when $`\theta \to+\mathrm{}`$. It can be checked that $`F_{j,\pm }^\phi (\theta )=F_{j,\pm }^{}(\theta )`$ for $`p=3`$. ## 4 Correlation functions The correlation functions are obtained using the resolution of the identity
$$1=\sum_{n=0}^{\infty}\int_{\theta_1>\mathrm{}>\theta_n}\frac{d\theta_1\mathrm{}d\theta_n}{(2\pi)^n}\,|n\rangle\langle n|$$ (4.1)
to sum over all intermediate $`n`$-kink states $`|n\rangle`$. A two-point function reads<sup>2</sup><sup>2</sup>2Here and in the following we always refer to connected correlators.
$$\langle 0_j|\Phi_1(x)\Phi_2(0)|0_j\rangle=\sum_{\epsilon=\pm}\int_{\theta_1>\theta_2}\frac{d\theta_1}{2\pi}\frac{d\theta_2}{2\pi}\,F_{j,\epsilon}^{\Phi_1}(\theta_1-\theta_2)F_{j,\epsilon}^{\Phi_2}(\theta_2-\theta_1)\,e^{-|x|E_2}$$ (4.2)
$$\phantom{\langle 0_j|\Phi_1(x)\Phi_2(0)|0_j\rangle=}\;+\,O(e^{-4m|x|}),\qquad m|x|\gg 1,$$ (4.3)
where $`E_2=m(\mathrm{cosh}\theta _1+\mathrm{cosh}\theta _2)`$ is the energy of the two-kink asymptotic state. The “two-kink approximation” (4.3) is known to provide results of remarkable accuracy for integrated correlators (see references therein), as can be checked through the use of the sum rules
$$C=\frac{3}{4\pi}\int d^2x\,|x|^2\,\langle 0_j|\Theta(x)\Theta(0)|0_j\rangle,$$ (4.4)
$$X_{\Phi}=-\frac{1}{2\pi\langle 0_j|\Phi|0_j\rangle}\int d^2x\,\langle 0_j|\Theta(x)\Phi(0)|0_j\rangle,$$ (4.5)
allowing the determination of the ultraviolet conformal data (central charge and scaling dimensions) in the form of moments of off-critical correlators. In Fig. 5 we compare the exact formula for the central charge (1.3) with the result yielded by the two-kink approximation (4.3) in the sum rule (4.4). Notice that $`p`$ can be considered as a continuous parameter in the result of the latter computation, in agreement with the fact that many observables in the theory (1.7) have a continuous $`p`$-dependence. With this remark in mind, in the remaining part of this section we will treat $`p`$ as a real number $`\geq 1`$.<sup>3</sup><sup>3</sup>3In doing that, of course, we lose unitarity unless $`p`$ is an integer larger than $`2`$. For $`p=1`$, in particular, the two-kink form factor (3.6), (3.13) simply reduces to $`2\pi m^2(-1)^{(1+i\theta /\pi )/2}`$.
This residual rapidity dependence ensures the normalisation condition (3.14) but is immaterial in the computation of the correlator (4.3). Hence the theory (1.7) at $`p=1`$ is free, and the two-kink computation for the central charge gives the exact result $`C=-2`$. Due to the equivalence between the Ising model and a free neutral fermion, the two-kink approximation is exact also for $`p=3`$ and gives $`C=1/2`$. The computation of $`X_\mathrm{\Theta }`$ through the sum rule (4.5) requires the knowledge of $`\langle\mathrm{\Theta }\rangle`$. Although this quantity cannot be related to $`F_{j,\pm }^\mathrm{\Theta }(\theta )`$ due to the vanishing of the residue (3.4), we dispose of the thermodynamic Bethe ansatz result
$$\langle\Theta\rangle=\pi m^2\tan\frac{\pi p}{2}.$$ (4.6)
Since $`\phi _{1,3}(x)`$ is the operator which drives the theory away from criticality, we must have
$$\Theta(x)\propto\lambda\,\phi_{1,3}(x),$$ (4.7)
and $`X_\mathrm{\Theta }=X_{1,3}`$. The two-kink computation of $`X_{}`$ through (4.5) gives $`-1/4`$ at $`p=1`$ and $`1/8`$ at $`p=3`$. Since the theory is free for these two values of $`p`$, these results are expected to be exact. Assuming that $`(x)`$ corresponds to a primary operator whose position in the Kac table does not depend on $`p`$, they can be substituted into the formula (1.4) to fix
$$\mathcal{E}(x)\sim\phi_{1,2}(x).$$ (4.8)
In Fig. 6 we compare the two-kink approximation for $`X_\mathrm{\Theta }`$ and $`X_{}`$ with the exact formulae
$$X_{1,3}=2\,\frac{p-1}{p+1},$$ (4.9)
$$X_{1,2}=\frac{p-2}{2(p+1)},$$ (4.10)
respectively. Concerning the two values of $`p`$ for which we determined the form factors of the order parameter, the two-kink computation gives $`X_\phi =1/8`$ at $`p=3`$ and $`X_\phi =0.0734`$ at $`p=4`$. In view of the identification $`\phi (x)\sim\phi _{2,2}(x)`$, these results must be compared with the exact values $`1/8`$ and $`3/40=0.075`$, respectively. Some comments are in order about the results yielded by the sum rules (4.4) and (4.5). Consider the moment
$$I_k=\int d^2x\,|x|^k\,\langle\Phi_1(x)\Phi_2(0)\rangle,$$ (4.11)
and denote by $`\mathrm{\Phi }_3(x)`$ the leading operator determining the short-distance behaviour
$$\langle\Phi_1(x)\Phi_2(0)\rangle\sim\frac{\langle\Phi_3\rangle}{|x|^{\eta_{\Phi_1\Phi_2}}},\qquad|x|\to 0,$$ (4.12)
with
$$\eta_{\Phi_1\Phi_2}\equiv X_{\Phi_1}+X_{\Phi_2}-X_{\Phi_3}.$$ (4.13)
Then $`I_k`$ is convergent if $`2+k-\eta _{\mathrm{\Phi }_1\mathrm{\Phi }_2}>0`$. In the cases of interest for us we have $`\eta _{\mathrm{\Theta }\mathrm{\Theta }}=2X_{1,3}`$ and $`\eta _{\mathrm{\Theta }}=X_{1,3}`$, so that the sum rules for $`C`$ and $`X_{1,2}`$ converge for all finite values of $`p`$, while the sum rule for $`X_{1,3}`$ converges only for $`p<3`$. This 'failure' of the sum rule for the scaling dimension, due to the divergence of the integral, is originated by operator mixing under renormalisation and was discussed in Ref. Let us now discuss the issue of the accuracy of the results obtained using the approximated correlators (4.3) in the integrals (4.4) and (4.5). The spectral series for the correlation functions is a large-distance expansion, and any partial sum including the contributions up to $`n`$ particles will appreciably depart from the exact result at sufficiently small distances.
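The robustness of the integrated moments against this small-distance inaccuracy can be made concrete at $`p=3`$, where the two-kink approximation is exact: there the kink is a free fermion and, up to a phase, the $`\mathrm{\Theta }`$ form factor reduces to $`F^\mathrm{\Theta }(\theta )=2\pi im^2\mathrm{sinh}(\theta /2)`$, normalised as in (3.14). The sketch below evaluates the sum rule (4.4) with this form factor; the reduction of the angular and centre-of-mass integrals to a single rapidity integral is standard, and the quoted closed form is ours.

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # the mass cancels in C; kept explicit for clarity

def F_theta_sq(theta):
    """|F^Theta(theta)|^2 at p=3: |F| = 2*pi*m^2*|sinh(theta/2)|."""
    return (2 * np.pi * m**2 * np.sinh(theta / 2)) ** 2

# Two-kink piece of the c-theorem sum rule (4.4). Integrating out the
# centre-of-mass rapidity and the angle of x leaves
#   C_2 = 3/(16 pi^2 m^4) * Int_0^inf dtheta |F(theta)|^2 / cosh(theta/2)^4 .
integrand = lambda t: F_theta_sq(t) / np.cosh(t / 2) ** 4
C2, _ = quad(integrand, 0.0, 60.0)
C2 *= 3 / (16 * np.pi**2 * m**4)
print(C2)   # -> 0.5, the exact Ising central charge quoted above
```

For generic $`p`$ the same moment involves the integral representation (3.7) and both kink channels, but the mechanism is identical.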
In the moment (4.11), however, the factor $`|x|^k`$ causes a suppression of the short-distance contribution whose importance, for fixed $`k`$, depends on the high-energy behaviour of the form factors. The two-kink contribution to $`I_k`$ is given by
$$\int d\theta\,\frac{F_{j,\pm}^{\Phi_1}(2\theta)F_{j,\pm}^{\Phi_2}(-2\theta)}{(\cosh\theta)^{k+2}}.$$ (4.14)
The integrand here behaves asymptotically as $`\mathrm{exp}[-\mathrm{\Sigma }_{\mathrm{\Phi }_1\mathrm{\Phi }_2}^{(k)}\theta ]`$, where
$$\Sigma_{\Phi_1\Phi_2}^{(k)}\equiv 2+k-y_{\Phi_1}-y_{\Phi_2},$$ (4.15)
with $`y_\mathrm{\Phi }`$ defined by
$$F_{j,\pm}^{\Phi}(\theta)\sim e^{y_{\Phi}\theta/2},\qquad\theta\to+\infty.$$ (4.16)
In a unitary theory this exponent is subject to the bound
$$y_{\Phi}\leq X_{\Phi}.$$ (4.17)
We see that $`\mathrm{\Sigma }_{\mathrm{\Phi }_1\mathrm{\Phi }_2}^{(k)}`$ has to be positive to ensure the convergence of the integral, and that the suppression of the short-distance contribution is proportional to this exponent. Of course, this observation does not determine the absolute accuracy of the two-particle approximation, but it helps in understanding the accuracy pattern exhibited in Fig. 6. In fact, the solutions of Section 3 determine
$$y_{\Theta}=\frac{3(p-1)}{2p},$$ (4.18)
$$y_{\mathcal{E}}=\frac{p-3}{2p}.$$ (4.19)
Then $`\mathrm{\Sigma }_{\mathrm{\Theta }\mathrm{\Theta }}^{(0)}`$ goes to zero as $`p3`$, and we observe that the deviation of the two-kink approximation for $`X_\mathrm{\Theta }`$ from the exact result becomes large as we approach this value. Analogous considerations apply to the case of $`X_{}`$ as $`p\mathrm{}`$. In the central charge sum rule (4.4), on the contrary, $`\mathrm{\Sigma }_{\mathrm{\Theta }\mathrm{\Theta }}^{(2)}`$ tends to 1 as $`p\mathrm{}`$, which means that the high-energy contribution is still strongly suppressed in this limit. The remarkable accuracy (1%) of the two-kink approximation as $`p\mathrm{}`$ shows that the contributions with a larger number of kinks undergo a similar suppression. The relation between $`\mathrm{\Sigma }_{\mathrm{\Phi }_1\mathrm{\Phi }_2}^{(k)}`$ and the accuracy of the two-kink approximation (defined as the absolute deviation from the exact result divided by the exact result) is illustrated in Table 1 through a few examples. ## 5 Double perturbation and phase diagram In this section we briefly consider what happens if we add to the action (1.7) the operator $`\phi _{1,2}`$, namely if we take<sup>4</sup><sup>4</sup>4When useful in this section we explicitly label the operators by the superscript $`(p)`$ identifying the critical point they refer to.
$$\mathcal{A}'=\mathcal{A}^{(p)}_{CFT}+\lambda\int d^2x\,\phi^{(p)}_{1,3}(x)+\mu\int d^2x\,\phi^{(p)}_{1,2}(x).$$ (5.1)
Within the usual conventions for the operator normalisations, the regime III we considered in the previous sections corresponds to $`\mu =0`$ and $`\lambda <0`$. If $`\mu `$ is very small we can use form factor perturbation theory around regime III.
The correction to the energy density $`\epsilon _j`$ of the vacuum $`|0_j\rangle`$ is proportional to the vacuum expectation value of the perturbing operator $`\phi _{1,2}`$ computed at $`\mu =0`$, and reads (remember (3.15) and (3.4))
$$\delta\epsilon_j\propto\mu\,\langle 0_j|\phi_{1,2}|0_j\rangle=\mu\,(-1)^j\langle 0_1|\phi_{1,2}|0_1\rangle.$$ (5.2)
This means that only a subset of alternating vacua among the $`p1`$ degenerate vacua of regime III preserves the same energy when $`\lambda `$ is negative and $`\mu `$ is small in (5.1). For $`p`$ odd, in particular, we see that the $`Z_2`$ symmetry characteristic of the case $`\mu =0`$ is broken by the $`\phi _{1,2}`$ perturbation. For $`p=3`$ the action (5.1) describes the Ising model in a magnetic field. If $`p`$ is odd, the number of surviving degenerate vacua is $`(p-1)/2`$. If $`p`$ is even, instead, this number is $`p/2`$ or $`p/2-1`$, depending on the sign of $`\mu `$. It is clear that in the presence of such a pattern of degeneracy breaking the kinks $`K_{j,j\pm 1}`$ of regime III are no longer asymptotic excitations when $`\mu 0`$. Rather, they will be confined into pairs $`K_{j,j+1}K_{j+1,j+2}`$ providing the new stable kinks in the perturbed theory. This phenomenon appears in the formalism when we try to compute the correction to the mass of the kinks $`K_{j,j\pm 1}`$, which is given by
$$\delta m\propto\mu\,F_{j,\pm}^{\phi_{1,2}}(i\pi).$$ (5.3)
Since the form factor on the r.h.s. has a pole at $`\theta =i\pi `$ (see (3.6) and (3.15)), it follows that this correction is infinite, a fact that reveals the removal of the kinks $`K_{j,j\pm 1}`$ from the spectrum of the asymptotic excitations. The conformal field theories with $`C<1`$ perturbed by one of the operators $`\phi _{1,3}`$, $`\phi _{1,2}`$ or $`\phi _{2,1}`$ are integrable. It is then natural to look for solvable lattice models whose scaling limit corresponds to these quantum field theories. In Ref. a 'dilute' version of the RSOS models was considered and found to be solvable on the lattice along four distinct branches. It was found, in particular, that the scaling limit of 'branch 2' is described by the action (5.1) with $`\lambda =0`$, and that for $`p`$ odd this branch possesses $`(p-1)/2`$ degenerate ground states. This result is consistent with our perturbative considerations and suggests that they hold true for the whole region $`\lambda \leq 0`$ in (5.1). We already mentioned that for $`p>3`$ the regime IV of the RSOS models ($`\lambda >0`$, $`\mu =0`$ in (5.1)) corresponds to a massless flow to the critical point with action $`𝒜_{CFT}^{(p-1)}`$. It is known that the operator $`\phi _{1,2}^{(p)}`$ renormalises into the operator $`\phi _{2,1}^{(p-1)}`$ in the infrared limit of this flow. Hence we conclude that the action (5.1) in the limit $`\lambda =+\mathrm{}`$ describes the $`\phi _{2,1}^{(p-1)}`$ perturbation of the critical point $`𝒜_{CFT}^{(p-1)}`$. This integrable perturbation was identified as corresponding to the scaling limit of the lattice models along 'branch 1'.<sup>5</sup><sup>5</sup>5The remaining two solvable branches of Ref. are not related to perturbations of the critical points considered in this paper. The phase diagram associated with the action (5.1) is shown in Fig. 7. ## 6 Conclusion In the central part of this paper we applied the $`S`$-matrix-form factor approach to the regime III of the RSOS models. These models, however, are not the only lattice models whose scaling limit is described by the $`\phi _{1,3}`$ perturbation of $`C<1`$ conformal field theories.
It is well known that the same action (1.7) corresponds to the scaling dilute $`q`$-state Potts model at the critical temperature and zero external field (with $`q=4\mathrm{cos}^2(\pi /p)[0,4]`$), and to the scaling $`O(n)`$ model in zero external field (with $`n=2\mathrm{cos}(\pi /p)[-2,2]`$). The latter two models make sense for continuous values of $`q`$ and $`n`$ through mappings onto cluster and loop models, respectively. Excepting special values of $`p`$ ($`p=3`$ in particular), these three models are characterised by different internal symmetries and then represent different universality classes of critical behaviour. In fact, the order parameter has a different number of independent components in the three cases and corresponds to different operators (see Table 2). The fact that the three models are described by the same action along the renormalisation group trajectories specified above means that the theory (1.7) admits different microscopic descriptions distinguished by the choice of local observables<sup>6</sup><sup>6</sup>6Famous examples of this kind of situation are the equivalence between the Ising model and free neutral fermions, or that between the sine-Gordon and massive Thirring models. Each description is characterised by a specific set of mutually local operators with well-defined transformation properties under the group of internal symmetry. Of course, the perturbing operator $`\phi _{1,3}`$ appears in all these local sets and is invariant under the different symmetry groups. The observables associated to this operator, e.g. the correlation length critical exponent $`\nu =1/(2-X_{1,3})`$, are the same in the three cases. At the conformal level, the possibility of different local descriptions appears through the existence of different modular invariant partition functions for the same value of the central charge $`C`$. In the S-matrix approach away from criticality the different nature of the order parameter leads to the existence of different scattering descriptions for the action (1.7). They all exhibit the same spectrum and a very similar analytic form, but differ from each other in the nature and the number of the elementary excitations (see Table 2). In this paper we used the scattering description based on the $`Z_2`$ symmetry which characterises the RSOS models. The massive dilute $`q`$-state Potts model at $`T=T_c`$ has $`q+1`$ degenerate vacua, located at the $`q`$ vertices and at the center of a hypertetrahedron living in the $`(q-1)`$-dimensional space of the independent order parameter components. The elementary excitations are the $`2q`$ kinks interpolating from the center to the vertices, and vice versa (Fig. 8). In the massive phase of the $`O(n)`$ model there is a single vacuum and the elementary excitations are $`n`$ ordinary particles transforming according to the vector representation of the group. Of course, for non-integer values of $`q`$ and $`n`$ the number of excitations is also non-integer, but this is not more surprising than the appearance of operators with non-integer multiplicity in the modular invariant partition functions for the two models at criticality. The different number of excitations for a given $`p`$ ensures that there is no one-to-one correspondence between the three scattering descriptions, although some connections certainly exist<sup>7</sup><sup>7</sup>7A relation between the RSOS and dilute Potts scattering theory for $`p=6`$ was pointed out in Ref.
the issue of the relation between the $`O(n)`$ and RSOS scattering descriptions has been discussed in from the point of view of the quantum group reduction of the Sine-Gordon model). For each particle basis the asymptotic states have obvious transformation properties under the relevant symmetry group, and this fact allows a natural identification within the form factor approach of the interesting operators (for example the order parameter). As was discussed above, no matter which particle basis is used, summation over the intermediate asymptotic states must lead to the same result for the correlation functions of some invariant operators, in particular the trace of the stress-energy tensor $`\mathrm{\Theta }(x)\sim \lambda \phi _{1,3}(x)`$. Since each $`n`$-particle contribution to the spectral sum has a distinct large-distance behaviour $`\mathrm{exp}(-nmr)`$, the identification is expected to occur term by term. It is easy to check, comparing the results of this paper with those of Refs. , that this is indeed the case for the first (two-particle) contribution to $`\langle \mathrm{\Theta }(x)\mathrm{\Theta }(0)\rangle `$. Most of the considerations of this section can be extended to the case of the more general action (5.1).

Acknowledgements. I thank John Cardy for interesting discussions.

| $`\mathrm{\Sigma }_{\mathrm{\Phi }_1\mathrm{\Phi }_2}^{(k)}`$ | $`0.5`$ | $`1.5`$ | $`2.5`$ |
| --- | --- | --- | --- |
| $`\rho _{\mathrm{\Theta }\mathrm{\Theta }}^{(0)}`$ | $`0.09`$ | $`0.004`$ | |
| | $`(0.66)`$ | $`(0.55)`$ | |
| $`\rho _{\mathrm{\Theta }\mathrm{\Theta }}^{(2)}`$ | | $`0.005`$ | $`0.0008`$ |
| | | $`(0.86)`$ | $`(0.66)`$ |
| $`\rho _\mathrm{\Theta }^{(0)}`$ | $`0.03`$ | $`0.007`$ | $`0.004`$ |
| | $`(0.86)`$ | $`(0.66)`$ | $`(0.55)`$ |

Table 1. Accuracy $`\rho _{\mathrm{\Phi }_1\mathrm{\Phi }_2}^{(k)}`$ of the two-particle approximation for the $`k`$-th moment of the correlator $`\langle \mathrm{\Phi }_1(x)\mathrm{\Phi }_2(0)\rangle `$ for three values of the exponent (4.15). The numbers in parentheses are the corresponding values of $`p/(p+1)`$.

| | RSOS | Dilute | |
| --- | --- | --- | --- |
| | models | Potts model | $`O(n)`$ model |
| | | $`q=4\mathrm{cos}^2\frac{\pi }{p}`$ | $`n=2\mathrm{cos}\frac{\pi }{p}`$ |
| Symmetry | $`Z_2`$ | $`S_q`$ | $`O(n)`$ |
| Order parameter: | | | |
| number of components | $`1`$ | $`q-1`$ | $`n`$ |
| scaling dimension | $`X_{2,2}`$ | $`X_{\frac{p}{2},\frac{p}{2}}`$ | $`X_{\frac{p-1}{2},\frac{p+1}{2}}`$ |
| number of vacua | $`p-1`$ | $`q+1`$ | $`1`$ |
| number of | | | |
| elementary excitations | $`2(p-2)`$ | $`2q`$ | $`n`$ |

Table 2. Some features of the lattice models whose continuum limit is described by the action (1.7). The notation for the scaling dimensions refers to (1.4).
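As a quick cross-check of the counting quoted in Table 2 and in Section 5, the following short script (an illustration added here, not part of the original text) tabulates $`q`$, $`n`$ and the various multiplicities for a few values of $`p`$; the vacuum-splitting count simply restates $`\delta \epsilon _j\propto \mu (-1)^j`$:

```python
# Cross-check of the multiplicities in Table 2 and of the Section 5 vacuum counting.
import math

for p in (3, 4, 5, 6):
    q = 4 * math.cos(math.pi / p) ** 2          # dilute Potts parameter
    n = 2 * math.cos(math.pi / p)               # O(n) parameter
    rsos = (p - 1, 2 * (p - 2))                 # (vacua, kinks) for the RSOS models
    potts = (q + 1, 2 * q)                      # (vacua, kinks) for dilute Potts
    # vacua of regime III surviving the phi_{1,2} perturbation: those whose
    # energy shift mu * (-1)**j is negative, for each sign s of mu
    surviving = {s: sum(1 for j in range(1, p) if s * (-1) ** j < 0) for s in (1, -1)}
    print(f"p={p}: q={q:.3f}, n={n:.3f}, RSOS={rsos}, Potts={potts}, "
          f"surviving vacua={surviving}")
```

For $`p`$ odd the two signs of $`\mu `$ give the same number, $`(p-1)/2`$, while for $`p`$ even one obtains $`p/2`$ or $`p/2-1`$, as stated in Section 5.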
no-problem/9911/math9911059.html
ar5iv
text
# The Maximality of Cartesian Categories

## 1 Cartesian categories

A category with binary products is a category with a binary operation $`\times `$ on objects, projection arrows $$\text{k}_{A,B}^1:A\times B\to A,$$ $$\text{k}_{A,B}^2:A\times B\to B,$$ and the pairing operation that to a pair of arrows ($`f:C\to A`$, $`g:C\to B`$) assigns the arrow $`\langle f,g\rangle :C\to A\times B`$. The arrows must satisfy the equations

| $`(\beta 1)`$ | $`\text{k}_{A,B}^1\langle f,g\rangle =f`$, |
| --- | --- |
| $`(\beta 2)`$ | $`\text{k}_{A,B}^2\langle f,g\rangle =g`$, |
| $`(distr)`$ | $`\langle f,g\rangle h=\langle fh,gh\rangle `$, |
| $`(\eta )`$ | $`\langle \text{k}_{A,B}^1,\text{k}_{A,B}^2\rangle =\text{1}_{A\times B}`$. |

A category has a terminal object T iff it has the special arrows $$\text{k}_A:A\to \text{T},$$ which satisfy the equation $$(\text{k})\text{ }\text{for }f:A\to \text{T},\text{ }f=\text{k}_A.$$ A cartesian category is a category with binary products and a terminal object. In terms of the projection arrows and the pairing operation on arrows we may define in every category with binary products the following arrows:

| $`\stackrel{\rightarrow }{\text{b}}_{A,B,C}`$ | $`=_{def.}`$ | $`\langle \langle \text{k}_{A,B\times C}^1,\text{k}_{B,C}^1\text{k}_{A,B\times C}^2\rangle ,\text{k}_{B,C}^2\text{k}_{A,B\times C}^2\rangle `$ |
| --- | --- | --- |
| | | of type $`A\times (B\times C)\to (A\times B)\times C`$, |
| $`\stackrel{\leftarrow }{\text{b}}_{A,B,C}`$ | $`=_{def.}`$ | $`\langle \text{k}_{A,B}^1\text{k}_{A\times B,C}^1,\langle \text{k}_{A,B}^2\text{k}_{A\times B,C}^1,\text{k}_{A\times B,C}^2\rangle \rangle `$ |
| | | of type $`(A\times B)\times C\to A\times (B\times C)`$, |
| $`\text{c}_{A,B}`$ | $`=_{def.}`$ | $`\langle \text{k}_{A,B}^2,\text{k}_{A,B}^1\rangle `$ |
| | | of type $`A\times B\to B\times A`$, |
| $`\text{w}_A`$ | $`=_{def.}`$ | $`\langle \text{1}_A,\text{1}_A\rangle `$ |
| | | of type $`A\to A\times A`$. |

We may also define the product operation on arrows, which to a pair of arrows ($`f:A\to B`$, $`g:C\to D`$) assigns the arrow $`f\times g:A\times C\to B\times D`$: $$f\times g=_{def.}\langle f\text{k}_{A,C}^1,g\text{k}_{A,C}^2\rangle .$$ In every cartesian category we also have the arrows

| $`𝝈_A`$ | $`=_{def.}`$ | $`\langle \text{k}_A,\text{1}_A\rangle `$ |
| --- | --- | --- |
| | | of type $`A\to \text{T}\times A`$, |
| $`𝜹_A`$ | $`=_{def.}`$ | $`\langle \text{1}_A,\text{k}_A\rangle `$ |
| | | of type $`A\to A\times \text{T}`$. |

These definitions of category with binary products and cartesian category are equivalent to standard definitions (see , Part 0, Chapter 5, and Part I, Chapter 3), where $$A\stackrel{\text{k}_{A,B}^1}{\longleftarrow }A\times B\stackrel{\text{k}_{A,B}^2}{\longrightarrow }B$$ is a product diagram with the desired universal property.

## 2 Graphs of arrow terms in free cartesian categories

Consider the free cartesian category Cart generated by a set of objects $`𝒪`$ called letters (T is not a letter). This category is the image of $`𝒪`$ under the left adjoint to the forgetful functor from the category of cartesian categories, with cartesian structure-preserving functors as arrows, to the category of sets (of objects). The construction of Cart out of syntactic material is explained in detail in (Part I, Chapter 4; note that there the name “Cart” has a different meaning). For an object $`A`$ of Cart let the letter length $`|A|`$ of $`A`$ be the number of occurrences of letters in $`A`$. For example, if $`p`$ and $`q`$ are letters, then $`|((p\times q)\times p)\times (\text{T}\times p)|`$ is 4. Let $`f:A\to B`$ be an arrow term of Cart and let $`|A|=n`$ and $`|B|=m`$. To $`f`$ we associate a function $`\mathrm{\Gamma }_f`$ from $`\{1,\dots ,m\}`$ to $`\{1,\dots ,n\}`$, called the graph of $`f`$. If $`m=0`$, then $`\{1,\dots ,m\}`$ is $`\emptyset `$.
The function $`\mathrm{\Gamma }_f`$ is defined by induction on the complexity of $`f`$ in the following manner. If $`f`$ is of the form $`\text{1}_A:A\to A`$, then $`m=n`$ and $`\mathrm{\Gamma }_f(i)=i`$, where $`i\in \{1,\dots ,m\}`$. If $`f`$ is of the form $`\text{k}_{A,B}^1:A\times B\to A`$, then $`\mathrm{\Gamma }_f(i)=i`$, and if $`f`$ is of the form $`\text{k}_{A,B}^2:A\times B\to B`$, then $`\mathrm{\Gamma }_f(i)=i+|A|`$. If $`f`$ is of the form $`\text{k}_A:A\to \text{T}`$, then $`\mathrm{\Gamma }_f`$ is the empty function (i.e., $`\mathrm{\Gamma }_f`$ is $`\emptyset :\emptyset \to \{1,\dots ,|A|\}`$). If $`f`$ is of the form $`\langle g,h\rangle :C\to A\times B`$, with $`g:C\to A`$ and $`h:C\to B`$, then for $`i\le |A|`$ we have $`\mathrm{\Gamma }_f(i)=\mathrm{\Gamma }_g(i)`$ and for $`i>|A|`$ we have $`\mathrm{\Gamma }_f(i)=\mathrm{\Gamma }_h(i-|A|)`$. Finally, if $`f`$ is of the form $`hg`$, then $`\mathrm{\Gamma }_f(i)=\mathrm{\Gamma }_g(\mathrm{\Gamma }_h(i))`$. For $`f:A\to B`$, the graph $`\mathrm{\Gamma }_f`$ can be interpreted as connecting an occurrence of a letter $`p`$ in $`A`$ with a finite set of occurrences of $`p`$ in $`B`$ (this set may have more than one member, it may be a singleton, or it may be empty). We can establish the following lemma.

Lemma 1 If $`f=g`$ in Cart, then $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$.

Proof: We proceed by induction on the length of the derivation of $`f=g`$ in Cart. In this induction it is essential to check that for the equations ($`\beta 1`$), ($`\beta 2`$), ($`distr`$), ($`\eta `$) and (k), the graphs of the two sides of the equation must be equal. The induction step, which involves the rules of symmetry and transitivity of equality, as well as congruence with composition and pairing, is quite trivial. q.e.d.

We shall demonstrate the converse of this lemma in Lemma 3 below.

## 3 Normal form of arrow terms in free cartesian categories

From now on, identity of arrow terms of Cart will be taken up to associativity of composition. So, for example, $`\text{k}_{B,C}^1(\text{k}_{A,B\times C}^2\text{1}_{A\times (B\times C)})`$ will be considered to be the same arrow term as $`(\text{k}_{B,C}^1\text{k}_{A,B\times C}^2)\text{1}_{A\times (B\times C)}`$, and we may omit parentheses in compositions. (Formally, we may work with equivalence classes of arrow terms.) An arrow term of Cart is called an atomized k-composition iff it is of the form $`f_n\cdots f_1:A\to B`$, with $`n\ge 1`$, where $`B`$ is a letter and each $`f_i`$ is either of the form $`\text{k}_{C,D}^1`$ or of the form $`\text{k}_{C,D}^2`$. Arrow terms of Cart in normal form are defined inductively as follows:

1. every arrow term $`\text{1}_A`$ with $`A`$ a letter is in normal form;
2. every atomized k-composition is in normal form;
3. every arrow term $`\text{k}_A`$ is in normal form;
4. if $`f:C\to A`$ and $`g:C\to B`$ are in normal form, then $`\langle f,g\rangle `$ is in normal form.

The arrow terms defining $`\stackrel{\rightarrow }{\text{b}}`$, $`\stackrel{\leftarrow }{\text{b}}`$, c, w, $`𝝈`$ and $`𝜹`$ arrows in section 1 are in normal form if $`A`$, $`B`$ and $`C`$ are letters. For the normal form of arrow terms of Cart and their graphs we can prove the following fundamental lemma.

Lemma 2 Suppose $`f,g:A\to B`$ are arrow terms of Cart in normal form. Then $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$ iff $`f`$ and $`g`$ are the same arrow term.

Proof: Suppose $`f,g:A\to B`$ are different arrow terms of Cart in normal form. We shall show that in that case $`\mathrm{\Gamma }_f\ne \mathrm{\Gamma }_g`$ by induction on the number of pairing operations in $`f`$.
If this number is zero, then it must be zero too in $`g`$, because $`B`$ is a letter or T. If $`f`$ is of the form $`\text{1}_A`$, then $`g`$ must also be $`\text{1}_A`$, and if $`f`$ is of the form $`\text{k}_A`$, then $`g`$ must also be $`\text{k}_A`$. So $`f`$ cannot be $`\text{1}_A`$ or $`\text{k}_A`$. The only remaining possibility is that $`f`$ be an atomized k-composition of the form $`f_n\cdots f_1`$. Then $`g`$ must be an atomized k-composition, too; let it be of the form $`g_m\cdots g_1`$. It is excluded that $`f`$ be of the form $$f_n\cdots f_{n-k}g_m\cdots g_1$$ for $`n-k\ge 1`$, because the codomain of $`g_m`$ is a letter. Analogously, it is excluded that $`g`$ be of the form $$g_m\cdots g_{m-k}f_n\cdots f_1$$ for $`m-k\ge 1`$. So for some $`i`$ we must have that $`f_i`$ and $`g_i`$ are different; let $`j`$ be the least such $`i`$. Then one of $`f_j`$ and $`g_j`$ is $`\text{k}_{C,D}^1`$ while the other is $`\text{k}_{C,D}^2`$, and the letter $`B`$ must occur in both $`C`$ and $`D`$. It follows easily that $`\mathrm{\Gamma }_f\ne \mathrm{\Gamma }_g`$. If the number of pairing operations in $`f`$ is at least one, then $`f`$ is of the form $`\langle f_1,f_2\rangle :A\to B_1\times B_2`$, and hence $`g`$, which is in normal form, must be of the form $`\langle g_1,g_2\rangle `$. Since $`f`$ is different from $`g`$, either $`f_1`$ and $`g_1`$ or $`f_2`$ and $`g_2`$ are two different arrow terms with identical domains and codomains, which are both in normal form. By the induction hypothesis, either $`\mathrm{\Gamma }_{f_1}\ne \mathrm{\Gamma }_{g_1}`$ or $`\mathrm{\Gamma }_{f_2}\ne \mathrm{\Gamma }_{g_2}`$, and in both cases we can infer $`\mathrm{\Gamma }_f\ne \mathrm{\Gamma }_g`$. This concludes the induction. So, by contraposition, if $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$, then $`f`$ and $`g`$ are the same arrow term. Since the converse implication is trivial, this proves the lemma. q.e.d.

Every arrow term of Cart can be reduced to normal form by using a number of reductions, among them the two kinds of atomizing reductions referred to below. We reduce an arrow term $`t_0`$ of Cart to normal form via a sequence $`t_0,t_1,\dots ,t_n`$, where $`t_{i+1}`$ is obtained from $`t_i`$ by a reduction step, which is a replacement of a subterm of $`t_i`$ that is a redex by the corresponding contractum. We call each $`t_i`$, $`i<n`$, in this sequence a candidate for reduction. To ensure that every sequence of reduction steps terminates we must exclude in the first kind of atomizing reduction that the redex $`f`$ be of the form $`\langle f_1,f_2\rangle `$. We must exclude as well that it reduces to this form by other reductions. We must also ensure that the respective occurrence of $`f`$ does not belong to a subterm of the form $`hf`$ where $`h`$ is either $`\text{k}_{A,B}^1`$ or $`\text{k}_{A,B}^2`$, or $`h`$ reduces to $`\text{k}_{A,B}^1`$ or $`\text{k}_{A,B}^2`$. In the second kind of atomizing reduction the redex $`g`$ should be different from $`\text{k}_C`$. All this will be guaranteed if we add to the atomizing reductions the provisos we are going to formulate below. For an arrow term $`t`$ of Cart let $`\gamma (t)`$ be defined inductively as follows: $$\begin{array}{c}\gamma (\text{k}_A)=2,\hfill \\ \gamma (t)=3\text{ if }t\text{ is }\text{1}_A\text{, }\text{k}_{A,B}^1\text{ or }\text{k}_{A,B}^2\text{,}\hfill \\ \gamma (gf)=\gamma (g)\gamma (f),\hfill \\ \gamma (\langle f,g\rangle )=\gamma (f)+\gamma (g)+1.\hfill \end{array}$$ Let $`\alpha (t)`$ be $`\gamma (t)\cdot n`$, where $`n`$ is 1 if in $`t`$ there are pairing operations $`\langle \cdot ,\cdot \rangle `$ within the scope of composition; otherwise $`n`$ is 0.
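The measures $`\gamma `$ and $`\alpha `$ are easy to mechanize. The sketch below is an illustration added here (not part of the paper); arrow terms are encoded as nested tuples, with types omitted since $`\gamma `$ and $`\alpha `$ do not depend on them:

```python
# gamma and alpha on arrow terms encoded as nested tuples:
# ("id",), ("k1",), ("k2",), ("k",) for the basic arrows,
# ("pair", f, g) for <f,g>, and ("comp", g, f) for the composition gf.

def gamma(t):
    tag = t[0]
    if tag == "k":
        return 2
    if tag in ("id", "k1", "k2"):
        return 3
    if tag == "comp":
        return gamma(t[1]) * gamma(t[2])
    if tag == "pair":
        return gamma(t[1]) + gamma(t[2]) + 1

def contains_pair(t):
    return t[0] == "pair" or any(contains_pair(s) for s in t[1:]
                                 if isinstance(s, tuple))

def alpha(t):
    # alpha(t) = gamma(t) * n, where n = 1 iff a pairing occurs inside a composition
    def n(t):
        if t[0] == "comp":
            return 1 if any(contains_pair(s) for s in t[1:]) else 0
        if t[0] == "pair":
            return max(n(t[1]), n(t[2]))
        return 0
    return gamma(t) * n(t)
```

The degree $`\omega ^2\alpha (t)+\omega \beta (t)+\gamma (t)`$ can then be compared lexicographically on the triples $`(\alpha (t),\beta (t),\gamma (t))`$; $`\beta `$ is omitted from the sketch since it requires the typed targets of subterms, defined next.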
If $`h_2h_1`$ is a subterm of an arrow term $`t`$, then $`h_1`$ and $`h_2`$ are called compositional subterms of $`t`$. Let a subterm $`f`$ of $`t`$ be called product-eliminative (for this terminology, suggested by natural deduction, see section 6) iff the pairing operation does not occur in $`f`$, and $`f`$ is not a compositional subterm of $`t`$. The proviso for the first atomizing reduction says that the redex $`f`$ must be product-eliminative with respect to the candidate for reduction $`t`$, of which $`f`$ is a subterm, and that $`\alpha (t)=0`$. The proviso for the second atomizing reduction says just that $`g`$ is not $`\text{k}_C`$. With these provisos, it is easy to check that $`t`$ is in normal form iff there are no redexes in it. (For that we rely on the fact that in Cart there is no arrow of type $`\text{T}\to A`$ with $`A`$ a letter.) For an object $`A`$ of Cart let the length of $`A`$ be the letter length $`|A|`$ of $`A`$ plus the number of occurrences of $`\times `$ and T in $`A`$ (i.e., the length of $`A`$ is the number of occurrences of symbols in $`A`$). For an arrow term $`t`$ of Cart let $`\beta (t)`$ be the sum of all the lengths of all the targets of product-eliminative subterms of $`t`$. (If the same object is the target of $`n`$ product-eliminative subterms of $`t`$, then it is counted $`n`$ times in the sum $`\beta (t)`$.) Let the degree of a candidate for reduction $`t`$ be the ordinal $`\omega ^2\alpha (t)+\omega \beta (t)+\gamma (t)`$. It is easy to check that by replacing a redex of $`t`$ by a contractum the degree of the resulting arrow term strictly decreases. Then an induction up to $`\omega ^3`$ (which is, of course, reducible to an ordinary induction up to $`\omega `$) shows that every sequence of reduction steps terminates. Our provisos not only ensure that every sequence of reduction steps terminates in a normal form; they also optimize reductions in other respects. For our purposes, however, it is enough to know that some sequence of reduction steps terminates in a normal form, so the provisos for atomizing reductions are not essential. But the provisos do help to make it clear that such a sequence exists. All the reductions above (without the provisos, and hence with the provisos, too) are covered by equations of Cart. Lemmata 1 and 2 guarantee that if $`f=g`$ is satisfied in Cart and $`f^{}`$ and $`g^{}`$ are normal forms of $`f`$ and $`g`$, respectively, then $`f^{}`$ and $`g^{}`$ are the same arrow term. To prove that, we conclude by Lemma 1 from $`f=g`$, $`f=f^{}`$ and $`g=g^{}`$, which we have in Cart, that $`\mathrm{\Gamma }_f^{}=\mathrm{\Gamma }_g^{}`$. Then by Lemma 2, it follows that $`f^{}`$ and $`g^{}`$ are the same arrow term. So arrows of Cart have a unique normal form. Note that we have demonstrated this without appealing to the Church-Rosser property for our reductions.

## 4 Coherence

Before proving our theorem about the maximality of cartesian categories, we establish a lemma converse to Lemma 1.

Lemma 3 Suppose $`f,g:A\to B`$ are arrow terms of Cart. If $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$, then $`f=g`$ in Cart.

Proof: Suppose $`f,g:A\to B`$ are arrow terms of Cart such that $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$. Then let $`f^{}`$ and $`g^{}`$ be the normal forms of $`f`$ and $`g`$, respectively. Since $`f=f^{}`$ and $`g=g^{}`$ in Cart, by Lemma 1 we have $`\mathrm{\Gamma }_f^{}=\mathrm{\Gamma }_g^{}`$, and hence, by Lemma 2, $`f^{}`$ and $`g^{}`$ are the same arrow term.
Then by the symmetry and transitivity of equality it follows that $`f=g`$ in Cart. q.e.d.

A notion of graph analogous to ours may be found in (p. 94). A coherence result analogous to Lemma 3 is envisaged in (p. 129), and is demonstrated in (Theorem 2.2), (Theorem 8.2.3, p. 207) and . Lemmata 1 and 3 guarantee that there is a faithful cartesian functor from Cart to the category $`\text{Finord}^{op}`$, whose objects are finite ordinals and whose arrows are arbitrary functions from finite ordinals to finite ordinals, with domains being targets and codomains sources. This functor is onto on objects and on arrows if Cart is generated by a nonempty set of letters. The product of $`\text{Finord}^{op}`$, to which the product of Cart is mapped, is simply addition, and the terminal object is zero. If Cart is generated by a single object, then $`\text{Finord}^{op}`$ is equivalent (but not isomorphic) to Cart: it is the skeleton of Cart. Our demonstration of uniqueness of the normal form in the preceding section, which did not appeal to the Church-Rosser property of reductions, is akin to model-theoretical methods of normalization (see and references therein). In the spirit of these methods, a computation without reductions of the normal form of an arrow term $`f`$ of Cart consists in finding $`\mathrm{\Gamma }_f`$, and then constructing out of $`\mathrm{\Gamma }_f`$ an arrow term of Cart in normal form whose graph is $`\mathrm{\Gamma }_f`$. The equivalence between $`f=g`$ and $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$, which follows from Lemmata 1 and 3, provides an easy decision procedure for the commuting of diagrams in Cart. To check whether $`f=g`$ in Cart, we just have to check whether $`\mathrm{\Gamma }_f=\mathrm{\Gamma }_g`$. (An alternative decision procedure for the commuting of diagrams in Cart can be obtained via reduction to normal form, according to section 3.)

## 5 The maximality theorem

For arrow terms $`f,g:A\to B`$ of Cart, we say that $`f=g`$ holds in a cartesian category $`𝒞`$ iff for every cartesian functor $`F`$ from Cart to $`𝒞`$ we have $`F(f)=F(g)`$ in $`𝒞`$. Then we can prove our theorem.

Theorem Suppose $`f,g:A\to B`$ are arrow terms of Cart such that in Cart we don’t have $`f=g`$. If $`f=g`$ holds in a cartesian category $`𝒞`$, then $`𝒞`$ is a preorder.

Proof: Let $`f,g:A\to B`$ be arrow terms of Cart such that in Cart we don’t have $`f=g`$. By Lemma 3, it follows that $`\mathrm{\Gamma }_f\ne \mathrm{\Gamma }_g`$. So there must be an occurrence of a letter $`p`$ in $`B`$ such that if $`p`$ is the $`i`$-th letter symbol of $`B`$, counting from the left, then $`\mathrm{\Gamma }_f(i)\ne \mathrm{\Gamma }_g(i)`$. Consider the substitution instances $`f^{},g^{}:A^{}\to B^{}`$ of $`f`$ and $`g`$ obtained by replacing every letter by $`p`$. So the only letter in $`A^{}`$ and $`B^{}`$ is $`p`$. It is clear that $`\mathrm{\Gamma }_f^{}(i)\ne \mathrm{\Gamma }_g^{}(i)`$. Then there is an arrow term $`h:p\times p\to A^{}`$ of Cart made of possibly multiple occurrences of the arrow terms 1, $`\stackrel{\rightarrow }{\text{b}}`$, $`\stackrel{\leftarrow }{\text{b}}`$, c, w, $`𝝈`$ and $`𝜹`$, together with the product operation on arrows and composition, such that $`\mathrm{\Gamma }_h(\mathrm{\Gamma }_f^{}(i))=1`$ and $`\mathrm{\Gamma }_h(\mathrm{\Gamma }_g^{}(i))=2`$. There is also an arrow term $`j:B^{}\to p`$ that is either $`\text{1}_p`$ or an atomized k-composition (see section 3) such that $`\mathrm{\Gamma }_j(1)=i`$.
Then $`jf^{}h`$ and $`jg^{}h`$ are both of type $`p\times p\to p`$, while $`\mathrm{\Gamma }_{jf^{}h}=\mathrm{\Gamma }_{\text{k}_{p,p}^1}`$ and $`\mathrm{\Gamma }_{jg^{}h}=\mathrm{\Gamma }_{\text{k}_{p,p}^2}`$. Therefore, by Lemma 3, in Cart we have $`jf^{}h=\text{k}_{p,p}^1`$ and $`jg^{}h=\text{k}_{p,p}^2`$. So these two equations hold in every cartesian category $`𝒞`$. If $`f=g`$ holds in $`𝒞`$, then $`f^{}=g^{}`$ holds too, and so $`\text{k}_{p,p}^1=\text{k}_{p,p}^2`$ holds in $`𝒞`$. Now suppose $`s,t:C\to D`$ are two arrows of $`𝒞`$. Then in $`𝒞`$ we have $$\text{k}_{D,D}^1\langle s,t\rangle =\text{k}_{D,D}^2\langle s,t\rangle ,$$ and so $`s=t`$, by ($`\beta 1`$) and ($`\beta 2`$). q.e.d.

An analogous maximality theorem can be proved for categories with binary products. For that we have to replace Cart by the free category with binary products $`\text{Cart}^{\ast }`$ generated by a set of letters. Graphs for the arrow terms of $`\text{Cart}^{\ast }`$ are defined by just omitting the clause for $`\mathrm{\Gamma }_{\text{k}_A}`$ in section 2. In the definition of the normal form in section 3 we omit clause (3), and the second kind of atomizing reduction in the same section is now superfluous. Exact analogues of Lemmata 1, 2 and 3 are provable as before, while in the proof of the Theorem we need not mention $`𝝈`$ and $`𝜹`$ arrow terms in the second paragraph. This way we show that, if $`f,g:A\to B`$ are arrow terms of $`\text{Cart}^{\ast }`$ such that in $`\text{Cart}^{\ast }`$ we don’t have $`f=g`$, and $`f=g`$ holds in a category with binary products $`𝒞`$, then $`𝒞`$ is a preorder.

## 6 A logical conclusion

It is well known that equations between arrows in cartesian categories correspond to equivalence between deductions in conjunctive logic, including the constant true proposition. This equivalence induces an equality relation on equivalence classes. As far as $`\times `$ is concerned, this equality between deductions permits us to reduce every deduction to one in atomized normal form, where elimination rules, which correspond to $`\text{k}^1`$ and $`\text{k}^2`$ arrows, precede introduction rules, which correspond to the pairing operation, and the middle part between eliminations and introductions is atomic. This atomized normal form corresponds to the normal form of arrow terms of Cart in section 3. The import of the maximality of cartesian categories for logic is that the usual assumptions for equivalence between deductions in conjunctive logic are optimal. These assumptions are wanted, because they are induced by normalization of deductions, and no assumption is missing, because any further equation would equate all deductions that share premises and conclusions. Of course, by duality, this solves the problem for disjunctive logic, with or without the constant absurd proposition. This logic corresponds to categories with binary coproducts, which when they have an initial object are “cocartesian” categories, i.e. cartesian categories with arrows reversed. It is natural to enquire whether similar maximality results hold for other sorts of categories of interest to logic. An exactly analogous result for cartesian closed categories is proved in (Theorem 1) and . However, this result is independent of the maximality result for cartesian categories proved in this note: neither result can be inferred from the other.
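As a concluding illustration (again added here, not part of the paper), the graph construction of Section 2, and with it the decision procedure of Section 4, is straightforward to implement. Objects are encoded as strings of letters, so $`|A|`$ is the string length, with T represented by the empty string:

```python
# Gamma_f computed by the inductive clauses of Section 2 (illustrative sketch).
# Terms: ("id", A), ("k1", A, B), ("k2", A, B), ("k", A),
#        ("pair", g, h) for <g,h>, and ("comp", h, g) for the composite hg.

def graph(f):
    tag = f[0]
    if tag in ("id", "k1"):                           # 1_A : A -> A, k1 : A x B -> A
        return {i: i for i in range(1, len(f[1]) + 1)}
    if tag == "k2":                                   # k2 : A x B -> B
        return {i: i + len(f[1]) for i in range(1, len(f[2]) + 1)}
    if tag == "k":                                    # k_A : A -> T, the empty function
        return {}
    if tag == "pair":                                 # <g,h> : C -> A x B
        Gg, Gh = graph(f[1]), graph(f[2])
        nA = len(Gg)                                  # |A| = size of the domain of Gamma_g
        return {**Gg, **{i + nA: Gh[i] for i in Gh}}  # Gamma(i) = Gamma_h(i - |A|), i > |A|
    if tag == "comp":                                 # hg with g : A -> C and h : C -> B
        Gh, Gg = graph(f[1]), graph(f[2])
        return {i: Gg[Gh[i]] for i in Gh}

# Deciding the commuting of diagrams (Lemmata 1 and 3):
w = ("pair", ("id", "p"), ("id", "p"))                # w_p = <1_p, 1_p>
assert graph(("comp", ("k1", "p", "p"), w)) == graph(("id", "p"))   # k1 w = 1_p in Cart
assert graph(("k1", "p", "p")) != graph(("k2", "p", "p"))           # k1 and k2 differ
```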
no-problem/9911/cond-mat9911030.html
ar5iv
text
# Long–range interacting rotators: connection with the mean–field approximation ## Captions for Figures Figure 1:(a) Modulus of the magnetization, $`|𝐌|`$, as a function of the specific energy $`E/N`$; (b) $`|𝐌|`$ as a function of the scaled specific energy $`E/(N\stackrel{~}{N})`$; (c) scaled temperature $`T/\stackrel{~}{N}`$ as a function of $`E/(N\stackrel{~}{N})`$. The symbols correspond to numerical simulations in the microcanonical ensemble for $`\alpha =0.5`$ and different system sizes indicated on the figure. Each symbol corresponds to an average of different initial conditions (typically 10). The solid lines correspond to the theoretical equilibrium predictions for the HMF model (for which $`\stackrel{~}{N}=1`$, since the HMF Hamiltonian is already $`N`$-scaled). Figure 2: $`|𝐌|`$ as a function of $`E/(N\stackrel{~}{N})`$. The symbols correspond to numerical simulations in the microcanonical ensemble for $`\alpha =0.25`$, $`0.5`$, and $`0.75`$, all for $`N=400`$. The solid line corresponds to the theoretical results for the HMF model. Figure 3: (a) $`|𝐌|`$ as a function of $`E/(N\stackrel{~}{N})`$; (b) scaled temperature $`T/\stackrel{~}{N}`$ as a function of $`E/(N\stackrel{~}{N})`$. The symbols correspond to numerical simulations in the microcanonical ensemble for $`\alpha =1.5`$ and different system sizes. For comparison, the dotted line corresponds to the theoretical equilibrium predictions of the HMF model. Figure 4: Scaled temperature $`T/\stackrel{~}{N}`$ as a function of $`E/(N\stackrel{~}{N})`$. The symbols correspond to numerical simulations in the microcanonical ensemble for $`\alpha =`$ $`2.5`$ (gray), $`5.0`$ (white) and different system sizes indicated on the figure. The dashed and solid lines correspond to the theoretical results for the HMF ($`\stackrel{~}{N}=1`$) and the $`\alpha \mathrm{}`$ ($`\stackrel{~}{N}=2`$) limits, respectively.
no-problem/9911/astro-ph9911462.html
ar5iv
text
# 1 Introduction

This field is now in an explosive phase of growth, driven mainly by a wealth of observations at ever higher redshifts. We probably know as much today about galaxies at $`z=3`$ as we did five years ago about galaxies at $`z=0.3`$. This new knowledge is the product of many clever people and some powerful telescopes and instruments: HST, Keck, COBE, ISO, and SCUBA among them. With these observations and some theoretical background, we have begun to write an outline of the story of galaxy formation, the subject of this conference. If we continue to make progress at anything like the present rate, we will know, within a decade or two, the full story of galaxy formation. At that time, I predict, we will look back on this conference and recall with nostalgia how exciting it was to help write that story. We have heard at this meeting (and can now read in this book) a large number of excellent presentations, all with new data and/or ideas. While this has certainly made the conference stimulating, it has not made the job of summarizing it easy! In fact, so many new results were presented that it would be impossible for me to review more than a small fraction of them. Instead, it is probably more valuable and tractable to highlight a couple of the major themes that pervade much of the recent work in this field and at this meeting. These are the global evolution of galaxies and the origin of the Hubble sequence; the first ignores the individuality of galaxies, while the second attempts to understand it.

## 2 Global Evolution of Galaxies

One of the grand themes to emerge in the last few years is the idea that we may be able to trace the global histories of star formation, gas consumption, and metal production in galaxies from high redshifts to the present. By “global,” we mean averages over the whole population of galaxies or, equivalently, over large, representative volumes of the universe. This idea is illustrated in Figure 1, where we sketch the evolution of the contents of a large, comoving box. We can conveniently quantify the masses of the different constituents of the box by the corresponding mean comoving densities normalized to the critical density. We are especially interested in the comoving densities of stars, gas (both inside and outside galaxies, i.e., ISM and IGM), metals, dust, and black holes: $`\mathrm{\Omega }_\mathrm{S}`$, $`\mathrm{\Omega }_{\mathrm{ISM}}`$, $`\mathrm{\Omega }_{\mathrm{IGM}}`$, $`\mathrm{\Omega }_\mathrm{M}`$, $`\mathrm{\Omega }_\mathrm{D}`$, and $`\mathrm{\Omega }_{\mathrm{BH}}`$, respectively. We are also interested in the cosmic emissivity $`E_\nu `$, the power radiated per unit comoving volume per unit rest-frame frequency $`\nu `$, and the background intensity $`J_\nu `$, the power received per unit solid angle of sky per unit area of detector per unit observed frequency $`\nu `$. After recombination, our comoving box is filled with neutral, metal-free gas with nearly uniform density. Perturbations in this intergalactic medium (IGM) eventually condense, probably by gravitational clumping and inflow, into protogalaxies. Stars then form in the resulting interstellar medium (ISM). They produce metals and may drive outflows of gas from galaxies. In this way, both the ISM and IGM may be enriched with metals. Some of the metals remain in the gas phase; others condense into solid dust grains. Black holes form in the nuclei of some, perhaps even all, galaxies and, when fueled, can power active galactic nuclei (AGN).
Some of the radiation emitted by stars and AGN propagates freely, while the rest is absorbed and then emitted at longer wavelengths by dust. Thus, the radiation we detect from galaxies tells us primarily about their star, AGN, and dust contents. The spectra of high-redshift quasars contain signatures of the absorption and scattering of radiation by the intervening ISM and IGM (absorption lines, reddening, etc). Such observations tell us primarily about the composition and comoving densities of the ISM and IGM. Exactly how all this happens is not yet known, of course. This is a major long-term goal, the holy grail, of our subject. It should be clear from Figure 1 and the commentary above, however, that the constituents of our comoving box, including the radiation that propagates through it, are very much interrelated. In fact, the corresponding comoving densities must obey a series of coupled conservation equations. This is illustrated in Figure 2, which shows the hypothetical but plausible evolution of several of these comoving densities. Clearly, $`\mathrm{\Omega }_\mathrm{S}`$, $`\mathrm{\Omega }_{\mathrm{ISM}}`$, and $`\mathrm{\Omega }_{\mathrm{IGM}}`$ must add up to $`\mathrm{\Omega }_{\mathrm{baryon}}`$, a constant. Similarly, $`\mathrm{\Omega }_\mathrm{M}^\mathrm{S}`$, $`\mathrm{\Omega }_\mathrm{M}^{\mathrm{ISM}}`$, and $`\mathrm{\Omega }_\mathrm{M}^{\mathrm{IGM}}`$, the comoving densities of metals in stars, the ISM and the IGM, must add up to $`\mathrm{\Omega }_\mathrm{M}`$. Moreover, $`\mathrm{\Omega }_\mathrm{M}`$ remains a fixed fraction of $`\mathrm{\Omega }_\mathrm{S}`$ on the assumption that the global yield is constant and that delayed recycling is negligible (a good approximation in the present context). The bolometric emissivity, $`E_{\mathrm{bol}}=\int E_\nu d\nu `$, is the sum of two terms, one nearly proportional to the star formation rate $`\dot{\mathrm{\Omega }}_\mathrm{S}`$ and one proportional to the black hole fueling rate $`\dot{\mathrm{\Omega }}_{\mathrm{BH}}`$. The spectral shape of $`E_\nu `$ depends on $`\dot{\mathrm{\Omega }}_\mathrm{S}`$, $`\dot{\mathrm{\Omega }}_{\mathrm{BH}}`$, and the amount of reprocessing by dust and hence on $`\mathrm{\Omega }_\mathrm{D}`$. Finally, the background and emissivity are related by $`J_\nu =(c/4\pi )\int E_{(1+z)\nu }dt`$. In the past few years, we have made great progress in sketching out a global picture of galactic evolution. In fact, we now have empirical estimates of several of the quantities mentioned above at redshifts from $`z=0`$ up to $`z\sim 4`$ and hence over most of cosmic time. One of the great advantages of the global approach is that the quantities of interest are so interrelated that, in principle, any one of them could be predicted from the others. For example, one could infer the global history of star formation from its consequences on the ISM and IGM, and hence on the spectra of background quasars, without observing a single photon emitted by a star! In practice, of course, there are uncertainties. Some of these stem from the difficulty of making measurements at particular wavelengths and redshifts. For example, it is harder to determine the redshifts of galaxies at $`1\lesssim z\lesssim 2`$ than at $`z\lesssim 1`$ and $`z\gtrsim 2`$.
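As an aside, the relation $`J_\nu =(c/4\pi )\int E_{(1+z)\nu }dt`$ is straightforward to evaluate once an emissivity history is assumed. The toy script below is an illustration added to this summary: the power-law emissivity with a Gaussian history and the Einstein-de Sitter $`dt/dz`$ are placeholder assumptions, not results from the talks.

```python
# Toy evaluation of J_nu = (c/4pi) * Integral E_{(1+z)nu} dt (illustrative only).
import numpy as np

H0 = 2.3e-18          # Hubble constant in s^-1 (about 70 km/s/Mpc)
C_CGS = 3.0e10        # speed of light in cm/s
MPC = 3.086e24        # cm per Mpc

def emissivity(nu, z):
    # hypothetical comoving emissivity [erg s^-1 Hz^-1 cm^-3]: a UV power law
    # modulated by a star-formation-like history peaking near z ~ 1.5
    return (1e26 / MPC**3) * (nu / 1e15) ** -0.5 * np.exp(-((z - 1.5)) ** 2)

def background(nu, zmax=5.0, npts=2000):
    z = np.linspace(0.0, zmax, npts)
    dtdz = 1.0 / (H0 * (1.0 + z) ** 2.5)      # Einstein-de Sitter dt/dz
    return C_CGS / (4 * np.pi) * np.trapz(emissivity((1 + z) * nu, z) * dtdz, z)

print(background(1e15))   # J_nu at 10^15 Hz, in erg s^-1 cm^-2 Hz^-1 sr^-1
```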
Another source of uncertainty is that various differential quantities must be extrapolated outside the ranges over which they have been observed in order to estimate integral quantities. The emissivity and background, for example, are computed from the luminosity function and the number vs flux density relation through $`E_\nu =\int L_\nu \varphi (L_\nu )dL_\nu `$ and $`J_\nu =\int S_\nu (dN/dS_\nu )dS_\nu `$. As Figure 3 indicates, extrapolating $`\varphi (L_\nu )`$ to $`L_\nu \to 0`$ and $`dN/dS_\nu `$ to $`S_\nu \to 0`$ may introduce uncertainties in $`E_\nu `$ and $`J_\nu `$. How well are we doing with this program? Most authors seem to agree that the global star formation rate $`\dot{\mathrm{\Omega }}_\mathrm{S}`$ declines by a large factor ($`\sim 10`$) from $`z\sim 1`$ to $`z=0`$, although there are still some uncertainties (i.e., factors of 5–20 are possible). It appears that $`\dot{\mathrm{\Omega }}_\mathrm{S}`$ may level off at $`1\lesssim z\lesssim 2`$, but its behavior at higher redshifts has been a topic of much recent debate. Within the large uncertainties, it could rise, fall, or remain constant. Part of the uncertainty comes from the unknown corrections for dust when converting observed UV emissivities into star formation rates. Some authors have ignored these corrections altogether, while others have advocated corrections by an order of magnitude or more. In this context, it is worth noting that the average correction factor (over all redshifts) can be estimated by comparing the energy densities in the background at wavelengths above and below 10 $`\mu `$m: $$CF\approx 1+\left(\int _{\lambda >10\mu \mathrm{m}}J_\nu d\nu /\int _{\lambda <10\mu \mathrm{m}}J_\nu d\nu \right).$$ Figure 4 shows recent estimates of, and limits on, $`J_\nu `$ over a wide range of wavelengths. From this, we infer a modest correction factor, $`CF\sim `$ 2–3 (i.e., neither negligible nor dominant). Damped Ly$`\alpha `$ absorbers (DLAs) are usually taken to represent the ISM of galaxies. The reasons for this are that they contain most of the neutral gas in the universe and have column densities near or above the threshold for star formation ($`N_\mathrm{H}\gtrsim 10^{20}`$ cm<sup>-2</sup>). The comoving density of gas in the DLAs declines by a factor of about 10 between $`z=`$ 2–3 and $`z=0`$, the redshifts at which it is known most reliably. Moreover, $`\mathrm{\Omega }_{\mathrm{ISM}}`$ at $`z=`$ 2–3 is nearly equal to $`\mathrm{\Omega }_\mathrm{S}`$ at $`z=0`$, highly suggestive of the conversion of ISM into stars. The value of $`\mathrm{\Omega }_{\mathrm{ISM}}`$ at $`z\sim 1`$ is less certain because relatively few DLAs are known at these redshifts. The global mean ISM metallicity, $`Z_{\mathrm{ISM}}\equiv \mathrm{\Omega }_\mathrm{M}^{\mathrm{ISM}}/\mathrm{\Omega }_{\mathrm{ISM}}`$, rises substantially, from $`Z_{\mathrm{ISM}}\sim 0.1Z_{\odot }`$ at $`z=`$ 2–3 to $`Z_{\mathrm{ISM}}\sim Z_{\odot }`$ at $`z=0`$, again suggestive of a great deal of star formation in this period. However, the observed value of $`Z_{\mathrm{ISM}}`$ at $`z\sim 1`$ is surprisingly low. This may be the result of a selection effect in which metal-rich regions of the ISM are also dust-rich and hence obscure any quasars behind them. Finally, we note recent estimates of the global mean IGM metallicity: $`Z_{\mathrm{IGM}}\equiv \mathrm{\Omega }_\mathrm{M}^{\mathrm{IGM}}/\mathrm{\Omega }_{\mathrm{IGM}}\sim 10^{-3}Z_{\odot }`$ at $`z\sim 3`$.
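Returning to the dust correction factor, a minimal sketch of the estimate (an illustration added here; the tabulated $`(\nu ,J_\nu )`$ values, e.g. read from a compilation like that in Fig. 4, are left as caller inputs):

```python
# CF = 1 + (background energy density at lambda > 10 um) / (at lambda < 10 um).
import numpy as np

def correction_factor(nu, J_nu, nu_split=3.0e13):
    """nu in Hz (ascending), J_nu in matching units; nu_split = c / 10 um."""
    nu, J_nu = np.asarray(nu), np.asarray(J_nu)
    ir = nu <= nu_split                    # lambda > 10 um
    uv = nu > nu_split                     # lambda < 10 um
    u_ir = np.trapz(J_nu[ir], nu[ir])      # energy density integrals
    u_uv = np.trapz(J_nu[uv], nu[uv])
    return 1.0 + u_ir / u_uv
```

Applied to the estimates of Fig. 4, this is the calculation behind the quoted $`CF\sim `$ 2–3.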
These metallicity estimates and the observations of DLAs indicate that the comoving density of metals in the IGM is less than that in the ISM at $`z\sim 3`$. We know very little about the metallicity of the IGM at $`z\lesssim 1`$, a situation we hope will improve soon. Another recent development is the realization that AGN may make a significant contribution to the global radiation budget. This can be understood as follows. The present comoving density of black holes can be estimated from dynamical studies of the nuclei of nearby galaxies. Given an efficiency of conversion between black hole mass and radiant energy and a typical redshift of conversion, one can estimate the contribution of AGN to the bolometric background intensity. For $`ϵ\sim 10`$% and $`z\sim 2`$, both plausible values, the result is $`J_{\mathrm{AGN},\mathrm{bol}}\sim 0.2J_{\mathrm{bol}}`$. Within the uncertainties in the various input quantities (including $`\mathrm{\Omega }_{\mathrm{BH}}`$ at $`z=0`$), $`J_{\mathrm{AGN},\mathrm{bol}}`$ could be several times larger or smaller, i.e., nearly dominant or nearly negligible. For comparison, AGN make only a minor contribution to $`J_\nu `$ at visible wavelengths. Thus, if they are a major contributor to $`J_{\mathrm{bol}}`$, most of the radiation must be reprocessed by dust. Visible and radio observations of the sub-mm sources detected with SCUBA may help to resolve this issue. In any case, it serves as a useful reminder that the observed background is strictly an upper limit on the mean intensity of stellar radiation.

## 3 Origin of the Hubble Sequence

The other major theme on which we have seen many new results is the origin and evolution of the Hubble sequence of galactic morphologies. Much of this progress comes from deep imaging with HST at visible and, very recently, near-infrared wavelengths. From these and other observations, the following picture emerges. Large elliptical galaxies in clusters appear relatively old at $`z\sim 1`$, indicating formation at $`z\gtrsim `$ 2–3. In the field, however, a small but significant fraction of elliptical galaxies and the spheroids of disk galaxies ($`\sim 20`$%) either formed late or experienced recent episodes of star formation (at $`z\lesssim 1`$). Large galactic disks appear to have changed relatively little—in luminosity, size, or rotation velocity—from $`z\sim 1`$ to $`z=0`$. The decline in the cosmic UV emissivity between $`z\sim 1`$ and $`z=0`$ is caused mainly by rapid evolution in the number density and/or luminosities of small galaxies \[e.g., blue compact galaxies (BCGs)\]. The last result needs confirmation, however, because it depends on the faint end of the luminosity function, which is notoriously difficult to determine. The situation at higher redshifts is potentially even more interesting: galaxies at $`z\gtrsim 2`$ appear smaller and more disturbed than their present-day descendants. We observe some normal-looking elliptical galaxies but remarkably few (if any) normal-looking disk galaxies. Taken at face value, these observations suggest that most ellipticals and spheroids formed at $`z\gtrsim `$ 2–3, that most disks (large ones, at least) formed later, at $`1\lesssim z\lesssim 2`$, that the subsequent evolution of large galaxies was mainly passive (ellipticals and spheroids) or quiescent (disks), while the activity of dwarf galaxies declined at $`z\lesssim 1`$.
If this picture is correct, the Hubble sequence, in the form familiar to us, largely came into being during the period $`1\lesssim z\lesssim 2`$. It is highly significant, in this regard, that the sizes and morphologies of galaxies at high redshifts appear much the same at visible and NIR wavelengths (i.e., rest-frame UV and visible wavelengths). There is, however, a selection effect to worry about. As a result of the usual cosmological dimming, it becomes increasingly difficult to observe low surface brightness features, such as the outer parts of quiescent disks, at higher redshifts. We need to understand this effect better before we can be confident we have witnessed the origin of the Hubble sequence. Tentative though these results may be, they do invite some harmless speculation. First, the observations are at least qualitatively consistent with the idea that galaxies formed in a hierarchical sequence, starting with small objects and progressing to larger ones by merging and inflow. Second, the observations contain a vital clue as to why galactic disks appeared relatively late. At high redshifts, galaxies were close together and interacted frequently, leading in some cases to mergers and the formation of elliptical galaxies and spheroids (“hot” stellar systems). These disturbances, clearly visible in the HST images, would destroy or prevent the formation of thin disks (“cold” stellar systems). At lower redshifts, however, galaxies were farther apart and interacted rarely, permitting the formation and survival of disks. In this situation, the gas in galactic halos can cool, contract, spin up, and settle into thin, rotationally supported disks, where it can then be converted into stars. Clearly, much remains to be done to confirm or refute this picture. But we can look forward to a bright future, informed by even better observations with HST, with the large ground-based telescopes, and ultimately, with NGST.

## 4 Appreciation

This has been a wonderful conference: excellent food, excellent skiing, excellent conversation, and especially, excellent science. The success of the meeting derives from the efforts of many people, including all the speakers and participants. We are especially grateful to the scientific organizing committee—Véronique Cayatte, Bruno Guiderdoni, François Hammer, Trinh Xuan Thuan—and the organizer of organizers—Trân Thanh Vân—for the vision that created this and other Moriond meetings. Last but not least, we thank Sabine Kimmel, who helped turn this vision into reality.
no-problem/9911/astro-ph9911349.html
ar5iv
text
# Magnetic field of pulsars with superconducting quark core

### Acknowledgments.

K.M.S. acknowledges IAU support for participation in the Conference. This work has been supported in part by the Volkswagen Stiftung under grant no. I/71 226.

## References

Berges, J. & Rajagopal, K. 1999, Nucl. Phys. B, 538, 215
Bailin, D. & Love, A. 1984, Phys. Rep., 107, 325
Blaschke, D., Sedrakian, D.M. & Shahabasyan, K.M. 1999, astro-ph/9904395
Glendenning, N.K. 1992, Phys. Rev. D, 46, 1274
Makashima, K. 1992, in: The structure and evolution of neutron stars, eds. D. Pines, R. Tamagaki & S. Tsuruta (Addison-Wesley), 86
Sedrakian, D.M. & Shahabasyan, K.M. 1980, Astrofizika, 16, 417
Sedrakian, A.D. & Sedrakian, D.M. 1995, ApJ, 447, 305
Wilczek, F. 1999, hep-ph/9908480
no-problem/9911/cond-mat9911236.html
ar5iv
text
# Lattice effects on the current-voltage characteristics of superconducting arrays

## Abstract

The lattice effects on the current-voltage characteristics of two-dimensional arrays of resistively shunted Josephson junctions are investigated. The lattice potential energies due to the discrete lattice structure are calculated for several geometries and directions of current injection. We compare the energy barrier for vortex-pair unbinding with the lattice pinning potential, which shows that lattice effects are negligible in the low-current limit as well as in the high-current limit. At intermediate currents, on the other hand, the lattice potential becomes comparable to the barrier height and the lattice effects may be observed in the current-voltage characteristics.

PACS number(s): 74.50.+r, 74.25.Fy, 74.60.Ge

Two-dimensional (2D) arrays of weakly coupled Josephson junctions in equilibrium are well described by the 2D $`XY`$ model, which exhibits the Berezinskii-Kosterlitz-Thouless (BKT) transition driven by the unbinding of vortex-antivortex pairs . In experiment, on the other hand, the systems are usually out of equilibrium, and dynamical quantities are measured in the presence of external driving . Although the static nature of the BKT transition has been well established, there still remain unsolved questions about the dynamics of the system, for example, the value of the exponent of the current-voltage ($`IV`$) characteristics, the dynamic universality class, and the noise spectrum . These 2D Josephson junction arrays (JJAs) also draw much interest in relation to superconducting films and highly anisotropic high-$`T_c`$ superconductors. In JJAs, unlike the latter superconducting materials, the underlying discrete lattice structure causes nonzero pinning potentials for vortices; such lattice pinning potentials have not been taken into account properly in theoretical studies based on the continuum limit. This work investigates the lattice effects in the 2D $`N\times N`$ resistively shunted junction (RSJ) model. We first calculate the lattice potential in several geometries, including different lattice structures and directions of current injection. The lattice potential barrier on a square array with diagonal current injection is found to be much larger than that on the same array with horizontal current injection. Comparing the obtained lattice pinning potential with the energy barrier which a bound vortex pair should overcome to become free vortices, we find that the effects of lattice pinning are negligibly small in the low-current regime as well as in the high-current regime; this is confirmed by the $`IV`$ characteristics computed numerically for square arrays with different directions of current injection. It is also found that there exists an intermediate regime of current, where the lattice potential effects are observable. We begin with the calculation of the lattice potential energy, which a vortex should overcome to move to the next face of the lattice. In the presence of external currents, a vortex is expected to experience the Magnus force $$𝐅=\frac{1}{c}\mathrm{\Phi }_0𝐉\times \widehat{𝐳},$$ (1) where $`𝐉`$ is the external current density and $`\mathrm{\Phi }_0\equiv hc/2e`$ is the flux quantum. The direction of the Magnus force is perpendicular to that of the current density, forcing the vortex to move in the direction perpendicular to the current injection. As an example, Fig.
1 displays square arrays in the presence of the external currents (a) in the horizontal and (b) in the diagonal direction. Under the (sufficiently strong) Magnus force, the vortex at position A, which corresponds to the phase configuration with local minimum energy $`E_{\mathrm{min}}`$, passes position $`B`$, with maximum energy $`E_{\mathrm{max}}`$, and then moves to $`C`$, which yields the same configuration as $`A`$. Accordingly, the potential barrier $`E_b`$ that the vortex should overcome to move is determined by $$E_b=E_{\mathrm{max}}-E_{\mathrm{min}}$$ (2) with the energy $$E=-E_J\sum _{\langle ij\rangle }\mathrm{cos}(\varphi _i-\varphi _j),$$ (3) where $`E_J`$ is the Josephson coupling strength, $`\varphi _i-\varphi _j`$ is the phase difference between sites $`i`$ and $`j`$, and the summation is taken over all nearest-neighboring pairs. We use the method in Ref. to obtain the phase configuration: The minimization of the energy in Eq. (3) leads to the condition that the net current flowing into site $`i`$ should vanish: $$\sum _j\mathrm{sin}(\varphi _i-\varphi _j)=0,$$ (4) which can be rewritten as $$\mathrm{tan}\varphi _i=\frac{\sum _j\mathrm{sin}\varphi _j}{\sum _j\mathrm{cos}\varphi _j}$$ (5) with the summations performed over the four nearest neighbors of $`i`$. The phase configuration can then be found iteratively from $$\mathrm{tan}\varphi _i^{(n+1)}=\frac{\sum _j\mathrm{sin}\varphi _j^{(n)}}{\sum _j\mathrm{cos}\varphi _j^{(n)}},$$ (6) where $`\varphi _j^{(n)}`$ is the value of $`\varphi _j`$ at the $`n`$th iteration. Equation (6), together with appropriate boundary conditions, gives the phase configurations for the minimum energy $`E_{\mathrm{min}}`$ and for the maximum energy $`E_{\mathrm{max}}`$. In this manner we compute the lattice potential barrier for $`N\times N`$ square, triangular, and honeycomb arrays with horizontal current injection, as well as for a square array with diagonal current injection. Figure 2 shows the obtained lattice potential barrier $`E_b`$ versus the inverse size $`1/N`$, which demonstrates that the barrier heights saturate to constant values for sufficiently large $`N`$. In the case of square and triangular arrays with horizontal current injection, the barrier heights saturate to the values $`E_b/E_J\approx 0.199`$ and $`0.043`$, reproducing the results obtained in Ref. . In addition, Fig. 2 also gives the values $`E_b/E_J\approx 0.575`$ and $`0.822`$ for a honeycomb array with horizontal current injection and for a square array with diagonal injection, respectively. We first consider the energy for unbinding of a vortex pair without the lattice pinning potential and then examine how the result changes as the pinning effects are included. The interaction energy of a vortex-antivortex pair separated by distance $`r`$ is given by $`E_1G^{}(r)`$, where $`E_1\approx 2\pi E_J`$ at low temperatures and $`G^{}(r)`$ is the lattice Coulomb Green function (with the diagonal part subtracted) . To an excellent approximation, $`G^{}(r)`$ takes the form $$G^{}(r)\approx \mathrm{ln}(r/a)+C$$ (7) for all $`r\ge a`$, where $`a`$ is the lattice spacing and $`C`$ is a constant . Accordingly, in the presence of external current $`I`$ the energy of a vortex-antivortex pair reads $$E(r)=E_0+E_1\mathrm{ln}\left(\frac{r}{a}\right)-\left(\frac{hI}{2ea}\right)r,$$ (8) where $`E_0`$ is a constant and the last term arises from the Magnus force in Eq. (1). We measure the energy in units of $`E_J`$ and write Eq. (8) in the dimensionless form: $$E(r)=E_1\mathrm{ln}r-2\pi Ir,$$ (9) where $`E_0`$ in Eq. (9) has been dropped for convenience, and $`r`$ and $`I`$ are in units of $`a`$ and the single-junction critical current $`I_c\equiv 2eE_J/\hbar `$, respectively. The condition for the maximum pair energy, $`\partial E/\partial r=0`$, gives an estimate of the maximum pair size: $$r_{\mathrm{max}}=\frac{E_1}{2\pi I}$$ (10) for $`I\le E_1/2\pi \approx 1`$ (in the dimensionless form). (For $`I\gtrsim 1`$, we have $`r_{\mathrm{max}}\approx 1`$.)
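To make the procedure of Eqs. (2)-(6) concrete, here is a rough sketch (an illustration added here, not the authors' code) in which the boundary phases are frozen to the arctangent vortex ansatz; positioning the ansatz at a plaquette center or at a bond midpoint targets $`E_{\mathrm{min}}`$ or $`E_{\mathrm{max}}`$, respectively:

```python
# Sketch of the iteration of Eq. (6) for a single pinned vortex (illustrative).
import numpy as np

def vortex_ansatz(N, x0, y0):
    # arctangent phase pattern of a vortex centered at (x0, y0)
    y, x = np.mgrid[0:N, 0:N]
    return np.arctan2(y - y0, x - x0)

def relax(phi, sweeps=2000):
    # Gauss-Seidel sweeps of Eq. (6); boundary phases held fixed
    for _ in range(sweeps):
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                nb = (phi[i-1, j], phi[i+1, j], phi[i, j-1], phi[i, j+1])
                phi[i, j] = np.arctan2(sum(np.sin(b) for b in nb),
                                       sum(np.cos(b) for b in nb))
    return phi

def energy(phi):
    # E/E_J from Eq. (3): minus the sum of cosines over all bonds
    return -(np.cos(np.diff(phi, axis=0)).sum() + np.cos(np.diff(phi, axis=1)).sum())

N = 16
E_min = energy(relax(vortex_ansatz(N, N/2 - 0.5, N/2 - 0.5)))  # plaquette center
E_max = energy(relax(vortex_ansatz(N, N/2 - 0.5, N/2)))        # bond midpoint
print("E_b / E_J ~", E_max - E_min)
```

Whether the second configuration stays on the saddle relies on the mirror symmetry of the initial condition; the determination of $`E_{\mathrm{max}}`$ used for Fig. 2 is more careful than this sketch.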
(9) has been dropped for convenience, and $`r`$ and $`I`$ are in units of $`a`$ and the single junction critical current $`I_c2eE_J/\mathrm{}`$, respectively. The condition for the maximum pair energy $`E/r=0`$ gives the estimation of the maximum pair size: $$r_{\mathrm{max}}=\frac{E_1}{2\pi I}$$ (10) for $`IE_1/2\pi 1`$ (in the dimensionless form). (For $`I1`$, we have $`r_{\mathrm{max}}1`$.) The energy barrier $`\mathrm{\Delta }E`$ for the pair unbinding is thus given by $$\mathrm{\Delta }E=E(r_{\mathrm{max}})E(1)=E_1\left[\mathrm{ln}\left(\frac{E_1}{2\pi I}\right)1\right]+2\pi I$$ (11) in the absence of the lattice pinning potential. We display $`\mathrm{\Delta }E`$ as a function of $`I`$ in Fig. 3, where it is observed that $`\mathrm{\Delta }E`$ has very large values in the small-current regime. Note also that $`\mathrm{\Delta }E=0`$ for $`I1`$, which implies that the vortex-antivortex pair can unbind even at zero temperature if the external current is larger than the critical current. We now consider the lattice pinning effects on the vortex-antivortex pair energy. Taking the position of the vortex as the origin, we write the lattice pinning potential in the simple form $$E_p(r)=\frac{E_b}{2}\mathrm{cos}2\pi r,$$ (12) which is to be included in Eq. (9), with the lattice potential barrier $`E_b`$ in units of $`E_J`$. In the presence of such a lattice potential, the vortex may feel some roughness when it moves around. Under small or large external currents, however, the lattice effects on the vortex motion are expected not to be appreciable: In the latter case of large currents, the lattice pinning potential $`E_p`$ is so small compared with the driving potential $`2\pi Ir`$ in Eq. (9), thus resulting in negligible effects. At small external currents, the energy barrier $`\mathrm{\Delta }E`$ for the pair unbinding, shown in Fig. 3, is much larger than the lattice potential barrier $`E_b`$, and dominates the transport properties of vortices since the vortex-antivortex pair should overcome the largest energy barrier to be free vortices. On the other hand, it is of interest to note that there exists the intermediate-current regime, where $`E_b`$ is comparable to $`\mathrm{\Delta }E`$. In that regime the lattice effects on transport properties such as the $`IV`$ characteristics can presumably be measured. To investigate the above possibility, we focus on square arrays in the presence of external currents in horizontal and diagonal directions and compute the $`IV`$ characteristics at finite temperatures. Since these two cases (see Fig. 1) are believed to differ only in the lattice potential barrier, we expect that any difference in the $`IV`$ characteristics should be attributed to the lattice pinning effects. The net current through a Josephson junction of shunt resistance $`R`$ is given by the sum of the supercurrent, normal current, and thermal noise current; the resulting current conservation condition at each grain yields the equations of motion for a square $`N\times N`$ JJA : $$I_i^{\mathrm{ext}}=\underset{j}{}^{}\left[\frac{\mathrm{}}{2eR}\frac{d}{dt}(\varphi _i\varphi _j)+I_c\mathrm{sin}(\varphi _i\varphi _j)+\eta _{ij}\right],$$ (13) where $`I_i^{\mathrm{ext}}`$ is the external current fed into grain $`i`$, the primed summation runs over the nearest neighbors of grain $`i`$, and $`\eta _{ij}`$ is the thermal noise current. 
To integrate Eq. (13) we employ the fluctuating twist boundary conditions: we introduce the twist variables $`𝚫\equiv (\mathrm{\Delta }_x,\mathrm{\Delta }_y)`$ and replace the phase difference between the nearest-neighboring grains by the gauge-invariant combination $$\varphi _i-\varphi _j-𝐫_{ij}\cdot 𝚫,$$ where $`𝐫_{ij}`$ is the displacement between $`i`$ and $`j`$ and the periodic boundary conditions on $`\{\varphi _i\}`$ are imposed in both directions. The equations of motion in Eq. (13) then take the form $$\sum _j^{\prime }\left[\frac{d}{dt}(\varphi _i-\varphi _j)+\mathrm{sin}(\varphi _i-\varphi _j-𝐫_{ij}\cdot 𝚫)+\eta _{ij}\right]=0,$$ (14) where time has been rescaled in units of $`\hbar /2eRI_c`$, and the thermal noise current in units of $`I_c`$ satisfies $`\langle \eta _{ij}(t)\rangle =0`$ and $`\langle \eta _{ij}(t)\eta _{kl}(0)\rangle =2T(\delta _{ik}\delta _{jl}-\delta _{il}\delta _{jk})\delta (t)`$ with temperature $`T`$ in units of $`E_J/k_B`$. The dynamics of the twist variables is governed by two additional equations: $$\dot{\mathrm{\Delta }}_x=\frac{1}{N^2}\sum _{\langle ij\rangle _x}\mathrm{sin}(\varphi _i-\varphi _j-\mathrm{\Delta }_x)+\eta _{\mathrm{\Delta }_x}-I_x,$$ (15) $$\dot{\mathrm{\Delta }}_y=\frac{1}{N^2}\sum _{\langle ij\rangle _y}\mathrm{sin}(\varphi _i-\varphi _j-\mathrm{\Delta }_y)+\eta _{\mathrm{\Delta }_y}-I_y,$$ (16) where the summation $`\sum _{\langle ij\rangle _x}`$ is over all links in the $`x`$ direction and the thermal noise terms satisfy $`\langle \eta _\mathrm{\Delta }(t)\rangle =0`$ and $`\langle \eta _\mathrm{\Delta }(t)\eta _\mathrm{\Delta }(0)\rangle =(2T/N^2)\delta (t)`$. In the case of horizontal current injection, we have $`I_x=I`$ and $`I_y=0`$, while for diagonal injection $`I_x=I_y=I/\sqrt{2}`$. By means of the Euler algorithm, we integrate Eqs. (14)–(16) and compute the voltage $`V=\sqrt{V_x^2+V_y^2}`$, where $`V_{x(y)}\equiv -N\langle \dot{\mathrm{\Delta }}_{x(y)}\rangle _t`$, with $`\langle \cdots \rangle _t`$ denoting the time average. The use of the fluctuating twist boundary conditions in the presence of external currents has the advantage that the direction of current injection can be controlled easily. Figure 4 presents the resulting $`IV`$ curves of the arrays of sizes $`N=4,8,16`$, and $`32`$ under horizontal and diagonal current injections at (a) $`T=0.84`$ and (b) $`T=1.30`$. As expected from the existence of the resistive BKT transition at $`T=T_{\mathrm{BKT}}\approx 0.9`$, the voltage $`V`$ at $`T=0.84`$ (below $`T_{\mathrm{BKT}}`$) keeps decreasing as the system size is increased in the low-current regime, while $`V`$ appears to saturate to a nonzero value at $`T=1.30`$. It is, however, rather difficult to discern the data for horizontal and diagonal injection on the logarithmic scale. To bring out the difference and to reveal the lattice effects in detail, we thus plot on a linear scale the difference between the voltage under horizontal injection ($`V_h`$) and that under diagonal injection ($`V_d`$), which is displayed in Fig. 5 as a function of the current $`I`$ in the system of size $`N=16`$ at $`T=0.84`$ and $`1.30`$. It is obvious that the difference $`V_h-V_d`$ indeed approaches zero in the limit of small and large currents, confirming the previous prediction based on the comparison of the energy scales: the lattice pinning potential and the energy barrier for pair unbinding. In particular, the independence of the voltage in the low-current regime of the direction of current injection suggests that the $`IV`$ exponent for a square array with horizontal current injection found in Ref. is universal in the sense that it is independent of the underlying lattice structure.
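As an aside on the numerics, one Euler step of Eqs. (14)-(16) might be organized as in the sketch below. This is a reconstruction added here, not the authors' code: the sign conventions and the FFT inversion of the lattice Laplacian (which Eq. (14) requires, and which is legitimate for the periodic phases used with the fluctuating twists) are assumptions of the sketch.

```python
# One Euler step for Eqs. (14)-(16) with fluctuating-twist boundary conditions.
# Units: currents in I_c, time in hbar/(2 e R I_c), temperature in E_J/k_B.
import numpy as np

def euler_step(phi, Delta, I_vec, T, dt, rng):
    N = phi.shape[0]
    # gauge-invariant phase differences on +x and +y bonds (phi is periodic)
    sx = np.sin(phi - np.roll(phi, -1, axis=1) - Delta[0])
    sy = np.sin(phi - np.roll(phi, -1, axis=0) - Delta[1])
    # one Johnson noise current per junction, variance 2T/dt
    ex = rng.normal(0.0, np.sqrt(2 * T / dt), (N, N))
    ey = rng.normal(0.0, np.sqrt(2 * T / dt), (N, N))
    # site divergence of supercurrent plus noise (the bracket of Eq. (14))
    div = (sx + ex - np.roll(sx + ex, 1, axis=1)
           + sy + ey - np.roll(sy + ey, 1, axis=0))
    # Eq. (14) amounts to (lattice Laplacian) phidot = div; solve in Fourier space
    k = 2 * np.pi * np.fft.fftfreq(N)
    lap = 2 * (np.cos(k)[None, :] - 1) + 2 * (np.cos(k)[:, None] - 1)
    lap[0, 0] = 1.0                        # zero mode: div sums to zero anyway
    dhat = np.fft.fft2(div)
    dhat[0, 0] = 0.0
    phidot = np.real(np.fft.ifft2(dhat / lap))
    # Eqs. (15)-(16): twist dynamics, twist noise of variance 2T/(N^2 dt)
    Ddot = np.array([sx.mean() - I_vec[0], sy.mean() - I_vec[1]])
    Ddot += rng.normal(0.0, np.sqrt(2 * T / dt), 2) / N
    return phi + dt * phidot, Delta + dt * Ddot
```

The voltage then follows from the accumulated twists, $`V_x=-N\langle \dot{\mathrm{\Delta }}_x\rangle _t`$, in the convention adopted above.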
Furthermore, since the lattice pinning potential is much smaller for horizontal injection ($`E_b\approx 0.199E_J`$) than for diagonal injection ($`E_b\approx 0.822E_J`$), vortices with horizontal injection can move around more freely, which implies that in the intermediate-current regime we have $`V_h>V_d`$, as again confirmed in Fig. 5. In conclusion, we have studied the lattice effects in two-dimensional arrays of resistively shunted Josephson junctions for several geometries and directions of current injection. The lattice potential energy due to the underlying discrete array structure has been calculated and then compared with the energy scale which a vortex-antivortex pair should overcome to be unbound. From this comparison, we have found that lattice pinning effects can be observed in the intermediate-current regime, which has been confirmed by direct numerical computation of the current-voltage characteristics. We thank G.S. Jeon and P. Minnhagen for useful discussions, and acknowledge partial support from the SNU research fund, from the Korea Research Foundation, and from the Korea Science and Engineering Foundation.
no-problem/9911/hep-ph9911536.html
ar5iv
text
# Vector Boson Transverse Momentum Distributions at the Tevatron