no-problem/9909/nucl-th9909052.html
# On the nucleon instability of heavy oxygen isotopes

Yu. S. Lutostansky<sup>1</sup> and M. V. Zverev<sup>2</sup>

<sup>1</sup> Moscow Engineering Physics Institute, 115409, Moscow, Russia
<sup>2</sup> Kurchatov Institute, 123182, Moscow, Russia

## Abstract

The instability of the $`{}_{}{}^{26}\text{O}`$ nucleus with respect to decay through the two-neutron channel is investigated. It is shown that this isotope, unobserved in the fragmentation experiments, can exist as a narrow resonance in the system $`{}_{}{}^{24}\text{O}+2n`$. The role of deformation in the formation of the neutron-drip line in the region $`N\approx 20`$ is discussed.

The structure of nuclei near the neutron-drip line possesses interesting features. These are: 1) two-neutron instability of nuclei which are stable with respect to one-neutron emission, 2) the appearance of a new region of deformation when approaching the neutron-drip line, 3) a change of the concept of nuclear shells at the boundary of neutron stability, 4) formation of the neutron halo, 5) clusterization in very neutron-rich nuclei. In this article, we pay attention to the nucleon stability of oxygen nuclei near the neutron-drip line. This region, close to the doubly magic area, has been attracting the attention of nuclear experimentalists and theorists for the last decade.

The instability of the $`{}_{}{}^{26}\text{O}`$ nucleus was observed for the first time at the LISE spectrometer at GANIL in the fragmentation of a <sup>48</sup>Ca beam with an energy of 44 MeV per nucleon on a Ta target . The work confirmed this result. No events associated with the isotope $`{}_{}{}^{26}\text{O}`$ were observed either in the recent analysis of oxygen fragments by the separator RIPS in projectile fragmentation experiments using a 94.1 MeV per nucleon <sup>40</sup>Ar beam at RIKEN . A theoretical study of the properties of $`{}_{}{}^{26}\text{O}`$ was done within the framework of the self-consistent theory of finite Fermi systems . Estimation of the one-neutron separation energy yielded $`S_n\approx 0.7`$ MeV, i.e. this nucleus was found to be stable to one-neutron emission. This clears the way for the pure two-neutron instability of the $`{}_{}{}^{26}\text{O}`$ nucleus.

Before going over to an investigation of this subject, we briefly consider the neighbouring even-even oxygen isotopes. Calculations made in for the nucleus $`{}_{}{}^{24}\text{O}`$, observed in all the experiments, yielded $`S_n=3.6`$ MeV and $`S_{2n}=5.9`$ MeV. At the same time, those calculations showed 1$`n`$- and 2$`n`$-instability of the nucleus <sup>28</sup>O, unobserved in . The two-neutron instability corresponding to a positive one-neutron separation energy $`S_n`$ and a negative two-neutron separation energy $`S_{2n}`$ is possible owing to the positive energy of neutron pairing . This energy is usually characterized by the even-odd difference $`\mathrm{\Delta }_n`$ of the binding energies $`E_b`$ of three neighboring isotopes, which is given by

$$\mathrm{\Delta }_n=\frac{1}{2}\left[E_b(N+2,Z)-2E_b(N+1,Z)+E_b(N,Z)\right]$$ (1)

and is positive for an even-even nucleus $`(N,Z)`$. The obvious relation $`S_{2n}-2S_n=-2\mathrm{\Delta }_n`$ follows from eq. (1). This relation indicates that two-neutron instability is possible provided that the condition $`0<S_n<\mathrm{\Delta }_n`$ is satisfied. Bearing in mind that in this region of nuclei $`\mathrm{\Delta }_n\approx 2-3`$ MeV, one can see that this condition holds for the $`{}_{}{}^{26}\text{O}`$ nucleus.
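As a quick numerical illustration of this pairing bookkeeping, the sketch below verifies the identity $`S_{2n}-2S_n=-2\mathrm{\Delta }_n`$ and the instability condition. The binding energies used are invented placeholders, not the calculated values cited above.

```python
def delta_n(Eb_N, Eb_Np1, Eb_Np2):
    # even-odd difference, eq. (1): positive for an even-even nucleus
    return 0.5 * (Eb_Np2 - 2.0 * Eb_Np1 + Eb_N)

# hypothetical binding energies E_b (MeV) for (N, Z), (N+1, Z), (N+2, Z)
Eb_N, Eb_Np1, Eb_Np2 = 169.0, 167.3, 168.0
S_n  = Eb_Np2 - Eb_Np1        # one-neutron separation energy of (N+2, Z)
S_2n = Eb_Np2 - Eb_N          # two-neutron separation energy of (N+2, Z)
D    = delta_n(Eb_N, Eb_Np1, Eb_Np2)
assert abs((S_2n - 2*S_n) + 2*D) < 1e-12   # S_2n - 2 S_n = -2 Delta_n
# pure two-neutron instability requires 0 < S_n < Delta_n
print(S_n, S_2n, D, 0 < S_n < D)           # 0.7 -1.0 1.2 True
```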
To investigate the stability of the $`{}_{}{}^{26}\text{O}`$ nucleus, we consider the $`{}_{}{}^{24}\text{O}+2n`$ system. The energies $`E_q`$ and widths $`\mathrm{\Gamma }_q`$ of quasistationary states of this system are determined by the poles $`E_q-i\mathrm{\Gamma }_q/2`$ of the Green’s function of two neutrons in the mean field of the $`{}_{}{}^{24}\text{O}`$ nucleus . The equation for the two-neutron Green’s function can be reduced to the equation for the two-neutron wave function $`\mathrm{\Psi }(1,2)`$:

$$\left[H_0(1)+H_0(2)+V(1,2)\right]\mathrm{\Psi }(1,2)=E\mathrm{\Psi }(1,2),$$ (2)

where the arguments 1 and 2 stand for the sets of neutron spatial and spin coordinates, $`H_0`$ is the single-particle Hamiltonian, and $`V`$ is the effective interaction in the two-particle channel. A recent microscopic calculation of this interaction for semi-infinite nuclear matter showed its surface nature. The phenomenological analysis of even-odd effects in the tin isotopic chain showed that the density dependence of the effective pairing interaction is quite sophisticated. However, the parameters controlling the shape and the strength of this interaction are fitted for rather heavy nuclei not very far from the valley of $`\beta `$-stability and can hardly be directly used in the problem under discussion. That is why we do not claim to calculate the wave function $`\mathrm{\Psi }(1,2)`$ and the energy $`E`$ precisely. The aim is to estimate the width of the two-neutron state to an order of magnitude.

To solve eq. (2), we expand the function $`\mathrm{\Psi }(1,2)`$ in the basis formed by the products of two eigenfunctions of the Hamiltonian $`H_0`$ with the total angular momentum $`I=0`$ and its projection $`M=0`$. This expansion has the form

$$\mathrm{\Psi }(1,2)=\sum_\nu \int_0^\infty \int_0^\infty d\epsilon _1\,d\epsilon _2\,C^\nu (\epsilon _1,\epsilon _2)\left\{\psi _{\epsilon _1}^\nu (1)\psi _{\epsilon _2}^\nu (2)\right\}^{00},$$ (3)

where $`\nu `$ is the set of angular quantum numbers. For low energies, the $`\epsilon `$-dependence of the functions $`\psi _\epsilon (1)`$ can be separated from the coordinate and spin dependences as follows :

$$\psi _\epsilon (1)=\sqrt{\mathrm{\Delta }(\epsilon )}\,\psi _0(1).$$ (4)

According to the theoretical scheme of neutron single-particle levels of the $`{}_{}{}^{24}\text{O}`$ nucleus, the $`2s_{1/2}`$ state is the shallowest bound state, and the $`1d_{3/2}`$ state is the lowest quasistationary state. At low energies, the factor

$$\mathrm{\Delta }^d(\epsilon )=\frac{1}{2\pi }\frac{\gamma }{(\epsilon -\epsilon _0^d)^2+\gamma ^2/4}$$ (5)

is resonantly enhanced for the $`d_{3/2}`$ continuum states because of the proximity of the $`1d_{3/2}`$ quasistationary state with energy $`\epsilon ^d=\epsilon _0^d-i\gamma /2`$ in the $`{}_{}{}^{24}\text{O}`$ nucleus . Owing to the existence of the weakly bound $`2s_{1/2}`$ state, the wave functions of the $`s_{1/2}`$ continuum states are enhanced at low energies by the factor

$$\mathrm{\Delta }^s(\epsilon )\approx \frac{\sqrt{2m}\,\beta }{\pi \sqrt{\epsilon }(\epsilon +\beta )},$$ (6)

where $`\beta =\hbar ^2/mR^2`$ and $`R`$ is of the order of the $`{}_{}{}^{24}\text{O}`$ radius. For these reasons, we restrict the basis in the expansion (3) of the wave function $`\mathrm{\Psi }(1,2)`$ to the $`s_{1/2}`$ and $`d_{3/2}`$ states of the single-particle continuum. To derive the required relations, we follow the method used in to obtain the relations of the theory of two-proton radioactivity.
Substituting expansion (3) with the functions $`\psi _\epsilon (1)`$ factorized in form (4) into eq. (2), we find that the energy of the quasistationary state satisfies the algebraic equation

$$g_{dd}M^d(E)+g_{ss}M^s(E)+(g_{ds}^2-g_{dd}g_{ss})M^d(E)M^s(E)=1,$$ (7)

where

$$M^\lambda (E)=\int_0^\infty \int_0^\infty d\epsilon _1\,d\epsilon _2\,\frac{\mathrm{\Delta }^\lambda (\epsilon _1)\mathrm{\Delta }^\lambda (\epsilon _2)}{\epsilon _1+\epsilon _2-E},$$ (8)

$$g_{\lambda \nu }=\int d1\,d2\,V(1,2)\left\{\psi _0^\lambda (1)\psi _0^\lambda (2)\right\}^{00}\left\{\psi _0^\nu (1)\psi _0^\nu (2)\right\}^{00},$$ (9)

and $`\lambda ,\nu =s,d`$. Substituting expressions (5) and (6) for the quantity $`\mathrm{\Delta }^\lambda (\epsilon )`$ into eq. (8) and calculating the integrals by rotating the contour to the negative imaginary axis, we obtain

$$M^s(E)=2m\left(\frac{1}{\pi }\mathrm{ln}\frac{E}{4\beta }-\frac{1}{2}+i\right),$$ (10)

$$M^d(E)=\frac{1}{2\epsilon _0^d-E}+\frac{2\gamma ^2}{\pi (2\epsilon _0^d-E)^2}\left\{\frac{E}{\epsilon _0^d-E}+\frac{1}{2\epsilon _0^d-E}\mathrm{ln}\left|\frac{\epsilon _0^d}{\epsilon _0^d-E}\right|\right\}.$$ (11)

Eq. (7) with $`M^s(E)`$ and $`M^d(E)`$ specified by eqs. (10) and (11) is an algebraic equation with complex coefficients. Numerical analysis of eq. (7) shows that it has a complex root $`E=E_0-i\mathrm{\Gamma }/2`$ in a wide region of input parameters. While the position of this root depends on the values of the parameters, it is located around $`E_0\approx 0.1`$ MeV, $`\mathrm{\Gamma }\approx 10^{-3}`$ MeV. We reserve a detailed study of this dependence for a future publication.

The width $`\mathrm{\Gamma }\approx 10^{-3}`$ MeV of this quasistationary state corresponds to a lifetime $`\tau \approx \hbar /\mathrm{\Gamma }\approx 10^{-18}`$ s. Since $`\tau \gg \tau _{nucl}\approx 10^{-22}`$ s, the emitted neutron pair should be a weakly bound di-neutron state. It can be observed in correlation experiments. A similar situation was discussed earlier in connection with an analysis of the $`\beta `$-delayed multi-neutron emission . However, in that case the cascade ($`n+n`$) neutron emission strongly dominates and the di-neutron ($`{}_{}{}^{2}n`$) channel is suppressed (e.g. for <sup>35</sup>Na, $`P_{{}_{}{}^{2}n}/P_{n+n}<0.19`$ ) due to a decay of the excited state. In the case of $`{}_{}{}^{26}\text{O}`$, the neutron pair is emitted from the ground state, so that the situation is “cleaner”.

The spherical basis was used in calculations for neutron-rich oxygen isotopes. However, taking account of deformation (even small, with $`\beta _2<0.2`$) could strongly change the picture. Indeed, splitting of the single-particle $`d_{3/2}`$ neutron state results in destroying the shell $`N=20`$ for nuclei near the neutron-drip line . Such a sensitivity to small deformations should essentially complicate the description of neutron-rich nuclei. However, the oxygen isotopes evidently should not be deformed. Otherwise, being deformed with $`\beta _2>0.1`$, the isotope $`{}_{}{}^{26}\text{O}`$ would obtain additional stability due to an increase of the binding energy by $`\approx `$ 1 MeV and should be observed experimentally, but this is not the case. It is deformation that seems to explain the existence of the isotope <sup>31</sup>F observed in the experiment : calculations of one- and two-neutron separation energies for this nucleus, based on the self-consistent finite Fermi system theory in the spherical geometry, yielded nucleon instability of <sup>31</sup>F.
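Returning to eq. (7), the complex root can be located numerically along the following lines. This is only a rough sketch: the forms of $`M^s`$ and $`M^d`$ follow eqs. (10)-(11) as reconstructed above, and the couplings $`g_{ss},g_{dd},g_{ds}`$ and single-particle parameters are illustrative placeholders, not the authors' fitted values, so the printed root is not their quoted result and convergence may require different starting points.

```python
import cmath

m, beta = 1.0, 2.0                 # hbar = 1; beta = hbar^2/(m R^2), placeholder
eps0_d, gamma = 1.0, 0.1           # 1d_{3/2} resonance parameters (assumed, MeV)
g_ss, g_dd, g_ds = -0.3, -0.2, 0.1 # effective couplings (assumed)

def M_s(E):
    # eq. (10)
    return 2*m*(cmath.log(E/(4*beta))/cmath.pi - 0.5 + 1j)

def M_d(E):
    # eq. (11)
    a, b = 2*eps0_d - E, eps0_d - E
    return 1/a + (2*gamma**2/(cmath.pi*a**2))*(E/b + cmath.log(abs(eps0_d/b))/a)

def F(E):                          # eq. (7) rewritten as F(E) = 0
    return g_dd*M_d(E) + g_ss*M_s(E) + (g_ds**2 - g_dd*g_ss)*M_d(E)*M_s(E) - 1

# secant iteration in the complex plane; E = E_0 - i*Gamma/2
E0, E1 = 0.05 - 0.001j, 0.15 - 0.002j
for _ in range(100):
    E0, E1 = E1, E1 - F(E1)*(E1 - E0)/(F(E1) - F(E0))
print(f"root E = {E1:.6f}; Gamma = {-2*E1.imag:.2e} (input units)")
```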
At the same time, the scenario of a sharp shift of the neutron-drip line owing to the onset of deformation was suggested several years ago to explain the nucleon stability of heavy sodium isotopes . Nuclei with less than half occupation of the level $`f_{7/2}`$, being slightly unbound in a spherical calculation ($`S_n\lesssim 0`$), can become stable due to the lowering of the energy of states with asymptotic quantum numbers $`\frac{1}{2}^-[330]`$ and $`\frac{3}{2}^-[321]`$ at deformation . The analogous scenario seems to take place for fluorine isotopes: the neutron level $`f_{7/2}`$ just starts to be occupied in the nucleus <sup>31</sup>F with $`N=22`$. Following this scenario, one could expect nucleon stability of the isotope <sup>33</sup>F as well. The problem of the onset of deformation for nuclei near the neutron-drip line will be discussed in a separate article, in particular in connection with the possibility of clusterization in weakly bound neutron-rich systems.

The authors are grateful to B. V. Danilin, D. Guillemaud-Mueller and M. V. Zhukov for valuable discussions.
no-problem/9909/cond-mat9909007.html
# Zero-field spin splitting in InAs-AlSb quantum wells revisited

## Abstract

We present magnetotransport experiments on high-quality InAs-AlSb quantum wells that show a perfectly clean single-period Shubnikov-de Haas oscillation down to very low magnetic fields. In contrast to theoretical expectations based on an asymmetry-induced zero-field spin splitting, no beating effect is observed. The carrier density has been changed by the persistent photoconductivity effect as well as via the application of hydrostatic pressure in order to influence the electric field at the interface of the electron gas. Still no indication of spin splitting at zero magnetic field was observed, in spite of highly resolved Shubnikov-de Haas oscillations up to filling factors of 200. This surprising and unexpected result is discussed in view of other recently published data.

While charge transport in two-dimensional electron gases (2DEG) is fairly well understood, many open experimental and theoretical questions related to the spin of the electrons remain. Several proposals have addressed the possibility of spin transistors, the detection of Berry’s phase, or spin filters in 2DEGs. The standard 2DEG which is embedded in AlGaAs-GaAs heterostructures is most likely not the optimal candidate for such investigations, since spin effects as well as spin-orbit interactions are small perturbations compared to other effects. This has brought InAs-based material systems into focus, where the electrons reside in an InAs well between AlSb or GaSb barriers. The unique advantage of this material system in this context is the large $`g`$-factor, up to $`|g|=15`$, and the possibility of large spin-orbit interactions.

Several experiments in different material systems have revealed a beating of low-field Shubnikov-de Haas (SdH) oscillations. In the literature, these observations have been interpreted as manifestations of spin-orbit interactions in asymmetric quantum wells . Especially InAs-based systems are expected to lead to large spin-orbit interactions. However, Heida et al. found several inconsistencies with theoretical expectations. The size of the spin splitting was different for samples from different parts of the wafer, and the spin splitting did not depend on the electric field as tuned by a front gate voltage.

In the present paper, we follow up on this question and report additional inconsistencies, even stronger than those found by . We have conducted SdH studies on many InAs-AlSb quantum wells grown by molecular beam epitaxy. In this paper we focus on samples from four different wafers, grown by three different individuals, at different times over a 5-year period, with different $`known`$ asymmetries. We find the following: (a) Tested in the dark, none of the samples shows any SdH beats. (b) Under some conditions, beats can be introduced by illumination (persistent photoconductivity). (c) The beats under (b) are strongly sample-size-dependent; they appear only in fairly large samples, suggesting an essential role of spatial non-uniformities. With regard to (a), earlier magnetoresistance data by Hopkins et al. on samples similar to ours also did not show any SdH beats. However, at the time, no particular note was taken of this absence, and the matter was not pursued.

All samples contained 15 nm-wide InAs quantum wells, confined by $`AlSb`$ or $`Al_xGa_{1-x}Sb`$ ($`x\approx 0.8`$) barriers. The sample details are summarized in Table 1. The shutter sequence was designed to enforce InSb-like interfaces .
All growths were on semi-insulating GaAs substrates. To accommodate the $`7\%`$ lattice mismatch between InAs and GaAs, thick ($`1\mu m`$) GaSb buffer layers were grown, including a GaSb/AlSb superlattice "smoothing" section . All growths were terminated in a thin (typically $`5nm`$) cap layer of either GaSb or InAs. The nature of the cap, and its (intentional) separation from the well via additional electrically inactive spacer layers, play an essential role in determining the electron sheet concentration of the well. It is known that the GaSb surface (but apparently not InAs) contains a very high concentration of donor-like surface states, at an energy sufficiently high to drain electrons into the well . For samples grown under otherwise identical conditions, the resulting transferred electron concentration decreases with increasing well-to-surface distance. In samples 2 and 4, these surface states are the dominant source of electrons; neither sample contained any intentional doping. In sample 1, with a much deeper well, this contribution is small; here the dominant electron source is a Te delta-doping donor sheet embedded into the top AlSb barrier; this is the only sample with intentionally added donors. Sample 3 has an InAs cap; the electrons in this case are believed to be contributed by donor-like interface states at one or both of the well interfaces, or interface-related bulk defects in the AlSb barrier; their concentration is in good agreement with the values reported by Nguyen et al. . It is not known how this interface doping is distributed over both interfaces, but it is unlikely that the distribution is a symmetrical one.

The samples were patterned into Hall geometries of $`100\mu `$m width by wet chemical etching. Voltage probes are placed at several locations along the current path, to probe different regions along the sample length. Ohmic contacts to the 2DEGs were obtained by alloying AuGe/Ni contacts. A magnetic field was applied perpendicular to the sample surface.

The magnetoresistance of the 4 samples at 1.7 K is displayed in Fig. 1. We have measured the samples at temperatures down to 100 mK and found no significant improvement of the SdH oscillations, in agreement with expectations based on estimates of the Landau level broadening. Oscillations can be resolved down to magnetic fields of 0.15 T and filling factors up to 200. All observed features can be analyzed with one single SdH period with very high accuracy. From the largest filling factors that we can observe, we estimate the Landau level width to be about 0.4 meV.

An expected zero-field spin splitting should depend on the effective electric field across the quantum well. Since we found it difficult to fabricate reliably functioning gates, we varied the carrier density, and with it the effective electric field in the 2DEG, via the persistent photoconductivity effect . We used a red LED to illuminate the sample. Since we estimate the effective electric field to be largest in samples 1 and 4, we focus the following discussion on these samples. Figure 2 displays magnetoresistance traces obtained on sample 1 for three different carrier densities tuned via illumination with light. The data was taken after the light was switched off and the carrier density was stable as a function of time. The Drude scattering time $`\tau _D`$ as obtained from the resistivity at B=0, as well as the quantum scattering time $`\tau _q`$ from the magnetic field dependence of the SdH amplitude, are also given for each resistance trace.
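A back-of-the-envelope check of the numbers just quoted (our sketch, not part of the paper's analysis): resolving SdH oscillations down to $`B\approx 0.15`$ T at filling factor $`\nu \approx 200`$ fixes the sheet density through $`\nu =n_sh/(eB)`$.

```python
e, h = 1.602e-19, 6.626e-34      # C, J s
B, nu = 0.15, 200                # T, filling factor quoted in the text
n_s = nu * e * B / h             # sheet density implied by nu = n_s h / (e B)
print(f"n_s ~ {n_s:.2e} m^-2")   # ~7e15 m^-2, a typical value for such wells
```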
The electron density in InAs quantum wells can also be changed by hydrostatic pressure . We reduced the carrier density in sample No. 4 by almost a factor of two via application of pressure up to $`p=1`$ GPa and again did not find any beating pattern in the low-field SdH oscillations (not shown). However, in some samples, in which the carrier density could be tuned with light, we found a beating pattern right after the illumination. Usually, after waiting for some time of the order of an hour, the beating pattern was gone. In a few cases the beating pattern remained constant on the time scales of the experiment. Figure 3 shows resistance traces for sample 1 after the sample has been illuminated with an infrared LED and then kept in the dark for more than 24 hours. In this stage the resistivity of the sample changed by less than $`10^{-3}`$ per hour. The magnetoresistance across two voltage probes separated by 1 mm clearly displays a weak beating pattern. A measurement taken on the same sample at the same time for voltage probes separated by only 200 $`\mu `$m shows a perfectly single-period SdH pattern. Upon further illumination the beating pattern disappeared. We can observe such effects very rarely and only for special voltage contacts and illumination doses.

There seems to be at least qualitative agreement between experiment and theory on InAs wells with GaSb barriers and other material systems . Our data obtained on InAs quantum wells with AlGaSb barriers with a large Al content, as well as the data by Hopkins et al. indicating the absence of SdH beating within the experimental resolution, cannot be explained within this framework. The magnitude of the spin splitting according to the theory of Rashba et al. should depend on the effective electric field across the quantum well. In the following we estimate the value of this effective electric field for our quantum wells. Both the surface states and any Te doping of the top barrier will introduce a strong transverse electric field into the wells, pointing towards the substrate side. If there were no other doping sources present, the field at the top of the well would be given by $`eN_s/ϵ`$, where $`N_s`$ is the electron sheet concentration, and $`ϵ`$ the InAs permittivity. The field would decay to zero at the bottom interface, implying an average field of approximately $`E=eN_s/2ϵ`$. The background bulk doping in the InAs itself is negligible compared to the measured concentrations. However, part of the electron concentration in all samples is due to interface donors, and in sample 3 this is the only known source. If we assume that this contribution is symmetrical and has the same value in all samples, $`4.5\times 10^{15}\,\mathrm{m}^{-2}`$, we must subtract this value from the measured $`N_s`$. The fields obtained in this way are given in the last row of Table I. If the interface donors were unsymmetrically distributed, the values in the Table would have to be adjusted by an amount depending on the magnitude and sign of the asymmetry, maximally $`\pm 3.5\times 10^6\,\mathrm{V/m}`$, but probably much less. With the possible exception of sample 3, all samples have large built-in asymmetries, with transverse electric fields estimated to range from $`6.6\times 10^6\,\mathrm{V/m}`$ to $`5.0\times 10^6\,\mathrm{V/m}`$ for samples 4 and 1, down to nominally zero for sample 3. The uncertainties on these estimates are of order $`\pm 1\times 10^6\,\mathrm{V/m}`$, i.e. small compared to the range of values. It is extraordinarily unlikely that accidental effects would compensate the different asymmetries in all samples.
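The average-field estimate just described is easy to reproduce. In the sketch below the sheet densities are placeholders (Table 1 is not reproduced here), and $`ϵ_r\approx 15`$ for InAs is a literature value rather than a number quoted in the text, so the printed fields only illustrate the $`10^6`$ V/m scale discussed above.

```python
e, eps0, eps_r = 1.602e-19, 8.854e-12, 15.0   # C, F/m, InAs (literature value)
N_sym = 4.5e15                     # m^-2, symmetric interface-donor part (text)
for label, N_s in [("sample A", 1.3e16), ("sample B", 8.0e15)]:  # placeholders
    E = e * (N_s - N_sym) / (2 * eps0 * eps_r)   # E = e (N_s - N_sym) / (2 eps)
    print(f"{label}: E ~ {E:.1e} V/m")           # ~10^6 V/m scale
```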
The absence of the SdH beating in our samples as well as in those of Hopkins et al. suggests a more fundamental suppression mechanism, somehow associated with InAs/AlSb wells, but absent in GaAs/(Al,Ga)As wells, and even in InAs/GaSb wells. The data of Heida et al. appear to contradict this hypothesis, but it may be important that even their work indicates significant discrepancies between experiment and theory. A Fourier transform of the SdH pattern of sample 1 indicates a resolution of our experiment of better than 1 meV for the possible detection of a beating phenomenon. This limit is comparable with the one obtained from the width of the Landau levels. We have self-consistently calculated the conduction band profile and wave function based on the sample parameters and then calculated the expected spin splitting using Rashba's theory . We found a value of about 5 meV, in agreement with Refs. .

Let us now return to the light-induced beating pattern as displayed in Fig. 3. As light changes the carrier density, it also changes the effective electric field across the well. If this were the underlying reason for the observed beating pattern, one would expect that the beating pattern is present without light, disappears at some dose of light as the potential well becomes symmetric, and then appears again once the asymmetry points in the other direction. In our case, if we observe this feature at all in an experiment, the beating pattern is only present for a certain dose of light; it is absent for lower and higher carrier densities. These observations strongly hint at the fact that in our samples a beating pattern in the low-field SdH oscillations does not stem from an asymmetry-induced Rashba-type interaction.

In the following we argue that the observed SdH beating pattern in Fig. 3 arises from an inhomogeneous carrier distribution induced by the illumination. The light is not distributed homogeneously along the Hall geometry and might therefore lead to an inhomogeneous carrier distribution. If a reasonable number of areas of different carrier density occur along the current path of the Hall geometry, this could lead to a beating pattern of the low-field SdH oscillations. After the carriers have had enough time to relax back to thermal equilibrium, the inhomogeneities, and with them the beating pattern, disappear. The time scales of the non-persistent photoconductivity effect are of the order of hours and are consistent with the disappearance of the beating pattern. The importance of sample inhomogeneities obviously depends on the length scale of the experiment. The data in Fig. 3 suggest that over short length scales, in this case 200 $`\mu `$m, the sample is homogeneous within the experimental resolution and therefore displays single-period SdH oscillations. For the larger length scale of 1 mm, the beating pattern is experimentally observed. We find roughly 21 oscillations between two nodes of the beating. If interpreted in terms of sample inhomogeneities, this leads to a value of $`\mathrm{\Delta }N_s/N_s\approx 5\%`$, which is not an unreasonable number.

While we do not question the valid interpretation of other experiments in terms of the Rashba-type spin-orbit splitting, our experimental results cannot be explained within this framework. It is not clear why in our InAs-AlSb quantum wells the low-field SdH beating cannot be observed. We do not know why our samples behave differently compared to Ref. , but we would like to stress that our sample quality is higher in terms of scattering times and electron mobilities.
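The 5% estimate follows from counting carrier oscillations between beat nodes: two regions with densities $`N_s`$ and $`N_s+\mathrm{\Delta }N_s`$ contribute SdH frequencies (in $`1/B`$) of $`f=hN_s/(2e)`$ each, and their superposition shows roughly $`\overline{f}/\mathrm{\Delta }f`$ oscillations between adjacent envelope nodes. A short sketch (ours, with an assumed mean density):

```python
e, h = 1.602e-19, 6.626e-34
N_s = 7.0e15                     # m^-2, assumed mean density
dN = N_s / 21                    # contrast that yields ~21 oscillations/node
f1 = h * N_s / (2 * e)           # SdH frequency in 1/B (spin-degenerate 2DEG)
f2 = h * (N_s + dN) / (2 * e)
n_between_nodes = 0.5 * (f1 + f2) / (f2 - f1)   # mean frequency / difference
print(f"{n_between_nodes:.1f} oscillations, dN/N = {dN/N_s:.1%}")  # ~21, ~5%
```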
We do not expect to observe Berry phase-type effects in our samples induced by strong Rashba-type spin-orbit interaction. From Fig. 1 it is obvious that spin splitting of SdH oscillations can be observed at magnetic fields as low as $`B=1.5`$ T. The magnitude of the g-factor in our quantum wells can be determined by temperature-dependent measurements or via experiments where the magnetic field is tilted with respect to the sample surface. We find in both cases values for the g-factor of $`|g|\approx 12-15`$ . This makes InAs-AlSb quantum wells promising candidates for spin-related experiments. The fact that we do not observe a beating of the low-field SdH oscillations comes as a complete surprise. While spin-orbit interaction in general could still play a substantial role in these systems, the contribution of the quantum well inversion asymmetry to it is likely to be small. This, however, could be an advantage for the possible realization of coupled spin states in quantum dots. We are grateful to T. Heinzel and S. Ulloa for helpful discussions and thank ETH Zürich and QUEST for financial support.
no-problem/9909/hep-lat9909126.html
# The Isgur-Wise function from the lattice

Edinburgh 99/13, September 1999

## 1 Introduction

The calculation of matrix elements corresponding to the semi-leptonic decay of heavy mesons is crucial to the extraction of elements of the CKM matrix. We present results of a lattice study of the decay between heavy-light pseudo-scalar mesons. The vector current matrix element can be parametrised in terms of two form factors

$$\frac{\langle D(v^{\prime })|V^\mu |B(v)\rangle }{\sqrt{M_BM_D}}=(v+v^{\prime })^\mu h_+(\omega ,m_Q,m_{Q^{\prime }})+(v-v^{\prime })^\mu h_-(\omega ,m_Q,m_{Q^{\prime }})$$ (1)

where $`v`$ and $`v^{\prime }`$ are the meson four-velocities and $`\omega =v\cdot v^{\prime }`$. In the heavy quark limit, heavy quark symmetry reduces the two form factors to a single function of the recoil, $`\xi (\omega )`$, the Isgur-Wise function . For finite quark mass, there exist two sources of symmetry breaking: the exchange of hard gluons, allowing resolution of the heavy quark dynamics by the light degrees of freedom, and modifications to the current arising from higher-dimensional operators in HQET. The form factors are related to the Isgur-Wise function by

$$h_+(\omega )=\left[1+\beta _+(\omega )+\gamma _+(\omega )\right]\xi (\omega )$$
$$h_-(\omega )=\left[\beta _-(\omega )+\gamma _-(\omega )\right]\xi (\omega )$$ (2)

where $`\beta _\pm `$ and $`\gamma _\pm `$ are the radiative and power corrections respectively.

## 2 Calculation Details

The simulation was performed on quenched lattices generated using the Wilson gauge action. The quark propagators were generated from an $`𝒪(a)`$ improved Sheikholeslami-Wohlert action with the coefficient $`c_{\mathrm{sw}}`$ determined non-perturbatively . The details are summarised in table 1. At $`\beta =6.0`$ and $`\beta =6.2`$ we used 305 and 216 configurations respectively. The inverse lattice spacing as determined from the string tension is $`a^{-1}=1.89`$ GeV for $`\beta =6.0`$ and $`a^{-1}=2.64`$ GeV for $`\beta =6.2`$. The four heavy and three light hopping parameters correspond to the regions of charm and strange quark masses respectively. The matrix element in equation 1 is obtained from the large Euclidean time behaviour of the ratio of the three-point function over the two meson correlators. The improvement prescription is completed with the modification of the vector current. Under the non-perturbative scheme, the renormalisation of the vector current is given by

$$V^\mu =Z_{\mathrm{eff}}^\mathrm{v}\left(V_{\mathrm{latt}}^\mu +\frac{ac^\mathrm{v}}{2}[\partial _\nu ^{*}+\partial _\nu ]\mathrm{\Sigma }^{\mu \nu }\right)$$ (3)

where $`Z_{\mathrm{eff}}^\mathrm{v}=Z^\mathrm{v}(1+b^\mathrm{v}am_\mathrm{q})`$ and the improvement coefficients are all known non-perturbatively.

## 3 Results for $`Z_{\mathrm{eff}}^\mathrm{v}`$

At zero recoil, the Isgur-Wise function is normalised to one. This condition allows an estimate of the renormalisation constant for the vector current. Hence equation 2 and the normalisation of $`\xi (\omega )`$ lead to

$$Z_{\mathrm{eff}}^\mathrm{v}h_+^{\mathrm{latt}}(1)=h_+(1)=\left[1+\beta _+(1)+𝒪\left(\frac{1}{m_\mathrm{Q}^2}\right)\right]$$ (4)

where, due to Luke’s theorem , the power corrections to $`h_+(\omega )`$ are suppressed at $`𝒪(\frac{1}{m_\mathrm{Q}})`$. The radiative corrections are perturbative, and hence calculable, and are obtained using Neubert’s prescription . The results for $`Z_{\mathrm{eff}}^\mathrm{v}`$ for a fixed final heavy quark $`\kappa _\mathrm{A}`$ and spectator quark $`\kappa _\mathrm{P}`$ mass are shown in figure 1.
The agreement with the ALPHA-determined values is excellent, and thus we conclude that discretisation errors are small.

## 4 Heavy Quark Scaling

An Isgur-Wise function may be defined as

$$\xi (\omega )=\frac{h_+(\omega )}{1+\beta _+(\omega )}$$ (5)

To test the heavy quark dependence of the form factors, degenerate transitions are fitted to the Neubert-Rieckert parametrisation of the Isgur-Wise function,

$$\xi (\omega )=\frac{2}{\omega +1}\mathrm{exp}\left[(2\rho ^2-1)\frac{1-\omega }{1+\omega }\right]$$ (6)

The results are shown in figure 2. For both datasets, the data lie on the same curve. Independent fitting for each transition shows there is no dependence on heavy quark mass.

## 5 Slope of $`\xi (\omega )`$

The Isgur-Wise function corresponding to the physical decays, $`\overline{B}_\mathrm{s}\to D_\mathrm{s}l\overline{\nu }`$ and $`\overline{B}\to Dl\overline{\nu }`$, may be obtained from the extrapolation of the light spectator anti-quark to the strange and chiral limits respectively. With only two values for the spectator, a linear fit is performed in the improved bare quark mass. From an analysis of the meson correlators it is found that at $`\beta =6.0`$, $`\kappa _c=0.13525`$ and $`\kappa _s=0.13400`$, while at $`\beta =6.2`$, $`\kappa _c=0.13583`$ and $`\kappa _s=0.13493`$. The results of the extrapolations are shown in figures 3 and 4. The slopes for each lattice are in excellent agreement and scaling is observed. The slope exhibits a slight dependence on spectator quark mass.

## 6 Extracting $`|V_{\mathrm{cb}}|`$

The CKM element $`|V_{\mathrm{cb}}|`$ may be extracted by comparing the theoretical prediction to the experimentally measured decay rate. However, $`\overline{B}\to Dl\overline{\nu }`$ is helicity suppressed, and hence the measured rate for $`\overline{B}\to D^{*}l\overline{\nu }`$ is used instead. Ignoring kinematic factors, the rate is given by

$$\frac{d\mathrm{\Gamma }(\overline{B}\to D^{*}l\overline{\nu })}{d\omega }\propto \left[1+\beta ^{\mathrm{A}_1}(1)\right]^2|V_{cb}|^2K(\omega )\xi _{\mathrm{u},\mathrm{d}}^2(\omega )$$ (7)

where $`K(\omega )`$ is a collection of radiative and power corrections. Making the assumption $`K(\omega )=1`$ away from the limit of exact heavy quark symmetry, $`|V_{\mathrm{cb}}|`$ is extracted by comparing the experimental results from ALEPH with our lattice results for $`\xi (\omega )`$. Hence we find, using Neubert’s result for the radiative correction ($`\beta ^{A_1}(1)=0.01`$),

$$|V_{\mathrm{cb}}|_{\beta =6.0}=0.038_{-1-0-2}^{+2+1+2}$$
$$|V_{\mathrm{cb}}|_{\beta =6.2}=0.038_{-2-1-2}^{+3+1+2}$$ (8)

where the first error is statistical, the second is systematic (obtained from analysing the slope of $`\xi (\omega )`$ for different momentum channels) and the third is experimental. These values are consistent with the world average ($`|V_{\mathrm{cb}}|=0.0395_{-17}^{+17}`$).

## 7 Acknowledgements

I would like to acknowledge the support of a PPARC studentship and EPSRC grant GR/K41663.
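For reference, the parametrisation of eq. (6) is straightforward to evaluate numerically; the sketch below uses an illustrative slope parameter $`\rho ^2`$, not the fitted lattice value.

```python
import numpy as np

def xi(omega, rho2):
    """Neubert-Rieckert parametrisation, eq. (6); xi(1) = 1 by construction."""
    return 2.0/(omega + 1.0) * np.exp((2.0*rho2 - 1.0)*(1.0 - omega)/(1.0 + omega))

omega = np.linspace(1.0, 1.2, 5)
print(np.round(xi(omega, rho2=1.2), 4))   # the slope at omega = 1 is -rho2
```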
no-problem/9909/quant-ph9909034.html
# Explanation of Quantum Mechanics

## Abstract

By assuming that the kinetic energy, potential energy, momentum, and some other physical quantities of a particle exist in the field outside the particle, the Schrödinger equation is an equation describing the field of a particle, not the particle itself.

That there are energy and momentum in the electromagnetic field is an indisputable fact. According to the Maxwell equations, the densities of electromagnetic energy and momentum in vacuum are, respectively,

$$W=\frac{1}{2}\left(\epsilon _0E^2+\frac{1}{\mu _0}B^2\right)$$ (1)

$$\mathbf{p}=\epsilon _0\mathbf{E}\times \mathbf{B}$$ (2)

This means the energy and momentum of the electromagnetic field exist in the whole space. The Coulomb interaction of charges can be described in terms of the field, too. Assume the distance between two charges $`Q_1,Q_2`$ is $`\stackrel{}{r}`$; hence

$$\mathbf{F}=-\mathbf{\nabla }U=-\mathbf{\nabla }\int \frac{1}{2}\epsilon _0E^2\,d\tau =-\mathbf{\nabla }\int \frac{1}{2}\epsilon _0\mathbf{E}_1\cdot \mathbf{E}_2\,d\tau =\frac{1}{4\pi \epsilon _0}Q_1Q_2\frac{\stackrel{}{r}}{r^3}$$

Note that the potential energy $`U`$ is a spatial integration; the energy is distributed over the whole space. The energy in any volume element $`d\tau `$ that contributes to the potential energy is $`\frac{1}{2}\epsilon _0\mathbf{E}_1\cdot \mathbf{E}_2`$, where $`E_1,E_2`$ are the electric amplitudes of $`Q_1,Q_2`$, respectively.

Since the electromagnetic energy, momentum, and potential energy of a particle exist in the field, we make a crazy, but reasonable, assumption: the kinetic energy, momentum, potential energy, and some other physical quantities of a particle are carried by the (real) field outside the particle (the energy is distributed in space), not by the particle itself. Hence, if we know the equation of motion of the (real) field outside a particle, we can calculate the kinetic energy, momentum, potential energy, and some other physical quantities of the particle. So we should find the equation of the field. The easiest and most natural way is to assume that the Schrödinger equation $`i\hbar \frac{\partial }{\partial t}\psi =H\psi `$ is the equation describing the motion of the field. In classical physics, the equation $`\frac{\partial }{\partial t}\psi =\alpha \nabla ^2\psi `$ can be used as the heat-field equation, so the assumption that the Schrödinger equation is the equation of a field is reasonable. As to the motion of a particle, we can accept L. de Broglie's idea that the particle moves guided by the wave, since we admit that particle and field are parts of one body.

References

1. Zhen-Qiu Lu, Equations of Classical and Modern Mathematical Physics, Shanghai, 1991.
2. L. de Broglie, La physique quantique restera-t-elle indéterministe? Gauthier-Villars, Paris, 1953.
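As a numerical illustration of eqs. (1) and (2) above (ours, with an arbitrary field amplitude): for a vacuum plane wave with $`B=E/c`$, the momentum density equals $`W/c`$, i.e. the field transports momentum along with its energy.

```python
eps0, c = 8.854e-12, 2.998e8
mu0 = 1.0 / (eps0 * c**2)              # vacuum relation eps0*mu0*c^2 = 1
E = 1.0e3                              # V/m, arbitrary amplitude
B = E / c                              # plane-wave relation
W = 0.5 * (eps0 * E**2 + B**2 / mu0)   # energy density, eq. (1)
p = eps0 * E * B                       # momentum density |eps0 E x B|, eq. (2)
print(W, p * c)                        # equal: |p| = W / c
```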
no-problem/9909/quant-ph9909092.html
# Comment on Identical Motion in Classical and Quantum Mechanics

## Abstract

Makowski and Konkel \[Phys. Rev. A 58, 4975 (1998)\] have obtained certain classes of potentials which lead to identical classical and quantum Hamilton-Jacobi equations. We obtain the most general form of these potentials.

PACS numbers: 03.65.Sq, 03.65.Bz

In their recent paper , Makowski and Konkel study the class of potentials allowing for a quantum potential $`Q`$ which only depends on time,

$$Q=K(t).$$ (1)

Since the quantum effects are related to the quantum force $`-\mathbf{\nabla }Q`$, the corresponding systems have identical classical and quantum dynamics. In the polar representation, where the wave function takes the form $`\mathrm{\Psi }(\stackrel{}{x},t)=R(\stackrel{}{x},t)\mathrm{exp}[i\phi (\stackrel{}{x},t)/\hbar ]`$, the Schrödinger equation is written as

$$\frac{\partial R^2}{\partial t}+\mathbf{\nabla }\cdot \left(R^2\frac{\mathbf{\nabla }\phi }{m}\right)=0,$$ (2)
$$\frac{\partial \phi }{\partial t}+\frac{(\mathbf{\nabla }\phi )^2}{2m}+V+Q=0.$$ (3)

Here $`R`$ and $`\phi `$ are real-valued functions and the quantum potential $`Q`$ is defined by $`Q(\stackrel{}{x},t):=-\hbar ^2\nabla ^2R/(2mR)`$. The classification of all the potentials with the above-mentioned property is equivalent to finding the general solution of Eqs. (1), (2) and (3) for the three real unknowns, $`R,\phi ,`$ and $`V`$. The complete classification for the case $`K=0`$ is given in Ref. , which predates the work of Makowski and Konkel . In Ref. the analogous problem for the Klein-Gordon equation is also addressed. Moreover, a new semiclassical perturbation theory around these potentials is outlined, and its application to quantum cosmology is discussed. The results of Makowski and Konkel can also be generalized to give the complete classification of potentials which allow for identical classical and quantum dynamics in arbitrary dimensions. Following Ref. , we shall refer to these potentials as semiclassical potentials.

As explained by Makowski and Konkel , for the case of stationary states, where $`\partial R/\partial t=0`$ and $`\phi (\stackrel{}{x},t)=-Et+S(\stackrel{}{x})`$, Eq. (2) reduces to

$$\mathbf{\nabla }\cdot (R^2\mathbf{\nabla }S)=0.$$ (4)

Makowski and Konkel obtain a class of solutions of this equation by solving

$$R^2\mathbf{\nabla }S=\mathrm{const}.$$ (5)

Therefore, they restrict their analysis to a set of particular solutions of Eq. (4). This is, however, not necessary. The general solution of Eq. (4) can be easily obtained by making the following change of dependent variable:

$$S\to \stackrel{~}{S}:=RS.$$ (6)

Let us first introduce

$$\lambda :=\frac{\sqrt{2mK}}{\hbar },$$

in terms of which Eq. (1) takes the form

$$\nabla ^2R+\lambda ^2R=0.$$ (7)

Now, substituting $`S=\stackrel{~}{S}/R`$ in (4) and making use of Eq. (7), we obtain

$$\nabla ^2\stackrel{~}{S}+\lambda ^2\stackrel{~}{S}=0.$$ (8)

Therefore, in view of Eqs. (3), (6), and (7), the most general potential allowing for identical classical and quantum dynamics for a stationary state is given by

$$V=E-\frac{1}{2m}\left\{(\hbar \lambda )^2+\left[\mathbf{\nabla }\left(\frac{\stackrel{~}{S}}{R}\right)\right]^2\right\},$$ (9)

where $`R`$ and $`\stackrel{~}{S}`$ are solutions of Eqs. (7) and (8), respectively. Note that Eq. (9) is valid in any number of dimensions.

More generally, we can use the analog of the change of variable (6), namely

$$\phi \to \stackrel{~}{\phi }:=R\phi ,$$ (10)

to handle the general problem, where the wave function $`\mathrm{\Psi }`$ does not represent a stationary state. In this case, Eq. (7) still holds, but $`\lambda `$ is a function of time. Using (10) and (7), Eqs.
(2) and (3) take the form

$$\nabla ^2\stackrel{~}{\phi }+\lambda ^2\stackrel{~}{\phi }=-2m\frac{\partial R}{\partial t},$$ (11)
$$V=-\frac{\partial }{\partial t}\left(\frac{\stackrel{~}{\phi }}{R}\right)-\frac{1}{2m}\left\{(\hbar \lambda )^2+\left[\mathbf{\nabla }\left(\frac{\stackrel{~}{\phi }}{R}\right)\right]^2\right\}.$$ (12)

Therefore the set of all the semiclassical potentials is classified by the solutions of Eqs. (7) and (11). Note that these equations are not evolution equations. Eq. (7) may be viewed as a constraint equation in which $`t`$ enters as a parameter, through the dependence of $`\lambda `$ on $`t`$. Once the boundary conditions of this equation are chosen, it can be solved using the known methods of solving linear partial differential equations with ‘constant’ coefficients. The solution is then used to evaluate the right-hand side of Eq. (11). The latter is a nonhomogeneous linear partial differential equation. It can be solved using the well-known Green’s function methods. Again it is the boundary conditions that determine the solution. The above analysis shows that it is the choice of the function $`K`$, or alternatively $`\lambda `$, together with the boundary conditions of Eqs. (7) and (11), that determines the set of potentials which allow for identical classical and quantum dynamics.

We wish to conclude this article with the following remarks.

* Makowski and Konkel conclude their paper emphasizing that “A number of additional potentials would be found if new solutions of Eq. (2a) \[This is our Eq. (2)\] were obtained. This, however, can be a difficult task.” We have shown that a simple change of the dependent variable $`\phi `$, namely $`\phi \to \stackrel{~}{\phi }:=R\phi `$, eases this ‘difficulty’ and leads to a complete classification of all such potentials.

* Following the above remark, Makowski and Konkel write: “Among the potentials derived here we have not found any example from the known set of potentials implying bound states, e.g., Coulomb, Morse, or Pöschl-Teller. This likely follows from the fact that most stationary states of physical interest have no classical limit.” In this connection, we must emphasize that for the semiclassical potentials leading to identical classical and quantum dynamics, the amplitude $`R:=|\mathrm{\Psi }|`$ of the wave function of a stationary state satisfies Eq. (7). This is just the eigenvalue equation for the Laplacian in $`\mathbb{R}^n`$. It is well known that this equation does not admit a solution corresponding to a bound state.

* It is well known that shifting the Hamiltonian $`H`$ by a time-dependent multiple of the identity operator, i.e.,

$$H\to H^{\prime }=H+f(t),$$ (13)

leaves all the physical quantities of the system invariant . This is in fact true both in quantum and classical mechanics. In quantum mechanics, such a transformation corresponds to a phase transformation

$$\mathrm{\Psi }\to \mathrm{\Psi }^{\prime }=e^{-i\zeta (t)/\hbar }\mathrm{\Psi }$$ (14)

of the Hilbert space, where $`\zeta (t):=\int _0^tf(t^{\prime })dt^{\prime }`$. Therefore, it leaves the quantum states, the expectation values of the observables, and the excitation energies invariant. It also leaves the quantum potential invariant. But it does change the classical potential according to

$$V\to V^{\prime }=V+f(t).$$ (15)

In fact, the effect of such a transformation on Eqs. (2) and (3) is the shift (15) of the potential. Now consider the case that the quantum potential is a function of time, $`Q=K(t)`$.
Then we can make the phase transformation (14) of the Hilbert space with $`\zeta (t)=-\int _0^tK(t^{\prime })dt^{\prime }`$, so that $`f=-K`$ and the total potential $`V+Q=V+K`$ in Eq. (3) is transformed to $`V^{\prime }+Q=V-K+K=V`$. Therefore, the dynamics of a state with a time-dependent quantum potential and a state with a zero quantum potential are equivalent. In general, we can make a phase transformation of the Hilbert space which effectively removes such a quantum potential. Therefore, as far as the physical quantities are concerned, the case $`Q=K(t)`$ is equivalent to the case $`Q=0`$. The latter has been thoroughly studied in Ref. .
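A one-dimensional sanity check of the stationary construction above (our sketch, using sympy): taking $`R`$ and $`\stackrel{~}{S}`$ as solutions of the Helmholtz equations (7) and (8), the quantum potential built from $`R`$ is indeed the constant $`\hbar ^2\lambda ^2/(2m)`$.

```python
import sympy as sp

x, lam, m, hbar = sp.symbols('x lam m hbar', positive=True)
R  = sp.cos(lam*x)               # solves R'' + lam^2 R = 0, eq. (7)
St = sp.sin(lam*x)               # solves the same equation, eq. (8)
Q = sp.simplify(-hbar**2 * sp.diff(R, x, 2) / (2*m*R))
print(Q)                         # hbar**2*lam**2/(2*m): constant, as in eq. (1)
S = sp.simplify(St / R)          # the phase S = S~/R, here tan(lam*x)
print(S)
```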
no-problem/9909/astro-ph9909397.html
# Arc Statistics in Clusters: Galaxy Contribution

## 1 Introduction

The discovery of long arcs in clusters of galaxies (Lynds & Petrosian (1986); Soucail et al. (1987)) offered the prospect of using their observed frequency as a tool to test cosmological models, following the paradigm of the quasar lensing frequency studies set forth by Turner, Ostriker, & Gott (1984). Wu & Mao (1996) were the first to carry out such a study, in order to gauge the influence of a cosmological constant on the observed frequency of arcs in a homogeneous sample of EMSS clusters (Le Fèvre et al. (1994), Luppino et al. (1999)). The main conclusion of Wu & Mao (1996) was that in a spatially flat, low-density universe ($`\mathrm{\Omega }_m=0.3`$) one would observe about twice as many arcs as in an Einstein-de Sitter universe, but still only about half the observed number of arcs. The discrepancy with the observed number was somewhat larger though: observational restrictions reduce the number considerably (Hattori, Watanabe, & Yamashita (1997)), but source evolution increases the number (Hamana & Futamase (1997)). A more recent study by Cooray (1999) found the number to be in agreement with observations for a low-density universe (open or flat), if a minimum cluster velocity dispersion of $`1080`$ km/s is assumed. At least 5 of the eight clusters with confirmed arcs, though, have dispersions below this (see Luppino et al. (1999); and references therein).

In these studies a given cosmological model enters mostly via the geometry of space-time. A recent study by Bartelmann et al. (1998), however, finds the predicted number of arcs to be rather sensitive to the differences in the properties of the clusters predicted in different cosmological models. This study has sharpened the conflict between predictions and observations of long arcs for the low-density flat CDM model. They find that an $`\mathrm{\Omega }_m=0.3`$ open cold dark matter (OCDM) model produces about as many arcs as are observed, but a spatially flat $`\mathrm{\Omega }_m=0.3`$ CDM model ($`\mathrm{\Lambda }`$CDM) produces an order of magnitude fewer, and a standard CDM (SCDM) model two orders of magnitude fewer. Unlike previous studies, the difference in formation epoch and concentration between clusters in different cosmological models was consistently taken into account, and found to be mainly responsible for the drastic differences in their predicted numbers of long arcs. With many independent pieces of evidence indicating that $`\mathrm{\Lambda }`$CDM is the only concordant cosmological model (e.g., Bahcall et al. 1999), it is rather surprising that such a model fails the arc number test so drastically. Clearly, a close examination of possible sources of uncertainty is warranted.

One possible source of enhancement in the observed number of arcs is the contribution of the cluster galaxies to the creation of giant arcs. Previous studies (Wu & Mao (1996); Hattori et al. 1997; Hamana & Futamase (1997); Cooray (1999)) have treated clusters as smooth mass distributions. The clusters in the dissipationless N-body simulations studied by Bartelmann et al. (1998) have significant substructure, but they could not resolve galaxies. It is therefore desirable to study the magnitude of the effect galaxies would have on the arc abundance in clusters. Including galaxies in cluster lensing studies is not new (see e.g. Grossman & Narayan (1988)). Their effect in deep, high-resolution studies of individual clusters, e.g. in A2218 (Kneib et al.
(1996)), AC114 (Natarajan et al. (1998)) and A370 (Bézecourt et al. (1999)), has been found to be significant indeed. Here we quantify their effect on arc statistics by calculating the ratio, $`1+\mathrm{\Delta }`$, of the cross section to produce long arcs<sup>1</sup> when cluster galaxies are included to the cross section when they are not. Of course, the comparison is to be made while keeping the projected mass in the field of view fixed. We find that the results for $`\mathrm{\Delta }`$ are surprisingly small, typically less than 15% (i.e. $`\mathrm{\Delta }\lesssim 0.15`$), although there is considerable scatter. The scatter can easily be reduced by averaging over 10 or so clusters.

<sup>1</sup> We concentrate only on arcs with length-to-width ratio $`\ge 10`$ and length $`\ge 8''`$, the criteria of the search in X-ray clusters (Luppino et al. (1999)).

We describe the gravitational lensing model we have used to calculate the arc cross section in section 2. We then explore the observational constraints on the various parameters of the model in section 3. Finally, we present our results in section 4 and conclude with a discussion of our results and the conclusions we draw from them in section 5, where we also comment on the recent work of Meneghetti et al. (1999) on the same problem. We use throughout a Hubble constant $`H_0=100h`$ km/s/Mpc.

## 2 Gravitational Lensing Model

We model the main cluster mass distribution (dark matter plus the hot intracluster gas) using the standard Pseudo Isothermal Elliptical Mass Distribution (PIEMD), for which the bending angle components at position $`(x,y)`$ relative to the cluster center are given by (Kassiola & Kovner (1993); Keeton & Kochanek (1998))

$$\theta _x=\frac{b}{(1-q^2)^{1/2}}\mathrm{tan}^{-1}\left[\frac{(1-q^2)^{1/2}x}{\psi +r_{core}}\right],$$ (1)
$$\theta _y=\frac{b}{(1-q^2)^{1/2}}\mathrm{tanh}^{-1}\left[\frac{(1-q^2)^{1/2}y}{\psi +q^2r_{core}}\right],$$ (2)

where $`b=4\pi (\sigma _{cl}/c)^2(D_{LS}/D_{OS})(e/\mathrm{sin}^{-1}e)`$ and $`\psi ^2=q^2(x^2+r_{core}^2)+y^2`$. The cluster has line-of-sight velocity dispersion $`\sigma _{cl}`$ and core radius $`r_{core}`$. Its mass distribution has intrinsic and projected axial ratios $`q_3`$ and $`q`$ respectively, and $`e=(1-q_3^2)^{1/2}`$. $`D_{OS}`$ and $`D_{LS}`$ are angular-diameter distances, from the observer to the source and from the lens to the source respectively. We model galaxies as truncated isothermal spheres (see Kneib et al. (1996)), so that the contribution to the bending angle at $`(x,y)`$ of a galaxy at $`(x_g,y_g)`$ is given by

$$\theta _x^g=b_g\left(\frac{r_{cut}}{r_{cut}-r_{core}^g}\right)\left(\frac{x_g-x}{d}\right)\left[\frac{(d^2+r_{core}^{g2})^{1/2}-r_{core}^g}{d}-\frac{(d^2+r_{cut}^2)^{1/2}-r_{cut}}{d}\right],$$ (3)
$$\theta _y^g=b_g\left(\frac{r_{cut}}{r_{cut}-r_{core}^g}\right)\left(\frac{y_g-y}{d}\right)\left[\frac{(d^2+r_{core}^{g2})^{1/2}-r_{core}^g}{d}-\frac{(d^2+r_{cut}^2)^{1/2}-r_{cut}}{d}\right],$$ (4)

where $`b_g=4\pi (\sigma _g/c)^2(D_{LS}/D_{OS})`$ and $`d^2=(x_g-x)^2+(y_g-y)^2`$. The galaxy has line-of-sight velocity dispersion $`\sigma _g`$, core radius $`r_{core}^g`$, and truncation radius $`r_{cut}`$.

Figure 1 shows results for a fiducial cluster. The values of the various parameters of the model are explained and justified in section 3, and summarized in Table 1.
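A minimal sketch of the PIEMD deflection of eqs. (1)-(2) follows (ours; the distance ratio $`D_{LS}/D_{OS}`$ and the core radius in arcseconds are illustrative choices, while the remaining numbers are the fiducial ones quoted below).

```python
import numpy as np

def piemd_deflection(x, y, b, q, r_core):
    """PIEMD bending angle of eqs. (1)-(2); x, y, r_core in arcsec."""
    s = np.sqrt(1.0 - q**2)
    psi = np.sqrt(q**2 * (x**2 + r_core**2) + y**2)
    theta_x = (b / s) * np.arctan(s * x / (psi + r_core))
    theta_y = (b / s) * np.arctanh(s * y / (psi + q**2 * r_core))
    return theta_x, theta_y

sigma_cl, c = 1.2e6, 2.998e8          # m/s, fiducial 1200 km/s cluster
Dls_Dos, q3, q = 0.6, 0.75, 0.75      # distance ratio assumed; edge-on, q3 = q
e = np.sqrt(1.0 - q3**2)
b = 4*np.pi*(sigma_cl/c)**2 * Dls_Dos * e/np.arcsin(e)   # radians
b_arcsec = np.degrees(b) * 3600.0                        # ~20 arcsec scale
print(piemd_deflection(10.0, 5.0, b_arcsec, q, r_core=0.5))
```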
The cluster is at redshift $`z_{cl}=0.2`$, seen edge-on with $`q=0.75`$, and has $`\sigma _{cl}=1200`$ km/s and $`r_{core}=1h^{-1}`$ kpc. An Einstein-de Sitter universe and the filled-beam approximation are assumed in the calculation of angular diameter distances. The background cosmological model is not of great importance, since we are here interested in the difference in the lensing cross section of clusters to produce long arcs due to the inclusion of galaxies in the clusters. The top-left panel shows results for a smooth cluster (no galaxies). The shaded area is the region behind which a circular source at redshift $`z_S=1`$, and of angular radius $`0.5''`$, would be imaged into a long arc farther out in the cluster. The inner (outer) dashed line is the tangential (radial) caustic. The top-right panel shows the same results when the galaxies are taken into account. The total mass inside a $`150''\times 150''`$ field of view centered on the cluster, shown in the bottom panels, has been kept fixed. It can be seen that there is a significant distortion of the tangential caustic, which results in an increased area where the source can be imaged into a long arc. This is shown for the source marked as a filled star (at $`x=13''`$ and $`y=2''`$), whose image can be seen in the bottom right panel. Note there that the counter arc, marked by the arrow, would not be seen given the typical magnitude of a long arc. Also, most arcs that appear only when galaxies are taken into account are not formed on top of galaxies, as noted early on by Grossman & Narayan (1988). The circles mark the positions of the galaxies and have radii chosen to roughly correspond to the size of the galaxies in a deep image. The caustics labeled 1-4 in the top right panel are due to the correspondingly labeled galaxies in the bottom right panel. The bottom left panel shows the critical curves corresponding to the caustics in the top left (top right) panel as dashed (solid) lines. The outer (tangential) dashed critical curve of the smooth cluster is repeated in the bottom right panel. In general, the galaxies that most distort and enlarge the shaded region in the top left panel are galaxies close to this critical curve.

## 3 Model Parameters

In order to study the properties of images created by the cluster lens model, we need to specify all the parameters involved. Here we explain our choices based on what is known about clusters and cluster galaxies. The sample of clusters searched for long arcs is selected by X-ray flux (strictly speaking, by central surface brightness; see the discussion in Luppino et al. (1999)). This is expected to select very massive clusters, given the known correlation of X-ray luminosity with $`\sigma _{cl}`$ ($`L_x\propto \sigma _{cl}^4`$) first established by Solinger & Tucker (1972). None of the EMSS clusters with $`L_x<4\times 10^{44}`$ erg/s shows any arcs (Luppino et al. (1999)), corresponding to a minimum dispersion $`\sigma _{cl}=784_{-62}^{+68}`$ km/s using the recent analysis of Wu, Xue, & Fang (1999). Indeed, the lowest velocity dispersion of the clusters with arcs is $`\sigma _{cl}\approx 800`$ km/s (see Luppino et al. (1999); and references therein). Therefore we consider here clusters with $`\sigma _{cl}\gtrsim 800`$ km/s.

The shape of clusters is not known observationally. Lens models that use the projected axial ratio $`q_{BCG}`$ of their brightest cluster galaxy (BCG) reproduce rather well the orientation of arcs and arclets in deep, high-resolution studies of clusters (e.g.
Kneib et al. (1996), Natarajan et al. (1998)). Bézecourt et al. (1999) find a somewhat rounder mass distribution to give a better fit. To the extent that $`q_{BCG}`$ is a good guide to $`q`$, the study of Porter, Schneider, & Hoessel (1991) implies $`q\approx 0.6`$. Numerical simulations of clusters find triaxial shapes for galaxy clusters (Thomas et al. (1999)), with a mean minor/major axial ratio of 0.5 for a low-density universe. It is easy to translate their distribution into the distribution for $`q`$ assuming nearly oblate or prolate halos (see Binney (1978)), from which we estimate that $`q\approx 0.5-0.9`$ for most halos, with a median $`q\approx 0.7`$. We use $`q=0.5,0.75`$ and $`0.9`$ as representative values. This range covers the values used in the studies of A370, A2218 and AC114.

Clusters are expected to have density profiles well approximated by the Navarro, Frenk, & White (1997) \[NFW\] profile, $`\rho \propto R^{-1}(R+R_s)^{-2}`$. However, in the radial range of interest here it is the inner profile, where the density distribution changes from $`\rho \propto R^{-1}`$ to $`\rho \propto R^{-2}`$, that really matters. It has been argued (Williams, Navarro, & Bartelmann (1999)) that a cluster with an NFW profile cannot reproduce the angular distance of arcs from their cluster centers (for the dispersions of interest here, $`\sigma _{cl}\approx 800-1400`$ km/s), and the steep inner profile of a BCG is needed. On the other hand, lensing studies of several clusters find that a core radius $`r_{core}\approx 30h^{-1}`$ kpc is needed (e.g. Tyson, Kochanski, & Dell’Antonio (1998), Smail et al. (1996)). Here we use isothermal spheres with $`r_{core}=1`$ or $`30h^{-1}`$ kpc to bracket these results. A pure NFW profile would give results intermediate between these two cases.

There are several parameters that describe the galaxies. First, we use $`r_{core}^g=0.1h^{-1}`$ kpc throughout, in agreement with the constraints from quasar lensing studies (see e.g. Kochanek (1996)). Second, we follow standard practice in lensing studies (see e.g. the discussion in Kochanek (1996)) and assign a velocity dispersion to a galaxy using a Faber-Jackson (Faber & Jackson (1976)) relation $`\sigma _g=\sigma _{}(L/L_{})^{1/\beta }`$. The luminosity $`L`$ is drawn from a Schechter distribution, $`(L/L_{})^\alpha \mathrm{exp}(-L/L_{})`$ (Schechter (1976)). The value of $`\beta `$ ranges from $`\beta \approx 3`$ in the B band to $`\beta \approx 4`$ in the infrared (de Vaucouleurs & Olson (1982)). Here we use $`\beta =3`$; our results do not change much if we use $`\beta =4`$ instead, as we discuss below.

Our next step is to choose the galaxy truncation radius, $`r_{cut}`$. The truncation of galaxy halos inside clusters has been studied numerically by Klypin et al. (1999). An estimate of the size of a halo at distance $`R`$ from a cluster center is given by the tidal radius $`r_t`$. For a galaxy and cluster with $`r_{core}=0`$, $`r_t=(\sigma _g/\sigma _{cl})R`$. Since we do not know the distance $`R`$ for a galaxy at projected separation $`r`$ from the cluster center, we use the average distance along the line of sight, $`R=\int R(r,z)\rho (r,z)dz/\int \rho (r,z)dz`$. The galaxies that contribute the most to the arc cross section are those near the critical curve of the smooth cluster (see Fig. 1). Therefore, we evaluate $`R`$ at the Einstein radius, $`r\approx 60h^{-1}(\sigma _{cl}/1200\,\mathrm{km}\,\mathrm{s}^{-1})^2`$ kpc. For $`\sigma _{cl}\approx 1200`$ km/s, this is comparable to $`R_s`$ for such a cluster (see Thomas et al. (1999)). Thus, we evaluate $`R`$ at projected separation $`r=R_s`$ and obtain $`R\approx 2r`$ for the NFW profile.
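The estimate $`R\approx 2r`$ at $`r=R_s`$ can be checked directly by a short numerical integration (our sketch, in units of $`R_s`$, truncating the line of sight at $`50R_s`$):

```python
import numpy as np

z = np.linspace(0.0, 50.0, 200001)   # line of sight, in units of R_s
r = 1.0                              # projected separation r = R_s
R = np.sqrt(r**2 + z**2)
rho = 1.0 / (R * (1.0 + R)**2)       # NFW profile, rho ~ R^-1 (R + R_s)^-2
R_mean = (R * rho).sum() / rho.sum() # density-weighted <R> on a uniform grid
print(f"<R> ~ {R_mean:.2f} R_s")     # ~2 r, as used in the text
```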
Finally, we compare the rotation curve of a numerical halo (the typical example discussed by Klypin et al. (1999), bottom curve of their Fig. 6) to the model rotation curve of a truncated isothermal sphere, and obtain $`r_{cut}\simeq 3r_t/4`$. We shall take here $`R=100h^{-1}`$ kpc, and use throughout the paper $`r_{cut}=3r_t/4`$. Therefore, $`r_{cut}^{*}=15(\sigma _{*}/230\mathrm{km\ s^{-1}})(\sigma _{cl}/1200\mathrm{km\ s^{-1}})`$ $`h^{-1}`$ kpc. It is interesting to note that this value agrees fairly well with the value inferred from the effect of galaxies on the spatial distribution and orientation of the arcs and arclets in the cluster A2218 (see Kneib et al. (1996)). The scaling of $`r_{cut}`$ with $`\sigma _g`$ implies that for a given cluster $`r_{cut}=r_{cut}^{*}(L/L_{*})^\gamma `$ with $`\gamma =1/\beta =1/3`$. Thus, the scalings of $`\sigma _g`$ and $`r_{cut}`$ with $`L`$ are different from those suggested by Brainerd, Blandford, & Smail (1996) ($`\beta =4`$, $`\gamma =1/2`$), and used in the studies of A370, A2218 and AC114. However, we find the arc cross section to be very similar in either case because the galaxy mass-to-light ratio, $`M/L\propto \sigma _g^2r_{cut}/L`$, is constant in both cases. Some authors have also explored $`\gamma =0.8`$ based on studies of the fundamental plane of ellipticals that suggest $`M/L\propto L^{0.3}`$ (see Natarajan et al. (1998), and Bézecourt et al. (1999)). It has been noted, however, that the fundamental plane can also be interpreted assuming constant $`M/L`$ (Bender, Burstein, & Faber (1992)), so the jury is still out on this question. Finally, we also note that we expect $`r_{cut}^{*}\propto \sigma _{cl}`$ to be smaller for lower-dispersion clusters. This might be part of the reason for the different $`r_{cut}^{*}`$ obtained in the AC114 and A2218 analyses (see Natarajan et al. (1998) and Kneib et al. (1996), respectively).

In order to find the effect of galaxies on the arc cross section we must also choose their characteristic velocity dispersion $`\sigma _{*}`$, how they are distributed inside a cluster, and how many there are. Since galaxies too faint and/or too far from the critical curve do not contribute much to the arc cross section, we find it enough to include galaxies down to 2 mag fainter than $`L_{*}`$ inside an area corresponding to $`150^{\prime \prime }\times 150^{\prime \prime }`$ for a cluster at redshift $`z_{cl}=0.2`$ (see Fig. 1). This will be our fiducial field of view (FOV) at this redshift, and we shall study a region of the same physical size at other redshifts.

We are interested in clusters in the redshift range $`z_{cl}=0.2`$–$`0.6`$, the range in which the arc cross section is large (see Bartelmann et al. (1998)). Smail et al. (1998) have studied a sample of 10 clusters at $`z_{cl}\simeq 0.2`$ with X-ray luminosities in the range of interest here. They find that the surface number density of red galaxies is $`\propto r^{-\delta }`$, with $`\delta =0.96\pm 0.08`$, and that the luminosity distribution is well fit by a Schechter function with $`\alpha =-1.25`$ and $`M_V^{*}=(-20.8\pm 0.1)+5\mathrm{log}h`$. Using $`M_V^{*}=-20.8`$ and $`M_V=-20.35-8.5(\mathrm{log}\sigma _g-2.3)+5\mathrm{log}h`$ (de Vaucouleurs & Olson (1982)), we infer $`\sigma _{*}=226`$ km/s. From their Table 2 we infer a count of $`20`$–$`40`$ galaxies (down to 2.3 mag fainter than $`L_{*}`$) in a $`150^{\prime \prime }\times 150^{\prime \prime }`$ FOV, with a mean of 32.
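As a compact illustration of the truncation prescription just described, the sketch below combines the tidal radius $`r_t=(\sigma _g/\sigma _{cl})R`$, the factor $`3/4`$, and the scaling of $`R`$ with the Einstein radius. It is our reading of the text, not an implementation from the paper; in particular, rescaling $`R`$ with $`(\sigma _{cl}/1200\mathrm{km\ s^{-1}})^2`$ is our interpretation of evaluating $`R`$ at the Einstein radius.

```python
def r_cut(L_over_Lstar, sigma_cl=1200.0, sigma_star=230.0,
          R=100.0, beta=3.0):
    """Galaxy truncation radius in h^-1 kpc: r_cut = (3/4) r_t with
    r_t = (sigma_g / sigma_cl) R, sigma_g from Faber-Jackson, and R
    rescaled with the Einstein radius (R ~ sigma_cl^2)."""
    sigma_g = sigma_star * L_over_Lstar ** (1.0 / beta)
    R_eff = R * (sigma_cl / 1200.0) ** 2   # R evaluated near the Einstein radius
    return 0.75 * (sigma_g / sigma_cl) * R_eff

# an L* galaxy in a sigma_cl = 1200 km/s cluster:
print(r_cut(1.0))   # ~14.4, consistent with r_cut* ~ 15 h^-1 kpc
```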
Furthermore, Smail et al. (1997) have studied a set of 10 clusters in the redshift range $`z_{cl}=0.37`$–$`0.56`$. They find that elliptical galaxies are distributed with $`\delta =0.8\pm 0.1`$ in the radial range of interest here, and their luminosities are well described by a Schechter function with $`\alpha =-1.25`$. Furthermore, we derive a count of $`\sim 29`$ galaxies per cluster in a $`150^{\prime \prime }\times 150^{\prime \prime }`$ FOV for their clusters at $`z_{cl}\simeq 0.4`$ (down to 2 mag fainter than $`L_{*}`$), consistent with a count of $`\sim 20`$ at $`z_{cl}=0.2`$ assuming equal numbers in areas of equal physical size. We find that the same holds, within errors, for their clusters at $`z_{cl}\simeq 0.54`$. The previous results also agree with a homogeneous sample of clusters at low redshift (Lumsden et al. (1997)). We infer a count of $`\sim 17`$ galaxies in our FOV at $`z_{cl}=0.2`$ for the mean of the sample (down to 2 mag fainter than $`L_{*}`$, assuming $`\delta =1`$ and equal numbers in equal areas). However, most of these systems have low velocity dispersions. For the only cluster with $`\sigma _{cl}\simeq 1200`$ km/s, the fit parameters imply $`\sim 30`$ galaxies. The mean $`M_{b_j}^{*}=-20.2`$ implies $`\sigma _{*}=232`$ km/s, assuming a mean color $`b_j-V\simeq 0.7`$. Finally, we also infer similar counts from the detailed study of 7 rich Abell clusters at $`z_{cl}\simeq 0.15`$ by Driver, Couch, & Phillipps (1998). They find $`50`$–$`150`$ galaxies inside a $`280h^{-1}`$ kpc radius (down to 3 mag fainter than $`L_{*}`$; see their fig. 11). Thus, we infer $`19`$–$`57`$ galaxies in our FOV, with a mean of 38 (down to 2 mag fainter than $`L_{*}`$, using their fig. 6 and assuming both $`\delta =1`$ and equal numbers in equal areas).

We shall summarize these observations by adopting $`\sigma _{*}=230`$ km/s and $`\alpha =-1.25`$. We shall assume a universal luminosity function, an adequate assumption for the inner region of rich clusters (see Driver, Couch, & Phillipps (1998)). Finally, we shall take $`\delta =1`$. (Strictly speaking, we distribute the galaxies in projection just like the cluster surface density profile, including flattening and core radius. Therefore, the galaxies trace the dark matter, in agreement with gravitational lensing studies of clusters; see Tyson, Kochanski, & Dell’Antonio (1998), and references therein.)

The choice of the number of galaxies in our FOV is complicated by the fact that it depends on the cluster velocity dispersion (Bahcall (1981)). Girardi et al. (1999) have computed total cluster luminosities within fixed physical radii for a large, homogeneous sample of 89 clusters for which there is also velocity dispersion data. We have analyzed their data for luminosities inside $`0.5h^{-1}`$ Mpc, and find that the data are well fit by a cluster luminosity $`L_{cl}\simeq 6.3(\sigma _{cl}/770\mathrm{km\ s^{-1}})^{1.5}\times 10^{11}h^{-2}L_{\odot }`$. There is, of course, significant scatter. This is believed to be physical, and results from the fundamental plane of clusters (Schaeffer et al. (1993)) seen in projection. We find that 68% of the clusters have luminosities $`(0.67`$–$`1.5)L_{cl}`$. Extrapolating the validity of $`L_{cl}`$ to $`\sigma _{cl}=1200`$ km/s, we infer that there should be $`\sim 37`$ galaxies in our fiducial FOV (assuming $`\delta =1`$ and equal numbers in equal areas). In view of this, and the previous discussion, we shall take $`N_g=40`$ galaxies inside a square $`316h^{-1}`$ kpc on a side (our FOV at $`z_{cl}=0.2`$) for a cluster with $`\sigma _{cl}=1200`$ km/s, but we explore the range $`N_g=20`$–$`60`$ as representative of the likely scatter to be encountered. For clusters with different $`\sigma _{cl}`$ we scale $`N_g`$ by $`(\sigma _{cl}/1200\mathrm{km\ s^{-1}})^{1.5}`$.
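A Monte Carlo galaxy population consistent with these adopted parameters can be sketched as follows. This is an illustrative sketch, not the paper's code; drawing $`N_g`$ from a Poisson distribution and the generous upper luminosity cutoff are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cluster_galaxies(sigma_cl, N_fid=40, alpha=-1.25,
                            sigma_star=230.0, beta=3.0, mag_limit=2.0):
    """Draw a mock galaxy population for one cluster realization.
    Luminosities follow a Schechter law (L/L*)^alpha exp(-L/L*),
    truncated mag_limit magnitudes below L*; dispersions follow
    Faber-Jackson, sigma_g = sigma_star (L/L*)^(1/beta)."""
    N_g = rng.poisson(N_fid * (sigma_cl / 1200.0) ** 1.5)
    L_min = 10.0 ** (-0.4 * mag_limit)       # 2 mag fainter -> L*/6.3
    L = np.empty(N_g)
    i = 0
    while i < N_g:                           # rejection sampling from
        x = rng.uniform(L_min, 10.0)         # the truncated Schechter law
        if rng.uniform() < (x / L_min) ** alpha * np.exp(-(x - L_min)):
            L[i] = x
            i += 1
    sigma_g = sigma_star * L ** (1.0 / beta)
    return L, sigma_g

L, sig = sample_cluster_galaxies(1200.0)
print(len(L), sig.mean())
```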
We finish this discussion of our choice of parameters by summarizing them in Table 1.

## 4 Results

We have calculated $`\mathrm{\Delta }`$ for Monte Carlo realizations of the galaxy distribution in a cluster by ray tracing through a fine grid in the image plane (we find that $`0.375^{\prime \prime }`$ spacing works well enough) to find the points that are imaged back to a given source. We take circular sources (this is adequate for our purpose of finding the cross section for very long arcs, for which the intrinsic ellipticity of the sources is not important) of $`1^{\prime \prime }`$ diameter, and at redshift $`z_S=1`$. Sources are placed with $`0.25^{\prime \prime }`$ spacing or smaller depending on $`z_{cl}`$. Sets of contiguous pixels in the image plane that trace back to a given source are then identified as an image. If at least one image has angular area at least 10 times the area of the source, the pixel area around the source position is added to the arc cross section.

Our main results are presented in Table 2, where we give the average of $`\mathrm{\Delta }`$ over 10 realizations of the distribution of galaxies in a cluster, $`\langle \mathrm{\Delta }\rangle `$. Results are given for a given cluster at three different redshifts and for three representative axial ratios $`q=q_3`$ (i.e. the cluster is seen edge-on; see discussion below). The top row at each redshift gives results for a cluster with $`\sigma _{cl}=1000`$ km/s and $`r_{core}=1(30)h^{-1}`$ kpc, while the bottom row gives results for $`\sigma _{cl}=1200`$ km/s and $`r_{core}=1(30)h^{-1}`$ kpc. The sources are assumed to be at redshift $`z_S=1`$. Based on the scatter of 100 realizations of the galaxy distribution in a cluster at $`z_{cl}=0.2`$, with $`q=0.75`$ and $`\sigma _{cl}=1200`$ km/s, we estimate the error for $`\langle \mathrm{\Delta }\rangle `$ in Table 2 to be $`\pm 0.02`$. Thus, we see that typically $`\langle \mathrm{\Delta }\rangle \simeq 0.12`$. Increasing the number of galaxies to $`N_g=60`$ changes $`\langle \mathrm{\Delta }\rangle `$ only to $`\langle \mathrm{\Delta }\rangle =0.15`$ from $`\langle \mathrm{\Delta }\rangle =0.11`$ for our fiducial cluster at $`z_{cl}=0.2`$: $`\sigma _{cl}=1200`$ km/s, $`r_{core}=1h^{-1}`$ kpc, and $`q=0.75`$. Also, for the entire range $`\sigma _{cl}=800`$–$`1400`$ km/s we find that $`\langle \mathrm{\Delta }\rangle =0.07`$–$`0.15`$ for the same cluster. The scatter introduced by the discrete nature of galaxies (the numerical scatter introduced by the finite size of our grids on the source and image planes is very small) is such that in 68% of the realizations $`\mathrm{\Delta }`$ is in the range $`\mathrm{\Delta }=0.03`$–$`0.16`$, again for our fiducial cluster. The results are not sensitive to the assumed source redshift. For $`z_S=1.2`$ or $`0.8`$, $`\langle \mathrm{\Delta }\rangle \simeq 0.08`$ for the same cluster. We do not find our neglect of the dependence of $`r_{cut}`$ on the distance of a galaxy to the cluster center to be important either. We have used $`r_{cut}=(\sigma _g/\sigma _{cl})R`$, where $`R=(\pi /2)r`$ for a galaxy at projected separation $`r`$ if we assume $`\rho \propto R^{-3}`$, as appropriate for the more distant galaxies. We find that this changes $`\langle \mathrm{\Delta }\rangle `$ only to $`\langle \mathrm{\Delta }\rangle =0.13`$ from $`\langle \mathrm{\Delta }\rangle =0.11`$ for our fiducial cluster.

Our numbers are given for edge-on clusters for simplicity. However, the projection effect could significantly increase $`\mathrm{\Delta }`$ only for fairly flattened clusters seen nearly face-on.
For example, for a cluster at $`z_{cl}=0.2`$ with $`\sigma _{cl}=1200`$ km/s and $`r_{core}=1h^{-1}`$ kpc, $`\mathrm{\Delta }=0.037`$ if $`q_3=q=0.9`$ (see Table 2). We find that this changes to $`\mathrm{\Delta }=0.067`$ instead if the cluster has $`q_3=0.5`$, and is seen in projection with $`q=0.9`$. If we considered the cluster to be prolate instead, $`\mathrm{\Delta }`$ would be smaller. Thus, our results for $`\mathrm{\Delta }`$ are not significantly different when projection effects are taken into account.

We have assumed a Schechter luminosity function throughout. This often underestimates the number of bright galaxies in a cluster (see e.g. Lumsden et al. (1997)). We have corrected for this by assuming a luminosity function $`(L/L_{*})^\alpha \mathrm{exp}(-(L/L_{*})^{1/4})`$. We find that this functional form (with the same $`\alpha `$) fits the data better for $`L>L_{*}`$, without changing much the galaxy count for $`L<L_{*}`$. However, we find that with this luminosity function the value of $`\mathrm{\Delta }`$ changes only to $`\mathrm{\Delta }=0.12`$ from $`\mathrm{\Delta }=0.11`$ for our fiducial cluster.

A possible concern is that these results apply only to a smooth cluster, whereas the clusters in the simulations are clearly substructured. However, we have done a realization of a substructured cluster by adding a large, secondary clump away from the center of the cluster, described by a truncated isothermal sphere density profile with $`\sigma _{cl}=500`$ km/s and $`r_{core}=1h^{-1}`$ kpc. In this case we took $`r_{cut}=225h^{-1}`$ kpc $`\simeq 2r`$, where $`r`$ is the projected separation. This is clearly large given its velocity dispersion, but we took this value to maximize the effect of this subclump in the calculation. We find that even in this case, keeping the same total mass in our FOV as in the case of a smooth cluster with $`\sigma _{cl}=1200`$ km/s, $`\mathrm{\Delta }=0.074`$ instead of $`\mathrm{\Delta }=0.11`$.

## 5 Discussion & Conclusions

Our main conclusion from this study is that the likelihood that a cluster generates long-arc images of background sources is not significantly enhanced by the presence of its galaxies. The many observationally based constraints that we have taken into account imply that there are simply not enough sufficiently massive galaxies in a cluster to affect significantly the probability of a long arc. The effect could be more significant for the probability of finding arcs of certain characteristics. For example, typically the long arcs appear isolated and aligned more or less perpendicular to the cluster major axis (Lupino et al. (1999)). It can be seen in Figure 1 that the cross section for those arcs (the shaded area outside the left and right side of the tangential caustic in the top left panel) is enhanced more by the presence of the galaxies: $`\mathrm{\Delta }\simeq 0.4`$. The effect would also be much larger for arclet statistics, which we have not addressed here.

Undoubtedly our treatment is simplified, but it is clear that the presence of galaxies within a cluster is a minor effect that cannot reconcile the observed frequency of arcs in clusters with the expectations in a universe dominated by a cosmological constant. Meneghetti et al. (1999) have recently studied this problem with a different methodology. Our studies are fairly complementary. For example, their clusters have realistic large-scale substructure, whereas we explore more systematically the galaxy distribution parameter space. Our results are consistent; e.g.
their ensemble of clusters with galaxies generates about 7% fewer long arcs than their pure dark matter clusters, a result entirely within the range we find here. Our results make it clear that the effect of galaxies is not necessarily to decrease the number of arcs, and that the number can be significantly larger in individual clusters.

Acknowledgments: One of us (RF) would like to acknowledge the organizers of the first Princeton-PUC Workshops on Astrophysics (held in Pucon, Chile, January 11-14, 1999) for their invitation to present our results prior to publication. AHM and JRP acknowledge support from NASA and NSF grants at UCSC.
## 1 Introduction

Unpolarized parton distributions are now well determined from various lepton and hadron scattering data. We also have a rough idea of the longitudinally polarized ones from the many experimental data on the $`g_1`$ structure function. However, the details of the polarized distributions are not known yet. For example, the polarized light-antiquark distributions are assumed to be flavor symmetric although the flavor asymmetric sea is confirmed in the unpolarized case. The unpolarized asymmetry was first revealed by the New Muon Collaboration (NMC) in the failure of the Gottfried sum rule, which was studied in muon deep inelastic scattering experiments. It was then confirmed by the CERN-NA51 and Fermilab-E866 collaborations in Drell-Yan experiments. Furthermore, the HERMES semi-inclusive data indicated a similar flavor asymmetry. In this way, the $`\overline{u}/\overline{d}`$ asymmetry is now an established fact. It is also theoretically understood that various factors contribute to the asymmetry. They include nonperturbative mechanisms such as meson clouds and the exclusion principle. Although the effect may not be large, there could also be a perturbative contribution.

In order to determine the major mechanism for creating the asymmetry, other observables should be investigated. The flavor asymmetries in longitudinally-polarized and transversity distributions are appropriate candidates for such observables in the light of the Relativistic Heavy Ion Collider (RHIC) SPIN project and others. There are also some model studies on the possible antiquark flavor asymmetry. Because the $`g_1`$ data are not enough to find the asymmetry, we should rely on semi-inclusive or hadron-scattering data. For example, charged-hadron production data are valuable. However, the Spin Muon Collaboration (SMC) and HERMES data are not accurate enough at this stage for finding a small effect, although one analysis suggests a slight $`\mathrm{\Delta }\overline{u}`$ excess over $`\mathrm{\Delta }\overline{d}`$. There is another possibility of studying it at RHIC by $`W`$ production processes. It has already been shown that the $`W`$ charge asymmetry is very sensitive to the antiquark flavor asymmetry in the proton-proton (pp) reaction. On the other hand, the disadvantage of the $`W`$ production is that the asymmetry in the transversity distributions cannot be investigated because of their chiral-odd nature. Therefore, we need to find an alternative method. This is one of our major purposes for investigating the polarized proton-deuteron (pd) Drell-Yan processes.

An alternative way is to combine the pd Drell-Yan data with the pp data, as has been done in the unpolarized case. However, the formalism of the polarized pd Drell-Yan had not been available until recently. In particular, it was not obvious how the additional tensor structure is involved in the polarized cross sections because the deuteron is a spin-1 hadron. The recently developed formalism made it possible to address the polarized pd processes. Taking advantage of this formalism, we can discuss the possibility of measuring the polarized flavor asymmetry by combining the pd Drell-Yan data with the pp ones. Although such an idea has been pointed out before, the purpose of this paper is to show the actual possibility by numerical analyses. The relation between the polarized pd Drell-Yan cross section and the flavor asymmetry is discussed in Sec. 2. Then, numerical results are explained in Sec. 3, and conclusions are given in Sec. 4.
## 2 Flavor asymmetry in polarized proton-deuteron Drell-Yan processes

Although the lepton scattering suggests the asymmetry $`\overline{u}/\overline{d}\ne 1`$ in the failure of the Gottfried sum rule, it does not enable us to determine the $`x`$ dependence. Therefore, the pd Drell-Yan process ($`p+d\to \mu ^+\mu ^{-}+X`$) has been used for measuring the unpolarized $`\overline{u}/\overline{d}`$ ratio in combination with the pp Drell-Yan. Another possibility is to use the $`W`$ production processes. We discuss a method of using the pd Drell-Yan process for finding the polarized flavor asymmetry in this section.

There are at least two complexities in handling the deuteron reaction. First, the deuteron is a spin-1 hadron so that additional spin structure exists. This point is clarified in the recent formalism papers, to which the interested reader is referred for the details. Second, the deuteron structure functions are not simple summations of proton and neutron ones because of nuclear effects. Although such nuclear corrections are important for a precise analysis, we do not address them in this paper because nuclear modification does not affect the major consequences of this paper. If experimental data are taken in future, shadowing, D-state admixture, and Fermi-motion corrections should be taken into account for detailed comparison.

We found previously that the difference between the longitudinally-polarized pd cross sections is given by

$$\mathrm{\Delta }\sigma _{pd}=\sigma (\uparrow _L,-1_L)-\sigma (\uparrow _L,+1_L)\propto \frac{1}{4}\left[2V_{0,0}^{LL}+\left(\frac{1}{3}-\mathrm{cos}^2\theta \right)V_{2,0}^{LL}\right],$$ (1)

where the subscripts of $`\uparrow _L`$, $`+1_L`$, and $`-1_L`$ indicate the longitudinal polarization and $`\sigma (pol_p,pol_d)`$ indicates the cross section with the proton polarization $`pol_p`$ and the deuteron one $`pol_d`$. The longitudinally polarized structure functions $`V_{0,0}^{LL}`$ and $`V_{2,0}^{LL}`$ are defined in the same reference. The subscripts $`\ell `$ and $`m`$ of the expression $`V_{\ell ,m}^{LL}`$ indicate that it is obtained by the integration $`\int d\mathrm{\Omega }\,Y_{\ell m}\,\mathrm{\Delta }\sigma _{pd}`$, and the superscript $`LL`$ means that the proton and deuteron are both longitudinally polarized. The $`\theta `$ is the polar angle of the lepton $`\mu ^+`$.

A parton model should be used for discussing relations between the structure functions and parton distributions. In the following, we take the expression which is obtained by integrating the cross section over the virtual-photon transverse momentum $`\stackrel{}{Q}_T`$. According to the parton-model analysis, it is given by

$$\mathrm{\Delta }\sigma _{pd}\propto \underset{a}{\sum }e_a^2\left[\mathrm{\Delta }q_a(x_1)\mathrm{\Delta }\overline{q}_a^d(x_2)+\mathrm{\Delta }\overline{q}_a(x_1)\mathrm{\Delta }q_a^d(x_2)\right],$$ (2)

where $`\mathrm{\Delta }q_a^d`$ and $`\mathrm{\Delta }\overline{q}_a^d`$ are the longitudinally-polarized quark and antiquark distributions in the deuteron. The subscript $`a`$ indicates the quark flavor, and $`e_a`$ is the corresponding quark charge. The momentum fractions are given by $`x_1=\sqrt{\tau }e^{+y}`$ and $`x_2=\sqrt{\tau }e^{-y}`$ with $`\tau =M_{\mu \mu }^2/s`$ and the dimuon rapidity $`y=(1/2)\mathrm{ln}[(E^{\mu \mu }+P_L^{\mu \mu })/(E^{\mu \mu }-P_L^{\mu \mu })]`$ in the case of small $`P_T`$.
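For orientation, the kinematic relations above are easily coded; the following minimal helper (ours, not from the paper) returns $`(x_1,x_2)`$ for a given $`\sqrt{s}`$, $`M_{\mu \mu }`$ and $`y`$:

```python
import math

def momentum_fractions(sqrt_s, M_mumu, y):
    """Parton momentum fractions for a Drell-Yan pair of mass M_mumu
    and rapidity y at c.m. energy sqrt(s): x1 = sqrt(tau) e^{+y},
    x2 = sqrt(tau) e^{-y}, tau = M^2/s (small-P_T limit)."""
    root_tau = M_mumu / sqrt_s
    return root_tau * math.exp(y), root_tau * math.exp(-y)

# e.g. sqrt(s) = 50 GeV, M = 5 GeV dimuons at y = 0.5
x1, x2 = momentum_fractions(50.0, 5.0, 0.5)
print(x1, x2, x1 - x2)   # the last value is x_F = x1 - x2
```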
We neglect the nuclear corrections in the deuteron and assume isospin symmetry, so that the distributions in the deuteron become

$`\mathrm{\Delta }u^d=\mathrm{\Delta }u+\mathrm{\Delta }d,`$ $`\mathrm{\Delta }d^d=\mathrm{\Delta }d+\mathrm{\Delta }u,`$ $`\mathrm{\Delta }s^d=2\mathrm{\Delta }s,`$ $`\mathrm{\Delta }\overline{u}^d=\mathrm{\Delta }\overline{u}+\mathrm{\Delta }\overline{d},`$ $`\mathrm{\Delta }\overline{d}^d=\mathrm{\Delta }\overline{d}+\mathrm{\Delta }\overline{u},`$ $`\mathrm{\Delta }\overline{s}^d=2\mathrm{\Delta }\overline{s}.`$ (3)

The polarized gluon distribution does not contribute to our LO analysis. Even in the NLO studies, its effect is rather small in the small-$`P_T`$ region which is investigated in this paper, whereas it contributes significantly to large-$`P_T`$ cross sections. Estimates of such gluon contributions have been discussed elsewhere.

The situation is slightly different in the transversity case. If the cross-section difference is simply given by $`\mathrm{\Delta }_T\sigma _{pd}=\sigma (\varphi _p=0,\varphi _d=0)-\sigma (\varphi _p=0,\varphi _d=\pi )`$, where $`\varphi `$ is the azimuthal angle of a polarization vector, four structure functions ($`V_{0,0}^{TT}`$, $`V_{2,0}^{TT}`$, $`U_{2,2}^{TT}`$, and $`U_{2,1}^{UT}`$) contribute. Here, the superscript $`UT`$, for example, indicates that the proton is unpolarized and the deuteron is transversely polarized. However, the parton model with the $`\stackrel{}{Q}_T`$ integration suggests that only $`U_{2,2}^{TT}`$ remains finite and the higher-twist functions vanish. In the following discussions, we completely neglect the higher-twist contributions. Then, the cross-section difference is given in the parton model as

$$\mathrm{\Delta }_T\sigma _{pd}=\sigma (\varphi _p=0,\varphi _d=0)-\sigma (\varphi _p=0,\varphi _d=\pi )\propto \underset{a}{\sum }e_a^2\left[\mathrm{\Delta }_Tq_a(x_1)\mathrm{\Delta }_T\overline{q}_a^d(x_2)+\mathrm{\Delta }_T\overline{q}_a(x_1)\mathrm{\Delta }_Tq_a^d(x_2)\right],$$ (4)

where $`\mathrm{\Delta }_Tq`$ and $`\mathrm{\Delta }_T\overline{q}`$ are quark and antiquark transversity distributions. The nuclear corrections are again neglected in the parton distributions of the deuteron, so that the equations corresponding to Eq. (3) are used in the following analysis.

The pp cross sections are given in the same way simply by replacing the parton distributions in Eqs. (2) and (4): $`q^d\to q`$ and $`\overline{q}^d\to \overline{q}`$. The ratio of the pd cross section to the pp one is then given by

$$R_{pd}\equiv \frac{\mathrm{\Delta }_{(T)}\sigma _{pd}}{2\mathrm{\Delta }_{(T)}\sigma _{pp}}=\frac{\sum _ae_a^2\left[\mathrm{\Delta }_{(T)}q_a(x_1)\mathrm{\Delta }_{(T)}\overline{q}_a^d(x_2)+\mathrm{\Delta }_{(T)}\overline{q}_a(x_1)\mathrm{\Delta }_{(T)}q_a^d(x_2)\right]}{2\sum _ae_a^2\left[\mathrm{\Delta }_{(T)}q_a(x_1)\mathrm{\Delta }_{(T)}\overline{q}_a(x_2)+\mathrm{\Delta }_{(T)}\overline{q}_a(x_1)\mathrm{\Delta }_{(T)}q_a(x_2)\right]},$$ (5)

where $`\mathrm{\Delta }_{(T)}=\mathrm{\Delta }`$ or $`\mathrm{\Delta }_T`$ depending on the longitudinal or transverse case.
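To make Eq. (5) concrete, the sketch below evaluates $`R_{pd}`$ at LO for simple toy polarized distributions with $`u`$ and $`d`$ flavors only; the functional forms are illustrative assumptions of ours and are not the LSS99 parametrization used later in this paper.

```python
E2 = {"u": 4.0 / 9.0, "d": 1.0 / 9.0}   # squared quark charges

# Toy polarized distributions at a fixed scale (illustrative shapes only):
def du_v(x):   return  0.8  * x**0.5 * (1.0 - x)**3    # Delta u_v
def dd_v(x):   return -0.25 * x**0.5 * (1.0 - x)**4    # Delta d_v
def du_bar(x): return -0.05 * (1.0 - x)**7             # Delta ubar (negative)

def R_pd(x1, x2, r):
    """Eq. (5) at LO; r = Delta ubar / Delta dbar (so dbar = ubar / r)."""
    ub1, db1 = du_bar(x1), du_bar(x1) / r
    ub2, db2 = du_bar(x2), du_bar(x2) / r
    q1  = {"u": du_v(x1) + ub1, "d": dd_v(x1) + db1}
    qb1 = {"u": ub1,            "d": db1}
    q2  = {"u": du_v(x2) + ub2, "d": dd_v(x2) + db2}
    qb2 = {"u": ub2,            "d": db2}
    # deuteron distributions from isospin symmetry, Eq. (3)
    qd2, qbd2 = q2["u"] + q2["d"], qb2["u"] + qb2["d"]
    num = sum(E2[a] * (q1[a] * qbd2 + qb1[a] * qd2) for a in E2)
    den = 2.0 * sum(E2[a] * (q1[a] * qb2[a] + qb1[a] * q2[a]) for a in E2)
    return num / den

for r in (0.7, 1.0, 1.3):                # flavor asymmetry ratio r_qbar
    print(r, round(R_pd(0.9, 0.05, r), 3))   # large x_F: sensitive to r
```

With $`r_{\overline{q}}=1`$ the printed ratio is close to one, while $`r_{\overline{q}}=0.7`$ (1.3) pushes it above (below) one, in line with the discussion that follows.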
At large $`x_F=x_1-x_2`$, the $`\mathrm{\Delta }_{(T)}\overline{q}_a(x_1)`$ terms can be neglected, so that the ratio becomes

$$R_{pd}(x_F\to 1)=1-\frac{[4\mathrm{\Delta }_{(T)}u_v(x_1)-\mathrm{\Delta }_{(T)}d_v(x_1)][\mathrm{\Delta }_{(T)}\overline{u}(x_2)-\mathrm{\Delta }_{(T)}\overline{d}(x_2)]}{8\mathrm{\Delta }_{(T)}u_v(x_1)\mathrm{\Delta }_{(T)}\overline{u}(x_2)+2\mathrm{\Delta }_{(T)}d_v(x_1)\mathrm{\Delta }_{(T)}\overline{d}(x_2)},$$ (6)

where $`x_1\to 1`$ and $`x_2\to 0`$. If the distribution $`\mathrm{\Delta }_{(T)}\overline{u}`$ is the same as $`\mathrm{\Delta }_{(T)}\overline{d}`$, the ratio is simply given by

$$R_{pd}(x_F\to 1)=1\mathrm{\hspace{1em}}\text{if}\mathrm{\hspace{1em}}\mathrm{\Delta }_{(T)}\overline{u}=\mathrm{\Delta }_{(T)}\overline{d}.$$ (7)

Equation (6) shows that the deviation from one is directly proportional to the $`\mathrm{\Delta }_{(T)}\overline{u}-\mathrm{\Delta }_{(T)}\overline{d}`$ distribution. If the valence-quark distributions satisfy $`\mathrm{\Delta }_{(T)}u_v(x\to 1)\gg \mathrm{\Delta }_{(T)}d_v(x\to 1)`$, Eq. (6) becomes

$$R_{pd}(x_F\to 1)=1-\left[\frac{\mathrm{\Delta }_{(T)}\overline{u}(x_2)-\mathrm{\Delta }_{(T)}\overline{d}(x_2)}{2\mathrm{\Delta }_{(T)}\overline{u}(x_2)}\right]_{x_2\to 0}=\frac{1}{2}\left[1+\frac{\mathrm{\Delta }_{(T)}\overline{d}(x_2)}{\mathrm{\Delta }_{(T)}\overline{u}(x_2)}\right]_{x_2\to 0}.$$ (8)

Therefore, if the $`\mathrm{\Delta }_{(T)}\overline{u}`$ distribution is negative as suggested by the recent parametrizations, and if the $`\mathrm{\Delta }_{(T)}\overline{u}`$ distribution is larger (smaller) than $`\mathrm{\Delta }_{(T)}\overline{d}`$, the ratio is larger (smaller) than one. However, if the $`\mathrm{\Delta }_{(T)}\overline{u}`$ distribution is positive, it is a different story. In this way, we find that the data in the large-$`x_F`$ region are especially useful in finding the flavor asymmetry ratio $`\mathrm{\Delta }_{(T)}\overline{u}(x)/\mathrm{\Delta }_{(T)}\overline{d}(x)`$.

On the other hand, the other $`x_F`$ regions are not so promising. For example, if the other limit $`x_F\to -1`$ is taken, the ratio is

$$R_{pd}(x_F\to -1)=\frac{[4\mathrm{\Delta }_{(T)}\overline{u}(x_1)+\mathrm{\Delta }_{(T)}\overline{d}(x_1)][\mathrm{\Delta }_{(T)}u_v(x_2)+\mathrm{\Delta }_{(T)}d_v(x_2)]}{8\mathrm{\Delta }_{(T)}\overline{u}(x_1)\mathrm{\Delta }_{(T)}u_v(x_2)+2\mathrm{\Delta }_{(T)}\overline{d}(x_1)\mathrm{\Delta }_{(T)}d_v(x_2)},$$ (9)

where $`x_1\to 0`$ and $`x_2\to 1`$. If the condition $`\mathrm{\Delta }_{(T)}u_v(x\to 1)\gg \mathrm{\Delta }_{(T)}d_v(x\to 1)`$ is satisfied, the ratio becomes

$$R_{pd}(x_F\to -1)=\frac{1}{2}\left[1+\frac{\mathrm{\Delta }_{(T)}\overline{d}(x_1)}{4\mathrm{\Delta }_{(T)}\overline{u}(x_1)}\right]_{x_1\to 0}.$$ (10)

If the antiquark distributions are the same, the ratio is given by $`R_{pd}=5/8=0.625`$. Comparing the above equation with Eq. (8), we find a difference of a factor of 4. It suggests that the ratio $`R_{pd}`$ is not as sensitive as the one in the large-$`x_F`$ region, although the $`\mathrm{\Delta }_{(T)}\overline{u}/\mathrm{\Delta }_{(T)}\overline{d}`$ asymmetry could be found also in this region.

The pd/pp ratio has been used for finding the flavor asymmetry $`\overline{u}/\overline{d}`$ in the unpolarized reaction. In this paper, we would like to show the possibility of finding it in the polarized parton distributions. Our investigation is particularly important for the transversity distributions.
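The limiting formulas (8) and (10) are easy to tabulate. The short sketch below (ours, for illustration) prints both limits as a function of $`r=\mathrm{\Delta }\overline{u}/\mathrm{\Delta }\overline{d}`$, making the factor-of-4 loss of sensitivity at $`x_F\to -1`$ explicit:

```python
# Limiting values of Eq. (8) (x_F -> +1) and Eq. (10) (x_F -> -1)
# as a function of r = Delta ubar / Delta dbar, assuming
# Delta u_v >> Delta d_v near x = 1.
for r in (0.7, 1.0, 1.3):
    print(f"r={r}:  R_pd(x_F->+1) = {0.5 * (1 + 1 / r):.3f},"
          f"  R_pd(x_F->-1) = {0.5 * (1 + 1 / (4 * r)):.3f}")
```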
In finding the flavor asymmetry in the unpolarized and longitudinally-polarized distributions, the popular ideas are to use inclusive lepton scattering and $`W`$ production data. However, these methods cannot be used for the transversity distributions because of their chiral-odd property. The pd asymmetry in the transversely-polarized Drell-Yan processes enables us to determine the flavor asymmetry $`\mathrm{\Delta }_T\overline{u}/\mathrm{\Delta }_T\overline{d}`$.

## 3 Results

We show the expected pd/pp ratios numerically in this section by using recent parametrizations for the polarized parton distributions. First, the leading-order (LO) results are shown in Fig. 1 at $`\sqrt{s}=50`$ GeV and $`M_{\mu \mu }=5`$ GeV. The Drell-Yan cross-section ratio $`R_{pd}`$ is calculated in the longitudinally- and transversely-polarized cases, and the results are shown by the solid and dashed curves, respectively. The longitudinally-polarized distributions are taken from the 1999 version of the LSS (Leader-Sidorov-Stamenov) parametrization. Strictly speaking, their distributions cannot be used in the LO analysis because they are provided at the NLO level. Nevertheless, the same input distributions are used in our LO analysis in order to compare with the next-to-leading-order (NLO) evolution results in the following. The flavor asymmetry ratio is taken as

$$r_{\overline{q}}\equiv \frac{\mathrm{\Delta }_{(T)}\overline{u}}{\mathrm{\Delta }_{(T)}\overline{d}}=0.7,\mathrm{\ }1.0,\mathrm{\ or\ }1.3,$$ (11)

at $`Q^2=1`$ GeV<sup>2</sup>. Because the input distributions are provided at $`Q^2=1`$ GeV<sup>2</sup>, they should be evolved to those at $`Q^2=M_{\mu \mu }^2`$ with the LO evolution equations for the longitudinally-polarized and transversity distributions. Although the longitudinal distributions are roughly known from the $`g_1`$ data, there is no experimental information on the transversity ones. Because nonrelativistic quark models indicate that they are equal to the longitudinal ones, we assume the same LSS99 distributions at the initial point $`Q^2=1`$ GeV<sup>2</sup>.

If the antiquark distributions are flavor symmetric ($`r_{\overline{q}}=1`$), the pd/pp ratio satisfies the conditions $`R_{pd}\to 1`$ as $`x_F\to 1`$ and $`R_{pd}\to 0.625`$ as $`x_F\to -1`$. The flavor-asymmetry effects are conspicuous especially at large $`x_F`$, as we explained in Sec. 2. Because the $`\mathrm{\Delta }_{(T)}\overline{u}`$ distribution is negative in the used LSS99 parametrization, the ratio $`R_{pd}`$ is larger than one at large $`x_F`$ if there exists a $`|\mathrm{\Delta }_{(T)}\overline{d}|`$ excess over $`|\mathrm{\Delta }_{(T)}\overline{u}|`$ ($`r_{\overline{q}}=0.7`$). On the other hand, if $`|\mathrm{\Delta }_{(T)}\overline{u}|`$ is larger than $`|\mathrm{\Delta }_{(T)}\overline{d}|`$ ($`r_{\overline{q}}=1.3`$), it is smaller than one. In the small-$`x_F`$ region, the flavor-asymmetry contributions are not so large due to the suppression factor 1/4. It is also interesting to find that there is almost no difference between the longitudinally- and transversely-polarized ratios if the initial distributions are identical.

Next, we show NLO evolution results in Fig. 2. Using the same LSS99 distributions at $`Q^2=1`$ GeV<sup>2</sup>, we evolve them to the distributions at $`Q^2=25`$ GeV<sup>2</sup> by the NLO longitudinal and transversity evolution equations. As shown in the figure, the calculated ratios are almost the same as those of the LO.
However, there are slight differences, as is noticeable in the large-$`x_F`$ region: the ratio $`R_{pd}`$ is not equal to one although the antiquark distributions are flavor symmetric at $`Q^2=1`$ GeV<sup>2</sup>. This is because the $`Q^2`$ evolution gives rise to an asymmetric sea even though the initial distributions are flavor symmetric. This kind of perturbative QCD effect is not so large in the evolution from $`Q^2=1`$ GeV<sup>2</sup> to 25 GeV<sup>2</sup>. If the distributions are evolved from a GRSV (Glück-Reya-Stratmann-Vogelsang) type small initial $`Q^2`$, the effect is larger. The comparison of Fig. 2 with Fig. 1 indicates that the NLO analysis is important for a precise determination of the $`\mathrm{\Delta }_{(T)}\overline{u}/\mathrm{\Delta }_{(T)}\overline{d}`$ ratio from measured experimental data.

In Fig. 3, the dependence on the center-of-mass energy $`\sqrt{s}`$ is shown. The LO cross-section ratio is calculated at the RHIC energies $`\sqrt{s}=200`$ and 500 GeV. The calculated ratios are almost equal to those at $`\sqrt{s}=50`$ GeV in the large and small $`x_F`$ regions. However, the ratio becomes a steeper function of $`x_F`$ as $`\sqrt{s}`$ increases, so that the intermediate-$`x_F`$ results depend much on the c.m. energy. In other words, the intermediate region is sensitive to the details of the parton distributions.

Finally, we discuss the parametrization dependence. A difference from the unpolarized ratio is that the distributions $`\mathrm{\Delta }_{(T)}q`$ and $`\mathrm{\Delta }_{(T)}\overline{q}`$ could be negative, so that the denominator, for example in Eq. (5), may vanish depending on the kinematical condition. If this is the case, the ratio has strong $`x_F`$ dependence in the intermediate region. Therefore, we should be aware that the obtained numerical results could change significantly depending on the choice of the input polarized parton distributions. In particular, the $`x`$ dependence of the antiquark distributions, not to mention the gluon distribution, is not well known, although $`\mathrm{\Delta }\overline{q}`$ seems to be negative and $`|\mathrm{\Delta }\overline{q}|`$ is rather small according to the recent parametrizations.

The LSS99 distributions have been used so far in our analysis. There are several other polarized parametrizations. In order to show the dependence on the used parametrization, we employ the Gehrmann-Stirling set A (GS-A NLO) and the GRSV96 (NLO) distributions. The calculated LO ratios are shown in Fig. 4. The GS-A and GRSV96 distributions are calculated first at $`Q^2=1`$ GeV<sup>2</sup> by their own programs. Then, a certain antiquark ratio $`r_{\overline{q}}`$ is introduced. After this prescription, the distributions are evolved to $`Q^2=25`$ GeV<sup>2</sup> by the evolution programs mentioned above. The calculated results are not much different between the LSS99 and GRSV parametrizations. However, if the GS-A distributions are used, the results differ considerably. This is because the GS-A antiquark distributions are positive at large $`x`$ and become negative at small $`x`$. The denominator of Eq. (5) could vanish at some $`x_F`$ points, so that the ratio is infinite at these points. Therefore, the intermediate-$`x_F`$ region is especially useful for finding the detailed $`x`$ dependence of the antiquark distributions. As we have found in these analyses, the pd Drell-Yan is important for finding not only new structure functions but also the details of the polarized antiquark distributions.
At this stage, there is no experimental proposal for the polarized deuteron Drell-Yan. However, there are possibilities at FNAL, HERA, and RHIC, and we do hope that the feasibility is studied seriously at these facilities.

## 4 Conclusions

We have studied the polarized Drell-Yan cross-section ratio $`R_{pd}=\mathrm{\Delta }_{(T)}\sigma _{pd}/2\mathrm{\Delta }_{(T)}\sigma _{pp}`$. Using the recent formalism for the polarized pd processes and typical parametrizations, we have shown that it is possible to extract the information on the light antiquark flavor asymmetry ($`\mathrm{\Delta }_{(T)}\overline{u}/\mathrm{\Delta }_{(T)}\overline{d}`$). The large-$`x_F`$ region is very sensitive to the flavor asymmetry. Our proposal is particularly important for the transversity distributions because the $`\mathrm{\Delta }_T\overline{u}/\mathrm{\Delta }_T\overline{d}`$ asymmetry cannot be found in the $`W`$ production processes. Furthermore, the intermediate-$`x_F`$ region is valuable for finding the detailed $`x`$ dependence of the polarized antiquark distributions.

## Acknowledgments

S.K. and M.M. were partly supported by the Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Science, and Culture. M.M. was supported by a JSPS Research Fellowship for Young Scientists.
# Microlensing Results and Removing the Baryonic Degeneracy

## 1 Constraints on the BDM, and BDM halo models

Microlensing (ML) observations revolutionized galactic halo modeling. The abundance and properties of lenses influence the cosmological picture of the abundance, structure and evolution of the baryonic content of the universe. The widely held assumptions that (I) MACHOs are baryonic and (II) the Milky Way is a typical $`L_{*}`$ spiral galaxy are necessary for a correct accounting of baryons, for comparison with BBNS bounds (Fields et al. 1998), and for comparing the local mass census with the total mass-energy of the universe. Our own assumption, (III) that MACHOs possess finite mass-to-light ratios, excludes dense clouds and stellar-mass black holes from (I). The unknown baryonic fraction of primordial galactic haloes $`f_g`$ became an acute problem with the discovery of a non-zero cosmological constant. It can vary between 0.04 and 0.2, depending on the precise values of the light element abundances, the Hubble constant and the cosmological constant (Gates, Gyuk & Turner 1995).

## 2 Discussion

The contribution of MACHOs to the total baryonic density, if the Milky Way is typical for the present epoch, $`\mathrm{\Omega }_{\mathrm{MACHO}}/\mathrm{\Omega }_B\simeq 0.7h`$ (Fields et al. 1998), is based on ML surveys toward the Magellanic Clouds, still in progress. If the MACHO halo extends further than the distance to the Magellanic Clouds, $`\sim 50`$ kpc, the ratio also increases. The inclusion of gaseous phases into the Galactic BDM mass budget is based on the persistence of Ly$`\alpha `$ absorption systems down to low redshifts (HST observations), and their proven association with normal galaxies (Fukugita et al. 1998), as diffuse remnants of the huge gaseous haloes galaxies once possessed. With a maximal absorption radius of $`178h^{-1}`$ kpc (Chen et al. 1998), they are much larger than MACHO haloes, suggesting that the amount of gas in various ionization stages around normal luminous galaxies is high, contrary to the conventional wisdom. The high baryon budget, discussed by Milošević-Zdjelar, Samurović & Ćirković (this Conference) and Jakobsen (1998), cannot be found in the present-day IGM. We conclude that by the recent epochs, most baryons have been incorporated into collapsed structures of gas of varying ionization stages, and MACHOs.

Extending the discussion of Gates et al. (1995) with a gaseous phase of halo matter, we establish contact between the observed column density and the physical density of gas, integrate the density over the halo volume, integrate the masses of such haloes over the Schechter luminosity function, and obtain three density profiles. In a model which gives the physical density a priori, we may substitute for the first step an integration of this density along the line of sight in order to compare with the observational data on the spatial distribution of column densities (Ćirković 1999).

ML results, as manifested in optical depths and event durations, suffer from intrinsic degeneracies which preclude the construction of a realistic, 3-D model of the lens distribution. This degeneracy could be removed by supplementing the ML data with independent constraints from other branches of astronomy. The explosive advances recently made in the study of gas around galaxies at low redshift promise that adding this information to the ML data, together with the “high-precision era” of BBNS studies, will enable fixing the MACHO abundance and the extent of the MACHO halo for $`L_{*}`$ galaxies. Composite BDM models may certainly aspire to be more realistic than gas-only (Mo & Miralda-Escudé 1996) or MACHO-dominated ones (e.g.
Honma & Kan-ya 1998).

###### Acknowledgements.

Srdjan Samurović acknowledges the financial support of the University of Trieste.

## References

Chen, H.-W., Lanzetta, K. M., Webb, J. K., & Barcons, X. 1998, ApJ, 498, 77

Ćirković, M. M. 1999, Ap&SS, in press

Fields, B. D., Freese, K., & Graff, D. S. 1998, NewA, 3, 347

Fukugita, M., Hogan, C. J., & Peebles, P. J. E. 1998, ApJ, 503, 518

Gates, E. I., Gyuk, G., & Turner, M. S. 1995, Phys. Rev. Lett., 74, 3724

Honma, M., & Kan-ya, Y. 1998, ApJ, 503, L139

Jakobsen, P. 1998, A&A, 331, 61

Mo, H. J., & Miralda-Escudé, J. 1996, ApJ, 469, 589
## Abstract

Different mechanisms believed to be responsible for the generation of bursts in hydrodynamical systems are reviewed and a new mechanism capable of generating regular or irregular bursts of large dynamic range near threshold is described. The new mechanism is present in the interaction between oscillatory modes of odd and even parity in systems of large but finite aspect ratio, and provides an explanation for the bursting behavior observed in binary fluid convection. Additional applications of the new mechanism are proposed.

## 1 Introduction

Bursts of activity, be they regular or irregular, are a common occurrence in physical and biological systems. In recent years several models of bursting behavior in hydrodynamical systems have been described using ideas from dynamical systems theory. In this article we review these and then describe a new mechanism (Moehlis & Knobloch, Knobloch & Moehlis) which provides an explanation for the bursting behavior observed in experiments on convection in <sup>3</sup>He/<sup>4</sup>He mixtures by Sullivan & Ahlers. This mechanism operates naturally in systems with broken D<sub>4</sub> symmetry undergoing a Hopf bifurcation from a trivial state. This symmetry, the symmetry group of a square, may be present because of the geometry of the system under consideration (for example, the shape of the container) but also appears in large aspect ratio systems with reflection symmetry (Landsberg & Knobloch). In either case bursting arises as a result of the nonlinear interaction between two nearly degenerate modes with different symmetries. This article is an expanded version of Knobloch & Moehlis.

## 2 Mechanisms producing bursting

As detailed further below, bursts come in many different forms, distinguished by their dynamic range (i.e., the range of amplitudes during each burst), duration, and recurrence properties. Particularly important for the purposes of the present article is the question of whether the observed bursts occur close to the threshold of a primary instability or whether they are found far from threshold. In the former case a dynamical systems approach is likely to be successful: in this regime the spatial structure usually resembles the eigenfunctions of the linear problem and it is likely that only a small number of degrees of freedom participate in the burst. Such bursts take place fundamentally in the time domain with their spatial manifestation of secondary importance, in contrast to pulses which are structures localized in both time and space; the latter are not considered here. Since the equations governing the evolution of primary instabilities are often highly symmetric (see Crawford & Knobloch) global bifurcations are likely to occur and these serve as potential candidates for bursting mechanisms. In contrast, bursts found far from threshold usually involve many degrees of freedom but even here some progress is sometimes possible.

### 2.1 Bursts in the wall region of a turbulent boundary layer

The presence of coherent structures in a turbulent boundary layer is well established (see, e.g., Robinson and the collection of articles edited by Panton). The space-time evolution of these structures is often characterized by intermittent bursting events involving low speed streamwise “streaks” of fluid. Specifically, let $`x_1,x_2`$, and $`x_3`$ be the streamwise, wall normal, and spanwise directions with associated velocity components $`U+u_1,u_2`$, and $`u_3`$, respectively; here $`U(x_2)`$ is the mean flow.
In a “burst” the streak breaks up and low speed fluid moves upward away from the wall ($`u_1<0,u_2>0`$); this is followed by a “sweep” in which fast fluid moves downward towards the wall ($`u_1>0,u_2<0`$). After the burst/sweep cycle the streak reforms, often with a lateral spanwise shift.

A low-dimensional model of the burst/sweep cycle was developed by Aubry et al.; further details and later references may be found in Holmes et al. To construct such a model the authors used a Karhunen-Loève decomposition of the data to identify an energetically dominant empirical set of eigenfunctions, hereafter “modes”. The original study of Aubry et al. used experimental data for pipe flow with $`Re\simeq 6750`$, while later studies used data for channel flow from direct numerical simulation with $`Re\simeq 3000`$–$`4000`$ and large eddy simulations with $`Re\simeq 13800`$ (Holmes et al.). The model was constructed by projecting the Navier-Stokes equation onto this basis and consists of a set of coupled ODEs for the amplitudes of these modes. The fixed points of these equations are to be associated with the presence of coherent structures. There are two types, related by half-wavelength spanwise translation. Numerical integration of the model reveals that these fixed points are typically unstable and that they are connected by a heteroclinic cycle. In such a cycle the trajectory alternately visits the vicinities of the two unstable fixed points. In the model of Aubry et al. this heteroclinic cycle is found to be structurally stable, i.e., it persists over a range of parameter values. This is a consequence of the O(2) symmetry of the equations inherited from periodic boundary conditions in the spanwise direction. Moreover, for the parameter values of interest this cycle is attracting, i.e., it attracts all nearby trajectories. Since the transition from one fixed point to the other corresponds to a spanwise translation by half a wavelength, the recurrent excursions along such a heteroclinic cycle can be identified with the burst/sweep cycle described above. However, since this cycle is attracting, the time between successive bursts will increase as time progresses. This is not observed and Aubry et al. appeal to the presence of a random pressure term modeling the effect of the outer fluid layer to kick the trajectory from the heteroclinic cycle. In the language of Busse such a pressure term results in a statistical limit cycle, with the bursting events occurring randomly in time but with a well-defined mean rate. The resulting temporal distribution of the burst events is characterized by a strong exponential tail, matching experimental observations. Attracting structurally stable heteroclinic cycles occur in a number of problems of this type, i.e., mode interaction problems with O(2) symmetry (Armbruster et al., Proctor & Jones, Melbourne et al., Steindl & Troger, Krupa, Hirschberg & Knobloch).

### 2.2 Bursts in shear flows undergoing subcritical transition to turbulence

Experimental studies of plane Couette flow and Poiseuille (pipe) flow have shown that at high enough values of the Reynolds number $`Re`$ the basic laminar flow becomes turbulent. However, these laminar flows are linearly stable at all $`Re`$ (see, e.g., Drazin & Reid). Consequently, the transition to turbulence in these systems must arise from finite (i.e., not infinitesimal) perturbations to the basic flow; such transitions are called subcritical since the turbulent state exists for values of $`Re`$ for which the laminar state is stable.
Much recent work (reviewed in Baggett & Trefethen) has emphasized the importance of the fact that the linear operators $`L`$ governing the evolution of perturbations of these flows are non-normal, i.e., $`L^{\dagger }L\ne LL^{\dagger }`$, where $`L^{\dagger }`$ is the adjoint of $`L`$. Linear systems with a non-normal $`L`$ can exhibit transient growth even though the laminar state is linearly stable; if the growth is large enough, nonlinearities in the system may then trigger a transition to turbulence. Analysis of low-dimensional models supported by numerical simulations suggests that the minimum perturbation amplitude $`ϵ`$ that results in turbulence scales as $`ϵ=𝒪(Re^\alpha )`$ for some $`\alpha <-1`$ (Baggett & Trefethen). Thus, $`ϵ`$ decreases rapidly with increasing $`Re`$, and the experimentally determined $`Re`$ for transition should be that value at which $`ϵ`$ is roughly equal to perturbations due to noise or imperfections in the system.

The turbulence excited by finite amplitude perturbations in shear flows often takes the form of turbulent spots which can move, grow, split, and merge (see, e.g., Daviaud et al.). Turbulent spots can also burst intermittently. For example, Bottin et al. observed intermittent turbulent bursts in plane Couette flow for $`Re\simeq 325`$; these bursts were triggered by a spanwise wire through the center of the channel, and could be localized in space by introducing a bead into the central plane.

We focus here on this burst-like behavior, also found in low-dimensional models of the subcritical transition to turbulence. The important issue in such models is the attractor to which the flow settles after an appropriate finite perturbation to the basic laminar flow. This will depend crucially on nonlinear terms in the equations, a point emphasized, for example, by Waleffe and Dauchot & Manneville. If this attractor is a fixed point, the system settles into a steady state in which bursts do not occur. On the other hand, if this attractor is a limit cycle it may be appropriate to interpret the resulting behavior as burst-like. This is the case in the model studied in Waleffe in which the amplitudes of streamwise streaks, streamwise rolls, and the streak instability can periodically undergo short-lived explosive growths at the expense of the mean shear. Waleffe refers to this as a self-sustaining process. Since the laminar state is linearly stable, such limit cycles cannot bifurcate off the laminar state. Instead, as pointed out in Waleffe, they may be born in a Hopf bifurcation (from a fixed point other than that corresponding to the laminar state) or a homoclinic bifurcation in which a trajectory homoclinic to a fixed point forms at some value of $`Re`$ such that for smaller (larger) values of $`Re`$ a limit cycle exists (does not exist), or vice versa. Near such a homoclinic bifurcation, bursts are expected because the trajectory spends a long time near the fixed point, bursting away and returning in a periodic or chaotic fashion, depending on the eigenvalues of the fixed point. Other generic codimension one bifurcations leading to the appearance or disappearance of limit cycles are saddle-node bifurcations of limit cycles and saddle-nodes in which a pair of fixed points appears on the limit cycle (Guckenheimer et al.). Limit cycles could also come in from or go off to infinity. In other models chaotic attractors have been found (see, e.g., Gebhardt & Grossman and Baggett et al.) and these may also give rise to burst-like behavior.
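The transient growth permitted by non-normality is easily demonstrated. The following $`2\times 2`$ caricature (ours, not taken from the cited references) has two decaying eigenvalues yet amplifies a suitable initial condition by a large factor before the eventual decay:

```python
import numpy as np

# A stable but non-normal operator: both eigenvalues of A are negative,
# yet ||exp(At) x0|| grows transiently because of the large off-diagonal
# coupling before decaying -- a minimal cartoon of the mechanism above.
A = np.array([[-0.01, 1.0],
              [ 0.0, -0.02]])        # non-normal: A^T A != A A^T

def norm_growth(t, x0=np.array([0.0, 1.0])):
    # matrix exponential via eigendecomposition (A is diagonalizable here)
    w, V = np.linalg.eig(A)
    x = V @ (np.exp(w * t) * np.linalg.solve(V, x0))
    return np.linalg.norm(x.real)

for t in (0.0, 10.0, 50.0, 100.0, 500.0):
    print(t, norm_growth(t))         # peaks near t ~ 100 at ~23x the
                                     # initial norm, then decays
```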
The relation of the bursts described in this section (which occur for moderate values of $`Re`$, e.g., $`Re\simeq 325`$) to those which occur in the turbulent boundary layer (which occur for high values of $`Re`$) is not completely clear. However, the similarities in the phenomenology have led Waleffe to propose that the self-sustaining process will continue to have importance in the near-wall region of high $`Re`$ flows.

### 2.3 Heteroclinic connections to infinity

Another mechanism involving heteroclinic connections, distinct from that discussed in section 2.1, has been investigated by Newell et al. as a possible model for spatio-temporal intermittency in turbulent flow. The authors suggest that such systems may be viewed as nearly Hamiltonian except during periods of localized intense dissipation. A related “punctuated Hamiltonian” approach to the evolution of two-dimensional turbulence has met with considerable success (Carnevale et al. and Weiss & McWilliams). For their description Newell et al. divide the instantaneous states of the flow into two categories, a turbulent soup (TS) characterized by weak coherence, and a singular (S) state characterized by strong coherence, and suppose that the TS and S states are generalized saddles in an appropriate phase space. Furthermore, they suppose that in the Hamiltonian limit the unstable manifold of TS (S) intersects transversally the stable manifold of S (TS). If the constant energy surfaces are noncompact (i.e., unbounded), the evolution of the Hamiltonian system may take the system into regions of phase space with very high (“infinite”) velocities and small scales. These regions are identified with the S states and high dissipation. In such a scenario the strong dissipation events are therefore identified with excursions along heteroclinic connections to infinity. Perturbations to the system (such as the addition of dissipative processes) may prevent the trajectory from actually reaching infinity, but this underlying unperturbed structure implies that large excursions are still possible.

Newell et al. apply these ideas to the two-dimensional nonlinear Schrödinger equation (NLSE) with perturbations in the form of special driving and dissipative terms which act at large and small scales, respectively. Here S consists of “filament” solutions to the unperturbed NLSE which become singular in finite time and represent coherent structures which may occur at any position in the flow field. When the solution is near S a large portion of the energy is in small scales; for the perturbed equations the dissipative term then becomes important so that the filament solution is approached but collapses before it is reached. This leads to a spatially and temporally random occurrence of localized burst-like events for the perturbed equation. The rate of attraction at S is determined by the faster than exponential rate at which the filament becomes singular, while the rate of repulsion at S is governed by the dissipative process and hence is unrelated to the rate of attraction. This bursting mechanism shares characteristics with that described in Kaplan et al. in which solutions of a single complex Ginzburg-Landau equation with periodic boundary conditions undergo faster than exponential bursting due to a destabilizing nonlinearity and collapse due to strong nonlinear dispersion (see also Bretherton & Spiegel).
A study of a generalization of Burgers' equation modeling nonlocality effects suggests the presence of burst-like events through a similar scenario (Makarenko et al.).

### 2.4 Bursts in the Kolmogorov flow

The Kolmogorov flow $`𝐮=(k\mathrm{sin}ky,0)`$ is an exact solution of the two-dimensional incompressible Navier-Stokes equation with unidirectional forcing $`𝐟`$ at wavenumber $`k`$: $`𝐟=(\nu k^3\mathrm{sin}ky,0)`$. With increasing Reynolds number $`Re\propto \nu ^{-1}`$ this flow becomes unstable, and direct numerical simulation with $`2\pi `$-periodic boundary conditions shows that for moderately high Reynolds numbers and $`k>1`$ the resulting flow is characterized by intermittent bursting (She, Nicolaenko & She, Armbruster et al.). A burst occurs when the system evolves from a coherent vortex-like modulated traveling wave (MW) to a spatially disordered state following transfer of energy from large to small scales. The system then relaxes to the vicinity of another symmetry-related MW state, and the process continues with bursts occurring irregularly but with a well-defined mean period.

The details of what actually happens appear to depend on the value of $`k`$ because $`k`$ determines the symmetry of the nonlinear equation describing the evolution of the perturbation streamfunction $`\varphi `$ about the Kolmogorov flow. Although there is no compelling reason for it, all simulations of this equation have been performed with $`2\pi `$-periodic boundary conditions in both directions. With these boundary conditions this equation has a symmetry which is the semi-direct product of the dihedral group D<sub>2k</sub> (generated by the actions $`(x,y,\varphi )\to (-x,-y,\varphi )`$ and $`(x,y,\varphi )\to (-x,y+\pi /k,-\varphi )`$) and the group SO(2) (representing the symmetry under translations $`x\to x+`$const). In the simplest case, $`k=1`$, this symmetry group is isomorphic to the direct product of the group O(2) of rotations and reflections of a circle and the group Z<sub>2</sub> representing reflections in the $`y`$-direction. Unfortunately, when $`k=1`$ the Kolmogorov flow with $`2\pi `$-periodic boundary conditions is not unstable for any value of $`Re`$ (Meshalkin & Sinai, Green, Marchioro), and one is forced to consider $`k>1`$. Armbruster et al. analyzed carefully the $`k=2`$ case and showed that while a heteroclinic cycle between the MW states does form, it is not structurally stable; the bursts are therefore not produced by a mechanism of the type described in section 2.1. It is possible, however, that the onset of bursting is associated with a symmetry-increasing bifurcation at $`Re\approx Re_s`$ (see, e.g., Rucklidge & Matthews). This would explain why the system stays in a single MW state for $`Re`$ just below $`Re_s`$ but visits the vicinity of different but symmetry-related MW states for $`Re`$ just above $`Re_s`$. However, despite much work a detailed understanding of the bursts in this system remains elusive.

An alternative approach is to consider periodic domains with different periodicities in the two directions. In particular, if we consider the domain $`\{-\pi <x\le \pi ,-\pi /k<y\le \pi /k\}`$ with $`k>1`$ the symmetry group remains O(2)$`\times `$Z<sub>2</sub> but sufficiently long perturbations now grow. The unstable modes are either even or odd under the reflection $`(x,y)\to (-x,-y)`$ with respect to a suitable origin.
Mode interaction between these two steady modes can result in a sequence of transitions summarized in figure 1 (Hirschberg & Knobloch ): the Kolmogorov flow loses stability to an even mode (Z), followed by a steady state bifurcation to a mixed parity state (MM<sub>π/2</sub>). Since each of these states is defined to within a translation in $`x`$ modulo $`2\pi `$, we say that it forms a circle of states. The MM<sub>π/2</sub> state then loses stability in a further steady state bifurcation to a traveling wave (TW) which in turn loses stability at a Hopf bifurcation to a MW. The MW two-torus terminates in a collision with the two circles of pure parity states, forming an attracting structurally stable heteroclinic cycle connecting them and their quarter-wavelength translates (Hirschberg & Knobloch ). In this regime the behavior would resemble that found in the numerical simulations, with higher modes kicking the system away from this cycle. Indeed this sequence of transitions echoes the results obtained by She and Nicolaenko & She for $`k=8`$. While it is likely that the $`k=1`$ scenario is relevant to these calculations because of the tendency towards an inverse cascade in these two-dimensional systems, we emphasize that, as mentioned above, a careful analysis of the $`k=2`$ case by Armbruster et al. shows that while a heteroclinic cycle of the required type does indeed form, it is not structurally stable. The case $`k=4`$ has also been studied (Platt et al. ) and a similar sequence of transitions found. Undoubtedly simulations on the domain $`\{-\pi <x\le \pi ,-\pi /k<y\le \pi /k\}`$ would shed new light on the problem, cf. Posch & Hoover . The group O(2)$`\times `$Z<sub>2</sub> also arises in convection in rotating straight channels (Knobloch ) and in natural convection in a vertical slot (Xin et al. ). In both of these cases the linear eigenfunctions are either even or odd with respect to a rotation by $`\pi `$ and the systems exhibit similar transitions. Related behavior has recently been observed in three-dimensional magnetoconvection with periodic boundary conditions on a square lattice. Unlike the Kolmogorov flow the evolution equation describing a steady state secondary instability of a square pattern is isotropic in the horizontal. Numerical simulations (Rucklidge et al. ) show intermittent breakdown of a square pattern followed by its restoration modulo translation.

### 2.5 Bursts in the Taylor-Couette system

The Taylor-Couette system consists of concentric cylinders enclosing a fluid-filled annulus (see, e.g., Tagg ). The cylinders can be rotated independently. In the counterrotating regime the first state consists of spiral vortices of either odd or even parity with respect to midheight. Slightly above onset the flow resembles interpenetrating spiral (IPS) flow which may be intermittently interrupted by bursts of turbulence which fill the entire flow field (Hamill , Coughlin et al. ). With periodic boundary conditions in the axial direction numerical simulations by Coughlin et al. show that the IPS flow is temporally chaotic and consists of coexisting modes with different axial and azimuthal wavenumbers. This flow is confined primarily to the vicinity of the inner cylinder where the axisymmetric base flow is subject to an inviscid Rayleigh instability. Coughlin et al. conclude that the onset of turbulence is correlated with a secondary instability of one of the coexisting modes of the IPS flow, namely the basic spiral vortex flow with azimuthal wavenumber $`m=4`$.
Indeed, when this mode is taken as the initial flow for parameters chosen such that the full IPS flow undergoes bursts, a secondary Hopf bifurcation from this state with the same azimuthal wavenumber but four times the axial wavelength is identified. The secondary instability grows in amplitude and ultimately provides a finite amplitude perturbation to the inviscidly stable flow near the outer cylinder, triggering a turbulent burst throughout the whole apparatus. During a burst small scales are generated throughout the apparatus leading to a rapid collapse of the turbulence and restoration of the IPS flow; the bursting process then repeats roughly periodically in time (see figure 2). As discussed in section 3.1, in a finite Taylor-Couette apparatus there is a natural mechanism for generating bursts close to onset. This mechanism operates in the regime $`ϵ=O(\mathrm{\Gamma }^{-2})`$, where $`ϵ`$ measures the fractional distance above threshold for the primary instability and $`\mathrm{\Gamma }`$ is the aspect ratio of the annulus (Landsberg & Knobloch , Renardy ). For larger $`ϵ`$ the influence of the boundaries no longer extends throughout the apparatus and is confined to boundary layers near the top and bottom. In this regime the dynamics in the bulk may be described by imposing periodic boundary conditions with period that is a multiple of the wavelength of the primary instability. The success of the simulation of the observed turbulent bursts using such periodic boundary conditions (Coughlin et al. ) suggests that these bursts occur too far above threshold to be explained by the mechanism described in section 3.1 below. This suggestion is supported by figure 3 which compares the location of the regime where this mechanism may be expected to operate with $`ϵ_b`$, the experimental value of $`ϵ`$ for the onset of bursts, as a function of $`\mathrm{\Gamma }`$. The latter is obtained using the approximation $`ϵ_b\approx (R_b-R_{IPS})/R_{IPS}`$, where $`R_{IPS}`$ and $`R_b`$ are the inner cylinder Reynolds numbers for the onset of IPS flow and bursts, and

$`R_{IPS}`$ $`=`$ $`(3837\pm 374)\mathrm{\Gamma }^{-1}+680\pm 14`$

$`R_b`$ $`=`$ $`(3680\pm 374)\mathrm{\Gamma }^{-1}+701\pm 14`$

(Hamill ). This is a good approximation for strongly counterrotating cylinders because the IPS flow sets in for inner cylinder Reynolds number only about 1% above the primary onset to spiral vortices (Hamill ). The figure suggests that the observed bursts may fall within the range of validity of this theory for $`\mathrm{\Gamma }`$ only slightly smaller than those used in the experiments ($`\mathrm{\Gamma }=17.9`$–$`26`$); for such bursts the presence of endwalls should become significant.

### 2.6 Intermittency

The term intermittency refers to occasional, non-periodic switching between different types of behavior; such transitions may be viewed as bursts. In this section we describe different types of intermittency, dividing them into three classes.

#### 2.6.1 Low-dimensional intermittency

It is often the case that fluid dynamical systems may be modelled by low-dimensional Poincaré maps. Fixed points of such maps correspond to limit cycles of an appropriate dynamical system; when these lose stability intermittency may result. Without loss of generality we assume that for $`\lambda <\lambda _p`$ the system is in a “laminar” state corresponding to a stable fixed point of the map, and the onset of intermittency occurs at $`\lambda =\lambda _p`$.
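As a concrete illustration, anticipating the Type I case defined in the next paragraph, the following sketch iterates the logistic map just below the tangent (saddle-node) bifurcation of its period-3 orbit. This is a standard textbook example, not one of the experiments cited here; the thresholds in the laminar-phase detector are ad hoc:

```python
import numpy as np

# Logistic map x -> r x (1 - x) just below r_c = 1 + sqrt(8), where the
# stable and unstable period-3 orbits annihilate in a saddle-node
# bifurcation. For r slightly below r_c the orbit shows long, nearly
# period-3 "laminar" phases interrupted by chaotic bursts.
r = 1.0 + np.sqrt(8.0) - 1e-4
x = 0.5
traj = np.empty(20000)
for n in range(traj.size):
    x = r * x * (1.0 - x)
    traj[n] = x

# Crude laminar-phase detector: in a laminar phase the orbit nearly
# repeats every third iterate.
laminar = np.abs(traj[3:] - traj[:-3]) < 1e-3
lengths, run = [], 0
for flag in laminar:
    if flag:
        run += 1
    elif run:
        lengths.append(run)
        run = 0
print("laminar phases:", len(lengths), " mean length:", np.mean(lengths))
```

Moving $`r`$ closer to $`r_c`$ lengthens the laminar phases, the scaling behavior alluded to at the end of the next paragraph.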
For $`\lambda >\lambda _p`$ the state of the system will resemble the stable laminar state which existed for $`\lambda <\lambda _p`$ for some time but will intermittently undergo “bursts” away from this state, followed by “reinjections” to the vicinity of the laminar state. The three types of intermittency identified by Pomeau & Manneville correspond to three different ways in which the laminar state ceases to exist or loses stability at $`\lambda =\lambda _p`$: Type I intermittency results when a stable and unstable limit cycle annihilate in a saddle-node bifurcation, Type II results when the limit cycle loses stability in a subcritical Hopf bifurcation, while in Type III the limit cycle loses stability in an inverse period-doubling bifurcation. Type I and Type III intermittency have been observed, for example, in Rayleigh-Bénard convection (see Bergé et al. and Dubois et al. , respectively), while Type II intermittency has been observed in a hot wire experiment (Ringuet et al. ). A different type of intermittency, called Type X, was observed by Price & Mullin in a variant of the Taylor-Couette system; this is similar to Type I intermittency but involves a hysteretic transition due to the nature of the reinjection. Another type of intermittency, called Type V, was introduced in Bauer et al. and He et al. ; this is similar in spirit to Type I intermittency but involves one-dimensional maps which are nondifferentiable or discontinuous. These different types of intermittency have distinct properties, such as the scaling behavior of the average time between bursts with $`\lambda -\lambda _p`$.

#### 2.6.2 Crisis-induced intermittency

A crisis is a sudden change in a strange attractor as a parameter is varied (Grebogi et al. ). There are three types of crises, and without loss of generality we assume that the crisis occurs as $`\lambda `$ is increased through $`\lambda _c`$. In a boundary crisis, at $`\lambda =\lambda _c`$ the strange attractor collides with a coexisting unstable periodic orbit which lies on the boundary of the basin of attraction of the strange attractor. This leads to the destruction of the strange attractor, but chaotic transients will still exist. Intermittency is not associated with this crisis. In an interior crisis the strange attractor collides with a coexisting unstable periodic orbit at $`\lambda =\lambda _c`$, but here this leads to a widening rather than the destruction of the strange attractor. For $`\lambda `$ slightly larger than $`\lambda _c`$ the trajectory stays near the region of phase space occupied by the strange attractor before the crisis for long times, but intermittently bursts into a new region. In an attractor merging crisis two strange attractors with basins of attraction separated by a basin boundary are present when $`\lambda <\lambda _c`$. At $`\lambda _c`$ the two strange attractors simultaneously touch the basin boundary and “merge” to form a larger strange attractor for $`\lambda >\lambda _c`$. Such a crisis is often associated with a symmetry-increasing bifurcation (Chossat & Golubitsky ). For $`\lambda >\lambda _c`$ there is a single strange attractor on which the trajectory intermittently switches between states resembling the distinct strange attractors which existed for $`\lambda <\lambda _c`$.

#### 2.6.3 Intermittency involving an invariant manifold

A manifold in phase space is invariant if every initial condition on the manifold generates an orbit that remains on the manifold.
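As a preview of the dynamics near such a manifold, the sketch below iterates a simple skew-product map with invariant manifold $`x=0`$; it exhibits the on-off intermittency discussed in the next paragraph. The map and its blowout threshold follow the spirit of the classic model of Heagy, Platt & Hammel, but the script and its diagnostics are only an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Skew-product map with invariant manifold x = 0:
#     x -> a z x (1 - x),  z drawn uniformly from (0, 1).
# The transverse Lyapunov exponent of x = 0 is ln(a) + <ln z> = ln(a) - 1,
# so the blowout occurs at a_c = e; we sit just above it.
a = 1.01 * np.e
x = 1e-6
xs = np.empty(200000)
for n in range(xs.size):
    x = a * rng.random() * x * (1.0 - x)
    xs[n] = x

on = xs > 0.1  # "on" phases, i.e., excursions away from x = 0
print("fraction of time in 'on' (burst) phases:", on.mean())
print("longest 'off' (laminar) interval:",
      int(np.max(np.diff(np.flatnonzero(on)))) if on.any() else len(xs))
```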
Invariant manifolds often exist due to symmetries, but this is not necessary. There are several mechanisms for intermittency involving strange attractors on an invariant manifold. First, suppose that as a bifurcation parameter $`\lambda `$ is increased through $`\lambda _p`$, the strange attractor on the invariant manifold loses stability transverse to the manifold; this is called a blowout bifurcation (Ott & Sommerer ). Suppose that the dynamics within the invariant subspace do not depend on $`\lambda `$. Such a system thus has a skew-product structure (Platt et al. ), and $`\lambda `$ is called a normal parameter (Ashwin et al. ). If the blowout bifurcation is “supercritical” (Ott & Sommerer ) then for $`\lambda `$ just above $`\lambda _p`$ a trajectory will spend a long time near the invariant manifold, intermittently bursting away from it, only to return due to the presence of a reinjection mechanism. This scenario is known as on-off intermittency, where the “off” (“on”) state corresponds to the system being near (away from) the invariant manifold (see, e.g., Platt et al. , Venkataramani et al. ). Recently it has been shown that in appropriate circumstances a blowout bifurcation can lead to a structurally stable (possibly attracting) heteroclinic cycle between chaotic invariant sets (Ashwin & Rucklidge ). For systems which lack a skew-product structure and thus are governed by non-normal parameters, in-out intermittency is possible (Ashwin et al. , Covas et al. ). Here the attraction to and repulsion from the invariant subspace are controlled by different dynamics; this occurs when the attractor for the dynamics restricted to the invariant subspace is smaller than the intersection of the attractor for the full dynamics with the invariant subspace. In-out intermittency may thus be viewed as a generalization of on-off intermittency, in which the attraction to and repulsion from the invariant subspace are controlled by the same dynamics. Another mechanism for bursting occurs when the strange attractor on the invariant manifold attracts typical orbits near the surface but is unstable in the sense that there are unstable periodic orbits embedded within the chaotic set which are transversely repelling. If the trajectory comes near such an unstable periodic orbit, there will be a burst away from the invariant surface. Such bursts may occur intermittently if noise is present or small changes (called “mismatch”) are made to the dynamical system that destroy the invariant surface. The loss of transverse stability of the first such periodic orbit is known as a bubbling transition (see, e.g., Ashwin et al. , Venkataramani et al. ).

### 2.7 Bursts in neural systems

In neural systems, bursting refers to the switching of an observable such as a voltage or chemical concentration between an active state characterized by rapid (spike) oscillations and a rest state. Models of such bursting typically involve singularly perturbed vector fields in which system variables are classified as being “fast” or “slow” depending on whether or not they change significantly over the duration of a single spike. The slow variables may then be thought of as slowly varying parameters for the equations describing the fast variables (Rinzel , Bertram et al. , Wang & Rinzel , Guckenheimer et al. ).
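A minimal numerical illustration of this fast-slow structure is the Hindmarsh-Rose model sketched below. It is not one of the specific models cited above, and the parameter values are standard illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hindmarsh-Rose burster: (x, y) are the "fast" membrane variables,
# z is the "slow" adaptation variable that switches the fast subsystem
# between spiking (active) and quiescent (rest) states.
I_app, r, s, x0 = 2.0, 0.006, 4.0, -1.6

def hr(t, v):
    x, y, z = v
    return [y + 3*x**2 - x**3 - z + I_app,
            1 - 5*x**2 - y,
            r*(s*(x - x0) - z)]

sol = solve_ivp(hr, (0, 2000), [-1.6, -10.0, 2.0], max_step=0.1)
x = sol.y[0]
# Count spikes (upward crossings of x = 1) to expose the active phases:
spikes = np.sum((x[:-1] < 1.0) & (x[1:] >= 1.0))
print("number of spikes:", spikes)
print("slow variable range:", sol.y[2].min(), sol.y[2].max())
```

Because $`r`$ is small, $`z`$ drifts slowly while the fast subsystem tracks either its periodic orbit or its fixed point, exactly the singularly perturbed picture described above.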
As the slow variables evolve, the state of the system in the fast variables may change from a stable periodic orbit (corresponding to the active state) to a stable fixed point (corresponding to the rest state) and vice versa; such transitions are often associated with a region of bistability for the periodic orbit and the fixed point, but need not be. Mechanisms by which such transitions can occur repeatedly have been classified (Rinzel , Bertram et al. , Wang & Rinzel ). Behavior of the time interval between successive spikes near a transition from the active to the rest state is discussed by Guckenheimer et al. ; in this paper the presence of a subcritical Hopf-homoclinic bifurcation is also identified as a possible mechanism for the transition from an active to a rest state.

## 3 A new mechanism for bursting

### 3.1 Description of the mechanism

We now describe a bursting mechanism which involves the interaction between oscillatory modes in systems with approximate D<sub>4</sub> symmetry, where D<sub>4</sub> is the symmetry group of the square. This mechanism can lead to bursts of large dynamic range very close to the instability onset (Moehlis & Knobloch , Knobloch & Moehlis ) and is expected to be relevant in many different systems with approximate D<sub>4</sub> symmetry. This symmetry may be present for obvious or subtle reasons, as the following discussion demonstrates. Consider binary fluid convection in a system of large but finite aspect ratio. If the separation ratio is sufficiently negative the system is overstable, i.e., the primary instability is via a Hopf bifurcation. This is the case for the <sup>3</sup>He/<sup>4</sup>He mixture used by Sullivan & Ahlers in their experiment carried out in a rectangular container $`D\equiv \{x,y,z|-\frac{1}{2}\mathrm{\Gamma }\le x\le \frac{1}{2}\mathrm{\Gamma },-\frac{1}{2}\mathrm{\Gamma }_y\le y\le \frac{1}{2}\mathrm{\Gamma }_y,-\frac{1}{2}\le z\le \frac{1}{2}\}`$ with $`\mathrm{\Gamma }=34,\mathrm{\Gamma }_y=6.9`$. In this experiment Sullivan & Ahlers observed that immediately above threshold ($`ϵ\equiv (Ra-Ra_c)/Ra_c=3\times 10^{-4}`$) convective heat transport may take place in a sequence of irregular bursts of large dynamic range despite constant heat input (see figure 4). In this system the presence of sidewalls destroys translation symmetry in the $`x`$ direction which would be present if the system were unbounded, but with identical boundary conditions at the sidewalls the system retains a reflection symmetry about $`x=0`$; the primary modes are thus either even or odd with respect to this reflection (Dangelmayr & Knobloch ). Numerical simulations of the appropriate partial differential equations suggest that the bursts observed in the experiments involve the interaction between the first odd and even modes of the system (Jacqmin & Heminger ; see also Batiste et al. as described in section 3.2). Thus, to describe the dynamical behavior near threshold we suppose that the perturbation from the conduction state takes the form

$$\mathrm{\Psi }(x,y,z,t)=ϵ^{\frac{1}{2}}\mathrm{Re}\{z_+(t)f_+(x,y,z)+z_-(t)f_-(x,y,z)\}+𝒪(ϵ),$$ (1)

where $`ϵ\ll 1`$ and $`f_\pm (-x,y,z)=\pm f_\pm (x,y,z)`$. Following Landsberg & Knobloch we now derive amplitude equations describing the evolution of $`z_+`$ and $`z_-`$ using symmetry arguments. To do this, we first briefly review the topic of bifurcations in systems with symmetry (see, e.g., Golubitsky et al. and Crawford & Knobloch ).
Suppose that

$$\dot{v}=f(v,\lambda ),$$ (2)

where $`v\in R^n`$ and $`\lambda \in R^m`$ represent dependent variables and system parameters, respectively. Let $`\gamma \in G`$ describe a linear group action on the dependent variables. We say that if

$$f(\gamma v,\lambda )=\gamma f(v,\lambda )$$ (3)

for all $`\gamma \in G`$ then (2) is equivariant with respect to the group $`G`$. This is equivalent to the statement that if $`v(t)`$ is a solution to (2) then so is $`\gamma v(t)`$. For example, if a system is equivariant under left-right reflections and a right-moving wave exists as a solution, then a reflection-related left-moving wave will also exist as a solution. For the amplitudes $`z_+`$ and $`z_-`$ the requirement that a reflected state (obtained by letting $`x\to -x`$ in (1)) also be a state of the system gives the requirement that the amplitude equations be equivariant with respect to the group action

$$\kappa _1:(z_+,z_-)\to (z_+,-z_-).$$ (4)

Moreover, as argued by Landsberg & Knobloch , the equations for the formally infinite system cannot distinguish between the two modes, i.e., in this limit the amplitude equations must also be equivariant with respect to the group action

$$\kappa _2:(z_+,z_-)\to (z_-,z_+)$$ (5)

which we call an interchange symmetry. Together these two operations generate the symmetry group D<sub>4</sub>. For a container with large but finite length, this symmetry will be weakly broken; in particular, the even and odd modes typically become unstable at slightly different Rayleigh numbers and with slightly different frequencies (see section 3.2). The resulting equations are thus close to those for a 1:1 resonance, but with a special structure dictated by the proximity to D<sub>4</sub> symmetry. Finally, we may put the equations for $`z_+`$ and $`z_-`$ into normal form by performing a series of near-identity nonlinear transformations of the dependent variables so as to simplify the equations as much as possible (see, e.g., Guckenheimer & Holmes ). The normal form equations have the additional symmetry (Elphick et al. )

$$\widehat{\sigma }:(z_+,z_-)\to \mathrm{e}^{i\sigma }(z_+,z_-),\sigma \in [0,2\pi ),$$ (6)

which may be interpreted as a phase shift symmetry. Truncating the resulting equations at third order we obtain

$`\dot{z}_+`$ $`=`$ $`[\lambda +\mathrm{\Delta }\lambda +i(\omega +\mathrm{\Delta }\omega )]z_++A(|z_+|^2+|z_-|^2)z_++B|z_+|^2z_++C\overline{z}_+z_-^2,`$ (7)

$`\dot{z}_-`$ $`=`$ $`[\lambda -\mathrm{\Delta }\lambda +i(\omega -\mathrm{\Delta }\omega )]z_-+A(|z_+|^2+|z_-|^2)z_-+B|z_-|^2z_-+C\overline{z}_-z_+^2.`$ (8)

Here $`\mathrm{\Delta }\omega `$ measures the difference in frequency between the two modes at onset, and $`\mathrm{\Delta }\lambda `$ measures the difference in their linear growth rates. Under appropriate nondegeneracy conditions (which we assume here) we may neglect all interchange symmetry-breaking contributions to the nonlinear terms. In the following we consider the regime in which $`\lambda `$, $`\mathrm{\Delta }\lambda `$, and $`\mathrm{\Delta }\omega `$ are all of the same order; in the large aspect ratio binary fluid convection context this will occur when these quantities are all $`𝒪(\mathrm{\Gamma }^{-2})`$ (see section 3.2). We will see that when $`\mathrm{\Delta }\lambda `$ and/or $`\mathrm{\Delta }\omega `$ are nonzero, (7,8) have bursting solutions.
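A direct integration of (7,8) can be sketched as follows. The coefficient values below are illustrative placeholders (chosen so that one branch of the D<sub>4</sub>-symmetric problem is subcritical, in the sense discussed below); they are not fitted to any experiment:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated normal form (7,8) with illustrative coefficients.
lam, dlam, om, dom = 0.1, 0.03, 1.0, 0.02
A, B, C = 1.0 - 1.5j, -2.8 + 5.0j, -1.0 + 1.0j

def rhs(t, Y):
    zp, zm = Y[0] + 1j*Y[1], Y[2] + 1j*Y[3]
    s = abs(zp)**2 + abs(zm)**2
    dzp = ((lam + dlam + 1j*(om + dom))*zp + A*s*zp
           + B*abs(zp)**2*zp + C*np.conj(zp)*zm**2)
    dzm = ((lam - dlam + 1j*(om - dom))*zm + A*s*zm
           + B*abs(zm)**2*zm + C*np.conj(zm)*zp**2)
    return [dzp.real, dzp.imag, dzm.real, dzm.imag]

# If a trajectory escapes towards infinity in finite time the integrator
# simply stops early; sol.t[-1] then marks the excursion.
sol = solve_ivp(rhs, (0.0, 500.0), [0.01, 0.0, 0.012, 0.005],
                rtol=1e-9, atol=1e-12)
rr = sol.y[0]**2 + sol.y[1]**2 + sol.y[2]**2 + sol.y[3]**2
print("integrated to t =", sol.t[-1])
print("dynamic range of r = |z+|^2 + |z-|^2:", rr.max() / rr.min())
```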
Thus, the bursting mechanism may be viewed as an interaction between spontaneous symmetry breaking (in which the trivial conduction state loses stability to a convecting state with less symmetry) and forced symmetry breaking (in which the presence of sidewalls makes $`\mathrm{\Delta }\lambda `$ and/or $`\mathrm{\Delta }\omega `$ nonzero, thereby breaking the D<sub>4</sub> symmetry). The introduction of small symmetry-breaking terms is also responsible for the possibility of complex dynamics in other systems that would otherwise behave in a regular manner (Dangelmayr & Knobloch , Lauterbach & Roberts , Knobloch , Hirschberg & Knobloch ). To identify the bursts we introduce the change of variables

$$z_\pm =\rho ^{-\frac{1}{2}}\mathrm{sin}\left(\frac{\theta }{2}+\frac{\pi }{4}\pm \frac{\pi }{4}\right)e^{i(\pm \varphi +\psi )/2}$$

and a new time-like variable $`\tau `$ defined by $`d\tau /dt=\rho ^{-1}`$. In terms of these variables (7,8) become

$`{\displaystyle \frac{d\rho }{d\tau }}`$ $`=`$ $`-\rho [2A_R+B_R(1+\mathrm{cos}^2\theta )+C_R\mathrm{sin}^2\theta \mathrm{cos}2\varphi ]-2(\lambda +\mathrm{\Delta }\lambda \mathrm{cos}\theta )\rho ^2,`$ (9)

$`{\displaystyle \frac{d\theta }{d\tau }}`$ $`=`$ $`\mathrm{sin}\theta [\mathrm{cos}\theta (-B_R+C_R\mathrm{cos}2\varphi )-C_I\mathrm{sin}2\varphi ]-2\mathrm{\Delta }\lambda \rho \mathrm{sin}\theta ,`$ (10)

$`{\displaystyle \frac{d\varphi }{d\tau }}`$ $`=`$ $`\mathrm{cos}\theta (B_I-C_I\mathrm{cos}2\varphi )-C_R\mathrm{sin}2\varphi +2\mathrm{\Delta }\omega \rho ,`$ (11)

where $`A=A_R+iA_I`$, etc. There is also a decoupled equation for $`\psi (t)`$ so that fixed points and periodic solutions of equations (9–11) correspond, respectively, to periodic solutions and two-tori in equations (7,8). In the following we measure the amplitude of the disturbance by $`r\equiv |z_+|^2+|z_-|^2=\rho ^{-1}`$; thus $`\rho =0`$ corresponds to infinite amplitude states. Equations (9–11) show that the restriction to the invariant subspace $`\mathrm{\Sigma }\equiv \{\rho =0\}`$ is equivalent to taking $`\mathrm{\Delta }\lambda =\mathrm{\Delta }\omega =0`$ in (10,11). The resulting D<sub>4</sub>-symmetric problem has three generic types of fixed points (Swift ):

- $`u`$ solutions with $`\mathrm{cos}\theta =0,\mathrm{cos}2\varphi =-1`$
- $`v`$ solutions with $`\mathrm{cos}\theta =0,\mathrm{cos}2\varphi =1`$
- $`w`$ solutions with $`\mathrm{sin}\theta =0`$.

In the binary fluid context the $`u`$, $`v`$ and $`w`$ solutions represent mixed parity traveling wave states localized near one of the container walls, mixed parity chevron (or counterpropagating) states, and pure even ($`\theta =0`$) or odd ($`\theta =\pi `$) parity chevron states, respectively (Landsberg & Knobloch ). Such states are shown in figure 5 using the approximate eigenfunctions

$$f_\pm (x)=\left\{e^{\gamma x+ix}\pm e^{-\gamma x-ix}\right\}\mathrm{cos}\frac{\pi x}{L},$$ (12)

where $`\gamma =0.15+0.025i`$, $`L=80`$ and $`-\frac{L}{2}\le x\le \frac{L}{2}`$. Depending on the coefficients $`A`$, $`B`$ and $`C`$ the subspace $`\mathrm{\Sigma }`$ may contain additional fixed points and/or limit cycles (Swift ). In our scenario, a burst occurs for $`\lambda >0`$ when a trajectory follows the stable manifold of a fixed point (or a limit cycle) $`P_1\in \mathrm{\Sigma }`$ that is unstable within $`\mathrm{\Sigma }`$. The instability within $`\mathrm{\Sigma }`$ then kicks the trajectory towards another fixed point (or limit cycle) $`P_2\in \mathrm{\Sigma }`$.
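As a consistency check of the reduced equations (9–11) as reconstructed here, the sketch below integrates them (rewritten in the original time $`t`$, using $`d/dt=\rho ^{-1}d/d\tau `$) alongside (7,8) and compares $`r(t)`$ with $`1/\rho (t)`$; the coefficients are the same placeholders as in the previous sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, dlam, om, dom = 0.1, 0.03, 1.0, 0.02
A, B, C = 1.0 - 1.5j, -2.8 + 5.0j, -1.0 + 1.0j
AR, BR, BI, CR, CI = A.real, B.real, B.imag, C.real, C.imag

def full(t, Y):
    zp, zm = Y[0] + 1j*Y[1], Y[2] + 1j*Y[3]
    s = abs(zp)**2 + abs(zm)**2
    dzp = (lam + dlam + 1j*(om + dom))*zp + A*s*zp + B*abs(zp)**2*zp + C*np.conj(zp)*zm**2
    dzm = (lam - dlam + 1j*(om - dom))*zm + A*s*zm + B*abs(zm)**2*zm + C*np.conj(zm)*zp**2
    return [dzp.real, dzp.imag, dzm.real, dzm.imag]

def reduced(t, Y):
    # Equations (9)-(11), converted from tau to t via d/dt = (1/rho) d/dtau.
    rho, th, ph = Y
    r = 1.0 / rho
    drho = -(2*AR + BR*(1 + np.cos(th)**2) + CR*np.sin(th)**2*np.cos(2*ph)) \
           - 2*(lam + dlam*np.cos(th))*rho
    dth = r*np.sin(th)*(np.cos(th)*(-BR + CR*np.cos(2*ph)) - CI*np.sin(2*ph)) \
          - 2*dlam*np.sin(th)
    dph = r*(np.cos(th)*(BI - CI*np.cos(2*ph)) - CR*np.sin(2*ph)) + 2*dom
    return [drho, dth, dph]

zp0, zm0 = 0.05 + 0.0j, 0.04 + 0.01j
r0 = abs(zp0)**2 + abs(zm0)**2
y0 = [1.0/r0, np.arccos((abs(zp0)**2 - abs(zm0)**2)/r0), np.angle(zp0) - np.angle(zm0)]

s1 = solve_ivp(full, (0, 40.0), [zp0.real, zp0.imag, zm0.real, zm0.imag],
               rtol=1e-11, atol=1e-13, dense_output=True)
s2 = solve_ivp(reduced, (0, 40.0), y0, rtol=1e-11, atol=1e-13, dense_output=True)
ts = np.linspace(0, min(s1.t[-1], s2.t[-1]), 200)
r_full = np.sum(s1.sol(ts)**2, axis=0)
r_red = 1.0 / s2.sol(ts)[0]
print("max relative mismatch:", np.max(np.abs(r_full - r_red)/r_full))
```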
If $`P_2`$ has an unstable $`\rho `$ eigenvalue the trajectory escapes from $`\mathrm{\Sigma }`$ towards a finite amplitude ($`\rho >0`$) state, forming a burst. If $`\mathrm{\Delta }\lambda `$ and/or $`\mathrm{\Delta }\omega \ne 0`$ this state may itself be unstable to perturbations of type $`P_1`$ and the process then repeats. This bursting behavior is thus associated with a codimension one heteroclinic cycle between the infinite amplitude solutions $`P_1`$ and $`P_2`$ (Moehlis & Knobloch , Knobloch & Moehlis ). Examples of such cycles are shown in figure 6. Since in such cycles the trajectory reaches infinity in finite time the heteroclinic cycle actually describes bursts of finite duration (Moehlis & Knobloch ). For such a heteroclinic cycle to form it is required that at least one of the branches in the D<sub>4</sub>-symmetric system be subcritical ($`P_1`$) and one supercritical ($`P_2`$). Based on the <sup>3</sup>He/<sup>4</sup>He experiments, we focus on parameter values for which the $`u`$ solutions are subcritical and the $`v`$, $`w`$ solutions supercritical when $`\mathrm{\Delta }\lambda =\mathrm{\Delta }\omega =0`$ (Moehlis & Knobloch ). When $`\mathrm{\Delta }\lambda `$ and/or $`\mathrm{\Delta }\omega \ne 0`$, two types of oscillations in $`(\theta ,\varphi )`$ are possible:

- rotations (see figure 7)
- librations (see figure 8).

For $`\lambda >0`$ these give rise, under appropriate conditions, to sequences of large amplitude bursts arising from repeated excursions towards the infinite amplitude ($`\rho =0`$) $`u`$ solutions. Irregular bursts are also readily generated: figure 9 shows bursts arising from chaotic rotations. Figure 10 provides a partial summary of the different solutions of (9–11) and their stability properties; much of the complexity revealed in these figures is due to the Shil’nikov-like properties of the heteroclinic cycle (Moehlis & Knobloch , Knobloch & Moehlis ). We now focus on the physical manifestation of the bursts. In figure 11 we show the solutions of figures 7 and 8 in the form of space-time plots using the approximate eigenfunctions (12). The bursts in figure 11(a) are generated as a result of successive visits to different (but symmetry-related) infinite amplitude $`u`$ solutions, cf. figure 7; in figure 11(b) the generating trajectory makes repeated visits to the same infinite amplitude $`u`$ solution, cf. figure 8. The former state is typical of the blinking state identified in binary fluid and doubly diffusive convection in rectangular containers (Kolodner et al. , Steinberg et al. , Predtechensky et al. ). It is likely that the irregular bursts reported in Sullivan & Ahlers are due to such a state. The latter is a new state which we call a winking state; winking states may be stable but often coexist with stable chevron-like states which are more likely to be observed in experiments in which the Rayleigh number is ramped upwards (cf. figure 10). For other values of $`\mathrm{\Delta }\lambda `$ and $`\mathrm{\Delta }\omega `$ it is also possible to find stable chaotic winking states and states which are neither purely blinking nor purely winking (see figure 12). The bursts described above are the result of oscillations in amplitude between two modes of opposite parity and “frozen” spatial structure. Consequently the above burst mechanism applies in systems in which bursts occur very close to threshold.
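As an aside, the parity and wall localization of the approximate eigenfunctions (12) used in these space-time plots can be verified directly; the sampling grid and probe points below are arbitrary choices:

```python
import numpy as np

# Check of the approximate eigenfunctions (12):
# f_pm(x) = (exp(gamma x + i x) +/- exp(-gamma x - i x)) cos(pi x / L),
# which should satisfy f_pm(-x) = +/- f_pm(x) and peak near the walls.
gamma, L = 0.15 + 0.025j, 80.0
x = np.linspace(-L/2, L/2, 1601)

def f(x, parity):
    return (np.exp(gamma*x + 1j*x) + parity*np.exp(-gamma*x - 1j*x)) \
        * np.cos(np.pi*x/L)

for parity, name in ((+1, "even"), (-1, "odd")):
    err = np.max(np.abs(f(-x, parity) - parity*f(x, parity)))
    print(name, "parity residual:", err)

# Envelope: small at the center, growing towards the sidewalls.
for xv in (0.0, 0.25*L, 0.45*L):
    print(f"|f_+({xv:5.1f})| =", abs(f(np.array([xv]), +1))[0])
```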
Bursts this close to threshold occur not only in the convection experiments already mentioned but also in the mathematically identical (counterrotating) Taylor-Couette system where counterpropagating spiral vortices play the same role as traveling waves in convection (Andereck et al. , Pierce & Knobloch ). In slender systems, such as the convection system described above or a long Taylor-Couette apparatus, a large aspect ratio $`\mathrm{\Gamma }`$ is required for the presence of the approximate D<sub>4</sub> symmetry. If the size of the D<sub>4</sub> symmetry-breaking terms $`\mathrm{\Delta }\lambda `$, $`\mathrm{\Delta }\omega `$ is increased too much the bursts fade away and are replaced by smaller amplitude, higher frequency states (see figure 13). Indeed, if $`\mathrm{\Delta }\omega \gg \mathrm{\Delta }\lambda `$, averaging eliminates the $`C`$ terms responsible for the bursts (Landsberg & Knobloch ). From these considerations, we conclude that bursts will not be present if $`\mathrm{\Gamma }`$ is too small or $`ϵ`$ too large. However, the mechanism is quite robust and even for $`\mathrm{\Delta }\omega \gg \mathrm{\Delta }\lambda `$ it may still be possible to choose $`\lambda `$ values so that bursts of large dynamic range occur (Moehlis & Knobloch ). It is possible that the burst amplitude can become large enough for secondary instabilities not captured by the Ansatz (1) to be triggered. Such instabilities could occur on very different scales and result in turbulent rather than just large amplitude bursts. However, it should be emphasized that the physical amplitude of the bursts is $`𝒪(ϵ^{\frac{1}{2}})`$ and so approaches zero as $`ϵ\to 0`$, cf. eq. (1). Thus despite their large dynamic range (cf. figure 14), the bursts are fully and correctly described by the asymptotic expansion that leads to (7,8). In particular, as shown in Moehlis & Knobloch , the mechanism is robust with respect to the addition of small fifth order terms. However, the effects of including D<sub>4</sub>-symmetry-breaking terms in the cubic terms in (7,8) have not been analyzed; these terms can dominate the symmetry-breaking terms retained in the linear terms when $`\rho `$ is small.

### 3.2 Applicability to binary mixtures

In view of the motivation for studying systems with approximate D<sub>4</sub> symmetry described above, it is of interest to examine carefully the properties of the linear stability problem for binary fluid convection in finite containers. In the Boussinesq approximation this system is described by the nondimensionalized equations (Clune & Knobloch )

$`\partial _t𝐮+(𝐮\cdot \nabla )𝐮`$ $`=`$ $`-\nabla P+\sigma R[\theta (1+S)-S\eta ]\widehat{𝐳}+\sigma \nabla ^2𝐮,`$ (13)

$`\partial _t\theta +(𝐮\cdot \nabla )\theta `$ $`=`$ $`w+\nabla ^2\theta ,`$ (14)

$`\partial _t\eta +(𝐮\cdot \nabla )\eta `$ $`=`$ $`\tau \nabla ^2\eta +\nabla ^2\theta ,`$ (15)

together with the incompressibility condition

$$\nabla \cdot 𝐮=0.$$ (16)

Here $`𝐮\equiv (u,w)`$ is the velocity field in $`(x,z)`$ coordinates, $`P`$, $`\theta `$ and $`C`$ denote the departures of the pressure, temperature and concentration fields from their conduction profiles, and $`\eta \equiv \theta -C`$. These equations are to be solved in the rectangular domain $`D\equiv \{x,z|-\frac{1}{2}\mathrm{\Gamma }\le x\le \frac{1}{2}\mathrm{\Gamma },-\frac{1}{2}\le z\le \frac{1}{2}\}`$. The system is specified by four dimensionless parameters in addition to the aspect ratio $`\mathrm{\Gamma }`$: the separation ratio $`S`$, the Prandtl and Lewis numbers $`\sigma `$, $`\tau `$, and the Rayleigh number $`R`$.
The boundary conditions appropriate to the experiments are no-slip everywhere, with the temperature fixed at the top and bottom and no sideways heat flux. The final set of boundary conditions is provided by the requirement that there is no mass flux through any of the boundaries. The boundary conditions are thus

$$𝐮=𝐧\cdot \nabla \eta =0\text{ on }\partial D,$$ (17)

$$\theta =0\text{ at }z=\pm 1/2,\partial _x\theta =0\text{ at }x=\pm \mathrm{\Gamma }/2.$$ (18)

Here $`\partial D`$ denotes the boundary of $`D`$. Figure 15 shows the results of solving the linear problem describing the stability properties of the conduction state $`𝐮=\theta =\eta =0`$ for parameter values used by Sullivan & Ahlers in their <sup>3</sup>He/<sup>4</sup>He experiment: $`\sigma =0.6`$, $`\tau =0.03`$, $`\mathrm{\Gamma }=34.0`$. The figure shows the neutral stability curves and corresponding frequencies for the first four modes in the range $`33.0\le \mathrm{\Gamma }\le 35.0`$ for $`S=-0.001`$ and $`S=-0.021`$. Observe that when $`|S|`$ is sufficiently small the first two families of neutral curves are separated by a gap that is much larger than the amplitude of the “braids” within each family (figure 15(a)). This is typical of what happens in Rayleigh-Bénard convection with non-Neumann boundary conditions (Hirschberg & Knobloch ) and makes it easy to justify projecting the fluid equations onto the first two modes that become unstable. However, the situation is not so simple. This is because in the case of overstability this behavior does not persist for all $`\mathrm{\Gamma }`$ or all values of $`|S|`$. For larger values of these parameters the results take instead the form shown in figures 15(c,d) which show the linear stability results for $`S=-0.021`$ and the same range of values of $`\mathrm{\Gamma }`$ as figures 15(a,b). The modes from the different families now cross and the first unstable mode belongs to successively higher and higher families when extrapolated to small $`|S|`$ (Batiste et al. ). Figure 15(c) shows the crossing of two even modes involving a nonresonant double Hopf bifurcation (figure 15(d)). As discussed in detail in Batiste et al. the transition between these two types of behavior is mediated by a resonant 1:1 mode crossing at a somewhat smaller value of $`|S|`$. The experimental value of the separation ratio from Sullivan & Ahlers , $`S=-0.021`$, therefore corresponds to the “crossing” case and the projection of the equations onto two modes cannot be rigorously justified except in the neighborhood of mode crossing points. We denote the growth rates and frequencies of the modes $`z_\pm `$ by $`\lambda _\pm `$ and $`\omega _\pm `$. For large aspect ratios, the mode frequencies must go like $`\omega _\pm \sim \omega _{\mathrm{\infty }}+c_{1\pm }\mathrm{\Gamma }^{-1}+c_{2\pm }\mathrm{\Gamma }^{-2}+\mathrm{}`$. The fact that the frequency curves in figure 15(d) are essentially parallel “straight lines” implies that $`c_{1+}\approx c_{1-}`$. Therefore, $`\mathrm{\Delta }\omega \equiv (\omega _+-\omega _-)/2=𝒪(\mathrm{\Gamma }^{-2})`$ for large $`\mathrm{\Gamma }`$ (Batiste et al. ). Moreover, as argued in Landsberg & Knobloch , the parabolic minimum of the neutral stability curve leads to the expectation that $`\mathrm{\Delta }\lambda =𝒪(\mathrm{\Gamma }^{-2})`$. Thus, in the range $`\lambda =𝒪(\mathrm{\Gamma }^{-2})`$, $`\lambda `$, $`\mathrm{\Delta }\lambda `$, and $`\mathrm{\Delta }\omega `$ are all of the same order as $`\mathrm{\Gamma }\to \mathrm{\infty }`$, as required for the applicability of equations (7, 8).
Of course, close enough to the mode crossing point $`\mathrm{\Delta }\lambda \ll \mathrm{\Delta }\omega `$, and in this region averaging methods can be used to eliminate the $`(\overline{z}_+z_-^2,\overline{z}_-z_+^2)`$ terms from the mode interaction equations (Landsberg & Knobloch ). However, for typical values of $`\mathrm{\Delta }\lambda `$ it appears likely that the system is correctly described by equations (7, 8), as hypothesized in Landsberg & Knobloch and Moehlis & Knobloch . The first odd and even temperature eigenfunctions for $`S=-0.021`$ are shown in figure 16 in the form of a space-time diagram, with time increasing upward. As in the approximate expression (12) the eigenfunction consists of waves propagating outwards from the center of the container. The eigenfunction amplitude has a local minimum at the center and increases outwards, peaking near the sidewalls. This type of eigenfunction was anticipated by Cross and is characteristic of eigenfunctions in systems with positive group velocity (although, strictly speaking, in a finite system one cannot define a group velocity since the allowed wave number is quantized by the sidewalls as well as being nonuniform). However, for the present purposes the most important observation is that for aspect ratios as large as this, the odd and even eigenfunctions are essentially indistinguishable, as hypothesized by Landsberg & Knobloch .

### 3.3 Other systems with approximate D<sub>4</sub> symmetry

There are a number of other systems of interest where an approximate D<sub>4</sub> symmetry arises in a natural way and the bursting mechanism described in section 3.1 may be relevant. These include overstable convection in small aspect ratio containers with nearly square cross-section (Armbruster ) and more generally any partial differential equation on a nearly square domain describing the evolution of an oscillatory instability, cf. Ashwin & Mei . Other systems in which our bursting mechanism might be detected are electrohydrodynamic convection in liquid crystals (Silber et al. ; T. Peacock, private communication), lasers (Feng et al. ), spring-supported fluid-conveying tubes (Steindl & Troger ), and dynamo theories of magnetic field generation in the Sun (Knobloch & Landsberg , Knobloch et al. ). Perhaps more interesting is the possibility that large scale spatial modulation due to distant walls may produce bursting in a fully nonlinear state with D<sub>4</sub> symmetry undergoing a symmetry-breaking Hopf bifurcation. As an example we envisage a steady pattern of fully nonlinear two-dimensional rolls. With periodic boundary conditions with period four times the basic roll period, the roll pattern has D<sub>4</sub> symmetry since the pattern is preserved under spatial translations by 1/4 period and a reflection. If such a pattern undergoes a secondary Hopf bifurcation with a spatial Floquet multiplier $`\mathrm{exp}(i\pi /2)`$, the Hopf bifurcation breaks D<sub>4</sub> symmetry. If the invariance of the basic pattern under translations by 1/4 period is only approximate (this would be the case if the roll amplitude varied on a slow spatial scale), the D<sub>4</sub> symmetry itself would be weakly broken and the new mechanism described above could operate. Also of interest is the Faraday system in a nearly square container. In this system gravity-capillary waves are excited on the surface of a viscous fluid by vertical vibration of the container, usually as a result of a subharmonic resonance.
Simonelli & Gollub studied the effect of changing the shape of the container from a square to a slightly rectangular container, focusing on the $`(3,2)`$, $`(2,3)`$ interaction in this system. These modes are degenerate in a square container and only pure and mixed modes were found in this case. In a slightly rectangular container the degeneracy between these modes is broken, however, and in this case a region of quasiperiodic and chaotic behavior was present near onset. When these oscillations first appear they take the form of relaxation oscillations in which the surface of the fluid remains flat for a long time before a “large wave grows, reaches a maximum, and decays, all in a time short compared with the period”. The duration of the spikes is practically independent of the forcing amplitude, while the interspike period appears to diverge as the forcing amplitude decreases. The spikes themselves possess the characteristic asymmetry seen in figures 7 and 8. This behavior occurs when the forcing frequency lies below the resonance frequency of the square container, i.e., precisely when the D<sub>4</sub>-symmetric problem has a subcritical branch. Irregular bursts are also found, depending on parameters, but these are distinct from the chaotic states found by Nagata far from threshold and present even in a square container.

## 4 Discussion

In this article we have seen that there are many different mechanisms responsible for bursting in hydrodynamical systems. The table below summarizes the different mechanisms described in terms of properties that are most relevant to hydrodynamics. Thus no single mechanism can be expected to provide a universal explanation for all observations. The bursts found experimentally in Taylor-Couette flow (cf. section 2.5) and large aspect ratio binary fluid convection (cf. section 3.1) occur very close to the threshold of a primary instability and thus have the greatest potential for a successful dynamical systems interpretation of the type emphasized here. We have seen, however, that even for fully developed turbulent boundary layers at very large Reynolds numbers a dynamical systems approach can be profitable (cf. section 2.1). Although nearly all of the mechanisms we have described rely on the presence of global bifurcations, there are important differences among them. For example, the bursts in the wall region of a turbulent boundary layer described in section 2.1 are due to a (structurally stable) heteroclinic cycle connecting fixed points with finite amplitude; such a cycle leads to bursts with a limited dynamic range. In contrast, in the mechanism of section 3.1 the dynamic range is unlimited. Moreover, the role of the fixed points is different: in the former the bursts are associated with the excursions between the fixed points while in the latter the bursts are associated with the fixed points. Because of the asymptotic stability of the cycle the time between successive bursts in the turbulent boundary layer will increase without bound unless the stochastic pressure term is included; such a stochastic term is not required in the mechanism of section 3.1. In particular, in this mechanism the duration of the bursts remains finite despite the fact that they are associated with a heteroclinic connection. This is because of the faster than exponential escape to “infinity” that is typical of this mechanism. The same is true of the mechanism described in section 2.3.
Both of these mechanisms involve global connections to infinity and hence are capable of describing bursts of arbitrarily large dynamic range. The models of the subcritical transition to turbulence and various types of intermittency also produce bursts of finite duration but rely on global reinjection, which produces bursts of bounded amplitude.

This work was supported by NSF under grant DMS-9703684 and by NASA under grant NAG3-2152.
no-problem/9909/astro-ph9909447.html
ar5iv
text
# Exploring Star Clusters Using Strömgren 𝑢⁢𝑣⁢𝑏⁢𝑦 Photometry<sup>1</sup>

<sup>1</sup>Based on data collected at the European Southern Observatory (La Silla, Chile) and the Nordic Optical Telescope, Spain

## 1 Introduction

Strömgren photometry offers distinct advantages over broad band photometry in the study of field stars and star clusters. In particular the most important features are that it can provide estimates of effective temperature, heavy element abundance and surface gravity for individual stars. Our present contribution will focus on the new insights provided by including $`u`$ band (centered at 3500Å) data for turnoff and RGB stars in open and globular clusters. For a discussion of $`u`$ band photometry of horizontal branch (HB) stars in GCs see Grundahl et al. (1999). We shall illustrate that, for the study of globular and open clusters, the $`c_1`$ index offers the possibility of determining ages independent of distance and nearly independent of reddening. Furthermore, there is strong evidence that the variations in the $`c_1`$ index (defined as $`c_1=(u-v)-(v-b)`$; reddening corrected: $`c_0=c_1-0.2E(b-y)`$) observed in RGB stars in M13 (Grundahl, VandenBerg & Andersen 1998) are due to star-to-star abundance variations in N and probably also C.

## 2 Observations and Data reduction

All observations for this project were obtained at the 2.56m Nordic Optical Telescope and the Danish 1.54m telescope on La Silla using the $`uvby`$ filter sets available there. In most clusters we obtained samples of 10000–30000 stars. Calibrating stars from the lists of Schuster & Nissen (1988) and Olsen (1983, 1984) were observed on several nights during our observing runs. The photometry for the clusters was done using the suite of programs (DAOPHOT/ALLSTAR/ALLFRAME/DAOGROW) developed by PBS (see Stetson 1987, 1990, 1994 for details).

## 3 Illustration of the age dependence of $`c_1`$

As part of our program we have observed several old open clusters, in order to study their color-magnitude diagrams (CMDs) and eventually to use them as photometric standard fields. Two of these clusters, NGC 2506 and NGC 2243, are known to have a metallicity significantly lower than solar, approaching that of 47 Tuc. From previous studies of their CMDs it is well known that they span a large range in age, and thus we can illustrate how the cluster loci change in a $`(v-y,c_1)`$ diagram (all three clusters have nearly the same reddening, and thus we make no corrections for reddening). In Fig. 1a-1c we show our calibrated $`(v-y,V)`$ CMDs for NGC 2506, NGC 2243 and 47 Tuc. Note how prominent the sequence of possible equal mass binaries is in both NGC 2506 and NGC 2243. (The data for these three clusters were obtained during the same observing run, thus differences due to variations in filter/detector combinations are ruled out). Figure 1d clearly shows how the value of $`c_1`$ at the cluster turnoff (at constant metallicity) increases with increasing cluster age. There are some obvious advantages to using $`uvby`$ photometry for estimating ages of clusters in this way: the method 1) is independent of the cluster distance; 2) does not rely on HB morphology or number of HB stars; 3) is nearly independent of reddening. Assuming ages of approximately 2.5, 5 and 12 Gyr for NGC 2506, NGC 2243 and 47 Tuc, we see that the rate of change in $`c_1`$ decreases with increasing age – for fairly metal-rich populations this method is best suited for ages smaller than $`\sim `$10 Gyr.
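A small helper implementing the index definitions quoted in the introduction is sketched below; the magnitudes and reddening value are invented numbers, for illustration only:

```python
# Stromgren indices: c1 = (u - v) - (v - b), c0 = c1 - 0.2 E(b-y).

def c1_index(u, v, b):
    return (u - v) - (v - b)

def c0_index(u, v, b, E_by):
    return c1_index(u, v, b) - 0.2 * E_by

# Hypothetical uvby magnitudes of a turnoff star plus a reddening value:
u_mag, v_mag, b_mag, y_mag = 17.85, 16.70, 16.05, 15.60
E_by = 0.03
print("c1  =", round(c1_index(u_mag, v_mag, b_mag), 3))
print("c0  =", round(c0_index(u_mag, v_mag, b_mag, E_by), 3))
print("v-y =", round(v_mag - y_mag, 3))
```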
At lower metallicities the sensitivity to age of $`c_1`$ increases. For an application of this method to determine the absolute ages of M92 and M13, please refer to Grundahl et al. (1999, in preparation).

## 4 Abundance variations in all globular clusters?

It has been known for many years that several GCs exhibit star–to–star variations in their elemental abundances, for both RGB and main sequence stars. In particular, NGC 6752, 47 Tuc (Cannon et al. 1998) and recently M71 (Cohen 1999a) have been studied in detail. These clusters show evidence for a bimodality in their CN abundance on the RGB and main sequence. Spurred by our finding (Grundahl et al. 1998) that M13 shows large star–to–star variations in the $`c_1`$ index for lower RGB stars, we examined our photometry for other GCs, and surprisingly found that all of the observed clusters with \[Fe/H\]$`<-1.1`$ exhibit the $`c_1`$ variations (Fig. 2). For the more metal-rich GCs in our sample, M71 and 47 Tuc (Fig. 3), we do not find a large spread among the giants (probably just due to small sample size) but very significant variations on the upper two magnitudes of the main sequence; for fainter stars we cannot distinguish between photometric scatter and real star–to–star variations. In M71 (not shown here) the definition of the total range in $`c_1`$ is slightly problematic due to field star contamination – there is, however, no question that most of the observed scatter is real. Figure 2 shows how $`c_1`$ varies with the luminosity of RGB stars in our observed clusters. For each cluster we have marked the location of the RGB bump as measured from the RGB luminosity function (excluding AGB stars). In NGC 288 the “bump” is not detected, and we have simply estimated its position from comparison with NGC 362 and NGC 1851 which have a similar metallicity. For all clusters (except M71 and 47 Tuc, to be discussed below) the scatter in $`c_1`$ remains constant as a function of luminosity at fixed metallicity, and as the luminosity increases above the RGB bump the width of the $`c_1`$ scatter decreases. The most obvious question to ask is: what is causing these observed $`c_1`$ variations? In this respect we first notice that the effect exists for all the filter/detector combinations we have used. Second, all the RGB stars are so bright (and in most cases have many measurements in each filter) that the estimated photometric errors for most stars are reliable and less than 0.01 magnitudes. Thus the observed scatter on the lower RGB is of order 10$`\sigma `$. We therefore consider the scatter to be real and highly significant. It is also evident from our data that the scatter among the RGB stars is only significant when color indices using the $`u`$ band are constructed. Thus the most obvious candidate for causing these variations is the ultraviolet NH band at 3360Å. If this is indeed the case we would expect that there should be a correlation between the measured CN strength and $`c_1`$ scatter. Before turning to further empirical evidence that variations in the NH strength are causing the $`c_1`$ variations we have calculated Strömgren indices for a “CN strong” and a “CN weak” star and overplotted these on our observations of M13 and 47 Tuc (Fig. 3). Values for the gravity and temperature were taken from a 16 Gyr oxygen-enhanced isochrone, similar to that employed in the study of 47 Tuc by Hesser et al. (1987).
As can be seen the models reproduce well the observed width, although there is some indication that the actual spread in 47 Tuc may be larger than assumed in the models. The synthetic spectra for RGB and MS stars indicate that the cause for the spread in $`c_1`$ is indeed the variations in NH strength. To further test this hypothesis we have attempted to match GC stars with spectroscopically measured CN or NH strength with our photometry. However, since most of our fields are located close to the cluster centers we only have a small amount of overlap. In M92 it is known from the study of Carbon et al. (1982) that there are significant (factor 10) variations in the abundance of N among faint RGB stars. We only have 3 stars with measured NH strength in common with their study, of which one is an AGB star and another is a bright giant in the region of very little (if any) $`c_1`$ scatter. The last star, however, is classified by Carbon et al. as NH strong, and does indeed lie at the upper envelope (as expected from the models) of our $`c_1`$ distribution. In the case of 47 Tuc, we have approximately 38 stars in common with the study of Cannon et al. (1998), of which they classify 19 as CN strong and 19 as CN weak. In Fig. 3, we have overplotted these 38 stars and there is clearly a separation in $`c_1`$ of the two groups as expected if the $`c_1`$ scatter is related to N (and probably also C) variations. In the case of M13, for which we have the highest quality photometry, there is some evidence that the scatter is present at $`V=18`$, only 0.5 magnitudes brighter than the cluster turnoff. Cohen (1999b) did not report any star–to–star variation in the CN strength among turnoff stars in M13. We suspect that this is because these stars are so hot that most of their CN molecules are destroyed. If this is true, observations of NH are needed for the detection of abundance variations in these fairly hot stars. We therefore suggest that all globular clusters have variations in N and probably also C in stars as faint as the base of the RGB. Whether these variations are “primordial”, due to cluster self–enrichment or some as yet unknown mixing process occurring in fairly unevolved giants or subgiants remains to be seen.

## 5 Conclusions

We have demonstrated how the use of Strömgren photometry allows the determination of distance-independent ages for both open and globular clusters. Especially for clusters younger than $`\sim `$10 Gyr the variation in $`c_1`$ with age is fairly large, making it relatively easy to determine for old open clusters. It has also been argued that most, if not all, globular clusters exhibit evidence for star–to–star variations in at least N (and probably C) – the cause for these abundance variations is not yet clear. At luminosities higher than the RGB bump we find evidence that the scatter in $`c_1`$ decreases, possibly due to the onset of deep mixing which will dredge up N.

## Acknowledgements

Russell Cannon is thanked for providing a machine readable version of their spectroscopic results for 47 Tuc.

## References

> Cannon, R.D., Croke, B.F.W., Bell, R.A., Hesser, J.E., Stathakis, R.A., 1998, MNRAS, 298, 601
>
> Carbon, D.F., Langer, G.E., Butler, D., Kraft, R.P., Suntzeff, N.B., Kemper, E., Trefzger, C.F., Romanishin, W., 1982, ApJS, 49, 207
>
> Cohen, J., 1999a, AJ, 117, 2434
>
> Cohen, J., 1999b, AJ, 117, 2428
>
> Grundahl, F., Catelan, M., Landsman, W., Stetson, P.B., Andersen, M.I., 1999, ApJ, Oct. 10 issue, In press.
> Grundahl, F., VandenBerg, D.A., Andersen, M.I., 1998, ApJL, 500, 179
>
> Hesser, J.E., Harris, W.E., VandenBerg, D.A., Allwright, J.W.B., Schott, P., Stetson, P.B., 1987, PASP, 99, 739
>
> Olsen, E.H., 1983, A&AS, 54, 55
>
> Olsen, E.H., 1984, A&AS, 57, 443
>
> Schuster, W., Nissen, P.E., 1988, A&AS, 73, 225
>
> Stetson, P.B., 1987, PASP, 99, 191
>
> Stetson, P.B., 1990, PASP, 102, 932
>
> Stetson, P.B., 1994, PASP, 106, 250
no-problem/9909/hep-ph9909564.html
ar5iv
text
# Model-independent information on 𝛾 from 𝐵^±→𝜋⁢𝐾 Decays

## 1 INTRODUCTION

The main objective of the $`B`$ factories is to explore in detail the physics of CP violation, to determine many of the flavor parameters of the electroweak theory, and to probe for possible effects of physics beyond the Standard Model. This will test the Cabibbo–Kobayashi–Maskawa (CKM) mechanism, which predicts that all CP violation results from a single complex phase in the quark mixing matrix. Following the announcement of evidence for a CP asymmetry in the decays $`B\to J/\psi K_S`$ by the CDF Collaboration , the confirmation of direct CP violation in $`K\to \pi \pi `$ decays by the KTeV and NA48 Collaborations , and the successful start of the asymmetric $`B`$ factories at SLAC and KEK, the year 1999 has been an important step in achieving this goal. The precise determination of the sides and angles of the “unitarity triangle” $`V_{ub}^{*}V_{ud}+V_{cb}^{*}V_{cd}+V_{tb}^{*}V_{td}=0`$ plays a central role in the $`B`$-factory program . With the standard phase conventions for the CKM matrix, only the two smallest elements in this relation, $`V_{ub}^{*}`$ and $`V_{td}`$, have nonvanishing imaginary parts (to an excellent approximation). In the Standard Model the angle $`\beta =-\text{arg}(V_{td})`$ can be determined in a theoretically clean way by measuring the time-dependent, mixing-induced CP asymmetry in the decays $`B,\overline{B}\to J/\psi K_S`$. The preliminary CDF result implies $`\mathrm{sin}2\beta =0.79_{-0.44}^{+0.41}`$ . The angle $`\gamma =\text{arg}(V_{ub}^{*})`$, or equivalently the combination $`\alpha =180^{\circ }-\beta -\gamma `$, is much harder to determine . Recently, there has been significant progress in the theoretical understanding of the hadronic decays $`B\to \pi K`$, and methods have been developed to extract information on $`\gamma `$ from rate measurements for these processes. Here we discuss the charged modes $`B^\pm \to \pi K`$, which are particularly clean from a theoretical perspective . For applications involving the neutral decay modes the reader is referred to the literature . In the Standard Model the main contributions to the decay amplitudes for the rare decays $`B\to \pi K`$ come from the penguin-induced flavor-changing neutral current (FCNC) transitions $`\overline{b}\to \overline{s}q\overline{q}`$, which exceed a small, Cabibbo-suppressed $`\overline{b}\to \overline{u}u\overline{s}`$ contribution from $`W`$-boson exchange. The weak phase $`\gamma `$ enters through the interference of these two (“tree” and “penguin”) contributions. Because of a fortunate interplay of isospin, Fierz and flavor symmetries, the theoretical description of the charged modes $`B^\pm \to \pi K`$ is very clean despite the fact that these are exclusive nonleptonic decays . Without any dynamical assumption, the hadronic uncertainties in the description of the interference terms relevant to the determination of $`\gamma `$ are of relative magnitude $`O(\lambda ^2)`$ or $`O(ϵ_{\mathrm{SU}(3)}/N_c)`$, where $`\lambda =\mathrm{sin}\theta _C\approx 0.22`$ is a measure of Cabibbo suppression, $`ϵ_{\mathrm{SU}(3)}\approx 20\%`$ is the typical size of SU(3) flavor-symmetry breaking, and the factor $`1/N_c`$ indicates that the corresponding terms vanish in the factorization approximation. Factorizable SU(3) breaking can be accounted for in a straightforward way. Recently, the accuracy of this description has been further increased, because it has been shown that nonleptonic $`B`$ decays into two light mesons, such as $`B\to \pi K`$ and $`B\to \pi \pi `$, admit a heavy-quark expansion .
To leading order in $`\mathrm{\Lambda }/m_b`$, and to all orders in perturbation theory, the decay amplitudes for these processes can be calculated from first principles, without recourse to phenomenological models. The QCD factorization theorem proved in improves upon the phenomenological approach of “generalized factorization” , which emerges as the leading term in the heavy-quark limit. With the help of this theorem the irreducible theoretical uncertainty in the description of the $`B^\pm \to \pi K`$ decay amplitudes can be reduced by an extra factor of $`O(\mathrm{\Lambda }/m_b)`$, rendering their analysis essentially model independent. As a consequence of this fact, and because they are dominated by (hadronic) FCNC transitions, the decays $`B^\pm \to \pi K`$ offer a sensitive probe of physics beyond the Standard Model , much in the same way as the “classical” FCNC processes $`B\to X_s\gamma `$ or $`B\to X_sl^+l^-`$. We will discuss how the bound on $`\gamma `$ and the extraction of $`\gamma `$ in the Standard Model could be affected by New Physics.

## 2 THEORY OF $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅𝑲`$ DECAYS

The hadronic decays $`B\to \pi K`$ are mediated by a low-energy effective weak Hamiltonian, whose operators allow for three distinct classes of flavor topologies: QCD penguins, trees, and electroweak penguins. In the Standard Model the weak couplings associated with these topologies are known. From the measured branching ratios for the various $`B\to \pi K`$ decay modes it follows that QCD penguins dominate the decay amplitudes , whereas trees and electroweak penguins are subleading and of a similar strength . The theoretical description of the two charged modes $`B^\pm \to \pi ^\pm K^0`$ and $`B^\pm \to \pi ^0K^\pm `$ exploits the fact that the amplitudes for these processes differ in a pure isospin amplitude $`A_{3/2}`$, defined as the matrix element of the isovector part of the effective Hamiltonian between a $`B`$ meson and the $`\pi K`$ isospin eigenstate with $`I=\frac{3}{2}`$. In the Standard Model the parameters of this amplitude are determined, up to an overall strong phase $`\varphi `$, in the limit of SU(3) flavor symmetry . Using the QCD factorization theorem proved in , the SU(3)-breaking corrections can be calculated in a model-independent way up to nonfactorizable terms that are power-suppressed in $`\mathrm{\Lambda }/m_b`$ and vanish in the heavy-quark limit. A convenient parameterization of the decay amplitudes $`𝒜_{+0}\equiv 𝒜(B^+\to \pi ^+K^0)`$ and $`𝒜_{0+}\equiv \sqrt{2}𝒜(B^+\to \pi ^0K^+)`$ is

$`𝒜_{+0}`$ $`=`$ $`P(1-\epsilon _ae^{i\gamma }e^{i\eta }),`$ (1)

$`𝒜_{0+}`$ $`=`$ $`P\left[1-\epsilon _ae^{i\gamma }e^{i\eta }-\epsilon _{3/2}e^{i\varphi }(e^{i\gamma }-\delta _{\mathrm{EW}})\right],`$

where $`P`$ is the dominant penguin amplitude defined as the sum of all terms in the $`B^+\to \pi ^+K^0`$ amplitude not proportional to $`e^{i\gamma }`$, $`\eta `$ and $`\varphi `$ are strong phases, and $`\epsilon _a`$, $`\epsilon _{3/2}`$ and $`\delta _{\mathrm{EW}}`$ are real hadronic parameters. The weak phase $`\gamma `$ changes sign under a CP transformation, whereas all other parameters stay invariant. Let us discuss the various terms entering the decay amplitudes in detail. From a naive quark-diagram analysis one does not expect the $`B^+\to \pi ^+K^0`$ amplitude to receive a contribution from $`\overline{b}\to \overline{u}u\overline{s}`$ tree topologies; however, such a contribution can be induced through final-state rescattering or annihilation contributions . They are parameterized by $`\epsilon _a=O(\lambda ^2)`$.
In the heavy-quark limit this parameter can be calculated and is found to be very small, $`\epsilon _a\approx 2\%`$. In the future, it will be possible to put upper and lower bounds on $`\epsilon _a`$ by comparing the CP-averaged branching ratios for the decays $`B^\pm \to \pi ^\pm K^0`$ and $`B^\pm \to K^\pm \overline{K}^0`$. Below we assume $`|\epsilon _a|\le 0.1`$; however, our results will be almost insensitive to this assumption. The terms proportional to $`\epsilon _{3/2}`$ in (1) parameterize the isospin amplitude $`A_{3/2}`$. The contribution proportional to $`e^{i\gamma }`$ comes from the tree process $`\overline{b}\to \overline{u}u\overline{s}`$, whereas the quantity $`\delta _{\mathrm{EW}}`$ describes the effects of electroweak penguins. The parameter $`\epsilon _{3/2}`$ measures the relative strength of tree and QCD penguin contributions. Information about it can be derived by using SU(3) flavor symmetry to relate the tree contribution to the isospin amplitude $`A_{3/2}`$ to the corresponding contribution in the decay $`B^+\to \pi ^+\pi ^0`$. Since the final state $`\pi ^+\pi ^0`$ has isospin $`I=2`$ (because of Bose symmetry), the amplitude for this process does not receive any contribution from QCD penguins. Moreover, electroweak penguins in $`\overline{b}\to \overline{d}q\overline{q}`$ transitions are negligibly small. We define a related parameter $`\overline{\epsilon }_{3/2}`$ by writing $`\epsilon _{3/2}=\overline{\epsilon }_{3/2}\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}`$, so that the two quantities agree in the limit $`\epsilon _a\to 0`$. In the SU(3) limit, this new parameter can be determined experimentally from the relation $$\overline{\epsilon }_{3/2}=R_1\left|\frac{V_{us}}{V_{ud}}\right|\left[\frac{2\text{B}(B^\pm \to \pi ^\pm \pi ^0)}{\text{B}(B^\pm \to \pi ^\pm K^0)}\right]^{1/2}.$$ (2) SU(3)-breaking corrections are described by the factor $`R_1=1.22\pm 0.05`$, which can be calculated in a model-independent way using the QCD factorization theorem for nonleptonic decays. The quoted error is an estimate of the theoretical uncertainty due to uncontrollable corrections of $`O(\frac{1}{N_c}\frac{m_s}{m_b})`$. Using preliminary data reported by the CLEO Collaboration to evaluate the ratio of CP-averaged branching ratios in (2) we obtain $$\overline{\epsilon }_{3/2}=0.21\pm 0.06_{\mathrm{exp}}\pm 0.01_{\mathrm{th}}.$$ (3) With a better measurement of the branching ratios the uncertainty in $`\overline{\epsilon }_{3/2}`$ will be reduced significantly. Finally, the parameter $`\delta _{\mathrm{EW}}`$ $`=`$ $`R_2\left|{\displaystyle \frac{V_{cb}^{*}V_{cs}}{V_{ub}^{*}V_{us}}}\right|{\displaystyle \frac{\alpha }{8\pi }}{\displaystyle \frac{x_t}{\mathrm{sin}^2\theta _W}}\left(1+{\displaystyle \frac{3\mathrm{ln}x_t}{x_t-1}}\right)`$ (4) $`=`$ $`(0.64\pm 0.09)\times {\displaystyle \frac{0.085}{|V_{ub}/V_{cb}|}},`$ with $`x_t=(m_t/m_W)^2`$, describes the ratio of electroweak penguin and tree contributions to the isospin amplitude $`A_{3/2}`$. In the SU(3) limit it is calculable in terms of Standard Model parameters. SU(3)-breaking corrections are accounted for by the quantity $`R_2=0.92\pm 0.09`$. The error quoted in (4) also includes the uncertainty in the top-quark mass.
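Both hadronic inputs above are simple enough to evaluate directly. The following minimal sketch reproduces $`\overline{\epsilon }_{3/2}\approx 0.21`$ from Eq. (2) and $`\delta _{\mathrm{EW}}\approx 0.64`$ from Eq. (4); the CKM elements, quark masses and branching ratios used here are illustrative CLEO-era assumptions, not values taken from the text beyond $`R_1`$, $`R_2`$ and $`|V_{ub}/V_{cb}|`$.

```python
import numpy as np

# --- epsilon_bar_{3/2} from Eq. (2) ---
R1 = 1.22                       # SU(3)-breaking factor quoted in the text
Vus_over_Vud = 0.22 / 0.975     # illustrative CKM inputs (assumed)
BR_pipi0 = 0.54e-5              # B(B+- -> pi+- pi0), illustrative value
BR_piK0  = 1.8e-5               # B(B+- -> pi+- K0),  illustrative value
eps32 = R1 * Vus_over_Vud * np.sqrt(2 * BR_pipi0 / BR_piK0)
print(f"eps_bar_3/2 ~ {eps32:.2f}")          # ~0.21, cf. Eq. (3)

# --- delta_EW from Eq. (4) ---
R2 = 0.92
mt, mW = 167.0, 80.4            # GeV; illustrative top and W masses (assumed)
xt = (mt / mW) ** 2
alpha, sin2thW = 1 / 128.0, 0.231            # weak-scale couplings (assumed)
ckm_ratio = (0.974 / 0.22) / 0.085           # |Vcb* Vcs / (Vub* Vus)|
delta_EW = (R2 * ckm_ratio * alpha / (8 * np.pi)
            * xt / sin2thW * (1 + 3 * np.log(xt) / (xt - 1)))
print(f"delta_EW ~ {delta_EW:.2f}")          # ~0.64
```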
Important observables in the study of the weak phase $`\gamma`$ are the ratio of the CP-averaged branching ratios in the two $`B^\pm \to \pi K`$ decay modes, $$R_{*}=\frac{\text{B}(B^\pm \to \pi ^\pm K^0)}{2\text{B}(B^\pm \to \pi ^0K^\pm )}=0.75\pm 0.28,$$ (5) and a particular combination of the direct CP asymmetries, $`\stackrel{~}{A}`$ $`=`$ $`{\displaystyle \frac{A_{\mathrm{CP}}(B^\pm \to \pi ^0K^\pm )}{R_{*}}}-A_{\mathrm{CP}}(B^\pm \to \pi ^\pm K^0)`$ (6) $`=`$ $`-0.52\pm 0.42.`$ The experimental values of these quantities are derived from preliminary data reported by the CLEO Collaboration. The theoretical expressions for $`R_{*}`$ and $`\stackrel{~}{A}`$ obtained using the parameterization in (1) are $`R_{*}^{-1}`$ $`=`$ $`1+2\overline{\epsilon }_{3/2}\mathrm{cos}\varphi (\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )`$ $`+`$ $`\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2)+O(\overline{\epsilon }_{3/2}\epsilon _a),`$ $`\stackrel{~}{A}`$ $`=`$ $`2\overline{\epsilon }_{3/2}\mathrm{sin}\gamma \mathrm{sin}\varphi +O(\overline{\epsilon }_{3/2}\epsilon _a).`$ (7) Note that the rescattering effects described by $`\epsilon _a`$ are suppressed by a factor of $`\overline{\epsilon }_{3/2}`$ and thus reduced to the percent level. Explicit expressions for these contributions can be found in the literature. ## 3 LOWER BOUND ON $`\gamma`$ AND CONSTRAINT IN THE $`(\overline{\rho },\overline{\eta })`$ PLANE There are several strategies for exploiting the above relations. First, from a measurement of the ratio $`R_{*}`$ alone a bound on $`\mathrm{cos}\gamma`$ can be derived, which implies a nontrivial constraint on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$ defining the apex of the unitarity triangle. Only CP-averaged branching ratios are needed for this purpose. Varying the strong phases $`\varphi`$ and $`\eta`$ independently we first obtain an upper bound on the inverse of $`R_{*}`$. Keeping terms of linear order in $`\epsilon _a`$, we find $`R_{*}^{-1}`$ $`\le`$ $`\left(1+\overline{\epsilon }_{3/2}|\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |\right)^2+\overline{\epsilon }_{3/2}^2\mathrm{sin}^2\gamma `$ (8) $`+`$ $`2\overline{\epsilon }_{3/2}|\epsilon _a|\mathrm{sin}^2\gamma .`$ Provided $`R_{*}`$ is significantly smaller than 1, this bound implies an exclusion region for $`\mathrm{cos}\gamma`$, which becomes larger the smaller the values of $`R_{*}`$ and $`\overline{\epsilon }_{3/2}`$ are. It is convenient to consider instead of $`R_{*}`$ the related quantity $$X_R=\frac{\sqrt{R_{*}^{-1}}-1}{\overline{\epsilon }_{3/2}}=0.72\pm 0.98_{\mathrm{exp}}\pm 0.03_{\mathrm{th}}.$$ (9) Because of the theoretical factor $`R_1`$ entering the definition of $`\overline{\epsilon }_{3/2}`$ in (2) this is, strictly speaking, not an observable. However, the theoretical uncertainty in $`X_R`$ is so much smaller than the present experimental error that it is justified to treat this quantity as an observable. The advantage of presenting our results in terms of $`X_R`$ rather than $`R_{*}`$ is that the leading dependence on $`\overline{\epsilon }_{3/2}`$ cancels out, leading to the simple bound $`|X_R|\le |\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |+O(\overline{\epsilon }_{3/2},\epsilon _a)`$. In Figure 1 we show the upper bound on $`X_R`$ as a function of $`|\gamma |`$, obtained by varying the input parameters in the intervals $`0.15\le \overline{\epsilon }_{3/2}\le 0.27`$ and $`0.49\le \delta _{\mathrm{EW}}\le 0.79`$ (corresponding to using $`|V_{ub}/V_{cb}|=0.085\pm 0.015`$ in (4)).
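The content of Figure 1 can be reproduced with a short parameter scan: for each $`\gamma`$ one maximizes the right-hand side of (8) over the quoted ranges and converts to $`X_R`$. A minimal sketch (the grid resolution and the choice $`|\epsilon _a|=0.1`$ are assumptions; the exclusion at $`X_R=0.72`$ anticipates the discussion in the next paragraph):

```python
import numpy as np

def XR_max(gamma):
    """Largest X_R compatible with Eq. (8) at fixed gamma, scanning the
    parameter ranges quoted in the text and |eps_a| = 0.1."""
    best = -np.inf
    for e32 in np.linspace(0.15, 0.27, 25):
        for dEW in np.linspace(0.49, 0.79, 25):
            s2 = np.sin(gamma) ** 2
            Rinv = ((1 + e32 * abs(dEW - np.cos(gamma))) ** 2
                    + e32**2 * s2 + 2 * e32 * 0.1 * s2)
            best = max(best, (np.sqrt(Rinv) - 1) / e32)
    return best

gammas = np.radians(np.arange(0, 181))
excluded = [g for g in gammas if XR_max(g) < 0.72]
print(np.degrees(max(excluded)))  # ~75 deg: X_R = 0.72 excludes |gamma| below this
```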
Note that the effect of the rescattering contribution parameterized by $`\epsilon _a`$ is very small. The gray band shows the current value of $`X_R`$, which clearly has too large an error to provide any useful information on $`\gamma`$. (Footnote: Unfortunately, the $`2\sigma`$ deviation from 1 indicated by the first preliminary CLEO result has not been confirmed by the present data.) The situation may change, however, once a more precise measurement of $`X_R`$ becomes available. For instance, if the current central value $`X_R=0.72`$ were confirmed, it would imply the bound $`|\gamma |>75^{\circ}`$, which would mark a significant improvement over the limit $`|\gamma |>37^{\circ}`$ obtained from the global analysis of the unitarity triangle including information from $`K`$–$`\overline{K}`$ mixing. So far, as in previous work, we have used the inequality (8) to derive a lower bound on $`|\gamma |`$. However, a large part of the uncertainty in the value of $`\delta _{\mathrm{EW}}`$, and thus in the resulting bound on $`|\gamma |`$, comes from the present large error on $`|V_{ub}|`$. Since this is not a hadronic uncertainty, it is more appropriate to separate it and turn (8) into a constraint on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$. To this end, we use the fact that $`\mathrm{cos}\gamma =\overline{\rho }/\sqrt{\overline{\rho }^2+\overline{\eta }^2}`$ by definition, and $`\delta _{\mathrm{EW}}=(0.24\pm 0.03)/\sqrt{\overline{\rho }^2+\overline{\eta }^2}`$ from (4). The solid lines in Figure 2 show the resulting constraint in the $`(\overline{\rho },\overline{\eta })`$ plane obtained for the representative values $`X_R=0.25`$, 0.5, 0.75, 1.0, 1.25 (from right to left), which for $`\overline{\epsilon }_{3/2}=0.21`$ would correspond to $`R_{*}=0.90`$, 0.82, 0.75, 0.68, 0.63, respectively. Values to the right of these lines are excluded. For comparison, the dashed circles show the constraint arising from the measurement of the ratio $`|V_{ub}/V_{cb}|=0.085\pm 0.015`$ in semileptonic $`B`$ decays, and the dashed-dotted line shows the bound implied by the present experimental limit on the mass difference $`\mathrm{\Delta }m_s`$ in the $`B_s`$ system. Values to the left of this line are excluded. It is evident from the figure that the bound resulting from a measurement of the ratio $`X_R`$ in $`B^\pm \to \pi K`$ decays may be very nontrivial and, in particular, may eliminate the possibility that $`\gamma =0`$. The combination of this bound with information from semileptonic decays and $`B_s`$–$`\overline{B}_s`$ mixing alone would then determine the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$ within narrow ranges (Footnote: An observation of CP violation, such as the measurement of $`ϵ_K`$ in $`K`$–$`\overline{K}`$ mixing or $`\mathrm{sin}2\beta`$ in $`B\to J/\psi K_S`$ decays, is however needed to fix the sign of $`\overline{\eta }`$.), and in the context of the CKM model would prove the existence of direct CP violation in $`B`$ decays. ## 4 EXTRACTION OF $`\gamma`$ Ultimately, the goal is of course not only to derive a bound on $`\gamma`$ but to determine this parameter directly from the data. This requires fixing the strong phase $`\varphi`$ in (1), which can be done either through the measurement of a CP asymmetry or with the help of theory. A strategy for an experimental determination of $`\gamma`$ from $`B^\pm \to \pi K`$ decays has been suggested in the literature.
It generalizes a method proposed by Gronau, Rosner and London to include the effects of electroweak penguins. The approach has later been refined to account for rescattering contributions to the $`B^\pm \to \pi ^\pm K^0`$ decay amplitudes. Before discussing this method, we will first illustrate an easier strategy for a theory-guided determination of $`\gamma`$ based on the QCD factorization theorem. This method does not require any measurement of a CP asymmetry. ### 4.1 Theory-guided determination In the previous section the theoretical predictions for the nonleptonic $`B\to \pi K`$ decay amplitudes obtained using the QCD factorization theorem were used in a minimalistic way, i.e., only to calculate the size of the SU(3)-breaking effects parameterized by $`R_1`$ and $`R_2`$. The resulting bound on $`\gamma`$ and the corresponding constraint in the $`(\overline{\rho },\overline{\eta })`$ plane are therefore theoretically very clean. However, they are only useful if the value of $`X_R`$ is found to be larger than about 0.5 (see Figure 1), in which case values of $`|\gamma |`$ below $`65^{\circ}`$ are excluded. If it turned out that $`X_R<0.5`$, then it would in principle be possible to satisfy the inequality (8) also for small values of $`\gamma`$, albeit at the price of having a very large value of the strong phase, $`\varphi \approx 180^{\circ}`$. But this possibility can be discarded based on the model-independent prediction that $$\varphi =O[\alpha _s(m_b),\mathrm{\Lambda }/m_b].$$ (10) In fact, a direct calculation of this phase to leading power in $`\mathrm{\Lambda }/m_b`$ yields $`\varphi \approx -11^{\circ}`$. Using the fact that $`\varphi`$ is parametrically small, we can exploit a measurement of the ratio $`X_R`$ to obtain a determination of $`|\gamma |`$ – corresponding to an allowed region in the $`(\overline{\rho },\overline{\eta })`$ plane – rather than just a bound. This determination is unique up to a sign. Note that for small values of $`\varphi`$ the impact of the strong phase in the expression for $`R_{*}`$ in (7) is a second-order effect, since $`\mathrm{cos}\varphi \approx 1-\varphi ^2/2`$. As long as $`|\varphi |\le \sqrt{2\mathrm{\Delta }\overline{\epsilon }_{3/2}/\overline{\epsilon }_{3/2}}`$, the uncertainty in the value of $`\mathrm{cos}\varphi`$ has a much smaller effect than the uncertainty in $`\overline{\epsilon }_{3/2}`$. With the present value of $`\overline{\epsilon }_{3/2}`$, this is the case as long as $`|\varphi |\le 43^{\circ}`$. We believe it is a safe assumption to take $`|\varphi |<25^{\circ}`$ (i.e., more than twice the value obtained to leading order in $`\mathrm{\Lambda }/m_b`$), so that $`\mathrm{cos}\varphi >0.9`$. Solving the expression for $`R_{*}`$ in (7) for $`\mathrm{cos}\gamma`$, and including the corrections of $`O(\epsilon _a)`$, we find $`\mathrm{cos}\gamma`$ $`=`$ $`\delta _{\mathrm{EW}}-{\displaystyle \frac{X_R+\frac{1}{2}\overline{\epsilon }_{3/2}(X_R^2-1+\delta _{\mathrm{EW}}^2)}{\mathrm{cos}\varphi +\overline{\epsilon }_{3/2}\delta _{\mathrm{EW}}}}`$ (11) $`+`$ $`{\displaystyle \frac{\epsilon _a\mathrm{cos}\eta \mathrm{sin}^2\gamma }{\mathrm{cos}\varphi +\overline{\epsilon }_{3/2}\delta _{\mathrm{EW}}}},`$ where we have set $`\mathrm{cos}\varphi =1`$ in the $`O(\epsilon _a)`$ term. Using the QCD factorization theorem one finds that $`\epsilon _a\mathrm{cos}\eta \approx -0.02`$ in the heavy-quark limit, and we assign a 100% uncertainty to this estimate.
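Equation (11) is ready to use as a one-line estimator. A hedged transcription follows; the central inputs are the ones quoted above, and the simple fixed-point iteration for the small $`\sin ^2\gamma`$ correction is my own device, not part of the text:

```python
import numpy as np

def cos_gamma(XR, e32=0.21, dEW=0.64, phi=np.radians(-11), ea_cos_eta=-0.02):
    """Theory-guided extraction of cos(gamma) from Eq. (11).
    The O(eps_a) term needs sin^2(gamma); iterate it to self-consistency."""
    denom = np.cos(phi) + e32 * dEW
    c = dEW - (XR + 0.5 * e32 * (XR**2 - 1 + dEW**2)) / denom
    for _ in range(10):           # feed sin^2(gamma) back into the correction
        c = (dEW - (XR + 0.5 * e32 * (XR**2 - 1 + dEW**2)) / denom
             + ea_cos_eta * (1 - c**2) / denom)
    return c

for XR in (0.25, 0.75, 1.25):     # the representative values used in Figure 3
    print(XR, np.degrees(np.arccos(np.clip(cos_gamma(XR), -1.0, 1.0))))
```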
In evaluating the result (11) we scan the parameters in the ranges $`0.15\le \overline{\epsilon }_{3/2}\le 0.27`$, $`0.55\le \delta _{\mathrm{EW}}\le 0.73`$, $`-25^{\circ}\le \varphi \le 25^{\circ}`$, and $`-0.04\le \epsilon _a\mathrm{cos}\eta \mathrm{sin}^2\gamma \le 0`$. Figure 3 shows the allowed regions in the $`(\overline{\rho },\overline{\eta })`$ plane for the representative values $`X_R=0.25`$, 0.75, and 1.25 (from right to left). We stress that with this method a useful constraint on the Wolfenstein parameters is obtained for any value of $`X_R`$. ### 4.2 Model-independent determination It is important that, once more precise data on $`B^\pm \to \pi K`$ decays become available, it will be possible to test the theoretical prediction of a small strong phase $`\varphi`$ experimentally. To this end, one must determine the CP asymmetry $`\stackrel{~}{A}`$ in addition to the ratio $`R_{*}`$. From (7) it follows that for fixed values of $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$ the quantities $`R_{*}`$ and $`\stackrel{~}{A}`$ define contours in the $`(\gamma ,\varphi )`$ plane, whose intersections determine the two phases up to possible discrete ambiguities. Figure 4 shows these contours for some representative values, assuming $`\overline{\epsilon }_{3/2}=0.21`$, $`\delta _{\mathrm{EW}}=0.64`$, and $`\epsilon _a=0`$. In practice, including the uncertainties in the values of these parameters changes the contour lines into contour bands. Typically, the spread of the bands induces an error in the determination of $`\gamma`$ of about $`10^{\circ}`$. (Footnote: A precise determination of this error requires knowledge of the actual values of the observables. Gronau and Pirjol find a larger error for the special case where the product $`|\mathrm{sin}\gamma \mathrm{sin}\varphi |`$ is very close to 1, which however is highly disfavored because of the expected smallness of the strong phase $`\varphi`$.) In the most general case there are up to eight discrete solutions for the two phases, four of which are related to the other four by the sign change $`(\gamma ,\varphi )\to (-\gamma ,-\varphi )`$. However, for typical values of $`R_{*}`$ it turns out that often only four solutions exist, two of which are related to the other two by a change of signs. The theoretical prediction that $`\varphi`$ is small implies that solutions should exist where the contours intersect close to the lower portion of the plot. Other solutions with large $`\varphi`$ are strongly disfavored theoretically. Moreover, according to (7) the sign of the CP asymmetry $`\stackrel{~}{A}`$ fixes the relative sign between the two phases $`\gamma`$ and $`\varphi`$. If we trust the theoretical prediction that $`\varphi`$ is negative, it follows that in most cases there remains only a unique solution for $`\gamma`$, i.e., the CP-violating phase $`\gamma`$ can be determined without any discrete ambiguity. As an example, consider the hypothetical case where $`R_{*}=0.8`$ and $`\stackrel{~}{A}=-15\%`$. Figure 4 then allows the four solutions where $`(\gamma ,\varphi )\approx (\pm 82^{\circ},\mp 21^{\circ})`$ or $`(\pm 158^{\circ},\mp 78^{\circ})`$. The second pair of solutions is strongly disfavored because of the large values of the strong phase $`\varphi`$. From the first pair of solutions, the one with $`\varphi \approx -21^{\circ}`$ is closest to our theoretical expectation that $`\varphi \approx -11^{\circ}`$, hence leaving $`\gamma \approx 82^{\circ}`$ as the unique solution. ## 5 SEARCH FOR NEW PHYSICS In the presence of New Physics the theoretical description of $`B^\pm \to \pi K`$ decays becomes more complicated.
In particular, new CP-violating contributions to the decay amplitudes may be induced. A detailed analysis has been presented in the literature. A convenient and completely general parameterization of the two amplitudes in (1) is obtained by replacing $`P`$ $`\to`$ $`P^{},\epsilon _ae^{i\gamma }e^{i\eta }\to i\rho e^{i\varphi _\rho },`$ $`\delta _{\mathrm{EW}}`$ $`\to`$ $`ae^{i\varphi _a}+ibe^{i\varphi _b},`$ (12) where $`\rho`$, $`a`$, $`b`$ are real hadronic parameters, and $`\varphi _\rho`$, $`\varphi _a`$, $`\varphi _b`$ are strong phases. The terms $`i\rho`$ and $`ib`$ change sign under a CP transformation. New Physics effects parameterized by $`P^{}`$ and $`\rho`$ are isospin conserving, while those described by $`a`$ and $`b`$ violate isospin. Note that the parameter $`P^{}`$ cancels in all ratios of branching ratios and thus does not affect the quantities $`R_{*}`$ and $`X_R`$ as well as all CP asymmetries. Because the ratio $`R_{*}`$ in (5) would be 1 in the isospin limit, it is particularly sensitive to isospin-violating New Physics contributions. The isospin-conserving effects parameterized by $`\rho`$ enter only through interference with the isospin-violating terms proportional to $`\epsilon _{3/2}`$ in (1) and hence are suppressed. New Physics can affect the bound on $`\gamma`$ derived from (8) as well as the value of $`\gamma`$ extracted using the strategies discussed in the previous section. We will discuss these two possibilities in turn. ### 5.1 Effects on the bound on $`\gamma`$ The upper bound on $`R_{*}^{-1}`$ in (8) and the corresponding bound on $`X_R`$ shown in Figure 1 are model-independent results valid in the Standard Model. Note that the extremal value of $`R_{*}^{-1}`$ is such that $`|X_R|\le (1+\delta _{\mathrm{EW}})`$ irrespective of $`\gamma`$. A value of $`|X_R|`$ exceeding this bound would be a clear signal for New Physics. Consider first the case where New Physics may induce arbitrary CP-violating contributions to the $`B\to \pi K`$ decay amplitudes, while preserving isospin symmetry. Then the only change with respect to the Standard Model is that the parameter $`\rho`$ may no longer be as small as $`O(\epsilon _a)`$. Varying the strong phases $`\varphi`$ and $`\varphi _\rho`$ independently, and allowing for an arbitrarily large New Physics contribution to $`\rho`$, one can derive the bound $$|X_R|\le \sqrt{1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2}\le 1+\delta _{\mathrm{EW}}.$$ (13) Note that the extremal value is the same as in the Standard Model, i.e., isospin-conserving New Physics effects cannot lead to a value of $`|X_R|`$ exceeding $`1+\delta _{\mathrm{EW}}`$. For intermediate values of $`\gamma`$ between $`25^{\circ}`$ and $`125^{\circ}`$ the Standard Model bound on $`X_R`$ is weakened. But even for large values $`\rho =O(1)`$, corresponding to a significant New Physics contribution to the decay amplitudes, the effects are small. If both isospin-violating and isospin-conserving New Physics effects are present and involve new CP-violating phases, the analysis becomes more complicated. Still, it is possible to derive model-independent bounds on $`X_R`$. Allowing for arbitrary values of $`\rho`$ and all strong phases, one obtains $`|X_R|`$ $`\le`$ $`\sqrt{(|a|+|\mathrm{cos}\gamma |)^2+(|b|+|\mathrm{sin}\gamma |)^2}`$ (14) $`\le`$ $`1+\sqrt{a^2+b^2}\le {\displaystyle \frac{2}{\overline{\epsilon }_{3/2}}}+X_R,`$ where the last inequality is relevant only in cases where $`\sqrt{a^2+b^2}\gg 1`$.
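The weakening of the bound is easy to quantify from the first inequality in (14). A minimal sketch, assuming the Standard Model case corresponds to $`a=\delta _{\mathrm{EW}}`$ (taken at the upper end of its range, 0.79) and $`b=0`$, anticipating the 1.8 versus 2.6 comparison made in the next paragraph:

```python
import numpy as np

def XR_bound(a, b):
    """Max |X_R| allowed by the first inequality in Eq. (14), over all gamma."""
    gammas = np.radians(np.arange(0, 181))
    return np.sqrt((abs(a) + np.abs(np.cos(gammas)))**2 +
                   (abs(b) + np.abs(np.sin(gammas)))**2).max()

print(XR_bound(0.79, 0.0))      # ~1.8: SM-like electroweak penguin strength
print(XR_bound(2 * 0.79, 0.0))  # ~2.6: isospin-violating NP twice the SM size
```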
The important point to note is that with isospin-violating New Physics contributions the value of $`|X_R|`$ can exceed the upper bound in the Standard Model by a potentially large amount. For instance, if $`\sqrt{a^2+b^2}`$ is twice as large as in the Standard Model, corresponding to a New Physics contribution to the decay amplitudes of only 10–15%, then $`|X_R|`$ could be as large as 2.6 as compared with the maximal value 1.8 allowed in the Standard Model. Also, in the most general case where $`b`$ and $`\rho`$ are nonzero, the maximal value $`|X_R|`$ can take is no longer restricted to occur at the endpoints $`\gamma =0^{\circ}`$ or $`180^{\circ}`$, which are disfavored by the global analysis of the unitarity triangle. Rather, $`|X_R|`$ would take its maximal value if $`|\mathrm{tan}\gamma |=|\rho |=|b/a|`$. The present experimental value of $`X_R`$ in (9) has too large an error to determine whether there is any deviation from the Standard Model. If $`X_R`$ turns out to be larger than 1 (i.e., one third of a standard deviation above its current central value), then an interpretation of this result in the Standard Model would require a large value $`|\gamma |>91^{\circ}`$ (see Figure 1), which may be difficult to accommodate. This may be taken as evidence for New Physics. If $`X_R>1.3`$, one could go a step further and conclude that the New Physics must necessarily violate isospin. ### 5.2 Effects on the determination of $`\gamma`$ A value of the observable $`R_{*}`$ violating the Standard Model bound (8) would be an exciting hint for New Physics. However, even if a more precise measurement gives a value that is consistent with the Standard Model bound, $`B^\pm \to \pi K`$ decays provide an excellent testing ground for physics beyond the Standard Model. This is so because New Physics may still cause a significant shift in the value of $`\gamma`$ extracted from $`B^\pm \to \pi K`$ decays using the strategies discussed in Section 4. This may lead to inconsistencies when this value is compared with other determinations of $`\gamma`$. A global fit of the unitarity triangle combining information from semileptonic $`B`$ decays, $`B`$–$`\overline{B}`$ mixing, CP violation in the kaon system, and mixing-induced CP violation in $`B\to J/\psi K_S`$ decays provides information on $`\gamma`$, which in a few years will determine its value within a rather narrow range. Such an indirect determination could be complemented by direct measurements of $`\gamma`$ using, e.g., $`B\to DK`$ decays, or using the triangle relation $`\gamma =180^{\circ}-\alpha -\beta`$ combined with a measurement of $`\alpha`$ in $`B\to \pi \pi`$ or $`B\to \pi \rho`$ decays. We will assume that a discrepancy of more than $`25^{\circ}`$ between the “true” $`\gamma =\text{arg}(V_{ub}^{*})`$ and the value $`\gamma _{\pi K}`$ extracted in $`B^\pm \to \pi K`$ decays will be observable after a few years of operation at the $`B`$ factories. This will set the benchmark for sensitivity to New Physics effects. In order to illustrate how big an effect New Physics could have on the value of $`\gamma`$ we consider the simplest case where there are no new CP-violating couplings. Then all New Physics contributions in (12) are parameterized by the single parameter $`a\equiv \delta _{\mathrm{EW}}+a_{\mathrm{NP}}`$. A more general discussion can be found in the literature. We also assume for simplicity that the strong phase $`\varphi`$ is small, as suggested by (10).
In this case the difference between the value $`\gamma _{\pi K}`$ extracted from $`B^\pm \to \pi K`$ decays and the “true” value of $`\gamma`$ is to a good approximation given by $$\mathrm{cos}\gamma _{\pi K}-\mathrm{cos}\gamma \approx -a_{\mathrm{NP}}.$$ (15) In Figure 5 we show contours of constant $`X_R`$ versus $`\gamma`$ and $`a`$, assuming without loss of generality that $`\gamma >0`$. Obviously, even a moderate New Physics contribution to the parameter $`a`$ can induce a large shift in $`\gamma`$. Note that the present central value of $`X_R\approx 0.7`$ is such that values of $`a`$ less than the Standard Model result $`a\approx 0.64`$ are disfavored, since they would require values of $`\gamma`$ exceeding $`100^{\circ}`$, in conflict with the global analysis of the unitarity triangle. ### 5.3 Survey of New Physics models In recent work, we have explored how physics beyond the Standard Model could affect purely hadronic FCNC transitions of the type $`\overline{b}\to \overline{s}q\overline{q}`$, focusing, in particular, on isospin violation. Unlike in the Standard Model, where isospin-violating effects in these processes are strongly suppressed by electroweak gauge couplings or small CKM matrix elements, in many New Physics scenarios these effects are not parametrically suppressed relative to isospin-conserving FCNC processes. In the language of effective weak Hamiltonians this implies that the Wilson coefficients of QCD and electroweak penguin operators are of a similar magnitude. For a large class of New Physics models we found that the coefficients of the electroweak penguin operators are, in fact, due to “trojan” penguins, which are neither related to penguin diagrams nor of electroweak origin. Specifically, we have considered: (a) models with tree-level FCNC couplings of the $`Z`$ boson, extended gauge models with an extra $`Z^{\prime}`$ boson, and supersymmetric models with broken R-parity; (b) supersymmetric models with R-parity conservation; (c) two-Higgs–doublet models, and models with anomalous gauge-boson couplings. Some of these models have also been investigated elsewhere. In case (a), the resulting electroweak penguin coefficients can be much larger than in the Standard Model because they are due to tree-level processes. In case (b), these coefficients can compete with the ones of the Standard Model because they arise from strong-interaction box diagrams, which scale relative to the Standard Model like $`(\alpha _s/\alpha )(m_W^2/m_{\mathrm{SUSY}}^2)`$. In models (c), on the other hand, isospin-violating New Physics effects are not parametrically enhanced and are generally smaller than in the Standard Model. For each New Physics model we have explored which region of parameter space can be probed by the $`B^\pm \to \pi K`$ observables, and how big a departure from the Standard Model predictions one can expect under realistic circumstances, taking into account all constraints on the model parameters implied by other processes. Table 1 summarizes our estimates of the maximal isospin-violating contributions to the decay amplitudes, as parameterized by $`|a_{\mathrm{NP}}|`$. They are the potentially most important source of New Physics effects in $`B^\pm \to \pi K`$ decays. For comparison, we recall that in the Standard Model $`a\approx 0.64`$. Also shown are the corresponding maximal values of the difference $`|\gamma _{\pi K}-\gamma |`$.
As noted above, in models with tree-level FCNC couplings New Physics effects can be dramatic, whereas in supersymmetric models with R-parity conservation isospin-violating loop effects can be competitive with the Standard Model. In the case of supersymmetric models with R-parity violation the bound (14) implies interesting limits on certain combinations of the trilinear couplings $`\lambda _{ijk}^{\prime}`$ and $`\lambda _{ijk}^{\prime \prime }`$, which are discussed in the literature. ## 6 CONCLUSIONS Measurements of the rates for the rare hadronic decays $`B^\pm \to \pi K`$ provide interesting information on the weak phase $`\gamma`$ and on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$. Using isospin, Fierz and flavor symmetries together with the fact that nonleptonic $`B`$ decays into two light mesons admit a heavy-quark expansion, a largely model-independent description of these decays is obtained despite the fact that they are exclusive nonleptonic processes. In the future, a precise measurement of the $`B^\pm \to \pi K`$ decay amplitudes will provide an extraction of $`\gamma`$ with a theoretical uncertainty of about $`10^{\circ}`$, and at the same time will allow for sensitive tests of physics beyond the Standard Model. ### Acknowledgements It is a pleasure to thank the SLAC Theory Group for the warm hospitality extended to me during the past year. I am grateful to Martin Beneke, Gerhard Buchalla, Yuval Grossman, Alex Kagan, Jon Rosner and Chris Sachrajda for collaboration on parts of the work reported here. This research was supported by the Department of Energy under contract DE–AC03–76SF00515.
# Real CP violation in a simple extension of the standard model ## 1 Introduction The concept of real CP violation has recently been introduced by Masiero and Yanagida. It stands for spontaneous CP violation (SCPV) which occurs in spite of the vacuum expectation values (VEVs) being real. In a model with real CP violation, complex numbers are present in the Yukawa couplings from the very beginning; however, unless the (real) VEVs break a certain CP symmetry, those complexities cancel among themselves, and the whole theory is CP conserving. As a matter of fact, a model of real CP violation had already been suggested by myself before Masiero and Yanagida’s paper. The possibility that CP might be spontaneously broken by real VEVs should not come as a surprise. This may simply happen because the basic Lagrangian (before spontaneous symmetry breaking) is invariant under a non-trivial CP transformation, as I shall now explain with the help of a simple example. Suppose that there are two non-Hermitian scalar fields, $`S_1`$ and $`S_2`$, which transform under CP into each other’s Hermitian conjugate: $$S_1\stackrel{\mathrm{CP}}{\to }S_2^{\dagger },S_2\stackrel{\mathrm{CP}}{\to }S_1^{\dagger }.$$ (1) If the VEVs of $`S_1`$ and $`S_2`$ have different moduli, then the CP transformation of Eq. (1) is broken. This happens independently of the phases of the VEVs. On the other hand, we may define $$S_\pm \equiv \frac{S_1+S_2\pm i\left(S_1-S_2\right)}{2}.$$ (2) Then, the CP transformation of Eq. (1) looks like the usual CP transformation: $$S_+\stackrel{\mathrm{CP}}{\to }S_+^{\dagger },S_{-}\stackrel{\mathrm{CP}}{\to }S_{-}^{\dagger }.$$ (3) When the VEVs of $`S_1`$ and $`S_2`$ are real and distinct, the VEVs of $`S_\pm`$ are complex. This we would describe as being just the usual form of SCPV. Thus, the situation which in the basis $`\{S_1,S_2\}`$ would be described as real CP violation looks in the basis $`\{S_+,S_{-}\}`$ like the usual SCPV, with complex VEVs. One may then ask oneself whether the concept of real CP violation is not altogether spurious. But using $`S_+`$ and $`S_{-}`$ as the basic fields, instead of using $`S_1`$ and $`S_2`$, may be inappropriate, in particular if there is in the theory a symmetry, besides CP, under which $`S_1`$ and $`S_2`$ transform as singlets, while $`S_+`$ and $`S_{-}`$ mix. For instance, there may be an extra symmetry under which $$S_1\to e^{2i\pi /3}S_1,S_2\to e^{-2i\pi /3}S_2.$$ (4) Clearly, $`S_+`$ and $`S_{-}`$ mix under the transformation in Eq. (4). Hence, they form an inadequate basis to study the theory, and it is more fruitful to use the basis $`\{S_1,S_2\}`$. It is then appropriate to say that there is real CP violation. On the other hand, if no symmetry like the one in Eq. (4) exists, then the concept of real CP violation has no distinctive meaning, as it is just as legitimate to study the theory in the basis $`\{S_+,S_{-}\}`$ as in the basis $`\{S_1,S_2\}`$. Alternatively, one may argue that, if a model with real CP violation is able to attain some original and useful achievement, any result which would not in general obtain, then real CP violation is useful and worth taking seriously. In their paper, Masiero and Yanagida have suggested that real CP violation might provide an avenue to solving the strong CP problem; as a matter of fact, a realization of this prophecy had already been provided in my paper.
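The change of basis in Eq. (2) can be made concrete with two numbers. A trivial numeric check (the VEV values are arbitrary illustrations, not taken from the text):

```python
# Real, CP-violating VEVs in the {S1, S2} basis: v1 != v2 breaks Eq. (1).
v1, v2 = 1.0, 2.0            # illustrative real VEVs

# The same vacuum in the {S+, S-} basis of Eq. (2):
Splus  = (v1 + v2 + 1j * (v1 - v2)) / 2
Sminus = (v1 + v2 - 1j * (v1 - v2)) / 2
print(Splus, Sminus)         # (1.5-0.5j) and (1.5+0.5j): complex, with a phase
# The usual CP of Eq. (3) would require <S+-> to be real; it is broken here
# even though the original VEVs were real, i.e. "real CP violation".
```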
An alternative application of real CP violation, and of the associated non-trivial CP symmetry, might be to constrain the fermion mass matrices, leading to the prediction of some relationships among the fermion masses and mixing angles. This would be analogous to what was done by Ecker and collaborators in the context of left–right-symmetric models. Both the models of real CP violation presented by Masiero and Yanagida and by myself have many fields beyond the standard-model ones. Besides, those models also include extra non-Abelian symmetry groups. This might lead one to believe that real CP violation is an exotic effect, which can occur or be useful only in the context of complicated models. The main purpose of this paper is to show that this is not so. Specifically, I construct here a simple extension of the standard model, with no extra fermions, no extra gauge groups, and no extra non-Abelian symmetries, in which real CP violation occurs. The second goal of this paper is to call attention to a potential advantage of real CP violation. Namely, one is able to get spontaneous breaking of CP without thereby introducing CP violation in the mixings and self-interactions of the scalar fields. The third aim of this paper is to use that property to attenuate the strong CP problem of the standard model. Indeed, once CP violation is removed from the scalar sector, the possibility arises of eliminating the generation of an unacceptably large $`\theta _{\mathrm{QFD}}`$ through the one-loop diagram of Fig. 1. When scalar–pseudoscalar mixing is absent that diagram does not exist, and a non-zero $`\theta _{\mathrm{QFD}}`$ may arise only at two-loop level, it being then expected to be of order $`10^{-8}`$. This would soften the strong CP problem. Still, I cannot claim that the specific model in the following section does solve that problem. ## 2 The model The model that I want to put forward in order to materialize my claims has three Higgs doublets. The gauge group is SU(2)$`\otimes`$U(1) and the fermion spectrum is the usual one. In particular, there are three left-handed-quark doublets $`q_{La}=(p_{La},n_{La})^T`$, three right-handed charge-$`2/3`$ (up-type) quarks $`p_{Ra}`$, and three right-handed charge-$`(-1/3)`$ (down-type) quarks $`n_{Ra}`$. (The index $`a`$ runs from 1 to 3.) There are three scalar doublets $$\varphi _a=e^{i\theta _a}\left(\begin{array}{c}\phi _a^+\\ v_a+\frac{\rho _a+i\eta _a}{\sqrt{2}}\end{array}\right),$$ (5) where $`v_ae^{i\theta _a}`$ are the VEVs, with $`v_a`$ real and positive by definition. The fields $`\rho _a`$ and $`\eta _a`$ are Hermitian. As usual, I denote $$\stackrel{~}{\varphi }_a\equiv i\tau _2\varphi _{a}^{*}=e^{-i\theta _a}\left(\begin{array}{c}v_a+\frac{\rho _a-i\eta _a}{\sqrt{2}}\\ -\phi _a^{-}\end{array}\right).$$ (6) There is in the model a discrete symmetry $`D`$ under which $$\varphi _1\stackrel{D}{\to }i\varphi _1,\varphi _2\stackrel{D}{\to }-i\varphi _2,q_{L1}\stackrel{D}{\to }iq_{L1},q_{L2}\stackrel{D}{\to }-iq_{L2},$$ (7) and all other fields remain invariant.
The symmetry CP transforms the right-handed-quark fields in the usual way, $$p_{Ra}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{p_{Ra}}^T,n_{Ra}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{n_{Ra}}^T,$$ (8) but it interchanges the indices 1 and 2 in the transformation of both the scalar doublets and the left-handed-quark doublets: $$\varphi _1\stackrel{\mathrm{CP}}{\to }\varphi _{2}^{*},\varphi _2\stackrel{\mathrm{CP}}{\to }\varphi _{1}^{*},\varphi _3\stackrel{\mathrm{CP}}{\to }\pm \varphi _{3}^{*},$$ (9) $$q_{L1}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L2}}^T,q_{L2}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L1}}^T,q_{L3}\stackrel{\mathrm{CP}}{\to }\pm \gamma ^0C\overline{q_{L3}}^T.$$ (10) The fields in the left-hand sides of Eqs. (8)–(10) are meant to be at the space–time point $`(t,\stackrel{}{r})`$, while the fields in the right-hand sides are at the space–time point $`(t,-\stackrel{}{r})`$. The $`\pm`$ sign in Eqs. (9) and (10) means that there are as a matter of fact two different CP symmetries, both of which leave the Lagrangian of the model invariant. Notice that the transformations $`D`$ and CP commute. It follows from Eq. (9) that CP will be preserved by the vacuum if $$v_1e^{i\theta _1}=v_2e^{-i\theta _2},v_3e^{i\theta _3}=\pm v_3e^{-i\theta _3}.$$ (11) The phases $`\theta _a`$ are gauge-dependent, and therefore the conditions in Eq. (11) are not gauge-invariant. The invariant conditions for the absence of SCPV are $$v_1=v_2,e^{i\left(2\theta _3-\theta _1-\theta _2\right)}=\pm 1.$$ (12) One sees that SCPV may be achieved through $`v_1\ne v_2`$, quite independently of the phases of the VEVs. This situation embodies what I call real CP violation. As a consequence of the symmetries $`D`$ and CP, the scalar potential $`V`$ has only two terms which “see” the relative phases of the doublets. Indeed, $`V=V_I+V_S`$, where $`V_I`$ $`=`$ $`\mu _1\left(\varphi _1^{\dagger }\varphi _1+\varphi _2^{\dagger }\varphi _2\right)+\mu _2\varphi _3^{\dagger }\varphi _3+\lambda _1\left[\left(\varphi _1^{\dagger }\varphi _1\right)^2+\left(\varphi _2^{\dagger }\varphi _2\right)^2\right]+\lambda _2\left(\varphi _3^{\dagger }\varphi _3\right)^2`$ (13) $`+\lambda _3\left(\varphi _1^{\dagger }\varphi _1\right)\left(\varphi _2^{\dagger }\varphi _2\right)+\lambda _4\left[\left(\varphi _1^{\dagger }\varphi _1\right)+\left(\varphi _2^{\dagger }\varphi _2\right)\right]\left(\varphi _3^{\dagger }\varphi _3\right)`$ $`+\lambda _5\left(\varphi _1^{\dagger }\varphi _2\right)\left(\varphi _2^{\dagger }\varphi _1\right)+\lambda _6\left[\left(\varphi _1^{\dagger }\varphi _3\right)\left(\varphi _3^{\dagger }\varphi _1\right)+\left(\varphi _2^{\dagger }\varphi _3\right)\left(\varphi _3^{\dagger }\varphi _2\right)\right]`$ and $`V_S`$ $`=`$ $`\lambda _7\left[\left(\varphi _3^{\dagger }\varphi _1\right)\left(\varphi _3^{\dagger }\varphi _2\right)+\left(\varphi _1^{\dagger }\varphi _3\right)\left(\varphi _2^{\dagger }\varphi _3\right)\right]`$ (14) $`+\lambda _8\left[e^{i\chi }\left(\varphi _1^{\dagger }\varphi _2\right)^2+e^{-i\chi }\left(\varphi _2^{\dagger }\varphi _1\right)^2\right].`$ $`V_I`$ is that part of $`V`$ which is insensitive to the overall phases of the doublets. The coefficients $`\mu _1`$, $`\mu _2`$, and $`\lambda _{1\text{–}8}`$ are real by Hermiticity. Notice that our particular CP symmetry allows an unconstrained phase $`\chi`$ to appear in the potential.
As there are two gauge-invariant phases among the VEVs and two terms in the potential which “see” those phases, no undesirable Goldstone bosons arise, and the phases of the VEVs adjust in such a way that $$e^{i\left(2\theta _3-\theta _1-\theta _2\right)}=\left(-1\right)^a,e^{i\left(2\theta _2-2\theta _1+\chi \right)}=\left(-1\right)^b,$$ (15) where $`a`$ and $`b`$ are either 0 or 1, and $$\lambda _7\left(-1\right)^a=-\left|\lambda _7\right|,\lambda _8\left(-1\right)^b=-\left|\lambda _8\right|.$$ (16) The first condition in Eq. (15) means that SCPV cannot be achieved through violation of the second relation in Eq. (12). Still, $`v_1\ne v_2`$ is possible because of the presence of the coupling $`\lambda _7`$ in the scalar potential. Indeed, the stability conditions for the VEVs, assuming $`v_1\ne v_2`$, are $`\mu _1`$ $`=`$ $`-2\lambda _1\left(v_1^2+v_2^2\right)-\left(\lambda _4+\lambda _6\right)v_3^2,`$ $`\mu _2`$ $`=`$ $`-2\lambda _2v_3^2-\left(\lambda _4+\lambda _6\right)\left(v_1^2+v_2^2\right)+2\left|\lambda _7\right|v_1v_2,`$ (17) $`\left|\lambda _7\right|`$ $`=`$ $`\left(-2\lambda _1+\lambda _3+\lambda _5-2\left|\lambda _8\right|\right)v_1v_2/v_3^2.`$ Thus, in this model CP is spontaneously broken through $`v_1\ne v_2`$. The VEVs are not real, because of the phase $`\chi`$ coming from $`V_S`$, but their phases are immaterial for SCPV, as they merely adjust in order to offset the phases in the scalar potential; see Eqs. (15) and (16). As a remarkable consequence, even when $`v_1\ne v_2`$, CP violation remains absent from the self-interactions of the scalars. In particular, there are two physical charged scalars $`H_1^+`$ and $`H_2^+`$, $$\left(\begin{array}{c}G^+\\ H_1^+\\ H_2^+\end{array}\right)=T_H\left(\begin{array}{c}\phi _1^+\\ \phi _2^+\\ \phi _3^+\end{array}\right),$$ (18) and the $`3\times 3`$ matrix $`T_H`$ is orthogonal, i.e., real. The field $`G^+`$ is the Goldstone boson which is absorbed by the longitudinal component of the $`W^+`$. There are two physical neutral pseudoscalars $`A_1`$ and $`A_2`$, $$\left(\begin{array}{c}G^0\\ A_1\\ A_2\end{array}\right)=T_A\left(\begin{array}{c}\eta _1\\ \eta _2\\ \eta _3\end{array}\right),$$ (19) where $`G^0`$ is the Goldstone boson which is absorbed by the $`Z^0`$. The first rows of $`T_H`$ and of $`T_A`$ are given by $`(v_1,v_2,v_3)/v`$, where $`v=\sqrt{v_1^2+v_2^2+v_3^2}=174\mathrm{GeV}`$. Finally, there are three physical neutral scalars $`N_a`$, $$\left(\begin{array}{c}N_1\\ N_2\\ N_3\end{array}\right)=T_N\left(\begin{array}{c}\rho _1\\ \rho _2\\ \rho _3\end{array}\right).$$ (20) The matrices $`T_A`$ and $`T_N`$ are orthogonal. The important point is that the fields $`\rho`$ do not mix with the fields $`\eta`$. This, and the reality of $`T_H`$, are consequences of the CP invariance of the scalar potential, which is retained even after SCPV has been achieved through $`v_1\ne v_2`$. The Yukawa Lagrangian of the quarks is $`_\mathrm{Y}^{(\mathrm{q})}`$ $`=`$ $`-\left(\overline{q_{L1}}\varphi _1\mathrm{\Gamma }_1+\overline{q_{L2}}\varphi _2\mathrm{\Gamma }_1^{*}+\overline{q_{L3}}\varphi _3\mathrm{\Gamma }_2\right)n_R`$ (21) $`-\left(\overline{q_{L1}}\stackrel{~}{\varphi }_2\mathrm{\Delta }_1^{*}+\overline{q_{L2}}\stackrel{~}{\varphi }_1\mathrm{\Delta }_1+\overline{q_{L3}}\stackrel{~}{\varphi }_3\mathrm{\Delta }_2\right)p_R+\mathrm{H}.\mathrm{c}.`$ $`\mathrm{\Gamma }_1`$, $`\mathrm{\Gamma }_2`$, $`\mathrm{\Delta }_1`$, and $`\mathrm{\Delta }_2`$ are $`1\times 3`$ row matrices.
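The stationarity conditions (17) can be verified against the potential (13)–(14) with a few lines of arithmetic. A minimal sketch, with the relative phases already fixed at their Eq. (15) values so that the $`\lambda _7`$ and $`\lambda _8`$ terms enter with coefficients $`-|\lambda _7|`$ and $`-|\lambda _8|`$; the coupling and VEV values are hypothetical and chosen only so that $`|\lambda _7|`$ comes out positive:

```python
l1, l2, l3, l4, l5, l6 = 0.10, 0.50, 0.90, 0.30, 0.80, 0.20
l8 = 0.20                                   # |lambda_8|
v1, v2, v3 = 1.0, 1.7, 2.0                  # target VEVs, with v1 != v2

l7 = (-2*l1 + l3 + l5 - 2*l8) * v1 * v2 / v3**2    # |lambda_7| from Eq. (17)
mu1 = -2*l1*(v1**2 + v2**2) - (l4 + l6)*v3**2
mu2 = -2*l2*v3**2 - (l4 + l6)*(v1**2 + v2**2) + 2*l7*v1*v2

def V(u1, u2, u3):
    """Eqs. (13)-(14) evaluated on the VEVs, phases at their Eq. (15) values."""
    return (mu1*(u1**2 + u2**2) + mu2*u3**2
            + l1*(u1**4 + u2**4) + l2*u3**4
            + (l3 + l5 - 2*l8)*u1**2*u2**2
            + (l4 + l6)*(u1**2 + u2**2)*u3**2
            - 2*l7*u1*u2*u3**2)

h = 1e-6                                    # numerical gradient of V
grad = [(V(v1 + h, v2, v3) - V(v1 - h, v2, v3)) / (2*h),
        (V(v1, v2 + h, v3) - V(v1, v2 - h, v3)) / (2*h),
        (V(v1, v2, v3 + h) - V(v1, v2, v3 - h)) / (2*h)]
print(grad)   # all ~0: the asymmetric vacuum with v1 != v2 is stationary
```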
Because of CP symmetry, $`\mathrm{\Gamma }_2`$ and $`\mathrm{\Delta }_2`$ are real; on the other hand, $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Delta }_1`$ are in general complex. The mass matrices of the quarks are of the form $$M_n=\left(\begin{array}{c}v_1e^{i\theta _1}\mathrm{\Gamma }_1\\ v_2e^{i\theta _2}\mathrm{\Gamma }_1^{*}\\ v_3e^{i\theta _3}\mathrm{\Gamma }_2\end{array}\right),M_p=\left(\begin{array}{c}v_2e^{-i\theta _2}\mathrm{\Delta }_1^{*}\\ v_1e^{-i\theta _1}\mathrm{\Delta }_1\\ v_3e^{-i\theta _3}\mathrm{\Delta }_2\end{array}\right).$$ (22) The matrices $`M_n`$ and $`M_p`$ are bi-diagonalized by unitary matrices $`U_{L,R}^n`$ and $`U_{L,R}^p`$: $$U_{L}^{n\dagger }M_nU_R^n=M_d=\mathrm{diag}(m_d,m_s,m_b),U_{L}^{p\dagger }M_pU_R^p=M_u=\mathrm{diag}(m_u,m_c,m_t).$$ (23) The relationship between the quarks in the weak basis and in the mass basis is $$n_L=U_L^nd_L,n_R=U_R^nd_R,p_L=U_L^pu_L,p_R=U_R^pu_R.$$ (24) The Cabibbo–Kobayashi–Maskawa (CKM) matrix is $$V=U_{L}^{p\dagger }U_L^n.$$ (25) Because of the form of $`M_n`$ and $`M_p`$ in Eqs. (22), and in particular as $`\mathrm{\Gamma }_2`$ and $`\mathrm{\Delta }_2`$ are real, $`V`$ is intrinsically complex, i.e., $`J\equiv \mathrm{Im}\left(V_{us}V_{cb}V_{ub}^{*}V_{cs}^{*}\right)\ne 0`$, if and only if $`v_1\ne v_2`$. Indeed, when $`\mathrm{sin}\left(2\theta _3-\theta _1-\theta _2\right)=0`$ one finds that $$det\left(M_pM_p^{\dagger }M_nM_n^{\dagger }-M_nM_n^{\dagger }M_pM_p^{\dagger }\right)\propto v_2^2-v_1^2.$$ (26) As is well known, $`J`$ is proportional to the determinant in Eq. (26). Thus, in this model the spontaneous breaking of CP does not lead to CP violation in the scalar sector, but it does lead to a complex CKM matrix. The appearance of CP violation in this model is completely independent of the values of the phases of the VEVs. Indeed, even though $`\theta _2-\theta _1`$ and $`\theta _3-\theta _1`$ are in general non-zero, because of the arbitrary phase $`\chi`$ in $`V_S`$ (see Eqs. (15)), the appearance of CP violation hinges on $`v_1\ne v_2`$, and not on the values of $`\theta _2-\theta _1`$ and $`\theta _3-\theta _1`$. It is thus appropriate to describe this model as displaying real CP violation, even though the VEVs may have non-zero relative phases. ## 3 Consequences for strong CP violation As is well known, non-perturbative effects in Quantum Chromodynamics (QCD) may lead to P and CP violation, characterized by a parameter $`\theta =\theta _{\mathrm{QCD}}+\theta _{\mathrm{QFD}}`$, in hadronic processes. The first contribution to $`\theta`$, namely $`\theta _{\mathrm{QCD}}`$, characterizes P and CP violation in the vacuum of QCD. This contribution to $`\theta`$ may be assumed to vanish in a model, like the present one, whose Lagrangian conserves CP. On the other hand, $$\theta _{\mathrm{QFD}}=\mathrm{arg}det\left(M_nM_p\right)$$ (27) does not in general vanish. This is because the quark mass matrices must be complex, lest the CKM matrix be real, and there is in general no reason why their determinant should be real. From the experimental upper bound on the electric dipole moment of the neutron one finds that $`\theta <10^{-(9\text{–}10)}`$. The presence of such a small number in QCD constitutes the strong CP problem. (Footnote: The situation $`\theta =\pi`$ is equivalent to $`\theta =0`$, in that it also corresponds to the absence of P and CP violation by the strong interaction.) In the present model, $`\theta _{\mathrm{QCD}}`$ is zero because CP is a symmetry of the Lagrangian.
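The claim around Eq. (26) can be checked numerically. The sketch below builds random Yukawa rows with the structure of Eq. (22), imposes the phase condition $`\mathrm{sin}(2\theta _3-\theta _1-\theta _2)=0`$ enforced by Eq. (15), and watches the CP-odd determinant switch off at $`v_1=v_2`$; the coupling values are random illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
G1 = rng.normal(size=3) + 1j*rng.normal(size=3)   # complex row Gamma_1
D1 = rng.normal(size=3) + 1j*rng.normal(size=3)   # complex row Delta_1
G2 = rng.normal(size=3)                           # real row Gamma_2
D2 = rng.normal(size=3)                           # real row Delta_2
t1, t2 = 0.3, 0.9
t3 = (t1 + t2) / 2           # makes sin(2 t3 - t1 - t2) = 0, cf. Eq. (15)

def comm_det(v1, v2, v3=1.0):
    """det of the commutator in Eq. (26); a purely imaginary number."""
    Mn = np.array([v1*np.exp(1j*t1)*G1, v2*np.exp(1j*t2)*np.conj(G1),
                   v3*np.exp(1j*t3)*G2])
    Mp = np.array([v2*np.exp(-1j*t2)*np.conj(D1), v1*np.exp(-1j*t1)*D1,
                   v3*np.exp(-1j*t3)*D2])
    Hn, Hp = Mn @ Mn.conj().T, Mp @ Mp.conj().T
    return np.linalg.det(Hp @ Hn - Hn @ Hp)

print(comm_det(1.0, 1.0))    # ~0: v1 = v2 leaves CP unbroken
print(comm_det(1.0, 1.5))    # nonzero: v1 != v2 gives an intrinsically complex V
```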
As for $`\theta _{\mathrm{QFD}}`$, one gathers from the mass matrices in Eq. (22) that $`\mathrm{arg}detM_n`$ $`=`$ $`\theta _1+\theta _2+\theta _3+{\displaystyle \frac{\pi }{2}}\left(\text{mod}\pi \right),`$ $`\mathrm{arg}detM_p`$ $`=`$ $`-\theta _1-\theta _2-\theta _3-{\displaystyle \frac{\pi }{2}}\left(\text{mod}\pi \right),`$ (28) and therefore $`\theta _{\mathrm{QFD}}=0`$ at tree level. A non-zero $`\theta _{\mathrm{QFD}}`$ may however be generated at loop level. Let us denote the loop contributions to the mass matrices of the down-type and up-type quarks, as computed in the basis of the tree-level physical quarks $`d`$ and $`u`$, by $`\mathrm{\Sigma }_d`$ and $`\mathrm{\Sigma }_u`$, respectively. Thus, the quark mass Lagrangian at loop level is $$_{\mathrm{mass}}^{(\mathrm{q})}=-\overline{d_L}\left(M_d+\mathrm{\Sigma }_d\right)d_R-\overline{u_L}\left(M_u+\mathrm{\Sigma }_u\right)u_R+\mathrm{H}.\mathrm{c}.$$ (29) Then, $`\theta _{\mathrm{QFD}}`$ $`=`$ $`\mathrm{arg}det\left(M_d+\mathrm{\Sigma }_d\right)+\mathrm{arg}det\left(M_u+\mathrm{\Sigma }_u\right)`$ (30) $`=`$ $`\mathrm{arg}detM_d+\mathrm{arg}det\left(1+\mathrm{\Sigma }_dM_d^{-1}\right)+\mathrm{arg}detM_u+\mathrm{arg}det\left(1+\mathrm{\Sigma }_uM_u^{-1}\right)`$ $`=`$ $`\mathrm{arg}det\left(1+\mathrm{\Sigma }_dM_d^{-1}\right)+\mathrm{arg}det\left(1+\mathrm{\Sigma }_uM_u^{-1}\right)`$ $`=`$ $`\text{Im tr}\mathrm{ln}\left(1+\mathrm{\Sigma }_dM_d^{-1}\right)+\text{Im tr}\mathrm{ln}\left(1+\mathrm{\Sigma }_uM_u^{-1}\right).`$ I have used the fact that $`M_d`$ and $`M_u`$ are real matrices. The expression in Eq. (30) may be computed with the help of the identity, valid for small $`C`$, $$\mathrm{ln}\left(1+C\right)=C-\frac{C^2}{2}+\frac{C^3}{3}-\mathrm{}$$ (31) The dangerous contributions to the one-loop quark self-energies are the ones from diagrams with either charged or neutral scalars in the loop, as in Fig. 1. Let us therefore compute the Yukawa interactions of the model in the physical-quark basis. I first write the unitary matrices $`U_L^n`$ and $`U_L^p`$ as $$U_L^n=\left(\begin{array}{c}N_1\\ N_2\\ N_3\end{array}\right),U_L^p=\left(\begin{array}{c}P_1\\ P_2\\ P_3\end{array}\right),$$ (32) where the $`N_a`$ and the $`P_a`$ are $`1\times 3`$ row matrices. Let us define $`A_a\equiv P_a^{\dagger }N_a`$. Clearly, the CKM matrix $$V=U_{L}^{p\dagger }U_L^n=A_1+A_2+A_3.$$ (33) The unitarity of $`U_L^n`$ and $`U_L^p`$ implies $`A_aA_b^{\dagger }`$ $`=`$ $`\delta _{ab}P_a^{\dagger }P_a,`$ $`A_a^{\dagger }A_b`$ $`=`$ $`\delta _{ab}N_a^{\dagger }N_a.`$ (34) Moreover, $$A_1A_1^{\dagger }+A_2A_2^{\dagger }+A_3A_3^{\dagger }=A_1^{\dagger }A_1+A_2^{\dagger }A_2+A_3^{\dagger }A_3=1_{3\times 3}.$$ (35) The quark Yukawa Lagrangian in Eq. (21) may, when one takes into account Eqs.
(23) and (24), be written as $`_\mathrm{Y}^{(\mathrm{q})}`$ $`=`$ $`-\overline{d_L}M_dd_R-\overline{u_L}M_uu_R`$ (36) $`-\overline{d_L}\left({\displaystyle \frac{\rho _1+i\eta _1}{\sqrt{2}v_1}}A_1^{\dagger }A_1+{\displaystyle \frac{\rho _2+i\eta _2}{\sqrt{2}v_2}}A_2^{\dagger }A_2+{\displaystyle \frac{\rho _3+i\eta _3}{\sqrt{2}v_3}}A_3^{\dagger }A_3\right)M_dd_R`$ $`-\overline{u_L}\left({\displaystyle \frac{\rho _2-i\eta _2}{\sqrt{2}v_2}}A_1A_1^{\dagger }+{\displaystyle \frac{\rho _1-i\eta _1}{\sqrt{2}v_1}}A_2A_2^{\dagger }+{\displaystyle \frac{\rho _3-i\eta _3}{\sqrt{2}v_3}}A_3A_3^{\dagger }\right)M_uu_R`$ $`-\overline{u_L}\left({\displaystyle \frac{\phi _1^+}{v_1}}A_1+{\displaystyle \frac{\phi _2^+}{v_2}}A_2+{\displaystyle \frac{\phi _3^+}{v_3}}A_3\right)M_dd_R`$ $`+\overline{d_L}\left({\displaystyle \frac{\phi _2^{-}}{v_2}}A_1^{\dagger }+{\displaystyle \frac{\phi _1^{-}}{v_1}}A_2^{\dagger }+{\displaystyle \frac{\phi _3^{-}}{v_3}}A_3^{\dagger }\right)M_uu_R+\mathrm{H}.\mathrm{c}.`$ One sees from Eqs. (20) and (36) that the scalars $`N_a`$ couple to the quarks with an Hermitian matrix multiplied on the right by the quark mass matrix. The pseudoscalars $`A_1`$ and $`A_2`$ have similar Yukawa couplings, with the Hermitian matrix substituted by an anti-Hermitian matrix. Under these conditions, the self-energies $`\mathrm{\Sigma }_d`$ and $`\mathrm{\Sigma }_u`$ following from the diagram in Fig. 1 satisfy $$\mathrm{\Sigma }_dM_d^{-1}=\left(\mathrm{\Sigma }_dM_d^{-1}\right)^{\dagger },\mathrm{\Sigma }_uM_u^{-1}=\left(\mathrm{\Sigma }_uM_u^{-1}\right)^{\dagger },$$ (37) and do not contribute to $`\theta _{\mathrm{QFD}}`$. Now consider the up-quark self-energy $`\mathrm{\Sigma }_u`$ following from a loop with the charged scalar $`H_1^+`$, depicted in Fig. 2. Let us write $`H_1^+=\sum _{a=1}^3c_a\phi _a^+`$, where the coefficients $`c_a`$ are real. Then, Fig. 2 yields $$\mathrm{\Sigma }_uM_u^{-1}\propto \left(\frac{c_1}{v_1}A_1+\frac{c_2}{v_2}A_2+\frac{c_3}{v_3}A_3\right)f(M_d^2,m_{H_1}^2)\left(\frac{c_2}{v_2}A_1^{\dagger }+\frac{c_1}{v_1}A_2^{\dagger }+\frac{c_3}{v_3}A_3^{\dagger }\right),$$ (38) where $`f`$ is the real function issuing from the loop integration. Using the cyclic property of the trace together with Eq. (34), one finds that $`\text{tr}\left(\mathrm{\Sigma }_uM_u^{-1}\right)`$ is real, and therefore it does not contribute to $`\theta _{\mathrm{QFD}}`$. Thus, $`\theta _{\mathrm{QFD}}`$ vanishes at one-loop level. It is important to emphasize the crucial role that real CP violation plays in attenuating the strong CP problem in this model. Indeed, it is this peculiar form of SCPV which allows CP violation to be absent from the scalar mixing, even while it persists in the fermion sector. This means that the $`\rho _a`$ do not mix with the $`\eta _a`$, and also that the physical charged scalars are real linear combinations of the $`\phi _a^+`$. The same thing happened in the previous model of real CP violation that I have put forward. Unfortunately, in the present model $`\theta _{\mathrm{QFD}}`$ arises at two-loop level, and it is not clear to me that there is any suppression mechanism which can make it small enough. Therefore, I cannot claim that this model completely avoids the strong CP problem. To be fair, however, I must emphasize that there are very few viable models which solve the strong CP problem in the context of extensions of the standard model only with extra scalars; in particular, invisible-axion models are tightly constrained by experimental tests.
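The trace argument below Eq. (38) is easy to confirm numerically. A minimal sketch with random unitary mixing matrices; the reality of the loop function $`f`$, which real CP violation guarantees, is what matters, as the contrast at the end shows (all numerical values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
def rand_unitary(n):
    """A random n x n unitary matrix via QR with phase-fixed diagonal."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

ULn, ULp = rand_unitary(3), rand_unitary(3)
A = [np.outer(ULp[a].conj(), ULn[a]) for a in range(3)]  # A_a = P_a^dagger N_a

c = rng.normal(size=3)              # real coefficients c_a of H_1^+
v = np.array([1.0, 1.3, 2.1])       # VEVs (illustrative)
f = np.diag(rng.normal(size=3))     # real, diagonal loop function

left  = c[0]/v[0]*A[0] + c[1]/v[1]*A[1] + c[2]/v[2]*A[2]
right = c[1]/v[1]*A[0] + c[0]/v[0]*A[1] + c[2]/v[2]*A[2]  # note the 1<->2 swap
sigma = left @ f @ right.conj().T   # the structure of Eq. (38)
print(np.trace(sigma).imag)         # ~1e-16: real, so no shift in theta_QFD

# Contrast: a complex loop function, as CP violation in the scalar sector
# would produce, makes the trace complex and regenerates theta_QFD.
f_cpv = f + 0.1j*np.diag(rng.normal(size=3))
print(np.trace(left @ f_cpv @ right.conj().T).imag)  # generically nonzero
```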
It must also be pointed out that this model has unsuppressed flavour-changing neutral Yukawa interactions, which may give undesirably large contributions to CP violation, $`K^0`$–$`\overline{K^0}`$ mixing, $`B_d^0`$–$`\overline{B_d^0}`$ mixing, and the decay $`K_L\to \mu ^+\mu ^{-}`$. ## 4 Concluding remarks Spontaneous CP violation may provide a way of eliminating the strong CP problem of the standard model. In order to reach that goal, however, it is usually important that CP violation remain absent from the scalar mixing. This may be achieved if SCPV does not hinge on the generation of non-trivial phases among the VEVs, but rather on the generation of real (or else with trivial relative phases) VEVs, which however do not satisfy the equalities among themselves implied by the initial CP symmetry. We may call this situation “real CP violation”. As far as I can see, the simplest model displaying these features involves, for an odd number of fermion generations, three Higgs doublets. However, if there were four (or, more generally, an even number of) generations, two Higgs doublets would be enough. The CP transformation would read $$\varphi _1\stackrel{\mathrm{CP}}{\to }\varphi _{2}^{*},\varphi _2\stackrel{\mathrm{CP}}{\to }\varphi _{1}^{*},$$ (39) $$q_{L1}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L3}}^T,q_{L2}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L4}}^T,q_{L3}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L1}}^T,q_{L4}\stackrel{\mathrm{CP}}{\to }\gamma ^0C\overline{q_{L2}}^T,$$ (40) and an extra discrete symmetry would change the signs of $`\varphi _2`$, $`q_{L3}`$, and $`q_{L4}`$. One would thus obtain fermion mass matrices of the form $$M_n=\left(\begin{array}{c}v_1e^{i\theta _1}\mathrm{\Gamma }\\ v_2e^{i\theta _2}\mathrm{\Gamma }^{*}\end{array}\right),M_p=\left(\begin{array}{c}v_1e^{-i\theta _1}\mathrm{\Delta }\\ v_2e^{-i\theta _2}\mathrm{\Delta }^{*}\end{array}\right),$$ (41) with $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }`$ being $`2\times 4`$ matrices. CP violation in fermion mixing would be present as long as $`v_1\ne v_2`$, yet $`\theta _{\mathrm{QFD}}`$ would vanish both at tree level and at one-loop level. (Footnote: The scalar potential of such a model would need either soft CP breaking or extra scalar fields in order to produce $`v_1\ne v_2`$.) More generally, I have shown in this paper that real CP violation is a distinct possibility whenever the Lagrangian respects a non-trivial CP symmetry together with some other discrete symmetry. Real CP violation may constitute a useful option for spontaneous CP breaking for other purposes besides solving the strong CP problem; in particular, the non-trivial CP transformation associated with real CP violation may be used to obtain specific relationships among the fermion masses and mixings.
# Simulations of Protein Folding ## 1 PROTEINS A protein is a linear chain of amino acids. The proteins of natural living organisms are composed of 20 different types of amino acids. A typical protein is a polymer of 300 amino acids, of which there are $`20^{300}=2\times 10^{390}`$ different possibilities. The human body uses about 80,000 different proteins for most of its functionality, including structure, communication, transport, and catalysis. The order of the amino acids in the proteins of an organism is specified by the order of the base pairs in the deoxyribonucleic acid, DNA, of its genome. Human DNA consists of $`10^9`$ base pairs with a total length of 3 m. Since three base pairs specify an amino acid, the code for the 80,000 human proteins requires only $`3\times 300\times 80,000=7\times 10^7`$ base pairs, or 7% of the genome. ### 1.1 Amino Acids The twenty amino acids differ only in their side chains. The key atom in an amino acid is a carbon atom called the $`\alpha`$-carbon, $`\mathrm{C}_\alpha`$. Four atoms are attached to the $`\mathrm{C}_\alpha`$ by single covalent bonds: a hydrogen atom H, a carbonyl-carbon atom C, a nitrogen atom N, and the first atom of the side chain R of the amino acid. The carbonyl carbon C is connected to an oxygen atom by two covalent bonds and to a hydroxyl group OH by another covalent bond; the nitrogen atom N is attached to two hydrogen atoms, forming an amine group $`\mathrm{NH}_2`$. The backbone of an amino acid is the triplet N, $`\mathrm{C}_\alpha`$, C. Of the 20 amino acids found in biological systems, 19 are left-handed. If one looks at the $`\mathrm{C}_\alpha`$ from the H, then the order of the structures CO, R, and N is clockwise (CORN). The one exception is glycine, in which the entire side chain is a single hydrogen atom; glycine is not chiral. ### 1.2 Globular Proteins There are three classes of proteins: fibrous, membrane, and globular. Fibrous proteins are the building materials of bodies; collagen is used in tendon and bone, $`\alpha`$-keratin in hair and skin. Membrane proteins sit in the membranes of cells, through which they pass molecules and messages. Globular proteins catalyze chemical reactions; enzymes are globular proteins. Under normal physiological conditions, saline water near pH = 7 at 20–40 °C, proteins assume their native forms. Globular proteins fold into compact structures. The biological activity of a globular protein is largely determined by its unique shape, which in turn is determined by its primary structure, that is, by its sequence of amino acids. ### 1.3 Kinds of Amino Acids The amino acids that occur in natural living organisms are of four kinds. Seven are nonpolar: alanine (ala), valine (val), phenylalanine (phe), proline (pro), methionine (met), isoleucine (ile), and leucine (leu). They avoid water and are said to be *hydrophobic*. Four are charged: aspartic acid (asp) and glutamic acid (glu) are negative, lysine (lys) and arginine (arg) are positive. Eight are polar: serine (ser), threonine (thr), tyrosine (tyr), histidine (his), cysteine (cys), asparagine (asn), glutamine (gln), and tryptophan (trp). The four charged amino acids and the eight polar amino acids seek water and are said to be *hydrophilic*. Glycine falls into a class of its own.
### 1.4 Protein Geometry

When two amino acids are joined to make a dipeptide, first the hydroxyl group OH attached to the carbonyl carbon C of the first amino acid combines with one of the two hydrogen atoms attached to the nitrogen N of the second amino acid to form a molecule of water H<sub>2</sub>O, and then a peptide bond forms between the carbonyl carbon C of the first amino acid and the nitrogen N of the second amino acid. A peptide bond is short, 1.33 Å, and resists rotations because it is partly a double bond. To a good approximation, the six atoms C<sub>α1</sub>, C′<sub>1</sub>, O<sub>1</sub>, N<sub>1</sub>, H<sub>1</sub>, and C<sub>α2</sub> lie in a plane, called the peptide plane. If a third amino acid is added to the carbonyl carbon C′<sub>2</sub> of the second amino acid, then the six atoms C<sub>α2</sub>⋯C<sub>α3</sub> also will lie in a (typically different) plane. Exceptionally, the peptide plane of proline is not quite flat because the side chain loops around, and its third carbon atom forms a bond with the nitrogen atom of the proline backbone.

### 1.5 The Protein Backbone

The protein backbone consists of the chain of triplets (N C<sub>α</sub> C)<sub>1</sub>, (N C<sub>α</sub> C)<sub>2</sub>, (N C<sub>α</sub> C)<sub>3</sub>, …, (N C<sub>α</sub> C)<sub>n</sub>. Apart from the first nitrogen N<sub>1</sub> and the last carbonyl carbon C′<sub>n</sub>, this backbone (and its oxygen and amide hydrogen atoms) consists of a chain of peptide planes, C<sub>α1</sub>⋯C<sub>α2</sub>, …, C<sub>αn−1</sub>⋯C<sub>αn</sub>. Since the angles among the four bonds of the C<sub>α</sub>’s are fixed, the shape of the backbone of peptide planes is determined by the angles of rotation about the single bonds that link each C<sub>α</sub> to the N that precedes it and the C that follows it. The angle about the N<sub>i</sub>–C<sub>αi</sub> bond is called $`\varphi _i`$; that about the C<sub>αi</sub>–C′<sub>i</sub> bond is $`\psi _i`$. The 2n angles $`(\varphi _1,\psi _1),\dots ,(\varphi _n,\psi _n)`$ determine the shape of the backbone of the protein. These angles are the main kinematic variables of a protein. The principal properties of proteins are discussed in the classic article by Jane Richardson.

## 2 PROTEIN FOLDING

The problem of protein folding is to predict the natural folded shape of a protein under physiological conditions from the DNA that defines its sequence of amino acids, which is its primary structure. This difficult problem has been approached by several techniques. Some scientists have applied all-atom molecular dynamics. We have used the Monte Carlo method in a manner inspired by the work of Ken Dill *et al.* Our Monte Carlo simulations are guided by a simple potential with three terms. The first term embodies the Pauli exclusion principle. Because the outer parts of atoms are electrons, which are fermions, the Pauli exclusion principle requires that the side chains of a protein not overlap by more than a fraction of an angstrom. In our present simulations, we have represented each side chain as a sphere centered at the first carbon atom, the $`C_\beta `$, of the side chain, or at the hydrogen atom that is the side chain in the case of glycine. The second term represents the mutual attraction of nonpolar or hydrophobic amino acids.
In effect, the water electric dipoles, the free protons, the free hydroxyl radicals, and the other ions of the cellular fluid attract the charged and polar amino acids of a protein but leave unaffected the nonpolar amino acids. The resulting net inward force on the nonpolar amino acids drives them into a core which can be as densely packed as an ionic crystal. The third term is a very phenomenological representation of the effects of steric repulsion and hydrogen bonding. For a given amino acid, this term is more negative when its pair of angles $`\varphi _i`$ and $`\psi _i`$ are in a zone that avoids steric clashes between the backbone and the side chain and that encourages the formation of hydrogen bonds between NH<sup>+</sup> and O<sup>-</sup> groups. One of these Ramachandran zones favors the formation of $`\alpha `$ helices, others favor $`\beta `$ structures. We incorporate these zones in a Metropolis step with two scales, which we call zoning with memory. Each Monte Carlo trial move begins with a random number that determines whether the angles $`\varphi _i`$ and $`\psi _i`$ of residue $`i`$ will change zone, *e.g.,* from its present zone to the $`\alpha `$ zone, the $`\beta `$ zone, or to the miscellaneous zone. If the zone is changed, then the angles $`\varphi _i`$ and $`\psi _i`$ revert to the values they possessed when residue $`i`$ was last in that zone. The trial move is then modified slightly and randomly.

### 2.1 Rotations

We have derived a simple formula for the 3$`\times `$3 real orthogonal matrix that represents a right-handed rotation by $`\theta =|\vec{\theta }|`$ radians about the axis $`\widehat{\theta }=\vec{\theta }/\theta `$: $$e^{-i\vec{\theta }\cdot \vec{J}}=\mathrm{cos}\theta \,I-i\,\widehat{\theta }\cdot \vec{J}\,\mathrm{sin}\theta +(1-\mathrm{cos}\theta )\,\widehat{\theta }\,\widehat{\theta }^{\mathsf{T}}$$ in which the generators $`(J_k)_{ij}=iϵ_{ikj}`$ satisfy $`[J_i,J_j]=iϵ_{ijk}J_k`$ and $`\mathsf{T}`$ means transpose. In terms of indices, this formula for $`R(\vec{\theta })=e^{-i\vec{\theta }\cdot \vec{J}}`$ is $$R(\vec{\theta })_{ij}=\delta _{ij}\mathrm{cos}\theta -\mathrm{sin}\theta \,ϵ_{ijk}\widehat{\theta }_k+(1-\mathrm{cos}\theta )\widehat{\theta }_i\widehat{\theta }_j.$$ In these formulae $`ϵ_{ijk}`$ is totally antisymmetric with $`ϵ_{123}=1`$, and sums over $`k`$ from 1 to 3 are understood.

### 2.2 Distance

A conventional measure of the quality of a theoretical fold is the root-mean-square (rms) distance $`d`$ between the positions $`\vec{r}(i)`$ of the $`\alpha `$ carbons of the folded protein and those $`\vec{x}(i)`$ of the native structure of the protein, $$d=\sqrt{\frac{1}{n}\sum _{i=1}^n\left(\vec{r}(i)-\vec{x}(i)\right)^2}.$$ The native states of many proteins are available from http://www.rcsb.org/pdb/. We have derived a formula for this distance in terms of the centers of mass $`\overline{\vec{r}}=(1/n)\sum _{j=1}^n\vec{r}(j)`$ and $`\overline{\vec{x}}=(1/n)\sum _{j=1}^n\vec{x}(j)`$, the relative coordinates $`\vec{q}(i)=\vec{r}(i)-\overline{\vec{r}}`$ and $`\vec{y}(i)=\vec{x}(i)-\overline{\vec{x}}`$, their inner products $`Q^2=\sum _{i=1}^n\vec{q}(i)^2`$ and $`Y^2=\sum _{i=1}^n\vec{y}(i)^2`$, and the matrix that is the sum of their outer products $`B=\sum _{i=1}^n\vec{q}(i)\,\vec{y}(i)^{\mathsf{T}}`$.
If $`(B^{\mathsf{T}})_{kl}=B_{lk}=\sum _{i=1}^nq(i)_l\,y(i)_k`$ denotes the transpose of this 3$`\times `$3 matrix $`B`$ and tr denotes the trace, then the rms distance $`d`$ is $$d=\left\{\frac{1}{n}\left[Q^2+Y^2-2\,\mathrm{tr}\left(BB^{\mathsf{T}}\right)^{\frac{1}{2}}\right]\right\}^{\frac{1}{2}}.$$

### 2.3 Two Proteins

We have performed simulations on a protein fragment of 36 amino acids called the villin headpiece (1VII). We begin by rotating the $`2n`$ dihedral angles $`\varphi `$ and $`\psi `$ of the protein to $`\pi `$, except for the angle $`\varphi `$ of proline. In this denatured starting configuration, the average rms distance $`d`$ is 29 Å. Our best simulations so far fold the villin headpiece to a mean rms distance $`d`$ that is slightly less than 5 Å from its native state. Our second protein is a 56-residue fragment of the 63-residue protein ColE1 Rop (1ROP). From a denatured configuration with $`d=55`$ Å, our code folds this protein to a mean rms distance $`d`$ of slightly less than 3.2 Å from its native state.

## ACKNOWLEDGEMENTS

We wish to thank Ken Dill for many key suggestions; Charles Beckel, John McIver, Susan Atlas, and Sorin Istrail for helpful conversations; Sean Cahill and Gary Herling for critical readings of the manuscript; and Sau Lan Wu and John Ellis for their hospitality at CERN. We have performed our computations on two (dual Pentium II) personal computers running Linux; we are grateful to Intel and Red Hat for reducing the cost of scientific computing.
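As an illustrative aside, the closed-form distance of section 2.2 is straightforward to implement. The sketch below is a minimal version, assuming that $`\mathrm{tr}(BB^{\mathsf{T}})^{1/2}`$ equals the sum of the singular values of $`B`$; the sanity check that a rigidly rotated and translated copy of a structure sits at distance $`d\approx 0`$ is our own illustration, not a result from the simulations above.

```python
# Minimal sketch of the rms-distance formula of section 2.2.
import numpy as np

def rms_distance(r, x):
    """r, x: (n, 3) arrays of alpha-carbon positions."""
    q = r - r.mean(axis=0)          # relative coordinates q(i)
    y = x - x.mean(axis=0)          # relative coordinates y(i)
    Q2, Y2 = np.sum(q * q), np.sum(y * y)
    B = q.T @ y                     # 3x3 sum of outer products q(i) y(i)^T
    # tr (B B^T)^(1/2) equals the sum of the singular values of B
    trace = np.linalg.svd(B, compute_uv=False).sum()
    return np.sqrt(max((Q2 + Y2 - 2.0 * trace) / len(r), 0.0))

# Check: a rotated and translated copy should be at distance ~ 0.
rng = np.random.default_rng(0)
x = rng.normal(size=(36, 3))                    # a 36-residue "protein"
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
r = x @ R.T + np.array([1.0, -2.0, 3.0])
print(rms_distance(r, x))                       # effectively zero
```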
no-problem/9909/astro-ph9909025.html
ar5iv
text
# Evidence Against an Association Between Gamma-Ray Bursts and Type I Supernovae

## 1 Introduction

The discovery of supernova 1998bw (Galama et al. 1998) within the 8 arcminute radius of the BeppoSAX WFC error circle for GRB 980425 (Soffitta et al. 1998) has led to the hypothesis that some GRB sources are Type Ib-Ic SNe. There are some serious difficulties with this interpretation of the data for GRB 980425/SN 1998bw: the supernova occurred outside the NFI error circle of a fading X-ray source (Pian et al. 1998a, 1998b, 1999; Piro et al. 1998). This source had a temporal decay consistent with a power-law index of $`-1.2`$ (Pian et al. 1998b), which resembles the temporal behavior of X-ray afterglows seen in almost every other GRB followed up with the SAX NFI. It must therefore be viewed as a strong candidate to be the X-ray afterglow of GRB 980425. Moreover, if the association between GRB 980425 and SN 1998bw were true, the luminosity of this burst would be $`10^{46}`$ erg s<sup>-1</sup> and its energy would be $`10^{47}`$ erg. Each would therefore be five orders of magnitude less than that of other bursts, and the behavior of the X-ray and optical afterglow would be very different from those of the other BeppoSAX bursts, yet the burst itself is indistinguishable from other BeppoSAX and BATSE GRBs with respect to duration, time history, spectral shape, peak flux, and fluence (Galama et al. 1998). In view of these difficulties, the safest procedure is to regard the association as a hypothesis that is to be tested by searching for correlations between SNe and GRBs in catalogs of SNe and GRBs, excluding SN 1998bw and GRB 980425. Wang & Wheeler (1998a; see also Wang & Wheeler 1998b) have correlated BATSE GRBs with Type Ia and with Type Ib-Ic SNe. They found that the data were “consistent” with the assumption of an association between GRBs and Type Ib-Ic SNe. In the present work we improve upon the Wang & Wheeler correlative study by introducing an analysis method based on Bayesian inference, and therefore using the likelihood function, that incorporates information about the BATSE position errors in a non-arbitrary way and that is free of the ambiguities of a posteriori statistics. The method also accounts for the fact that the BATSE temporal exposure is less than unity.

## 2 Methodology

We use the BATSE 4B GRB Catalog (Meegan et al. 1998), and BATSE bursts that occurred subsequent to the 4B catalog but before 1 May 1998. We also use the Ulysses supplement to the BATSE 4B catalog, which contains 219 BATSE bursts for which 3rd IPN annuli have been determined (Hurley et al. 1998). Hurley (private communication, 1998) has kindly made available at our request 3rd IPN annuli for an additional 9 BATSE bursts that occurred subsequent to the period of the BATSE 4B catalog but before 1 May 1998. We have compiled three Type I SNe samples. The first is a sample of 37 Type Ia SNe at low redshift ($`z<0.1`$) from the CfA SN Search Team (Riess 1998, private communication). The second is a sample of 46 moderate redshift ($`0.1<z<0.830`$) Type Ia SNe from the Supernova Cosmology Project (Perlmutter 1998, private communication). The third sample consists of 20 Type Ib, Ib/c, and Ic SNe compiled from IAU Circulars and various SNe catalogs. We compare three hypotheses:

$`H_1`$: The association between SNe and GRBs is real.
If a SN is observed, there is a chance $`ϵ`$ that BATSE sees the associated GRB, where $`ϵ`$ is the average BATSE temporal exposure. While $`ϵ`$ varies with declination, the variation is modest and we neglect it. The probability density for the time of occurrence of the $`i`$th supernova is assumed uniform in the interval $`[t_i,t_i+\tau _i]`$, so that all GRBs that occur in that interval have an equal prior probability of being associated with the SN.

$`H_1^{\prime }`$: The association is real, but only a fraction $`f`$ of detectable SNe produce detectable GRBs.

$`H_2`$: There is no association between SNe and GRBs.

We calculate the Bayesian odds, $`𝒪`$, favoring $`H_1`$ over $`H_2`$. The details of the calculation are presented in Graziani et al. (1998). We separately compare $`H_1^{\prime }`$ to $`H_2`$, denoting the odds favoring $`H_1^{\prime }`$ over $`H_2`$ by $`𝒪^{\prime }`$. Finally, we calculate the posterior probability distribution for the parameter $`f`$ in the model $`H_1^{\prime }`$, and infer an upper limit on its value.

## 3 Results

We find overwhelming odds against the hypothesis that all Type Ia supernovae produce gamma-ray bursts, whether at low redshift ($`𝒪=10^9:1`$) or high redshift ($`𝒪=10^{12}:1`$). We also find large odds ($`𝒪^{\prime }=34:1`$) against the hypothesis that a fraction of Type Ia supernovae produce observable gamma-ray bursts. We find very large odds ($`𝒪=6000:1`$) against the hypothesis that all Type Ib, Ib/c, and Ic supernovae produce observable gamma-ray bursts. We also find moderate odds ($`𝒪^{\prime }=6:1`$) against the hypothesis that a fraction of Type Ib-Ic supernovae produce observable bursts. If we nevertheless assume that this hypothesis is correct, we find that the fraction $`f_{\mathrm{SN}}`$ of Type Ib, Ib/c and Ic SNe that produce observable GRBs must be less than 0.17, 0.42, and 0.70 with 68%, 95%, and 99.6% probability, respectively. These limits are relatively weak because of the modest size (20 events) of our sample of Type Ib-Ic SNe.

## 4 Discussion

We find large odds against the hypothesis that all Type Ib, Ib/c, and Ic supernovae produce observable gamma-ray bursts, the specific hypothesis considered by Wang & Wheeler (1998a, 1998b). The odds against the hypothesis that a fraction of Type Ib-Ic supernovae produce observable bursts are more moderate, because they account for the possibility that $`f`$ is so small that no SN-produced GRBs are detected, in which case the data cannot distinguish between $`H_1^{\prime }`$ and $`H_2`$. Type Ib, Ib/c and Ic SNe are now being found at a rate of about eight a year, so that the size of the sample of known Type Ib-Ic SNe should triple within about five years. One might hope that future analyses, using the statistical methodology that we have presented here, could either show that the association between Type Ib-Ic SNe and GRBs is rare, or confirm the proposed association. Unfortunately, achieving the former will be difficult: the limit on the fraction $`f_{\mathrm{SN}}`$ of Type Ib-Ic SNe that produce observable GRBs scales like $`N_{\mathrm{SN}}^{-1}`$ for large $`N_{\mathrm{SN}}`$, and therefore tripling the size of the sample of known Type Ib-Ic SNe without observing an additional possible SN–GRB association would only reduce the 99.7% probability upper limit on $`f_{\mathrm{SN}}`$ to 0.24. The parameter $`f`$ represents the fraction of observable SNe which can produce observable GRBs.
Some observable SNe might not produce observable GRBs because of intrinsic effects, such as beaming, whereas others might not do so because the sampling distance for SN-produced GRBs could be less than the sampling distance for the SNe themselves. It is possible to separate these two effects, by writing $`f=f_{\mathrm{intrinsic}}\times f_{\mathrm{sampling}}`$. Obviously, the more interesting quantity is $`f_{\mathrm{intrinsic}}`$, since it addresses the nature of the GRB sources. We may attempt to find a constraint on $`f_{\mathrm{intrinsic}}`$ by assuming the correctness of the identification of SN 1998bw with GRB 980425 and using that association to estimate $`f_{\mathrm{sampling}}`$, under the (dubious) assumption that the GRBs produced by Type Ib-Ic SNe are standard candles. The result is $`f_{\mathrm{sampling}}\approx 1.9\times 10^{-3}`$. This is rather bad news for the prospect of constraining $`f_{\mathrm{intrinsic}}`$, since the product of $`f_{\mathrm{intrinsic}}`$ and $`f_{\mathrm{sampling}}`$ is only constrained by the data to be less than about 0.7. Thus this argument can place no constraint on $`f_{\mathrm{intrinsic}}`$. Elsewhere, we show that placing a meaningful constraint on $`f_{\mathrm{intrinsic}}`$ would require a GRB experiment approximately 80 times more sensitive than BATSE (Graziani et al. 1998). One can also approach the proposed association between SNe and GRBs from the opposite direction. The interesting question, from this point of view, is: what fraction $`f_{\mathrm{GRB}}`$ of the GRBs detected by BATSE could have been produced by Type Ib-Ic SNe? (Note that this is different from the question addressed by Kippen et al. (1998), who constrained the fraction of BATSE GRBs that could have been produced by known SNe.) Such a limit may be derived under the same assumptions as made in the previous paragraph. We find that no more than 90 SNe could have produced GRBs detectable by BATSE, indicating that $`f_{\mathrm{GRB}}`$ can be no more than about 5%.
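To make the scaling of these limits concrete, consider a toy, single-parameter version of the upper-limit calculation (a deliberate simplification of the full likelihood analysis described above, with an assumed round value for the exposure $`ϵ`$): with zero observed SN–GRB associations among $`N`$ SNe and a per-SN detection probability $`ϵf`$, the likelihood is $`(1-ϵf)^N`$, and a flat prior on $`f`$ gives the credible upper limit in closed form.

```python
# Toy upper limit on the fraction f of Type Ib-Ic SNe that produce
# observable GRBs: zero associations among N SNe, detection probability
# eps*f per SN, flat prior on f in [0, 1].  A simplified stand-in for
# the full Bayesian analysis; eps = 0.32 is an assumed effective exposure.

def f_upper_limit(N, P, eps):
    # posterior ~ (1 - eps*f)**N; invert the normalized CDF at credibility P
    norm = 1.0 - (1.0 - eps) ** (N + 1)
    return (1.0 - (1.0 - P * norm) ** (1.0 / (N + 1))) / eps

eps = 0.32
for P in (0.68, 0.95, 0.996):
    print(P, round(f_upper_limit(20, P, eps), 2))
# -> roughly 0.16, 0.41, 0.71: in line with the quoted 0.17, 0.42, 0.70
print(0.997, round(f_upper_limit(60, 0.997, eps), 2))
# -> roughly 0.28 for a tripled sample: the same slow, ~1/N improvement
```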
no-problem/9909/hep-th9909081.html
ar5iv
text
## 1 Introduction

In this short note we are going to examine the physics of a single flat $`D`$–brane in the presence of a large background electromagnetic field. The problem has lately received considerable attention and has been analyzed in complete detail in the recent work. In particular it has been shown there that, in order to describe the physics of the fluctuating electromagnetic field, one has two equivalent options. On the one hand, one can follow the standard treatment of the subject in terms of an ordinary $`U(1)`$ gauge field in the presence of the background field strength. On the other hand, one can fully include the effects of the background by rewriting the action in terms of a $`U(1)`$ gauge field of noncommutative gauge theory. In this formulation, the background is used to define a Poisson structure and a star product on the brane world–volume, which are then used to define the noncommutative gauge theory. Ordinary gauge transformations, which form an abelian group, are replaced by noncommutative gauge transformations, which give rise to a non-abelian group. Nonetheless, configurations that are gauge equivalent in one description are gauge equivalent in the other, and therefore the gauge orbits are the same in both cases. Finally, the authors of that work identify a specific $`\alpha ^{\prime }\to 0`$ limit in which the noncommutative description simplifies considerably and reduces to standard noncommutative Yang–Mills theory. In a previous work, a similar setting and limit was considered in the analysis of $`D2`$–branes in Type IIA string theory. The motivations for that analysis are quite different, and the discussion is driven by the attempt to connect the physics of a $`D2`$–brane to that of an $`M2`$–brane in the $`11`$–dimensional strong coupling limit of Type IIA. In particular, in order to connect with the $`11`$–dimensional interpretation of the theory, it was crucial there to treat on the same footing the directions transverse and parallel to the brane. This can be achieved by changing coordinates on the world–volume of the brane, so as to eliminate any fluctuation of the field strength in favor of fluctuations of the induced metric. Since the action governing the brane is invariant under diffeomorphisms, the coordinate change can be considered as a field redefinition, and does not alter the physics. On the other hand, in terms of the new fields, the action for a $`D2`$–brane possesses a smooth polynomial limit in the $`11`$–dimensional limit of Type IIA. In this note we will show that, in the specific case of a $`D2`$–brane, the limit considered in the earlier work is the same as the one described above. Moreover, we will show that the coordinate change considered there is the same, to leading order in derivatives, as the one appearing in the noncommutative description. The result can be understood as follows. As we have briefly described above, and as we shall explain in detail later, it is convenient to change coordinates in order to eliminate any fluctuations of the $`U(1)`$ field strength. This change of coordinates is not unique, but is defined up to diffeomorphisms of the world–volume of the brane which leave invariant the background field strength $`2`$–form. The group of such diffeomorphisms is non-abelian, and has as infinitesimal generators the Hamiltonian vector fields defined in terms of the background, now considered as a symplectic structure on the world–volume. These diffeomorphisms replace the ordinary abelian gauge invariance of the original theory, as in the noncommutative formulation.
Moreover, to leading order in derivatives, commutators in terms of the star product are nothing but Poisson brackets with respect to the symplectic structure defined by the background. This explains why the change of coordinates is the same to leading order. Given the above facts, we show that the results of the earlier work, generalized to the case of a generic $`Dp`$–brane, can be used to streamline the proof of the equivalence of the standard Born–Infeld action with noncommutative Yang–Mills theory, in the $`\alpha ^{\prime }\to 0`$ limit described above. Moreover, in the large wave–length regime, the results in this note give a clear geometric interpretation to the field redefinition relating the two descriptions. We are going to use primarily the notation of the noncommutative analysis, in order to allow a quick comparison of the equations. Moreover, given the nature of this short note, we are not including an extensive reference list; for a more complete bibliography we refer the reader to the two papers discussed above.

## 2 The Coordinate Redefinition

In this section we are going to analyze the physics of a flat $`Dp`$–brane in Type II string theory, whose world–volume is extended in the spacetime directions $`0,\dots ,p`$. We are not going to consider the transverse motions of the brane, and therefore we will limit our attention exclusively to the degrees of freedom of the $`U(1)`$ gauge potential living on the brane world–volume. In particular we will analyze the dynamics of the $`Dp`$–brane in the presence of a large background electromagnetic field. We will denote with $`X^i`$, $`i=0,\dots ,p`$, the coordinates on the target spacetime which are parallel to the $`Dp`$–brane. The world–volume $`\mathrm{\Sigma }=𝐑^{p+1}`$ of the brane can be parameterized with the natural coordinates $`x^i`$ $`(i=0,\dots ,p)`$ inherited from the target, which are defined by $$X^i(x)=x^i.$$ The embedding functions are therefore non–dynamical, and all of the physics of the $`Dp`$–brane (recall that we are neglecting transverse motions) is described by the fluctuating $`U(1)`$ gauge potential. As we already mentioned, we are going to work in the presence of a large constant background magnetic field $`B_{ij}`$. We will assume in this paper that the constant matrix $`B`$ is of maximal rank. For the most part of what follows, we shall actually assume that $`p`$ is odd, and therefore that $`B`$ is invertible ($`\mathrm{rk}B=p+1`$). In this case we shall denote with $`\theta =B^{-1}`$ the inverse matrix. At the end of the section we will briefly return to the case of even $`p`$ ($`\mathrm{rk}B=p`$). Let us denote with $`a=a_i(x)dx^i`$ the fluctuating gauge potential, and with $`f=da`$ the corresponding field strength. The total field strength is then given by $$F_{ij}(x)=B_{ij}+f_{ij}(x)=B_{ij}+\partial _ia_j(x)-\partial _ja_i(x).$$ (1) We now change parameterization on the world–volume $`\mathrm{\Sigma }`$ by choosing new coordinates $`\sigma ^i`$. Clearly the target–space embedding coordinates are now given by $$X^i(\sigma )=x^i(\sigma ).$$ Moreover the new field strength is now given by $$\stackrel{~}{F}_{ij}(\sigma )=\frac{\partial x^k}{\partial \sigma ^i}\frac{\partial x^l}{\partial \sigma ^j}F_{kl}(x(\sigma )).$$ (2) We claim that we can choose the coordinates $`\sigma `$ so that the field strength $`\stackrel{~}{F}_{ij}`$ is given by $$\stackrel{~}{F}_{ij}(\sigma )=B_{ij}.$$ (3) In other words, we can eliminate any fluctuation of the electromagnetic field by a simple coordinate redefinition.
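This claim can be checked symbolically at leading order. The sketch below is an illustration (two world–volume dimensions, an arbitrary polynomial fluctuation); it anticipates the first-order solution $`d^i=\theta ^{ij}a_j`$ derived in the next paragraph and verifies that the corresponding coordinate change removes the fluctuation from (2) at order $`a`$.

```python
# Symbolic check that x^i = sigma^i + theta^{ij} a_j makes the transformed
# field strength (2) equal to B_{ij} up to terms of second order in a.
import sympy as sp

s1, s2, eps, b = sp.symbols('sigma1 sigma2 epsilon b', real=True)
sig = sp.Matrix([s1, s2])
B = sp.Matrix([[0, b], [-b, 0]])
theta = B.inv()

# a generic polynomial fluctuation a_i, carrying one power of eps
a = eps * sp.Matrix([s1**2 + 3*s2, s1*s2 - s2**2])
f = sp.Matrix(2, 2, lambda i, j: sp.diff(a[j], sig[i]) - sp.diff(a[i], sig[j]))

x = sig + theta * a                   # the candidate coordinate change
J = x.jacobian(sig)                   # J[k, i] = dx^k / dsigma^i
F_at_x = B + f.subs({s1: x[0], s2: x[1]}, simultaneous=True)
F_tilde = J.T * F_at_x * J            # eq. (2)

first_order = (F_tilde - B).applyfunc(lambda e: sp.expand(e).coeff(eps, 1))
assert first_order == sp.zeros(2, 2)  # the O(a) fluctuation is gone
```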
In the new coordinate system $`\sigma ^i`$ the dynamics of the brane is not described by the $`U(1)`$ gauge potential $`a_i(x)`$, but is now equivalently described by the embedding functions $`x^i(\sigma )`$, which are now the dynamical fields. Let us start our analysis by considering small fluctuations, and therefore by working to first order in $`a_i`$. First let us define the displacement functions $$x^i(\sigma )=\sigma ^i+d^i(\sigma ).$$ For small fluctuations, $`d^i`$ is of order $`o(a)`$. To first order in fluctuations equation (2) becomes $$\stackrel{~}{F}_{ij}=F_{ij}+\mathcal{L}_dF_{ij}=F_{ij}+d^k\partial _kF_{ij}+F_{kj}\partial _id^k+F_{ik}\partial _jd^k,$$ where $`\mathcal{L}_d`$ is the Lie derivative in the direction of the vector–field $`d^i`$. Using equations (3) and (1), and recalling the antisymmetry of $`B_{ij}`$, we can rewrite the above (to order $`o(a)`$) as $$f_{ij}=\partial _ia_j-\partial _ja_i=B_{jk}\partial _id^k-B_{ik}\partial _jd^k.$$ The above is an identity if and only if $$a_i=B_{ij}d^j+\partial _i\lambda $$ (4) for some scalar $`\lambda `$. Therefore $$d^i=\theta ^{ij}a_j$$ up to transformations of the form $$d^i\to d^i+\theta ^{ij}\partial _j\lambda .$$ Let us now move to the general case. Suppose that we are given a generic gauge field $`a_i(x)`$, and a corresponding coordinate change $`x^i(\sigma )`$ which satisfies (3). We can then analyze, given an infinitesimal change $`a_i\to a_i+\delta a_i`$, the corresponding variation $`x^i\to x^i+\delta x^i`$. In fact we can use the results just derived above in the linear approximation. First let us change coordinates from $`x^i`$ to $`\sigma ^i`$. The gauge field fluctuation $`\delta a_i(x)`$ transforms as $`\delta a_i(x)\to \stackrel{~}{\delta a}_i(\sigma )=\delta a_j(x(\sigma ))\,\partial _ix^j`$. We are now in the condition analyzed previously, since, by construction, the field strength in the coordinates $`\sigma ^i`$ is equal to $`B_{ij}`$, and we are considering an additional infinitesimal change $`\stackrel{~}{\delta a}_i`$ to the gauge potential. We can now use the above results on infinitesimal fluctuations to conclude that $`x^i(\sigma )\to x^i(\sigma +\eta )`$, where $`\eta ^i(\sigma )=\theta ^{ij}\stackrel{~}{\delta a}_j(\sigma )`$. Therefore $`\delta x^i=\eta ^j\partial _jx^i`$, and we have the relation $$\delta x^i(\sigma )=\theta ^{jk}\partial _jx^i\,\partial _kx^l\,\delta a_l(x(\sigma )).$$ (5) We have seen before that, to first order in $`a_i`$, $`x^i=\sigma ^i+\theta ^{ij}a_j`$. Using this result and integrating the above equation we conclude that, to second order in $`a_i`$, we have $$x^i=\sigma ^i+\theta ^{ij}a_j+\frac{1}{2}\theta ^{ij}\theta ^{kl}\left(2a_l\partial _ka_j+a_k\partial _ja_l\right)+\cdots .$$ To make contact with the noncommutative description, let me define the following variable $$\widehat{A}_i(\sigma )=B_{ij}d^j(\sigma ),$$ which is given, in terms of $`a`$, by the formula $$\widehat{A}_i=a_i+\frac{1}{2}\theta ^{kl}\left(2a_l\partial _ka_i+a_k\partial _ia_l\right)+\cdots .$$ (6) We clearly see that, to first non–trivial order in $`\theta `$, the field $`\widehat{A}_i`$ corresponds to the noncommutative gauge potential described in the introduction. We will now better analyze this correspondence by describing how the original gauge invariance of the theory manifests itself in terms of the new dynamical fields $`d^i`$ or, alternatively, $`\widehat{A}_i`$. The coordinate change from the variables $`x^i`$ to the variables $`\sigma ^i`$ is defined so that the electromagnetic field, in the coordinates $`\sigma ^i`$, is given by the constant matrix $`B_{ij}`$. Clearly the choice of coordinates $`\sigma ^i`$ is not unique.
In fact, given an infinitesimal vector field $`V^i(\sigma )`$, we may define new coordinates $`\sigma ^i+V^i(\sigma )`$ which are equally valid if $`\mathcal{L}_VB_{ij}=B_{kj}\partial _iV^k+B_{ik}\partial _jV^k=0`$. This is equivalent to the statement that the $`1`$–form $`B_{ij}V^j`$ is closed, or that $`V^i=\theta ^{ij}\partial _j\rho `$. The reparameterization of the world–volume $`\mathrm{\Sigma }`$ can be equivalently represented by a change in the functions $`x^i(\sigma )`$ given by $$x^i\to x^i-\theta ^{jk}\partial _j\rho \,\partial _kx^i=x^i+i\{\rho ,x^i\},$$ (7) where we have introduced the Poisson bracket $$\{A,B\}=i\theta ^{ij}\partial _iA\,\partial _jB$$ on the brane world–volume. Note that $$\{\sigma ^i,\sigma ^j\}=i\theta ^{ij},$$ and more generally that $$\{\sigma ^i,A\}=i\theta ^{ij}\partial _jA.$$ Written in terms of $`\widehat{A}_i`$, equation (7) reads $$\widehat{A}_i\to \widehat{A}_i+\partial _i\rho +i\{\rho ,\widehat{A}_i\}.$$ (8) Let us mention, for completeness, the following consistency check. Consider, in equation (5), a variation $`\delta a_i(x)`$ which is pure gauge, $`\delta a_i(x)=\partial _i\lambda (x)`$. It is then clear that, if we define $`\rho (\sigma )=\lambda (x(\sigma ))`$, we have that $`\partial _k\rho (\sigma )=\partial _kx^l\,\delta a_l(x(\sigma ))`$. Therefore the variation of $`x^i(\sigma )`$ is given by $$\delta x^i=\theta ^{jk}\partial _jx^i\,\partial _k\rho =i\{\rho ,x^i\}$$ and is therefore pure gauge. In the last part of this section we wish to connect the above discussion to the noncommutative description. There, the relation between $`a`$ and $`\widehat{A}`$ is derived starting from the knowledge of the correct non-abelian gauge invariance $`\widehat{A}_i\to \widehat{A}_i+\partial _i\rho +i[\rho ,\widehat{A}_i]`$, where $`[A,B]=A*B-B*A=\{A,B\}+o(\theta ^2)`$. Equivalently, we can start with the gauge transformation (8) and follow the argument of that work to derive (6). In fact, the computation is exactly the same, since all the formulae are identical to first non–trivial order in $`\theta `$. In the treatment in this note the noncommutative gauge group is the set of diffeomorphisms of the brane world–volume which leave the $`2`$–form $`B_{ij}`$ invariant. The group is infinitesimally generated by the vector fields of the form $`V^i=\theta ^{ij}\partial _j\rho `$, which are nothing but the Hamiltonian vector fields defined in terms of $`B_{ij}`$, now considered as a symplectic structure on $`\mathrm{\Sigma }`$. We have worked in the case of $`p`$ odd. Let me now very briefly discuss the case of even $`p`$, when $`B_{ij}`$ cannot be invertible. Divide the coordinates $`X^i`$ in $`X^0`$ and $`X^a`$, $`a=1,\dots ,p`$, and assume that $`B_{0a}=0`$ and that $`B_{ab}`$ is invertible. Let us consider equation (4). For a correct choice of $`\lambda `$ we can work under the assumption that $`a_0=0`$. It is then clear that we can solve equation (4) by imposing $$d^0=0\quad \Longleftrightarrow \quad x^0(\sigma )=\sigma ^0$$ (9) and by choosing $`d^a=\theta ^{ab}a_b`$, where $`\theta ^{ab}`$ is the inverse of the invertible part $`B_{ab}`$ of the background field strength. It is also clear that we can impose the constraint (9) not only for small fluctuations of the commutative gauge potential $`a_i`$, but, in complete generality, to all orders in $`a_i`$ (this is the choice discussed in the earlier work, where the attention was focused on the case $`p=2`$ for physical reasons). The constraint (9) restricts the noncommutative gauge group to the diffeomorphisms which leave the background field strength and (9) invariant. This group is generated by Hamiltonian vector fields $`V^i`$ with $`V^0=0`$ and which satisfy $`\partial _i\rho =B_{ij}V^j`$.
This means that $`\rho `$ is time independent and that we can essentially reduce the problem by one dimension, therefore going back to the case previously discussed.

## 3 The $`\alpha ^{\prime }\to 0`$ Limit

In this last section we analyze the $`\alpha ^{\prime }\to 0`$ limit described in the introduction. Following the reasoning given there and using the results of the last section, we streamline the proof of the equivalence of the standard Born–Infeld action with noncommutative Yang–Mills theory in the large wave–length regime. We will work again for convenience in the case $`p`$ odd. This is done both for notational simplicity and since the case $`p=2`$ was treated in detail in the earlier work. In what follows we shall closely follow the notation of the noncommutative analysis in order to make quick contact with its results. As in the last section we fix the $`U(1)`$ field strength to be $`B_{ij}`$ and consider as dynamical fields the embedding functions $`x^i(\sigma )`$. We assume that the metric in the target spacetime is given by $`g_{ij}`$. The induced metric on the brane is then given by $$h_{ij}=\partial _iX^k\,\partial _jX^l\,g_{kl}.$$ We consider the following limit (recall that we are considering the case of $`B_{ij}`$ of maximal rank $`p+1`$) $$g_{ij}\sim ϵ,\qquad \alpha ^{\prime }\sim ϵ^{1/2},\qquad g_s\sim ϵ,$$ where we take $`ϵ\to 0`$. (The constant $`\eta `$ of the earlier work is related to $`ϵ`$ by $`ϵ=\eta ^4`$; with this redefinition one can easily check that the two limits are exactly the same, in the particular case of $`p=2`$ and $`\mathrm{rk}B_{ij}=2`$.) The tension of the $`Dp`$–brane is given by $`T\sim 1/(g_sl_s^{p+1})`$ and therefore scales as $$T\sim ϵ^{-\frac{p+5}{4}}.$$ We can then expand the Born–Infeld action as $$\begin{array}{rcl}S&=&T{\displaystyle \int _\mathrm{\Sigma }}d^{p+1}\sigma \,\sqrt{\det \left(h_{ij}+2\pi \alpha ^{\prime }B_{ij}\right)}\\ &=&T{\displaystyle \int _\mathrm{\Sigma }}d^{p+1}\sigma \,\sqrt{\det \left(2\pi \alpha ^{\prime }B_{ij}\right)}\left(1-\frac{1}{4}\left(\frac{1}{2\pi \alpha ^{\prime }}\right)^2\theta ^{ij}\partial _jX^a\partial _kX^b\,g_{ab}\,\theta ^{kl}\partial _lX^c\partial _iX^d\,g_{cd}\right)+\cdots \\ &=&\mathrm{const.}-\frac{T}{4}\left(\frac{1}{2\pi \alpha ^{\prime }}\right)^2{\displaystyle \int _\mathrm{\Sigma }}d^{p+1}\sigma \,\sqrt{\det (2\pi \alpha ^{\prime }B_{ij})}\,\{X^a,X^c\}\{X^b,X^d\}\,g_{ab}g_{cd}+\cdots .\end{array}$$ The action scales as $`T(\alpha ^{\prime })^{\frac{p-3}{2}}(g_{ab})^2`$, and is therefore finite in the $`ϵ\to 0`$ limit, thus showing that the Born–Infeld action does have a smooth polynomial limit, if it is written in terms of the correct variables.
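The $`ϵ`$ bookkeeping behind this statement is mechanical; the following sketch records the power of $`ϵ`$ carried by each factor in the expanded action and confirms that the powers sum to zero for every $`p`$ (the variable names are illustrative).

```python
# Bookkeeping of the epsilon scalings: with g ~ eps, alpha' ~ eps^(1/2)
# and g_s ~ eps, the expanded Born-Infeld action is eps-independent.
import sympy as sp

p = sp.symbols('p')
g, alpha, g_s = 1, sp.Rational(1, 2), 1   # powers of eps for g, alpha', g_s
l_s = alpha / 2                           # l_s = sqrt(alpha')
T = -(g_s + (p + 1) * l_s)                # T ~ 1/(g_s l_s^(p+1)) -> -(p+5)/4
measure = (p + 1) * alpha / 2             # sqrt(det(2 pi alpha' B_{ij}))
fluct = 2 * g - 2 * alpha                 # (1/(2 pi alpha'))^2 (g_ab)^2 -> +1
assert sp.simplify(T + measure + fluct) == 0
```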
To make contact with the noncommutative variables, let us define, as in the last section, the noncommutative gauge potential $`\widehat{A}_i`$ by the equation $$X^i=\sigma ^i+\theta ^{ij}\widehat{A}_j.$$ If we define the noncommutative field strength by $$\widehat{F}_{ij}=\partial _i\widehat{A}_j-\partial _j\widehat{A}_i-i\{\widehat{A}_i,\widehat{A}_j\},$$ we can readily check that $$i\{X^i,X^j\}=\theta ^{ij}+\theta ^{ik}\theta ^{jl}\widehat{F}_{kl}.$$ Define as in the noncommutative analysis the open–string metric $$G_{ij}=-(2\pi \alpha ^{\prime })^2B_{ik}g^{kl}B_{lj},\qquad G^{ij}=-\frac{1}{(2\pi \alpha ^{\prime })^2}\theta ^{ik}g_{kl}\theta ^{lj},$$ and the open–string coupling constant $$G_s=g_s\,{\det }^{1/2}\left(2\pi \alpha ^{\prime }B_{ik}g^{kj}\right)\sim ϵ^{\frac{3-p}{4}}.$$ It is then quick to check that $$\left(\frac{1}{2\pi \alpha ^{\prime }}\right)^2\{X^a,X^c\}\{X^b,X^d\}g_{ab}g_{cd}=\left(2\pi \alpha ^{\prime }\right)^2G^{ab}G^{cd}\widehat{F}_{ac}\widehat{F}_{bd}+\mathrm{const.}+\mathrm{total}\;\mathrm{derivative}.$$ Therefore the action $`S`$ becomes $$S=\mathrm{const.}+\frac{1}{4G_{YM}^2}\int _\mathrm{\Sigma }d^{p+1}\sigma \,\sqrt{\det G_{ij}}\,G^{ab}G^{cd}\widehat{F}_{ac}\widehat{F}_{bd},$$ where $$G_{YM}^2=\frac{1}{(2\pi )^{p-2}G_sl_s^{p-3}}$$ is the noncommutative Yang–Mills coupling, which is independent of $`ϵ`$. This then concludes the proof of the equivalence of the commutative and noncommutative descriptions in the large wave–length approximation. It is uniquely based on the invariance under diffeomorphisms of the Born–Infeld action. This fact both simplifies the proof considerably and also clarifies the geometric nature of the field redefinition to leading order in the derivative expansion. Let me also note, as a conclusion, that the analysis and the results above, restricted to the description of flat $`D2`$–branes, do give a formal and complete proof of the conjecture stated in the earlier work.

Acknowledgments

We would like to thank R. Schiappa for helpful discussion and comments.
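As a final aside, the bracket identity used above can be checked symbolically. Since conventions for factors of $`i`$ differ between references, the sketch below works with the real bracket $`P(A,B)=\theta ^{ij}\partial _iA\,\partial _jB`$, for which the identity reads $`P(X^i,X^j)=\theta ^{ij}+\theta ^{ik}\theta ^{jl}\widehat{F}_{kl}`$ with $`\widehat{F}_{kl}=\partial _k\widehat{A}_l-\partial _l\widehat{A}_k+P(\widehat{A}_k,\widehat{A}_l)`$, and it holds exactly; the code is purely illustrative.

```python
# Symbolic verification of the bracket identity, in two dimensions and
# with the real bracket P (the bracket of the text divided by i).
import sympy as sp

s1, s2, t = sp.symbols('sigma1 sigma2 theta', real=True)
sig = sp.Matrix([s1, s2])
theta = sp.Matrix([[0, t], [-t, 0]])

def P(A, B):
    return sum(theta[i, j] * sp.diff(A, sig[i]) * sp.diff(B, sig[j])
               for i in range(2) for j in range(2))

Ahat = sp.Matrix([s1**3 + s2, s1 * s2**2 - 2 * s1])   # a polynomial Ahat_i
X = sig + theta * Ahat          # X^i = sigma^i + theta^{ij} Ahat_j

Fhat = sp.Matrix(2, 2, lambda k, l: sp.diff(Ahat[l], sig[k])
                 - sp.diff(Ahat[k], sig[l]) + P(Ahat[k], Ahat[l]))

for i in range(2):
    for j in range(2):
        rhs = theta[i, j] + sum(theta[i, k] * theta[j, l] * Fhat[k, l]
                                for k in range(2) for l in range(2))
        assert sp.expand(P(X[i], X[j]) - rhs) == 0
```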
no-problem/9909/astro-ph9909380.html
ar5iv
text
# Models of OH Maser Variations in U Her

## 1 Introduction

Radio observations of the OH maser emission from OH/IR stars have often been used to investigate properties of the dust shell, such as its shape (Chapman, Cohen and Saikia 1991, Alcock and Ross 1986), its density structure (MacLow 1996), and its history (Chapman, Cohen and Saikia 1991). With a few exceptions, the maser emission from these stars has not been monitored over decade time scales, in part because the study of maser emission itself is only a few decades old. U Orionis has been studied extensively by many groups since 1972, when an unusual flare in the 1612 emission occurred. In particular, in 1991, Chapman, Cohen and Saikia reported results of a monitoring program of U Ori lasting 6 years. Observing programs spanning decades should prove interesting, because the time scale for gas to cross the maser emitting shell is of order 10 years. U Herculis is a particularly good candidate for maser studies, since it is relatively close (385 pc; Chapman et al. 1994) and has strong OH maser emission, with a peak maser flux that varies between 3 and 20 Jy in the 1665 and 1667 MHz lines. In 1977, Fix (1999) began a monitoring program of U Her and 11 other OH/IR stars at the Arecibo Radio Telescope. His U Her data was indispensable to this project. Section 2 describes the Arecibo Radio Telescope (Arecibo; the Arecibo Observatory is part of the National Astronomy and Ionospheric Center, which is operated by Cornell University under a cooperative agreement with the National Science Foundation) and Very Long Baseline Array (VLBA; the National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.) observations of U Her, as well as the data reduction and analysis carried out prior to modeling. Section 3 explains the need for and the construction of simulated sets of maser components. We explore several explanations for the observed variations in Section 4, and find that overall changes in the properties of the maser emitting shell do not fully explain the observed variations in the maser velocities. A plasma turbulence model is presented in Section 5, and shown to be a promising line of inquiry. Section 6 contains a summary and discussion of these results, as well as suggestions for future work.

## 2 Observations, Analysis, and Interpretations

### 2.1 Observations

Observations of U Her were obtained in both maser main lines (1665 and 1667 MHz), and in both right circular polarization (RCP) and left circular polarization (LCP). The total bandpass of all observations was 25 km/s (about 488 kHz), although the spectral resolution varied with instrument as described below. Spectra of the main line maser emission from U Her were obtained at Arecibo at four epochs: 1977.866 (henceforth referred to as 1977), 1979.151 (1979a), 1979.953 (1979b), and 1992.384 (1992). The spectral resolution at most epochs was 0.05 km/s. At 1667 MHz, the 1979a data were of insufficient spectral resolution, and so are not included in the full discussion, but reserved until section 5.1. On 1995.512 (1995), we used the VLBA, supplemented by one antenna of the Very Large Array (VLA), to map the OH emission. Approximately 3 hours of data were obtained on the target source. The data were correlated using the VLBA correlator in Socorro, NM. The spectral resolution was 0.1 km/s. The beam size was approximately 10 × 4 mas.
3C273 was used as a flux calibrator, and 3C345 was chosen as a phase calibrator. The a priori calibration of these VLBA data was carried out using the spectral-line calibration methods described in the AIPS Cookbook in Chapters 4, 8 and 9. Remaining errors were removed using a loop of self-calibration and CLEAN algorithms. The red-shifted portion of the 1667 MHz line (from the back of the shell) was too faint to be mapped with the VLBA. Also at 1667 MHz, the VLA antenna recorded interference. The consequent loss of short telescope spacings reduced the sensitivity of the 1667 MHz maps to diffuse emission at angular scales larger than 40 mas. Spectra were created from the VLBA single antenna data. These spectra were significantly noisier than the Arecibo data, since the sensitivity of the single antenna cannot compare to that of Arecibo. This is evident in Figure 6.

### 2.2 Analysis

Each spectrum was deconstructed into a series of Gaussian components using a least-squares fitting routine. Between 19 and 30 components were required to adequately characterize each spectrum. The overall maser emitting region was resolved into a number of small maser components, as shown in Figure 6. This “spotty” appearance was evident in all the maps, in both polarizations, and both main lines. The AIPS task SAD was used to find the maser components in the individual maps. The integrated amplitudes of these maser components were then grouped by location across the channels to create a spectrum for each component. This resulted in a set of spatially resolved components (“spatial components”), each with an associated spectrum.

### 2.3 Interpretations

Figure 6 shows the dramatic changes in the Arecibo spectra between 1977 and 1992. The changes in the spectra cannot be described as simple amplitude variations of the entire line, as would be produced in response to variations in the intensity of the central star. Only 7 of the 80 maser spots in the maps were spatially coincident, indicating that the individual maser spots are highly polarized, possibly via the Cook mechanism (Cook 1975). The Cook mechanism balances velocity gradients and magnetic field gradients so that only one polarization is amplified. In contrast, the spectra are not highly polarized, implying that an annular region of approximately constant projected outflow velocity may have several components. These two observations together indicate that it is likely that an apparent Zeeman pair in a spectrum is actually a pair of spatially distinct Cook components. The spectra of the spatial components were compared with the Gaussian components derived directly from fitting the VLBA spectra. In the case of the 1665 MHz data, 87% of the Gaussian components could be identified with a particular spatial component. 8% of the Gaussian components could not be identified in the maps. The amplitudes of these components were small. 5% had more than one spectral component that could be identified with one spatial location. For the 1667 MHz observations, only 54% of the Gaussian components could be identified with the spatial components. This is probably due to the decreased flux at 1667 MHz, and the decreased sensitivity at larger size scales. These results imply that a component-by-component analysis of the Arecibo spectra yields variations of real, physically distinct components.

## 3 Simulated Data

In order to test whether model results are meaningful, 25 sets of simulated components were created.
Each set of simulated components consists of a number of velocities and associated errors. Note that a spectrum was not produced by this process. The components have neither width nor amplitude, only velocity. This is the primary parameter of interest in the models below. All of the relevant information was created using the pseudo-random number generator distributed with ANSI C. The generator was seeded with a number between 1 and 100. The generator creates a long list of random numbers, from which we created velocities, errors, and numbers of components. The first step was to choose the number of components in a data set. Since the real data sets contained between 19 and 30 components each, the first 25 random numbers were transformed to values between 19 and 30. The velocities of the components in a data set were found by transforming the next 25 (say) random numbers into velocities ranging between −4 and −21 km/s, to simulate the range of velocities found in the real spectra. Each simulated data set was created from different lists of random numbers. We also created “errors” in the simulated components. The errors in the real components ranged between 0.001 and 0.02 km/s, depending roughly on the amplitude of the component, but independent of the velocity. The “errors” in the simulated components were created by transforming the random numbers to numbers between 0.001 and 0.02 km/s. The lists of simulated velocities and simulated errors were collated so that for each simulated set of components, there were between 19 and 30 velocities and associated errors.

## 4 Models Involving Overall Properties of the Shell

There are at least three ways in which the maser components may vary due to changes in the overall properties of the maser shell. The first is that the masers may be moving outward at constant velocity, through regions of varying pump intensity. In this case, components will remain at the same velocity, while changing in amplitude. The second possibility is an acceleration of the shell, either radial or rotational, which changes the velocities of the maser components. The third possibility is a change in the overall magnetic field, which can be observed in changes in Zeeman-split components, in which the members of an LCP/RCP pair will change velocity in opposite directions, either moving closer together along the velocity axis, or farther apart. All of the models involving bulk properties of the shell were investigated using the following algorithm. An initial epoch was chosen, then the model was propagated forward in time, modifying each component of the initial epoch accordingly. For example, if the model predicts that between 1977 and 1979, the components will shift redward in velocity by 0.25 km/s, then 0.25 was subtracted from the velocity of each 1977 component. These modified components were then compared to the 1979 components to look for matches. A match was found when the difference in the predicted and actual velocities was less than or equal to twice the sum of their error bars. In order to understand the meaning of the statistics, the model was also applied to pairs of simulated data, to compare with the number of matches expected purely by chance. In order for a model to be considered successful, it must accurately predict the changes in the central velocities of the components from one epoch to the next. The percentage of components matched in the real data must be significantly larger than the percentage of components matched in the simulated data.
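In code, the matching step can be sketched as follows (the names and the `predict` hook are illustrative; the hook stands in for whichever model, constant velocity, acceleration, or Zeeman, is being propagated forward):

```python
# Count epoch-to-epoch matches.  Components are (velocity, error) pairs in
# km/s; `predict` maps an earlier-epoch velocity to the model-predicted
# later-epoch velocity (the identity for the constant-velocity model).
def count_matches(epoch1, epoch2, predict=lambda v: v):
    matched = 0
    for v1, e1 in epoch1:
        v_pred = predict(v1)
        # a match: difference within twice the sum of the error bars
        if any(abs(v_pred - v2) <= 2.0 * (e1 + e2) for v2, e2 in epoch2):
            matched += 1
    return matched

# A model parameter is scanned in small steps, recomputing the count each
# time; e.g. for the radial-acceleration model of section 4.2,
#   predict = lambda v: v * (1 + dv_over_v)
# with the sign of dv_over_v flipped between red- and blue-shifted components.
```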
In general, there were 24 comparisons of actual components (2 frequencies, 2 polarizations, and 4 epochs, combined pairwise by epoch), and 51 comparisons of simulated components. The parameter space for each of the models was completely explored. Each parameter in each model was varied from a theoretical lower bound to a theoretical upper bound, in a series of incremental steps. For each value of these parameters, the number of matches between epochs and between simulated component sets was recalculated. The step size was always carefully chosen so that the model predictions would be adequately sampled. The number of parameters in each model was small enough that this method of investigation was not overly costly in terms of CPU time.

### 4.1 Changes in the Pump

If the masing components move outward at constant velocity, then each individual component remains at the same velocity (projected along the line of sight) while increasing or decreasing in amplitude. Spectral components will have the same velocity in more than one epoch. The results of this model are shown in Figure 6, and expressed as a percentage of the total number of matching components that might have been found. These two binned distributions yield a reduced chi-square of 1.4, consistent with the two distributions being statistically identical. This model fails to fit the real data better than it fits the simulated data, implying that the matches in the real data are consistent with chance.

### 4.2 Radial Acceleration

A radial acceleration changes the velocity of the material as it moves out through the shell. Because of projection effects, the radial acceleration will be most apparent at the front and the rear of the shell, where the expansion is most nearly along the line of sight. A radial acceleration at the limb of the shell will not cause an observable change in the velocity of a maser component located there. The emission from the front and back of the shell is located at the outside of the line profile, and so we expect to see the greatest changes due to radial acceleration in these regions of the line. The change in the projected velocity due to a radial acceleration is given by $$\mathrm{\Delta }v_{pr}=\mathrm{\Delta }v_{exp}\mathrm{cos}\theta $$ (1) where $`\mathrm{\Delta }v_{exp}`$ is the total change in the expansion velocity due to acceleration, and $`\theta `$ is the angle between the maser and the projected center of the shell. The cosine of the angle is equal to the ratio of the observed velocity to the expansion velocity, $`v_{obs}/v_{exp}`$, so that equation (1) becomes $$\mathrm{\Delta }v_{pr}=\mathrm{\Delta }v_{exp}\frac{v_{obs}}{v_{exp}}$$ (2) The component will be shifted towards the outside of the line for a positive radial acceleration, and towards the center of the line for a negative acceleration. Red-shifted and blue-shifted lines will move in opposite directions, but both will move towards or away from the center if they are affected by the same acceleration. The projected velocity at a later epoch, $`v_{r,b}`$, is related to the projected velocity at an earlier epoch, $`v_{obs}`$, by $$v_{r,b}=v_{obs}(1\pm \frac{\mathrm{\Delta }v_{exp}}{v_{exp}})$$ (3) where $`v_{r,b}`$ indicates the red- or blue-shifted velocity, and the direction of the acceleration is given by the sign of $`\mathrm{\Delta }v_{exp}`$. From the largest and smallest velocities of the maser components of U Her, the expansion velocity is between 6 and 8 km/s.
This expansion velocity is expected to remain approximately constant since the dust has already condensed (Fix and Cobb 1987). With this range of values for the expansion velocity, the theoretically expected values for $`\mathrm{\Delta }v_{exp}/v_{exp}`$ are less than 0.2, since the expansion velocity changes by at most a few tenths of a km/s per decade. Varying the range of $`\mathrm{\Delta }v_{exp}/v_{exp}`$ between −2 and 2 easily covers the possible range of values, and is in fact much larger than necessary, as it allows even the innermost components to be completely shifted out of the line profile. A step size of 0.0005 was used. The errors in the centroids of the components were greater than 0.001. This step size is approximately 1/8 of twice the sum of the error bars for a pair of components. This is the criterion for finding components at the same velocity, and so this step size should adequately sample the radial acceleration model. The average percentage of components which could be identified between epochs was 39% for the real data, and 35% for the simulated data. The reduced chi-square of the two distributions (real and simulated data) of these percentages is 1.9, consistent with identical distributions for these small-number statistics. One of the assumptions of this model is that the radial acceleration is spherically symmetric (although the shell need not be). A consequence of this assumption is that the front and the rear of the shell undergo the same acceleration. This constraint was relaxed by allowing the front and back of the shell to have different accelerations. In this case, the reduced chi-square of the distributions of percentages of matched components for real and simulated data was found to be 0.46. This is still consistent with all of the matches in the actual data being found purely by chance. A second consequence of this assumption is that many components undergo the same acceleration. If this is not the case, this model does not apply, and we must try to model the components individually, and look for changes in the velocities which are linear in time (see section 4.4).

### 4.3 Magnetic Field

The maser emission from OH/IR stars is often polarized. This indicates that magnetic fields are present which are strong enough to Zeeman-split the OH lines. In order to investigate changes in the global magnetic field, we must first look for Zeeman pairs. The VLBA maps provide the most compelling evidence of magnetic fields strong enough to Zeeman-split the lines. In the 1665 MHz emission, only 7 of 47 maser spots appeared in the same location in both RCP and LCP emission. At 1667 MHz, none of the 33 maser components appeared in both polarizations. This implies that at least 80% of the components are polarized, possibly via the Cook mechanism (Cook 1975). As mentioned above, this implies that we are unlikely to find a genuine Zeeman pair in the spectra. The probability that an apparent Zeeman pair is actually a pair of spatially distinct Cook components is high. The importance of the Cook mechanism can be verified directly from the spectral data. The main-line maser emission is thought to arise in a narrow region of the shell (see, for example, Collison and Nedoluha, 1994). If the magnetic field is constant (or approximately so) throughout the region, then the Zeeman splitting will also remain constant throughout the shell. In this case, the Cook mechanism may be ignored, since there is no magnetic field gradient.
There should be a constant splitting of each of the LCP/RCP pairs, since each component experiences the same magnetic field. If the magnetic field gradient is important, few pairs of LCP and RCP components will be found with the same Zeeman splitting. The number of Zeeman pairs was calculated by choosing a component in one polarization, then searching in the other polarization for a component offset by a prescribed Zeeman splitting. The range of the splitting was −2 to 2 km/s, and the step size was 0.001 km/s. These parameters were chosen to encompass the entire range of possible values, and to provide a step size smaller than the errors in the velocities of the components. The average percentage of matched components for the actual data was 34.5%, while the average percentage of matched components for the simulated data was 32.5%. When the 1995 data, with its coarser spectral resolution, was removed from consideration, the average percentage of matched components in the actual data dropped to 27%. Both of these results are consistent with the results from the simulated data.

### 4.4 Linear and Quadratic Changes in Time

The investigations described above lead us to conclude that we cannot model the changes in the velocities of the components as global changes caused by constant velocity motions, radial or rotational accelerations, or global magnetic fields. However, it may be possible that all components, while subject to varying physical conditions, are still constrained to change in the same way over time. For example, all components could be subject to an acceleration that varies between one component and another. Models of this type will not give any detailed information about the mechanism of the change, but they will tell us whether the same mechanism operates on each maser feature throughout the observed time period. We note that if the variations are caused by a combination of the above scenarios, then this model should be able to characterize those variations. The number of 1977 components was in some cases (e.g. 1665 RCP) greater than the number of components in the 1979b or 1992 data. There may be redundancies in the alignment of components, so that two or more 1977 components may be aligned with one of the 1979b or 1992 components. Since it is not possible to determine which of these is the “correct” pairing, we simply counted the number of 1977 data points for which a match may be made. Because of this, the number of successful projections was occasionally higher than the smallest number of components in the three or four epochs. The four-epoch case was completely consistent with the simulated data, implying that the matched components are likely to have been found by chance. The three-epoch case is less tightly constrained, and more matching components were found. However, the largest percentage of actual matched components was no more than 5% higher than the highest percentage of simulated matched components. These results are not inconsistent with the results expected purely by chance. For completeness, we investigated an acceleration which is variable in time. An acceleration which changes over time will produce a velocity that varies quadratically in time. For example, if the radiation pressure changes, the force on the gas will change, and the acceleration of the gas particles will change.
The possibility of a quadratic variation of the velocity was investigated, and it was found that the number of free parameters in this model is so large that a projected component can be found in the third or fourth epoch nearly 100% of the time for both real and simulated data.

## 5 Plasma Turbulence

We have now shown that the changes in the velocity and polarization structure of the OH main line maser emission from U Her over decade time scales cannot be completely explained by overall properties of the shell, such as accelerations, or global magnetic field changes. We are driven to consider explanations which do not involve properties of the shell as a whole, but rather can affect each maser component differently. We know that the magnetic field must be important in these masers because nearly all of the spatial and spectral components are polarized. The masers arise in regions with charge-carrying dust grains moving at a drift velocity which is of order tens of km/s (Collison and Fix 1992). This is about 100 times the thermal speed of the gas derived from the full widths of the spectral components. There is no direct evidence that the dust grains carry charge, although it has long been assumed that they do. It seems plausible that the grains are collisionally charged by interacting with the ionized gas and its electrons, and theoretical models including charged dust match well with infrared observations (Zubko 1998). Plasma turbulence may arise, perhaps via an instability such as those described in plasma kinetic theory. We investigate whether this turbulence could produce changes on the observed time scales.

### 5.1 Time Scales of Turbulence

In the data considered so far, the shortest time scale investigated was a bit more than two years. A better constraint on the time scales may be found by comparing observations closer in time. A second set of observations of U Her was taken on 1979.151 (1979a), about ten months before the 1979.953 (1979b) data set used to investigate the overall variations of the shell. Only the 1665 MHz observations were of high enough spectral resolution to compare with the rest of the data, and so the 1979a observations were not included in the larger study. The 1979a data was subjected to the models described in section 4. These models tended to fit the 1979a and 1979b combination of epochs better than any other pair of epochs, but still, the percentage of components which could be identified across epochs was consistent with, or only marginally better (1–2%) than, that in the simulated components. This implies that the time scales of the variations of the individual components may be of order months. The time scale for a turbulent eddy of size L to rearrange is $$\tau =\frac{L}{\delta v_t}$$ (4) where $`\delta v_t`$ is the “turbulence velocity”, which is usually less than or of the order of the thermal velocity, or the Alfvén speed. In the case of U Her, we can calculate an upper bound to this length scale since the masers are partially resolved. From the period-luminosity relation for Mira variables, the distance to U Her is estimated to be 385 pc (Chapman et al. 1994). Feast (1989) quotes an error of 0.14 in the magnitude derived using this method. This yields an error in the distance of 26 pc (7%). From the maps, the average angular size of the maser emission regions is about 20 mas.
This gives a linear size of $1\times 10^{14}$ cm for the regions which produce the maser emission. If the turbulence velocity is the thermal velocity of the gas ($0.2$ km/s $=2\times 10^4$ cm/s), then the time scale for complete rearrangement of the turbulent medium is about 200 years. This is much longer than the time scale of the variations, so the simple turbulent motions of the gas cannot explain variations of the maser components on the time scales observed. Suppose instead that magnetic field variations drive the appearance and disappearance of the maser spots. This possibility is supported by the observed importance of the Cook mechanism. If the magnetic field is tied to the charged dust particles, then turbulent features will cross the maser-emitting region at the dust drift velocity, given by Collison and Fix (1992) as $\sim 20$ km/s. This leads to a time scale of a bit less than two years. While it is not certain whether the dust grains and the magnetic fields travel together, this seems like a reasonable assumption, since the $\sim 1$ mGauss fields inferred from the Zeeman splitting cannot be generated at the central star, and thus must be generated in situ by the motion of the charged particles. Note that the magnetic field gradient is the important quantity in the Cook mechanism, not the magnitude of the magnetic field, and so this calculation is an upper bound on the time scale, since the turbulent magnetic field may have gradients large enough for the Cook mechanism to be rendered ineffective on length and time scales much smaller than the ones calculated here.

### 5.2 Plasma Turbulence Along the Line of Sight: Theory and Model Description

One way to test the validity of a plasma turbulence model is to investigate whether plasma turbulence can produce bright, polarized components with the same probability as that inferred from the filling factor of the masers in the shell. For example, if masers are produced in 20% of the shell, then we would like to know whether plasma turbulence produces bright, polarized components in 20% of simulated lines of sight. The filling factor of the observed maser-emitting regions was found by summing the areas of the maser components and dividing by the area of the shell projected on the sky. The projected shell was assumed circular, with a diameter of $\sim 0.5$ arcseconds, as indicated by the distribution of the maser emission in the maps. For the 1665 MHz maps, the filling factor of all the components at all velocities is 0.11. For the 1667 MHz maps, the filling factor is 0.05. However, since the back portion of the shell was too faint to map in 1667 MHz emission, the filling factor for the 1667 MHz emission may be as much as 0.1. The filling factor to which the following model results were compared was 0.1. We can reduce the plasma turbulence problem to one dimension by considering the maser as a line integral along the line of sight. In each incremental path length $ds$, a given velocity $v$ and magnetic field $b$ are present. These values of $v$ and $b$ are the small-scale turbulent velocities and magnetic fields, not the large-scale expansion velocity or overall magnetic field.
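Before developing the line-of-sight model, the time-scale arithmetic of the two preceding paragraphs can be checked numerically. A short sketch using only the numbers quoted above (distance, angular size, thermal and drift speeds):

```python
PC_CM = 3.086e18               # cm per parsec
YEAR_S = 3.156e7               # seconds per year

d = 385.0 * PC_CM              # distance to U Her
theta = 20e-3 / 206265.0       # 20 mas in radians
L = theta * d                  # linear size of a maser region (~1e14 cm)

tau_thermal = L / 2e4 / YEAR_S   # turbulence velocity = thermal speed, 0.2 km/s
tau_drift = L / 2e6 / YEAR_S     # turbulent features carried at ~20 km/s drift
print(f"L = {L:.1e} cm; tau(thermal) = {tau_thermal:.0f} yr; "
      f"tau(drift) = {tau_drift:.1f} yr")
# -> L ~ 1.2e14 cm; tau(thermal) ~ 180 yr; tau(drift) ~ 1.8 yr
```

This reproduces the two estimates above: roughly two centuries for purely thermal rearrangement, but under two years if the turbulent magnetic features cross the region at the dust drift speed.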
We can calculate an effective velocity of emission from this incremental path length from the Doppler-shifted frequencies and the Zeeman splitting of the line due to the magnetic field

$$v_{\mathrm{eff}}=v\pm \frac{\gamma c}{f}B \qquad (5)$$

where $c$ is the speed of light, $f$ is the rest frequency of the transition in MHz (1665 or 1667), and $\gamma$ is the frequency splitting per unit magnetic field due to the Zeeman effect. The sign indicates the RCP or LCP shift. In order for amplification to occur, we must have a large number of path lengths $ds$ with the same effective velocity. Collecting the effective velocities of the path lengths into bins and making a histogram shows the velocity coherence along the line of sight. In general, an individual path length will not be saturated, but if one of the bins contains more than the number of path lengths required for the maser to become saturated, then the maser will be amplified. A component is considered bright and polarized if the ratio of the maximum bin in RCP to the maximum bin in LCP is greater than 1.3 or less than 0.7. This criterion simply requires the maser component to be at least 30% polarized in order to resemble a Cook polarized component. MHD turbulence theory shows that the power spectrum of turbulent velocities and magnetic fields is a power law. In the one-dimensional case, the power spectrum is given by

$$P(k)=|\tilde{A}(k)|^2\propto k^{-a}, \qquad (6)$$

where $\tilde{A}$ is the Fourier transform of the quantity of interest (e.g. the velocity, $v(x)$), and $k$ is the wave number in the Fourier transform space. In the case of Kolmogorov turbulence, the exponent $a$, which indicates the steepness of the power law, is 5/3. We adopt this special case as a first approximation to the problem. In our case, however, we may have turbulence on size scales larger than the size of the emitting region, $l_0$. These scales enter the power spectrum as a flattening of the slope to a constant near $k=0$. The turnover point is given by $k_0=2\pi /l_0$. We can express this modified Kolmogorov power spectrum in functional form as

$$P(k)\propto \left(1+\left(\frac{k}{k_0}\right)^2\right)^{-5/6}. \qquad (7)$$

In order to generate the velocity and magnetic field distributions, we must randomize the power spectrum given by MHD turbulence theory. This guarantees that the general behavior of the velocities and magnetic fields agrees with what is expected from theory. Typically, the power spectrum is randomized by multiplying it by a random, complex number:

$$\tilde{A}(k)=\sqrt{\frac{P(k)}{2}}(D+iF). \qquad (8)$$

$D$ and $F$ are both zero-mean, Gaussian-distributed numbers of unit standard deviation (Spangler 1998). This equation is additionally constrained by the requirement that $A(x)$ (the velocity or magnetic field) be real, which is met when $\tilde{A}^{*}(k)=\tilde{A}(-k)$. By even-odd symmetry, this condition constrains $F$ to be 0 when $\tilde{A}(k)$ is the modified Kolmogorov spectrum being considered here. $A(x)$ is the inverse Fourier transform of $\tilde{A}(k)$:

$$A(x)=\int _{-\infty }^{\infty }dk\,\mathrm{exp}(2\pi ikx)\,\tilde{A}(k). \qquad (9)$$

The velocity is given by $A(x)$. The magnetic field, $b(x)$, was calculated as a linear combination of the velocity, $v(x)$, and a statistically independent function $b_i(x)$. The function $b_i(x)$ was calculated in the same manner as $v(x)$, using a different amplitude and a different set of random numbers $D$ in the randomization of the power spectrum.
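To make the randomization procedure concrete, the sketch below generates a field with the modified Kolmogorov spectrum of Eqs. (6)-(9) and applies the effective-velocity binning and polarization test based on Eq. (5). It also anticipates the correlated combination of the velocity and magnetic fields (Eq. 10, defined immediately below). The grid size, field amplitudes, Zeeman coefficient, saturation threshold and turnover wavenumber are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096                     # cells along the line of sight (assumed)
k = np.fft.fftfreq(N)        # wavenumber in grid units
k0 = 0.01                    # turnover k0 = 2*pi/l0, in grid units (assumed)

def kolmogorov_field(rms):
    """Real 1D field with the modified Kolmogorov spectrum of Eq. (7)."""
    P = (1.0 + (k / k0) ** 2) ** (-5.0 / 6.0)        # Eq. (7)
    D, F = rng.standard_normal(N), rng.standard_normal(N)
    A_k = np.sqrt(P / 2.0) * (D + 1j * F)            # Eq. (8)
    # Taking the real part of the inverse transform imposes A*(k) = A(-k),
    # i.e. the reality condition attached to Eq. (9).
    a = np.real(np.fft.ifft(A_k))
    return rms * a / a.std()                         # scale to the chosen rms

v = kolmogorov_field(0.2)        # turbulent velocity, rms ~ thermal (km/s)
b_i = kolmogorov_field(1.0)      # independent field for b(x), rms in mGauss
alpha = 0.01                     # correlation parameter of Eq. (10)
b = alpha * v + (1.0 - alpha) * b_i

# Effective LCP/RCP velocities, Eq. (5); zsplit stands in for gamma*c/f,
# in (km/s)/mGauss (assumed value).
zsplit = 0.6
v_rcp, v_lcp = v + zsplit * b, v - zsplit * b

# Bin at a 0.05 km/s resolution and apply the brightness/polarization tests.
bins = np.arange(-5.0, 5.0 + 0.05, 0.05)
n_rcp, _ = np.histogram(v_rcp, bins)
n_lcp, _ = np.histogram(v_lcp, bins)
n_sat = 100                      # assumed saturation requirement, in cells
bright = max(n_rcp.max(), n_lcp.max()) > n_sat
ratio = n_rcp.max() / max(n_lcp.max(), 1)
bright_polarized = bright and (ratio > 1.3 or ratio < 0.7)
```

Rerunning this along many realizations of $D$ and $F$ and recording how often `bright_polarized` is true gives the probability that is compared with the maser filling factor in the text.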
The magnetic field is then given by

$$b(x)=\alpha v(x)+(1-\alpha )b_i(x), \qquad (10)$$

where $\alpha$ is an adjustable parameter, with values between 0 and 1, which indicates the degree of correlation between the magnetic and velocity fields. Plots of the randomized quantities $v(x)$ and $b(x)$ are shown in Figure 6. Once the velocity and magnetic fields were calculated, the effective velocity (Equation 5) was computed, and the path lengths were grouped into bins by this value. The bin width was chosen to be the spectral resolution of the Arecibo observations (0.05 km/s). The resulting distribution was compared to the criterion for a bright, polarized component. The model was recalculated for many different sets of the random variable $D$, which is equivalent to calculating the model along many lines of sight. The probability of producing a bright, polarized component was calculated and compared to the filling factor of the maser components. This probability was calculated for various values of the free parameters $k_0$, $\alpha$, $C_{vel}$, and $C_{mag}$ (the latter two being the amplitudes of the velocity and magnetic field power spectra). A subset of the results of this model is shown in Figure 6. The strongest dependence in the model was on the correlation parameter $\alpha$. Cases in which the magnetic and velocity fields were very highly correlated yielded the correct filling factor in only 11% of trials. Generally, the highly correlated case led to too many components, or to none, so that either 20-40% of the shell was filled or there was no maser action. A partial correlation ($\alpha =0.5$) also gave the correct filling factor in only 11% of trials. For a nearly uncorrelated case ($\alpha =0.01$), the correct filling factor was obtained 1/3 of the time. This implies that the proper filling factor can be achieved through plasma turbulence, and that the velocity and magnetic fields are most likely uncorrelated. None of the other parameters showed a trend as clear as the dependence on $\alpha$. Overall, 19% of the values of the parameters yielded filling factors that matched the observations.

## 6 Discussion

In this study, we have investigated three possible global explanations of the variations of the OH maser emission from U Her. The first possibility was a movement of the maser, at constant velocity, through regions where the pump varied in strength. The second was an acceleration, either radial or rotational, of the shell. The third was a change in the global magnetic field which causes the polarization of the maser components. All three of these possibilities were investigated as bulk properties of the shell, and as properties which changed within the shell and were peculiar to each maser. None of these possibilities completely characterized the behavior of the main-line maser emission from U Her. Since the variations cannot be described by overall properties of the shell, and cannot even be modeled as varying in the same manner over time everywhere in the shell, we were driven to seek other explanations. One promising explanation is that plasma turbulence effects alter the magnetic field in the emission region enough to remove the amplification of the masers by the Cook mechanism. If the turbulent magnetic fields are carried by the dust grains, the plasma turbulence model produces changes on time scales of less than one year, which is in agreement with the observed time scales of variation. Also, it is possible to create shells with the correct filling factors using this model.
This is a radical departure from the usual school of thought for at least two reasons. The first is that the determining factor in maser emission seems to be the magnetic field gradient, rather than the velocity gradient along the line of sight, since it is the turbulent magnetic field which varies on the observed time scales. The second is that the coherent turbulent “bits” along the line of sight which add up to an amplified, polarized maser component are not necessarily physically contiguous. There may be larger regions of incoherent plasma between the coherent regions, and still the masers will be amplified and polarized. Thus, these masers cannot be thought of as entities, or even as preferred lines of sight, since the whole line of sight through the dust shell is not necessarily involved in the amplification.
# The Encoding of Quantum State Information Within Subparticles

Robert A. Herrmann, Mathematics Department, U. S. Naval Academy, 572C Holloway Rd, Annapolis, MD 21402-5002 USA

25 SEP 1999, last revision 22 NOV 2010.

Abstract: A method is given by which the descriptive content of quantum state specific-information can be encoded into subparticle coordinates. This method is consistent with the GGU-model solution to the general grand unification problem.

1. State specific-information. Subparticles are predicted by the GGU-model (Herrmann, 1993, 1994). They are used for various purposes, and it has recently been shown that subparticles can transmit quantum state specific-information over any distance in a manner that would appear to be instantaneous (Herrmann, 1999a) without violating Einstein separability. What has been missing from these investigations is a direct method that would encode such specific-information within the appropriate ultrasubparticle coordinate, so that the generation of an intermediate subparticle would display this specific-information when the standard part operator, $\mathrm{st}(\cdot)$, is applied. Recall that each of the coordinates $i\ge 3$ of an ultrasubparticle that characterizes physical behavior contains only the numbers $\pm 1/10^{\omega }\in \mu (0)$, where $\omega$ is a Robinson (1966) infinite natural number and $\mu (0)$ the set of hyperreal infinitesimals. Consider any descriptive scientific language, and construct an intuitive word $w$ that yields the descriptive content for a particular quantum state. Note that $w\in [w]$, where $[w]$ is the equivalence class of all symbol strings that have the same relative meaning (Herrmann, 1999b). Then, as in Herrmann (1987, 1993, 1994), there is an encoding $i$ into the set of natural numbers (including 0). In the equivalence classes of partial sequences, there is the unique set $[f_0]$ such that $f_0(0)=i(w)$. From subparticle theory (Herrmann, 1993, 1994, 1999a), there is an ultrasubparticle coordinate $a_j$ that corresponds to this quantum state specific-information. It is also known (Herrmann, 1993, 1994, 1999a) that there exists a hyperreal natural number $\lambda$ such that $\mathrm{st}(\lambda /10^{\omega })=i(w)$. Hence, in order to obtain an intermediate subparticle such that $\mathrm{st}(a_j)=i(w)$, simply consider a $\lambda$ hyperfinite combination of ultrasubparticles. This will yield an intermediate subparticle with an $a_j$ coordinate

$$\mathrm{st}\left(\sum _{n=1}^{\lambda }\frac{1}{10^{\omega }}\right)=i(w), \qquad (1.1)$$

where each term of this hyperfinite series is the constant $1/10^{\omega }$. Since $i$ is an injection, the actual state specific-information, in descriptive form, can be re-captured by considering the inverse $i^{-1}$. What this implies is that, by hyperfinite subparticle combination and application of the standard part operator, each elementary physical particle may be assumed to be composed of all of the requisite state specific-information that distinguishes it from any other quantum physical entity, and state specific-information is characterized by a specific coordinate entry.

2. Subparticle mechanisms. The mapping $i$, in section 1, is somewhat arbitrary. The “word” $w$ merely represents the specific-information it contains.
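Since $i$ is only required to be an injection of words into the natural numbers, any concrete positional encoding will do. The sketch below is one hypothetical choice (a bijective base-28 scheme over a small alphabet of our own choosing, not Herrmann's), together with its inverse $i^{-1}$:

```python
ALPHABET = " abcdefghijklmnopqrstuvwxyz"   # illustrative symbol set
BASE = len(ALPHABET) + 1                   # digit 0 unused, so the map is injective

def i(w):
    """Injective encoding of a word w into the natural numbers (i('') = 0)."""
    n = 0
    for ch in w:
        n = n * BASE + ALPHABET.index(ch) + 1
    return n

def i_inverse(n):
    """Recover the descriptive word from its code, i.e. apply i^{-1}."""
    chars = []
    while n > 0:
        n, d = divmod(n, BASE)
        chars.append(ALPHABET[d - 1])
    return "".join(reversed(chars))

assert i_inverse(i("spin up")) == "spin up"
```

Any such injection serves the purpose of section 1: the natural number $i(w)$ is what $\mathrm{st}(\lambda /10^{\omega })$ must equal, and $i^{-1}$ recaptures the descriptive content.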
As stated in Herrmann (1993) about subparticles, “These multifaceted things, these subparticles are not to be construed as either particles nor waves nor quanta nor anything that can be represented by a fixed imagery. Subparticles are to be viewed only operationally” (p. 99). The hyperfinite sum (1.1) is an analogue model for an ultrafinite combination of ultrasubparticles. Each ultrafinite combination can be conceived of as a “bundling” of the ultrasubparticles. An intermediate subparticle is generated by the ultralogic ${}^{*}\mathbf{S}$ applied to an ultraword called an ultramixture. Each elementary particle is generated by ${}^{*}\mathbf{S}$ applied to a single ultraword termed an ultimate ultramixture (Herrmann, 1993, p. 106). This method of generation can be continued through all levels of complexity. Of course, realization occurs only when the standard part operator is applied. [Note: When the physical-like process is discussed, the term “ultrafinite” is used rather than “hyperfinite” as an indication that it applies to a specific physical-like process rather than to a mathematical model such as $\mathrm{st}(\sum _{n=1}^{\lambda }\frac{1}{10^{\omega }})=i(w)$.] The infinitesimal values $\pm 1/10^{\omega }$ used in ultrasubparticle coordinates $a_i$, $i\ge 3$, are, in general, not unique. Thus, mathematical expressions that appear in this section only represent analogue models for subparticle behavior. This is especially the case where defined measures are used as representations for the “qualities” that characterize the elementary “particles”, since the values of the various $\lambda \in \mathrm{IN}_{\infty }$ used in hyperfinite sums such as (1.1) are dependent upon a defined measure. Hence, they but represent the notion of an ultrafinite combination of ultrasubparticles. All that has been done with subparticle theory can be accomplished using other forms such as $\pm 1/2^{\omega }$. Everything in subparticle theory, including the material that follows, can be done in this binary form. The actual mathematical model itself states that there is much specific behavior within the model that cannot be expressed in any human language form. This does not eliminate general descriptions of behavior, descriptions that need not refer to specific quantities as being unique. Such general descriptions follow the same patterns as the descriptions that appear in the first paragraph of this section. In Herrmann (1993, p. 103), independent \*-finite coordinate summation is used to model the formation of an intermediate subparticle generated by ${}^{*}\mathbf{S}$ applied to an ultramixture. It is mentioned that this can also be accomplished operationally by repeated application of an affine (actually a \*-affine) transformation. In order to discuss various transformations that model subparticle mechanisms, objects equivalent to matrices are used. These matrices are viewed in our Nonstandard Model as internal functions $M:[1,n]\times [1,n]\to {}^{*}\mathrm{IR}$, where $n\in {}^{*}\mathrm{IN}$ is nonzero. It is not difficult to convert, by \*-transform, the usual process of multiplying a standard $(a_1,\mathrm{\dots },a_n)$ by a standard $n\times n$ matrix $\mathrm{A}$ and obtaining a transformed standard $(b_1,\mathrm{\dots },b_n)$ into statements using $M$. The result obtained will be a hyperfinite and, hence, internal object. Further, the usual linear-algebra finite coordinatewise summation, \*-transferred to the hyperfinite, yields internal objects. When appropriate, the usual matrix notation is used. In Herrmann (1993, p.
110), the possible “naming” of ultrafinite combinations is discussed. [There is a typographical error in (2), the last line on this page. It should read “For the set of all fundamental entities. . . . entities.”] This process does not lend itself to a simple \*-affine transformation. However, under the proper specific-information interpretation, the first and second coordinates yield no natural-world specific-information and, when realization occurs, are suppressed. Indeed, for purely natural-world applications one need not include these two coordinates as part of the basic operational definition for subparticles, and if they are not included, what follows would need to be slightly modified. Consequently, the following approach should be considered as but a model for the production of natural-world specific-information. In what follows, the most general ultrasubparticle case is used, where $a_1=k$, $a_2=1$. For a single characteristic modeled by coordinate $a_i$, $i>2$, one of the simplest transformation processes is modeled by the internal \*-affine transformation

$$\mathrm{T}()^{n\times 1}=\mathrm{I}()^{n\times 1}+(b_j)^{n\times 1}, \qquad (2.1)$$

where $\mathrm{I}$ is an identity matrix, $b_1=0$, $b_2=1$, $b_i=a_i$ and $b_j=0$ for $j\ne i$, $3\le j\le n$. This transformation must be applied $\lambda -1$ times. For example, assume that $\mathrm{T}$ is “first” applied to $(k,1,1/10^{\omega },1/10^{\omega },\mathrm{\dots },a_n)^T$, $\omega \in \mathrm{IN}_{\infty }$, $n\in {}^{*}\mathrm{IN}$, and models a quality expressed by coordinate $i=3$. Then applying the \*-translation $\mathrm{T}$ a total of $\lambda -1$ times yields the intermediate subparticle

$$(k,\lambda ,\sum _{n=1}^{\lambda }1/10^{\omega },1/10^{\omega },\mathrm{\dots },a_n)=(k,\lambda ,\lambda /10^{\omega },1/10^{\omega },\mathrm{\dots },a_n). \qquad (2.2)$$

In this form, the second coordinate reveals the count value. The operator that yields the realized intermediate subparticle for (2.2) can be represented by a matrix composed of operators. This matrix is $(b_{ij})^{n\times n}$, where $b_{ii}=\mathrm{st}$ for $3\le i\le n$, and $0=b_{11}=b_{22}=b_{ij}$ for $i\ne j$, $1\le i\le n$, $1\le j\le n$; for this case, it yields $(0,0,r,0,\mathrm{\dots },0)$, where $\mathrm{st}(\lambda /10^{\omega })=r$ is a representation for specific-information. This matrix can be modified relative to the first two coordinates, since these coordinates do not yield any natural-world specific-information. There are relations between coordinate specific-information, and it is these relations that determine how the elementary particles behave when they are conceived of as finite collections of intermediate subparticles. Rather than (2.1), there are more specific $\lambda$-dependent internal \*-affine transformations which need apply but once as a model for coordinate independent \*-finite coordinate summation. For a given $\lambda$, simply consider

$$\mathrm{T}(\lambda )()^{n\times 1}=\mathrm{I}()^{n\times 1}+(b_j)^{n\times 1}, \qquad (2.3)$$

where $b_1=0$, $b_2=\lambda -1$, $b_i=(\lambda -1)a_i$ and $b_j=0$ for $j\ne i$, $3\le j\le n$. One application of (2.3) yields (2.2). The generation of the intermediate subparticles by these methods, and then the bundling of finitely many to yield any elementary particle in any state, can be represented by a single and simple internal \*-linear transformation upon which the standard part operator is applied. Suppose that there are but three qualities necessary to completely describe an elementary particle and that these are modeled by $\lambda _1,\lambda _2,\lambda _3$ and coordinates 3, 4, 5.
Consider the matrix $(b_{ij})^{n\times n}$ with $b_{ii}=0$ for $i=1,2,6,7,\mathrm{\dots }$; $b_{33}=\lambda _1$, $b_{44}=\lambda _2$, $b_{55}=\lambda _3$; and $b_{ij}=0$ for $i\ne j$, $1\le i\le n$, $1\le j\le n$. For realization, consider $\mathrm{st}((b_{ij})(k,1,1/10^{\omega },1/10^{\omega },\mathrm{\dots },a_n)^T)=(0,0,r_1,r_2,r_3,0,\mathrm{\dots },0)$. If one wanted this matrix to replicate the finite bundling of intermediate subparticles with suppression of the naming and count-number coordinates, rather than merely producing the standard results, then $b_{ii}=3$ for $i\ge 6$. The major purpose of this section is to establish that the general notion of independent \*-finite coordinate summation is not, technically, a forced mathematical procedure designed solely for this one purpose, since the results can be duplicated by other rather simple and well-known mathematical operators. However, rigorously establishing the results by other well-known means is less significant than the intuitive idea of generating specific-information from infinitesimal pieces of information by a simple bundling process, and then continuing this bundling process to obtain elementary particles. Retaining these intuitive notions remains the major purpose for (a) the independent \*-finite summation model followed by (b) the usual coordinatewise summation for coordinates $i\ge 3$. The (b) process can be continued to obtain the specific-information that characterizes more complex entities composed of the assumed elementary particles.

References

Herrmann, R. A. 1987. Nonstandard consequence operators, Kobe J. Math., 4(1), 1-14.

Herrmann, R. A. 1993. The Theory of Ultralogics, Part II. http://www.arxiv.org/abs/math.GM/9903082

Herrmann, R. A. 1994. Solution to the General Grand Unification Problem. http://www.arxiv.org/abs/astro-ph/9903110

Herrmann, R. A. 1999a. The NSP-World and Action-at-a-Distance, in Vol. 2, Instantaneous Action-at-a-Distance in Modern Physics: “Pro” and “Contra”, ed. Chubykalo, A., N. V. Pope and R. Smirnov-Rueda, Nova Science Books and Journals, New York: 223-235.

Herrmann, R. A. 1999b. The Wondrous Design and Non-random Character of Chance Events. http://www.arxiv.org/abs/physics/9903038

Robinson, A. 1966. Non-standard Analysis. North-Holland, Amsterdam.

Rudin, W. 1964. The Principles of Mathematical Analysis, McGraw-Hill, New York.
# What heats the bright H ii regions in I Zw 18?

## 1 Introduction

The blue compact dwarf emission-line galaxy I Zw 18 is famous for being the most metal-poor galaxy known so far. Its oxygen abundance is about 2% of the solar value, as first shown by Searle and Sargent (1972) and then confirmed by many studies (e.g. Lequeux et al. 1979, French 1980, Kinman & Davidson 1981, Pagel et al. 1992, Legrand et al. 1997, Izotov et al. 1997b, Vílchez & Iglesias-Páramo 1998). Because of this, I Zw 18 has played an essential role in the determination of the primordial helium mass fraction. Also, due to its extreme properties, I Zw 18 has been a choice target for studies of the star formation history in blue compact galaxies (Dufour & Hester 1990, Hunter & Thronson 1995, Dufour et al. 1996, De Mello et al. 1998, Aloisi et al. 1999), of the elemental enrichment in dwarf galaxies (Kunth & Sargent 1986, Kunth et al. 1995) and of the interplay between star formation and the interstellar medium (Martin 1996, van Zee et al. 1998). An important clue is the distribution of the oxygen abundance inside the H ii regions (Skillman & Kennicutt 1993, Vílchez & Iglesias-Páramo 1998) and in the neutral gas (Kunth et al. 1994, Pettini & Lipman 1995, van Zee et al. 1998). Another clue is the carbon and nitrogen abundance (Garnett et al. 1997, Vílchez & Iglesias-Páramo 1998, Izotov & Thuan 1999). On the whole, there is general agreement on an intense and recent burst of star formation in I Zw 18, which provides the ionizing photons, following previous star formation episodes. How exactly the gas has been enriched with metals during the course of the evolution of I Zw 18 remains to be better understood. Much of our understanding (or speculation) about the chemical evolution of I Zw 18 (and of other galaxies in general) relies on the confidence placed in the chemical abundances derived from the lines emitted in the H ii regions. These are generally obtained using standard empirical methods which were worked out years ago and rely on the theory of line emission in photoionized gases. Photoionization models are, most of the time, used merely as a guide to evaluate the temperature of the low-excitation regions once the characteristic temperature of the high-excitation zones has been obtained through the [O iii] λ4363/5007 ratio. They also serve to provide formulae for estimating the correction factors for the unseen ionic species of a given element. Direct fitting of the observed emission-line spectrum by tailored photoionization models provides more accurate abundances only if all the relevant line ratios are perfectly reproduced by the model (which is rarely the case in model-fitting history) and if the model reproducing all the observational constraints is unique. One virtue of model fitting, though, is that it permits one to check whether the assumptions used in abundance determinations are correct for a given object. For example, there is the long-standing debate over whether so-called “electron temperature fluctuations” (see e.g. Mathis 1995, Peimbert 1996, Stasińska 1998) are present in H ii regions at a level sufficient to significantly affect elemental abundance determinations. If a photoionization model is not able to reproduce all the temperature-sensitive line ratios, the energy balance is not well understood, and one may question the validity of abundance determinations. Also, photoionization models are a potential tool (see e.g. Esteban et al.
1993, García-Vargas 1996, Stasińska & Schaerer 1997, Crowther et al. 1999) to uncover the spectral distribution of the ionizing radiation field, thus providing information on the ionizing stars, their evolutionary status and the structure of their atmospheres. These two points are a strong motivation for a photoionization model analysis of I Zw 18. There have already been a few such attempts in the past (Dufour et al. 1988, Campbell 1990, Stevenson et al. 1993). None of those models was, however, able to reproduce the He ii λ4686 line, known to exist in I Zw 18 since the work of French (1980). The reason is that, in those models, the spectral distribution of the ionizing radiation was that of a single star whose radiation field was interpolated from a grid of plane-parallel, LTE model atmospheres for massive stars. Recently, Wolf-Rayet stars have been identified in I Zw 18 through the characteristic bump they produce at 4650 Å (Izotov et al. 1997a, Legrand et al. 1997). Spherically expanding non-LTE model atmospheres for hot Wolf-Rayet stars with sufficiently low wind densities (Schmutz et al. 1992) do predict an output of radiation above the He ii ionization edge, which might, at least qualitatively, provide a natural explanation for the narrow He ii λ4686 line observed in I Zw 18. Schaerer (1996) has, for the first time, synthesized the broad (stellar) and narrow (nebular) He ii features in young starbursts using the Geneva stellar evolution tracks and appropriate stellar model atmospheres. He then extended his computations to the metallicity of I Zw 18 (De Mello et al. 1998). In this paper, we use the emergent radiation field from the synthetic starburst model presented in De Mello et al. (1998) to construct photoionization models of I Zw 18. One of the objectives is to see whether this more realistic ionizing radiation field also permits one to solve the electron temperature problem encountered in previous studies. Former photoionization models predicted too low a [O iii] λ4363/5007 ratio, unless specific geometries were adopted (Dufour et al. 1988, Campbell 1990), which later turned out to be incompatible with Hubble Space Telescope (HST) images. The synthetic starburst model we use is based on spherically expanding non-LTE stellar atmosphere models for main-sequence stars (Schaerer & de Koter 1997) and for Wolf-Rayet stars (Schmutz et al. 1992). These models have a greater heating power than LTE model atmospheres of the same effective temperature (see Fig. 3; see also Schaerer & de Koter 1997). The progression of the paper is as follows. In Section 2, we discuss in more detail the photoionization models proposed previously for I Zw 18 and show in what respect they are not consistent with recent observations. In Section 3, we present our own model-fitting methodology, including a description of the computational tools. In Section 4, we describe the models we have built for I Zw 18 and discuss the effects of the assumptions involved in the computations. Our main results are summarized in Section 5.

## 2 Previous photoionization models of I Zw 18

The first attempt to produce a photoionization model for I Zw 18 is that of Dufour et al. (1988). Their observational constraints were provided by spectra obtained in an aperture of 2.5″ × 6″ of the NW region, combined with IUE observations yielding essentially the C iii] λ1909 line.
Using Shields’s photoionization code NEBULA, they modelled the object as an ionization bounded sphere of constant density $n$ = 100 cm$^{-3}$ and adjustable volume filling factor $ϵ$, chosen so as to reproduce the observed [O iii] λ5007/[O ii] λ3727 ratio. The ionizing radiation was provided by a central source, represented by the LTE model atmospheres of Hummer & Mihalas (1970), modified to take into account the low metallicity of I Zw 18. Discarding the He ii problem, they obtained a model that was reasonably successful, except that it had an O$^{++}$ temperature, T(O$^{++}$), marginally smaller than observed (17200 K, compared to the value of 18100 (+1100, -1000) K derived directly from their observed [O iii] λ4363/5007 = (2.79 ± 0.35) × 10$^{-2}$, the errors quoted being 2σ; a summary of various measurements of [O iii] λ4363/5007 and the corresponding electron temperatures is shown in Fig. 2). This model was obtained for an effective temperature of 45000 K. These authors showed that, because of the dominant role played by Lyα cooling in I Zw 18, it was impossible, for the adopted geometry, to produce a model with a noticeably higher T(O$^{++}$) by varying the free parameters at hand. Even increasing the effective temperature did not raise T(O$^{++}$) appreciably, because then the ionization parameter had to be lowered in order to maintain [O iii] λ5007/[O ii] λ3727 at its observed level, and this resulted in a greater H$^0$ abundance, thus enhancing Lyα excitation. Dufour et al. then proposed a composite model, in which the [O iii] line would be mainly produced around high-temperature stars ($T_{\mathrm{eff}}$ > 38000 K) and the [O ii] line would be mainly emitted around stars of lower $T_{\mathrm{eff}}$ (< 37000 K). Alternatively, one could have, around a star of $T_{\mathrm{eff}}$ < 45000 K, a high-ionization component emitting most of the [O iii] and a low-ionization component emitting most of the [O ii]. Since then, the HST images (Hunter & Thronson 1995, Meurer et al. 1995, Dufour et al. 1996, De Mello et al. 1998) have revealed that the NW region appears like a shell of ionized gas about 5″ in diameter, encircling a dense star cluster. Thus the geometries proposed by Dufour et al. (1988), although quite reasonable a priori, do not seem to apply to the case under study. Campbell (1990), using Ferland’s photoionization code CLOUDY, constructed models to fit the spectral observations of Lequeux et al. (1979), obtained through a slit of 3″.8 × 12″.4. These observations gave a [O iii] λ4363/5007 ratio of (3.75 ± 0.35) × 10$^{-2}$. With a constant-density, ionization bounded spherical model and an LTE Kurucz stellar atmosphere with metallicity 1/10 solar, in which the adjustable parameters were O/H, $T_{\mathrm{eff}}$, $n$ and $ϵ$, Campbell obtained a best-fit model that had [O iii] λ4363/5007 = 3.07 × 10$^{-2}$, i.e. much lower than the value she aimed at reproducing. She then proposed a density gradient model, in which the inner regions had a density $n$ > 10$^5$ cm$^{-3}$, so as to induce collisional deexcitation of [O iii] λ5007. Applying standard abundance derivation techniques to this model yields an oxygen abundance higher by 70% than the input value.
This led Campbell to conclude that I Zw 18 was not as oxygen-poor as previously thought. The density gradient model of Campbell (1990) can be checked directly using the density-sensitive [Ar iv] λ4741/4713 ratio. The only observations giving this line ratio are those of Legrand et al. (1997), and they indicate a density in the Ar$^{+++}$ region lower than 100 cm$^{-3}$. Direct images with the HST do not support Campbell’s density gradient model either, since, as stated above, the appearance of the H ii region is that of a shell surrounding the exciting stars. Stevenson et al. (1993), using a more recent version of CLOUDY, constructed a spherical, ionization bounded, constant-density photoionization model to fit their own data. They used as an input an extrapolation of the Kurucz LTE model atmospheres. Their modelling procedure was very similar to that of Campbell (1990) for her constant-density model. Their best-fit model had O/H = 1.90 × 10$^{-5}$ and returned [O iii] λ4363/5007 = 2.79 × 10$^{-2}$, to be compared to their observed value of (3.21 ± 0.42) × 10$^{-2}$. What complicates the discussion of the three studies above is that they use different codes with probably different atomic data, and they aim at fitting different sets of observations. Nevertheless, it is clear that all those models have difficulties in reproducing the high [O iii] λ4363/5007 observed. They have other weak points, as noted by their authors. For example, Dufour et al. (1988) and Stevenson et al. (1993) comment on the unsatisfactory fitting of the sulfur lines. However, the atomic data concerning sulfur are far less well established than those concerning oxygen, so the discrepancies are not necessarily meaningful. Besides, it is not surprising that, with a simple density structure, one does not reproduce perfectly at the same time the [O iii]/[O ii] and [S iii]/[S ii] ratios. The most important defect shared by the three models just discussed is that they predict no He ii λ4686 emission. This is simply due to the fact that they used an inadequate input stellar radiation field. With the presently available stellar population synthesis models for the exciting stars of giant H ii regions, which make use of more realistic model atmospheres (Schaerer & Vacca 1998), and especially of models that are relevant for the Wolf-Rayet stages of massive stars, it is interesting to reanalyze the problem. Using simple photon-counting arguments, De Mello et al. (1998) have already shown that a starburst with a Salpeter initial mass function and an upper mass limit of 150 $M_{\odot}$ could reproduce the equivalent widths of the Wolf-Rayet features and of the narrow He ii λ4686 emission line in I Zw 18. It is therefore interesting, using the emergent radiation field from such a synthetic stellar population, to see whether one can better reproduce the [O iii] λ4363/5007 ratio observed in I Zw 18 with a model that is more compatible with the density structure constrained by the HST images.

## 3 Our model fitting methodology

### 3.1 Computational tools and input parameters

As in the previous studies, we concentrate on the so-called NW component, seen at the top of Fig. 1, which shows the WFPC2 Hα image from the HST (cf. Fig. 1 of De Mello et al. 1998).
Throughout the paper, we adopt a distance to I Zw 18 of 10 Mpc, assuming $H_o$ = 75 km s$^{-1}$ Mpc$^{-1}$, as in many studies (Hunter & Thronson 1995, Martin 1996, van Zee et al. 1998). (Izotov et al. (1999) have submitted a paper suggesting a distance of 20 Mpc to I Zw 18. Should this be the case, the conclusions of our paper that are linked to the ionization structure and the temperature of the nebula would hardly be changed; the total mass of the ionized gas would be larger, roughly by a factor of $2^3$.)

#### 3.1.1 The stellar population

We use the same model for the stellar population as described in De Mello et al. (1998). It is provided by an evolutionary population synthesis code using stellar tracks computed with the Geneva code at the appropriate metallicity (1/50 $Z_{\odot}$). The stellar atmospheres used are spherically expanding non-LTE models for WR stars (Schmutz et al. 1992) and O stars (CoStar models at $Z=0.004$, Schaerer & de Koter 1997), and Kurucz models at [Fe/H] = -1.5 for the remainder. More details can be found in De Mello et al. (1998) and Schaerer & Vacca (1998). We assume an instantaneous burst of star formation, with an upper mass limit of 150 $M_{\odot}$ and a lower mass limit of 0.8 $M_{\odot}$. Since all observational quantities considered here depend only on the properties of massive stars, the choice of $M_{\mathrm{low}}$ has no influence on the results of this paper; it merely serves as an absolute normalisation. The total initial mass of the stars is adjusted in such a way that, at a distance of 10 Mpc, the flux at 3327 Å is equal to 1.7 × 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$, the value measured in the flux-calibrated WFPC2 F336W image of De Mello et al. (1998) within a circle of 2.5″ radius centered on the NW region (see Fig. 1). This flux is dominated by the latest generation of stars in I Zw 18, so that our normalization is hardly sensitive to the previous star formation history in the NW region of I Zw 18. It yields a total stellar mass of 8.7 × 10$^4$ $M_{\odot}$ at a distance of 10 Mpc. Actually, most of the flux comes from a region much smaller in size, and our photoionization modelling is performed with the ionizing cluster located at the center of the nebula, assuming that its spatial extension is negligible. We consider that the observed ultraviolet flux is only negligibly affected by extinction. (A direct fitting of the ultraviolet stellar continuum by population synthesis models, of which we became aware after the paper had been submitted, yields C(Hβ) < 0.06; Mas-Hesse & Kunth 1999.) For an extinction C(Hβ) of 0.04, as estimated by Izotov & Thuan (1998), the corrected flux would be only about 10% larger, which is insignificant in our problem. Other observers give values of C(Hβ) ranging between 0 and 0.2. If C(Hβ) were as large as 0.20, as estimated by Vílchez & Iglesias-Páramo (1998) and some other observers, the true stellar flux would be a factor of two higher. However, all the determinations of C(Hβ), except the one by Izotov & Thuan (1998), do not take into account the underlying stellar absorption at Hβ and therefore overestimate the reddening.
A further cause of overestimation of C(Hβ), which applies also to the work of Izotov & Thuan (1998), is that the intrinsic Hα/Hβ ratio is assumed to be the case B recombination value, whereas collisional excitation of Hα is not negligible in the case of I Zw 18, as noted by Davidson & Kinman (1985). We will come back to this below.

#### 3.1.2 The nebula

The photoionization computations are performed with the code PHOTO, using the atomic data listed in Stasińska & Leitherer (1996). The code assumes spherical geometry, with a central ionizing source. The diffuse radiation is treated assuming that all the photons are emitted outwards in a solid angle of 2π, and the transfer of the resonant photons of hydrogen and helium is computed with the same outward-only approximation, but multiplying the photo-absorption cross-section by an appropriate factor to account for the increased path length due to scattering (Adams 1975). The nebular abundances used in the computations are those we derived from the spectra of Izotov and Thuan (1998) for the NW component of I Zw 18, with the same atomic data as used in the photoionization code. For helium, however, we adopted the abundance derived by Izotov & Thuan (1998) for the SE component, since stellar absorption contaminates the neutral helium lines in the NW component. The nominal value of the temperature derived from [O iii] λ4363/5007 is 19800 K. This value was used to compute the ionic abundances of all the ions except O$^+$, N$^+$ and S$^+$, for which a value of 15000 K was adopted (this is the typical value returned by our photoionization models for I Zw 18). The electron density deduced from [S ii] λ6731/6717 is 140 cm$^{-3}$, and this density was adopted in the computation of the ionic abundances of all species. The ionization correction factors used to compute the total element abundances were those of Kingsburgh & Barlow (1994), which are based on photoionization models of planetary nebulae and are also suitable for H ii regions. They give slightly smaller oxygen abundances (by a few %) than the traditional ionization correction factors, which assume that the O$^{+++}$ region is coextensive with the He$^{++}$ region (we did not iterate on the ionization correction factors after our photoionization model analysis, since this would not have changed any of the conclusions drawn in this paper). The carbon abundance used in the computations follows from the C/O ratio derived by Garnett et al. (1997) from HST observations of I Zw 18. The abundances of the elements not constrained by the observations, (Mg, Si) and (Cl, Fe), have been fixed at $10^{-7}$ and $10^{-8}$, respectively. Table 1 presents the abundance set used in all the computations presented in the paper. As already noted by previous authors, at the metallicity of I Zw 18 the heavy elements (i.e. all the elements except hydrogen and helium) play a secondary role in the thermal balance. Their role in the absorption of ionizing photons is completely negligible. Any change of abundances, even that of helium, compatible with the observed intensities of the strong lines will result in a very small change in the electron temperature, and we have checked that the effects induced in the predicted spectra are small compared to the effects discussed below. We do not include dust in the computations.
While it is known that, in general, dust mixed with the ionized gas may absorb some of the ionizing photons and contribute to the energy balance of the gas by photoelectric heating and collisional cooling (e.g. Baldwin et al. 1991, Borkowski & Harrington 1991, Shields & Kennicutt 1995), the expected effect in I Zw 18 is negligible, since the dust-to-gas ratio is believed to be small at such metallicities (cf. Lisenfeld & Ferrara 1998). The case of I Zw 18 is thus very interesting for photoionization modelling, since, due to the very low metallicity of this object, the number of unconstrained relevant parameters is minimal.

### 3.2 Fitting the observational constraints

In judging the value of our photoionization models, we do not follow the common procedure of producing a table of intensities relative to Hβ to be compared to the observations. A good photoionization model is not only one which reproduces the observed line ratios within the uncertainties; it must also satisfy other criteria, like being compatible with what is known of the distribution of the ionized gas and what is known of the ionizing stars themselves. On the other hand, many line ratios are not at all indicative of the quality of a photoionization model. For example, two lines arising from the same atomic level, like [O iii] λ5007 and [O iii] λ4959, obviously have intensity ratios that depend only on the respective transition probabilities. In H ii regions, the ratio of hydrogen Balmer lines (if case B applies) is little dependent on the physical conditions in the ionized gas, and this is why it can be used to determine the reddening. The ratios of the intensities of neutral helium lines do depend somewhat on the electron density distribution and on selective absorption by dust of pseudo-resonant photons (Clegg & Harrington 1989, Kingdon & Ferland 1995), and these effects are included in photoionization models. In the case of the NW component of I Zw 18, the observed neutral helium lines are affected by absorption from stars or interstellar sodium (Izotov & Thuan 1998, Vílchez & Iglesias-Páramo 1998) and cannot easily be used as constraints for photoionization models. Generally speaking, once line ratios indicative of the electron temperature (like [O iii] λ4363/5007, [N ii] λ5755/6584), of the electron density (like [S ii] λ6731/6717, [Ar iv] λ4741/4713) and of the global ionization structure (like [O iii] λ5007/[O ii] λ3727 or [S iii] λ9532/[S ii] λ6725) have been fitted, the ratios of all the strong lines with respect to Hβ are necessarily reproduced by a photoionization model whose input abundances were obtained from the observations. The only condition is that the atomic data used to derive the abundances and to compute the models be the same. Problems may arise only if the empirical ionization correction factors differ from those given by the model, or if there is insufficient information on the distribution of the electron temperature or density inside the nebula (in the case of I Zw 18 no direct information is available on the temperature in the low-ionization zone, but we adopted a value inspired by the models).
Therefore, intensity ratios such as [O iii] λ5007/Hβ, [Ne iii] λ3869/Hβ, [N ii] λ6584/Hβ, [Ar iii] λ7135/Hβ or C iii] λ1909/Hβ are not a measure of the quality of the photoionization model. To judge whether a photoionization model is acceptable, one must work with outputs that are significantly affected by the physical processes on which the photoionization model is based, i.e. the transfer of the ionizing radiation, the processes determining the ionization equilibrium of the various atomic species, and the thermal balance of the gas. Table 2 lists the quantities that can be used in the case of I Zw 18, given the observational information we have on the object. The value of the Hβ flux is derived from the Hα flux measured in a circle of radius θ = 2.5″ (shown in Fig. 1), assuming C(Hβ) = 0. The line ratios He ii λ4686/Hβ, [O iii] λ4363/5007, [S ii] λ6731/6717, [O iii] λ5007/[O ii] λ3727, [S iii] λ6312/[S ii] λ6725 and [O i] λ6300/Hβ are the values observed by Izotov & Thuan (1998) in the rectangular aperture whose approximate position is shown in Fig. 1. It is important to define in advance the tolerance we accept for the difference between our model predictions and the observations. This must take into account the uncertainty in the observational data, the fact that the spectra were taken through an aperture not encompassing the whole nebula, and the fact that the nebula does not have a perfect, spherically symmetric structure. This latter aspect is, of course, difficult to quantify, and the numbers given in Column 3 of Table 2 are to be regarded rather as guidelines. In Column 4, we indicate the dominant factor determining the adopted tolerance: the signal-to-noise ratio or the geometry. For example, ratios such as He ii/Hβ, [O iii]/[O ii], [S iii]/[S ii] or [O i]/Hβ are obviously more dependent on geometrical effects than [O iii] λ4363/5007. Note that, even for that ratio, the tolerance given in Table 2 is larger than the uncertainty quoted by Izotov & Thuan (1998). The reason is that the many observations of the NW component of I Zw 18 made over the years, with different telescopes, detectors and apertures, yield distinct values for this ratio, as shown in Fig. 2. In view of this figure, a tolerance of 10% with respect to the [O iii] λ4363/5007 ratio measured by Izotov & Thuan (1998) seems reasonable. The status of the [S ii] λ6731/6717 ratio is somewhat different. It indicates the average electron density in the zone emitting [S ii]. This is very close to an input parameter, since photoionization models are built with a given density structure. However, because the density deduced from [S ii] λ6731/6717 is not the total hydrogen density but the electron density in the region emitting [S ii], and because the density is not necessarily uniform, it is important to check that the model returns a [S ii] λ6731/6717 value that is compatible with the observations. For the total Hβ flux, we accept models giving F(Hβ) larger than the observed value, on account of the fact that the covering factor of the ionizing source by the nebula may be smaller than one.
Thus, in the following, we compute photoionization models with the input parameters defined in Section 3.1 and see how they compare with the constraints specified in Table 2. We will not examine the effects of varying the elemental abundances since, as mentioned above, they are negligible in our problem. Uncertainties in the measured stellar flux have only a small impact on our models and are therefore not discussed here. Similarly, we discard the effects of an error in the distance $d$ to I Zw 18. These are not crucial for the ionization structure of a model designed to fit the observed flux at 3327 Å, since the mean ionization parameter varies roughly like $d^{2/3}$. What we mainly want to see is whether, with our present knowledge, we can satisfactorily explain the observed properties of I Zw 18. As will be seen, the gas density distribution plays an important role.

## 4 Climbing the ladder of sophistication

### 4.1 The ionizing radiation field

Before turning to proper photoionization modelling, it is worthwhile examining the gross properties of the ionizing radiation field of the synthetic stellar population model we are using, and comparing it to single-star model atmospheres. Two quantities are particularly relevant. One is $Q(\mathrm{He}^+)$/$Q(\mathrm{H}^0)$, the ratio of the numbers of photons above 54.4 and 13.6 eV emitted by the ionizing source. This ratio allows one to estimate the He ii λ4686/Hβ ratio that would be observed in a surrounding nebula, by using simple conservation arguments leading to the formula He ii λ4686/Hβ = 2.14 $Q(\mathrm{He}^+)$/$Q(\mathrm{H}^0)$ (taking the case B recombination coefficients given in Osterbrock 1989). As is known, this expression is valid if the nebula is ionization bounded and the observations pertain to the whole volume. It is less commonly realized that it also assumes the average temperature in the He$^{++}$ region to be the same as in the entire H ii region. This may be far from true, as will be shown below, so a correction should account for that. Another assumption is that all the photons above 54.4 eV are absorbed by He$^+$ ions. This is not what happens in objects with a low ionization parameter, where the residual neutral hydrogen atoms are sufficiently numerous to compete with He$^+$. In such a case, the expression above gives an upper limit to the nebular He ii λ4686/Hβ ratio. In spite of these difficulties, $Q(\mathrm{He}^+)$/$Q(\mathrm{H}^0)$ remains a useful quantity for estimating the intensity of the nebular He ii λ4686 line. Fig. 3a shows the variation of $Q(\mathrm{He}^+)$/$Q(\mathrm{H}^0)$ as a function of starburst age for the synthetic model population we are considering. As already stated in De Mello et al. (1998), the strength of the He ii nebular line in I Zw 18 indicates a starburst age between 2.9 and 3.2 Myr. Another important ratio is $Q(\mathrm{He}^0)$/$Q(\mathrm{H}^0)$, sometimes referred to as the hardness parameter of the ionizing radiation field. It provides a qualitative measure of the heating power of the stars. We have represented this quantity in Fig. 3c. We see that, as the starburst ages, its heating power gradually declines, showing only a very mild bump at ages around 3 Myr, when the Wolf-Rayet stars are present. As we will show below, this modest increase of the heating power is not sufficient to explain the high electron temperature observed in I Zw 18.
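The conservation formula quoted above translates directly into an estimate of, or an upper limit to, the nebular line ratio. A trivial sketch (the example input value is illustrative):

```python
def heii4686_over_hbeta(q_ratio):
    """Photon-conservation estimate He II 4686/H-beta = 2.14 Q(He+)/Q(H0),
    valid for an ionization bounded nebula in which every photon above
    54.4 eV is absorbed by He+ (case B, equal temperatures assumed)."""
    return 2.14 * q_ratio

# Inverting the relation: a nebular He II 4686/H-beta of, say, 0.03 (an
# illustrative value) would require Q(He+)/Q(H0) of about 0.014 from the cluster.
q_required = 0.03 / 2.14
```

In practice, as noted above, a low ionization parameter or a hot He$^{++}$ zone makes the true nebular ratio depart from this simple estimate.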
For comparison, we show in Figs. 3b and 3d, respectively, the values of $Q(\mathrm{He}^+)$/$Q(\mathrm{H}^0)$ and $Q(\mathrm{He}^0)$/$Q(\mathrm{H}^0)$ as a function of the stellar effective temperature for the LTE model atmospheres of Kurucz (1991), the CoStar model atmospheres corresponding to main-sequence stars (Schaerer & de Koter 1997), and the model atmospheres for Wolf-Rayet stars of Schmutz et al. (1992). The CoStar models show an increased He$^+$ ionizing flux compared to the Kurucz models, which have a negligible flux even for very low metallicities ([Fe/H] = -1.5). The reasons for this difference have been discussed in Schaerer & de Koter (1997). In addition to the $T_{\mathrm{eff}}$ dependence, $Q(\mathrm{He}^+)$ from WR models depends strongly on the wind density: He$^+$ ionizing photons are predicted only by models with sufficiently thin winds (cf. Schmutz et al. 1992). Figure 3d shows the increase in the hardness of the radiation field, at a given $T_{\mathrm{eff}}$, from the traditional Kurucz models to the spherically expanding non-LTE models for O and WR stars (see the discussion in Schaerer & de Koter 1997). This provides a greater heating power which, as will be shown later, is however still insufficient to explain the observations.

### 4.2 I Zw 18 as a uniform sphere

We start with the easiest and most commonly used geometry in photoionization modelling: a sphere uniformly filled with gas of constant density, occupying a fraction $ϵ$ of the total nebular volume. The free parameters of the models are then only the age of the starburst, the gas density and the filling factor. Each model is computed starting from the center, and the computations are stopped either when the [O iii]/[O ii] ratio has reached the observed value given in Table 2, or when the gas becomes neutral. In other words, in contrast to previous studies, we also examine models that are not ionization bounded. Figure 4 shows our diagnostic diagram for a series of models having a density $n$ = 100 cm$^{-3}$ and a filling factor $ϵ$ = 0.01. The left panel shows the computed values of log F(Hβ) + 15 (open circles), log He ii λ4686/Hβ + 2 (squares), angular radius θ (crosses), [O iii] λ4363/5007 × 100 (open triangles), [S ii] λ6731/6717 (diamonds), log ([O iii]/[O ii]) (black triangles), log ([S iii]/[S ii]) (asterisks) and log [O i]/Hβ + 3 (plus signs) as a function of the starburst age. The black circles correspond to the value of log F(Hβ) + 15 that the nebula would have if it were ionization bounded. Thus, by comparing the positions of an open circle and a black circle at a given abscissa, one can immediately see whether the model is density bounded and how much diffuse Hα or Hβ emission is expected to be emitted outside the main body of the nebula. In the right panel, the observed values are represented on the same vertical scale and with the same symbols as the model predictions. The tolerances listed in Table 2 are represented as vertical error bars (the horizontal displacement of the symbols has no particular meaning). We readily see that the age of the starburst is important only for the He ii λ4686 line, the other quantities varying very little for ages of 2.7-3.4 Myr. Therefore, for the following runs of models, we adopt an age of 3.1 Myr.
In principle, one can always adjust the age for the model to reproduce the observed He ii $`\lambda `$4686/H$`\beta `$ ratio exactly. Figure 5 shows the same sort of diagnostic diagram as Fig. 4 for a series of models with $`n`$ = 100 cm<sup>-3</sup> and varying filling factor. For a filling factor around 0.1 or larger, with the adopted electron density, the model is ionization bounded, and its \[O iii\]/\[O ii\] is larger than observed. For filling factors smaller than that, the gas distribution is more extended, so that the general ionization level drops. The observed \[O iii\]/\[O ii\] can then only be reproduced for a density bounded model. In such a case, the H$`\beta `$ radiation actually produced by the nebula is smaller than if all the ionizing photons were absorbed in the nebula. A filling factor of 0.002 – 0.05 gives values of \[O iii\]/\[O ii\], F(H$`\beta `$) and $`\theta `$ in agreement with the observations. But such models give \[O iii\] $`\lambda `$4363/5007 too small compared with the observations, and \[O i\]/H$`\beta `$ below the observed value by nearly two orders of magnitude. It is interesting, though, to understand the qualitative behavior of these line ratios as $`ϵ`$ decreases. \[O i\]/H$`\beta `$ decreases because the model becomes more and more density bounded in order to reproduce the observed \[O iii\]/\[O ii\], and levels off at $`ϵ`$ = 0.1, because the ionization parameter of the model is then so small that the \[O i\] is gradually emitted by residual neutral oxygen in the main body of the nebula and not in the outskirts. \[O iii\] $`\lambda `$4363/5007 decreases as $`ϵ`$ decreases, because of the increasing proportion of L$`\alpha `$ cooling as the ionization parameter drops. One can build other series of models with different values of $`n`$ that are still compatible with the observed \[S ii\] $`\lambda `$6731/6717. Qualitatively, their behavior is the same and no acceptable solution is found. Interestingly, Fig. 5 shows that models with $`ϵ`$ ≥ 0.1 have \[O iii\] $`\lambda `$4363/5007 marginally compatible with the observations (\[O iii\] $`\lambda `$4363/5007 = 3.03 10<sup>-2</sup> for $`ϵ`$ = 0.1), but such models have too large \[O iii\]/\[O ii\] ($`>`$ 10, compared to the observed value of 6.8) and too small an angular radius ($`<`$ 1.6″ instead of the observed 2.5″). Note, by the way, that such models, being optically thick, yield a rather large \[O i\]/H$`\beta `$, actually close to the observed value, and a \[S iii\]/\[S ii\] compatible with the observations. However, we do not use \[S iii\]/\[S ii\] as a primary criterion to judge the validity of a model, since experience with photoionization modelling of planetary nebulae shows that it is difficult to reproduce at the same time the sulfur and the oxygen ionization structure of a given object, and, in principle, one expects the atomic data for oxygen to be more reliable than those for sulfur. The strongest argument against models with $`ϵ>0.1`$ is their angular size, which is definitely too small compared with the observations. This remains true even when considering a reasonable error on the distance since, given the condition we impose that the flux at 3327 Å be preserved, the angular radius of a model goes roughly like $`d^{1/3}`$. This illustrates the importance of taking into account other parameters, in addition to line ratios, to accept or reject a photoionization model.
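The way models are accepted or rejected can be expressed compactly. In the sketch below, the two observed values are those quoted in this section (\[O iii\]/\[O ii\] = 6.8 and $`\theta `$ = 2.5″); the tolerance widths are placeholders standing in for the actual entries of Table 2.

```python
# Sketch of the acceptance test behind the diagnostic diagrams: a model is
# kept only if every observable falls within its tolerance band. The observed
# values are quoted in the text; the tolerance widths are placeholders.

CONSTRAINTS = {
    "O3_over_O2": (6.8, 1.0),    # (observed value, placeholder tolerance)
    "theta_arcsec": (2.5, 0.3),  # angular radius in arcsec
}

def accept(model):
    """True if all model predictions lie within the observational bands."""
    return all(abs(model[key] - obs) <= tol
               for key, (obs, tol) in CONSTRAINTS.items())

# A model like the one with filling factor >= 0.1 discussed above fails on
# its angular size even though some line ratios are marginally acceptable:
print(accept({"O3_over_O2": 10.0, "theta_arcsec": 1.6}))  # -> False
```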
### 4.3 I Zw 18 as a spherical shell

The series of models presented above had mainly a pedagogical value, but they are obviously incompatible with the observed morphology in H$`\alpha `$. The next step is to consider a model consisting of a hollow, spherical shell of constant density, similar to the one constructed by García-Vargas et al. (1997) for NGC 7714, for example. In such a case, there is an additional free parameter, $`R_{in}`$, the radius of the inner boundary of the shell. It is fixed, more or less, by the appearance of the H$`\alpha `$ image. Figure 6 shows a diagnostic diagram for a series of models with $`R_{in}`$ = 2.25 10<sup>20</sup> cm (corresponding to an angular radius of 1.5″), $`n`$ = 100 cm<sup>-3</sup> and varying $`ϵ`$. The qualitative behavior is similar to that seen for the uniform sphere models presented in Fig. 5, but the \[O iii\] $`\lambda `$4363/5007 ratio is now even lower (it never exceeds 2.5 10<sup>-2</sup> in this series). This is because of the enhanced role of L$`\alpha `$ cooling, which is strong in all the parts of the nebula, while for the full sphere model, in the zone close to the star, the ionization parameter is very high and consequently the population of neutral hydrogen is very small. Apart from the \[O iii\] $`\lambda `$4363/5007 problem, models with $`ϵ`$ = 0.002 – 0.05 are satisfactory as concerns the main diagnostics (\[O iii\]/\[O ii\], F(H$`\beta `$) and $`\theta `$). The models become progressively density bounded towards smaller filling factors, $`ϵ`$ $`<`$ 0.02, meaning that there is a leakage of ionizing photons. From Fig. 6, one sees that these photons are enough to produce H$`\alpha `$ emission in an extended diffuse envelope that is at least comparable in strength to the total emission from the dense shell. This is in agreement with Dufour & Hester’s (1990) ground-based observation of extended H$`\alpha `$ emission surrounding the main body of star formation.

### 4.4 Other geometries

Closer inspection of the HST H$`\alpha `$ image shows that the gas emission is incompatible with a spherical bubble. This is illustrated in Fig. 7, where the observed cumulative surface brightness profile within radius $`r`$ (dashed line) is compared to the expected profiles for constant density spherical shells of various inner radii (solid lines). The theoretical profiles are obtained assuming that the temperature is uniform in the gas, but taking into account a reasonable temperature gradient in the model hardly changes the picture. Clearly, the observed profile is not compatible with a spherical density distribution: the column density of emitting matter in the central zone of the image is too small. One must either have an incomplete shell with some matter stripped off from the poles, or an even more extreme morphology, such as a diffuse axisymmetric body with a dense ringlike equator seen face on. Such geometries are actually common among planetary nebulae (Corradi & Schwartz 1995) and nebulae surrounding luminous blue variables (Nota et al. 1995), probably resulting from the interaction of an aspherical stellar wind from the central stars with the ambient medium (Mellema 1995, Frank et al. 1998); they are also suggested to exist in superbubbles and supergiant shells and to give rise to the blow-out phenomenon (Mac Low et al. 1989, Wang & Helfand 1991, Tenorio-Tagle et al. 1997, Oey & Smedley 1998, Martin 1998). Does the consideration of such a geometry help in solving the \[O iii\] $`\lambda `$4363/5007 problem?
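Before addressing that question, we note that the profile comparison behind Fig. 7 reduces to simple geometry. The sketch below computes the cumulative surface brightness of a constant-density spherical shell under the uniform-temperature assumption stated above; units and radii are arbitrary.

```python
import numpy as np

# Sketch of the geometry behind the solid lines of Fig. 7: the cumulative
# surface brightness within projected radius for a constant-density spherical
# shell, assuming uniform temperature (emissivity ~ n^2, so the surface
# brightness scales with the chord length through the shell).

def shell_chord(b, r_in, r_out):
    """Line-of-sight path length through the shell at impact parameter b."""
    outer = np.sqrt(np.clip(r_out**2 - b**2, 0.0, None))
    inner = np.sqrt(np.clip(r_in**2 - b**2, 0.0, None))
    return 2.0 * (outer - inner)

def cumulative_profile(r_in, r_out, n_points=1000):
    """Cumulative flux inside projected radius, normalised to the total."""
    b = np.linspace(0.0, r_out, n_points)
    sb = shell_chord(b, r_in, r_out)        # surface brightness ~ chord length
    flux = np.cumsum(2.0 * np.pi * b * sb)  # integration over annuli
    return b, flux / flux[-1]

# Larger inner radii deprive the central zone of emitting column density:
for r_in in (0.0, 0.5, 0.8):
    b, f = cumulative_profile(r_in, 1.0)
    print(r_in, round(float(np.interp(0.5, b, f)), 2))  # fraction inside r = 0.5
```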
In the case of a spherical shell with some matter lacking at the poles, our computations overestimate the role of the diffuse ionizing radiation, which is supposed to come from a complete shell. Since the heating power of the diffuse ionizing radiation is smaller than that of the stellar radiation, one may be underestimating the electron temperature. As a test, we have run a model switching off the diffuse radiation completely, so as to maximize the electron temperature. The effect was to increase the O<sup>++</sup> temperature by only 200 K. In the case of a ring with some diffuse matter seen in projection inside the ring, the gas lying close to the stars would be at a higher electron temperature than the matter of the ring, and one expects that an adequate combination of the parameters describing the gas distribution in the ring and interior to it might reproduce the \[O iii\] $`\lambda `$4363/5007 measured in the aperture shown in Fig. 1. However, we notice that the region of high \[O iii\] $`\lambda `$4363/5007 is much larger than that. It extends over almost 20″ (Vílchez & Iglesias-Páramo 1998) and \[O iii\] $`\lambda `$4363/5007 is correlated neither with \[O iii\]/\[O ii\] nor with the H$`\alpha `$ surface brightness. Therefore, one cannot explain the high \[O iii\] $`\lambda `$4363 by the emitting matter being close to the ionizing stars. In passing, we note that the observations of Vílchez & Iglesias-Páramo (1998) show the nebular He ii $`\lambda `$4686 emission to be extended as well (although not as much as \[O iii\] $`\lambda `$4363). If this emission is due to photoionization by the central star cluster, as modelled in this paper, this means that the H$`\alpha `$ ring is porous, since He ii $`\lambda `$4686 emission necessarily comes from a region separated from the stars by only a small amount of intervening matter. In summary, we must conclude that, if we take into account all the available information on the structure of I Zw 18, we are not able to explain the high observed \[O iii\] $`\lambda `$4363/5007 ratio with our photoionization models. In our best models, \[O iii\] $`\lambda `$4363/5007 is below the nominal value of Izotov & Thuan (1998) by 25–35%.

### 4.5 Back to the model assumptions

Can our lack of success be attributed to an improper description of the stellar radiation field? After all, we know little about the validity of stellar model atmospheres in the Lyman continuum (see discussion in Schaerer 1998). Direct measurements of the EUV flux of early B stars revealed an important EUV excess (up to ∼ 1.5 dex) with respect to plane parallel model atmospheres (Cassinelli et al. 1995, 1996), whose origin has been discussed by Najarro et al. (1996), Schaerer & de Koter (1997) and Aufdenberg et al. (1998). For O stars, however, a similar excess in the total Lyman continuum output is excluded from considerations of their bolometric luminosity and measurements of H ii region luminosities (Oey & Kennicutt 1997, Schaerer 1998). The hardness of the radiation field, which is crucial for the heating of the nebula, is more difficult to test. Some constraints can be obtained by comparing the line emission of nebulae surrounding hot stars with the results of photoionization models (Esteban et al. 1993, Peña et al. 1998, Crowther et al. 1999), but this is a difficult task, considering that the nebular emission also depends on the geometry.
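The blackbody hardness values that serve as reference points in the next paragraph are straightforward to verify. The sketch below evaluates the relative photon rates of a blackbody above the H<sup>0</sup>, He<sup>0</sup> and He<sup>+</sup> ionization thresholds; at T = 300 000 K it recovers the $`Q(\mathrm{He}^0)`$/$`Q(\mathrm{H}^0)`$ = 0.9 quoted below.

```python
import math
from scipy.integrate import quad

# Sketch: relative ionizing photon rates of a blackbody above the H0, He0 and
# He+ thresholds, Q(>E) being proportional to the integral of x^2/(e^x - 1)
# above x = E/kT. At T = 300 000 K this reproduces Q(He0)/Q(H0) ~ 0.9, and
# the large Q(He+)/Q(H0) shows why such a blackbody grossly overpredicts the
# nebular He ii 4686 line.

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def q_above(e_thresh_ev, temp_k):
    """Photon rate above a threshold energy, in relative units."""
    x0 = e_thresh_ev / (K_B_EV * temp_k)
    value, _ = quad(lambda x: x * x / math.expm1(x), x0, x0 + 50.0)
    return value

T = 3.0e5
q_h0, q_he0, q_hep = (q_above(e, T) for e in (13.6, 24.6, 54.4))
print(round(q_he0 / q_h0, 2))  # -> 0.91, the hardness parameter
print(round(q_hep / q_h0, 2))  # -> 0.59, implying strong He ii emission
```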
Although the hardness predicted by the CoStar O star models makes it possible to build grids of photoionization models that seem to explain the observations of Galactic and LMC H ii regions (Stasińska & Schaerer 1997), the constraints are not sufficient to prove or disprove the models. To check the effect of a harder radiation field, we have run a series of models where the radiation field above 24.6 eV was arbitrarily multiplied by a factor 3 (raising the value of $`Q(\mathrm{He}^0)`$/$`Q(\mathrm{H}^0)`$ from 0.33 to 0.59, corresponding to $`T_{\mathrm{eff}}`$ going from ∼ 40 000 K to ∼ 100 000 K, or to a blackbody of the same temperature; cf. Fig. 3d). This drastic hardening of the radiation field resulted in an increase of \[O iii\] $`\lambda `$4363/5007 of only 10%. It is only by assuming a blackbody radiation of 300 000 K (which has $`Q(\mathrm{He}^0)`$/$`Q(\mathrm{H}^0)`$ = 0.9) that one approaches the observed \[O iii\] $`\lambda `$4363/5007. A model similar to those presented in Fig. 6 but with such a radiation field gives \[O iii\] $`\lambda `$4363/5007 = 3.15 10<sup>-2</sup>. But it has a He ii $`\lambda `$4686/H$`\beta `$ ratio of 0.53, which is completely ruled out by the observations. Of course, a blackbody is probably not the best representation for the spectral energy distribution of the radiation emitted by a very hot body, but in order to explain the emission line spectrum of I Zw 18 by stars, one has to assume an ionizing radiation strongly enhanced at energies between 20 and 54 eV, but not above 54.4 eV, compared to the model of De Mello et al. (1998). Whether this is realistic cannot be said at the present time. We have also checked the effect of heating by additional X-rays that would be emitted due to the interaction of the stellar winds with ambient matter (see Martin 1996 for such X-ray observations), by simply adding a bremsstrahlung spectrum at T = 10<sup>6</sup> or T = 10<sup>7</sup> K to the radiation from the starburst model. As expected, the effect on \[O iii\] $`\lambda `$4363/5007 was negligible, since the X-rays are mostly absorbed in the low ionization regions (they do raise the temperature in the O<sup>+</sup> zone to T<sub>e</sub> ∼ 16 000 K). As already commented by previous authors, changing the elemental abundances does not improve the situation. Actually, even by reducing the abundances of all the elements heavier than helium to nearly zero, \[O iii\] $`\lambda `$4363/5007 is raised by only 7%. Varying the helium abundance within reasonable limits does not change the problem. The neglect of dust is not expected to be responsible for the too low \[O iii\] $`\lambda `$4363/5007 we find in our models for I Zw 18. Gas heating by photoemission from grains can contribute as much as 20% to the electron thermal balance when the dust-to-gas ratio is similar to that found in the Orion nebula. But, as discussed by Baldwin et al. (1991), it is effective close to the ionizing source, where dust provides most of the opacity. Besides, the proportion of dust in I Zw 18 is expected to be small, given the low metallicity. Extrapolating the relation found by Lisenfeld & Ferrara (1998) between the dust-to-gas ratio and the oxygen abundances in dwarf galaxies to the metallicity of I Zw 18 yields a dust-to-gas mass ratio 2 to 3 orders of magnitude smaller than in the solar vicinity. It remains to examine the ingredients of the photoionization code.
We first stress that the \[O iii\] $`\lambda `$4363/5007 problem in I Zw 18 has been encountered by various authors using different codes, even if the observational material and the points of emphasis in the discussion were not the same in all the studies. As a further test, we compared the same model for I Zw 18 run by CLOUDY 90 and by PHOTO, and the difference in the predicted value of \[O iii\] $`\lambda `$4363/5007 was only 5%. One would therefore have to incriminate something that is treated incorrectly in the same way in many codes. One possibility that comes to mind is the diffuse ionizing radiation, which is treated by some kind of outward-only approximation in all the codes used to model I Zw 18. However, we do not expect that an accurate treatment of the diffuse ionizing radiation would solve the problem. Indeed, comparison of codes that treat the ionizing radiation more accurately with those using an outward-only approximation shows only a negligible difference in the electron temperature (Ferland et al. 1996). Besides, as we have shown, even quenching the diffuse ionizing radiation does not solve the problem. Finally, one can also question the atomic data. The most relevant ones here are those governing the emissivities of the observed \[O iii\] lines and the H i collision strengths. The magnitude of the discrepancy we wish to solve would require modifications of the collision strengths or transition probabilities for \[O iii\] of about 25%. This is much larger than the expected uncertainties and the differences between the results of different computations for this ion (see discussion in Lennon & Burke 1994 and Galavis et al. 1997). Concerning L$`\alpha `$ excitation, dividing the collision strength by a factor 2 (which is far above any conceivable uncertainty, see e.g. Aggarwal et al. 1991) modifies \[O iii\] $`\lambda `$4363/5007 by only 2%, because L$`\alpha `$ acts like a thermostat. We are therefore left with the conclusion that the \[O iii\] $`\lambda `$4363/5007 ratio cannot be explained in the framework of photoionization models alone.

### 4.6 Condensations and filaments

Another failure of our photoionization models is that they predict too low an \[O i\]/H$`\beta `$ ratio compared to the observations. This is a general feature of photoionization models, and it is often taken as one of the reasons to invoke the presence of shocks. However, it is well known that the presence of small condensations or filaments enhances the \[O i\]/H$`\beta `$ ratio, by reducing the local ionization parameter. Another possibility is to have an intervening filament located at a large distance from the ionizing source, whose projected surface on the aperture of the observations would be small. In order to see under what conditions such models can quantitatively account for the observed \[O i\]/H$`\beta `$ ratio, we have mimicked such a situation by computing a series of ionization bounded photoionization models for filaments of different densities located at various distances from the exciting stars. For simplicity, we assumed that there is no intervening matter between the filaments and the ionizing source. The models were actually computed for complete shells. The radiation coming from a filament can then be simply obtained by multiplying the flux computed in the model by the covering factor $`f`$ by which the filament is covering the source.
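The covering-factor arithmetic just described amounts to a weighted sum of two model spectra, as in the following sketch; the line fluxes in the example are invented for illustration and merely stand in for the entries of Table 3.

```python
# Sketch of the covering-factor arithmetic described above: the composite
# spectrum is the main-body model plus an ionization-bounded filament model
# scaled by the covering factor f. All fluxes are illustrative.

def combine(main, filament, f):
    """Line ratios relative to H-beta for main body + filament covering a
    fraction f of the source (fluxes in common, arbitrary units)."""
    lines = set(main) | set(filament)
    total = {k: main.get(k, 0.0) + f * filament.get(k, 0.0) for k in lines}
    hbeta = total["Hbeta"]
    return {k: total[k] / hbeta for k in total if k != "Hbeta"}

main_body = {"Hbeta": 1.0, "OI6300": 0.0005}  # density bounded, weak [O i]
filament = {"Hbeta": 0.05, "OI6300": 0.01}    # optically thick, [O i]-bright
print(combine(main_body, filament, 0.1))      # [O i]/H-beta rises threefold
```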
Table 3 presents the ratios with respect to H$`\beta `$ of the \[O i\] $`\lambda `$6300, \[O ii\] $`\lambda `$3727, \[O iii\] $`\lambda `$5007, \[S ii\] $`\lambda `$6717 and \[S ii\] $`\lambda `$6731 lines for these models. The first column of the table corresponds to a photoionization model for the main body of the nebula with $`R_{in}`$ = 2.25 10<sup>20</sup> cm (corresponding to $`\theta _{\mathrm{in}}=`$ 1.5″), $`n`$=100 cm<sup>-3</sup>, and $`ϵ`$=0.01, density bounded so as to obtain the observed \[O iii\]/\[O ii\] (this is one of the models of Fig. 6). It can be readily seen that, for an intervening filament of density 10<sup>2</sup> cm<sup>-3</sup> located at a distance of 500 pc from the star cluster, or for condensations of density 10<sup>6</sup> cm<sup>-3</sup>, one can reproduce the observed \[O i\]/H$`\beta `$ line ratio without strongly affecting the remaining lines, not even the density-sensitive \[S ii\] $`\lambda `$6731/6717 ratio, if one assumes a covering factor $`f`$ of about 0.1. This explanation may appear somewhat speculative. However, one must be aware that the morphology of the ionized gas in I Zw 18 shows filaments in the main shell as well as further out, and this has been amply discussed in the literature (Hunter & Thronson 1995, Dufour et al. 1995, Martin 1996). Our point is that, even in the framework of pure photoionization models, if one accounts for a density structure suggested directly by the observations, the strength of the \[O i\] line can be easily understood. We note that the condensations or filaments that produce \[O i\] are optically thick, and therefore their neutral counterpart should be seen in H i. Unfortunately, the available H i maps of I Zw 18 (van Zee et al. 1998) do not have sufficient spatial resolution to reveal such filaments. But these authors show that the peaks of the H i and H$`\alpha `$ emission coincide (in their paper, the peak emission in H$`\alpha `$ actually refers to the whole NW component), and find that the entire optical system, including the diffuse emission, is embedded in a diffuse, irregular and clumpy neutral cloud. In such a situation, it is very likely that some clumps or filaments, situated in front of the main nebula and having a small covering factor, produce the observed \[O i\] line. By using photoionization models to explain the observed \[O i\]/H$`\beta `$ ratio, one can thus deduce the presence of density fluctuations, even if those are not seen directly. Such density fluctuations could then be inferred more directly from density-sensitive ratios of \[Fe ii\] lines, such as those seen and analyzed in the Orion nebula by Bautista & Pradhan (1995).

### 4.7 A few properties of I Zw 18 deduced from models

Although we have not built a completely satisfactory photoionization model reproducing all the relevant observed properties of I Zw 18, we are not too far from it. We describe below some properties of the best models that may be of interest for further empirical studies of this object. Tables 4 and 5 present the mean ionic fractions and mean ionic temperatures (as defined in Stasińska 1990) for the different elements considered, in the case of the best fit models with a uniform sphere and a spherical shell respectively, with $`n`$=100 cm<sup>-3</sup> and $`ϵ`$=0.01. It can be seen that, while both geometries yield similar ionic fractions for the main ionization stages, the relative populations of the highly charged trace ions are very different.
In the case of the uniform sphere, the proportion of O<sup>+++</sup>, for example, is twice as large as in the shell model. Also, the characteristic temperatures of ions with high ionization potential are much higher in the case of the filled sphere, for reasons commented on earlier. As a result, the O iv\] $`\lambda `$1394/H$`\beta `$ ratio is 9.7 10<sup>-3</sup> in the first case and 8.6 10<sup>-4</sup> in the second. This line is too weak to be measured in I Zw 18, of course, but it is useful to keep this example in mind for the study of more metal-rich objects. It is interesting to point out that, in the case of the uniform sphere, the total flux in the He ii $`\lambda `$4686 line is smaller than in the case of the shell model (3.1 10<sup>-15</sup> vs. 3.3 10<sup>-15</sup> erg cm<sup>-2</sup> s<sup>-1</sup>), despite the fact that the ionic fractions of He<sup>++</sup> are similar. This is because the He<sup>++</sup> region is at a much higher temperature (25 000 K versus 18 000 K). Tables 4 and 5 can be used for estimating the ionization correction factors for I Zw 18. Caution should, however, be applied, especially regarding the ionization structure predicted for elements of the third row of Mendeleev’s table. Experience in photoionization modelling of planetary nebulae, where the observational constraints are stronger, shows that the ionization structure of these elements is rarely satisfactorily reproduced with simple models (Howard et al. 1997, Peña et al. 1998). The total ionized mass in the NW component is relatively well determined, since we know the total number of ionizing photons and the radius of the emitting region, and have an estimate of the mean ionization parameter through the observed \[O iii\]/\[O ii\]. Indeed, at a given $`Q(\mathrm{H}^0)`$, for a constant density sphere with a filling factor $`ϵ`$, the combination $`n^2ϵ`$ is fixed by the cube of the radius, while $`nϵ^2`$ is proportional to the cube of the ionization parameter. Of course, we have just made the point that the NW component of I Zw 18 is not a sphere. Nevertheless, we have an order-of-magnitude estimate, which is 3 10<sup>5</sup> M<sub>⊙</sub> at $`d`$ = 10 Mpc (this estimate varies like $`d^3`$). Finally, it is important to stress, as already mentioned above, that H$`\alpha `$ is partly excited by collisions. In all our models for the main body of the nebula, H$`\alpha `$/H$`\beta `$ lies between 2.95 and 3, while the case B recombination value is 2.7. This means that the reddening of I Zw 18 is smaller than the value obtained using the case B recombination value at the temperature derived from \[O iii\] $`\lambda `$4363/5007. If we take the observations of Izotov & Thuan (1998), who also correct for underlying stellar absorption, we obtain C(H$`\beta `$)=0.

## 5 Summary and conclusions

We have built photoionization models for the NW component of I Zw 18 using the radiation field from a starburst population synthesis at appropriate metallicity (De Mello et al. 1998) that is consistent with the Wolf-Rayet signatures seen in the spectra of I Zw 18. The aim was to see whether, with a nebular density structure compatible with recent HST images, it was possible to explain the high \[O iii\] $`\lambda `$4363/5007 ratio seen in this object, commonly interpreted as indicative of an electron temperature of ∼ 20 000 K. For our photoionization analysis we have focused on the quantities that constitute relevant and crucial model predictions.
For the observational constraints we have used not only line ratios, but also other properties such as the integrated stellar flux at 3327 Å and the observed angular radius of the emitting region as seen by the HST. Care has also been taken to include tolerances on model properties which may be affected by deviations from various simple geometries. We have found that \[O iii\] $`\lambda `$4363/5007 cannot be explained by pure photoionization models, which yield too low an electron temperature. We have considered the effects due to departure from spherical symmetry indicated by the HST images. Indeed these show that the NW component of I Zw 18 is neither a uniform sphere nor a spherical shell, but rather a bipolar structure with a dense equatorial ring seen pole on. We have discussed the consequences that an inaccurate description of the stellar ionizing radiation field might have on \[O iii\] $`\lambda `$4363/5007, as well as the effect of additional photoionization by X-rays. Finally, we have considered possible errors in the atomic data. All these trials were far from solving the electron temperature problem, raising the \[O iii\] $`\lambda `$4363/5007 ratio by only a few percent, while the discrepancy with the observations is at the 30% level. Such a discrepancy means that we are missing a heating source whose power may be comparable in magnitude to that of the stellar ionizing photons. It is also possible that the unknown energy source is not so powerful, but acts in such a way that small quantities of gas are emitting at very high temperatures, thus boosting the \[O iii\] $`\lambda `$4363 line. Shocks are of course one of the options (Peimbert et al. 1991, Martin 1996, 1997), as well as conductive heating at the interface of an X-ray plasma with optically visible gas (Maciejewski et al. 1996). Such ideas need to be examined quantitatively and applied to the case of I Zw 18, which we shall attempt in a future work. What are the consequences of our failure in understanding the energy budget for the abundance determinations in I Zw 18? It depends on how the electron temperature is distributed in the O<sup>++</sup> zone. As emphasized by Peimbert (1967, 1996) over the years (see also Mathis et al. 1998), the existence of even small zones at very high temperatures will boost the lines with a high excitation threshold like \[O iii\] $`\lambda `$4363, so that the temperature derived from \[O iii\] $`\lambda `$4363/5007 will overestimate the average temperature of the regions emitting \[O iii\] $`\lambda `$5007. Consequently, the true O/H ratio will be larger than the one derived by the standard methods. The C/O ratio, on the other hand, will be smaller than derived empirically, because the ultraviolet C iii\] $`\lambda `$1909 line will be extremely enhanced in the high temperature regions. Such a possibility was invoked by Garnett et al. (1997) to explain the high C/O found in I Zw 18 compared to other metal-poor irregular galaxies. Presently, however, too little is known both theoretically and observationally to estimate this effect quantitatively, and it is not excluded that the abundances derived so far may be correct within 30%. Obviously, high spatial resolution mapping of the \[O iii\] $`\lambda `$4363/5007 ratio in I Zw 18 would be valuable to track down the origin of the high \[O iii\] $`\lambda `$4363 seen in spectra integrated over a surface of about 10″<sup>2</sup>. Besides demonstrating the existence of a heating problem in I Zw 18, our photoionization model analysis led to several other results.
The intensity of the nebular He ii $`\lambda `$4686 line can be reproduced with a detailed photoionization model having as an input the stellar radiation field that is consistent with the observed Wolf-Rayet features in I Zw 18. This confirms the results of De Mello et al. (1998) based on simple case B recombination theory. By fitting the observed \[O iii\]/\[O ii\] ratio and the angular size of the NW component with a model where the stellar radiation flux was adjusted to the observed value, we were able to show that the H ii region is not ionization bounded and that about half of the ionizing photons leak out of it, which is sufficient to explain the extended diffuse H$`\alpha `$ emission observed in I Zw 18. While the \[O i\] emission is not reproduced in simple models, it can easily be accounted for by condensations or by intervening filaments on the line of sight. There is no need to invoke shocks to excite the \[O i\] line, although shocks are probably involved in the creation of the filaments, as suggested by Dopita (1997) in the context of planetary nebulae. The intrinsic H$`\alpha `$/H$`\beta `$ ratio is significantly affected by collisional excitation: our photoionization models give a value of 3.0, to be compared to the case B recombination value of 2.75 used in most observational papers. Consequently, the reddening is smaller than usually estimated, with C(H$`\beta `$) practically equal to 0. Our models can be used to give ionization correction factors appropriate for I Zw 18 for more accurate abundance determinations. However, the largest uncertainty in the abundances of C, N, O and Ne ultimately lies in the unsolved temperature problem. It would be, of course, of great interest to find out whether other galaxies share with I Zw 18 this \[O iii\] $`\lambda `$4363/5007 problem. There are at least two other cases that would deserve a more thorough analysis. One is the starburst galaxy NGC 7714, for which published photoionization models (García-Vargas et al. 1997) also give \[O iii\] $`\lambda `$4363/5007 smaller than observed. However, it still needs to be demonstrated that this problem remains when the model assumptions are modified (e.g. the gas density distribution, possible heating of the gas by the photoelectric effect on dust particles, etc.). In the case of NGC 2363, the photoionization models of Luridiana et al. (1999) that were built using the oxygen abundances derived directly from the observations yielded an \[O iii\] $`\lambda `$4363/5007 ratio marginally compatible with the observations. These authors further argued that, due to the presence of large spatial temperature fluctuations, the true gas metallicity in this object is higher than derived by empirical methods. In such a case, the \[O iii\] $`\lambda `$4363/5007 ratio becomes even more discrepant. It might well be that additional heating sources exist in giant H ii regions, giving rise to such large temperature variations and enhancing the \[O iii\] $`\lambda `$4363 emission. As mentioned above, such a scenario needs to be worked out quantitatively. Further detailed observational and theoretical studies of individual objects would be helpful, since we have shown that, when the observational constraints are insufficient, photoionization models may appear to reproduce a high \[O iii\] $`\lambda `$4363/5007. The effort is worthwhile, since it would have implications both for our understanding of the energetics of starburst galaxies and for our confidence in abundance derivations.

###### Acknowledgements.
This project was partly supported by the “GdR Galaxies”. DS acknowledges a grant from the Swiss National Foundation of Scientific Research. We thank Duilia De Mello for providing the HST images and Jean-François Le Borgne for help with IRAF. During the course of this work, we benefited from conversations with Jose Vílchez, Rosa González-Delgado, Enrique Pérez, Yurij Izotov and Trinh Xuan Thuan. Thanks are due to Valentina Luridiana, Crystal Martin and Claus Leitherer for reading the manuscript.
## 1 Introduction

Supersymmetric (SUSY) extensions of the Standard Model predict the existence of charginos and neutralinos . Charginos, $`\stackrel{~}{\chi }_i^\pm `$, are the mass eigenstates formed by the mixing of the fields of the fermionic partners of the charged gauge bosons (winos) and those of the charged Higgs bosons (charged higgsinos). Fermionic partners of the $`\gamma `$, the Z boson, and the neutral Higgs bosons mix to form the mass eigenstates called neutralinos, $`\stackrel{~}{\chi }_j^0`$ (in each case, the index $`i=1,2`$ or $`j=1`$–4 is ordered by increasing mass). If charginos exist and are sufficiently light, they are pair-produced at LEP through $`\gamma `$\- or Z-exchange in the $`s`$-channel. For the wino component, there is an additional production process through scalar electron-neutrino ($`\stackrel{~}{\nu }_\mathrm{e}`$) exchange in the $`t`$-channel. The production cross-section is large (several pb) unless the scalar neutrino (sneutrino) is light, in which case the cross-section is reduced by destructive interference between the $`s`$-channel and $`t`$-channel diagrams . In much of the parameter space, $`\stackrel{~}{\chi }_1^+`$ decays dominantly into $`\stackrel{~}{\chi }_1^0\mathrm{}^+\nu `$ or $`\stackrel{~}{\chi }_1^0\mathrm{q}\overline{\mathrm{q}}^{}`$ via a virtual W boson. For small scalar lepton masses, decays to leptons via a scalar lepton become important. R-parity conservation is assumed throughout this note. With this assumption, the $`\stackrel{~}{\chi }_1^0`$ is stable and invisible (the lightest supersymmetric particle, LSP, is either the $`\stackrel{~}{\chi }_1^0`$ or the scalar neutrino; we assume the $`\stackrel{~}{\chi }_1^0`$ is the LSP in the direct searches described in this paper. If the scalar neutrino is lighter than the chargino, $`\stackrel{~}{\chi }_1^+\stackrel{~}{\nu }\mathrm{}^+`$ becomes the dominant decay mode, and for this case we use the results of Ref. to calculate limits, as mentioned in Section 5.2), and the experimental signature for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ events is large missing momentum transverse to the beam axis. Neutralino pairs ($`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$) can be produced through a virtual Z or $`\gamma `$ ($`s`$-channel) or by scalar electron ($`t`$-channel) exchange . The $`\stackrel{~}{\chi }_2^0`$ will decay into $`\stackrel{~}{\chi }_1^0\nu \overline{\nu }`$, $`\stackrel{~}{\chi }_1^0\mathrm{}^+\mathrm{}^{}`$ or $`\stackrel{~}{\chi }_1^0\mathrm{q}\overline{\mathrm{q}}`$ through a virtual Z boson, sneutrino, slepton, squark or a neutral SUSY Higgs boson ($`\mathrm{h}^0`$ or $`\mathrm{A}^0`$). The decay via a virtual Z is the dominant mode in most of the parameter space. For small scalar lepton masses, decays to a lepton pair via a scalar lepton are important. The experimental signature of $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ events is an acoplanar pair of leptons or jets. If the mass difference between $`\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{\chi }_1^0`$ is small, the experimental signature becomes monojet-like. Motivated by Grand Unification, and to simplify the physics interpretation, the Constrained Minimal Supersymmetric Standard Model (Constrained MSSM) is used to guide the analysis, but the results are also applied to more general models.
At the grand unified (GUT) mass scale, in the Constrained MSSM all the gauginos are assumed to have a common mass, $`m_{1/2}`$, and all the sfermions have a common mass, $`m_0`$. Details are given in Section 5.2. Previous searches for charginos and neutralinos have been performed using data collected near the Z peak (LEP1) and at centre-of-mass energies ($`\sqrt{s}`$) of 130–136 GeV, 161 GeV, 170–172 GeV and 183 GeV, with luminosities of about 100, 5, 10, 10 and 60 pb<sup>-1</sup>, respectively. No evidence for a signal has been found. In 1998 the LEP $`\mathrm{e}^+\mathrm{e}^{}`$ collider at CERN was operated at $`\sqrt{s}`$= 188.6 GeV. This paper reports on direct searches for charginos and neutralinos performed using the data sample collected at this centre-of-mass energy. The total integrated luminosity collected with the OPAL detector at this energy is 182.1 pb<sup>-1</sup>. The selection criteria are similar to those used in , but have been modified to improve the sensitivity of the analysis at the current energy. The description of the OPAL detector and its performance can be found in Ref. and .

## 2 Event Simulation

Chargino and neutralino signal events are generated with the DFGT generator, which includes spin correlations and allows for a proper treatment of both the W boson and the Z boson width effects in the chargino and heavy neutralino decays. The generator includes initial-state radiation and uses the JETSET 7.4 package for the hadronisation of the quark-antiquark system in the hadronic decays of charginos and neutralinos. SUSYGEN is used to calculate the branching fractions for the Constrained MSSM interpretation of the analysis. The sources of background to the chargino and neutralino signals are two-photon, lepton pair, multihadronic and four-fermion processes. Two-photon processes are the most important background for the case of a small mass difference between the $`\stackrel{~}{\chi }_1^\pm `$ and the $`\stackrel{~}{\chi }_1^0`$, or between the $`\stackrel{~}{\chi }_2^0`$ and the $`\stackrel{~}{\chi }_1^0`$, since such events have small visible energy and small transverse momentum. Using the Monte Carlo generators PHOJET, PYTHIA and HERWIG, hadronic events from two-photon processes are simulated in which the invariant mass of the photon-photon system is larger than 5.0 GeV. Monte Carlo samples for four-lepton events ($`\mathrm{e}^+\mathrm{e}^{}\mathrm{e}^+\mathrm{e}^{}`$, $`\mathrm{e}^+\mathrm{e}^{}\mu ^+\mu ^{}`$ and $`\mathrm{e}^+\mathrm{e}^{}\tau ^+\tau ^{}`$) are generated with the Vermaseren program. All other four-fermion processes, except for regions of phase space covered by the two-photon simulations, are simulated using the grc4f generator, which takes into account all interferences. The dominant contributions come from $`\mathrm{W}^+\mathrm{W}^{}`$, $`\mathrm{We}\nu `$, $`\gamma ^{}\mathrm{Z}^{()}`$ and $`\mathrm{ZZ}^{()}`$ processes. Lepton pairs are generated using the KORALZ generator for $`\tau ^+\tau ^{}(\gamma )`$ and $`\mu ^+\mu ^{}(\gamma )`$ events, and the BHWIDE program for $`\mathrm{e}^+\mathrm{e}^{}\mathrm{e}^+\mathrm{e}^{}(\gamma )`$ events. Multihadronic ($`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$) events are simulated using PYTHIA. Generated signal and background events are processed through the full simulation of the OPAL detector, and the same event analysis chain is applied to the simulated events as to the data.

## 3 Analysis

Calculations of experimental variables are performed as in .
The following preselections are applied to reduce background due to two-photon events and interactions of beam particles with the beam pipe or residual gas: (1) the number of charged tracks is required to be at least two; (2) the observed transverse momentum of the whole event is required to be larger than 1.8 GeV; (3) the energy deposited in each silicon-tungsten forward calorimeter and in each forward detector has to be less than 2 GeV (these detectors are located in the forward region, $`|\mathrm{cos}\theta |>0.99`$, surrounding the beam pipe; a right-handed coordinate system is adopted, where the $`x`$-axis points to the centre of the LEP ring and positive $`z`$ is along the electron beam direction, with $`\theta `$ and $`\varphi `$ the polar and azimuthal angles, respectively); (4) the visible invariant mass of the event has to exceed 2 GeV; (5) there should be no signal in the MIP plug scintillators (an array of thin scintillating tiles with embedded wavelength-shifting fibre readout, installed to improve the hermeticity of the detector, covering the polar angular range between 43 and 200 mrad). A sketch of this preselection, written as a single filter, is given below.

### 3.1 Detection of charginos

The event sample is divided into three mutually exclusive categories, motivated by the topologies expected for chargino events. Separate analyses are applied to the preselected events in each category:
* (A) $`N_{\mathrm{ch}}>4`$ and no isolated leptons, where $`N_{\mathrm{ch}}`$ is the number of charged tracks. The isolated lepton selection criteria are the same as those described in Ref. . When both $`\stackrel{~}{\chi }_1^+`$ and $`\stackrel{~}{\chi }_1^{}`$ decay hadronically, signal events tend to fall into this category for all but the smallest values of $`\mathrm{\Delta }M_+`$ ≡ $`m_{\stackrel{~}{\chi }_1^+}`$ − $`m_{\stackrel{~}{\chi }_1^0}`$.
* (B) $`N_{\mathrm{ch}}>4`$ and at least one isolated lepton. If just one of the $`\stackrel{~}{\chi }_1^\pm `$ decays leptonically, signal events tend to fall into this category.
* (C) $`N_{\mathrm{ch}}`$ ≤ 4. Events tend to fall into this category if $`\mathrm{\Delta }M_+`$ is small or if both charginos decay leptonically.
The fraction of $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ events falling into category (A) is about 35–50% for most of the $`\mathrm{\Delta }M_+`$ range. This fraction drops to less than 15% if $`\mathrm{\Delta }M_+`$ is smaller than 5 GeV, since the average charged track multiplicity of the events becomes small. Similarly, the fraction of events falling into category (B) is also about 35–50% for most of the $`\mathrm{\Delta }M_+`$ range and is less than 10% if $`\mathrm{\Delta }M_+`$ is smaller than 5 GeV. In contrast, when $`\mathrm{\Delta }M_+`$ is smaller than 10 GeV, the fraction of events falling into category (C) is greater than about 50%, while if $`\mathrm{\Delta }M_+`$ is larger than 20 GeV, this fraction is about 10%. Since the chargino event topology mainly depends on the difference between the chargino mass and the lightest neutralino mass, different selection criteria are applied in four $`\mathrm{\Delta }M_+`$ regions: (I) $`\mathrm{\Delta }M_+`$ ≤ 10 GeV, (II) 10 GeV $`<`$ $`\mathrm{\Delta }M_+`$ ≤ $`m_{\stackrel{~}{\chi }_1^+}`$/2, (III) $`m_{\stackrel{~}{\chi }_1^+}`$/2 $`<`$ $`\mathrm{\Delta }M_+`$ ≤ $`m_{\stackrel{~}{\chi }_1^+}`$ − 20 GeV, and (IV) $`m_{\stackrel{~}{\chi }_1^+}`$ − 20 GeV $`<`$ $`\mathrm{\Delta }M_+`$ ≤ $`m_{\stackrel{~}{\chi }_1^+}`$.
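The preselection sketch reads as follows. This is schematic only: the event is assumed to be a dictionary of precomputed quantities, and the field names are illustrative rather than OPAL software identifiers.

```python
# Schematic version of preselections (1)-(5) above as a single filter; field
# names are illustrative, not OPAL software identifiers.

def passes_preselection(ev):
    return (ev["n_charged_tracks"] >= 2          # (1) at least two tracks
            and ev["pt_event_gev"] > 1.8         # (2) event transverse momentum
            and max(ev["e_sw_calo_gev"]) < 2.0   # (3) each SW calorimeter side
            and max(ev["e_fwd_det_gev"]) < 2.0   # (3) each forward detector side
            and ev["m_visible_gev"] > 2.0        # (4) visible invariant mass
            and not ev["mip_plug_signal"])       # (5) no MIP plug signal
```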
In region (I), background events come mainly from the two-photon processes. In regions (II) and (III), the main background processes are the four-fermion processes ($`\mathrm{W}^+\mathrm{W}^{}`$, single W and $`\gamma ^{}\mathrm{Z}^{()}`$). In these two regions the background level is modest. In region (IV) the $`\mathrm{W}^+\mathrm{W}^{}`$ background becomes large and dominant. Since the $`\mathrm{W}^+\mathrm{W}^{}`$ background is significant in the region of $`\mathrm{\Delta }M_+>85`$ GeV, where the chargino decays via an on-mass-shell W boson, a special analysis is applied to improve the sensitivity to the chargino signal for $`m_{\stackrel{~}{\chi }_1^+}>85`$ GeV and $`\mathrm{\Delta }M_+\stackrel{>}{}\mathrm{\hspace{0.33em}85}`$ GeV. Overlap between this analysis and the region (IV) standard analysis is avoided by selecting the analysis which minimises the expected cross-section limit calculated with only the expected number of background events. For each region a single set of cut values is determined which minimises the expected limit on the signal cross-section at 95% confidence level (C.L.) using the method of Ref. . In this procedure, only the expected number of background events is taken into account, and therefore the choice of cuts is independent of the number of candidates actually observed. The variables used in the selection criteria and their cut values are optimised in each $`\mathrm{\Delta }M_+`$ region. They are identical to the ones of Ref. unless otherwise stated, and are therefore only briefly described in the following sections.

#### 3.1.1 Analysis (A) ($`N_{\mathrm{ch}}>4`$ without isolated leptons)

After the preselection, cuts on $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$, $`|\mathrm{cos}\theta _{\mathrm{miss}}|`$, $`|P_\mathrm{z}|`$ and $`|P_\mathrm{z}|/E_{\mathrm{vis}}`$ are applied to further reduce background events from the two-photon and multihadronic processes. $`E_{\mathrm{vis}}`$ is the total visible energy of the event, $`E_{\mathrm{fwd}}`$ is the visible energy in the region of $`|\mathrm{cos}\theta |>0.9`$, $`\theta _{\mathrm{miss}}`$ is the polar angle of the missing momentum and $`P_\mathrm{z}`$ is the visible momentum along the beam axis. In regions (III) and (IV), the cut values for $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$ and $`|\mathrm{cos}\theta _{\mathrm{miss}}|`$ have been changed with respect to Ref. , requiring them to be smaller than 0.15 and 0.90, respectively. Most of the two-photon background events are rejected by cuts against small $`P_\mathrm{t}^{\mathrm{HCAL}}`$ and $`P_\mathrm{t}`$, the transverse momentum of the event measured with and without using the hadron calorimeter, respectively. In region (I), a cut is also applied against large $`P_\mathrm{t}^{\mathrm{HCAL}}`$ ($`P_\mathrm{t}^{\mathrm{HCAL}}`$ ≤ 30 GeV) to reduce the $`\mathrm{W}^+\mathrm{W}^{}`$ background. Jets are reconstructed using the Durham algorithm with jet resolution parameter $`y_{\mathrm{cut}}=0.005`$ (a sketch of the Durham resolution variable is given below). By requiring the number of jets ($`N_{\mathrm{jet}}`$) to be between 3 and 5 inclusive, monojet events from the process $`\gamma ^{}\mathrm{Z}^{()}\mathrm{q}\overline{\mathrm{q}}\nu \overline{\nu }`$ are rejected for regions (I) and (II), and background events from $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ and $`\mathrm{We}\nu `$ processes are reduced in regions (III) and (IV).
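For reference, the Durham resolution variable underlying both the jet counting and the $`y_{n(n+1)}`$ quantities used later is sketched below; the four-momentum convention is illustrative.

```python
import math

# Sketch of the Durham (kt) resolution variable:
# y_ij = 2 min(E_i, E_j)^2 (1 - cos theta_ij) / E_vis^2.
# Clustering repeatedly merges the pair with the smallest y_ij until all
# remaining pairs satisfy y_ij > y_cut (0.005 in this analysis).

def durham_y(p1, p2, e_vis):
    """p = (E, px, py, pz); returns the Durham resolution variable y_ij."""
    dot = sum(a * b for a, b in zip(p1[1:], p2[1:]))
    mag = (math.sqrt(sum(a * a for a in p1[1:]))
           * math.sqrt(sum(a * a for a in p2[1:])))
    cos_theta = dot / mag
    return 2.0 * min(p1[0], p2[0]) ** 2 * (1.0 - cos_theta) / e_vis ** 2
```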
In order to determine the acoplanarity angle $`\varphi _{\mathrm{acop}}`$, defined as 180° minus the opening angle in the ($`r`$,$`\varphi `$) plane between the two jets (a sketch of this computation is given at the end of this subsection), events passing the selection described above are forced into two jets, again using the Durham algorithm. Each jet is required to be far from the beam axis by cutting on its polar angle $`\theta _\mathrm{j}`$ (j=1,2). This cut ensures a good measurement of $`\varphi _{\mathrm{acop}}`$ and further reduces the background from $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ and two-photon processes. The acoplanarity angle is required to be larger than 15° to reduce the $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ background. The acoplanarity angle distribution for region (III) is shown in Figure 1(a) before this cut. The cut on the visible mass, $`M_{\mathrm{vis}}`$, is optimised for each $`\mathrm{\Delta }M_+`$ region. If a lepton candidate ($`\mathrm{}^{}`$) is found with an algorithm based on the “looser” isolation condition described in Ref. , the energy of this lepton, $`E_{\mathrm{}^{}}`$, and the invariant mass calculated without this lepton, $`M_{\mathrm{had}^{}}`$, must be different from the values expected for $`\mathrm{W}^+\mathrm{W}^{}\mathrm{}\nu \mathrm{q}\overline{\mathrm{q}}^{}`$ events. The cuts on $`E_{\mathrm{}^{}}`$ and $`M_{\mathrm{had}^{}}`$ have been optimised with respect to Ref. : for region (I) no cut is applied; for region (III) $`M_{\mathrm{had}^{}}`$ is required to be smaller than 65 GeV and $`E_{\mathrm{}^{}}`$ smaller than 25 GeV. The background from $`\mathrm{W}^+\mathrm{W}^{}`$ events and single-W events is efficiently suppressed by requiring that the highest and second highest jet energies, $`E_1`$ and $`E_2`$, be smaller than the typical jet energy expected for the $`\mathrm{W}^+\mathrm{W}^{}`$ 4-jet process. The cuts on $`E_1`$ and $`E_2`$ are identical to the ones in Ref. apart from region (II), where $`E_1`$ is required to be between 2 and 30 GeV. In addition, in regions (III) and (IV), if $`E_1`$ is larger than 40 GeV, $`M_{\mathrm{vis}}`$ is required to be either smaller than 70 GeV or larger than 95 GeV. In regions (III) and (IV) three-jet events with $`|P_\mathrm{z}|<10`$ GeV are also rejected if $`M_{\mathrm{vis}}`$ is close to the W mass. These cuts reduce the $`\mathrm{W}^+\mathrm{W}^{}\tau \nu \mathrm{q}\overline{\mathrm{q}}^{}`$ background with low-energy decay products of the $`\tau `$. A special analysis is applied in the region of $`\mathrm{\Delta }M_+\stackrel{>}{}\mathrm{\hspace{0.33em}85}`$ GeV, since the event topology of the signal is very similar to that of $`\mathrm{W}^+\mathrm{W}^{}`$ → 4 jets. After selecting well contained events with the cuts $`|\mathrm{cos}\theta _{\mathrm{miss}}|<0.95`$, $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}<0.15`$ and $`|P_\mathrm{z}|<30`$ GeV, multi-jet events with large visible energy are selected with $`N_{\mathrm{jet}}`$ ≥ 4 and $`110<E_{\mathrm{vis}}<170`$ GeV. To select a clear 4-jet topology, $`y_{34}`$ ≥ 0.0075, $`y_{23}`$ ≥ 0.04 and $`y_{45}`$ ≤ 0.0015 are also required, where $`y_{n(n+1)}`$ is defined as the minimum $`y_{\mathrm{cut}}`$ value at which the reconstruction of the event switches from $`n+1`$ to $`n`$ jets. Events having a “jet” consisting of a single $`\gamma `$ with energy greater than 20 GeV are considered to be $`\gamma \mathrm{q}\overline{\mathrm{q}}g`$ and are rejected.
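The acoplanarity angle used throughout these selections can be computed from the two jet momenta alone, as in the following sketch; the momenta are illustrative.

```python
import math

# Sketch of the acoplanarity angle defined above: 180 degrees minus the
# opening angle of the two jet axes projected onto the (r, phi) plane,
# i.e. using only the x and y momentum components.

def acoplanarity_deg(jet1, jet2):
    """jet = (px, py, pz); returns phi_acop in degrees."""
    dot = jet1[0] * jet2[0] + jet1[1] * jet2[1]
    norm = math.hypot(jet1[0], jet1[1]) * math.hypot(jet2[0], jet2[1])
    opening = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return 180.0 - opening

# Jets exactly back-to-back in the transverse plane give phi_acop = 0:
print(acoplanarity_deg((10.0, 0.0, 5.0), (-8.0, 0.0, -3.0)))  # -> 0.0
```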
The numbers of observed events and background events expected from the four different sources, for each $`\mathrm{\Delta }M_+`$ region and for the special analysis, are given in Table 1. Typical detection efficiencies for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ events are 20–65% for $`\mathrm{\Delta }M_+`$=10–80 GeV in the standard analysis, and 20–30% for $`m_{\stackrel{~}{\chi }_1^+}`$ ≳ 85 GeV and $`\mathrm{\Delta }M_+`$ ≳ 85 GeV.

#### 3.1.2 Analysis (B) ($`N_{\mathrm{ch}}>4`$ with isolated leptons)

After the preselection, cuts on $`|\mathrm{cos}\theta _{\mathrm{miss}}|`$ and $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$ are applied to reject two-photon background events. The cut on $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$ has been tightened with respect to Ref. : it is required to be smaller than 0.15 in regions (I) and (II) and smaller than 0.2 in regions (III) and (IV). In order to reject the $`\mathrm{W}^+\mathrm{W}^{}\mathrm{}\nu \mathrm{q}\overline{\mathrm{q}}^{}`$ background, the following cuts are applied: the momentum of isolated lepton candidates should be smaller than that expected from decays of the W (smaller than 15 GeV, 30 GeV, 40 GeV and between 5 and 40 GeV for regions (I) to (IV), respectively), and the invariant mass ($`M_{\mathrm{had}}`$) of the event calculated excluding the highest momentum isolated lepton is required to be smaller than the W mass (smaller than 20 GeV and 40 GeV for regions (I) and (II), between 10 and 60 GeV for region (III), and between 15 and 70 GeV for region (IV)). The distribution of $`M_{\mathrm{had}}`$ after the $`\varphi _{\mathrm{acop}}`$ cut is shown in Figure 1(b) for region (III). As is evident in this figure, most of the $`\mathrm{W}^+\mathrm{W}^{}`$ background events are rejected by this cut. The invariant mass of the system formed by the missing momentum and the most energetic isolated lepton, $`M_{\mathrm{}\mathrm{miss}}`$, is required to be larger than 110 GeV for region (IV). Finally, an $`M_{\mathrm{vis}}`$ cut is applied to reject $`\mathrm{We}\nu `$ events in which a fake lepton candidate is found in the W$`\mathrm{q}\overline{\mathrm{q}}^{}(g)`$ decay: it is required to be smaller than 30 GeV for region (I) and between 25 and 85 GeV for region (III), while no requirement on $`M_{\mathrm{vis}}`$ is applied for region (IV). A special analysis is applied in the region of $`\mathrm{\Delta }M_+\stackrel{>}{}\mathrm{\hspace{0.33em}85}`$ GeV, where there is a large $`\mathrm{W}^+\mathrm{W}^{}`$ background. The selection criteria are identical to those in region (IV) up to the $`\varphi _{\mathrm{acop}}`$ cut. To further reject $`\mathrm{W}^+\mathrm{W}^{}\mathrm{}\nu \mathrm{q}\overline{\mathrm{q}}^{}`$ events while keeping a good signal efficiency, $`M_{\mathrm{had}}`$ is required to be between 70 and 95 GeV, $`M_{\mathrm{vis}}`$ between 90 and 125 GeV, and $`M_{\mathrm{}\mathrm{miss}}`$ between 90 and 130 GeV. The numbers of observed events and background events expected from the four different sources, for each $`\mathrm{\Delta }M_+`$ region and for the special analysis, are given in Table 2. Typical detection efficiencies for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ events are 40–65% for $`\mathrm{\Delta }M_+`$=10–80 GeV in the standard analysis, and 20–30% for $`m_{\stackrel{~}{\chi }_1^+}`$ ≳ 85 GeV and $`\mathrm{\Delta }M_+`$ ≳ 85 GeV.

#### 3.1.3 Analysis (C) ($`N_{\mathrm{ch}}`$ ≤ 4)

This analysis is especially important for the region of $`\mathrm{\Delta }M_+`$ ≲ 5 GeV.
Because the background varies significantly with $`\mathrm{\Delta }M_+`$ in region (I), this region is split into two sub-regions (a,b). In order to reject events with charged particles which escape detection in the main detector, the net charge of the event is required to be zero. Since the signal is expected to have a two-lepton or two-jet topology, events are forced into two jets using the Durham jet algorithm. To improve the jet assignment, each jet must contain at least one charged track ($`N_{\mathrm{ch},\mathrm{j}}`$ ≥ 1), must have significant energy ($`E_\mathrm{j}>`$1.5 GeV), and the magnitude of the sum of the track charges ($`|Q_\mathrm{j}|`$) must not exceed 1. The $`P_\mathrm{t}/E_\mathrm{b}`$ distributions for region (III) are shown in Figure 1(c) after these cuts. In region (I), if the acoplanarity angle is smaller than 70°, cuts are applied on $`P_\mathrm{t}`$, $`a_\mathrm{t}`$ (the transverse momentum perpendicular to the event thrust axis) and $`|\mathrm{cos}\theta _a|`$, where $`\theta _a`$ ≡ tan<sup>-1</sup>($`a_\mathrm{t}/P_\mathrm{z}`$) (a sketch of these variables is given below). These cuts reduce the background contamination from two-photon and $`\tau ^+\tau ^{}`$ processes. They are identical to those in Ref. for region (Ib) but have been further optimised for region (Ia), where $`a_\mathrm{t}/E_{\mathrm{beam}}`$ is required to lie between 0.025 and 0.1 and $`|\mathrm{cos}\theta _a|`$ to be smaller than 0.9. To further reduce the two-photon background, cuts on $`P_\mathrm{t}`$ and $`|\mathrm{cos}\theta _{\mathrm{miss}}|`$ are applied in region (I) if the acoplanarity angle is larger than 70°, and in all other regions for any value of the acoplanarity angle. The cuts on $`|\mathrm{cos}\theta _{\mathrm{miss}}|`$ are identical to those in Ref. apart from region (Ia), where it is required to be smaller than 0.75. $`P_\mathrm{t}/E_{\mathrm{beam}}`$ should be between 0.02 and 0.04, between 0.03 and 0.05, and between 0.035 and 0.075 for regions (Ia), (Ib) and (II), and should be larger than 0.095 and 0.1 for regions (III) and (IV), respectively. To reduce the background from $`\mathrm{e}^+\mathrm{e}^{}\mu ^+\mu ^{}`$ events in which one of the muons is emitted at a small polar angle and is not reconstructed as a good track, events are rejected if there is a track segment in the muon chamber or a hadron calorimeter cluster at a small polar angle and within 1 radian in ($`r`$,$`\varphi `$) of the missing momentum direction ($`\stackrel{}{P}_{\mathrm{miss}}`$). Cuts on $`|\mathrm{cos}\theta _\mathrm{j}|`$ and $`\varphi _{\mathrm{acop}}`$ are applied to reject two-photon, lepton-pair and $`\gamma ^{}\mathrm{Z}^{()}\mathrm{}^+\mathrm{}^{}\nu \overline{\nu }`$ events. The values of these cuts have been slightly modified with respect to Ref. in that $`|\mathrm{cos}\theta _\mathrm{j}|`$ is now required to be smaller than 0.75 in region (Ia), and $`\varphi _{\mathrm{acop}}`$ should lie between 50° and 160° for region (Ia) and between 20° and 160° for region (Ib). $`\mathrm{W}^+\mathrm{W}^{}\mathrm{}^+\nu \mathrm{}^{}\overline{\nu }`$ events are rejected by upper cuts on $`M_{\mathrm{vis}}`$ (20 GeV, 25 GeV, 30 GeV, 50 GeV and 75 GeV for regions (Ia) to (IV)) and on the higher energy of the two jets, $`E_1/E_{\mathrm{beam}}`$ (identical to Ref. apart from region (Ia), where it is required to be smaller than 0.2). The numbers of background events expected from the four different sources, for each $`\mathrm{\Delta }M_+`$ region, are given in Table 3.
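A sketch of how the $`a_\mathrm{t}`$ and $`\theta _a`$ variables can be formed from the visible momentum and the thrust axis is given below; the input quantities are illustrative momentum sums.

```python
import math

# Sketch of the a_t and theta_a variables defined above: a_t is the component
# of the visible transverse momentum perpendicular to the thrust axis
# (projected onto the transverse plane), and theta_a = atan(a_t / |P_z|), so
# that a cut can be placed on |cos theta_a|.

def at_theta_a(pt_vec, p_z, thrust_axis):
    """pt_vec = (px, py); thrust_axis = (tx, ty, tz). Returns (a_t, theta_a)."""
    ux, uy = thrust_axis[0], thrust_axis[1]
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm               # transverse thrust direction
    a_t = abs(pt_vec[0] * uy - pt_vec[1] * ux)  # perpendicular component
    theta_a = math.atan2(a_t, abs(p_z))         # in radians
    return a_t, theta_a
```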
The typical detection efficiencies for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ events are 30–65% for $`\mathrm{\Delta }M_+\stackrel{>}{}\mathrm{\hspace{0.33em}10}`$ GeV, and a modest efficiency of 20% is obtained for $`\mathrm{\Delta }M_+=5`$ GeV.

### 3.2 Detection of neutralinos

To obtain optimal performance, the event sample is divided into two mutually exclusive categories, motivated by the topologies expected for neutralino events.
* (C) $`N_{\mathrm{ch}}`$ ≤ 4. Signal events in which $`\stackrel{~}{\chi }_2^0`$ decays into $`\stackrel{~}{\chi }_1^0\mathrm{}^+\mathrm{}^{}`$ tend to fall into this category. Also, when the mass difference between $`\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{\chi }_1^0`$ ($`\mathrm{\Delta }M_0`$ ≡ $`m_{\stackrel{~}{\chi }_2^0}`$ − $`m_{\stackrel{~}{\chi }_1^0}`$) is small, signal events tend to fall into this category independently of the $`\stackrel{~}{\chi }_2^0`$ decay channel.
* (D) $`N_{\mathrm{ch}}>4`$. Signal events in which $`\stackrel{~}{\chi }_2^0`$ decays into $`\stackrel{~}{\chi }_1^0\mathrm{q}\overline{\mathrm{q}}`$ tend to fall into this category for modest and large values of $`\mathrm{\Delta }M_0`$.
The event topology of $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ events depends mainly on the difference between the $`\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{\chi }_1^0`$ masses. Separate selection criteria are therefore used in four $`\mathrm{\Delta }M_0`$ regions: (i) $`\mathrm{\Delta }M_0`$ ≤ 10 GeV, (ii) 10 GeV $`<`$ $`\mathrm{\Delta }M_0`$ ≤ 30 GeV, (iii) 30 GeV $`<`$ $`\mathrm{\Delta }M_0`$ ≤ 80 GeV, (iv) $`\mathrm{\Delta }M_0>80`$ GeV. In regions (i) and (ii), the main sources of background are two-photon and $`\gamma ^{}\mathrm{Z}^{()}\mathrm{q}\overline{\mathrm{q}}\nu \overline{\nu }`$ processes. In regions (iii) and (iv), the main sources of background are four-fermion processes ($`\mathrm{W}^+\mathrm{W}^{}`$, $`\mathrm{We}\nu `$ and $`\gamma ^{}\mathrm{Z}^{()}`$). The fraction of events falling into category (C) is 10–20% for $`\mathrm{\Delta }M_0`$ ≳ 20 GeV but increases to about 70% when $`\mathrm{\Delta }M_0`$ ≲ 5 GeV. The fraction of invisible events due to $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0\mathrm{Z}^{()}\stackrel{~}{\chi }_1^0\nu \overline{\nu }`$ decays is 20–30%, depending on $`\mathrm{\Delta }M_0`$. The selection criteria applied for the low-multiplicity events (category (C)) in regions (i), (ii), (iii) and (iv) are identical to those used in analysis (C) of the chargino search in regions (Ia), (II), (III) and (IV), respectively (see Table 3). Events falling into category (D) typically have a monojet or a di-jet topology with large missing transverse momentum. The cuts described below are applied for these topologies. They are identical to the ones of Ref. unless otherwise stated.

#### 3.2.1 Analysis (D) ($`N_{\mathrm{ch}}>4`$)

To reduce the background from two-photon and $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ processes, cuts on $`|\mathrm{cos}(\theta _{\mathrm{miss}})|`$, $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$ and the missing transverse momentum are applied. The acoplanarity angle should be large to remove $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ background events. To ensure the reliability of the measurement of $`\varphi _{\mathrm{acop}}`$, both jets should have a polar angle $`\theta _\mathrm{j}`$ satisfying $`|\mathrm{cos}\theta _\mathrm{j}|<0.95`$. In region (iv), the $`\varphi _{\mathrm{acop}}`$ cut is loosened with respect to regions (i)–(iii), since the acoplanarity angle of signal events becomes smaller.
To compensate for the resulting higher $`\mathrm{q}\overline{\mathrm{q}}(\gamma )`$ background, the $`E_{\mathrm{fwd}}/E_{\mathrm{vis}}`$ cut is tightened. After these cuts, the remaining background events come predominantly from $`\gamma ^{*}\mathrm{Z}^{(*)}\to \mathrm{q}\overline{\mathrm{q}}\nu \overline{\nu }`$, $`\mathrm{W}^+\mathrm{W}^{-}\to \ell \nu \mathrm{q}\overline{\mathrm{q}}^{\prime }`$ and $`\mathrm{We}\nu \to \mathrm{q}\overline{\mathrm{q}}^{\prime }\mathrm{e}\nu `$. Cuts on the visible mass are applied to reduce the $`\mathrm{W}^+\mathrm{W}^{-}\to \ell \nu \mathrm{q}\overline{\mathrm{q}}^{\prime }`$ and $`\mathrm{We}\nu \to \mathrm{q}\overline{\mathrm{q}}^{\prime }\mathrm{e}\nu `$ backgrounds. $`\gamma ^{*}\mathrm{Z}^{(*)}\to \mathrm{q}\overline{\mathrm{q}}\nu \overline{\nu }`$ background events are removed by a cut on the ratio of the visible mass to the visible energy, which is required to be larger than 0.3 for region (i) and 0.25 for regions (ii) and (iii). In regions (iii) and (iv), $`d_{23}^2\equiv y_{23}E_{\mathrm{vis}}^2<30`$ GeV<sup>2</sup> is required to select a clear two-jet topology and to reject $`\mathrm{W}^+\mathrm{W}^{-}\to \tau \nu \mathrm{q}\overline{\mathrm{q}}^{\prime }`$ events. In Figure 1(d) the $`d_{23}^2`$ distribution is shown for region (iv) after all the other cuts. The numbers of background events expected from the four different sources, for each $`\mathrm{\Delta }M_0`$ region, are given in Table 4. Typical detection efficiencies for $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ events are 50–65% for $`\mathrm{\Delta }M_0>10`$ GeV.

## 4 Systematic uncertainties

Systematic uncertainties on the number of expected signal and background events are estimated in the same manner as in the previous papers and , and are only briefly described here. For the number of expected signal events, the uncertainties arise from the measurement of the integrated luminosity (0.5%), Monte Carlo statistics and the interpolation of the efficiencies to arbitrary values of $`m_{\stackrel{~}{\chi }_1^\pm }`$ and $`m_{\stackrel{~}{\chi }_1^0}`$ (2–10%), modelling of the cut variables in the Monte Carlo simulations (4–10%), fragmentation uncertainties in hadronic decays ($`<2`$%) and detector calibration effects ($`<1`$%). The angular distributions of the chargino and neutralino final-state decay products, and their effect on the resulting signal detection efficiencies, depend on the details of the parameters of the Constrained MSSM . However, the corresponding variation of the efficiencies is determined to be less than 5% (relative), and this is taken into account in the systematic errors when obtaining the limits. Consequently, the limits are independent of the details of the Constrained MSSM. For the expected number of background events, the uncertainties are due to Monte Carlo statistics (see Tables 1 to 4), uncertainties in the amount of two-photon background (30%), uncertainties in the simulation of the four-fermion processes (17%), and modelling of the cut variables ($`<7`$%), as determined in Ref. . In addition to effects included in the detector simulation, an efficiency loss of 2.9% (relative) arises from beam-related background in the silicon-tungsten forward calorimeter and in the forward detector, which is estimated using random beam crossing events.

## 5 Results

No evidence for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ and $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ production is observed.
Exclusion regions and limits are determined by using the likelihood ratio method , which assigns greater weight to the analysis which has the largest sensitivity. Systematic uncertainties on the efficiencies and on the number of expected background events are taken into account in the cross-section limit calculations according to Ref. .

### 5.1 Limits on the $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ and $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ production cross-sections

Figures 2(a) and (b) show model-independent upper limits (95% C.L.) on the production cross-sections of $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ and $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$, respectively. These are obtained assuming the specific decay mode $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0\mathrm{W}^{(*)\pm }`$ for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$, and $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0\mathrm{Z}^{(*)}`$ for $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ production. The results from the $`\sqrt{s}=183`$ GeV analysis are also included in the limit calculation<sup>6</sup><sup>6</sup>6 In calculating limits, cross-sections at different $`\sqrt{s}`$ were estimated by weighting by $`\overline{\beta }/s`$, where $`\overline{\beta }`$ is $`p_{\stackrel{~}{\chi }_1^\pm }/E_{\mathrm{beam}}`$ for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ production or $`p_{\stackrel{~}{\chi }_2^0}/E_{\mathrm{beam}}=p_{\stackrel{~}{\chi }_1^0}/E_{\mathrm{beam}}`$ for $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ production.. If the cross-section for $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ production is larger than 0.75 pb and $`\mathrm{\Delta }M_+`$ is between 5 GeV and about 80 GeV, $`m_{\stackrel{~}{\chi }_1^+}`$ is excluded at the 95% C.L. up to the kinematic limit. If the cross-section for $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ production is larger than 0.95 pb and $`\mathrm{\Delta }M_0`$ is greater than 7 GeV, $`m_{\stackrel{~}{\chi }_2^0}`$ is excluded up to the kinematic limit at the 95% C.L.

### 5.2 Limits in the MSSM parameter space

The phenomenology of the gaugino-higgsino sector of the MSSM is mostly determined by the following parameters: the SU(2) gaugino mass parameter at the weak scale ($`M_2`$), the mixing parameter of the two Higgs doublet fields ($`\mu `$) and the ratio of the vacuum expectation values of the two Higgs doublets ($`\mathrm{tan}\beta `$). Assuming the sfermions and SUSY Higgs bosons are sufficiently heavy not to intervene in the decay channels, these three parameters are sufficient to describe the chargino and neutralino sectors completely. Within the Constrained MSSM , a large value of the common scalar mass, $`m_0`$ (e.g., $`m_0=500`$ GeV), leads to heavy sfermions and therefore to a negligible suppression of the cross-section due to interference with $`t`$-channel sneutrino exchange. Chargino decays would then proceed predominantly via a virtual or real W. On the other hand, a light $`m_0`$ results in a low value of the sneutrino mass, enhancing the contribution of the $`t`$-channel exchange diagrams that interfere destructively with the $`s`$-channel diagrams, thus reducing the cross-section for chargino pair production. Small values of $`m_0`$ also enhance the leptonic branching ratio of charginos.
From the input parameters $`M_2`$, $`\mu `$, $`\mathrm{tan}\beta `$, $`m_0`$ and $`A`$ (the trilinear Higgs coupling), the masses, production cross-sections and branching fractions are calculated according to the Constrained MSSM . For each set of input parameters, the total numbers of $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$, $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$, $`\stackrel{~}{\chi }_3^0\stackrel{~}{\chi }_1^0`$ and $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_2^0`$ events expected to be observed are calculated using the integrated luminosity, the cross-sections, the branching fractions, and the detection efficiencies (which depend upon the masses of the chargino, the lightest neutralino and the next-to-lightest neutralino). Contributions from channels such as $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_4^0`$, $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_4^0`$, etc., are not included. The $`\stackrel{~}{\chi }_3^0\stackrel{~}{\chi }_1^0`$ channel is similar to the $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ channel, and cascade decays through $`\stackrel{~}{\chi }_2^0`$ are taken into account. The relative importance of each of the analyses (A)–(D) changes with the leptonic or hadronic branching ratios, and the likelihood ratio method is used to optimally weight each analysis depending on these branching ratios.

Results are presented for two cases: (i) $`m_0=500`$ GeV (i.e., heavy sfermions), and (ii) the value of $`m_0`$ that gives the smallest total number of expected chargino and neutralino events, taking into account cross-sections, branching ratios, and detection efficiencies for each set of values of $`M_2`$, $`\mu `$ and $`\mathrm{tan}\beta `$. This latter value of $`m_0`$ leads to the most conservative limit at that point, so that the resulting limits are valid for all $`m_0`$. In searching for this value of $`m_0`$, only those values are considered that are compatible with the current limits on the sneutrino mass ($`m_{\stackrel{~}{\nu }_L}>43`$ GeV ) and with the upper limits on the cross-section for slepton pair production, particularly right-handed smuon and selectron pair production . Particular attention is paid to the region of values of $`m_0`$ leading to the mass condition $`m_{\stackrel{~}{\nu }}\approx m_{\stackrel{~}{\chi }_1^\pm }`$ by taking finer steps in the value of $`m_0`$. Note that we assume the stau mixing angle to be zero; the added complication of possibly enhanced decays to third-generation particles at large values of $`\mathrm{tan}\beta `$ due to stau mixing is ignored. When $`m_{\stackrel{~}{\nu }}<m_{\stackrel{~}{\chi }_1^\pm }`$, resulting in a topology of acoplanar leptons and missing momentum, the upper limits on the cross-section for the two-body chargino decay from Ref. are used, while for $`m_{\stackrel{~}{\nu }}>m_{\stackrel{~}{\chi }_1^\pm }`$ the three-body decays are dominant. Single-photon topologies from $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$ production, and acoplanar-photon plus missing-energy topologies from $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_2^0`$ production with the photonic decay $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0\gamma `$, are taken into account using the 95% C.L. cross-section upper limits on these topologies from OPAL results . In both of these cases, if the relevant product of cross-section and branching ratio for a particular set of MSSM parameters is greater than the measured 95% C.L. upper limit presented in that paper, then that set of parameters is considered to be excluded.
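Schematically, the per-point exclusion test described above amounts to comparing the summed expected signal with an event-count limit. A minimal sketch, with placeholder channel inputs standing in for the Constrained-MSSM calculation and the limit machinery:

```python
# Schematic exclusion test for one (M2, mu, tan beta, m0) point.
# The channel inputs (sigma, BR, efficiency) and n95 are placeholders.

LUMI_PB = 182.1   # integrated luminosity used in this analysis, pb^-1

def expected_events(channels, lumi_pb=LUMI_PB):
    """channels: iterable of (sigma_pb, branching_ratio, efficiency) tuples."""
    return sum(lumi_pb * sigma * br * eff for sigma, br, eff in channels)

def point_excluded(channels, n95):
    """True if this parameter point predicts more signal than the 95% C.L. bound."""
    return expected_events(channels) > n95

# Illustrative numbers only: chargino pairs plus one neutralino channel.
channels = [(0.50, 0.70, 0.45), (0.30, 0.60, 0.55)]
print(point_excluded(channels, n95=8.0))
```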
The following regions of the Constrained MSSM parameters are scanned: $`0\le M_2\le 2000`$ GeV, $`|\mu |\le 500`$ GeV, and $`A=\pm M_2,\pm m_0`$ and 0. The typical scan step is 0.2 GeV. Extensions beyond the scanned range have negligible effect on the quoted limits. The choice of $`A`$ values is related to various scenarios of stop mixing, influencing the Higgs sector but having essentially no effect on the gaugino sector. No significant dependence on $`A`$ is observed. Figure 3 shows the resulting exclusion regions in the ($`M_2`$,$`\mu `$) plane for $`\mathrm{tan}\beta =1.5`$ and 35, with $`m_0=500`$ GeV and for all $`m_0`$.

The restrictions on the Constrained MSSM parameter space presented here can be transformed into exclusion regions in the ($`m_{\stackrel{~}{\chi }_1^\pm }`$,$`m_{\stackrel{~}{\chi }_1^0}`$) or ($`m_{\stackrel{~}{\chi }_2^0}`$,$`m_{\stackrel{~}{\chi }_1^0}`$) plane. A given mass pair is excluded only if all considered Constrained MSSM parameters in the scan which lead to that same mass pair are excluded at the 95% C.L. The $`\stackrel{~}{\chi }_1^\pm `$ mass limits are summarised in Table 5. In the ($`m_{\stackrel{~}{\chi }_1^\pm }`$,$`m_{\stackrel{~}{\chi }_1^0}`$) plane, Figures 4(a) and (b) show the corresponding 95% C.L. exclusion regions for $`\mathrm{tan}\beta =1.5`$ and 35. Figures 4(c) and (d) show the corresponding 95% C.L. exclusion regions in the ($`m_{\stackrel{~}{\chi }_2^0}`$,$`m_{\stackrel{~}{\chi }_1^0}`$) plane, for $`\mathrm{tan}\beta =1.5`$ and 35. Mass limits on $`\stackrel{~}{\chi }_1^0`$, $`\stackrel{~}{\chi }_2^0`$, and $`\stackrel{~}{\chi }_3^0`$ are summarised in Table 6. Figure 5 shows the dependence of the mass limits on the value of $`\mathrm{tan}\beta `$. Of particular interest is the absolute lower limit, in the framework of the Constrained MSSM, on the mass of the lightest neutralino: $`m_{\stackrel{~}{\chi }_1^0}>32.8`$ GeV (31.6 GeV) at 95% C.L. for $`m_0=500`$ GeV (all $`m_0`$). This has implications for direct searches for the lightest neutralino as a candidate for dark matter . Since the formulae for couplings and masses in the gaugino sector are symmetric under $`\mathrm{tan}\beta \leftrightarrow 1/\mathrm{tan}\beta `$, these results also hold for $`\mathrm{tan}\beta <1`$.

## 6 Summary and Conclusion

A data sample corresponding to an integrated luminosity of 182.1 pb<sup>-1</sup> at $`\sqrt{s}=188.6`$ GeV, collected with the OPAL detector, has been analysed to search for pair production of charginos ($`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$) and neutralinos ($`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^0`$) predicted by supersymmetric theories. Decays of $`\stackrel{~}{\chi }_1^\pm `$ into $`\stackrel{~}{\chi }_1^0\ell ^\pm \nu `$ or $`\stackrel{~}{\chi }_1^0\mathrm{q}\overline{\mathrm{q}}^{\prime }`$ and decays of $`\stackrel{~}{\chi }_2^0`$ into $`\stackrel{~}{\chi }_1^0\nu \overline{\nu }`$, $`\stackrel{~}{\chi }_1^0\ell ^+\ell ^{-}`$ or $`\stackrel{~}{\chi }_1^0\mathrm{q}\overline{\mathrm{q}}`$ are looked for. No evidence for such events has been observed. The exclusion limits on $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\chi }_j^0`$ production are significantly improved with respect to the results obtained at lower centre-of-mass energies. Exclusion regions valid at the 95% confidence level have been derived in the framework of the Constrained MSSM, in which only three parameters ($`M_2`$, $`\mu `$ and $`\mathrm{tan}\beta `$) are necessary to describe the chargino and neutralino sectors.
These restrictions in parameter space have been transformed into mass limits valid at the 95% confidence level. Assuming $`m_{\stackrel{~}{\chi }_1^\pm }-m_{\stackrel{~}{\chi }_1^0}\ge 5`$ GeV, the lower mass limit on the chargino is 93.6 GeV for $`\mathrm{tan}\beta =1.5`$ and 94.1 GeV for $`\mathrm{tan}\beta =35`$ for the case of a large universal scalar mass ($`m_0=500`$ GeV); for all $`m_0`$, the mass limit is 78.0 GeV for $`\mathrm{tan}\beta =1.5`$ and 71.7 GeV for $`\mathrm{tan}\beta =35`$. The lower mass limit for the lightest neutralino is 32.8 GeV for the case of $`m_0=500`$ GeV and 31.6 GeV for all $`m_0`$. This latter result has implications for searches for the lightest neutralino as a dark matter candidate.

## Acknowledgements

We particularly wish to thank the SL Division for the efficient operation of the LEP accelerator at all energies and for their continuing close cooperation with our experimental group. We thank our colleagues from CEA, DAPNIA/SPP, CE-Saclay for their efforts over the years on the time-of-flight and trigger systems which we continue to use. In addition to the support staff at our own institutions we are pleased to acknowledge the Department of Energy, USA, National Science Foundation, USA, Particle Physics and Astronomy Research Council, UK, Natural Sciences and Engineering Research Council, Canada, Israel Science Foundation, administered by the Israel Academy of Science and Humanities, Minerva Gesellschaft, Benoziyo Center for High Energy Physics, Japanese Ministry of Education, Science and Culture (the Monbusho) and a grant under the Monbusho International Science Research Program, Japanese Society for the Promotion of Science (JSPS), German Israeli Bi-national Science Foundation (GIF), Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, Germany, National Research Council of Canada, Research Corporation, USA, Hungarian Foundation for Scientific Research, OTKA T-029328, T023793 and OTKA F-023259.
# Mass Loss on the Red Giant Branch and the Second-Parameter Phenomenon

## 1 Introduction

Much recent debate has focused on the issue of whether age is the “second parameter” of horizontal-branch (HB) morphology (the first parameter being metallicity \[Fe/H\]), or whether the phenomenon is instead much more complex, with several parameters playing an important role (VandenBerg 1999 and references therein). At the same time, most studies devoted to this issue have adopted a qualitative, rather than quantitative, approach to the second-parameter phenomenon. More specifically, attempts to check whether a measured turnoff age difference between two GCs would be consistent with their relative HB types have been relatively rare. The main purpose of this paper is to report on some recent progress in this area.

## 2 Analytical Mass Loss Formulae for Cool Giants

Mass loss on the red giant branch (RGB) is widely recognized as one of the most important ingredients, as far as the HB morphology goes (e.g., Catelan & de Freitas Pacheco 1995; Lee et al. 1994; Rood et al. 1997). Up to now, investigations of the impact of RGB mass loss upon the HB morphology have mostly relied on Reimers’ (1975) mass loss formula. We note, however, that Reimers’ is by no means the only mass loss formula available for this type of study. In particular, alternative formulae have been presented by Mullan (1978), Goldberg (1979), and Judge & Stencel (1991, hereafter JS91).

### 2.1 Mass Loss Formulae Revisited

As a first step in this project, we have undertaken a revision of all these formulae, employing the latest and most extensive dataset available in the literature—namely, that of JS91. The mass loss rates provided in JS91 were compared against more recent data (e.g., Guilain & Mauron 1996), and excellent agreement was found. If the distance adopted by JS91 lay more than about $`2\sigma `$ away from that based on Hipparcos trigonometric parallaxes, the star was discarded. Only five stars turned out to be discrepant, in a sample containing more than 20 giants. Employing ordinary least-squares regressions, we find that the following formulae provide adequate fits to the data (see also Fig. 1):

$$\frac{\mathrm{d}M}{\mathrm{d}t}=8.5\times 10^{-10}\left(\frac{L}{gR}\right)^{+1.4}M_{\odot }\mathrm{yr}^{-1},$$ (1)

with $`g`$ in cgs units, and $`L`$ and $`R`$ in solar units. As can be seen, this represents a “generalized” form of Reimers’ original mass loss formula, essentially reproducing a later result by Reimers (1987). The exponent (+1.4) differs from the one in Reimers’ (1975) formula (+1.0) at the $`3\sigma `$ level;

$$\frac{\mathrm{d}M}{\mathrm{d}t}=2.4\times 10^{-11}\left(\frac{g}{R^{3/2}}\right)^{-0.9}M_{\odot }\mathrm{yr}^{-1},$$ (2)

likewise, but in the case of Mullan’s (1978) formula;

$$\frac{\mathrm{d}M}{\mathrm{d}t}=1.2\times 10^{-15}R^{+3.2}M_{\odot }\mathrm{yr}^{-1},$$ (3)

idem, Goldberg’s (1979) formula;

$$\frac{\mathrm{d}M}{\mathrm{d}t}=6.3\times 10^{-8}g^{-1.6}M_{\odot }\mathrm{yr}^{-1},$$ (4)

ibidem, JS91’s formula. In addition, the expression

$$\frac{\mathrm{d}M}{\mathrm{d}t}=3.4\times 10^{-12}L^{+1.1}g^{-0.9}M_{\odot }\mathrm{yr}^{-1},$$ (5)

suggested to us by D. VandenBerg, also provides a good fit to the data. “Occam’s razor” would favor equations (3) or (4) in comparison with the others, but otherwise we are unable to identify any of them as being obviously superior.
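Because equations (1)–(5) will be compared repeatedly in what follows, it is convenient to have them in executable form. A minimal sketch (Python), assuming the coefficients and exponents exactly as transcribed above; the example star near the RGB tip is purely illustrative:

```python
# Direct transcription of fits (1)-(5): L and R in solar units,
# g in cgs, rates in solar masses per year.
def mdot(formula, L=None, g=None, R=None):
    if formula == 1:                      # generalized Reimers fit, eq. (1)
        return 8.5e-10 * (L / (g * R))**1.4
    if formula == 2:                      # Mullan-type fit, eq. (2)
        return 2.4e-11 * (g / R**1.5)**-0.9
    if formula == 3:                      # Goldberg-type fit, eq. (3)
        return 1.2e-15 * R**3.2
    if formula == 4:                      # JS91-type fit, eq. (4)
        return 6.3e-8 * g**-1.6
    if formula == 5:                      # eq. (5)
        return 3.4e-12 * L**1.1 * g**-0.9
    raise ValueError("formula must be 1..5")

# Illustrative values for a star near the RGB tip (not from the paper):
L, R = 2.0e3, 1.0e2                       # solar units
g = 27400.0 * 0.8 / R**2                  # cgs surface gravity for M = 0.8 Msun
for k in range(1, 6):
    print(k, mdot(k, L=L, g=g, R=R))      # all five fits give ~1e-9..1e-8 Msun/yr
```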
### 2.2 Caveats

We emphasize that mass loss formulae such as those given above should not be employed in astrophysical applications (stellar evolution, analysis of integrated galactic spectra, etc.) without keeping in mind these exceedingly important limitations:

1. As in Reimers’ (1975) case, equations (1) through (5) were derived based on Population I stars. Hence they too are not well established for low-metallicity stars. Moreover, there are only two first-ascent giants in the adopted sample;

2. Quoting Reimers (1977), “besides the basic \[stellar\] parameters … the mass-loss process is probably also influenced by the angular momentum, magnetic fields and close companions. The order of magnitude of such effects is completely unclear. Obviously, many observations will be necessary before we get a more detailed picture of stellar winds in red giants” (emphasis added). See also Dupree & Reimers (1987);

3. “One should always bear in mind that a simple … formula like that proposed can be expected to yield only correct order-of-magnitude results if extrapolated to the short-lived evolutionary phases near the tips of the giant branches” (Kudritzki & Reimers 1978);

4. “Most observations have been interpreted using models that are relatively simple (stationary, polytropic, spherically symmetric, homogeneous) and thus ‘observed’ mass loss rates or limits may be in error by orders of magnitude in some cases” (Willson 1999);

5. The two first-ascent giants analyzed by Robinson et al. (1998) using HST-GHRS, $`\alpha `$ Tau and $`\gamma `$ Dra, both appear to lie about one order of magnitude below the relations that best fit the JS91 data—two orders of magnitude, in fact, if compared to Reimers’ formula (see Fig. 1). The K supergiant $`\lambda `$ Vel, analyzed by the same group (Mullan et al. 1998), appears in much better agreement with the adopted dataset and best fitting relations.

In effect, mass loss on the RGB is an excellent, but virtually untested, second-parameter candidate. It may be connected to GC density, rotational velocities, and abundance anomalies on the RGB. It will be extremely important to study mass loss in first-ascent, low-metallicity giants—in the field and in GCs alike—using the most adequate ground- and space-based facilities available, or expected to become available, in the course of the next decade. Moreover, in order to properly determine how (mean) mass loss behaves as a function of the fundamental physical parameters and metallicity, astrometric missions much more accurate than Hipparcos, such as SIM and GAIA, will certainly be necessary. In the meantime, we suggest that using several different mass loss formulae (such as those provided in Sect. 2.1) constitutes a better approach than relying on a single one. This is the approach that we are going to follow in the rest of this paper.

## 3 Implications for the Amount of Mass Lost by First-Ascent Giants

The latest RGB evolutionary tracks by VandenBerg et al. (2000) were employed in an investigation of the amount of mass lost on the RGB and its dependence on age. The effects of mass loss upon RGB evolution were ignored. In Figure 2, the mass loss–age relationship is shown for each of equations (1) through (5), and also for Reimers’ (1975) formula, for a metallicity $`[\mathrm{Fe}/\mathrm{H}]=-1.41`$, $`[\alpha /\mathrm{Fe}]=+0.30`$. Even though the formulae from Section 2.1 are all based on the very same dataset, the implications differ from case to case.
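The computation behind Figure 2 reduces to integrating a chosen mass-loss rate along a tabulated RGB track. A minimal sketch, assuming hypothetical track arrays in place of the VandenBerg et al. (2000) sequences (which are not reproduced here), and ignoring the feedback of mass loss on the evolution, as stated above:

```python
# Accumulate the total RGB mass loss by integrating one of the fits
# along a tabulated track.  The arrays (ages in yr; L, g, R at each age)
# are placeholders for the real evolutionary sequences.
import numpy as np

def total_mass_lost(ages, L, g, R, rate):
    """Trapezoidal integral of the mass-loss rate over the track."""
    mdot = rate(L, g, R)               # solar masses per year at each point
    return np.trapz(mdot, ages)        # total Delta M in solar masses

# e.g. with the JS91-type fit (4):
rate4 = lambda L, g, R: 6.3e-8 * g**-1.6
# ages, Ltrack, gtrack, Rtrack = load_track(...)   # hypothetical loader
# print(total_mass_lost(ages, Ltrack, gtrack, Rtrack, rate4))
```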
## 4 The Second-Parameter Effect: the Case of Pal 4/Eridanus vs. M5

Stetson et al. (1999) presented $`V`$, $`V-I`$ color-magnitude diagrams for the outer-halo GCs Pal 4 and Eridanus, based on HST images. Analyzing their turnoff ages, they concluded that Pal 4 and Eridanus are younger than M5, a GC with similar metallicity but a bluer HB, by 1.5–2 Gyr. Based on the same data, VandenBerg (1999) obtained a smaller age difference: 1–1.5 Gyr. Are these values consistent with the relative HB types of Pal 4/Eridanus vs. M5?

To answer this question, we constructed detailed synthetic HB models (based on the evolutionary tracks described in Catelan et al. 1998) for M5 and Pal 4/Eridanus, thus obtaining the difference in mean HB mass between them—which was then transformed into a difference in age with the aid of the mass loss formulae from Section 2 and the RGB mass loss results from Section 3. Figure 3 shows the age difference thus obtained as a function of the adopted M5 age, in comparison with the turnoff determinations from Stetson et al. (1999) and VandenBerg (1999). Our assumed reddening values for Pal 4/Eridanus come from Schlegel et al. (1998); had the Harris (1996) values been adopted instead, the curves in Figure 3 corresponding to the different mass loss formulae would all be shifted upwards.

## 5 Conclusions

As one can see from Figure 3, our results indicate that, irrespective of the mass loss formula employed, age cannot be the only “second parameter” at play in the case of M5 vs. Pal 4/Eridanus, unless these GCs are younger than 10 Gyr.

## Acknowledgements

The author wishes to express his gratitude to D.A. VandenBerg for providing many useful comments and suggestions, and also for making his latest evolutionary sequences available in advance of publication. Support for this work was provided by NASA through Hubble Fellowship grant HF–01105.01–98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555.

## References

> Catelan M., Borissova J., Sweigart A.V., Spassova N., 1998, ApJ 494, 265
>
> Catelan M., de Freitas Pacheco J.A., 1995, A&A 297, 345
>
> Dupree A.K., Reimers D., 1987. In: Kondo Y., et al. (eds.) Exploring the Universe with the IUE Satellite. Dordrecht, Reidel, p. 321
>
> Goldberg L., 1979, QJRAS 20, 361
>
> Guilain Ch., Mauron N., 1996, A&A 314, 585
>
> Harris W.E., 1996, AJ 112, 1487
>
> Judge P.G., Stencel R.E., 1991, ApJ 371, 357 (JS91)
>
> Kudritzki R.P., Reimers D., 1978, A&A 70, 227
>
> Lee Y.-W., Demarque P., Zinn R., 1994, ApJ 423, 248
>
> Mullan D.J., 1978, ApJ 226, 151
>
> Mullan D.J., Carpenter K.G., Robinson R.D., 1998, ApJ 495, 927
>
> Reimers D., 1975. In: Mémoires de la Societé Royale des Sciences de Liège, 6e serie, tome VIII, Problèmes D’Hydrodynamique Stellaire, p. 369
>
> Reimers D., 1977, A&A 57, 395
>
> Reimers D., 1987. In: Appenzeller I., Jordan C. (eds.) IAU Symp. 122, Circumstellar Matter. Dordrecht, Kluwer, p. 307
>
> Robinson R.D., Carpenter K.G., Brown A., 1998, ApJ 503, 396
>
> Rood R.T., Whitney J., D’Cruz N., 1997. In: Rood R.T., Renzini A. (eds.) Advances in Stellar Evolution. Cambridge, Cambridge University Press, p. 74
>
> Schlegel D.J., Finkbeiner D.P., Davis M., 1998, ApJ 500, 525
>
> VandenBerg D.A., 1999, ApJ, submitted
>
> VandenBerg D.A., Swenson F.J., Rogers F.J., Iglesias C.A., Alexander D.R., 2000, ApJ 528, in press (January $`20^{\mathrm{th}}`$ issue)
>
> Willson L.A., 1999. In: Livio M. (ed.)
> Unsolved Problems in Stellar Evolution. Cambridge, Cambridge University Press, p. 227
# Semileptonic Decays of Heavy Mesons with the Fat Clover Action

Presented by C. DeTar.

## 1 OBJECTIVES

We are studying the semileptonic decays $`B\to \pi \ell \nu `$, $`B\to D\ell \nu `$, $`B\to \rho \ell \nu `$, $`B\to D^{*}\ell \nu `$, and $`B\to K^{*}\gamma `$ and the corresponding decays with a strange spectator quark. For a companion study of purely leptonic decays, see . The CKM matrix element $`V_{ub}`$, for example, is obtained from the differential semileptonic decay rate for $`B\to \pi \ell \nu `$ at total leptonic four-momentum $`q`$:

$$\frac{d\mathrm{\Gamma }}{dq^2}=\frac{G_F^2p_\pi ^3}{24\pi ^3}|V_{ub}|^2|f^+(q^2)|^2.$$

The unknown hadronic form factor $`f^+(q^2)`$ is to be determined in lattice gauge theory from the matrix element of the weak vector current $`V_\mu `$,

$$\langle \pi (k)|V_\mu |B(p)\rangle =\left(p_\mu +k_\mu -q_\mu \frac{m_B^2-m_\pi ^2}{q^2}\right)f^+(q^2)+q_\mu \frac{m_B^2-m_\pi ^2}{q^2}f^0(q^2).$$

## 2 FAT CLOVER ACTION

Since the heavy-light meson decays involve light quarks, it is important to choose an $`𝒪(a^2)`$ lattice fermion implementation with good chiral properties. To this end we have been experimenting with an action proposed by DeGrand, Hasenfratz, and Kovács , which introduces, in effect, a cutoff-dependent form factor at the quark-gluon vertex to suppress lattice artifacts at the level of the cutoff. The action is the usual clover action, but with a gauge background constructed by replacing the usual gauge links by APE-smoothed links, with coefficient $`1-c`$ for the forward link and $`c/6`$ for the sum of staples. The smoothed link is projected back to SU(3). This smoothing process is repeated $`N`$ times. For the present experiment we take $`c=0.45`$ and $`N=10`$. These values are to be kept constant in the continuum limit, thus giving the local continuum fermion action. This “fattening” process reduces problems with “exceptional” configurations that obstruct extrapolations to light quark mass .

## 3 PARAMETERS IN THE STUDY

Calculations were done on an archive of 200 $`24^3\times 64`$ gauge configurations, generated with two flavors of dynamical staggered quarks of mass $`am_q=0.01`$ at the one-plaquette coupling $`6/g^2=5.6`$, corresponding to a lattice spacing (from the rho mass) of about 0.11 fm. The fat clover propagator was generated for three “light” (spectator and recoiling) quarks and five “heavy” (decaying and recoiling) quarks over a mass range $`0.5m_s<m<1.1m_b`$. The coefficient of the clover term $`c_{SW}`$ was set to 1. The mass of the lightest fat clover quark was adjusted to give the same pion mass as the staggered fermion Goldstone boson. We use the Fermilab program through $`𝒪(a)`$ for the quark wave function normalization, including the three-dimensional rotation with coefficient $`d_1`$. The light meson source is placed at $`t=0`$ and the heavy-light meson at $`t=32`$, with antiperiodic boundary conditions in $`t`$. We treat three values of the heavy-light-meson momentum and 21 values of the three-momentum transfer at the current vertex. Computations are in progress. Results are presented for a subset of about half of the 200 configurations, including only the two lightest spectator quark masses.

## 4 SELECTED RESULTS

An example of the meson dispersion relation is shown in Fig. 1. It is quite satisfactory.
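To make the smoothing recipe of section 2 concrete, here is a minimal sketch of one APE step for a single link. The lattice container and staple bookkeeping are placeholders, and the SU(3) projection shown (polar decomposition with a phase fix) is one common convention, not necessarily the one used in this work:

```python
# One APE-smearing step for a single SU(3) link:
#   V = (1 - c) * U_mu(x) + (c/6) * (sum of the six staples),
# followed by projection back to SU(3).
import numpy as np

def project_su3(m):
    """Project a 3x3 complex matrix to SU(3) via polar decomposition."""
    w, _, vh = np.linalg.svd(m)
    u = w @ vh                                  # nearest unitary matrix
    return u / np.linalg.det(u)**(1.0 / 3.0)    # remove the residual U(1) phase

def ape_smear_link(link, staples, c=0.45):
    """One smearing step; 'staples' is the list of the six staple matrices."""
    fat = (1.0 - c) * link + (c / 6.0) * sum(staples)
    return project_su3(fat)

# Sweeping this over all links N = 10 times with c = 0.45 produces the
# fat gauge background on which the clover operator is evaluated.
```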
The form factor is extracted by amputating the external meson legs — at present, by dividing by $`\mathrm{exp}[(E_B-E_M)t]`$, where the $`B`$ meson energy $`E_B`$ and recoil meson energy $`E_M`$ are taken from central values of a fit to the corresponding two-point dispersion relations. The diagonal vector form factor at zero three-momentum transfer gives the vector current renormalization factor $`Z_V`$. It is shown as a function of the inverse meson mass in Fig. 2 for the two currently available choices of the spectator quark mass. We see that this nonperturbative renormalization constant is within 10–15% of unity.

We test the soft pion theorem, which states that in the chiral limit $`f^0(q_{\mathrm{max}}^2)=f_B/f_\pi `$. The same action and configurations are used to get $`f_B`$ . Both spectator and recoil quark masses ($`m`$ and $`m^{\prime }`$) are extrapolated to zero. If we use $`f^0[q_{\mathrm{max}}^2(m,m^{\prime }),m,m^{\prime }]=a+bm+cm^{\prime }`$ we obtain Fig. 3, a disagreement similar to that found by JLQCD . If we include an extra term $`d\sqrt{m+m^{\prime }}`$, as advocated by Maynard , the theorem is satisfied, but with large extrapolated errors. We hope our eventual full data sample will help resolve these complexities . Sample form factors for the process $`B_s\to K\ell \nu `$ are shown in Fig. 4.

## 5 DISCUSSION

Fattening has allowed us to obtain results for an ostensibly $`𝒪(a^2)`$ action on unquenched lattices for quark masses at least as low as $`0.5m_s`$ with no noticeable trouble from exceptional configurations. Our experiment raises a number of important questions: Will a one-loop perturbative determination of current renormalization factors be adequate? How much fattening is good? Does fattening push us farther from the continuum limit for some quantities? Work is in progress.

This work is supported by the US National Science Foundation and Department of Energy and used computer resources at the San Diego Supercomputer Center (NPACI), University of Utah (CHPC), Oak Ridge National Laboratory (CCS), and the Pittsburgh Supercomputer Center.
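The chiral extrapolation quoted above is an ordinary linear least-squares problem. A minimal sketch, with placeholder arrays standing in for the measured $`f^0`$ values at the simulated $`(m,m^{\prime })`$ combinations:

```python
# Fit f0(q2max(m, m'), m, m') = a + b m + c m', optionally adding the
# d*sqrt(m + m') term discussed in the text; the intercept 'a' is the
# chiral-limit value compared with fB/fpi.
import numpy as np

def fit_f0(m, mp, f0, with_sqrt_term=False):
    cols = [np.ones_like(m), m, mp]
    if with_sqrt_term:
        cols.append(np.sqrt(m + mp))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, f0, rcond=None)
    return coeffs                     # a, b, c (and d if requested)

# m, mp, f0 = ...   # lattice data arrays would go here
```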
# Quasinormal Modes of AdS Black Holes and the Approach to Thermal Equilibrium

NSF-ITP-99-70, hep-th/9909056

Gary T. Horowitz and Veronika E. Hubeny (gary@cosmic.physics.ucsb.edu, veronika@cosmic.physics.ucsb.edu)

Physics Department, University of California, Santa Barbara, CA 93106, USA

Abstract: We investigate the decay of a scalar field outside a Schwarzschild anti de Sitter black hole. This is determined by computing the complex frequencies associated with quasinormal modes. There are qualitative differences from the asymptotically flat case, even in the limit of small black holes. In particular, for a given angular dependence, the decay is always exponential: there are no power law tails at late times. In terms of the AdS/CFT correspondence, a large black hole corresponds to an approximately thermal state in the field theory, and the decay of the scalar field corresponds to the decay of a perturbation of this state. Thus one obtains the timescale for the approach to thermal equilibrium. We compute these timescales for the strongly coupled field theories in three, four, and six dimensions which are dual to string theory in asymptotically AdS spacetimes.

September, 1999

1. Introduction

It is well known that if you perturb a black hole, the surrounding geometry will “ring”, i.e., undergo damped oscillations. The frequencies and damping times of these oscillations are entirely fixed by the black hole, and are independent of the initial perturbation. These oscillations are similar to normal modes of a closed system. However, since the field can fall into the black hole or radiate to infinity, the modes decay and the corresponding frequencies are complex. These oscillations are known as “quasinormal modes”. For black holes in asymptotically flat spacetimes, they have been studied for almost thirty years \[1,2\]. The radiation associated with these modes is expected to be seen with gravitational wave detectors in the coming decade. Motivated by inflation, the quasinormal modes of black holes in de Sitter space have recently been studied \[3,4\].

For spacetimes which asymptotically approach anti de Sitter (AdS) spacetime, the situation is slightly different. In the absence of a black hole, most fields propagating in AdS can be expanded in ordinary normal modes. The cosmological constant provides an effective confining box, and solutions only exist with discrete (real) frequencies. However, once a black hole is present, this is no longer the case. The fields can now fall into the black hole and decay. There should exist complex frequencies, characteristic of the black hole, which describe the decay of perturbations outside the horizon. We will compute these quasinormal frequencies below, for spacetimes of various dimensions.

The quasinormal frequencies of AdS black holes have a direct interpretation in terms of the dual conformal field theory (CFT) \[5,6,7,8\].<sup>1</sup> The importance of these modes in AdS was independently recognized in , but they were not computed. They were computed in , but only for a conformally invariant scalar field whose asymptotic behavior is similar to flat spacetime. The confining behavior of AdS is crucial for the AdS/CFT correspondence. According to the AdS/CFT correspondence, a large static black hole in AdS corresponds to an (approximately) thermal state in the CFT. Perturbing the black hole corresponds to perturbing this thermal state, and the decay of the perturbation describes the return to thermal equilibrium.
So we obtain a prediction for the thermalization timescale in the strongly coupled CFT. It seems difficult to compute this timescale directly in the CFT. Since the system will clearly not thermalize in the free field limit, at weak coupling this timescale will be very long and depend on the coupling constant. In the limit of strong coupling, it seems plausible that the timescale will remain nonzero and be independent of the coupling. This is because the initial state is characterized by excitations with size of order the thermal wavelength, so causality suggests that the relaxation timescale should also be of order the thermal wavelength. The results we obtain are consistent with this expectation.

A black hole in AdS is determined by two dimensionful parameters, the AdS radius $`R`$ and the black hole radius $`r_+`$. The quasinormal frequencies must be functions of these parameters. For large black holes, $`r_+\gg R`$, we will show that there is an additional symmetry which insures that the frequencies can depend only on the black hole temperature $`T\sim r_+/R^2`$. However, for smaller black holes, this is no longer the case. Whereas the temperature begins to increase as one decreases $`r_+`$ below $`R`$, we find that the (imaginary part of the) frequency continues to decrease with $`r_+`$. This is different from what happens for asymptotically flat black holes. An ordinary Schwarzschild black hole has only one dimensionful parameter, which can be taken to be the temperature. Its quasinormal frequencies must therefore be multiples of this temperature. Thus small black holes in AdS do NOT behave like black holes in asymptotically flat spacetime. The reason is simply that the boundary conditions at infinity are changed. More physically, the late time behavior of the field is affected by waves bouncing off the potential at large $`r`$.

Another difference from the asymptotically flat case concerns the decay at very late times. For a Schwarzschild black hole, it is known that the exponential decay associated with the quasinormal modes eventually gives way to a power law tail . This has been shown to be associated with the scattering of the field off the Coulomb potential at large $`r`$. As we will discuss later, for asymptotically AdS black holes, this does not occur.

We will compute the quasinormal frequencies for Schwarzschild-AdS black holes in the dimensions of interest for the AdS/CFT correspondence: four, five, and seven. We will consider minimally coupled scalar perturbations representing, e.g., the dilaton. This corresponds to a particular perturbation of the CFT. For example, for $`AdS_5`$, it corresponds to a perturbation of an (approximately) thermal state in super Yang-Mills on $`S^3\times R`$ with $`\langle F^2\rangle `$ nonzero. In the linearized approximation we are using, the spacetime metric is not affected by the scalar field. So the perturbation of the thermal state does not change the energy density, which remains uniform over the sphere. The late time decay of this perturbation is universal in the sense that all solutions for the dilaton with the same angular dependence will decay at the same rate, which is determined by the imaginary part of the lowest quasinormal frequency. Different perturbations, corresponding to different linearized supergravity fields, will have different quasinormal frequencies and hence decay at different rates.
Although we work in the classical supergravity limit, our results would not be affected if one includes small semiclassical corrections such as black holes in equilibrium with their Hawking radiation. A brief outline of this paper is the following. In the next section we review the definition of quasinormal modes, their relation to the late time behavior of the field, and derive some of their properties using analytic arguments. The numerical approach we use to compute the complex frequencies is described in section 3. In the following section we discuss the results for both large black holes $`r_+\gg R`$ and intermediate size black holes $`r_+\sim R`$. In section 5 we consider the limit of small black holes $`r_+\ll R`$. Although there is a striking similarity between some of our results and some results obtained in the study of black hole critical phenomena , we will argue that this is probably just a numerical coincidence. The conclusion contains some speculations about the CFT interpretation of the quasinormal frequencies in the regime where they do not scale with the temperature. In the appendix, we give some more details on our numerical calculations.

2. Definition of quasinormal modes and analytic arguments

Since we are interested in studying AdS black holes in various dimensions, we begin with the $`d`$ dimensional Schwarzschild-AdS metric:

$$ds^2=-f(r)dt^2+f(r)^{-1}dr^2+r^2d\mathrm{\Omega }_{d-2}^2$$

where

$$f(r)\equiv \frac{r^2}{R^2}+1-\left(\frac{r_0}{r}\right)^{d-3}.$$

$`R`$ is the AdS radius and $`r_0`$ is related to the black hole mass via

$$M=\frac{(d-2)A_{d-2}r_0^{d-3}}{16\pi G_d}$$

where $`A_{d-2}=2\pi ^{\frac{d-1}{2}}/\mathrm{\Gamma }(\frac{d-1}{2})`$ is the area of a unit $`(d-2)`$-sphere. The black hole horizon is at $`r=r_+`$, the largest zero of $`f`$, and its Hawking temperature is

$$T=\frac{f^{\prime }(r_+)}{4\pi }=\frac{(d-1)r_+^2+(d-3)R^2}{4\pi r_+R^2}$$

We are interested in solutions to the minimally coupled scalar wave equation

$$\nabla ^2\mathrm{\Phi }=0$$

If we consider modes

$$\mathrm{\Phi }(t,r,\mathrm{angles})=r^{\frac{2-d}{2}}\psi (r)Y(\mathrm{angles})e^{-i\omega t}$$

where $`Y`$ denotes the spherical harmonics on $`S^{d-2}`$, and introduce a new radial coordinate $`dr_{*}=dr/f(r)`$, the wave equation reduces to the standard form

$$[\partial _{r_{*}}^2+\omega ^2-\stackrel{~}{V}(r_{*})]\psi =0.$$

The potential $`\stackrel{~}{V}`$ is positive and vanishes at the horizon, which corresponds to $`r_{*}=-\infty `$. It diverges at $`r=\infty `$, which corresponds to a finite value of $`r_{*}`$.

To define quasinormal modes, let us first consider the case of a simple Schwarzschild black hole. Since the spacetime is asymptotically flat, the potential now vanishes near infinity. Clearly, a solution exists for each $`\omega `$ corresponding to a wave coming in from infinity, scattering off the potential and being partly reflected and partly absorbed by the black hole. Quasinormal modes are defined as solutions which are purely outgoing near infinity, $`\mathrm{\Phi }\sim e^{-i\omega (t-r_{*})}`$, and purely ingoing near the horizon, $`\mathrm{\Phi }\sim e^{-i\omega (t+r_{*})}`$. No initial incoming wave from infinity is allowed. This will only be possible for a discrete set of complex $`\omega `$ called the quasinormal frequencies.

For the asymptotically AdS case, the potential diverges at infinity, so we must require that $`\mathrm{\Phi }`$ vanish there. In the absence of a black hole, $`r_{*}`$ has only a finite range and solutions exist for only a discrete set of real $`\omega `$.
However, once the black hole is added, there are again solutions with any value of $`\omega `$. These correspond to an outgoing wave coming from the (past) horizon, scattering off the potential and becoming an ingoing wave entering the (future) horizon. Quasinormal modes are defined to be modes with only ingoing waves near the horizon. These again exist for only a discrete set of complex $`\omega `$.

It should perhaps be emphasized that these modes are not the same as the ones that have recently been computed in connection with the glueball masses . There are several differences: First, the background for the glueball mass calculation is not the spherically symmetric AdS black hole, but an analytic continuation of the plane symmetric AdS black hole. Second, because of the analytic continuation, the horizon becomes a regular origin, and the boundary conditions there are not the analytic continuation of the ingoing wave boundary condition imposed for quasinormal modes. Finally, the glueball masses are real quantities, while as we have said, the quasinormal frequencies will be complex. This makes them more difficult to compute numerically.

One can show \[14,2\] that the complex quasinormal frequencies determine the fall off of the field at late times. The basic idea is to start by writing the solution to the wave equation in terms of the retarded Green’s function and initial data on a constant $`t`$ surface. One then rewrites the Green’s function in terms of its Fourier transform with respect to $`t`$. The quasinormal modes arise as poles of the Green’s function in the complex frequency plane, and their contributions to the solution can be extracted by closing the contour with a large semicircle near infinity. For a black hole in asymptotically flat spacetimes, Price showed that after the exponential decay due to the quasinormal ringing, the field will decay as a power law $`\mathrm{\Phi }\sim t^{-(2l+3)}`$, where $`l`$ is the angular quantum number. This has been seen explicitly in numerical simulations . Mathematically, this is due to a cut in the Green’s function along the negative imaginary frequency axis. More physically, this behavior is due to scattering off the weak Coulomb potential near infinity.

For the case of a black hole in AdS, the potential diverges at infinity and vanishes exponentially near the horizon. Ching et al. have analyzed the late time behavior of a broad class of wave equations with potentials. They show that there are no power law tails for a potential which vanishes exponentially. So there will be no power law tails for black holes in AdS. For a black hole with radius much smaller than the AdS radius, one might expect an intermediate time regime where one sees power law behavior before the new boundary conditions at infinity become important. However, this would occur only if one starts with large quasinormal modes with $`\omega \sim 1/r_+`$ associated with a Schwarzschild black hole. We will see that the lowest modes of a Schwarzschild-AdS black hole are much smaller, and their exponential decay is so slow that it eliminates the intermediate time power law behavior.

The quasinormal frequencies will in general depend on the two parameters in the problem, $`R`$ and $`r_0`$. By rescaling the metric, $`\widehat{ds}^2=\lambda ^2ds^2`$, and rescaling the coordinates $`\widehat{t}=\lambda t`$ and $`\widehat{r}=\lambda r`$, the new metric again takes the form (2.1) with rescaled constants $`R`$ and $`r_0`$. Since the wave equation (2.1) is clearly invariant under this constant rescaling of the metric, we can use it to set e.g. $`R=1`$. This rescaling is possible for any metric and physically just corresponds to a choice of units. In our case, we measure all quantities in units of the AdS radius. The quasinormal frequencies can still be arbitrary functions of $`r_0`$. We now show that for large black holes, $`r_0\gg R`$, the frequencies must be proportional to the black hole temperature.
Since the wave equation (2.1) is clearly invariant under this constant rescaling of the metric, we can use it to set e.g. $`R=1`$. This rescaling is possible for any metric and physically just corresponds to a choice of units. In our case, we measure all quantities in units of the AdS radius. The quasinormal frequencies can still be arbitrary functions of $`r_0`$. We now show that for large black holes, $`r_0R`$, the frequencies must be proportional to the black hole temperature. This is a result of an independent scaling one can do in this limit. For large black holes, the region outside the horizon of the Schwarzschild-AdS metric (2.1) becomes approximately plane symmetric: $$ds^2=h(r)dt^2+h(r)^1dr^2+r^2dx_idx^i$$ where $$h(r)\frac{r^2}{R^2}\left(\frac{r_0}{r}\right)^{d3}.$$ For this metric one can rescale $`r_0`$ by a pure coordinate transformation: $`t=a\widehat{t},x_i=a\widehat{x}_i,r=\widehat{r}/a`$ for constant $`a`$. This does not rescale the overall metric, or the AdS radius $`R`$. The horizon radius $`r_+^{d1}=R^2r_0^{d3}`$ gets rescaled by $`r_+=\widehat{r}_+/a`$. Of course, under this coordinate transformation of the metric, solutions of the wave equation are related by the same coordinate transformation. For solutions which are independent of $`x^i`$ (the analog of the $`l=0`$ modes) we have $`e^{i\omega (r_+)t}=e^{i\omega (\widehat{r}_+)\widehat{t}}`$, which implies $`\omega (r_+)r_+`$. Since the Hawking temperature of the metric (2.1) is also proportional to the horizon radius, $$T=\frac{d1}{4\pi }\frac{r_+}{R^2}$$ we see that the frequencies must scale with the temperature for large black holes. For solutions proportional to $`e^{ik_ix^i}`$, this scaling argument implies $`\omega (ar_+,ak_i)=a\omega (r_+,k_i)`$. So if $`r_+^2k_ik^i`$, one can rescale so that $`k^2`$ is negligibly small. The above argument then shows that $`\omega `$ still scales with the temperature. One can then rescale back to $`r_+R`$ to apply to large black holes. In other words, for any $`k_i`$, the quasinormal frequencies scale with the temperature in the limit of large temperatures $`T^2k^2`$. This argument does not apply to black holes of order the AdS radius, and indeed we will find that the quasinormal frequencies do not scale with the temperature in this regime. But it does confirm the expectation that the approach to thermal equilibrium in the dual field theory should depend only on the temperature (at least for large temperature). Since we want modes which behave like $`e^{i\omega (t+r_{})}`$ near the horizon, it is convenient to set $`v=t+r_{}`$, and work with ingoing Eddington coordinates. The metric for Schwarzschild-AdS in $`d`$ dimensions in ingoing Eddington coordinates is $$ds^2=f(r)dv^2+2dvdr+r^2d\mathrm{\Omega }_{d2}^2$$ where $`f`$ is again given by (2.1). 
The minimally-coupled scalar wave equation (2.1) may be reduced to an ordinary, second order, linear differential equation in $`r`$ by the separation of variables, $$\mathrm{\Phi }(v,r,\mathrm{angles})=\mathrm{r}^{\frac{2\mathrm{d}}{2}}\psi (\mathrm{r})\mathrm{Y}(\mathrm{angles})\mathrm{e}^{\mathrm{i}\omega \mathrm{v}}$$ This yields the following radial equation for $`\psi (r)`$: $$f(r)\frac{d^2}{dr^2}\psi (r)+[f^{}(r)2i\omega ]\frac{d}{dr}\psi (r)V(r)\psi (r)=0,$$ with the effective potential $`V(r)`$ given by ($`R=1`$) $$V(r)=\frac{(d2)(d4)}{4r^2}f(r)+\frac{d2}{2r}f^{}(r)+\frac{c}{r^2}$$ $$=\frac{d(d2)}{4}+\frac{(d2)(d4)+4c}{4r^2}+\frac{(d2)^2r_0^{d3}}{4r^{d1}}$$ where $$c=l(l+d3)$$ is the eigenvalue of the Laplacian on $`S^{d2}`$. Note that $`V(r)`$ is manifestly positive for $`d4`$. Ingoing modes near the (future) horizon are described, of course, by a nonzero multiple of $`e^{i\omega v}`$. Outgoing modes near the horizon can also be expressed in terms of ingoing Eddington coordinates via $`e^{i\omega (tr_{})}=e^{i\omega v}e^{2i\omega r_{}}`$. Since $$r_{}=\frac{dr}{f(r)}\frac{1}{f^{}(r_+)}\mathrm{ln}(rr_+)$$ near the horizon $`r=r_+`$, the outgoing modes behave like $$e^{i\omega (tr_{})}=e^{i\omega v}e^{2i\omega r_{}}e^{i\omega v}(rr_+)^{2i\omega /f^{}(r_+)}$$ Since $`v,r`$ are good coordinates near the horizon, the outgoing modes are not smooth ($`C^{\mathrm{}}`$) at $`r=r_+`$ unless $`2i\omega /f^{}(r_+)`$ is a positive integer. We show below that the imaginary part of $`\omega `$ must be negative, so the exponent in (2.1) always has a positive real part. Thus the outgoing modes vanish near the future horizon, while the ingoing modes are nonzero there. However we also show (in the next section) that $`2i\omega /f^{}(r_+)`$ cannot be a positive integer, so the outgoing modes are not smooth at $`r=r_+`$. We wish to find the complex values of $`\omega `$ such that (2.1) has a solution with only ingoing modes near the horizon, and vanishing at infinity. We will eliminate the outgoing modes by first assuming the solution is smooth at $`r=r_+`$, and then showing that the allowed descrete values of $`\omega `$ are such that $`2i\omega /f^{}(r_+)`$ is not an integer. The actual values of $`\omega `$ must be computed numerically, but some general properties can be seen analytically. For example, we now show that there are no solutions with $`i\omega `$ pure real, and $`2i\omega <f^{}(r_+)`$. If $`i\omega `$ were real, then the equation would be real and the solutions $`\psi `$ would be real. If there were a local extremum at some point $`\stackrel{~}{r}`$, then $`\psi ^{}(\stackrel{~}{r})=0`$ and $`\psi ^{\prime \prime }(\stackrel{~}{r})`$ would have the same sign as $`\psi (\stackrel{~}{r})`$. So if $`\psi `$ were positive at $`\stackrel{~}{r}`$, it would have to increase as $`r`$ increased. Similarly, if it were negative, it would have to decrease. In neither case, could it approach zero asymptotically. We conclude that the solutions must monotonically approach zero. Now if $`2i\omega <f^{}(r_+)`$, $`\psi ^{}(r_+)`$ has the same sign as $`\psi (r_+)`$.<sup>2</sup> This is where the condition of no outgoing modes near the horizon is used. If outgoing waves were present, $`f(r)\psi ^{\prime \prime }(r)`$ would no longer vanish at $`r=r_+`$, and $`\psi ^{}(r_+)`$ need not have the same sign as $`\psi (r_+)`$. So as one moves away from the horizon, the solutions move farther away from zero and hence can never reach zero asymptotically. 
This analytic argument only applies if $`2i\omega <f^{\prime }(r_+)`$. But we will see numerically that even without this restriction, there are no solutions with $`i\omega `$ pure real. A more powerful result can be obtained as follows. Multiplying (2.1) by $`\overline{\psi }`$ and integrating from $`r_+`$ to $`\infty `$ yields

$$\int _{r_+}^{\infty }dr\left[\overline{\psi }\frac{d}{dr}\left(f\frac{d\psi }{dr}\right)-2i\omega \overline{\psi }\frac{d\psi }{dr}-V\overline{\psi }\psi \right]=0$$

The first term can be integrated by parts without picking up a surface term, since $`f(r_+)=0`$ and $`\overline{\psi }(\infty )=0`$. This yields

$$\int _{r_+}^{\infty }dr[f|\psi ^{\prime }|^2+2i\omega \overline{\psi }\psi ^{\prime }+V|\psi |^2]=0$$

Taking the imaginary part of (2.1) yields

$$\int _{r_+}^{\infty }dr[\omega \overline{\psi }\psi ^{\prime }+\overline{\omega }\psi \overline{\psi }^{\prime }]=0$$

Integrating the second term by parts yields

$$(\omega -\overline{\omega })\int _{r_+}^{\infty }dr\overline{\psi }\psi ^{\prime }=\overline{\omega }|\psi (r_+)|^2$$

Substituting this back into (2.1), we obtain the final result

$$\int _{r_+}^{\infty }dr[f|\psi ^{\prime }|^2+V|\psi |^2]=-\frac{|\omega |^2|\psi (r_+)|^2}{\mathrm{Im}\omega }$$

Since $`f`$ and $`V`$ are both positive definite outside the horizon, this equation clearly shows that there are no solutions with Im $`\omega >0`$. These would correspond to unstable modes which grow exponentially in time. There are also no solutions with Im $`\omega =0`$: all solutions must decay in time. In addition, eq. (2.1) shows that the only solution which vanishes at the horizon (and infinity) is zero everywhere. Since the equation is linear, we can always rescale $`\psi `$ so that $`\psi (r_+)=1`$.

3. Numerical approach to computing quasinormal modes

To compute the quasinormal modes, we will expand the solution in a power series about the horizon and impose the boundary condition that the solution vanish at infinity. In order to map the entire region of interest, $`r_+<r<\infty `$, into a finite parameter range, we change variables to $`x=1/r`$. In general, a power series expansion will have a radius of convergence at least as large as the distance to the nearest pole. Examining the pole structure of (2.1) in the whole complex $`r`$ plane, we find $`d+1`$ regular singular points, at $`r=0`$, $`r=\infty `$, and at the $`d-1`$ zeros of $`f`$, one of which, $`r=r_+`$, corresponds to the horizon. At least for $`d=4,5`$ or $`7`$, if we use the variable $`x=1/r`$ and expand about the horizon, $`x_+=1/r_+`$, the radius of convergence will reach to $`x=0`$, so that we can use this expansion to consider the behavior of the solution as $`r\to \infty `$.<sup>3</sup> For $`d=4`$ and $`d=5`$, one can show analytically that starting at the horizon, $`x=x_+`$, the nearest pole is indeed $`x=0`$. For $`d=7`$ we have checked numerically that this is again the case.

In terms of our new variable $`x=1/r`$, (2.1) becomes

$$s(x)\frac{d^2}{dx^2}\psi (x)+\frac{t(x)}{x-x_+}\frac{d}{dx}\psi (x)+\frac{u(x)}{(x-x_+)^2}\psi (x)=0$$

where the coefficient functions are given by

$$s(x)=\frac{r_0^{d-3}x^{d+1}-x^4-x^2}{x-x_+}=\frac{x_+^2+1}{x_+^{d-1}}x^d+\mathrm{\cdots }+\frac{x_+^2+1}{x_+^3}x^4+\frac{1}{x_+^2}x^3+\frac{1}{x_+}x^2$$

$$t(x)=(d-1)r_0^{d-3}x^d-2x^3-2i\omega x^2$$

$$u(x)=(x-x_+)V(x)$$

The parameter $`r_0^{d-3}`$ should be viewed as a function of the horizon radius: $`r_0^{d-3}=\frac{x_+^2+1}{x_+^{d-1}}`$.
Since $`s`$, $`t`$, and $`u`$ are all polynomials of degree $`d`$, we may expand them about the horizon $`x=x_+`$: $`s(x)=\sum _{n=0}^ds_n(x-x_+)^n`$, and similarly for $`t(x)`$ and $`u(x)`$. It will be useful to note that $`s_0=2x_+^2\kappa `$, $`t_0=2x_+^2(\kappa -i\omega )`$, and $`u_0=0`$, where $`\kappa `$ is the surface gravity, which is related to the black hole temperature (2.1) by

$$\kappa =\frac{f^{\prime }(r_+)}{2}=2\pi T.$$

Also, since $`s_0\ne 0`$, $`x=x_+`$ is a regular singular point of (3.1). To determine the behavior of the solutions near the horizon, we first set $`\psi (x)=(x-x_+)^\alpha `$ and substitute into (3.1). Then to leading order we get

$$\alpha (\alpha -1)s_0+\alpha t_0=2x_+^2\alpha \left(\alpha \kappa -i\omega \right)=0$$

which has two solutions, $`\alpha =0`$ and $`\alpha =i\omega /\kappa `$. We see from (2.1) that these correspond precisely to the ingoing and outgoing modes near the horizon, respectively. Since we want to include only the ingoing modes, we take $`\alpha =0`$. This corresponds to looking for a solution of the form

$$\psi (x)=\sum _{n=0}^{\infty }a_n(x-x_+)^n$$

Substituting (3.1) into (3.1) and equating coefficients of $`(x-x_+)^n`$ for each $`n`$, we obtain the following recursion relations for the $`a_n`$:<sup>4</sup>

$$a_n=-\frac{1}{P_n}\sum _{k=0}^{n-1}\left[k(k-1)s_{n-k}+kt_{n-k}+u_{n-k}\right]a_k$$

where

$$P_n=n(n-1)s_0+nt_0=2x_+^2n\left(n\kappa -i\omega \right)$$

<sup>4</sup> Although the standard way of writing (3.1) is to set the coefficient of $`\psi ^{\prime \prime }`$ to 1, which yields simpler-looking recursion relations, the advantage of the present formulation is that since $`s(x)`$, $`t(x)`$, and $`u(x)`$ are polynomials, their analytic expansions will terminate after a finite number of terms, so that each $`a_n`$ will be given in terms of a relatively small number of terms.

Since the leading coefficient $`a_0`$ is undetermined, this yields a one-parameter family of solutions, as expected for a linear equation. The solutions to (2.1) in asymptotically AdS spacetime are $`\mathrm{\Phi }\sim \mathrm{constant}`$ and $`\mathrm{\Phi }\sim 1/r^{d-1}`$ as $`r\to \infty `$, which translates into $`\psi \sim r^{\frac{d-2}{2}}`$ and $`\psi \sim r^{-d/2}`$, respectively. We are interested in normalizable modes, so we must select only solutions which satisfy $`\psi \to 0`$ as $`r\to \infty `$ (or $`x\to 0`$). This means that we require (3.1) to vanish at $`x=0`$, which is satisfied only for special (discrete) values of $`\omega `$. (For all other values of $`\omega `$, the solution will blow up, $`\psi (0)=\infty `$.) Thus in order to find the quasinormal modes, we need to find the zeros of $`\sum _{n=0}^{\infty }a_n(\omega )(-x_+)^n`$ in the complex $`\omega `$ plane. This is done by truncating the series after a large number of terms and computing the partial sum as a function of $`\omega `$. One can then find zeros of this partial sum, and check the accuracy by seeing how much the location of the zero changes as one goes to higher partial sums. Some details are given in the Appendix.

One can now easily show that $`2i\omega /f^{\prime }(r_+)=i\omega /\kappa `$ cannot be an integer. If $`\omega `$ is pure imaginary and $`i\omega =\stackrel{~}{n}\kappa `$ for some integer $`\stackrel{~}{n}`$, then $`P_{\stackrel{~}{n}}=0`$. This implies an additional constraint on the coefficients $`a_k`$, $`k=0,\mathrm{\dots },\stackrel{~}{n}-1`$, which will only be satisfied if they vanish. In other words, the solution will behave like $`(x-x_+)^{\stackrel{~}{n}}`$ near the horizon, corresponding to a pure outgoing wave.
4. Discussion of results

The numerical procedure described above can be applied to both large black holes ($`r_+\gg R`$) and intermediate size black holes ($`r_+\sim R`$). In this section we describe the results. We set $`R=1`$, and decompose the quasinormal frequencies into real and imaginary parts: $$\omega =\omega _R-i\omega _I.$$ With the sign chosen in (4.1), $`\omega _I`$ is positive for all quasinormal frequencies.

| $`r_+`$ | $`\omega _I`$ (4d) | $`\omega _R`$ (4d) | $`\omega _I`$ (5d) | $`\omega _R`$ (5d) | $`\omega _I`$ (7d) | $`\omega _R`$ (7d) |
| --- | --- | --- | --- | --- | --- | --- |
| 100 | 266.3856 | 184.9534 | 274.6655 | 311.9627 | 261.2 | 500.8 |
| 50 | 133.1933 | 92.4937 | 137.3296 | 156.0077 | 130.7 | 250.4 |
| 10 | 26.6418 | 18.6070 | 27.4457 | 31.3699 | 26.07 | 50.35 |
| 5 | 13.3255 | 9.4711 | 13.6914 | 15.9454 | 12.96 | 25.57 |
| 1 | 2.6712 | 2.7982 | 2.5547 | 4.5788 | 2.16 | 7.27 |
| 0.8 | 2.1304 | 2.5878 | 1.9676 | 4.1951 | | |
| 0.6 | 1.5797 | 2.4316 | 1.3656 | 3.8914 | | |
| 0.4 | 1.0064 | 2.3629 | 0.7462 | 3.7174 | | |

Table 1: The lowest quasinormal mode frequency for the 4, 5, and 7 dimensional Schwarzschild-AdS black hole for some selected black hole sizes.

Fig. 1: For large black holes, $`\omega _I`$ is proportional to the temperature. The top line is $`d=4`$, the middle line is $`d=5`$ and the bottom line is $`d=7`$.

Fig. 2: For large black holes, $`\omega _R`$ is also proportional to the temperature. The top line is now $`d=7`$, the middle line is $`d=5`$ and the bottom line is $`d=4`$.

In Table 1, we list the values of the lowest quasinormal mode frequencies for $`l=0`$ and selected values of $`r_+`$, for the four, five, and seven dimensional Schwarzschild-AdS black holes. For large black holes, both the real and the imaginary parts of the frequency are linear functions of $`r_+`$. Since the temperature of a large black hole is $`T=(d-1)r_+/4\pi `$, it follows that they are also linear functions of $`T`$. This is clearly shown in fig. 1 and fig. 2, where $`\omega _I`$ and $`\omega _R`$ respectively are plotted as functions of the temperature for the four, five, and seven dimensional cases. The dots, representing the quasinormal modes, lie on straight lines through the origin. In fig. 1, the top line corresponds to the $`d=4`$ case, the middle line is the $`d=5`$ case, and the bottom line is the $`d=7`$ case. Explicitly, the lines are given by $$\omega _I=11.16\,T\quad \mathrm{for}\ d=4,$$ $$\omega _I=8.63\,T\quad \mathrm{for}\ d=5,$$ $$\omega _I=5.47\,T\quad \mathrm{for}\ d=7.$$ Notice from Table 1 that as a function of $`r_+`$, $`\omega _I`$ is almost independent of dimension. The difference in these slopes is almost entirely due to the dimension dependence of the relation between $`r_+`$ and $`T`$ (2.1).
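This linear scaling can be checked directly from the entries of Table 1. The short script below (a sketch using only the tabulated values and the large black hole relation $`T=(d-1)r_+/4\pi `$) recovers both the nearly dimension-independent slope in $`r_+`$ and the slopes in $`T`$ quoted above.

```python
import numpy as np

# Table 1 data: r_+ versus omega_I for the lowest l = 0 mode (large black holes).
r_plus = np.array([100.0, 50.0, 10.0, 5.0])
omega_I = {4: np.array([266.3856, 133.1933, 26.6418, 13.3255]),
           5: np.array([274.6655, 137.3296, 27.4457, 13.6914]),
           7: np.array([261.2, 130.7, 26.07, 12.96])}

for d, w in omega_I.items():
    T = (d - 1) * r_plus / (4 * np.pi)       # temperature with R = 1
    slope_r = np.polyfit(r_plus, w, 1)[0]    # nearly the same for all d
    slope_T = np.polyfit(T, w, 1)[0]         # 11.16, 8.63, 5.47
    print(f"d={d}: omega_I ~ {slope_r:.3f} r_+ ~ {slope_T:.2f} T")
```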
In contrast, $`\omega _R`$ does depend on the dimension, and in fig. 2, the order of the lines is reversed: $$\omega _R=10.5\,T\quad \mathrm{for}\ d=7,$$ $$\omega _R=9.8\,T\quad \mathrm{for}\ d=5,$$ $$\omega _R=7.75\,T\quad \mathrm{for}\ d=4.$$ This linear scaling with the temperature is in agreement with the general argument in section 2. According to the AdS/CFT correspondence, $`\tau =1/\omega _I`$ is the timescale for the approach to thermal equilibrium. Eq. (4.1) is one of the main results of this work.

Fig. 3: $`\omega _I`$ for intermediate black holes in four dimensions. The solid line is $`\omega _I=2.66r_+`$, and the dashed line is $`\omega _I=11.16T`$.

Fig. 4: $`\omega _I`$ for intermediate black holes in five dimensions. The solid line is $`\omega _I=2.75r_+`$, and the dashed line is $`\omega _I=8.63T`$.

For the intermediate size black holes, the quasinormal frequencies do not scale with the temperature. This is clearly shown in fig. 3, which plots $`\omega _I`$ as a function of $`r_+`$ for $`d=4`$ black holes with $`r_+\le 1`$. To a remarkable accuracy, the points continue to lie along a straight line $`\omega _I=2.66r_+`$. The dashed curve represents the continuation of the curve $`\omega _I=11.16T`$ shown in fig. 1 to smaller values of $`r_+`$. (For large $`r_+`$ these two curves are identical.) It is not yet clear what the significance of this linear relation is for the dual CFT. Some speculations are given in section 6. Since the quasinormal frequencies can be computed to an accuracy much better than the size of the dots in fig. 3, one can check that the points actually lie slightly off the line. This is shown more clearly in the five dimensional results in fig. 4. Once again the dashed curve is the continuation of the curve $`\omega _I=8.63T`$ shown in fig. 1, and the solid curve is the line $`\omega _I=2.75r_+`$ that it approaches asymptotically.

Fig. 5: $`\omega _R`$ for intermediate black holes in four dimensions. The solid line is $`\omega _R=1.85r_+`$, and the dashed line is $`\omega _R=7.75T`$.

Fig. 6: $`\omega _R`$ for intermediate black holes in five dimensions. The solid line is $`\omega _R=3.12r_+`$, and the dashed line is $`\omega _R=9.8T`$.

The real parts of the quasinormal frequencies are shown in similar plots in fig. 5 for $`d=4`$ and fig. 6 for $`d=5`$. $`\omega _R`$ follows the temperature more closely than the black hole size, but it is clear from fig. 6 that it is not diverging for small black holes (as the temperature does). We have so far discussed only the lowest quasinormal mode with $`l=0`$. We have also computed higher modes and modes with nonzero angular momentum, but the numerical accuracy decreases as one increases the mode number $`n`$ or $`l`$. So we restrict our attention to relatively small values of $`n`$ and $`l`$. For large black holes, in both four and five dimensions, we find that the low lying quasinormal modes are approximately evenly spaced in $`n`$. In particular, for $`r_+=100`$, $`\omega _I(n)\approx 41+225n`$ and $`\omega _R(n)\approx 54+131n`$ in four dimensions, whereas $`\omega _I(n)\approx 73+201n`$ and $`\omega _R(n)\approx 106+202n`$ in five dimensions.

Fig. 7: Dependence of $`\omega `$ on $`l`$ for a four dimensional black hole with $`r_+=1`$. The smaller points are $`\omega _R`$, the larger points are $`\omega _I`$.
Increasing the angular momentum $`l`$ of a mode has the surprising effect of increasing the damping time scale ($`\omega _I`$ decreases) and decreasing the oscillation time scale ($`\omega _R`$ increases). This is shown in fig. 7, where $`\omega _R`$ (smaller points) and $`\omega _I`$ (larger points) are plotted against $`l`$ for low values of $`l`$.<sup>5</sup> The size of the dots is not related to the accuracy of the calculation. An important open question is the behavior of $`\omega _I`$ as $`l\rightarrow \infty `$. It appears to decrease with $`l`$, but the general argument in section 2 shows that it cannot become negative. If $`\omega _I`$ continues to decrease with $`l`$, then the late time behavior of a general perturbation will be dominated by the largest $`l`$ mode. The large $`l`$ behavior of $`\omega _I`$ is currently under investigation. Preliminary results indicate that the frequencies stay bounded away from zero. If this were not the case, and $`\omega _I`$ approached zero fast enough, then a general superposition of all spherical harmonics could decay at late times only as a power law. However, even this would not be a problem for the AdS/CFT correspondence, since the decomposition into spherical harmonics can be done in the boundary field theory as well. The statement is that, e.g., a perturbation of $`\langle F^2\rangle `$ with given angular dependence $`Y_l`$ on $`S^3`$ will decay exponentially with a time scale given by the imaginary part of the lowest quasinormal mode with that value of $`l`$.

5. Comments on small black holes

In this section we briefly discuss the extrapolation of the quasinormal frequencies to the small black hole regime ($`r_+\ll R`$). Our numerical approach becomes unreliable in this regime, so we cannot compute them directly. Instead, we must rely on indirect arguments. But first we give some motivation for exploring this question. Small AdS black holes are not of direct interest for the AdS/CFT correspondence. This is because an extended black hole of the form Schwarzschild-AdS cross $`S^m`$ is unstable to forming a black hole localized in all directions whenever the radius of the black hole is smaller than the radius of the sphere. This is a classical instability first discussed by Gregory and Laflamme [16]. It is quite different from the Hawking-Page transition [17,18], which applies to black holes in contact with a heat bath. In that case, when the black hole is of order the AdS radius it undergoes a transition to a thermal gas in AdS. The Hawking-Page transition can be avoided if we consider states of fixed energy, not fixed temperature. Then black holes dominate the entropy even when $`r_+<R`$ and continue to do so until $`r_+/R`$ is less than a negative power of $`N`$ [19,20]. The situation is very similar to the old studies of a black hole in a box. For fixed total energy, the maximum entropy state consists of most of the energy in the black hole, and a small amount in radiation. Unfortunately, the stable small black hole configuration must be a ten or eleven dimensional black hole (by the Gregory-Laflamme instability) and is not known explicitly. Nevertheless, there may be other applications of the quasinormal modes of small black holes in AdS. One possibility comes from the striking fact (shown in fig. 3) that for $`d=4`$, $`\omega _I`$ is proportional to $`r_+`$ to high accuracy.
As we will discuss below, the slope of this line, $`2.66`$, turns out to be numerically very close to a special frequency which arises in the black hole critical phenomena first studied by Choptuik [12]. To explore this possible connection, one needs to consider quasinormal modes of small black holes. From the intermediate black hole results shown in the previous section, it is tempting to speculate that as $`r_+\rightarrow 0`$, $`\omega _I\rightarrow 0`$, and $`\omega _R\rightarrow \mathrm{constant}`$. Since the decay of the field is due to absorption by the black hole, it is intuitively plausible that as the black hole becomes arbitrarily small, the field will no longer decay. It is even possible that the quasinormal modes approach the usual AdS modes in the limit $`r_+\rightarrow 0`$, although this is not guaranteed since the boundary conditions at $`r=r_+`$ do not reduce to regularity at the origin as $`r_+\rightarrow 0`$. If they do approach the usual modes in this limit, then $`\omega _R`$ must approach $`d-1`$ [21]. Of course, in the context of string theory, one cannot trust the Schwarzschild-AdS solution when the curvature at the horizon becomes larger than the string scale. By taking the AdS radius $`R`$ sufficiently large, one can certainly use this solution to describe some small black holes, but the geometry would have to be modified before the limit $`r_+\rightarrow 0`$ is reached. It has been shown that the low energy absorption cross section for massless scalars incident on a general asymptotically flat spherically symmetric black hole is always equal to the area of the event horizon [22]. We can use this to estimate the imaginary part of the lowest quasinormal mode for a small AdS black hole as follows. Imagine a wave with energy of order $`1/R`$ propagating toward a black hole with $`r_+\ll R`$. Then the spacetime around the black hole is approximately Schwarzschild, and the low energy condition is satisfied, so the amplitude of the reflected wave $`\mathrm{\Phi }_r`$ will be reduced from the amplitude of the incident wave $`\mathrm{\Phi }_i`$ by $`1-(\mathrm{\Phi }_r/\mathrm{\Phi }_i)^2\sim r_+^{d-2}`$. After a time of order the AdS radius, the reflected wave will bounce off the potential at infinity with no change in amplitude. It will again encounter the black hole potential and be partly absorbed and partly reflected. Repeating this process leads to a gradual decay of the field, $`\mathrm{\Phi }\sim e^{-\alpha t}`$ with $`\alpha \sim r_+^{d-2}`$. This suggests that for small black holes, $`\omega _I`$ should scale like the horizon area, $`r_+^{d-2}`$. In the large black hole regime, on the other hand, we know that the modes should scale linearly with $`r_+`$. To check this, we consider a simple ansatz which interpolates between these two regimes and see how well it fits the data. Consider the function $`\omega _I(r_+)=\frac{ar_+^m}{b+r_+^{m-1}}`$ (where $`a`$ corresponds to the asymptotic slope). For each $`m`$ we choose $`b`$ to give the best fit to the intermediate black hole data, and see which $`m`$ yields the lowest overall error (as measured by $`\chi ^2`$).

Fig. 8: The curved line is a fit to the modes of a small black hole in $`d=5`$. The modes approach the straight line shown at large $`r_+`$.

In the five dimensional case, using seven points between $`r_+=.4`$ and $`r_+=1`$, we indeed find that $`m=3`$ gives the best fit: $`\chi ^2\approx 9\times 10^{-6}`$ for $`m=3`$, as opposed to $`\chi ^2\approx 3\times 10^{-2}`$ for $`m=2`$ and $`m=4`$. The actual fit, shown in fig. 8 along with the modes and the asymptotic line, is given by $`\omega _I(r_+)\approx \frac{2.746675r_+^3}{0.0748+r_+^2}`$.
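This model-selection step is easy to reproduce. The sketch below fits $`b`$ for each $`m`$ with the asymptotic slope held fixed; it uses only the four intermediate five-dimensional points available from Table 1 (the fit in the text used seven points, so the numbers come out slightly different).

```python
import numpy as np
from scipy.optimize import curve_fit

# d = 5 intermediate black hole data from Table 1
r  = np.array([0.4, 0.6, 0.8, 1.0])
wI = np.array([0.7462, 1.3656, 1.9676, 2.5547])
a  = 2.75                       # asymptotic slope omega_I ~ a r_+ (fig. 4)

for m in (2, 3, 4):
    # interpolating ansatz: omega_I ~ r^m for small r_+, ~ a r for large r_+
    model = lambda r, b: a * r**m / (b + r**(m - 1))
    (b,), _ = curve_fit(model, r, wI, p0=[0.1])
    chi2 = np.sum((model(r, b) - wI) ** 2)
    print(f"m={m}: b={b:.4f}, chi^2={chi2:.2e}")
# m = 3 comes out far better than m = 2 or 4, cf. the quoted fit
# omega_I ~ 2.746675 r_+^3 / (0.0748 + r_+^2).
```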
In four dimensions the story is much less clear, since there is no significant difference between the fit with $`m=2`$ and the fit with $`m=3`$. This could be due to the fact that the data for intermediate black holes have not yet started to deviate significantly from a straight line. To see the possible connection with black hole critical phenomena, consider the evolution of a self-gravitating spherically symmetric scalar field (in an asymptotically flat 4d spacetime). It is clear that weak waves will scatter and go off to infinity, just like in flat spacetime. Strong waves will collapse and form a black hole. Choptuik studied one parameter families of initial data (labelled by $`p`$) which interpolate between these two extremes. In each case there is a critical solution $`p=p_{*}`$ which marks the boundary between forming a black hole and not forming one. The late time behavior of this critical solution turns out to be universal. It has precisely one unstable mode, which grows like $`e^{\lambda t}`$ with $`\lambda =2.67`$. This mode is responsible for the famous scaling of the black hole mass for $`p`$ just above the critical value, $`M_{bh}\sim (p-p_{*})^\gamma `$ where $`\gamma =1/\lambda =.374`$. (For a review, see [23].) The numerical value of $`\lambda `$ is very close to the slope, $`2.66`$, of the line in fig. 3 giving the imaginary part of the quasinormal mode frequencies. Since both numbers involve imaginary frequencies for spherically symmetric scalar fields in four dimensions, it is natural to wonder if there might be a deeper connection between these two phenomena. Unfortunately, it appears at the moment that the agreement is just a numerical coincidence. The first thing one might check is whether the agreement continues in higher dimensions. Although the critical solutions for black hole formation have not been studied in five or seven dimensions, they have recently been calculated in six dimensions, with the result $`\lambda =1/.424=2.36`$ [24]. We have redone our calculation in six dimensions and do not find agreement. The slope of $`\omega _I`$ as a function of $`r_+`$ turns out to be 2.693. Another difference is that the exponents in black hole critical phenomena are known to be independent of the mass of the scalar field. We have checked that the quasinormal frequencies of large and intermediate black holes do depend on the mass. One might expect that if there is a connection between these two phenomena, it would apply in the limit of small AdS black holes. However, we have seen that the modes of small black holes actually deviate from the linear relation, so the significance of the asymptotic slope is not clear. While it is still possible that some deeper connection exists (perhaps just in four dimensions and for massless fields), it appears unlikely. As an aside, we note that if one repeats the calculation of critical phenomena for spacetimes which are asymptotically AdS, the late time results will be quite different. Since energy cannot be lost to infinity, if one forms a black hole at all, it will eventually grow to absorb all the energy of the initial state.

6. Conclusions

We have computed the scalar quasinormal modes of Schwarzschild-AdS black holes in four, five, and seven dimensions. These modes govern the late time decay of a minimally coupled massless scalar field, such as the dilaton. For large black holes, it is easy to see that these modes must scale with the black hole temperature $`T`$.
By the AdS/CFT correspondence, this decay translates into a timescale for the approach to thermal equilibrium in the CFT, for large temperatures and perturbations dual to the scalar field. The timescale is simply given by the imaginary part of the lowest quasinormal frequency, $`\tau =1/\omega _I`$. From (4.1), for perturbations with homogeneous expectation values ($`l=0`$ modes) these timescales are $`\tau =.0896/T`$ for the three dimensional CFT, $`\tau =.116/T`$ for the four dimensional super Yang-Mills theory, and $`\tau =.183/T`$ for the six dimensional $`(0,2)`$ theory. As we mentioned earlier, these time scales are universal in the sense that all scalar fields with the same angular dependence will decay at this rate. Perturbations associated with other linearized supergravity fields will decay at different rates, given by their quasinormal mode frequencies. Perhaps the most surprising aspect of our analysis concerns the results for intermediate size black holes. For black holes with size of order the AdS radius, we find that the quasinormal frequencies do not continue to scale with temperature, but rather scale approximately linearly with the horizon radius. We do not fully understand the implications of this linear relationship for the dual field theories, but we can make the following comments. If one considers the field theory at constant temperature, and slowly lowers the temperature, then one encounters the Hawking-Page transition [17,18]. At this point the supergravity description changes from the euclidean black hole to a thermal gas in AdS. For these low temperatures, the relaxation time might still scale with the temperature, but it cannot be computed by a classical supergravity calculation, and is not related to quasinormal frequencies. To interpret the quasinormal frequencies of intermediate size black holes, we must consider a microcanonical description. Consider all states in the CFT with energy equal to the supergravity energy. Most of these states will be macroscopically indistinguishable, in the sense that they will all have the same expectation values of the operators dual to the supergravity fields. If the only nonzero expectation value is the stress energy tensor, the states are described on the supergravity side by just the black hole. If you perturb one of these CFT states to one which is macroscopically slightly different, it will decay to a typical state with a timescale set by the lowest quasinormal mode. The results in section 4 show that this decay time is determined by the size of the black hole in the supergravity description. Of course the field theory knows about the black hole size, since its entropy is given by the black hole area. However, the fact that, in this range of energy, the frequency scales linearly with the radius is puzzling. The fact that the quasinormal frequencies do not continue to scale with temperature is also interesting for the following reason. For a certain range of energies, the supergravity entropy $`S(E)`$ is dominated by small (ten or eleven dimensional) black holes.<sup>6</sup> As we discussed in the previous section, these black holes can be quantum mechanically stable, since they are in equilibrium with their Hawking radiation. This means that the effective temperature, defined by $`dS/dE=1/T`$, has the property that it decreases as the energy increases, i.e. the specific heat is negative. By the AdS/CFT correspondence, the same must be true in the dual CFT.
(This is not a problem since it applies to only a finite range of energies.) If the quasinormal modes continued to scale with the temperature, then this negative specific heat would have dynamical effects. Instead, we find that the relaxation time increases monotonically with decreasing energy.

Acknowledgements

It is a pleasure to thank P. Brady, M. Choptuik, S. Hawking, and B. Schmidt for discussions. We also wish to thank the Institute for Theoretical Physics, Santa Barbara, where part of this work was done. This work was supported in part by NSF Grants PHY94-07194 and PHY95-07065.

Appendix: Evaluation of quasinormal modes

As discussed in section 3, in order to find the quasinormal modes, we need to find the zeros of $`\sum_{n=0}^{\infty}a_n(\omega )(-x_+)^n`$ in the complex $`\omega `$ plane. We compute the quasinormal modes using Mathematica, in the following way. We search for the zeros, $`\omega _N`$, of $`\psi _N(\omega )\equiv \sum_{n=0}^{N}a_n(\omega )(-x_+)^n`$ by looking for the minima of $`|\psi _N|^2`$, and checking that the value at the minimum is zero, $`|\psi _N(\omega _N)|^2=0`$. (In practice, there are numerical errors in the computation, so the value at the minimum is instead $`10^{-14}`$, or smaller.) In order to find the correct minimum, we need to specify an initial guess for $`\omega _N`$. (This sometimes poses difficulties in searching for new modes, and apart from using analytical or intuitive understanding as our guide, we are forced to resort to trial and error.) How close to the actual minimum one is required to start depends on the parameters; for the $`n=1,l=0`$ mode of a reasonably-sized black hole, this seldom poses any limitations. To obtain an accurate estimate of a quasinormal frequency $`\omega `$, we typically need to compute partial sums up to order $`N=100`$, depending on the dimension $`d`$, the black hole size $`r_+`$, and the mode (i.e. $`n`$ and $`l`$). Roughly speaking, at a fixed partial sum $`N`$, the relative error in the computed quasinormal frequency grows as $`r_+`$ decreases, and as $`l`$, $`n`$, or $`d`$ increases.

Fig. 9: Convergence plot for a five dimensional black hole with $`r_+=0.6`$.

The task of determining the mode to the necessary accuracy is fortunately simplified by the fact that the “convergence curve” has a surprisingly simple form. In particular, once the partial sums have converged to sufficient accuracy, the variation of $`\omega _N`$ is given by an exponentially damped sine as a function of $`N`$, i.e. $`\omega _N\approx \omega +ce^{-N/a}\mathrm{sin}(bN+d)`$, where $`a`$ and $`b`$ depend on the physical parameters such as $`r_+`$, while $`c`$ and $`d`$ just depend on which partial sum we start with. In fact, we can use a fitting algorithm in Mathematica to fit these convergence curves. An example is given in fig. 9, where the dots represent $`Re(\omega _N)`$ for a particular set of parameters ($`d=5`$, $`r_+=0.6`$, $`n=1`$, and $`l=0`$), and the solid curve is the corresponding fit. This simplification allows us to determine the mode with a much higher accuracy than we would be led to expect from the spread of $`\omega _N`$. It also allows us to confirm that the numerical errors in the computation of each $`\omega _N`$ are negligible, since otherwise one would expect a more noisy distribution.
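The same damped-sine extrapolation is straightforward to set up outside of Mathematica. In the sketch below, the arrays of partial-sum estimates and the initial guess are assumed to come from the series solution and from a plot like fig. 9.

```python
import numpy as np
from scipy.optimize import curve_fit

def convergence_model(N, omega, c, a, b, d):
    """Damped-sine approach of the partial-sum estimates omega_N to the
    asymptotic quasinormal frequency omega."""
    return omega + c * np.exp(-N / a) * np.sin(b * N + d)

def extrapolate_qnm(Ns, wN, p0):
    """Fit the convergence curve; Ns are partial-sum orders, wN the
    corresponding frequency estimates, p0 a guess (omega, c, a, b, d)
    read off from the plot.  Returns the fitted asymptote omega, usually
    a better estimate than the last partial sum alone."""
    popt, _ = curve_fit(convergence_model, Ns, wN, p0=p0, maxfev=20000)
    return popt[0]
```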
Often the quickest way to obtain the quasinormal mode is to simply look for the minimum of $`|\psi _N|^2`$ near various initial guesses for the frequency, but when that method fails, we can also adopt a more systematic approach: eliminating the possibility of the occurrence of quasinormal modes in a given frequency range. This is possible because $`Re(\psi _N(\omega ))`$ and $`Im(\psi _N(\omega ))`$ are conjugate harmonic functions of $`\omega `$, which must satisfy the maximum principle. Thus, if we find that $`\psi _N(\omega )`$ is bounded inside a given region of the complex $`\omega `$ plane, and either $`Re(\psi _N(\omega ))`$ or $`Im(\psi _N(\omega ))`$ remains nonzero everywhere on the boundary, then $`\psi _N`$ is necessarily nonzero everywhere inside that region. This ensures that there can be no quasinormal modes with these frequencies. We can thus systematically search for the lowest modes by eliminating the low frequency regions until we find the modes. Once we find one mode for a given set of parameters, continuity of the solution allows us to trace the mode through the parameter space; that is, we can find $`\omega `$ for nearby values of $`r_+`$ and $`l`$. Also, once we know the $`n=1`$ and $`n=2`$ modes for fixed $`r_+`$ and $`l`$, the equal spacing between the modes allows us to find the higher $`n`$ modes (provided the numerical errors stay small). Thus, the procedure for finding $`\omega (r_+)`$ is the following: We first consider a large black hole, where the convergence is good at a low partial sum, e.g. $`N=40`$. For such a low cut-off $`N`$ on the partial sum, we may easily compute $`\psi _N(\omega ,r_+,l)`$ in full generality. We find the desired mode $`\omega _N(r_+,n,l)`$ using the method described above, and we can check the convergence by comparing this result with that obtained for the lower partial sums. We can now follow the mode to smaller values of $`r_+`$, until the convergence becomes too slow and we need to compute higher partial sums. It becomes more practical at this point to fix all the parameters and consider $`\psi _N`$ as a function of $`\omega `$ only. This has the numerical advantage of enabling us to compute the partial sums to much higher $`N`$; the drawback, of course, is that now we need to recompute the whole series for each $`r_+`$.

Note added: After this work was submitted, we have extended our computations of the higher $`l`$ modes for large black holes, up to $`l=25`$. From fits of the large $`l`$ behavior of $`\omega _I`$, we find strong evidence that the frequencies indeed stay bounded away from zero: in particular, a fit of the form $`\omega _I(l)=1.12+\frac{15.4}{l+11.8}`$ has $`\chi ^2\approx 2\times 10^{-6}`$, as opposed to $`\chi ^2\approx 10^{-3}`$ for a fit with $`\omega _I(l\rightarrow \infty )\rightarrow 0`$.
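The exclusion step based on the maximum principle can also be mechanised. The following heuristic sketch samples $`\psi _N`$ on the boundary of a rectangle in the $`\omega `$ plane (the callable `psi_N` is assumed to be built from the series solution of section 3) and applies the fixed-sign criterion; since $`\psi _N`$ is a polynomial in $`\omega `$, it is automatically bounded and analytic inside.

```python
import numpy as np

def excludes_qnm(psi_N, re_range, im_range, samples=400):
    """Return True if no quasinormal mode can lie in the given rectangle.

    If Re(psi_N) or Im(psi_N) keeps a fixed sign on the whole boundary,
    the corresponding harmonic function cannot vanish inside, hence
    psi_N has no zero there.  Finite sampling makes this a heuristic."""
    s = np.linspace(0.0, 1.0, samples)
    r0, r1 = re_range
    i0, i1 = im_range
    boundary = np.concatenate([
        r0 + s * (r1 - r0) + 1j * i0,     # bottom edge
        r0 + s * (r1 - r0) + 1j * i1,     # top edge
        r0 + 1j * (i0 + s * (i1 - i0)),   # left edge
        r1 + 1j * (i0 + s * (i1 - i0)),   # right edge
    ])
    vals = np.array([psi_N(w) for w in boundary])
    re_fixed = np.all(vals.real > 0) or np.all(vals.real < 0)
    im_fixed = np.all(vals.imag > 0) or np.all(vals.imag < 0)
    return re_fixed or im_fixed
```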
References

[1] The earliest papers include C. Vishveshwara, Phys. Rev. D1 (1970) 2870; W. Press, Ap. J. Lett. 170 (1971) L105. For an early review, see S. Detweiler, in Sources of Gravitational Radiation, L. Smarr, ed., Cambridge U. Press (1979) p. 221.
[2] For a recent review, see K. Kokkotas and B. Schmidt, “Quasi-normal modes of stars and black holes”, to appear in Living Reviews in Relativity: www.livingreviews.org (1999).
[3] P. Brady, C. Chambers, W. Krivan and P. Laguna, Phys. Rev. D55 (1997) 7538, gr-qc/9611056; P. Brady, C. Chambers, W. Laarakkers, and E. Poisson, Phys. Rev. D60 (1999) 064003, gr-qc/9902010.
[4] A. Barreto and M. Zworski, Math. Research Lett. 4 (1997) 103.
[5] J. Maldacena, Adv. Theor. Math. Phys. 2 (1998) 231, hep-th/9711200.
[6] E. Witten, Adv. Theor. Math. Phys. 2 (1998) 253, hep-th/9802150.
[7] S. Gubser, I. Klebanov, and A. Polyakov, Phys. Lett. B428 (1998) 105, hep-th/9802109.
[8] For a comprehensive review, see O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri, and Y. Oz, “Large N Field Theories, String Theory and Gravity”, hep-th/9905111.
[9] S. Kalyana Rama and B. Sathiapalan, “On the role of chaos in the AdS/CFT connection”, hep-th/9905219.
[10] J. Chan and R. Mann, Phys. Rev. D55 (1997) 7546, gr-qc/9612026; Phys. Rev. D59 (1999) 064025.
[11] R. Price, Phys. Rev. D5 (1972) 2419, 2439.
[12] M. Choptuik, Phys. Rev. Lett. 70 (1993) 9.
[13] C. Csaki, H. Ooguri, Y. Oz, and J. Terning, JHEP 01 (1999) 017, hep-th/9806021; R. de Mello Koch, A. Jevicki, M. Mihailescu, and J.P. Nunes, Phys. Rev. D58 (1998) 105009, hep-th/9806125; M. Zyskin, Phys. Lett. B439 (1998) 373, hep-th/9806128.
[14] E.S.C. Ching, P.T. Leung, W.M. Suen, and K. Young, Phys. Rev. D52 (1995) 2118, gr-qc/9507035.
[15] C. Gundlach, R. Price, and J. Pullin, Phys. Rev. D49 (1994) 883.
[16] R. Gregory and R. Laflamme, Phys. Rev. Lett. 70 (1993) 2837, hep-th/9301052; Nucl. Phys. B428 (1994) 399, hep-th/9404071.
[17] S. Hawking and D. Page, Commun. Math. Phys. 87 (1983) 577.
[18] E. Witten, Adv. Theor. Math. Phys. 2 (1998) 505, hep-th/9803131.
[19] T. Banks, M. Douglas, G. Horowitz, and E. Martinec, “AdS Dynamics from Conformal Field Theory”, hep-th/9808016.
[20] G. Horowitz, talk at Strings ’99, Potsdam, Germany, to appear in the proceedings.
[21] C. Burgess and C. Lutken, Phys. Lett. 153B (1985) 137.
[22] S. R. Das, G. Gibbons, and S. D. Mathur, Phys. Rev. Lett. 78 (1997) 417, hep-th/9609052.
[23] C. Gundlach, Adv. Theor. Math. Phys. 2 (1998) 1, gr-qc/9712084.
[24] D. Garfinkle, C. Cutler, and G. C. Duncan, gr-qc/9908044.
# D-branes on Fourfolds with Discrete Torsion

rom2f–99/30, mri-phy/P990927, hep-th/9909107

## 1 Introduction

D-branes provide a geometric means of studying orbifold singularities and their desingularisations, as the moduli space of D-branes reproduces the space in which the D-branes are embedded. Within the scope of string theory, a generalisation of orbifold singularities, when possible, is to turn on a discrete torsion. String theory on an orbifold of the $`n`$-dimensional complex space $`\mathbb{C}^n`$, e.g. $`\mathbb{C}^n/G`$, where $`G`$ is a finite group, admits discrete torsion if the second cohomology group of $`G`$, viz. $`H^2(G,U(1))`$, is non-trivial. At this point, let us recall that here we are considering D-branes on non-compact spaces, which may serve as local models of a compact target space of string theory near a singularity. In the conformal field theoretic description of the string world-sheet, turning on a discrete torsion is tantamount to assigning a non-vanishing weight or phase to the contribution to the string partition function arising in the twisted sector. In studying a closed string theory on an orbifold in the absence of discrete torsion, one can evaluate the contributions to the partition function arising in the untwisted and the twisted sectors of the theory by implementing the quotient in the path integral of the theory. The resulting spectrum, which is the sum of all these contributions, turns out to coincide with that of the theory on the corresponding smooth manifold. The presence of discrete torsion alters the contribution from the twisted sector of the theory. The resulting theory is still consistent as a string theory, but no longer a string theory on a blown up manifold. Indeed, the resulting theory might be a consistent string theory on a singular target space. Moreover, the modes in the twisted sector correspond to (partial) deformation of the complex structure of the orbifold, not to the blowing up of the Kähler class. Now that the target space of string theory can be simulated as the moduli space of D-branes, a natural question is whether one can incorporate discrete torsion in this picture. This has been answered in the affirmative in some examples. In this article we will find one more example of this kind. In terms of the supersymmetric gauge theories used to describe the theory on the world-volume of D-branes, discrete torsion is incorporated in the action of the quotienting group on the position as well as the gauge degrees of freedom carried by the brane. In the presence of discrete torsion one is led to choose a *projective representation* of the group, to be contrasted with the *linear representation* used when discrete torsion is absent. The resolution of the orbifold in the absence of discrete torsion is effected by adding a Fayet–Iliopoulos term in the gauge theory, thereby perturbing the D-term of the gauge theory. In the presence of discrete torsion, the moduli for the deformation of the singularity are purveyed by parameters appearing in the perturbation of the F-term of the theory. In either case, the choice of the perturbations of the F- and D-terms is guided by the twisted sector of closed string theory in the presence and absence of discrete torsion, respectively. D-branes on the three-dimensional orbifold $`\mathbb{C}^3/(\mathbb{Z}_2\times \mathbb{Z}_2)`$ have been studied in the absence of discrete torsion, and in its presence as well. In the absence of discrete torsion, the moduli space of a D-brane on $`\mathbb{C}^3/(\mathbb{Z}_2\times \mathbb{Z}_2)`$ is a blown down conifold.
Adding Fayet–Iliopoulos terms in the gauge theory corresponds to a partial or complete resolution of the singularity, depending on the non-vanishing combinations of the Fayet–Iliopoulos parameters. However, the scenario is rather different in the presence of discrete torsion. In this case, the twisted sector of string theory provides modes not to resolve the singularity, but to deform it. Yet, the moduli space turns out to contain a stable double-point singularity or node, while the codimension-two singularities are deformed away by these modes. This signals a deficiency of certain modes in the twisted sector of the closed string theory, corroborating earlier findings. In the present article we will concern ourselves with analysing a D1-brane or D-string on $`\mathbb{C}^4/(\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2)`$ in the presence of discrete torsion. D1-branes on this space without discrete torsion have been studied earlier. In the absence of discrete torsion, the analysis of the moduli space of D1-branes on $`\mathbb{C}^4/(\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2)`$ parallels that of D3-branes on $`\mathbb{C}^3/(\mathbb{Z}_2\times \mathbb{Z}_2)`$. The theory of D1-branes on the singular orbifold is an $`𝖭=(0,2)`$ super Yang-Mills theory in two dimensions. In the absence of discrete torsion the singularity in the moduli space is resolved by introducing Fayet–Iliopoulos terms in the action of the super Yang-Mills theory. The moduli space with perturbed D-flatness conditions, implementing the resolution of the orbifold, can be studied using the paraphernalia of toric geometry, where the monomials of the toric description are provided by the unperturbed F-flatness condition of the theory. In the presence of discrete torsion, however, it is the F-term of the $`(0,2)`$ theory that admits a perturbation, involving six parameters. The D-flatness conditions, however, remain unaltered with respect to the theory on the singular orbifold. This corresponds to a deformation of the singularity, rather than its resolution. The perturbation of the F-flatness conditions prevents the employment of toric geometry in the description of the deformed variety. One is led to consider the gauge-invariant quantities to furnish a description of the deformed variety. It is found that, after desingularisation by F-term perturbations, the variety describing the moduli space of the gauge theory retains a stable singularity of codimension three (a line singularity), in the same manner as its three-dimensional counterpart retains a conifold singularity after deformations. The plan of the article is as follows. We recount some features of the projective representations of $`\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2`$ in §2. In §3 we study the twisted sector of closed string theory on $`\mathbb{C}^4/(\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2)`$ in the presence of discrete torsion. This analysis enables us to determine the number of perturbation parameters allowed in the gauge theory. The low energy effective gauge theory of the D-brane on the orbifold is discussed in §4. In §5 we consider the vacua of the resulting gauge theory and find the corresponding moduli spaces both with and without deformations, before concluding in §6.

Notations and conventions: Unless explicitly declared otherwise, we follow the following conventions in notation and terminology in the sequel.

* The terms *resolution* (or *blow up*) and *deformation* of singularities are used in the usual senses. The term *desingularisation* is used generally to mean either of these two, thereby encompassing partial removal of singularities.
* For subscripts, upper-case letters from the middle of the alphabet, e.g. $`I`$, $`J`$, $`K`$, etc., assume values in $`\{1,2,3,4\}`$, while the corresponding lower-case letters, namely $`i`$, $`j`$, $`k`$ etc., assume values in $`\{1,2,3\}`$.
* The Pauli matrices $`\sigma _I`$ are chosen to be the following: $$\sigma _1=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\quad \sigma _2=\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right)\quad \sigma _3=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)\quad \sigma _4=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$$
* No sum is intended on repeated indices.

## 2 Projective representation & discrete torsion

In considering desingularisations of orbifolds of the type $`\mathbb{C}^4/G`$, discrete torsion corresponds to non-trivial elements of the second cohomology group of the finite group $`G`$.<sup>1</sup><sup>1</sup>1See, however, for more rigorous considerations on discrete torsion. It is incorporated in the theory through the action of the group $`G`$ on the Chan-Paton degrees of freedom, using the adjoint action of a projective representation of $`G`$. In this section we collect some facts about the projective representations of $`G`$ relevant for the present article. Given a finite group $`G`$, a mapping $`\rho :G\rightarrow GL(n,\mathbb{C})`$ is called a *projective $`\alpha `$-representation* of $`G`$ (over the field $`\mathbb{C}`$), provided there exists a mapping $`\alpha :G\times G\rightarrow U(1)`$, such that 1. $`\rho (g)\rho (g^{\prime})=\alpha (g,g^{\prime})\rho (gg^{\prime})`$, 2. $`\rho (\mathrm{𝟏})=𝕀_n`$, for all elements $`g,g^{\prime}\in G`$, where $`\mathrm{𝟏}`$ denotes the identity element of $`G`$ and $`𝕀_n`$ denotes the $`n\times n`$ identity matrix in $`GL(n,\mathbb{C})`$. Let us note that one can define a projective representation over more general fields. Here we are referring to the projective matrix representation over the field of complex numbers as the projective representation. It can be shown that $`\alpha `$ is a $`U(1)`$-valued two-cocycle of the second cohomology group $`H^2(G,U(1))`$ of the finite group $`G`$. For our purposes $`\alpha `$ will be a complex number with unit modulus.<sup>2</sup><sup>2</sup>2Usually the range of the map $`\alpha `$ is taken to be the multiplicative group $`\mathbb{C}^{*}`$ of the field $`\mathbb{C}`$ of complex numbers. Here we will only consider maps $`\alpha `$ with unit modulus. So, we have taken the range to be $`U(1)`$. Generally, the second cohomology group $`H^2(G,U(1))`$ of a direct-product group of the form $`G=\prod_{1}^{m}\mathbb{Z}_n`$ is isomorphic to $`\prod_{1}^{m(m-1)/2}\mathbb{Z}_n`$. Let $`𝔤_i`$ denote the generator of the $`i`$-th $`\mathbb{Z}_n`$ factor appearing in $`G`$, i.e. $`G=\prod_{i=1}^{m}\langle 𝔤_i\rangle `$. Let $`g_i`$, $`i=1,2,\ldots,m`$, denote the generators of $`G`$. Let us also define $`\mu (i)=\prod_{a=1}^{n-1}\alpha (g_i^a,g_i)\quad \text{and}\quad \beta (i,j)=\alpha (g_i,g_j)\alpha (g_j,g_i)^{-1},\quad i,j=1,2,\ldots,m.`$ (2.1) We may set $`\beta `$ to be an $`n`$-th root of unity and $`\mu =1`$, by replacing $`\alpha `$, if necessary, by a cohomologous cocycle. The corresponding projective $`\alpha `$-representations of $`G`$ are known for special values of $`\beta `$. For our purposes it suffices to quote the results for the special case with $`m=3`$ and $`n=2`$. Thus, we will consider the projective representations of the group $`G=\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2`$.
Let us assume that $`G`$ has the following action on the four coordinates $`Z_1`$, $`Z_2`$, $`Z_3`$, $`Z_4`$ of $`\mathbb{C}^4`$:

$$\begin{array}{cc}\hfill g_1:(Z_1,Z_2,Z_3,Z_4)& \mapsto (-Z_1,-Z_2,Z_3,Z_4),\hfill \\ \hfill g_2:(Z_1,Z_2,Z_3,Z_4)& \mapsto (-Z_1,Z_2,-Z_3,Z_4),\hfill \\ \hfill g_3:(Z_1,Z_2,Z_3,Z_4)& \mapsto (-Z_1,Z_2,Z_3,-Z_4).\hfill \end{array}$$ (2.2)

Let $`𝔤_1`$, $`𝔤_2`$ and $`𝔤_3`$ denote the three generators of the three $`\mathbb{Z}_2`$ factors. A generic element of $`G`$ can be written as $`g=𝔤_1^a𝔤_2^b𝔤_3^c`$. We will denote this element by the symbol $`(abc)`$. Choosing the action of each of the $`\mathbb{Z}_2`$ factors to be a change of sign of $`Z_1`$ and one more out of $`Z_2`$, $`Z_3`$ and $`Z_4`$ in turn, we can write the $`g_i`$ of (2.2) as

$$\begin{array}{cc}\hfill g_1=𝔤_1& =(100),\hfill \\ \hfill g_2=𝔤_2& =(010),\hfill \\ \hfill g_3=𝔤_3& =(001).\hfill \end{array}$$ (2.3)

The second cohomology group $`H^2(G,U(1))`$ of $`G`$ is isomorphic to $`\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2`$. The three generators of the latter may be taken to be

$$\alpha _1((abc),(a^{\prime}b^{\prime}c^{\prime}))=i^{(ab^{\prime}-ba^{\prime})},\quad \alpha _2((abc),(a^{\prime}b^{\prime}c^{\prime}))=i^{(bc^{\prime}-cb^{\prime})},\quad \alpha _3((abc),(a^{\prime}b^{\prime}c^{\prime}))=i^{(ca^{\prime}-ac^{\prime})}.$$ (2.4)

Let us note that $`\alpha _i((abc),(abc))=1`$ for $`i=1`$, 2, 3. Hence, by (2.1), we have $`\mu (i)=1`$, for $`i=1,2,3`$. In what follows we will consider the element $`\alpha =\alpha _1\alpha _2\alpha _3`$. Thus, we have, for all $`g=(abc)`$ and $`g^{\prime}=(a^{\prime}b^{\prime}c^{\prime})`$ in $`G`$,

$$\alpha (g,g^{\prime})=i^{(ab^{\prime}-ba^{\prime})+(bc^{\prime}-cb^{\prime})+(ca^{\prime}-ac^{\prime})},$$ (2.5)

so that $`\alpha (g,g)=1`$ while $`\alpha (g_i,g_j)=\pm i`$ for $`i\neq j`$, and consequently $`\mu (i)=1`$ for $`i=1,2,3`$, and $`\beta (i,j)=-1`$ for $`i\neq j`$. There are two irreducible $`\alpha `$-representations of $`G`$, which are not linearly equivalent. These are given by

$$\rho (g_i)=\pm \sigma _i.$$ (2.6)

The discrete torsion appearing in the path integral is determined by the choice of the two-cocycle. For example, for the above choice of the two-cocycle, namely $`\alpha =\alpha _1\alpha _2\alpha _3`$, the discrete torsion is given by

$$\epsilon ((abc),(a^{\prime}b^{\prime}c^{\prime}))=\left(\alpha ((abc),(a^{\prime}b^{\prime}c^{\prime}))\right)^2=(-1)^{ab^{\prime}-ba^{\prime}+bc^{\prime}-cb^{\prime}+ca^{\prime}-ac^{\prime}},\quad (abc)\neq (a^{\prime}b^{\prime}c^{\prime}),$$ (2.7)

which is the form used in earlier studies. Each given value of the discrete torsion corresponds to a variety with a different topology and leads to different types of gauge theories. These different varieties are related by mirror symmetry. Let us point out that the discrete torsion $`\alpha `$ used in (2.5) is by no means more general than any other two-cocycle: by a change of basis it can be interpreted as a phase between two of the $`\mathbb{Z}_2`$ factors in $`G`$. This fact will be reflected in the moduli space of the brane in that the maximally deformed moduli space retains a singular line, unlike the single point that survives in the threefold case.
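These statements are mechanical enough to verify numerically. The sketch below builds one explicit $`\alpha `$-representation from ordered products of Pauli matrices (any fixed ordering of the product yields a cocycle cohomologous to the $`\alpha `$ above) and checks both the projective multiplication law and the discrete torsion phase (2.7).

```python
import numpy as np
from itertools import product

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(g):
    # one choice of alpha-representation: rho((abc)) = s1^a s2^b s3^c
    a, b, c = g
    return (np.linalg.matrix_power(s1, a) @
            np.linalg.matrix_power(s2, b) @
            np.linalg.matrix_power(s3, c))

def eps(g, h):
    # discrete torsion phase (2.7)
    (a, b, c), (A, B, C) = g, h
    return (-1) ** (a*B - b*A + b*C - c*B + c*A - a*C)

G = list(product((0, 1), repeat=3))
for g in G:
    for h in G:
        # the antisymmetric part of the cocycle is the discrete torsion:
        # rho(g) rho(h) = eps(g,h) rho(h) rho(g)
        assert np.allclose(rho(g) @ rho(h), eps(g, h) * (rho(h) @ rho(g)))
        gh = tuple((x + y) % 2 for x, y in zip(g, h))
        # alpha(g,h) is the phase relating rho(g)rho(h) to rho(gh)
        alpha = (rho(g) @ rho(h) @ np.linalg.inv(rho(gh))).trace() / 2
        assert np.isclose(abs(alpha), 1.0)
print("projective representation and discrete torsion verified")
```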
## 3 String(ent) restrictions on deformations

It has been known that string theory can be defined on certain kinds of singular spaces, especially on orbifolds. In sigma-model compactification on orbifolds, the spectrum of string theory receives contributions from the twisted sectors, thereby rendering string theory well-defined on such spaces. In considering D-branes on orbifolds, the orbifold is realised as the moduli space of the gauge theory on the world-volume of the brane. Resolution or deformation of the quotient singularity is effected by perturbing the gauge theory. However, compatibility of the desingularised D-brane moduli space with string theory imposes stringent restrictions on such extra terms. In this section, we discuss these restrictions.

The four-dimensional orbifold $`\mathbb{C}^4/(\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2)`$ in the blown down limit may be defined as an affine variety embedded in $`\mathbb{C}^5`$ by the polynomial equation $`\mathcal{F}(x,y,z,w,t)=0`$, where the polynomial is $`\mathcal{F}(x,y,z,w,t)=xyzw-t^2`$, and $`x`$, $`y`$, $`z`$, $`w`$, $`t`$ are the coordinates of $`\mathbb{C}^5`$. The $`\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2`$ symmetry can be made conspicuous by expressing the coordinates of $`\mathbb{C}^5`$ in terms of the affine coordinates of the covering space of the variety, namely, $`\mathbb{C}^4`$. Explicitly, $`x=u_1^2`$, $`y=u_2^2`$, $`z=u_3^2`$, $`w=u_4^2`$, $`t=u_1u_2u_3u_4`$, with $`\{u_1,u_2,u_3,u_4\}\in \mathbb{C}^4`$. The polynomial $`\mathcal{F}`$ remains invariant under the three independent transformations which change the signs of, say, $`u_1`$ together with one out of $`\{u_I|I=2,3,4\}`$ in turn. Let us briefly recount the algebraic deformations of the equation $`\mathcal{F}=0`$. The possible deformations are given by the ring of polynomials $`Q=\mathbb{C}[x,y,z,w,t]/\langle \partial \mathcal{F}\rangle `$, where $`\partial \mathcal{F}`$ stands for the set of the partial derivatives of $`\mathcal{F}`$ with respect to each of the arguments, i.e.

$$\partial \mathcal{F}=\{\partial \mathcal{F}/\partial x,\partial \mathcal{F}/\partial y,\partial \mathcal{F}/\partial z,\partial \mathcal{F}/\partial w,\partial \mathcal{F}/\partial t\},$$

and $`\langle \cdots \rangle `$ denotes the ideal generated by $`\cdots `$. Thus, we have

$$\begin{array}{cc}\hfill \langle \partial \mathcal{F}\rangle & =\langle xyz,yzw,xyw,xzw,t\rangle ,\hfill \\ \hfill Q& =\langle 1,x^a,y^b,z^c,w^d,xy,xz,xw,yz,yw,zw\rangle ,\hfill \end{array}$$ (3.1)

where $`a,b,c,d`$ are arbitrary integers. Among the generators of $`Q`$, terms such as $`xy`$ deform the six fixed planes, such as the one given by $`z=w=0`$; terms like $`x`$ deform the four fixed lines, which correspond to codimension-three singularities; and finally 1 deforms the double-point singularity of codimension four at the origin, $`t^2+x^2+y^2+z^2+w^2=0`$. However, as mentioned before, a physical theory such as the one we are considering is not necessarily potent enough to contain all the deformations that are mathematically admissible. Within the scope of our discussion of D1-branes, the four-dimensional orbifold is realised as the moduli space of an $`𝖭=(0,2)`$ super-Yang-Mills theory, and its deformations are subject to consistency requirements imposed by string theory. Considering branes in the closed or type-II string theory, these consistency conditions are determined by the *twisted sector* of the theory on the orbifold. Of the generators of the ring $`Q`$, only those deformations that correspond to marginal operators in the closed string theory on the orbifold are physically allowed. The marginal operators are related by supersymmetry to the Ramond-Ramond (RR) ground states of the string theory. The latter, in turn, are determined by the cohomology of the smooth target space that limits to the orbifold under consideration. To be more explicit, let $`X`$ be a manifold, and let $`X^{\prime}=X/G`$ be an orbifold, where $`G`$ is a finite group of order $`|G|`$. Let $`\tilde{X}`$ be a desingularisation of the orbifold $`X^{\prime}`$.

$$\begin{array}{ccc}& & X\\ & & \downarrow G\\ \tilde{X}& \underset{\text{desingularisation}}{\longrightarrow }& X^{\prime}=X/G\end{array}$$ (3.2)

Considering string theory on the orbifold $`X^{\prime}`$, the above-mentioned computation of RR ground states yields the cohomology $`H^{*}(\tilde{X})`$ of $`\tilde{X}`$. The desingularisation $`\tilde{X}\rightarrow X^{\prime}`$ can be effected in two different ways. One way is to *blow up* the singularity at the origin. This corresponds to turning on a Fayet–Iliopoulos term in the $`𝖭=(0,2)`$ gauge theory, thereby perturbing the D-term of the gauge theory. The other way is to *deform* the singularity, as discussed above. This corresponds to perturbing the F-flatness conditions in the gauge theory, and is the relevant option for us in considering orbifolds with discrete torsion.
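As an aside, the algebraic statements above are easy to check with a computer algebra system. The sympy sketch below verifies that the orbifold parametrisation solves $`\mathcal{F}=0`$, and that the monomials 1, $`x`$ and $`xy`$ represent nontrivial classes of the deformation ring $`Q`$ while $`xyz`$ does not.

```python
import sympy as sp

x, y, z, w, t = sp.symbols('x y z w t')
u1, u2, u3, u4 = sp.symbols('u1 u2 u3 u4')

F = x*y*z*w - t**2

# the covering-space parametrisation solves F = 0 identically
assert sp.expand(F.subs({x: u1**2, y: u2**2, z: u3**2,
                         w: u4**2, t: u1*u2*u3*u4})) == 0

# Jacobian ideal <dF> defining the deformation ring Q = C[x,y,z,w,t]/<dF>
dF = [sp.diff(F, v) for v in (x, y, z, w, t)]
Gb = sp.groebner(dF, x, y, z, w, t, order='grevlex')

for m in [sp.Integer(1), x, x*y, x*y*z]:
    _, rem = Gb.reduce(m)
    print(m, '->', rem)   # nonzero remainder <=> genuine deformation of F = 0
# 1, x and x*y survive; x*y*z reduces to zero, consistent with (3.1).
```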
At any rate, in order to count the physically admissible perturbation modes in the gauge theory, we need to consider closed strings on the orbifold and evaluate the cohomology. Let us compute the cohomology of the space $`\tilde{X}`$ for this case. The general strategy is as follows. Given an element $`g`$ of the group $`G`$, we first find out the subset of $`X`$ that is fixed under $`g`$. Let us denote this subset by $`X_g`$. The subset $`X_g`$ is endowed with $`(p,q)`$-forms, denoted $`\omega _g^{p,q}`$. Let $`\mathrm{\Omega }_g^{p,q}`$ denote the set of $`(p,q)`$-forms on $`X_g`$ and let $`\tilde{\mathrm{\Omega }}^{p,q}`$ denote the set of all $`(p,q)`$-forms on $`\tilde{X}`$. The $`(p,q)`$-forms $`\omega _g^{p,q}\in \mathrm{\Omega }_g^{p,q}`$ which are invariant under the group $`G`$, that is, which satisfy

$$\epsilon (h,g)R(h)\omega _g^{p,q}=\omega _g^{p,q},\quad \omega _g^{p,q}\in \mathrm{\Omega }_g^{p,q},\quad h\in G,$$ (3.3)

contribute to $`\tilde{\mathrm{\Omega }}^{p+s,q+s}`$, where $`s`$ is the age of $`g\in G`$, defined by $`s=\sum_{I=1}^{4}\vartheta _I`$ if $`g:Z_I\mapsto e^{2\pi i\vartheta _I}Z_I`$, where $`Z_I`$, $`I=1,2,3,4`$, are the coordinates of $`X`$. Here $`R(h)`$ denotes some representation of the element $`h\in G`$ and $`\epsilon `$ is the discrete torsion defined in (2.7). The cohomology in the absence of discrete torsion can be obtained by setting $`\epsilon =1`$. For the case at hand, the different elements of the group $`G=\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2`$ fix three kinds of subsets in $`X=\mathbb{C}^4`$, contributing $`h^{pq}`$ $`(p,q)`$-forms to $`\tilde{\mathrm{\Omega }}^{p,q}`$. First, the identity of $`G`$ fixes $`\mathbb{C}^4`$ itself, that is, $`X_{(000)}=\mathbb{C}^4`$. We shall refer to the corresponding contribution to the cohomology as the contribution from the untwisted sector. The $`G`$-invariant forms are 1, $`dZ_I\wedge d\overline{Z}_I`$ and $`dZ_1\wedge dZ_2\wedge dZ_3\wedge dZ_4`$, together with the forms obtained from the last by replacing some of the $`dZ`$'s with their complex conjugates. The contribution to the cohomology $`H^{*}(\tilde{X})`$ is summarised in the following Hodge diamond: $`\begin{array}{ccccccccc}& & & & 1& & & & \\ & & & 0& & 0& & & \\ & & 0& & 4& & 0& & \\ & 0& & 0& & 0& & 0& \\ 1& & 4& & 12& & 4& & 1\\ & 0& & 0& & 0& & 0& \\ & & 0& & 4& & 0& & \\ & & & 0& & 0& & & \\ & & & & 1& & & & \end{array}`$ (3.4) These elements of $`H^{*}(\tilde{X})`$ need to be supplemented with the contribution from the twisted sector, which corresponds to the other two kinds of fixed sets. These are the contributions of the $`G`$-invariant forms on the subsets of $`X`$ fixed under the non-trivial elements of $`G`$. We will refer to these as the contribution from the twisted sector. The element $`(111)\in G`$ fixes a point, $`X_{(111)}=\{0\}`$, the origin of $`\mathbb{C}^4`$, while each of the other six elements of $`G`$ leaves fixed a set $`\mathbb{C}^2\subset X`$. The contribution from the untwisted sector is the same both in the presence and in the absence of discrete torsion. Contributions from the twisted sectors are different in the two cases. Let us consider both cases in turn.

* Without discrete torsion: In the absence of discrete torsion the condition of $`G`$-invariance of the $`(p,q)`$-forms is given by (3.3) with $`\epsilon =1`$. Only the $`(0,0)`$-form 1 is defined on $`X_{(111)}`$, which is a point. This is obviously $`G`$-invariant. The age of $`(111)\in G`$ is $`s=2`$.
Thus, this form contributes to $`\tilde{\mathrm{\Omega }}^{2,2}`$, with $`h^{22}=1`$. Each of the other six elements of $`G`$ has age $`s=1`$. Each fixes a $`\mathbb{C}^2\subset X`$. For example, $`(100)`$ fixes the $`\mathbb{C}^2`$ coordinatised by $`Z_3`$ and $`Z_4`$. The $`G`$-invariant forms on this $`\mathbb{C}^2`$ are 1, $`dZ_i\wedge d\overline{Z}_i`$, $`i=3,4`$, and $`dZ_3\wedge dZ_4\wedge d\overline{Z}_3\wedge d\overline{Z}_4`$. Taking into account the shift by the age of this element, the contributions to the Hodge numbers are: $`h^{11}=1`$, $`h^{22}=2`$, $`h^{33}=1`$. Similar considerations for each of the other five elements lead to similar contributions to the cohomology. The total contribution from the twisted sector to $`H^{*}(\tilde{X})`$ is summarised in the following Hodge diamond: $`\begin{array}{ccccccccc}& & & & 0& & & & \\ & & & 0& & 0& & & \\ & & 0& & 6& & 0& & \\ & 0& & 0& & 0& & 0& \\ 0& & 0& & 13& & 0& & 0\\ & 0& & 0& & 0& & 0& \\ & & 0& & 6& & 0& & \\ & & & 0& & 0& & & \\ & & & & 0& & & & \end{array}`$ (3.5) Let us point out that since the contribution from the twisted sector is $`h^{11}=6`$, the number of perturbation parameters allowed in the gauge theory in the absence of discrete torsion is 6. This is in keeping with the fact that there are six deformations of the D-flatness condition. Note also that there is no symmetry guaranteeing that the contributions from the six sectors mentioned above should be the same. That the contributions are indeed the same is a coincidence and has to be checked on a case by case basis for each of the elements.

* With discrete torsion: In the presence of discrete torsion, the condition of $`G`$-invariance of forms is generalised to (3.3), with $`\epsilon `$ defined by (2.7). The contribution to $`H^{*}(\tilde{X})`$ corresponding to $`(111)`$ *happens* to remain the same as that in the case without discrete torsion, namely $`h^{22}=1`$. However, the contributions from the other six do differ. Considering $`(100)`$ again, the invariant forms on the fixed $`\mathbb{C}^2`$ are $`dZ_3\wedge dZ_4`$, $`d\overline{Z}_3\wedge d\overline{Z}_4`$, $`dZ_3\wedge d\overline{Z}_4`$ and $`d\overline{Z}_3\wedge dZ_4`$. Thus, taking into account the shift by the age $`s=1`$ of each of these elements, the contribution to $`H^{*}(\tilde{X})`$ is $`h^{22}=2`$, $`h^{31}=1`$, $`h^{13}=1`$. Analysing similarly the contributions from the other five elements, we get, in the long run, the following Hodge diamond arising from the twisted sector in the presence of discrete torsion: $`\begin{array}{ccccccccc}& & & & 0& & & & \\ & & & 0& & 0& & & \\ & & 0& & 0& & 0& & \\ & 0& & 0& & 0& & 0& \\ 0& & 6& & 13& & 6& & 0\\ & 0& & 0& & 0& & 0& \\ & & 0& & 0& & 0& & \\ & & & 0& & 0& & & \\ & & & & 0& & & & \end{array}`$ (3.6)

Thus, we may have at most six deformations of the F-term, as they correspond to deformations of the orbifold singularity and are counted by $`h^{31}`$. In the next section we will find out the six possible terms. Let us note that the element $`(111)\in G`$ has age $`s=2`$ and leads to a terminal singularity, which does not admit a crepant resolution. However, this does not affect the present analysis, as the latter is confined to perturbations of the gauge theory by the six parameters that correspond to the six (3,1)-forms, none of which is contributed by $`(111)`$. Let us point out in passing that by turning on the discrete torsion we get $`h^{11}`$ and $`h^{(d-1)1}=h^{31}`$ interchanged. This signifies that the resulting manifolds are related by mirror symmetry.
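The bookkeeping of this section can be automated. The following sketch enumerates the forms sector by sector, imposes the invariance condition (3.3) with and without the torsion phase (2.7), and reproduces the twisted-sector Hodge diamonds (3.5) and (3.6).

```python
from itertools import combinations, product

G = list(product((0, 1), repeat=3))

def R(g):
    # geometric action (2.2): each generator flips Z_1 and one of Z_2, Z_3, Z_4
    a, b, c = g
    return ((-1) ** (a + b + c), (-1) ** a, (-1) ** b, (-1) ** c)

def eps(h, g, torsion):
    # discrete torsion phase (2.7); trivial when torsion is switched off
    if not torsion:
        return 1
    (a, b, c), (A, B, C) = h, g
    return (-1) ** (a*B - b*A + b*C - c*B + c*A - a*C)

def twisted_hodge(torsion):
    hodge = {}
    for g in G:
        if g == (0, 0, 0):
            continue                      # untwisted sector counted separately
        fixed = [I for I in range(4) if R(g)[I] == 1]
        age = (4 - len(fixed)) // 2       # each flipped coordinate adds 1/2
        subsets = [s for n in range(len(fixed) + 1)
                   for s in combinations(fixed, n)]
        for P in subsets:                 # holomorphic legs dZ_I, I in P
            for Q in subsets:             # antiholomorphic legs dZbar_J, J in Q
                ok = True
                for h in G:               # invariance condition (3.3)
                    chi = eps(h, g, torsion)
                    for I in P: chi *= R(h)[I]
                    for J in Q: chi *= R(h)[J]
                    if chi != 1:
                        ok = False
                        break
                if ok:
                    key = (len(P) + age, len(Q) + age)
                    hodge[key] = hodge.get(key, 0) + 1
    return hodge

print(twisted_hodge(torsion=False))  # {(1,1): 6, (2,2): 13, (3,3): 6}
print(twisted_hodge(torsion=True))   # {(2,2): 13, (3,1): 6, (1,3): 6}
```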
## 4 The gauge theory of the branes

In the regime of weak string coupling, D-branes admit a description in terms of a supersymmetric Yang-Mills theory (SYM) on their world-volumes. The moduli space of the SYM is interpreted as the space-time. Referring to the case at hand, the world-volume theory of $`n`$ coalescing D1-branes in type-IIB theory is taken to be the dimensional reduction of the $`𝖭=1`$, $`𝖣=10`$ $`U(n)`$ SYM, or equivalently of the $`𝖭=4`$, $`𝖣=4`$ SYM, down to two dimensions. The resulting theory is an $`𝖭=(8,8)`$, $`𝖣=2`$ $`U(n)`$ SYM. We will consider a fourfold transverse to the world-volume of the D1-branes, obtained as an orbifold of $`\mathbb{C}^4`$. The D1-brane is taken to be lying along the 9-th direction, evolving in time along the 0-th direction. The $`\mathbb{C}^4`$ is coordinatised by $`Z_I`$, $`I=1,2,3,4`$. Let $`G`$ be a finite group of order $`|G|`$. In order to retain some supersymmetry, $`G`$ must be a subgroup of the holonomy group of the fourfold, namely $`SU(4)`$. The theory of a single D1-brane on the blown down orbifold $`\mathbb{C}^4/G`$ is obtained by starting with a theory of $`|G|`$ coalescing D1-branes in two dimensions and then quotienting the resulting gauge theory, with gauge group $`U(|G|)`$, by $`G`$. The resulting theory turns out to have $`𝖭=(0,2)`$ supersymmetry in two dimensions. It is convenient, in practice, to start with the $`𝖭=1`$, $`𝖣=10`$ gauge theory reduced to two dimensions, written in $`𝖭=(0,2)`$ notation. Finally one substitutes the fields that have survived the orbifold projection. Through the projection the supersymmetry gets broken down to $`(0,2)`$. This theory corresponds to the theory of a D1-brane on the singular orbifold $`\mathbb{C}^4/G`$. The deformation and/or blow up of the orbifold $`\mathbb{C}^4/G`$ corresponds to adding extra terms to the above-mentioned (0,2) action, with all fields taken to be the ones surviving the projection.

### 4.1 The gauge theory: before projection

Let us begin with an inventory of the multiplets of $`𝖭=(0,2)`$, $`𝖣=2`$ super-Yang-Mills theory. The field content of the theory is as follows. There are four complex bosonic fields, denoted by $`Z_I`$, $`I=1,2,3,4`$, identified with the four complex dimensions transverse to the world-volume. There are eight left-handed and eight right-handed Majorana-Weyl spinors, which constitute four left-handed and four right-handed Dirac fermions. Among the left-handed fermions, one, $`\lambda _{-}`$, satisfies $`(\lambda _{-})^{\dagger }=\overline{\lambda }_{-}`$, and the other three can be grouped together as $`\lambda _{IJ}`$, which is antisymmetric in $`IJ`$ and satisfies $`(\lambda _{IJ})^{\dagger }=ϵ_{IJKL}\lambda _{KL}`$. Finally, there is a vector field, whose two components will be denoted by $`v_\pm `$ in light-cone coordinates. The fields mentioned above may be assorted into three supersymmetry multiplets, that is, into three superfields. The vector field and the fermion $`\lambda _{-}`$ from the left sector are combined to form the vector multiplet, whose components in the Wess-Zumino gauge are written as

$$\begin{array}{cc}\hfill A_{-}& =v_{-}-2i(\theta \overline{\lambda }_{-}+\overline{\theta }\lambda _{-})+2(\theta \overline{\theta })D,\hfill \\ \hfill A_{+}& =(\theta \overline{\theta })v_{+},\hfill \end{array}$$ (4.1)

where $`v_\pm `$ are the components of the vector field and $`D`$ is an auxiliary field. The corresponding field strength is given by

$$F=\lambda _{-}-\theta (F_{+-}+iD)+2i(\theta \overline{\theta })\partial _{+}\lambda _{-},$$ (4.2)

where $`F_{+-}=\partial _{-}v_{+}-\partial _{+}v_{-}`$.
Four bosonic chiral multiplets are formed from $`Z_I`$ and four Dirac spinors $`\psi _I`$ coming from right sector. The corresponding superfield takes the form $$\varphi _I=Z_I+\sqrt{2}\theta \psi _I+2i\overline{\theta }\theta _+Z_I,$$ (4.3) where we have defined $`_\pm =_\pm iqA_\pm `$, and $`q`$ denotes the charge of the vector multiplet under the gauge group $`U(N)`$. Finally, the three fermions $`\lambda ^{IJ}`$, with hermitian conjugates defined as $`(\lambda ^{IJ})^{}=ϵ^{IJKL}\lambda ^{KL}`$, are gathered into three fermionic chiral multiplets which assume the following form: $$\mathrm{\Lambda }_{i4}=\lambda _{i4}\sqrt{2}\theta G_{i4}2i(\overline{\theta }\theta )_+\lambda _{i4}\sqrt{2}\overline{\theta }E_i;i=1,2,3,$$ (4.4) where $`G_{i4}`$ denotes a bosonic auxiliary field and $`E_i`$ represents a bosonic chiral field. The contributions of these multiplets to the action of the theory can be obtained from the reduction of ten-dimensional action. A more convenient way would be to start with four-dimensional $`𝖭=1`$ action reduced to two dimensions which is a $`(2,2)`$ theory and then by integrating out one of the $`\theta `$’s. These result in fixing the chiral fields $`E_i`$ as $`E_i=[\varphi _i,\varphi _4]`$. The contributions of the above-mentioned multiplets to the Lagrangian of the theory are given by, $$\begin{array}{cc}\hfill L_A& =\frac{1}{2e^2}𝑑\theta 𝑑\overline{\theta }Tr(\overline{F}F)\hfill \\ \hfill L_\varphi & =i\underset{I}{}𝑑\theta 𝑑\overline{\theta }Tr(\overline{\varphi }_I_{}\varphi _I)\hfill \\ \hfill L_\mathrm{\Lambda }& =\frac{1}{2}\underset{i}{}𝑑\theta 𝑑\overline{\theta }Tr(\overline{\mathrm{\Lambda }}_{i4}\mathrm{\Lambda }_{i4i})\hfill \end{array}$$ (4.5) The total Lagrangian obtained as the sum of the three pieces (4.5) admits a superpotential term, while retaining $`𝖭=(0,2)`$ supersymmetry. The corresponding piece of the Lagrangian is given by $$L_𝒲=\frac{1}{\sqrt{2}}𝑑\theta 𝒲+\text{h.c.,}$$ (4.6) where $`𝒲`$ is a chiral fermionic field, the *superpotential*. The general form of $`𝒲`$ is $`𝒲=Tr_i\mathrm{\Lambda }_{i4}J^i`$, where $`J^i`$ denotes a bosonic chiral field satisfying the supersymmetry-constraint $$\underset{i}{}E_iJ^i=0.$$ (4.7) Thus, to sum up, the total action is given by $$L=L_A+L_\varphi +L_\mathrm{\Lambda }+L_𝒲.$$ (4.8) The reduced two-dimensional theory, in absence of extra couplings, the chiral field $`J^i`$ in $`𝒲`$ assumes the form, $`J^i=_{j,k}ϵ^{ijk}[\varphi _j,\varphi _k]`$, which satisfies the supersymmetry constraint (4.7), thanks to the Jacobi identity. The action corresponding to (4.8) has a global $`U(1)^4`$ symmetry associated with the phases of the bosonic fields of which the global $`U(1)`$ is an R-symmetry of the theory. The bosonic potentials, known as the F-term and the D-term, are obtained by integrating out the auxiliary fields $`G_{i4}`$ and $`D`$, respectively, appearing in $`L`$. These are given by $`U_F`$ $`=`$ $`2{\displaystyle \underset{I,J}{}}Tr[Z_I,Z_J]^2,`$ (4.9) $`U_D`$ $`=`$ $`2e^2{\displaystyle \underset{I}{}}Tr[Z_I,\overline{Z}_I],`$ (4.10) respectively. As we will see in presence of the coupling of D-string to twisted sector modes the superpotential as well as the form the chiral fields $`E_i`$ will get modified. ### 4.2 The gauge theory: after projection Having discussed some generalities let us now implement the projection by $`G=_2\times _2\times _2`$. The order of $`G`$ is $`|G|=8`$. Hence we start with 8 branes at the origin, choosing the gauge group to be $`U(8)`$, and then quotient the theory by $`G`$. 
As mentioned earlier, the action of the group on the Chan-Paton indices is given by the regular representation, obtained as direct sum of two copies of each of the projective representations given in (2.6). Thus, we have, $$r(g_i)=\text{Diag}\{\sigma _i,\sigma _i,\sigma _i,\sigma _i\},$$ (4.11) where $`r(g_i)`$ denotes the regular representation of the generator $`g_i`$ of the group $`G`$. The action of the generators $`g_i`$ on the Lorentz indices is as given in (2.2). The projections are given by adjoint action of the regular representations $`r(g_i)`$ on the supermultiplets. In terms of the respective superfields, they take the form, $$\begin{array}{cc}\hfill r(g_i)A_\mu r(g_i)^1& =A_\mu ,\hfill \\ \hfill r(g_i)Z_Ir(g_i)^1& =\underset{I=1}{\overset{4}{}}R(g_i)_{IJ}Z_J,\hfill \\ \hfill r(g_i)\lambda _{IJ}r(g_i)^1& =\underset{I^{},J^{}=1}{\overset{4}{}}R(g_i)_{II^{}}R(g_i)_{JJ^{}}\lambda _{I^{}J^{}},\hfill \end{array}$$ (4.12) where we have introduced three matrices $`R(g_i)_{IJ}`$, such that the equations (2.2) read $$g_i:Z_I\underset{J=1}{\overset{4}{}}R(g_i)_{IJ}Z_J,$$ (4.13) for $`I=1,2,3,4`$ and $`i=1,2,3`$. The gauge group of the theory after projections (4.12) breaks down to $`U(2)\times U(2)`$ of which the center of mass $`U(1)`$ decouples which plays the role of the unbroken $`U(1)`$ for the single brane. The rest of the group $`U(1)\times SU(2)\times SU(2)`$ is broken by the vacuum expectation values of the following Higgs field $$Z_I=\left(\begin{array}{cc}0& z_I\times \sigma _I\\ w_I\times \sigma _I& 0\end{array}\right),$$ (4.14) where $`z_I`$, $`w_I`$ are $`2\times 2`$ matrices. The charges are assigned through, $$z_IUz_IV^{},w_IVw_IU^{}$$ (4.15) where $`U`$ and $`V`$ belong to the two $`SU(2)`$’s and $`z_I`$ and $`w_I`$ have opposite charges under the relative $`U(1)`$. The representation of the chiral Fermi field is the same as that of the commutators of the bosonic fields, which is consistent with the presence of the $`E`$ field in the multiplet. Thus, we have $$\mathrm{\Lambda }_{i4}=\left(\begin{array}{cc}\lambda _i^1\times \sigma _i& 0\\ 0& \lambda _i^2\times \sigma _i\end{array}\right).$$ (4.16) The fields after projection are then substituted in the action in order to derive the theory of the single brane on $`^4/G`$. The moduli space is described by the solutions of the conditions of F-flatness and D-flatness, obtained by minimising the F-term and the D-term, respectively. The F-flatness conditions are given by $$\begin{array}{cc}\hfill z_jw_i+z_iw_j& =0,z_4w_iz_iw_4=0,\hfill \\ \hfill w_jz_i+w_iz_j& =0,w_4z_iw_iz_4=0,\hfill \end{array}$$ (4.17) for $`i=1,2,3`$. Let us now consider the deformation of this moduli space in the presence of non-zero coupling with fields in the twisted sector of the closed string theory. In the discussion of the closed string twisted sectors in §3, we noticed that each of the six group elements which flips the sign of $`Z_I`$’s pairwise contributes $`(3,1)`$, responsible for the deformation of the complex structure of the singular orbifold. These are complex numbers and hence couple naturally to the superpotential . Geometrically, these deform away codimension-two singularities. One set of the natural modification of the superpotential comes from its four-dimensional analogue. 
Starting with a term of the form $`𝑑\theta \xi _i\varphi _i`$, where $`\varphi _i`$ is a four-dimensional chiral field, reducing to two dimension and integrating one $`\theta `$ out yields a term $$\delta W=\underset{i=1}{\overset{3}{}}𝑑\theta ^+Tr(𝝃_i\mathrm{\Lambda }_{i4}),$$ (4.18) where $`𝝃_i`$ denotes the coupling parameters, of the form $`𝝃_i=\left(\begin{array}{cc}\xi _i\times \sigma _i& 0\\ 0& \xi _i\times \sigma _i\end{array}\right).`$ (4.19) The form of $`𝝃_i`$ is determined by the gauge-invariance of the coupling and by the fact that its introduction does not break supersymmetry according to (4.7). A simple calculation leads to the above expression without any loss of generality, provided $`𝝃_i`$ do not depend on the other fields. These perturbations give rise to a deformation of the F-term equation as $$\begin{array}{cc}\hfill z_jw_i+z_iw_j& =e_{ijk}\xi _k,\hfill \\ \hfill w_jz_i+w_iz_j& =e_{ijk}\xi _k,\hfill \end{array}$$ (4.20) where $`i=1,2,3`$, and $`e_{123}=+1`$, $`e_{ijk}`$ is symmetric under interchange of $`i,j`$ but non zero only when $`i,j,k`$ are all different. The form of the other perturbations can be obtained by treating all the 4 coordinates of the transverse space on same footing. This perturbations can be introduced by making use of the freedom in the definition of $`E`$ which correspond to a perturbation of the field $`E_i`$ in the fermionic multiplet given by $$E_iE_i+𝜼_i,$$ (4.21) where $`𝜼_i=\left(\begin{array}{cc}\eta _i\times \sigma _i& 0\\ 0& \eta _i\times \sigma _i\end{array}\right).`$ (4.22) Here also the form of the coupling is determined by the fact that $`𝜼_i`$ should transform in a similar way as $`E_i`$ and it should not break supersymmetry (4.7). To sum up, the F-term equations following from these perturbations are given by $$\begin{array}{cc}\hfill z_jw_i+z_iw_j& =e_{ijk}\xi _k,z_4w_iz_iw_4=\eta _i,\hfill \\ \hfill w_jz_i+w_iz_j& =e_{ijk}\xi _i,w_iz_4w_4z_i=\eta _i.\hfill \end{array}$$ (4.23) We shall sometimes refer to (4.23) as the *F-flatness equations*. Let us note that, we have six parameters $`\xi _i`$, $`\eta _i`$, $`i=1,2,3`$, appearing in the perturbation of the gauge theory. This is in keeping with the fact that, the contribution to the cohomology of the resolved space, arising in the twisted sector is $`h^{31}=6`$, as discussed in §3. ## 5 The moduli space and its deformation Let us now go over to finding the vacuum moduli space of the (0,2) theory discussed in the previous section. The vacuum moduli space is the space of allowed values of the scalars $`Z_I`$, respecting the F- and D-flatness conditions, up to gauge equivalence. ### 5.1 Gauge-invariant polynomials Thus, we have the four matrices $`Z_I`$, in the form (4.14), and the F-flatness conditions (4.23) on the non-vanishing $`2\times 2`$ blocks of $`Z_I`$, namely $`z_I`$ and $`w_I`$. The equation of the variety describing the moduli space of the configuration will be written in terms of the gauge-invariant polynomials constructed out of $`z_I`$ and $`w_I`$. We proceed to describe these next. Let us introduce the following expressions, $$\begin{array}{cc}\hfill 𝒫_{IJK\mathrm{}}& =\frac{1}{2}Trz_Iw_Jz_K\mathrm{},\hfill \\ \hfill \stackrel{~}{𝒫}_{IJK\mathrm{}}& =\frac{1}{2}Trw_Iz_Jw_K\mathrm{},\hfill \end{array}$$ (5.1) which we will refer to as *polynomials*. By the *order* of a polynomial, we mean the total number of $`z`$ and $`w`$ appearing in the polynomial — that is, the length of the word inside the trace. 
We need to introduce a few further notations which we list here: $`𝒬_{IJ}=z_Iw_J,\stackrel{~}{𝒬}_{IJ}=w_Iz_J,`$ (5.2) $`x_I={\displaystyle \frac{1}{2}}Tr𝒬_{II},\stackrel{~}{x}_I={\displaystyle \frac{1}{2}}Tr\stackrel{~}{𝒬}_{II}.`$ (5.3) Let us also note that, $$\stackrel{~}{x}_I=x_I.$$ (5.4) It then follows from (5.1) that $$𝒫_{IJ}=\frac{1}{2}Tr𝒬_{IJ}\stackrel{~}{𝒫}_{IJ}=\frac{1}{2}Tr\stackrel{~}{𝒬}_{IJ}$$ (5.5) Now, the polynomials of different orders defined by (5.1) are not linearly independent. We need to find out the independent polynomials in order to write down the equation of the variety. At this point let us note that from the gauge transformation (4.15) it follows that only the polynomials of even order are gauge-invariant quantities. Hence in what follows we shall not consider the polynomials of odd orders. Let us consider the remaining polynomials order by order. * Order 2 polynomials Using the constraints (4.23), it can be shown that $`𝒬_{II}`$ commute pairwise. Hence, assuming that these matrices are non-singular, the matrices $`𝒬_{II}`$ can be simultaneously diagonalised. From now on we assume that $`𝒬_{II}`$, $`I=1,2,3,4`$ are diagonal. Moreover, $`𝒬_{IJ}`$ with $`IJ`$ commute with $`𝒬_{II}`$, but not between themselves; for example, $`𝒬_{12}`$ and $`𝒬_{23}`$ do not commute. Hence, in a basis in which $`𝒬_{II}`$ are diagonal, $`𝒬_{IJ}`$, with $`IJ`$ are necessarily generic, i.e. not diagonal. It then follows, from the fact that $`𝒬_{IJ}`$, with $`IJ`$ and $`𝒬_{II}`$ commute, that $`𝒬_{II}`$ are all proportional to the two-dimensional identity matrix, $`𝕀_2`$. Moreover, if $`z_I`$ and $`w_I`$ are non-singular matrices, which also we assume, it follows that $`\stackrel{~}{𝒬}_{II}`$ is also proportional to $`𝕀_2`$. Thus we have, $$𝒬_{II}=\stackrel{~}{𝒬}_{II}=x_I𝕀_2,$$ (5.6) and consequently, $$𝒫_{II}=x_I.$$ (5.7) The F-flatness conditions (4.23) impose further constraints on $`𝒬_{IJ}`$ with $`IJ`$. We will use these later. Since the polynomials $`𝒫_{IJ}`$ and $`\stackrel{~}{𝒫}_{IJ}`$ satisfy $`\stackrel{~}{𝒫}_{IJ}=𝒫_{JI}`$, it suffices to consider either set. The twelve polynomials $`𝒫_{IJ}`$ with $`IJ`$ are not linearly independent. The F-flatness conditions (4.23) reduce these to a set of three independent ones. There are three further relations among these six, namely, $$\begin{array}{cc}\hfill \xi _2𝒫_{24}\xi _3𝒫_{34}+\eta _2𝒫_{13}\eta _3𝒫_{12}& =0,\hfill \\ \hfill \xi _1𝒫_{14}\xi _2𝒫_{24}\eta _1𝒫_{23}+\eta _2𝒫_{13}& =\xi _2\eta _2\xi _1\eta _1,\hfill \\ \hfill \xi _1𝒫_{14}\xi _3𝒫_{34}+\eta _1𝒫_{23}+\eta _3𝒫_{12}& =\xi _3\eta _3.\hfill \end{array}$$ (5.8) The equations (5.8) can be derived by considering certain polynomials of order 4 and using properties of trace of products of matrices and (4.23). For example, the first equation in (5.8) follows from $`𝒫_{1243}`$ as, $$\begin{array}{cc}\hfill 𝒫_{1243}& =\frac{1}{2}Trz_1w_2z_3w_4\hfill \\ & =\frac{1}{2}Tr(\xi _3z_2w_1)(z_3w_4+\eta _3)\hfill \\ & =\xi _3𝒫_{34}\eta _3𝒫_{21}+\xi _3\eta _3\stackrel{~}{𝒫}_{1342}\hfill \\ \hfill \stackrel{~}{𝒫}_{1342}& =\frac{1}{2}Trw_1z_3w_4z_2\hfill \\ & =\frac{1}{2}Tr(\xi _2w_3z_1)(\eta _2+w_2z_4)\hfill \\ & =\xi _2\eta _2+\eta _2𝒫_{13}+\xi _2𝒫_{42}𝒫_{1243},\hfill \end{array}$$ (5.9) and further using (4.23). The second and third relations follow similarly by considering $`𝒫_{1243}`$ (again!) and $`𝒫_{2341}`$ respectively. Thus, finally we are left with seven gauge-invariant polynomials of order two. * Order 4 polynomials Three cases arise for the polynomials $`𝒫_{IJKL}`$. 1. 
$`𝒫_{IIJK}`$ for all $`I`$, $`J`$, $`K`$. These are determined in terms of $`𝒫_{II}`$ and $`𝒫_{JK}`$ by virtue of (5.6). 2. $`𝒫_{IIJJ}=\frac{1}{2}x_Ix_J`$ 3. $`𝒫_{1234}`$ — this is an independent polynomial. Other order 4 polynomials with all indices different are determined in terms of $`𝒫_{1234}`$ and certain polynomials of order 2. As for the polynomials $`\stackrel{~}{𝒫}_{IJKL}`$, 1. $`\stackrel{~}{𝒫}_{IIJK}`$ for all $`I`$, $`J`$. These are determined in terms of $`\stackrel{~}{𝒫}_{II}`$ and $`\stackrel{~}{𝒫}_{JK}`$ by virtue of (5.6), and the latter are related to $`𝒫_{II}`$ and $`𝒫_{JK}`$ in turn, again by (5.6). 2. $`\stackrel{~}{𝒫}_{IIJJ}`$ are also determined in terms of the are determined by $`x_I`$ $$\begin{array}{cc}\hfill \stackrel{~}{𝒫}_{IIJJ}& =\frac{1}{2}\stackrel{~}{x}_I\stackrel{~}{x}_J\hfill \\ & =\frac{1}{2}x_Ix_J.\hfill \end{array}$$ 3. $`\stackrel{~}{𝒫}_{1234}`$ is determined in terms of $`𝒫_{1234}`$ by the relation $$\stackrel{~}{𝒫}_{1234}=𝒫_{1234}+\frac{1}{2}(\xi _1\eta _1\xi _2\eta _2+\xi _3\eta _3),$$ (5.10) as a consequence of the F-flatness conditions (4.23) Thus, we conclude that, the only independent polynomial of order four is $`𝒫_{1234}`$. * Polynomials of order higher than 4 These can all be expressed in terms of polynomials of lower order. Some of these relations follow by (5.6) and (5.7), while the others follow by an use of the Cayley–Hamilton theorem. As an example of the latter cases, let us consider the polynomial $`𝒫_{111134}`$ = $`\frac{1}{2}Trz_1w_1z_1w_1z_3w_4`$. The Cayley-Hamilton theorem leads to the following equation for any $`2\times 2`$ matrix, $`𝐌`$. $$𝐌^2𝐌Tr𝐌+\frac{1}{2}\left((Tr𝐌)^2Tr𝐌^2\right)𝕀_2=0.$$ (5.11) Now, taking $`𝐌=z_1w_1`$, we have $$z_1w_1z_1w_12z_1w_1𝒫_{11}+\left(2𝒫_{11}^2𝒫_{1111}\right)𝕀_2=0.$$ (5.12) Multiplying by $`z_3w_4`$ and taking trace we obtain $$𝒫_{112234}=2𝒫_{11}𝒫_{1134}2𝒫_{11}^2𝒫_{34}+𝒫_{1111}𝒫_{34}.$$ (5.13) Thus, the polynomial $`𝒫_{112234}`$ of order six is expressed in terms of polynomials of lower order. Other polynomials of order higher than four can be treated similarly. In particular, in any polynomial of order say $`(4+k)`$, $`k>0`$, at least $`k`$ indices come repeated. Using (4.23) it is always possible to gather these repeated indices together and use the proportionality of $`𝒬_{II}`$ to the identity matrix to reduce it to some combination of lower order polynomials or use the Cayley-Hamilton theorem as above. Thus, finally we have ten independent, gauge invariant polynomials of order two and one of order four. The vacuum moduli space is described by an equation involving these polynomials upon using the F-flatness conditions. ### 5.2 The variety Now, in order to write down the equation for the variety describing the moduli space, let us introduce a $`4\times 4`$ matrix $``$, defined as follows. $$\begin{array}{cc}\hfill _{II}& =𝒫_{II},\hfill \\ \hfill _{ij}& =\frac{1}{2}\left(𝒫_{ij}+\stackrel{~}{𝒫}_{ij}\right),\hfill \\ \hfill _{i4}& =\frac{1}{2}\left(\stackrel{~}{𝒫}_{i4}𝒫_{i4}\right),\hfill \\ \hfill _{4i}& =\frac{1}{2}\left(\stackrel{~}{𝒫}_{4i}𝒫_{4i}\right)\hfill \\ & =_{i4},\hfill \end{array}$$ (5.14) for $`I,J=1,2,3,4`$ ans $`i,j=1,2,3`$. Defining another complex variable $`t`$ as $$t=\frac{1}{2i}\left(𝒫_{1234}+\stackrel{~}{𝒫}_{1234}\right),$$ (5.15) the equation of the variety is written as $$t^2=det.$$ (5.16) We now write this in terms of $`t`$ and $`x_I`$, $`I=1,2,3,4`$ and use (4.23). 
The matrix $``$ now assumes the following form: $`=\left(\begin{array}{cccc}x_1& \xi _3/2& \xi _2/2& \eta _1/2\\ \xi _3/2& x_2& \xi _1/2& \eta _2/2\\ \xi _2/2& \xi _1/2& x_3& \eta _3/2\\ \eta _1/2& \eta _2/2& \eta _3/2& x_4\end{array}\right).`$ (5.17) The matrix $``$ is the most general one consistent with the global $`U(1)^4`$ symmetry, provided we assign the correct charges to the parameters $`\xi `$ and $`\eta `$ from the action. The relation (5.16) can be argued from the algebra of the matrices $`𝒬_{IJ}`$ that follows from (4.23). Any solution is a represenattion of the algebra and it can be shown from the algebra that in the generic case the moduli are related through (5.16). Hence the equation of the variety becomes $$(x_1,x_2,x_3,x_4,t)=0,$$ (5.18) where we have defined $$\begin{array}{cc}\hfill (x_1,x_2,x_3,x_4,t)=x_1x_2x_3x_4& \frac{1}{16}(\xi _1^2\eta _1^2+\xi _2^2\eta _2^2+\xi _3^3\eta _3^2)\hfill \\ & +\frac{1}{8}(\xi _1\xi _2\eta _1\eta _2+\xi _2\xi _3\eta _2\eta _3+\xi _3\xi _1\eta _3\eta _1)\hfill \\ & \frac{1}{4}(\xi _1\eta _2\eta _3x_1+\xi _2\eta _3\eta _1x_2+\xi _3\eta _1\eta _2x_3)\hfill \\ & +\frac{1}{4}(\eta _3^2x_1x_2+\eta _1^2x_2x_3+\eta _2^2x_3x_1)\hfill \\ & \frac{1}{4}(\xi _1^2x_1+\xi _2^2x_2+\xi _3^2x_3\xi _1\xi _2\xi _3)x_4\hfill \\ & t^2.\hfill \end{array}$$ (5.19) That the equation (5.18) is indeed satisfied by solutions of the F-flatness conditions should be explicitly checked on any solution of (4.23). Let us verify this in some examples. * Example 1 When $`\xi _i=\eta _i=0`$ for $`i=1,2,3`$, a solution of (4.23) is given by $$\begin{array}{cc}\hfill z_I& =𝗓_I\sigma _I\hfill \\ \hfill w_I& =𝗐_I\sigma _I,I=1,2,3,4,\hfill \end{array}$$ (5.20) where $`𝗓_I`$ and $`𝗐_I`$ are complex numbers. The F-flatness constraints (4.23) impose the following relations among $`𝗓_I`$ and $`𝗐_I`$: $$\begin{array}{cc}\hfill 𝗓_1𝗐_2=𝗓_2𝗐_1,& 𝗓_1𝗐_4=𝗓_4𝗐_1,\hfill \\ \hfill 𝗓_2𝗐_3=𝗓_3𝗐_2,& 𝗓_2𝗐_4=𝗓_4𝗐_2,\hfill \\ \hfill 𝗓_3𝗐_1=𝗓_1𝗐_3,& 𝗓_3𝗐_4=𝗓_4𝗐_3.\hfill \end{array}$$ (5.21) We also need the constraints ensuing from the D-flatness condition (4.10), namely $$|𝗓_1|^2+|𝗓_2|^2+|𝗓_3|^2+|𝗓_4|^2|𝗐_1|^2|𝗐_2|^2|𝗐_3|^2|𝗐_4|^2=0.$$ (5.22) Let us note that only the three relations written in the second column of (5.21) are independent — the other three relations in the first column follow from these. Also, note that the three equations in the second column of (5.21) are monomial equations. Therefore, we can give a toric description to the envisaged variety, following . In the notation of , we can express the variables $`𝗓_I`$, $`𝗐_I`$, in terms of the five independent variables, which we choose to be $`𝗓_1`$, $`𝗓_2`$, $`𝗓_3`$, $`𝗓_4`$ and $`𝗐_4`$. This is expressed as: $`\begin{array}{ccccccccccc}& & 𝗓_1& 𝗓_2& 𝗓_3& 𝗓_4& 𝗐_1& 𝗐_2& 𝗐_3& 𝗐_4& \\ 𝗓_1& (\mathrm{}& 1& 0& 0& 0& 1& 0& 0& 0& )\mathrm{}\\ 𝗓_2& 0& 1& 0& 0& 0& 1& 0& 0\\ 𝗓_3& 0& 0& 1& 0& 0& 0& 1& 0\\ 𝗓_4& 0& 0& 0& 1& 1& 1& 1& 0\\ 𝗐_4& 0& 0& 0& 0& 1& 1& 1& 1\end{array}.`$ In terms of six homogeneous coordinates, $`p_i`$, $`i=0,\mathrm{}5`$, the five independent variables can then be expressed as: $`T=\begin{array}{ccccccccc}& & p_0& p_1& p_2& p_3& p_4& p_5& \\ 𝗓_1& (\mathrm{}& 1& 1& 0& 0& 0& 0& )\mathrm{}\\ 𝗓_2& 1& 0& 1& 0& 0& 0\\ 𝗓_3& 1& 0& 0& 1& 0& 0\\ 𝗓_4& 1& 0& 0& 0& 1& 0\\ 𝗐_4& 0& 0& 0& 0& 1& 1\end{array}.`$ The kernel of $`T`$ is given by $`(KerT)^\mathrm{T}=\left(\begin{array}{cccccc}1& 1& 1& 1& 1& 1\end{array}\right),`$ (5.23) and provides part of the charge matrix for the variety. 
We can also find out the charges of the five independent variables under a $`^{}`$ from the D-flatness condition (5.22) as $`V=\left(\begin{array}{ccccc}1& 1& 1& 1& 1\end{array}\right).`$ (5.24) The charges of the homogeneous coordinates $`p_i`$ under this $`^{}`$ are obtained from $`V`$, using a matrix $`U`$, satisfying $`TU^\mathrm{T}=𝕀_5`$, namely, $`U=\left(\begin{array}{cccccc}0& 1& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 1& 0& 0\\ 1& 1& 1& 1& 0& 0& \\ 0& 0& 0& 0& 0& 1\end{array}\right),`$ (5.25) as follows: $`VU=\left(\begin{array}{cccccc}1& 0& 0& 0& 0& 1\end{array}\right).`$ (5.26) Concatenating this with $`(KerT)^\mathrm{T}`$ in (5.23), we obtain the charge matrix $`\stackrel{~}{Q}=\left(\begin{array}{cccccc}1& 1& 1& 1& 1& 1\\ 1& 0& 0& 0& 0& 1\end{array}\right).`$ (5.27) The co-kernel of the transpose of $`\stackrel{~}{Q}`$, after deleting a column, identical to the first one, takes the form $`\stackrel{~}{T}=coKer\stackrel{~}{Q}^\mathrm{T}=\left(\begin{array}{ccccc}1& 2& 0& 0& 0\\ 1& 0& 2& 0& 0\\ 1& 0& 0& 2& 0\\ 1& 0& 0& 0& 2\end{array}\right),`$ (5.28) and is the toric data of the variety under consideration. The equation of the variety can be read of from the kernel of $`\stackrel{~}{T}`$, namely $`Ker\stackrel{~}{T}=\left(\begin{array}{c}2\\ 1\\ 1\\ 1\\ 1\end{array}\right).`$ (5.29) Correspondingly, the equation of the variety under consideration is given by $$t^2=x_1x_2x_3x_4.$$ (5.30) which is the same as (5.18) with $`\xi _i=\eta _i=0`$. The five variables appearing in the above equation are expressed in terms of the gauge-invariant combinations described earlier, as follows. $$\begin{array}{cc}\hfill x_I& =𝗓_I𝗐_I,I=1,2,3,4,\hfill \\ \hfill t& =\frac{1}{2}(𝗓_1𝗐_2𝗓_3𝗐_4+𝗐_1𝗓_2𝗐_3𝗓_4).\hfill \end{array}$$ (5.31) Thus, we have verified the claim that (5.18) is indeed the equation of the variety describing the vacuum moduli space. ❑ Let us now verify the claim in the case where all the deformation parameters are non-vanishing. We wish to have a solution which goes over to the solution given in the previous example in the limit of vanishing deformation parameters. * Example 2 Let us present the solution first. $$\begin{array}{cc}\hfill z_1=𝗓_1\sigma _1+\xi _3\sigma _2/2𝗐_\mathrm{𝟤},& w_1=𝗐_1\sigma _1+\xi _3\sigma _2/2𝗓_\mathrm{𝟤},\hfill \\ \hfill z_2=𝗓_2\sigma _2+\xi _1\sigma _3/2𝗐_\mathrm{𝟥},& w_2=𝗐_2\sigma _2+\xi _1\sigma _3/2𝗓_\mathrm{𝟥},\hfill \\ \hfill z_3=𝗓_3\sigma _3+\xi _2\sigma _1/2𝗐_\mathrm{𝟣},& w_3=𝗐_3\sigma _3+\xi _2\sigma _1/2𝗓_\mathrm{𝟣},\hfill \\ \hfill z_4=𝗓_4\sigma _4+\alpha _1\sigma _1+\alpha _2\sigma _2+\alpha _3\sigma _3,& w_4=𝗐_4\sigma _4+\mathrm{}(\alpha _1\sigma _1+\alpha _2\sigma _2+\alpha _3\sigma _3),\hfill \end{array}$$ (5.32) where $`𝗓_I`$ and $`𝗐_I`$ satisfy the equations (5.21) and the three complex variables $`\alpha _1`$, $`\alpha _2`$, $`\alpha _3`$ solve the following set of three relations: $$\begin{array}{cc}\hfill 2𝗓_1𝗐_3\alpha _3+\xi _2\alpha _1\eta _3𝗓_1& =0,\hfill \\ \hfill 2𝗓_2𝗐_1\alpha _1+\xi _3\alpha _2\eta _1𝗓_2& =0,\hfill \\ \hfill 2𝗓_3𝗐_2\alpha _2+\xi _1\alpha _3\eta _2𝗓_3& =0,\hfill \end{array}$$ (5.33) and we have defined $$\mathrm{}=\frac{𝗐_1}{𝗓_1}=\frac{𝗐_2}{𝗓_2}=\frac{𝗐_3}{𝗓_3}=\frac{𝗐_4}{𝗓_4},$$ (5.34) by (5.21). It should be noted that $`z_I`$ and $`w_I`$ as given in (5.32), satisfy the F-flatness conditions (4.23) if $`𝗓_I`$ and $`𝗐_I`$ satisfy (5.21). Hence, as above, the corresponding variety is again a fourfold, given by (5.18) in $`^5`$. However, the D-flatness conditions alter from (4.10). 
In terms of the above solution, we have $$\begin{array}{cc}\hfill x_1& =𝗓_1𝗐_1+\frac{\xi _3^2}{4𝗓_2𝗐_2},\hfill \\ \hfill x_2& =𝗓_2𝗐_2+\frac{\xi _1^2}{4𝗓_3𝗐_3},\hfill \\ \hfill x_3& =𝗓_3𝗐_3+\frac{\xi _2^2}{4𝗓_1𝗐_1},\hfill \\ \hfill x_4& =𝗓_4𝗐_4+\mathrm{}(\alpha _1^2+\alpha _2^2+\alpha _3^2),\hfill \\ \hfill t& =\left(𝗓_1𝗐_2𝗓_3𝗐_4+\frac{\xi _1\xi _2\xi _3𝗐_4}{8𝗐_1𝗓_2𝗐_3}\right).\hfill \end{array}$$ (5.35) We have used (5.21) in writing the expression for $`t`$ in (5.35). The equation (5.18) is satisfied for these values of the five variables. Let us note that, as mentioned above, we have the same equations among the different variables $`𝗓_I`$ and $`𝗐_I`$ in the present case, as in Example 1 above. However, the D-flatness conditions are now different and involves the parameters $`\xi _i`$ and $`\eta _i`$. We have not solved the D-flatness equations. However, the above solution should be gauge-equivalent to a solution that simultaneously solves the F- and D-flatness equations . The variety, anyway, is different from the one in the previous example. ❑ ### 5.3 Special branches Returning to the equation (5.18) for the variety, it has a variety of singularities depending on the values of the deformation parameters $`\xi _i`$ and $`\eta _i`$. The singular subsets of the variety (5.18) are simultaneous solutions of $`=0`$ and $`=0`$, as noted in §3. We list some of the cases below. NB: In this subsection we have rescaled the parameters $`\xi _i`$ and $`\eta _i`$ to $`2\xi _i`$ and $`2\eta _i`$ respectively, in order to avoid clumsy factors in the expressions. 1. In the limit of vanishing deformation, $`\xi _i=\eta _i=0`$, we recover the singular orbifold. This has a $`_2\times _2\times _2`$ singularity at the origin, along with higher dimensional singular subspaces, all of which contain the origin. 2. When all $`\xi _i=0`$, we can have one, two or all three of the $`\eta _i`$ non-vanishing. 1. $`\eta _i0`$. This has a line singularity along $`x_4`$, with $`x_1=x_2=x_3=t=0`$. 2. $`\eta _1=0`$, $`\eta _2,\eta _30`$. This also has a line singularity along $`x_4`$, with $`x_i=t=0`$. 3. $`\eta _1`$, $`\eta _20`$, $`\eta _3=0`$. This has a singular plane given by $`x_3=0`$, $`x_1x_2x_4+\eta _1^2x_2+\eta _2^2x_1=0`$. 4. $`\eta _1=\eta _2=0`$, $`\eta _30`$. The singularity is along the union of the two planes given by: $`x_3`$-$`x_4`$ plane, with $`x_1=x_2=t=0`$, and the plane, $`x_1=0`$, $`x_3x_4+\eta _3^2=0`$, $`x_2`$ arbitrary. 3. The next cases are when one of the $`\xi _i`$ is non zero. Let us assume, $`\xi _1=\xi _2=0`$, $`\xi _30`$. we can have different numbers and combinations of $`\eta _i`$ non-vanishing. 1. $`\eta _i0`$. This has a line singularity given by $$x_1=\frac{\xi _3\eta _1}{\eta _2},x_2=\frac{\xi _3\eta _2}{\eta _1},x_3x_4+\frac{\eta _1\eta _2}{\xi _3}x_3+\eta _3^2=0$$ (5.36) 2. $`\eta _1,\eta _20`$, $`\eta _3=0`$. This has a singular line which can be obtained by setting $`\eta _3=0`$ in (5.36). The case with $`\xi _i0`$, $`\eta _i=0`$ will be similar. 3. $`\eta _1=0`$ and $`\eta _2,\eta _30`$. There is no singularity. This can also be seen by taking the limit $`\eta _10`$ in (5.36), — the singularity is sent to infinity along $`x_2`$. 4. The case with $`\eta _2=0`$, $`\eta _1,\eta _30`$ is similar to (3c). 5. $`\eta _10`$, $`\eta _2=\eta _3=0`$. This has a line singularity along $`x_1`$ with $`x_2=x_3=x_4=0`$. This is similar to the case with $`\eta _20`$, $`\eta _30`$ and the rest of the parameters vanishing. 6. $`\eta _1=\eta _2=0`$, $`\eta _30`$. 
This has singularity along a plane given by the common solution of $$x_1x_2\xi _3^2=0,x_3x_4+\eta _3^2=0.$$ (5.37) 4. Next, let us consider the cases, when two of the $`\xi _i`$ are non-vanishing, say $`\xi _1`$, $`\xi _20`$, $`\xi _3=0`$. 1. When all $`\eta _i0`$, the variety has a singular line given by $$x_1x_2=0,x_3=\xi _1^2x_1+\xi _2^2x_2,x_4=(\eta _1^2x_2+\eta _2^2x_1).$$ (5.38) The case with $`\xi _i0`$, $`\eta _1=0`$, $`\eta _2=0`$ will be similar. 2. $`\eta _1,\eta _20`$, $`\eta _3=0`$. There is no singularity. 3. $`\eta _1=0`$, $`\eta _2,\eta _30`$. This has a line-singularity given by $$\begin{array}{cc}\hfill x_2=\frac{\eta _2\xi _1}{\eta _3},x_4& =\frac{\eta _2\eta _3}{\xi _1},\hfill \\ \hfill \eta _2(x_1x_3\xi _2^2)\eta _3\xi _1x_1& =0,\hfill \end{array}$$ (5.39) 4. $`\eta _1=\eta _2=0`$, $`\eta _30`$. This has a singularity along $`x_3`$, with $`x_1=x_2=x_4=0`$. 5. $`\eta _10`$, $`\eta _2`$, $`\eta _3=0`$. This has a singular plane along the solutions of $$x_1x_4+\eta _1^2=0,x_2x_3\xi _1^2=0.$$ (5.40) 5. Finally, the cases with all the three $`\xi _i`$ turned on. When all $`\eta _i0`$, this has a line singularity $$\begin{array}{cc}\hfill x_1& =\frac{\xi _2\xi _3}{\xi _1}\left[1+\frac{(ab)(ca)}{x+bc}\right],\hfill \\ \hfill x_2& =\frac{\xi _3\xi _1}{\xi _2}\left[1+\frac{(ab)(bc)}{x+ca}\right],\hfill \\ \hfill x_3& =\frac{\xi _1\xi _2}{\xi _3}\left[1+\frac{(bc)(ca)}{x+ab}\right],\hfill \\ \hfill x_4& =\frac{x}{\xi _1\xi _2\xi _3},\hfill \\ \hfill t& =0,\hfill \end{array}$$ (5.41) where we have defined $`a=\eta _1\xi _1`$, $`b=\eta _2\xi _2`$ and $`c=\eta _3\xi _3`$ and $`x`$ is an arbitrary complex number. The different cases with some of the $`\eta _i`$ set to zero can be obtained from (5.41) by setting the parameters $`a,b,c`$ to zero accordingly. Moreover, in the above discussion, we have assumed generic values of the six parameters $`\xi _i`$ and $`\eta _i`$, whenever non-zero. Several other special branches can be obtained by relating these non-vanishing parameters. These can be obtained from the corresponding cases in the above list. ### 5.4 The moduli space for other representations of $`G`$ So far we have used the eight-dimensional regular representation of $`G`$, constructed by tensoring the four-dimensional projective representation $`\text{Diag}\{\sigma _I,\sigma _I\}`$, from (2.6) with $`𝕀_M`$, with $`M=2`$. This corresponds to a single brane at the orbifold singularity. The theory with $`N`$ branes at the singularity can be obtained, in a similar way as above, using $`𝕀_M`$, such that $`4M=2^3N`$. The moduli spaces for configurations with $`N>1`$ branes, or equivalently $`M>2`$, can be found in the same manner as discussed above. However, it has been pointed out that string theory allows also the case with $`M=1`$. Let us discuss this case briefly in this subsection. The quantities $`z_I`$ and $`w_I`$ appearing in (4.14) are now numbers, instead of $`2\times 2`$ matrices. The $`M=1`$ moduli space ensues by imposing the F-flatness conditions (4.23) on the complex numbers $`z_I`$ and $`w_I`$. We can define the gauge-invariant polynomials as above. However, in the present case, the polynomials $`z_1w_2z_3w_4`$ and $`w_1z_2w_3z_4`$ are not independent, but determined by polynomials of order two, e.g. $`2z_1w_2z_3w_4=\frac{1}{2}(ab+3c)+\xi _3z_3w_4+\eta _3z_1w_2`$. Thus, we define only the four variables $$x_1=z_Iw_I,I=1,2,3,4,$$ (5.42) as above. When $`\eta _i=\xi _i=0`$, we have, the F-flatness equations (4.23), can be partially solved with, e.g. 
$`z_1=z_2=w_1=w_2=0`$. This leaves us with the following F-flatness condition $$z_3w_4z_4w_3=0,$$ (5.43) and the D-flateness equation, now reduced to $$|z_3|^2+|z_4|^2|w_3|^2|w_4|^2=0.$$ (5.44) This space admits a toric description, thanks to the monomial relation (5.43). Following the steps outlined in Example 1 above, it can be seen that the moduli space is a $`^2/_2`$-plane $$xzy^2=0,$$ (5.45) where $`x,y,z`$ are three complex numbers. The same exercise can be repeated with other pairs of $`z_i`$ and $`w_i`$, corresponding to other possible solutions of the F-flatness equations. Since there are six such solutions, six $`^2/_2`$-planes arise and the moduli space is given as the union of these six planes. These six planes can be interpreted as associated with the invariant subspaces of the six group elements each of which contribute a deformation. The fractional brane may be a D3-brane wrapped on a vanishing two-sphere transverse to the plane. However a confirmmation of this needs more involved analysis. When $`\eta _i=0`$, and $`\xi _i0`$, three of the equations in (4.23) involving $`z_4`$ and $`w_4`$ are solved by $`w_I=sz_I`$, where $`s`$ is a constant. The three rest can be rewritten as $$\begin{array}{cc}\hfill x_1x_2& =\xi _3^2/4,\hfill \\ \hfill x_2x_3& =\xi _1^2/4,\hfill \\ \hfill x_3x_1& =\xi _2^2/4,\hfill \end{array}$$ (5.46) while $`x_4=sz_4^2`$. The solution to these equations furnishes the moduli space, which is a line given by $$x_1=\frac{\xi _2\xi _3}{2\xi _1},x_2=\frac{\xi _3\xi _1}{2\xi _2},x_3=\frac{\xi _1\xi _2}{2\xi _3},$$ (5.47) with $`x_4`$ arbitrary. Finally, when all $`\xi _i`$ and $`\eta _i`$ are non-zero, the moduli space is described by equations involving $`x_i`$, $`i=1,2,3`$ in a similar, but more complicated way, as above. These can again be solved for $`x_i`$, while $`x_4`$ is left arbitrary. Thus, the moduli space is again a line. ## 6 Conclusion To summarise, we have studied a D1-brane on the four-dimensional orbifold singularity $`^4/(_2\times _2\times _2)`$, with discrete torsion. The resulting moduli space in absence of any deformation is the singular orbifold given by the equation $`t^2=x_1x_2x_3x_4`$ in $`^5`$, as found earlier . The deformations of the moduli space in the presence of discrete torsion correspond to perturbations of the superpotential of the corresponding $`(0,2)`$ SYM in two dimensions, and are constrained by consistency requirements from string theory on the orbifold. The moduli arising in the twisted sector of string theory now deform away certain singularities of codimensions one and two, which in turn correspond to some subsets of $`^4`$ fixed by certain elements of the group $`G=_2\times _2\times _2`$. This is in harmony with expectations from conformal field theoretic description. But the singularity with codimension three turns out to be stable, as the twisted sector of the closed string theory fails to provide the modes required for its deformation. That the stable singularity is a line and not a point, unlike its three-dimensional counterpart , is rooted in the peculiarity of discrete torsion. As mentioned in §2, the discrete torsion $`\alpha `$, in spite of its deceptive general appearance in (2.5), is actually between two $`_2`$ factors out of the three in the quotienting group $`_2\times _2\times _2`$. By a suitable change of basis, we can thus think of the discrete torsion as affecting only a $`_2\times _2`$ subgroup of $`G`$, acting on a subset $`^3^4`$. 
Apart from the details of further quotienting by a $`_2`$, it is the node of this $`^3/(_2\times _2)`$ , that gives rise to the singular line. Indeed, this aspect of discrete torsion has featured in studies of mirror symmetry on this orbifold. Starting from the Calabi-Yau manifold obtained as the blown up $`𝕋^8/(_2\times _2\times _2)`$, which is the compact version of our case at hand, one can T-dualise the type-II string theory on this manifold along the $`𝕋^4`$-fibres, to obtain the same string theory on the deformed $`𝕋^8/(_2\times _2\times _2)`$ singularity. The T-dualities along the four directions of the $`𝕋^4`$ administers the “right dose” of discrete torsion as in here, such that the two theories are mirror-dual to each other. This situation provides a non-trivial demonstration of mirror symmetry . Thus, the situation considered in the present article is expected to be mirror-dual to the moduli space obtained by considering D-brane in absence of discrete torsion . A comment is in order. In absence of discrete torsion, a D1-brane on the orbifold $`^4/(_2\times _2\times _2)`$ resolves the orbifold singularity with seven Fayet–Iliopoulos parameters . In view of the fact that for this resolution, $`h^{11}=6`$, as we found in §3, this signifies that the resolved moduli space is smooth but not Calabi-Yau . It is not clear at the moment if one should invoke the mirror principle beyond Calabi-Yau varieties to incorporate another parameter of deformation in the present case also, superceding the restrictions arising from the twisted sector of string theory. If so, then this will call for a formulation of consistency conditions beyond the stringy ones studied here. Another implication of this configuration is related to the fact that discrete torsion can be simulated through an antisymmetric B-field background . On $`^3/(_2\times _2)`$, the B-field has a non-zero field strength supported at the singular point as necessitated by supersymmetry . In the present case, by an S-duality transformation, we may change the D-string into a fundamental string (in Type–IIB) and the background NS-NS B-field to an RR B-field. Thus, the present analysis can also be interpreted as describing a fundamental string at an orbifold in presence of a background RR B field which has a non-zero field strength only on a line singularity at the classical level. Finally, in the case of orbifold singularities there exist parallel brane configurations that give rise to the theory of branes at the singularity. In particular, one can also map the desingularisation moduli of the singularity to the parameters of the brane configurations . It would be interesting to understand the analogous brane configurations corresponding to present case and identify the presence as well as the absence of the various deformation modes, which will provide another way of looking at the orbifolds with discrete torsion. ## Acknowledgement We would like to thank D Jatkar and A Sen for illuminating discussions.
no-problem/9909/physics9909046.html
ar5iv
text
# Coupled non-equilibrium growth equations: Self consistent mode coupling using vertex renormalization ## 1 Acknowledgment The authors AKC and AB sincerely acknowledge partial financial support fom C. S. I. R., India. ## 2 Figure Captions Fig.1a The self-consistent equation for the correlator with $`\mathrm{𝑏𝑎𝑟𝑒}`$ vertex. The double thick line is the dressed correlator and the double straight line the propagator. The cross stands for the noise. Fig.1b The self-consistent equation for the correlator with $`\mathrm{𝑑𝑟𝑒𝑠𝑠𝑒𝑑}`$ vertex. The double thick line is the dressed correlator and the double straight line the dressed propagator. The cross stands for the noise. Fig.2 The self-consistent equation for the vertex.
no-problem/9909/astro-ph9909057.html
ar5iv
text
# Using Perturbative Least Action to Run N Body Simulations Back in Time ## 1 Introduction In order to understand the current dynamics and history of large scale structure, it would be helpful if we were able to generate plausible initial conditions which would produce structure consistent with observations. One of the great difficulties with this problem is that it is fundamentally ill-posed, since much of the structure of interest is highly nonlinear. Many different initial conditions can give rise to virtually indistinguishable final density fields, even if all of them obey the constraints given by the Zel’dovich approximation. We present the idea that even highly nonlinear $`N`$ body simulations may be self-consistently run backwards in time. While previous attempts at this problem, such as those by Peebles (1989, 1994), Shaya et al. (1995,1999), and Giavalsco et al. (1993), have suggested using the least action variational principle to solve the orbits of many particles, this approach yields the unfortunate result that only one of the potential solutions (namely, the first infall solution) can be recovered. We suggest the novel approach that by using least action as a perturbation from a known set of (randomly generated) orbits, a unique solution may be found for that set of initial conditions. By performing this analysis with many randomly selected sets of initial conditions, many sets of self-consistent solutions may be found. In this way, we may generate realistic initial conditions for interesting observed structure. ## 2 Method The least action variational principle states that given a initial and final positions, a set of particles acting under mutual gravitation will take the path which minimizes the action (Peebles 1989): $$S\underset{i}{}_0^{t_0}𝑑t\frac{a^2\dot{𝐱}_i^2}{2}\frac{\varphi _i}{2}$$ (1) where $`𝐱_i`$ is the position of the $`i^{th}`$ particle given in comoving coordinates, and $`\varphi _i`$ is the potential on the $`i^{th}`$ particle produced by all the others. However, let us say that we know that some path $`𝐱_i^{(0)}(t)`$ minimizes the action (e.g. the output from an $`N`$ body code). In that case, we may imagine another path: $$𝐱_i(t)=𝐱_i^{(0)}(t)+𝐱_i^{(1)}(t),$$ (2) where $`𝐱_i^{(1)}(t_i)`$ gives the change in the initial density field. Since we know that the action is minimized for the original path, we may write down the action for this new path as: $$S=S^{(0)}+\underset{i}{}_0^{t_0}𝑑t\left(a^2\dot{𝐱}_i^{(0)}\dot{𝐱}_i^{(1)}+\frac{a^2\dot{𝐱}_i^{(1)2}}{2}\frac{\varphi _i}{2}+\frac{\varphi _i^{(0)}}{2}\right)$$ (3) If we parameterize the perturbation of the paths as: $$𝐱_i^{(1)}(t)=D(t)𝐱_i^{(1)}(t_0)+\underset{n}{}C_{in}^\alpha f_n(t),$$ (4) where $`D(t)`$ is the growth factor of perturbations as given in linear theory, $`\alpha =\{1,2,3\}`$ is direction of the vector, and $`f_n(t)`$ are a set of basis functions, then the perturbed action (and hence the total action), can be minimized when $$\frac{S^{(1)}}{C_{in}^\alpha }=𝑑t\left[\dot{f}_n(t)a^2\dot{𝐱}^{(1)}+f_n\left(\frac{\varphi _i^{(0)}}{x^\alpha }\frac{\varphi _i}{x^\alpha }\right)\right]=0.$$ (5) Note that the derivative of the first term in equation (3) is equivalent to the middle term in equation (5) by virtue of the fact that $`𝐱^{(0)}`$ is required to minimize action. Using these equations, a randomly generated particle field (e.g. a Gaussian random field from some given power spectrum) may be iterated toward some target density field in the following way. 
First, a set of initial conditions are created using the Zel’dovich approximation and known power spectrum. Next, the particle positions and velocities are evolved and recorded using some Poisson solver. For our simulations, we have used a straight Particle Mesh (PM) code. We then compare the final particle field to the target density field. Next, we determine perturbations (values of $`𝐱_i^{(1)}(t_0)`$) which would cause the final density field to more strongly match the target field. Using those perturbations, we find the values of $`C_{in}^\alpha `$ which solve equation( 5). Using those coefficients, we determine the change in positions (and hence density) of the particles at high redshift, which gives us a new set of initial conditions. This new set of initial conditions can be run through the $`N`$ body code, and the process may be iterated until a satisfactory fit is returned. While the perturbed initial field is not, strictly speaking, Gaussian random, it does obey the Zel’dovich approximation by an appropriate selection of basis functions. Moreover, since the unperturbed initial field was Gaussian random, and since the perturbative least action approach aims to find the closest set of initial conditions to those randomly generated (as illustrated in Figure 1) which will satisfy the proscribed final conditions, the solution will be fairly close to Gaussian random and as close as possible to the given power spectrum. ## 3 Simulations As a test of this scheme, we create a highly nonlinear target density field with three overlapping isothermal spheres, with peaks as high as $`\delta =5`$. We then take two different sets of initial density fields (realizations of the same power spectrum), and iterate using the Perturbative Least Action (PLA) principle. We use a Particle Mesh code as our Poisson solver. The simulations are each $`64^3`$ gridcells and $`32^3`$ particles, and were run from $`z=99`$ to $`z=0`$. Twenty timeslices or positions and velocities are used in order to do the least action integration. Around six iterations (i.e. computation of the least action, and running the result through the PM code) were necessary to produce the results shown. Figure 1 shows the results of these simulations. The top row of panels show two randomly generated density fields at z=99. The velocity fields in each are given by the Zel’dovich approximation. By applying perturbative least action to each of these sets of initial fields with a particular target final density field, a new set of “perturbed” initial fields may be created. The second row shows the perturbed initial fields. Notice that the large scale perturbations remain unchanged, and that only the small scale perturbations seem affected. This is due to the fact that we have specified the target field on a cell by cell basis, necessitating a very large amount of small scale power. Finally, the bottom panels show the result of integrating from our perturbed initial conditions. Despite the widely different initial conditions, both final fields strongly resemble both each other and the target field. Though this toy problem is presented as proof of method, it is clear that this principle is applicable to a number of more complex problems such as determination of small scale primordial power, cluster redshift surveys, and study of the Local Group. This last would be quite interesting as recent studies (see e.g. 
Mateo 1998, and references therein) give a rather detailed picture of the current local density field and a series of simulations which could provide insight into probable infall scenarios would be most illuminating. ###### Acknowledgements. We would like to thank P.J.E. Peebles, Michael Strauss, Jerry Ostriker and Michael Vogeley, for helpful suggestions, and Michael Blanton for his invaluable visualization software. DMG is supported by an NSF graduate research fellowship.
no-problem/9909/cond-mat9909231.html
ar5iv
text
# ENHANCEMENT OF COULOMB DRAG AWAY FROM HALF FILLED LANDAU LEVELS ## References
no-problem/9909/hep-lat9909099.html
ar5iv
text
# ITEP-TH-43/99 KANAZAWA 99-22 The lattice 𝑆⁢𝑈⁢(2) confining string as an Abrikosov vortex ## Figure Captions. * The fit of the data of Ref. by a continuum solution. * The same as in Fig. 1 but using a classical solution on the two-dimensional lattice.
no-problem/9909/astro-ph9909498.html
ar5iv
text
# 1 Abstract ## 1 Abstract We examine here the inner region accretion flows onto black holes. A variety of models are presented. We also discuss viscosity mechanisms under a variety of circumstances, for standard accretion disks onto galactic black holes and supermassive black holes and hot accretion disks. Relevant work is presented here on unified aspects of disk accretion onto supermassive black holes and the possible coupling of thick disks to beams in the inner regions. We also explore other accretion flow scenarios. We conclude that a variety of scenarios yield high temperatures in the inner flows and that viscosity is likely not higher than alpha $``$ 0.01. ## 2 Introduction “Accretion is recognized as a phenomenon of fundamental importance in Astrophysics” (Frank, King and Raine, 1992). This is indeed the case as gravitational energy released in accretion processes is believed to be the dominant source of energy in a variety of high energy galactic compact sources in binary star systems containing white dwarfs, neutron stars and black holes; as well as extragalactic, supermassive black holes (Shapiro and Teukolsky, 1983). Both spherical/quasi-spherical accretion (Bondi, 1952; Frank et al., 1992 and references therein; Treves, Maraschi and Abramowicz, 1989 and papers therein); and disk accretion (Pringle, 1981; Dermott, Hunter, and Wilson, 1992; Frank et al., 1992 and references therein; Treves et al., 1989 and papers therein) operate and the type of accretion may depend on boundary conditions such as the motion of gas at infinity, its angular momentum per unit mass, etc. The field of accretion astrophysics is obviously vast. In this paper we concentrate on the inner regions of accretion flows onto galactic (stellar) black hole candidates (GBH) and supermassive black holes (SMBH) in active galactic nuclei (AGN). Accretion can be a very efficient process. The luminous energy released as matter accretes with mass accretion rate $`\dot{M}`$ is $`L10^{34}(\dot{M}/10^9M_{}\mathrm{yr}^1)\mathrm{erg}/\mathrm{sec}\mathrm{for}\mathrm{a}\mathrm{white}\mathrm{dwarf}`$ $`L10^{37}(\dot{M}/10^9M_{}\mathrm{yr}^1)\mathrm{erg}/\mathrm{sec}\mathrm{for}\mathrm{a}\mathrm{neutron}\mathrm{star}\mathrm{or}\mathrm{a}\mathrm{black}\mathrm{hole}`$ (1) With corresponding efficiencies ($`L=\eta \dot{M}c^2`$) $`\eta 10^4\mathrm{for}\mathrm{a}\mathrm{white}\mathrm{dwarf}`$ $`\eta 0.05\mathrm{for}\mathrm{a}\mathrm{neutron}\mathrm{star}`$ $`\eta 0.06\mathrm{for}\mathrm{a}\mathrm{Schwarzschild}\mathrm{black}\mathrm{hole}`$ $`\eta 0.42\mathrm{for}\mathrm{a}\mathrm{maximally}\mathrm{rotating}\mathrm{Kerr}\mathrm{black}\mathrm{hole}`$ (2) In spherical accretion, the specific angular momentum is, by definition zero. High temperatures can be achieved in the inner regions but whether this radiation escapes or not depends on the optical thickness of the accreting gas (Loeb and Laor 1992). Very high temperatures can be achieved in the inner regions of accretion disks: Minimum values apply in an optically thick gas where LTE is achieved, $$\frac{1}{4}aT_{min}^4=F(r)=\frac{3}{8}\pi \frac{GM\dot{M}}{r^3},$$ (3) whereas when complete internalization of gravitational energy by an optically thin gas is achieved, $$kT_{max}\frac{GMm_p}{r_{ms}}$$ (4) where $`r_{ms}`$ is the marginally stable radius. $`T_{max}`$ 150 MeV ($`2\times 10^{12}`$ K) for a Schwarzschild black hole and $``$ 750 MeV ($`10^{13}`$K) for a Kerr Black hole. §3 provides a brief summary of a variety of accretion models applicable to BHs. 
§4 covers the physics of accretion as related to the physics of viscous flows. §5 covers the topic of outflows, presumably originating close to the central object. §6 is discussion and conclusions. ## 3 Models Several models pertaining to disk and quasi-spherical accretion flows are presented here. ### 3.1 STANDARD DISKS Disk accretion proceeds via the outward transfer of angular momentum of the accreting gas. The seminal work of Shakura and Sunyaev (1973) provided the basic formalism of accretion in what hence came to be known as “standard” disks (see also Novikov and Thorne, 1973; Lynden-Bell and Pringle, 1974). Such disks are optically thick, geometrically thin, radiate locally as black bodies (BB) and have a radial dependence $`T(r)r^{3/4}`$. T is analogous to the effective temperature of a star. The modified BB spectrum has a characteristic broad peak, at low-frequencies $`S_\nu \nu ^2`$ while at high frequencies the spectrum drops exponentially. In between, but over a not too large range of frequencies the spectrum depends on frequency as $`\nu ^{1/3}`$, the characteristic accretion disk law (Pringle, 1981; Kafatos, 1988). For typical parameters applicable to SMBHs, $`10^8M_{}`$ and near-Eddington accretion, the peak of the modified black body spectrum occurs in the UV, $`\mathrm{log}\nu !15.315.5`$, with $`T20,00030,000`$ K (Ramos, 1997); whereas for GBHs, $`10M_{}`$, the peak occurs below $``$ 1 keV, with $`T1\mathrm{a}\mathrm{few}\times 10^7`$K (Shapiro, Lightman, and Eardley, 1976). In reality the disk accretion is more complicated than the above simple expressions. Several regions in the accretion flows have been identified (Shakura and Sunyaev, 1973; Novikov and Thorne, 1973; Kafatos, 1988): i) an outer region where gas pressure dominates over radiation pressure and where the opacity is predominantly free-free; ii) a middle region where gas pressure again dominates but the opacity is primarily due to electron scattering; and iii) an inner region where radiation pressure dominates over gas pressure and the opacity is primarily due to electron scattering. The latter is expected to occur for $`r50r_g`$, where $`r_g`$ is the gravitational radius of the black hole, $`GM/c^2`$. The simple thick disk solution applies to the “outer” disk region (Novikov and Thorne, 1973) as well as in accretion disks around white dwarfs. ### 3.2 MODIFIED DISKS Further considerations indicate that optically thick disks have to be modified. Modifications include electron scattering and Comptonization. If electron scattering dominates, the emitted flux is lower (Rybicki and Lightman, 1979) as follows $$I_\nu =B_\nu \sqrt{\frac{\kappa _{\mathrm{abs}}}{(\kappa _{\mathrm{abs}}+\kappa _{\mathrm{es}})}}$$ (5) Modifications due to electron scattering are important for $`\mathrm{log}\nu >15`$ (Ramos, 1997) for SMBHs. Malkan and Sargent (1982), and Ramos (1997) have applied these modifications. At high energies, photons are scattered many times by the electrons before they leave the disk. This was recognized as far back as the original work by Shakura and Sunyaev (1973). The result is that the (relatively) soft photons emanating from the disk are upscattered by the hot electrons and become hardened. This is known as Comptonization (Shapiro, Lightman, and Eardley, 1976; Sunyaev and Titarchuk, 1980; 1985) and is believed to produce hard, power-law radiation above the usual broad disk peak. 
Comptonization is important above $`\mathrm{log}\nu =15.8`$ for SMBHs (Ramos, 1997) and above 10 keV for GBH candidates such as Cygnus X-1 (Shapiro, Lightman, and Eardley, 1976). ### 3.3 TWO-TEMPERATURE DISKS AND ION-SUPPORTED TORI If $`T_e10^9`$ K in the inner portion of the disk, a two-temperature solution is obtained (Shapiro, Lightman, and Eardley, 1976: Eilek and Kafatos, 1983), where the ions are much hotter than the electrons, $`T_i10^{11}10^{13}`$K. In such a disk, a puffed-up inner region is formed supported by the ion pressure. Unsaturated Comptonization transfers energy from the electrons to the soft photons emitted in the cooler, underlying disk. The process is described by the dimensionless parameter y (Shapiro, Lightman, and Eardley, 1976), where y = ¡fractional energy change per scattering¿¡number of scatterings¿ or $$y=(\frac{4kT_e}{m_ec^2})\mathrm{max}(\tau _{es},\tau _{es}^2)$$ (6) Unsaturated Comptonization occurs for $`\mathrm{y}1`$ and is appropriate whenever there is a copious source of soft photons in the inner region (Shapiro, Lightman, and Eardley, 1976; Sunyaev and Titarchuk, 1985). The y-value is related to the energy flux spectral index A (where the energy flux is measured in $`\mathrm{keV}\mathrm{cm}^{}2\mathrm{s}^{}1\mathrm{keV}^{}1`$) through $`\mathrm{A}0.72\mathrm{y}^{0.917}`$ (Kafatos, 1983) and as such y $``$ 1 provides a natural explanation for the spectra of many cosmic sources. The 2T solution is thermally unstable (Piran, 1978) and whether it occurs or not depends on a variety of factors, including whether accretion is at near-Eddington rates, where the Eddington luminosity is $`\mathrm{L}_{\mathrm{Edd}}10^{38}(M/M_{})`$ erg/sec. Detailed spectra of 2T disks including relativistic effects, ion-ion collisions and resultant radiation spectra at gamma-rays (from pions and relativistic pairs which subsequently radiate via inverse-Compton)! have been calculated by Eilek and Kafatos (1983). Both the Shapiro, Lightman and Eardley and Eilek and Kafatos solutions apply to near-Eddington accretion rates. Gamma-gamma scatterings will degrade gamma-rays above $``$ MeV as these photons scatter the softer X-rays emerging from the 2T disk. The resultant optical depth for AGNs (Eilek and Kafatos, 1983) is $$\tau _{\gamma \gamma }5\times 10^2\mathrm{D}_{\mathrm{Mpc}}^2\mathrm{E}_\mathrm{T}(\mathrm{keV})\mathrm{N}(2\mathrm{E}_\mathrm{T})\mathrm{R}_\gamma ^1,$$ (7) where $`\mathrm{D}_{\mathrm{Mpc}}`$ is the distance of the AGN in Mpc, $`\mathrm{E}_\mathrm{T}`$ is the relevant threshold X-ray energy for $`e^+`$ $`e^{}`$ production and N is the corresponding photon flux at the earth (photons $`\mathrm{cm}^2`$ $`\mathrm{s}^1`$ $`\mathrm{keV}^1`$) computed at $`2\mathrm{E}_\mathrm{T}`$. $`\gamma `$-$`\gamma `$ scattering will form a broad shoulder $``$ 1 MeV with an exponentially-declining tail and the absence of high-energy radiation in many accreting GBH and SMBH (e.g. Seyferts) sources may be explained by this fundamental physical process (Eilek and Kafatos, 1983). High-energy gamma-rays (above 100 MeV - TeV) are probably arising in a jet (see below). A closely-related model to 2T Comptonized disks is the ion-supported torus model ( Rees, Begelman, Blandord and Phinney, 1982), proposed as the underlying engine in extragalactic jet sources. They found self-consistent 2T solutions even for $`r2000r_g`$ for sub-Eddington rates as low as $`10^4\dot{M}_{\mathrm{Edd}}`$. ### 3.4 HOT CORONAE Hot coronae may surround an underlying, cooler disk. 
Corona models have been proposed for Cygnus X-1 (Liang and Price, 1977; Bisnovatyi-Kogan and Blinnikov, 1977) and provide a competing alternative to hot, 2T disks. Coronae provide a natural explanation for unsaturated Comptonization, since a corona would envelop an underlying cool disk from which the soft photons emanate. Conduction-balanced coronae (see also Rosner, Tucker, and Vaiana, 1978) would produce temperatures, $`T\sim 10^{11}`$ K, lower than in 2T disks but still higher than in standard inner accretion disks. Hot haloes or coronae may also be produced in a bulk motion Comptonization model (see below) and would account for the hard radiation in such flows. ### 3.5 BULK COMPTONIZATION A promising theoretical model proposed for the soft-state GBH candidates is the bulk motion Comptonization model (Chakrabarti and Titarchuk, 1995; Titarchuk, Mastichiadis, and Kylafis, 1997; Titarchuk and Zanias, 1998; Shrader and Titarchuk, 1998). In this model (Bautista and Titarchuk, 1999), the production of hard photons peaks at $`\sim 2r_S`$ (where $`r_S`$ is the Schwarzschild radius, $`=2r_g`$). It explains the continuum X-ray spectra of GBH candidates in their soft state (Shrader and Titarchuk, 1998). Accretion onto the central BH proceeds via a spherically-converging flow at gravitational free-fall speeds. A variant of this model assumes the formation of strong shocks as the convergent inflow speeds become greater than the local sound speed (Chakrabarti and Titarchuk, 1995). In the latter model, the disk is divided into two main components: a standard cool disk, which extends to the outer boundary and produces soft (UV) photons; and an optically thin, sub-Keplerian halo (or corona), which terminates in a standing shock near the black hole. The post-shock region Comptonizes the soft (UV) photons, which are subsequently radiated as a hard spectrum with spectral index $`\sim `$1.5. ### 3.6 THICK DISKS AND ADVECTION-DOMINATED FLOWS In the above models, it is generally assumed that the disk motion is Keplerian (or quasi-Keplerian). In reality, when radiation pressure is included, the flow becomes non-Keplerian and the disk fattens geometrically (Abramowicz et al., 1978; Jaroszynski et al., 1980; Paczynski and Wiita, 1980; Paczynski, 1998). A funnel wall is produced, as matter cannot reach the axis of rotation. Such thick disk flows may play a role in the production of matter outflows, although the exact mechanism has not been proposed in the above works. These disks are special cases of advective disks (Chakrabarti, 1998). Besides pressure effects, radial inflow effects also have to be considered. In normal disks, the radial inflow speed is assumed to be negligibly small. In reality, this speed can be large, particularly in the inner regions. As such, the advection term $`v\,dv/dr`$ should be included in the momentum equation. It may also be the case that transonic flows always result in accretion onto black holes (Chakrabarti, 1990; Kafatos and Yang, 1994), as the inflow speed has to join the subsonic regime smoothly with the supersonic regime near the horizon. It may also be the case that shocks are prevalent (Chakrabarti, 1990; Yang and Kafatos, 1995; Chakrabarti and Titarchuk, 1995; Chakrabarti, 1996). Advection-dominated accretion flows (ADAFs) have been widely discussed in the literature (Narayan and Yi, 1994; Liang and Narayan, 1997; Esin, McClintock, and Narayan, 1997; Narayan, Mahadevan, and Quataert, 1999). At large or super-Eddington rates (Katz, 1977; Abramowicz et al., 1988), optically thick advection solutions are obtained.
In these solutions, the large optical depth of the inflowing gas traps the radiation, which is advected into the central black hole. At low, sub-Eddington accretion rates (Rees et al., 1982; Narayan and Yi, 1994), optically-thin advection flows, termed ADAFs, result. In this model, the accreting gas has low density, is unable to radiate efficiently, and the viscously dissipated energy is advected into the central BH. Optically-thin ADAFs are hot, 2T flows. This model assumes a self-similar solution and can (according to Narayan and co-workers) only operate at low accretion rates and high viscosity values, $`\alpha \gtrsim 0.1`$. Whether these conditions can be satisfied in realistic flows is another matter. ### 3.7 ADVECTION DOMINATED INFLOW-OUTFLOW SOLUTIONS ADAFs have the generic drawback of having a positive Bernoulli parameter in the disk. Such accretion flows thus tend to evaporate before they accrete. Recently, advection-dominated inflow-outflow solutions (ADIOS) have been proposed (Blandford & Begelman, 1999) which overcome this drawback by postulating a powerful outflow that carries away excess mass, energy, and momentum, thus allowing the accretion to proceed. Subramanian et al. (2000) are investigating a scenario in which relativistic outflows can be produced in such an advection-dominated accretion flow, as a result of Fermi acceleration due to collisions with kinks in the tangled magnetic field embedded in the accretion flow. This mechanism has been explored in detail by Subramanian et al. (1999), and it is expected that the low-density environments of advection-dominated flows will be ideal sites for the launching of outflows via this mechanism. On the other hand, time-dependent treatments of quasi-spherical accretion (Ogilvie, 1999) suggest that advection-dominated accretion can proceed without the need for outflows. ## 4 Viscosity Mechanisms Viscosity in accretion disks has been the subject of investigation for nearly 20 years (for a review, see Pringle 1981). It was recognized early on that ordinary molecular viscosity cannot produce the level of angular momentum transport required to provide accretion rates commensurate with observed levels of emission. In lieu of a detailed model of microphysical viscosity, Shakura & Sunyaev (1973) embodied all the unknown microphysics of viscosity in a single parameter $`\alpha `$ according to the prescription $$t_{r\varphi }=\alpha P,$$ (8) where $`t_{r\varphi }`$ denotes the $`r\varphi `$ component of the viscous stress and $`P`$ is the ambient pressure in the disk. Much of the subsequent development concentrated on obtaining estimates of the $`\alpha `$ parameter due to fluid turbulence (Shakura & Sunyaev 1973; Goldman & Wandel 1995), magnetic viscosity (Eardley & Lightman 1975; Balbus & Hawley 1998, and references therein), radiation viscosity (Loeb & Laor 1992), and ion viscosity (Paczynski 1978; Kafatos 1988). Subramanian et al. (1996) have shown that “hybrid” viscosity (neither pure ion viscosity nor pure magnetic viscosity) due to hot ions scattering off kinks in the tangled magnetic field embedded in accretion disks is the dominant form of viscosity in hot, two-temperature accretion disks. This work assumes the magnetic field embedded in the accretion disk to be isotropically tangled. Recent simulations (see, for instance, Armitage 1998) suggest, however, that the manner in which the magnetic field is tangled might be significantly anisotropic. Based on the detailed calculations presented in Subramanian et al.
(1996), we believe that this will merely introduce a direction dependence in the hybrid viscosity, but will not change the overall conclusion that this (hybrid viscosity) is the dominant form of viscosity in hot accretion disks. The viscosity obtained is characterized by a viscosity parameter $`\alpha \approx 0.01`$ (Subramanian et al. 1996). In the turbulent viscosity mechanism driven by convection (Yang, 1999), when gas pressure dominates, an upper limit to $`\alpha `$ of 0.01 is also obtained. For radiative viscosity (Loeb and Laor, 1992), in which radiation pressure dominates, $`\alpha `$ is greater than 1. These models, however, produce optically thick, relatively cool inner regions. ## 5 Jets/Outflows Several objects for which much of the preceding discussion of accretion disks is relevant also exhibit jets/outflows. Although several mechanisms for producing outflows have been proposed, only a few (e.g., Das 1998) make the connection between the outflows and the underlying accretion disk. Subramanian et al. (1999) have proposed a model in which the outflow is powered by Fermi acceleration of seed protons due to collisions with magnetic scattering centers embedded in the accretion disk/corona. This is a natural mechanism that is expected to operate in any accretion disk, and Subramanian et al. (2000) are examining its viability in the context of the ADIOS scenario. The physical scenario in which the Fermi acceleration of protons (which powers the outflow at its base) takes place is the same as that in which hybrid viscosity (Subramanian et al. 1996) is operative. The energy inherent in the shear flow (gravitational potential energy) is dissipated partly by viscous heating of the thermal protons, and partly by Fermi acceleration of the supra-thermal protons, which in turn form a relativistic outflow. ## 6 Discussion and Conclusions We have seen that a variety of scenarios predict hot inner regions (either 2T disks, hot coronae, or ADAFs) where $`T_{max}\approx 10^{11}`$–$`10^{13}`$ K. In most models, these hot inner regions occur for radii not much greater than tens of gravitational radii. In ADAFs, however, high temperatures persist for hundreds or even thousands of radii. Besides spectral observations that would reveal hard X-rays and $`\gamma `$-rays, timing observations would be crucial (with characteristic timescales comparable to the light-travel time through the hot region). Several of these models face theoretical difficulties or inconsistencies: 2T disks are unstable (although appropriate viscosity laws may mitigate this difficulty); hot coronae are attractive, but the mechanism of heating the corona is unknown. In many (all?) cases, transonic flows and even shocks may be prevalent, breaking up the usual assumptions of quasi-Keplerian or even steady disk structure. ADAFs are particularly problematic: The assumption of self-similarity is probably erroneous. ADAFs assume that the velocity in the $`\theta `$ direction is zero. However, at the boundary between the thin disk and the geometrically thick ADAF solution, $`v_\theta `$ is not zero. Also, at the axis this velocity would be zero, but then a funnel would be formed. ADAFs assume no shocks, but shocks, transonic flows, etc. are probably prevalent. Although the usual ADAF assumption of bremsstrahlung cooling produces inefficient cooling, there is no reason that other, more efficient cooling processes (such as Compton processes) would not be operating.
Finally, ADAFs seem to require $`\alpha \gtrsim 0.1`$, whereas realistic physical calculations for hot, optically thin flows suggest $`\alpha \approx 0.01`$.
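As a footnote to the timing argument above, the arithmetic below converts hot-region sizes of tens to thousands of gravitational radii into light-crossing times for the representative GBH and SMBH masses used in the text; the sketch is an illustration added here, not part of the original discussion.

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def t_cross(n_rg, M_solar):
    """Light-travel time across n_rg gravitational radii, in seconds."""
    return n_rg * G * M_solar * M_sun / c**3

for M in (10.0, 1e8):                       # GBH and SMBH masses
    print(M, t_cross(10, M), t_cross(1000, M))
# -> milliseconds for a 10 M_sun GBH, hours to days for a 10^8 M_sun SMBH
```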
# Self-stabilizing mutual exclusion on a ring, even if $`K=N`$ ## 1 Introduction In \[Dij74, Dij82\], Dijkstra presents the following mutual exclusion protocol for a ring of nodes $`0,\mathrm{},N`$, where each node can read the state $`x[]\in \{0,\mathrm{},K-1\}`$ of its anti-clockwise neighbour, and where node $`0`$ runs a different program than the other nodes. Dijkstra proves self-stabilization of this protocol to a configuration where only one node is privileged at a time, for $`K>N`$ under a central daemon, and says \[Dij82\]: “for smaller values of $`K`$, counter examples kill the assumption of self-stabilization”. Failing to find a counter example for $`K=N`$, we instead found the following proof that the system also stabilizes when $`K=N`$, provided that $`N>1`$. ###### Theorem 1.1 Even if $`K=N`$ and $`N>1`$, Dijkstra’s mutual exclusion protocol \[Dij74, Dij82\] (Protocol 1.1) stabilizes, under a central daemon, to a configuration where only one node is privileged. ###### Proof. We first define the legitimate configurations as those configurations that satisfy $`x[i]=a`$ for all $`i`$ with $`0\le i<j`$ and $`x[i]=(a-1)\mathrm{mod}K`$ for all $`i`$ with $`j\le i<N+1`$, for some choice of $`a`$ and $`j`$. Hence the configuration where all nodes have the same state is legitimate. Dijkstra already showed (independent of any restriction on $`K`$) closure of the legitimate states, that no run of the protocol ever terminates, and that in each of these runs the exceptional node will change state (aka “fire”) infinitely often. Let $`N>1`$. Consider the case where node $`0`$ fires for the first time. Then just before that, $`x[0]=x[N]=b`$ for some $`b`$, and the new value of $`x[0]`$ becomes $`b+1`$. Now consider the case when node $`0`$ fires again. Then just before that, $`x[0]=x[N]=b+1`$. In order for node $`N`$ to change value from $`b`$ to $`b+1`$, it must have copied $`b+1`$ from its anti-clockwise neighbour $`x[N-1]`$ (which exists if $`N>1`$). This moment must have occurred after node $`0`$ changed state to $`x[0]=b+1`$. But then, just after node $`N`$ copies $`b+1`$ from node $`N-1`$, we actually have $`x[N-1]=x[N]=x[0]=b+1`$. In other words, if $`N>1`$, three different nodes hold the same value $`b+1`$. Then the remaining $`N-2`$ nodes can each take a different value from the remaining $`K-1`$ values (unequal to $`b+1`$), which means that if $`K\ge N`$ (so in particular when $`K=N`$) at this point in time there is a value $`a`$ (among these $`K-1`$ values) not occurring as the state of any node on the ring. Because node $`0`$ fires infinitely often, eventually $`x[0]`$ becomes $`a`$. Because the other nodes merely copy values from their anti-clockwise neighbours, at this point no other node holds $`a`$. The next time node $`0`$ fires, $`x[N]=x[0]=a`$. The only way that node $`N`$ can have gotten the value $`a`$ is if all intermediate nodes have copied $`a`$ from node $`0`$. We conclude that for all nodes, $`x[i]=a`$, which is a legitimate state. ∎
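The theorem is easy to probe empirically. The following sketch (an illustration added here, not part of the paper) simulates the protocol on a ring of $`N+1`$ nodes under a randomized central daemon and reports the number of steps until exactly one node is privileged; since the number of privileges never increases, that condition persists once reached.

```python
import random

def privileges(x):
    """Privileged nodes of Dijkstra's K-state protocol on the ring 0..N."""
    privs = [0] if x[0] == x[-1] else []               # node 0 reads node N
    privs += [i for i in range(1, len(x)) if x[i] != x[i - 1]]
    return privs

def stabilization_time(N, K, seed, max_steps=10**6):
    """Steps until exactly one privilege remains, starting from a random
    configuration, under a randomized central daemon."""
    rng = random.Random(seed)
    x = [rng.randrange(K) for _ in range(N + 1)]       # nodes 0, ..., N
    for step in range(max_steps):
        privs = privileges(x)
        assert privs                                   # never deadlocks
        if len(privs) == 1:
            return step     # one privilege; the count can never grow again
        i = rng.choice(privs)                          # daemon's choice
        if i == 0:
            x[0] = (x[0] + 1) % K                      # exceptional node fires
        else:
            x[i] = x[i - 1]                            # copy the neighbour
    return None

for N in range(2, 10):                                 # the border case K = N
    print(N, stabilization_time(N, K=N, seed=N))
```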
# Why does a metal–superconductor junction have a resistance? ## 1 Introduction In the late sixties, Kulik used the mechanism of Andreev reflection to explain how a metal can carry a dissipationless current between two superconductors over arbitrarily long length scales, provided the temperature is low enough. One can say that the normal metal has become superconducting because of the proximity to a superconductor. This proximity effect exists even if the electrons in the normal metal have no interaction. At zero temperature the maximum supercurrent that the metal can carry decays only algebraically with the separation between the superconductors, rather than exponentially, as it does at higher temperatures. The recent revival of interest in the proximity effect has produced a deeper understanding of how the proximity-induced superconductivity of non-interacting electrons differs from the true superconductivity of electrons having a pairing interaction. Clearly, the proximity effect does not require two superconductors. One should be enough. Consider a junction between a normal metal and a superconductor (an NS junction). Let the temperature be zero. What is the resistance of this junction? One might guess that it should be smaller than in the normal state, perhaps even zero. Isn’t that what the proximity effect is all about? The answer to this question has been in the literature since 1979, but it has been appreciated only in the last few years. A recent review gives a comprehensive discussion within the framework of the semiclassical theory of superconductivity. A different approach, using random-matrix theory, was reviewed by the author. In this lecture we take a more pedestrian route, using the analogy between Andreev reflection and optical phase-conjugation to answer the question: Why does an NS junction have a resistance? ## 2 Andreev reflection and optical phase-conjugation It was first noted by Andreev in 1963 that an electron is reflected from a superconductor in an unusual way. The differences between normal reflection and Andreev reflection are illustrated in Fig. 1. Let us discuss them separately. * Charge is conserved in normal reflection but not in Andreev reflection. The reflected particle (the hole) has the opposite charge to the incident particle (the electron). This is not a violation of a fundamental conservation law. The missing charge of $`2e`$ is absorbed into the superconducting ground state as a Cooper pair. It is missing only with respect to the excitations. * Momentum is conserved in Andreev reflection but not in normal reflection. The conservation of momentum is an approximation, valid if the superconducting excitation gap $`\mathrm{\Delta }`$ is much smaller than the Fermi energy $`E_\mathrm{F}`$ of the normal metal. The explanation for the momentum conservation is that the superconductor cannot exert a significant force on the incident electron, because $`\mathrm{\Delta }`$ is too small compared to the kinetic energy $`E_\mathrm{F}`$ of the electron. Still, the superconductor has to reflect the electron somehow, because there are no excited states within a range $`\mathrm{\Delta }`$ from the Fermi level. It is the irresistible force meeting the immovable object. Faced with the challenge of having to reflect a particle without changing its momentum, the superconductor finds a way out by transforming the electron into a particle whose velocity is opposite to its momentum: a hole. * Energy is conserved in both normal and Andreev reflection.
The electron is at an energy $`\epsilon `$ above the Fermi level, and the hole is at an energy $`\epsilon `$ below it. Both particles have the same excitation energy $`\epsilon `$. Andreev reflection is therefore an elastic scattering process. * Spin is conserved in both normal and Andreev reflection. To conserve spin, the hole should have the opposite spin to the electron. This spin-flip can be ignored if the scattering properties of the normal metal are spin-independent. The NS junction has an optical analogue known as a phase-conjugating mirror. Phase conjugation is the effect that an incoming wave $`\mathrm{cos}(kx-\omega t)`$ is reflected as a wave $`\mathrm{cos}(-kx-\omega t)`$, with opposite sign of the phase $`kx`$. Since $`\mathrm{cos}(-kx-\omega t)=\mathrm{cos}(kx+\omega t)`$, this is equivalent to reversing the sign of the time $`t`$, so that phase conjugation is sometimes called a time-reversal operation. The reflected wave has a wavevector precisely opposite to that of the incoming wave, and therefore propagates back along the incoming path. This is called retro-reflection. Phase conjugation of light was discovered in 1970 by Woerdman and by Stepanov, Ivakin, and Rubanov. A phase-conjugating mirror for light (see Fig. 2) consists of a cell containing a liquid or crystal with a large nonlinear susceptibility. The cell is pumped by two counter-propagating beams at frequency $`\omega _0`$. A third beam is incident with a much smaller amplitude and a slightly different frequency $`\omega _0+\delta \omega `$. The non-linear susceptibility leads to an amplification of the incident beam, which is transmitted through the cell, and to the generation of a fourth beam, which is reflected. This non-linear optical process is called “four-wave mixing”. Two photons of the pump beams are converted into one photon for the transmitted beam and one for the reflected beam. Energy conservation dictates that the reflected beam has frequency $`\omega _0-\delta \omega `$. Momentum conservation dictates that its wavevector is opposite to that of the incident beam. Comparing retro-reflection of light with Andreev reflection of electrons, we see that the Fermi energy $`E_\mathrm{F}`$ plays the role of the pump frequency $`\omega _0`$, while the excitation energy $`\epsilon `$ corresponds to the frequency shift $`\delta \omega `$. A phase-conjugating mirror can be used for wavefront reconstruction. Imagine an incoming plane wave that is distorted by some inhomogeneity. When this distorted wave falls on the mirror, it is phase conjugated and retro-reflected. Due to the time-reversal effect, the inhomogeneity that had distorted the wave now changes it back into the original plane wave. An example is shown in Fig. 3. Complete wavefront reconstruction is possible only if the distorted wavefront remains approximately planar, since perfect time reversal upon reflection holds only in a narrow range of angles of incidence for realistic systems. This is an important, but not essential, complication that we will ignore in what follows. ## 3 The resistance paradox We have learned that a disordered medium (such as the frosted glass in Fig. 3) becomes transparent when it is backed by a phase-conjugating mirror. By analogy, one would expect that a disordered metal backed by a superconductor would become “transparent” too, meaning that its resistance should vanish (up to a small contact resistance that is present even without any disorder). This does not happen.
Upon decreasing the temperature below the superconducting transition temperature, the resistance drops slightly but then rises again back to its high-temperature value. (A recent experiment is shown in Fig. 4, where the conductance is plotted instead of the resistance.) This so-called “re-entrance effect” has been reviewed recently by Courtois et al., and we refer to that review for an extensive list of references. The theoretical prediction is that at zero temperature the resistance of the normal-metal–superconductor junction is the same as in the normal state. How can we reconcile this with the notion of Andreev reflection as a “time-reversing” process, analogous to optical phase-conjugation? To resolve this paradox, let us study the analogy more carefully, to see where it breaks down. For a simple discussion it is convenient to replace the disordered medium by a tunnel barrier (or semi-transparent mirror) and consider the phase shift accumulated by an electron (or light wave) that bounces back and forth between the barrier and the superconductor (or phase-conjugating mirror). A periodic orbit (see Fig. 5) consists of two round-trips, one as an electron (or light at frequency $`\omega _0+\delta \omega `$), the other as a hole (or light at frequency $`\omega _0-\delta \omega `$). The miracle of phase conjugation is that phase shifts accumulated in the first round trip are cancelled in the second round trip. If this were the whole story, one would conclude that the net phase increment is zero, so all periodic orbits would interfere constructively and the tunnel barrier would become transparent because of resonant tunneling. But it is not the whole story. There is an extra phase shift of $`-\pi /2`$ acquired upon Andreev reflection that destroys the resonance. Since the periodic orbit consists of two Andreev reflections, one from electron to hole and one from hole to electron, and both reflections have the same phase shift $`-\pi /2`$, the net phase increment of the periodic orbit is $`\pi `$ (mod $`2\pi `$) and not zero. So subsequent periodic orbits interfere destructively, rather than constructively, and tunneling becomes suppressed rather than enhanced. In contrast, a phase-conjugating mirror adds a phase shift that alternates between $`+\pi /2`$ and $`-\pi /2`$ from one reflection to the next, so the net phase increment of a periodic orbit remains zero. For a more quantitative description of the conductance we need to compute the probability $`R_{\mathrm{he}}`$ that an incident electron is reflected as a hole. The matrix of probability amplitudes $`r_{\mathrm{he}}`$ can be constructed as a geometric series of multiple reflections: $$r_{\mathrm{he}}=t^{\prime }\frac{1}{\mathrm{i}}t+t^{\prime }\frac{1}{\mathrm{i}}r\frac{1}{\mathrm{i}}r^{\prime }\frac{1}{\mathrm{i}}t+t^{\prime }\frac{1}{\mathrm{i}}\left[r\frac{1}{\mathrm{i}}r^{\prime }\frac{1}{\mathrm{i}}\right]^2t+\mathrm{}=t^{\prime }\frac{1}{\mathrm{i}}\left[1-r\frac{1}{\mathrm{i}}r^{\prime }\frac{1}{\mathrm{i}}\right]^{-1}t.$$ (1) Each factor $`1/\mathrm{i}=\mathrm{exp}(-\mathrm{i}\pi /2)`$ corresponds to an Andreev reflection. The matrices $`t,t^{\prime }`$ and $`r,r^{\prime }`$ are the $`N\times N`$ transmission and reflection matrices of the tunnel barrier, or more generally, of the disordered region in the normal metal.
(The number $`N`$ is related to the cross-sectional area $`A`$ of the junction and the Fermi wavelength $`\lambda _\mathrm{F}`$ by $`N\simeq A/\lambda _\mathrm{F}^2`$.) The matrices $`t,r`$ pertain to the electron and the matrices $`t^{\prime },r^{\prime }`$ to the hole. The resulting reflection probability $`R_{\mathrm{he}}=N^{-1}\mathrm{Tr}\,r_{\mathrm{he}}^{\mathrm{\dagger }}r_{\mathrm{he}}^{}`$ is given by $$R_{\mathrm{he}}=\frac{1}{N}\mathrm{Tr}\left(\frac{tt^{\mathrm{\dagger }}}{1+rr^{\mathrm{\dagger }}}\right)^2=\frac{1}{N}\mathrm{Tr}\left(\frac{tt^{\mathrm{\dagger }}}{2-tt^{\mathrm{\dagger }}}\right)^2.$$ (2) We have used the relationship $`tt^{\mathrm{\dagger }}+rr^{\mathrm{\dagger }}=1`$, dictated by current conservation. The conductance $`G_{\mathrm{NS}}`$ of the NS junction is related to $`R_{\mathrm{he}}`$ by $$G_{\mathrm{NS}}=\frac{4e^2}{h}NR_{\mathrm{he}}.$$ (3) In the optical analogue one has the probability $`R_\pm `$ for an incident light wave with frequency $`\omega _0+\delta \omega `$ to be reflected into a wave with frequency $`\omega _0-\delta \omega `$. The matrix of probability amplitudes is given by the geometric series $$r_\pm =t^{\prime }\frac{1}{\mathrm{i}}t+t^{\prime }\frac{1}{\mathrm{i}}r\,\mathrm{i}\,r^{\prime }\frac{1}{\mathrm{i}}t+t^{\prime }\frac{1}{\mathrm{i}}\left[r\,\mathrm{i}\,r^{\prime }\frac{1}{\mathrm{i}}\right]^2t+\mathrm{}=t^{\prime }\frac{1}{\mathrm{i}}\left[1-r\,\mathrm{i}\,r^{\prime }\frac{1}{\mathrm{i}}\right]^{-1}t.$$ (4) The only difference with Eq. (1) is the alternation of factors $`1/\mathrm{i}`$ and $`\mathrm{i}`$, corresponding to the different phase shifts $`\mathrm{exp}(\pm \mathrm{i}\pi /2)`$ acquired at the phase-conjugating mirror. The reflection probability $`R_\pm =N^{-1}\mathrm{Tr}\,r_\pm ^{\mathrm{\dagger }}r_\pm ^{}`$ now becomes independent of the disorder, $$R_\pm =\frac{1}{N}\mathrm{Tr}\left(\frac{tt^{\mathrm{\dagger }}}{1-rr^{\mathrm{\dagger }}}\right)^2=1.$$ (5) The disordered medium has become completely transparent. It is remarkable that a small difference in phase shifts has such far-reaching consequences. Note that one needs to consider multiple reflections in order to see the difference: The first term in the series is the same in Eqs. (1) and (4). That is probably why this essential difference between Andreev reflection and optical phase-conjugation was not noticed earlier. ## 4 How big is the resistance? Now that we understand why a disordered piece of metal connected to a superconductor does not become transparent, we would like to go one step further and ask whether the resistance (or conductance) is bigger or smaller than without the superconductor. To that end we compare the expression for the conductance of the NS junction \[obtained from Eqs. (2) and (3)\], $$G_{\mathrm{NS}}=\frac{4e^2}{h}\sum _{n=1}^N\frac{T_n^2}{(2-T_n)^2},$$ (6) with the Landauer formula for the normal-state conductance, $$G_\mathrm{N}=\frac{2e^2}{h}\sum _{n=1}^NT_n.$$ (7) The numbers $`T_1,T_2,\mathrm{},T_N`$ are the eigenvalues of the matrix product $`tt^{\mathrm{\dagger }}`$. These transmission eigenvalues are real numbers between 0 and 1 that depend only on the properties of the metal (regardless of the superconductor). Both formulas (6) and (7) hold at zero temperature, so we will be comparing the zero-temperature limits of $`G_{\mathrm{NS}}`$ and $`G_\mathrm{N}`$. Since $`x^2/(2-x)^2\le x`$ for $`x\in [0,1]`$, we can immediately conclude that $`G_{\mathrm{NS}}\le 2G_\mathrm{N}`$. If there is no disorder, then all $`T_n`$’s are equal to unity, hence $`G_{\mathrm{NS}}`$ reaches its maximum value of $`2G_\mathrm{N}`$ (cf. the numerical sketch below).
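As a numerical sketch of Eqs. (6) and (7) (added here for illustration, with an assumed ensemble of transmission eigenvalues), one can check the three regimes directly:

```python
import numpy as np

def g_n(T):    # normal-state conductance in units of 2e^2/h, Eq. (7)
    return np.sum(T)

def g_ns(T):   # NS-junction conductance in units of 2e^2/h, Eq. (6)
    return np.sum(2 * T**2 / (2 - T)**2)

rng = np.random.default_rng(1)
N = 2000

T = np.ones(N)                  # ballistic point contact: all T_n = 1
print(g_ns(T) / g_n(T))         # -> 2.0, the doubled conductance

T = np.full(N, 1e-4)            # tunnel barrier: all T_n << 1
print(g_ns(T) / g_n(T))         # -> ~5e-5, strongly suppressed

s = rng.uniform(0.0, 8.0, N)    # diffusive metal: s = L/zeta drawn uniformly,
T = 1 / np.cosh(s)**2           # the parameterization used in Eq. (8) below
print(g_ns(T) / g_n(T))         # -> ~1 up to sample fluctuations
```

The last ratio tends to unity, anticipating the diffusive-metal result derived next.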
For a tunnel barrier all $`T_n`$’s are $`\ll 1`$, hence $`G_{\mathrm{NS}}`$ drops far below $`G_\mathrm{N}`$. A disordered metal will lie somewhere in between these two extremes, but where? We have already alluded to the answer in the previous section: $`G_{\mathrm{NS}}=G_\mathrm{N}`$ for a disordered metal in the zero-temperature limit. To derive this remarkable equality, we parameterize the transmission eigenvalue $`T_n`$ in terms of the localization length $`\zeta _n`$, $$T_n=\frac{1}{\mathrm{cosh}^2(L/\zeta _n)},$$ (8) where $`L`$ is the length of the disordered region. Substitution into Eqs. (6) and (7) gives the average conductances $$\langle G_{\mathrm{NS}}\rangle _L=\frac{4e^2}{h}N_0^{\mathrm{}}𝑑\zeta P_L(\zeta )\mathrm{cosh}^{-2}(2L/\zeta ),$$ (9) $$\langle G_\mathrm{N}\rangle _L=\frac{2e^2}{h}N_0^{\mathrm{}}𝑑\zeta P_L(\zeta )\mathrm{cosh}^{-2}(L/\zeta ).$$ (10) (For Eq. (9) we have used that $`2\mathrm{cosh}^2x-1=\mathrm{cosh}2x`$.) The probability distribution $`P_L(\zeta )`$ of $`\zeta `$ is independent of $`L`$ in a range of lengths between the mean free path $`l`$ and $`Nl`$. It then follows immediately that $$\langle G_{\mathrm{NS}}\rangle _L=2\langle G_\mathrm{N}\rangle _{2L}.$$ (11) Since $`G_\mathrm{N}\propto 1/L`$, according to Ohm’s law, we arrive at the equality of $`G_{\mathrm{NS}}`$ and $`G_\mathrm{N}`$. The restriction to the range $`l\ll L\ll Nl`$ is the restriction to the regime of diffusive transport: For smaller $`L`$ we enter the ballistic regime and $`G_{\mathrm{NS}}`$ rises to $`2G_\mathrm{N}`$; for larger $`L`$ we enter the localized regime, where tunneling takes over from diffusion and $`G_{\mathrm{NS}}`$ becomes $`\ll G_\mathrm{N}`$. ## 5 Conclusion We have learned a fundamental difference between Andreev reflection of electrons and phase-conjugation of light. While it is appealing to think of the Andreev-reflected hole as the time reverse of the incident electron, this picture breaks down upon closer inspection. The phase shift of $`-\pi /2`$ acquired upon Andreev reflection spoils the time-reversing properties and explains why a disordered metal does not become transparent when connected to a superconductor. The research on which this lecture is based was done in collaboration with J. C. J. Paasschens. It was supported by the “Stichting voor Fundamenteel Onderzoek der Materie” (FOM) and by the “Nederlandse organisatie voor Wetenschappelijk Onderzoek” (NWO).
# Untitled Document I want to withdraw the paper because it is completely wrong.
# A Particle Model of Rolling Grain Ripples Under Waves ## I The rolling grain ripples An example of a flat bed with rolling grain ripples coexisting with vortex ripples is seen in Fig. 1. In the middle of the picture the ripples have not yet formed and the bed is still flat, while at the top vortex ripples are seen to invade the flat bed. The vortex ripples are typically nucleated from the boundaries or from a perturbation in the bed. In the lower part of the picture the rolling grain ripples have formed on the flat bed, and are seen as the small bands of loose grains on top of the flat bed. The flow over the bed created by the surface wave oscillates back and forth in a harmonic fashion. This flow creates a shear stress on the bed, $`\tau (t)`$, which in non-dimensional form reads: $$\theta (t)=\frac{\tau (t)}{\rho (s-1)gd},$$ (1) where $`\rho `$ is the density of water, $`s`$ is the relative density of the sand (for quartz sand in water $`s=2.65`$), $`g`$ is the acceleration of gravity, and $`d`$ is the mean diameter of the grains. $`\theta `$ is usually called the Shields parameter. When the shear stress exceeds a critical value $`\theta _c`$, the grains start to move. For a turbulent boundary layer the value of the critical Shields parameter is $`\theta _c\approx 0.06`$. The grains which have become loosened from the bed start to move back and forth on the flat bed, and after a while the grains come to rest in parallel bands. On the lee side of each band the bed is shielded from the full force of the flow, creating a “shadow zone” where the grains move more slowly than on the upstream side of the bands. Due to this shadow zone, more grains end up in the bands than leave the bands, and they grow until they form small ridges, the rolling grain ripples. When the rolling grain ripples are fully developed, no grains will be pulled loose from the bed in the space between them, and they are stable. However, in reality the rolling grain ripples are dominated by invading vortex ripples, which is the main reason why they are rarely observed in Nature. ## II A simple model The above scenario can be formulated mathematically by writing an equation of motion for each grain/particle. In the following, an equation of motion for the particles is developed. In the beginning the particles represent the grains, but as the single grains quickly merge, the particles most of the time represent ripples. First the velocity of each particle is found, assuming that the particle is alone on the flat bed, and then the influence of the shadow zones from neighboring particles is taken into account. Consider $`N`$ particles rolling on top of a rough, solid surface. Each particle is characterized by its position $`x_i`$ and its height $`h_i`$ (see Fig. 2). As the ripples are triangular, the area of each particle $`A_i`$ and its height are related as: $$h_i=\sqrt{A_i\mathrm{tan}\varphi },$$ (2) where $`\varphi `$ is the angle of repose of the sand ($`\approx 33^{\circ }`$). The ripples move back and forth more slowly than a single grain, according to the “1/height” law. This law is well known in the study of dunes in the desert or sub-aqueous dunes, and can be illustrated by a simple geometrical argument. Suppose there is a flux of sand $`q_{crest}`$ over the crest of a ripple or a dune (Fig. 3). To make the ripple move a distance $`\delta x`$, an amount of sand $`h\delta x`$ is needed. As the sediment flux is the amount of sand per unit time, the velocity of the ripple is $`u_{ripple}=q_{crest}/h\propto 1/h`$.
If the height of the initial particles (the single grains) is assumed to be equal to the grain diameter $`d`$, the velocity of the particles can be related to that of the single grains as: $$u_i=\frac{d}{h_i}U_g\mathrm{sin}(\omega t),$$ (3) where $`U_g`$ is the velocity amplitude of the motion of a single grain and $`\omega `$ is the angular frequency of the oscillatory motion. The flow moves back and forth and makes the particles move accordingly on top of the bed. In the wake of each particle/ripple there is a shadow zone (Fig. 4), which is the area behind the particle where the absolute value of the shear stress is smaller than it would be on a flat bed. The length of the shadow zone is therefore larger than the length of the separation bubble formed by the particle (note that the shadow zone would be present even in the absence of separation). If the shadow zone is much smaller than the amplitude of the water motion, the flow on the lee side of the ripple can be assumed to have sufficient time to become fully developed. The fully developed flow over a triangle is similar to that past a backward-facing step in steady flow, which has been extensively studied (see, e.g., Tjerry). In that case the relevant quantities, i.e., the length of the separation bubble, the length of the wake, etc., scale with the height of the step. As a first assumption, the shadow zone is therefore assumed to have a length which is proportional to the height of the particle: $`\alpha _sh_i`$. If a particle enters the shadow zone of another particle, it is slowed down according to the distance between the particles. This means that the actual velocity of a particle is $`u_if(\mathrm{\Delta }x)`$, where $`\mathrm{\Delta }x`$ is the distance between the grain and the nearest neighbor upstream, $`u_i`$ is the velocity of the particle outside the shadow zone, and $`f`$ is a function determining the nature of the slowing of the motion of the particle. A simple linear function is used, as shown in Fig. 4. The exact form of the function $`f`$ is not crucial, as will be evident later, the important parameter being the extent of the shadow zone as determined by $`\alpha _s`$. It is now possible to write the equations of motion for the particles as a system of coupled ODEs: $$\dot{x_i}=\frac{d}{h_i}u_g(t)\underset{\mathrm{positive\ half\ period}}{\underbrace{f\left(\frac{x_i-x_{i-1}}{\alpha _sh_{i-1}}\frac{u(t)}{|u(t)|}\right)}}\underset{\mathrm{negative\ half\ period}}{\underbrace{f\left(\frac{x_i-x_{i+1}}{\alpha _sh_{i+1}}\frac{u(t)}{|u(t)|}\right)}}$$ (4) for $`i=1,\mathrm{},N`$, where $`u_g(t)=U_g\mathrm{sin}(\omega t)`$. The motion of a particle is thus made up of three parts: (i) the motion of the single undisturbed particle, (ii) the effect of the shadow from the particle to the left ($`i-1`$), which might affect particle $`i`$ in the first half period, and (iii) the effect of the shadow of the particle to the right ($`i+1`$) in the second half period. When lengths are scaled by the diameter of the grains and time by $`1/\omega `$, it is possible to identify the three relevant dimensionless parameters of the model (a numerical sketch of the resulting system is given below): $`\alpha _s`$, the length of the shadow zone divided by the height. $`a_g/d`$, the amplitude of the motion of a single grain divided by the grain diameter ($`a_g=U_g/\omega `$). $`\lambda _i/d`$, the initial distance between the grains; $`\lambda _i=L/N`$, where $`L`$ is the length of the domain and $`N`$ is the initial number of grains.
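The following minimal sketch (added here for illustration; it is not the author's code) integrates Eq. (4) with an explicit Euler scheme on a periodic domain. The linear shadow function, the merging rule (two particles closer than one grain diameter coalesce, their areas adding as in Eq. (2)), the time step, and the random initial perturbation are all assumptions of the sketch.

```python
import numpy as np

def simulate(N=200, L=646.0, a_g=35.0, alpha_s=10.0,
             periods=200, steps_per_period=400, seed=1):
    """Explicit-Euler integration of Eq. (4) on a periodic domain.
    Lengths are in grain diameters d, time in units of 1/omega."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.0, L, N))      # particle positions
    A = 1.0 + 0.1 * rng.uniform(-1, 1, N)    # areas, 1 +/- 10 per cent
    tan_phi = np.tan(np.radians(33.0))
    dt = 2 * np.pi / steps_per_period
    t = 0.0
    for _ in range(periods * steps_per_period):
        h = np.sqrt(A * tan_phi)             # heights, Eq. (2)
        u = a_g * np.sin(t)                  # free-grain velocity, Eq. (3)
        if u >= 0:                           # shadow cast by the left neighbour
            gap, hs = (x - np.roll(x, 1)) % L, np.roll(h, 1)
        else:                                # shadow cast by the right neighbour
            gap, hs = (np.roll(x, -1) - x) % L, np.roll(h, -1)
        slow = np.clip(gap / (alpha_s * hs), 0.0, 1.0)   # linear f (Fig. 4)
        x = (x + dt * (u / h) * slow) % L
        t += dt
        while True:                          # merge colliding particles
            order = np.argsort(x)
            x, A = x[order], A[order]
            gap = (np.roll(x, -1) - x) % L
            if len(x) <= 2 or gap.min() >= 1.0:
                break
            i = int(gap.argmin())
            j = (i + 1) % len(x)
            x[i] = x[i] if A[i] >= A[j] else x[j]   # larger one keeps its spot
            A[i] += A[j]
            x, A = np.delete(x, j), np.delete(A, j)
    gaps = (np.roll(x, -1) - x) % L
    return len(x), gaps.mean()               # ripple count and mean spacing

print(simulate())   # -> (N_eq, lambda_eq in units of d)
```

The defaults mimic the example quoted below (λ_i/d = 646/200 = 3.23, a_g/d = 35, α_s = 10).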
### A Relation to physical quantities Even though the model seems quite heuristic, the parameters entering the model, $`a_g/d`$, $`\lambda _i/d`$ and $`\alpha _s`$, can be related to physical parameters describing the flow and the properties of the grains. The line of argument presented here closely follows that used to derive the flux of sand on a flat bed (the bed load), as can be found in textbooks. First the velocity of a single grain will be derived, from which $`a_g/d`$ can be inferred. Thereafter the initial number of grains in motion is found, from which $`\lambda _i/d`$ follows. Finally the length of the shadow zone $`\alpha _s`$ is discussed. #### 1 The velocity of the grains The velocity of the grain can be found by considering the force balance on a single grain lying on the flat bed. The grain is subject to a drag force proportional to the square of the relative flow velocity $`u_r=u_{nb}-u_g`$, where $`u_{nb}`$ is the velocity near the bed and $`u_g`$ is the velocity of the grain: $$F_d=\frac{1}{2}C_D\rho A|u_r|u_r,$$ (5) where $`A`$ is the area of the grain and $`C_D`$ is a drag coefficient. The absolute value is used to obtain the right sign of the force. The velocity profile in the vicinity of the bed is supposed to be logarithmic. This is a reasonable assumption except when the flow reverses. However, during reversal the velocities are small anyway and the accuracy is of minor importance. The logarithmic profile over a rough bed can be written as: $$u(y)=\frac{u_f}{\kappa }\mathrm{ln}\left(\frac{30y}{k_N}\right),$$ (6) where $`\kappa =0.41`$ is the von Kármán constant, $`u_f\equiv \sqrt{\tau /\rho }`$ is the friction velocity, and $`k_N`$ is the Nikuradse roughness length. It is then possible to find the near-bed velocity as the velocity at $`y=d/2`$: $$u_{nb}=\xi u_f$$ (7) where the constant $`\xi `$ can be determined from Eq. (6) by assuming $`k_N=d`$. Opposing the drag on the grain is the friction of the bed: $$F_f=\mu W$$ (8) where $`\mu `$ is a friction coefficient and $`W=\rho g(s-1)d^3\pi /6`$ is the immersed weight of the grain. By making a balance of forces, $`F_d+F_f=0`$, the velocity of the grain can be found: $$u_g=\xi u_f\left(1-\sqrt{\left|\frac{\theta _c}{\theta }\right|}\right),$$ (9) where $$\theta _c=\frac{4\mu }{3C_D\xi ^2},$$ (10) is the critical Shields parameter. Usually $`\mu =\mathrm{tan}\varphi \approx 0.65`$. #### 2 The initial spacing of grains Now that the velocity of the grains has been calculated, there still remains to determine the number of grains per area $`n`$ in motion. To this end a small volume of moving sand at the top of the flat bed is considered. The balance of the forces acting on this volume is written as: $$\tau _b=\tau _G+\tau _c.$$ (11) The interpretation of the terms is as follows: The parameter $`\tau _b`$ is the shear stress on the top of the bed load layer. It is assumed that this is equal to the shear stress on a fixed flat bed. $`\tau _G`$ is the stress arising from the inter-granular collisions, giving rise to “grain stresses”, modeled as: $`\tau _G=n\mu W`$. It is assumed that the inter-granular stress absorbs all the stress, except the critical stress $`\tau _c`$; this is the so-called “Bagnold hypothesis”. Making Eq. (11) non-dimensional by dividing by $`\rho (s-1)gd`$, the number of grains in motion is found as: $$n=\frac{6}{\pi d^2\mu }(\theta -\theta _c),$$ (12) If $`\theta <\theta _c`$, then no grains are in motion and $`n=0`$.
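For orientation, the sketch below (not from the paper) plugs representative numbers into Eqs. (6), (10), and (12) and into the spacing relation derived next; the drag coefficient $`C_D`$ and the Shields parameter $`\theta `$ are assumed values.

```python
import numpy as np

kappa = 0.41
mu    = 0.65                        # friction coefficient, tan(phi)
C_D   = 0.4                         # drag coefficient (assumption)
d     = 0.4e-3                      # grain diameter [m]

xi      = np.log(30 * 0.5) / kappa  # Eq. (6) at y = d/2 with k_N = d
theta_c = 4 * mu / (3 * C_D * xi**2)     # Eq. (10); ~0.05, near the quoted 0.06
print(theta_c)

theta = 0.2                         # Shields parameter of the flow (assumption)
n = 6 / (np.pi * d**2 * mu) * (theta - theta_c)      # Eq. (12), grains per m^2
print(1 / (np.sqrt(n) * d))                          # lambda_i/d from the density
print(np.sqrt(np.pi * mu / (6 * (theta - theta_c)))) # same number, cf. Eq. (14)
```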
$`n`$ can also be viewed as the initial density of grains, and by assuming a square packing of the grains the initial distance between the grains becomes: $$\frac{\lambda _i}{d}=\frac{1}{\sqrt{n}d}$$ (13) $$=\sqrt{\frac{\pi \mu }{6(\theta -\theta _c)}}.$$ (14) #### 3 The length of the shadow zone The last parameter, $`\alpha _s`$, which characterizes the length of the shadow zone, is a bit harder to estimate accurately. By exploring the analogy with the backward-facing step which was suggested in section II, we can get some bounds for $`\alpha _s`$. In the backward-facing step there is a zone with flow separation which extends approximately 6 step heights from the step, see Fig. 5. After approximately 16 step heights there is a point where the shear stress has a small maximum. It follows that the length of the shadow zone should be longer than the separation zone, but shorter than the distance to the maximum in the shear stress, i.e., $`6<\alpha _s<16`$. ### B Numerical and analytical solutions of the model In the following section the behavior of the model is examined. To study the detailed behavior, the set of coupled ordinary differential equations (4) is integrated numerically. It is seen that the model reaches a steady state, and an analytical expression for the spacing between the ripples in the steady state is developed. The numerical simulations in this section are based on a simple example with $`a_g/d=35`$ and $`\lambda _i/d=3.23`$, and $`\alpha _s`$ is set to 10. As initial condition all particles have an area of $`1.0`$ ± 10 %, to add some perturbation. The initial number of particles $`N`$ in this example is 800. In the first few periods a lot of grains collide and merge (Fig. 6). As the ripples form and grow bigger, the evolution slows down, until a steady state is reached (Fig. 7). The spacing between the ripples in the steady state shows some scatter around the average value, which is also observed in experiments. The variation in the average spacing of the steady state, $`\lambda _{eq}`$, for realizations with different initial random seeds turned out to be of the order of $`1/N_{eq}`$, where $`N_{eq}`$ is the final number of ripples. To study the behavior of the average spacing of the ripples $`\lambda _{eq}`$, a number of simulations were made where the parameters were varied one at a time. Each run was started from the initial disordered state. Changing $`a_g/d`$ only results in a minor change in the spacing of the ripples (Fig. 8a). The final spacing between the ripples does depend on the length of the shadow zone $`\alpha _s`$; the longer the shadow zone, the longer the ripples (Fig. 8b). This can be used to estimate the average equilibrium spacing between the ripples. When the distance between two ripples is longer than the shadow zone of the ripples, they are no longer able to interact. This gives: $$\lambda _{eq}>\alpha _sh_{eq},$$ (15) where the subscript $`eq`$ denotes the average value at equilibrium. However, if two ripples have a spacing just barely shorter than Eq. (15), they will be able to interact and eventually they will merge. One can therefore expect to find spacings up to $`\lambda _{eq}<2\alpha _sh_{eq}`$. Assuming that the average length is in between the two bounds, one gets: $$\lambda _{eq}=\gamma \alpha _sh_{eq},\qquad 1<\gamma <2,$$ (16) where $`\gamma `$ can be found by comparing the results from the full simulations with Eq. (16).
The height of the ripples at equilibrium can be found by splitting the initial number of particles evenly onto the equilibrium ripples. Then the average area at equilibrium is $`A_{eq}=(\lambda _{eq}/\lambda _i)d^2`$, and from Eq. (2) it follows that the height is $`h_{eq}/d=\sqrt{\mathrm{tan}\varphi \,\lambda _{eq}/\lambda _i}`$, which gives an average equilibrium spacing: $$\frac{\lambda _{eq}}{d}=\gamma ^2\frac{\alpha _s^2d\mathrm{tan}\varphi }{\lambda _i}$$ (17) $$=\alpha _s^2\gamma ^2\sqrt{\frac{6\mathrm{tan}\varphi }{\pi }}\sqrt{\theta -\theta _c}.$$ (18) The equilibrium spacing is therefore found to be proportional to $`\sqrt{\theta -\theta _c}`$, with the constant of proportionality being made up of $`\alpha _s`$, $`\gamma `$, and various geometrical factors. None of the quantities related to the dynamical evolution of the ripples, e.g., the velocity of the ripples, the shape of the function $`f(\mathrm{\Delta }x)`$, etc., enter into the expression. ### C Comparison with experiments The only parameter that has not been accurately determined is $`\alpha _s`$. The value of this parameter can be estimated by comparison with measurements. In 1976, Sleath made a series of experiments, measuring the spacing between rolling grain ripples. The ripples were formed on a flat tray oscillating in still water, using sand of two different grain sizes: 0.4 mm and 1.14 mm. To compare with the experiments, the value of $`\theta `$ needs to be calculated. $`\theta `$ reflects the number of grains in motion, and it seems natural to use the maximum value during the wave period, $`\theta _{max}`$. To estimate $`\theta _{max}`$, the shear stress on the bed has to be estimated. The maximum shear stress on the bed during a wave period, $`\tau _{max}`$, can be found using the concept of a constant friction factor $`f_w`$: $$\tau _{max}=\frac{1}{2}\rho f_wu_{max}^2,$$ (19) with $`u_{max}`$ being the maximum near-bed velocity. The friction factor can be estimated using the empirical relation: $$f_w=0.04\left(\frac{a}{k_N}\right)^{-0.25}$$ (20) where $`a=u_{max}/\omega `$ and $`k_N\approx d`$. The Shields parameters in the experiments were found in this way to range from the critical Shields parameter to $`\theta =0.42`$. For the high Shields parameters the rolling grain ripples were reported to be very unstable, and to quickly develop into vortex ripples. In these cases, the measured ripple spacing then reflects the spacing between the rolling grain ripples before they developed into vortex ripples. In Fig. 9 the experimental results are compared with runs of the model using $`\alpha _s=15.0`$ (the reason for this particular value will be clear shortly) and $`N=10000`$. By fitting all the runs to Eq. (16) it was found that $`\gamma =1.40`$. The results using Eq. (18) and $`\gamma =1.40`$ are shown with a line. First of all, it is seen that Eq. (18) predicts the results of the full model, Eq. (4), well. The correspondence between the model and the experiments is reasonable, but there are some systematic discrepancies, which will be discussed. There are a few points with small ripple spacing for which the model does not fit the measurements. These measurements have a Shields parameter very near the critical value (i.e., just around the onset of grain motion), which implies some additional complications.
The grains used in the experiment were not of a uniform size; rather they were part of a distribution of grain sizes, and the grain size reported is then the median of the distribution, $`d_{50}`$. The Shields parameter is calculated using the median of the distribution, but actually one could calculate a Shields parameter for different fractions of the distribution, thus creating a $`\theta _{10}`$, a $`\theta _{50}`$, etc. When $`\theta _{50}`$ is smaller than the critical Shields parameter, $`\theta _{10}`$ might still be higher than the critical Shields parameter. This implies that grains with a diameter smaller than $`d_{50}`$ will be in motion, while the larger grains will stay in the bed. As only $`d_{50}`$ is used in the calculation of the equilibrium ripple spacing, the distance between the grains $`\lambda _i`$ will be overestimated near the critical Shields parameter, where the effect of the poly-dispersity is expected to be strongest. An overestimation of $`\lambda _i`$ will lead to an under-prediction of the ripple length, which is exactly what is seen in Fig. 9. There are also three points from the experiments taken at very large Shields parameters which seem to be a bit outside the prediction of the model. As already mentioned, these points are probably doubtful because of the very fast growth of vortex ripples. It is therefore reasonable to assume that vortex ripples invaded the rolling grain ripples before these had time to reach their full length. To find a reasonable value of $`\alpha _s`$, Eq. (18) was fitted to the experimental points. To avoid the points which might be of doubtful quality, as discussed above, only the points in the range $`0.075<\theta <0.3`$ were used. This gave the value of $`\alpha _s=15.0`$, in agreement with the qualitative arguments in section II A 3. ## III Discussion of the results From the comparison of the model with measurements it seems as if the model confidently reproduces the experiments. In the model the number of grains in motion is constant (even though the number of particles changes). In an experimental situation, however, new grains might be lifted from the bed and added to the initial number of grains in motion. As the part of the flat bed between the ripples is covered by the shadow zones of the particles, these stretches will be shielded from the full force of the flow, and only very slowly will new grains be loosened here. This small addition of new grains will result in a slow growth of the rolling grain ripples, such that they eventually grow into vortex ripples. This slow growth is very well illustrated by recent measurements, but is not covered by the present model. ## IV Conclusion In conclusion, a model has been created which explains the creation and the equilibrium state of rolling grain ripples of the type described by Bagnold. The final distance between the ripples is proportional to $`\sqrt{\theta -\theta _c}`$. The model has been compared with measurements with reasonable agreement. ###### Acknowledgements. It is a pleasure to thank Tomas Bohr, Clive Ellegaard, Enrico Foti, Jørgen Fredsøe and Vachtang Putkaradze for useful discussions. I also wish to thank the anonymous referees for constructive criticism.
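To close the loop on the comparison procedure described above, the sketch below (an added illustration, not the author's code) chains Eq. (20), Eq. (19), Eq. (1), and Eq. (18) to go from flow conditions to a predicted ripple spacing; the near-bed velocity and wave period are assumed example values, not Sleath's actual data.

```python
import numpy as np

def theta_max(u_max, omega, d, s=2.65, rho=1000.0, g=9.81):
    """Maximum Shields parameter from Eqs. (20), (19) and (1)."""
    a = u_max / omega                         # near-bed excursion amplitude
    f_w = 0.04 * (a / d) ** -0.25             # Eq. (20) with k_N ~ d
    tau_max = 0.5 * rho * f_w * u_max**2      # Eq. (19)
    return tau_max / (rho * (s - 1) * g * d)  # Eq. (1)

def lambda_eq_over_d(theta, theta_c=0.06, alpha_s=15.0, gamma=1.40, mu=0.65):
    """Equilibrium spacing, Eq. (18), with the fitted alpha_s and gamma
    and mu = tan(phi)."""
    return alpha_s**2 * gamma**2 * np.sqrt(6 * mu / np.pi) * np.sqrt(theta - theta_c)

# illustrative oscillating-tray conditions: u_max = 0.3 m/s, T = 2 s, d = 0.4 mm
th = theta_max(u_max=0.3, omega=2 * np.pi / 2.0, d=0.4e-3)
print(th, lambda_eq_over_d(th))   # theta ~ 0.07, lambda_eq ~ 50 d ~ 2 cm
```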
# On the Possibility that Mgii Absorbers Can Track the Merger Evolution of Galaxy Groups from High Redshift ## 1. Motivations: Why Study Mgii Systems? One of the central motivations for studying intervening quasar absorption lines is that they provide insights into galactic evolution from the perspective of the chemical, ionization, and kinematic conditions of interstellar, halo, and intragroup gas. In this contribution, a “new” taxonomy of absorption line systems is presented, one in which equal, simultaneous consideration is given to the Hi, Mgii, Feii, and Civ absorption strengths and to the gas kinematics. Details of the work presented here can be found elsewhere (Churchill 1997; Churchill et al. 1999a,b,c). Here, we investigate an extreme, rapidly evolving class of Mgii system and discuss the possibility that its further study may provide insights into the evolution of clustering on the scale of galaxy groups. Arguably, the Mgii–selected systems at $`z\sim 1`$ are best suited for a taxonomic study of absorption systems because: (1) their statistical (Lanzetta, Turnshek, & Wolfe 1987; Steidel & Sargent 1992) and kinematic (Petitjean & Bergeron 1990; Churchill 1997; Charlton & Churchill 1998) properties are thoroughly documented, (2) they arise in structures possessing a wide range of Hi column densities, including sub–Lyman limit (Churchill et al. 1999, 1999a), Lyman limit (e.g. Steidel & Sargent 1992), and damped Ly$`\alpha `$ (e.g. Rao & Turnshek 1998; Boissé et al. 1998) systems, (3) they give rise to a range of Civ absorption strengths (Bergeron et al. 1994; Churchill et al. 1999b,c), and (4) those with rest–frame equivalent widths, $`W_r(\text{Mg}\text{ii})`$, greater than $`0.3`$ Å are associated with normal, bright galaxies (Bergeron & Boissé 1991; Steidel, Dickinson, & Persson 1994; Churchill, Steidel, & Vogt 1996; Steidel 1998). ## 2. Ionization, Kinematics, and Absorber Taxonomy The Mgii kinematics, and the Mgii, Feii, Civ, and Ly$`\alpha `$ absorption strengths, were studied for 45 Mgii absorption–selected systems with redshifts 0.4 to 1.4. The kinematics of the Mgii and Feii absorption was resolved at $`\sim 6`$ km s<sup>-1</sup> resolution with the HIRES instrument (Vogt et al. 1994) on Keck I. The Ly$`\alpha `$ and Civ absorption was obtained from the HST archive of FOS spectra. These UV spectra have resolution $`\sim 230`$ km s<sup>-1</sup>, so that the detailed kinematics of the neutral and high ionization gas are not available for study. See Figure 1 for an example of the data. For any given $`W_r(\text{Mg}\text{ii})`$, there is a large, $`\sim 1`$ dex, variation in the ratio $`W_r(\text{C}\text{iv})/W_r(\text{Mg}\text{ii})`$ (Churchill et al. 1999b,c). This indicates a large spread in the global ionization conditions in Mgii absorbers and, by implication, in the ISM and halos of the host galaxies, and possibly in the intragroup media when small groups are intercepted by the line of sight. It was also found that $`W_r(\text{C}\text{iv})`$ is strongly correlated with the Mgii kinematics (Churchill et al. 1999a,c), where the kinematics is quantified using the second velocity moment of the Mgii $`\lambda 2796`$ optical depth. As such, there is a strong connection between the kinematic distribution of the low ionization gas and the presence of a strong, high ionization phase. For the majority of the systems, the gas must be multiphase in that a substantial fraction of the high ionization gas arises in a physically distinct phase from the lower ionization gas (Churchill et al.
1999c; also see Churchill & Charlton 1999). A clustering analysis (tree and $`K`$–means) was used to examine multivariate trends between the Mgii kinematics and the Mgii, Feii, Ly$`\alpha `$, and Civ absorption strengths. To a high level of significance (greater than 99.99% confidence), it was found that the properties of Mgii systems can be organized into five classes, which we have called “DLA/Hi–Rich”, “Double”, “Classic”, “Civ–deficient”, and “Single/Weak”. An example system for each of the five classes is shown in Figure 1. Ticks above the Mgii and Feii profiles (HIRES/Keck) give the velocities of the multiple Voigt profile components (Churchill 1997) for the singly ionized gas, and ticks above the Ly$`\alpha `$ profile and both members of the Civ doublet (FOS/HST) show the expected locations of these components for the neutral and higher ionization gas. ## 3. The Double Systems In view of the topic of the meeting, we focus here on the Double systems, since they may provide clues to the clustering of material at higher redshifts. We present the HIRES/Keck Mgii $`\lambda 2796`$ profiles of Double systems, including a few at $`z>1.4`$, in Figure 2. Though Churchill et al. (1999) suggested that Double systems may be associated with later–type galaxies undergoing concurrent star formation (i.e., the multiphase gas arises in superbubbles and from outflows, or chimneys, similar to the gaseous components of the Galaxy), there are at least two other obvious explanations for Double systems. The first scenario is that they might be two Classic systems nearly aligned on the sky and clustered within a $`\sim 500`$ km s<sup>-1</sup> velocity separation (i.e., galaxy pairs). An example of this scenario, at $`z\approx 0`$, is observed in the spectrum of SN 1993J (Bowen, Blades, & Pettini 1995). The SN 1993J line of sight probes half the disk and halo of M81, half the disk and halo of the Galaxy, and the “intergalactic” material apparently arising from the strong dwarf–galaxy interactions taking place with both galaxies. The M81/Galaxy Mgii $`\lambda 2796`$ absorption profile has a kinematic spread, saturation, and complexity virtually identical to that of the $`z=1.79`$ absorber toward Q $`1225+317`$ (Figure 2). Double systems constitute $`\sim 7`$% of our sample. Interestingly, at $`z\approx 0.3`$, roughly 7% of all galaxies are observed to be in “close physical pairs” (Patton et al. 1997), where a pair has a projected separation less than $`20h^{-1}`$ kpc. Even accounting for the evidence that this fraction increases with redshift (e.g. Neuschaefer et al. 1997), the fraction of Double systems in our sample is consistent with that of galaxy pairs at intermediate redshifts.
If, at $`z\sim 1`$, roughly 30% of all galaxies typically have one LMC–like satellite galaxy within 50 kpc (see Zaritsky et al. 1997), it could explain the observed fraction of “Double” systems found in our sample.

## 4. Galaxy Group Evolution

If most Double systems arise in the environments associated with galaxy pairs, then the redshift evolution observed in the number of galaxy pairs would necessarily need to be in step with the evolution in the class of “Double” Mgii absorbers themselves. Over the redshift interval $`1\lesssim z\lesssim 2`$, it is seen that the galaxy pair fraction evolves as $`(1+z)^p`$, with $`2\lesssim p\lesssim 4`$ (Neuschaefer et al. 1997). This compares well with $`p=2.2\pm 0.7`$ for very strong Mgii absorbers with $`W_r>1.0`$ Å (Steidel & Sargent 1992). As such, galaxy pair evolution remains a plausible scenario for explaining the observed evolution in the class of the largest equivalent width Mgii absorbers (illustrated in Figure 2). None of these arguments are conclusive, nor absolutely compelling in the face of several attractive scenarios (i.e. intergalactic infall, star forming events, etc.) that are equally consistent with the available data. Even so, the hypothesis that the strongest, most kinematically complex Mgii absorbers arise in galaxy groups or pairs is directly testable, and is thus useful for future investigations that probe galactic evolution from the point of view of absorption line systems. Deep imaging and redshift confirmation of the galaxies associated with Double systems and searches for high ionization intragroup gas, such as Nv and Ovi (Mulchaey et al. 1996), may confirm this hypothesis.

### Acknowledgments.

I would like to thank my collaborators, Richard Mellon, Jane Charlton, and Buell Jannuzi, for their excellent contributions to the work on which this contribution is based.

## References

Bergeron, J., & Boissé, P. 1991, A&A, 243, 344
Bergeron, J., et al. 1994, ApJ, 436, 33
Boissé, P., Le Brun, V., Bergeron, J., & Deharveng, J.–M. 1998, A&A, 333, 841
Bowen, D. V., Blades, J. C., & Pettini, M. 1995, ApJ, 448, 634
Charlton, J. C., & Churchill, C. W. 1996, in Galactic Chemodynamics 4: The History of the Milky Way and its Satellite System, eds. A. Burkert, D. Hartmann, & S. Majewski (PASP Conference Series)
Charlton, J. C., & Churchill, C. W. 1998, ApJ, 499, 181
Churchill, C. W. 1997, Ph.D. Thesis, University of California, Santa Cruz
Churchill, C. W., & Charlton, J. C. 1999, AJ, 118, 59
Churchill, C. W., Rigby, J. R., Charlton, J. C., & Vogt, S. S. 1999, ApJS, 120, 51
Churchill, C. W., et al. 1999a, ApJ, 519, L43
Churchill, C. W., et al. 1999b, ApJ, submitted
Churchill, C. W., et al. 1999c, ApJ, submitted
Churchill, C. W., Steidel, C. C., & Vogt, S. S. 1996, ApJ, 471, 164
Lanzetta, K. M., Turnshek, D. A., & Wolfe, A. M. 1987, ApJ, 322, 739
Mulchaey, J. S., Mushotzky, R. F., Burstein, D., & Davis, D. S. 1996, ApJ, 456, L5
Neuschaefer, L. W., Im, M., Ratnatunga, K. U., Griffiths, R. E., & Casertano, S. 1997, ApJ, 480, 59
Patton, D. R., Pritchet, C. J., Yee, H. K. C., Ellingson, E., & Carlberg, R. G. 1997, AJ, 475, 29
Petitjean, P., & Bergeron, J. 1990, A&A, 231, 309
Rao, S. M., & Turnshek, D. A. 1998, ApJ, 500, L115
Steidel, C. C. 1995, in QSO Absorption Lines, ed. G. Meylan (Garching: Springer Verlag), 139
Steidel, C. C. 1998, in Galactic Halos: A UC Santa Cruz Workshop, ASP Conf. Series, V136, ed. D. Zaritsky (San Francisco: PASP), 167
Steidel, C. C., Dickinson, M., & Persson, E. 1994, ApJ, 437, L75
Steidel, C. C., & Sargent, W. L. W. 1992, ApJS, 80, 1
Vogt, S. S., et al. 1994, in Proceedings of the SPIE, 2128, 326
York, D. G., Dopita, M., Green, R., & Bechtold, J. 1986, ApJ, 311, 610
Zaritsky, D., Smith, R., Frenk, C., & White, S. D. M. 1997, ApJ, 478, 39
# Beyond Hebb: Exclusive-OR and Biological Learning

## Abstract

A learning algorithm for multilayer neural networks based on biologically plausible mechanisms is studied. Motivated by findings in experimental neurobiology, we consider synaptic averaging in the induction of plasticity changes, which happen on a slower time scale than firing dynamics. This mechanism is shown to enable learning of the exclusive-OR (XOR) problem without the aid of error backpropagation, as well as to increase robustness of learning in the presence of noise. PACS numbers: 87.17.Aa, 82.20.Wt, 87.19.La

Since the early days of neurophysiology we have had evidence of biological mechanisms serving as the basis for learning and information processing in the brain. Cajal’s pictures showing networks of intertwined nerve cells readily lead to the hypothesis of information flow and processing in these networks. Subsequently formulated theoretical models of the neuron, as by McCulloch and Pitts, and the Hebbian learning rule, postulating synaptic strengthening for simultaneous pre- and postsynaptic activity, sparked the development of algorithms for neuronal learning and memory. The development of learning algorithms, however, took place almost decoupled from biological validation, partly due to lack of detailed knowledge of the neurophysiology of learning, but also due to their success in applied fields (“connectionism”, “machine learning”). Among the first models were layered assemblies of formal neurons (Perceptrons) combined with gradient rules defining the synaptic weights. Later, combining Hebb’s strictly local rule with symmetrically connected formal neurons defined the Hopfield model of simple associative learning. However, only a complicated non-local learning rule, now known as error backpropagation, was finally able to solve simple non-linear learning problems such as the learning of the exclusive-OR (XOR) function. This complicated form of reverse information transfer, however, has not been observed in biological circuits. For computation in biological nervous systems the question remains which underlying biological processes are capable of the most general form of learning, including problems of the XOR class. A more biologically plausible learning concept is learning by reinforcement, and recently a number of models along this line have been formulated. One such model by Barto and Anandan combines a local mechanism of synaptic plasticity changes with a global feedback signal indicating information worth memorizing. A remaining problem in these models is the regulation of the mean activity level in large networks, which has been attacked by Alstrøm and Stassinopoulos and by Stassinopoulos and Bak. An even more elegant mechanism has been proposed by Chialvo and Bak with reinforcement through negative feedback, which is motivated by the observed long-term depression (LTD) in biological networks. In this algorithm, the dynamics of synaptic plasticity comes to a halt when learning reaches its goal, just by definition. While we think that this is a very interesting approach to formulating a biologically plausible learning mechanism, this model suffers from a severe restriction in learning capabilities. It has been shown to work well on simple tasks such as non-overlapping pattern sets; however, it is not able to learn tasks such as the XOR function, at least not without unreasonably large numbers of neurons and very long learning times.
It is, therefore, nearly as limited as the early single-layer perceptron models that, for this reason, nearly paralyzed the research in neural networks in the seventies (mainly following the sobering analysis of perceptron capabilities by Minsky and Papert). In the following we will study a model in this spirit which, however, does not exhibit this restriction. Let us first define the model, then report numerical results on its learning capabilities. We will then discuss the robustness of our model in the presence of noise. The letter concludes with a discussion of the motivation of our model from current findings in neurobiology. Consider a layered network of binary formal neurons $`x_i\in \{0,1\}`$, with $`I`$ input sites $`x_0,\mathrm{},x_{I-1}`$, $`J`$ hidden sites $`x_I,\mathrm{},x_{I+J-1}`$, and $`K`$ output units $`x_{I+J},\mathrm{},x_{I+J+K-1}`$. The adjacent layers are completely connected by weights $`w_{ji}`$ from each input to each hidden unit and from each hidden unit to each output unit. In addition, each weight is assigned an internal degree of freedom, acknowledging the finite time scale of synaptic plasticity induction, as will be discussed below. In the model this is represented by an additional discrete variable $`c_{ji}`$ associated to each weight $`w_{ji}`$, serving as a synaptic memory during learning. The network dynamics is defined by the following steps. The input sites are activated with external stimuli $`x_0,\mathrm{},x_{I-1}`$. Each hidden node $`j`$ then receives a weighted input $`h_j=\sum _{i=0}^{I-1}w_{ji}x_i`$. Its state is chosen according to a probabilistic rule such that each hidden neuron fires with probability $`p_j=a^{-1}\mathrm{exp}(\beta h_j)`$, with the normalization $`a=\sum _j\mathrm{exp}(\beta h_j)`$. We consider the low activity limit of the network where only one hidden neuron fires at a time. Each output neuron $`k`$ now receives an input sum $`h_k=\sum _{j=I}^{I+J-1}w_{kj}x_j`$, with the only non-zero contribution from the firing hidden neuron $`j^{\prime }`$, such that $`h_k=w_{kj^{\prime }}`$. The above probabilistic rule applies to the output layer as well, determining one firing output neuron $`x_{k^{\prime }}`$ which represents the output of the network corresponding to a given input pattern. Note that in the low activity limit used here, the probabilistic rule is a stochastic approximation of the winner-take-all rule. We think our variant based on local dynamics is biologically more realistic than supplying global information of which neuron has the highest input sum within a layer. In the limit $`\beta \to \mathrm{}`$, the neuronal activity in our model follows exact winner-take-all dynamics, since then $`\mathrm{max}_jp_j=a^{-1}\mathrm{exp}(\beta \mathrm{max}_jh_j)\to 1`$. This deterministic case has been used in the network model of Chialvo and Bak. Here, however, we consider stochastic models with finite values of $`\beta `$. Now it remains to specify the learning dynamics of the network weights $`w_{ji}`$ themselves. For each activation pattern, the network output is compared to the target output and a feedback signal $`r`$ is returned to the network, with $`r=+1`$ if its output neurons represent the predefined target output, given the current input, and $`r=-1`$ otherwise. Depending on this binary feedback, connections and corresponding counter values are updated. All “active” synapses $`w`$ (and corresponding counter values $`c`$) for which pre- and postsynaptic sites have been simultaneously active are updated as follows.
The feedback signal is subtracted from the memory $`c`$ of each active synapse according to:
$$c\to c^{\prime }=\{\begin{array}{ll}\mathrm{\Theta },& \text{if }c-r>\mathrm{\Theta }\text{ }(\ast )\\ c-r,& \text{if }\mathrm{\Theta }\ge c-r\ge 0\\ 0,& \text{if }0>c-r.\end{array}$$ (1)
Thus, each counter $`c`$ is an error account of the corresponding synapse. The capacity of the account is given by the memory size $`\mathrm{\Theta }`$. In case this threshold is exceeded \[marked by $`(\ast )`$ in equation (1)\] the synapse is penalized, i.e., it is weakened by a constant amount $`\delta `$:
$$w\to w^{\prime }=w-\delta .$$ (2)
(Alternatively, a multiplicative penalty combined with a constant growth of weights has been successfully checked, too.) Therefore, the counter averages over the record of a synapse, instead of penalizing each single error at the moment it occurs. Note that the model by Chialvo and Bak is just this latter case and is obtained by setting $`\mathrm{\Theta }=0`$ and $`\beta =\mathrm{}`$. After these changes to weights and counters the learning cycle is iterated by presenting another—possibly different—pattern of stimuli to the network. Note that $`\beta `$ and $`\delta `$ are not independent parameters; changing the value of $`\delta `$ does not affect the dynamics, as long as the product $`\beta \delta `$ is kept constant and the weights are rescaled correspondingly. Furthermore, the firing probabilities are conserved under transformations that add the same value to all outgoing connections of one neuron. We could therefore keep the values of the weights in a bounded domain without changing the model dynamics. Let us next demonstrate the learning capability and robustness of the model by simulating an XOR learning task. The network used has $`I=3`$ input sites $`x_0`$, $`x_1`$, and $`x_2`$, with the input site $`x_0\equiv 1`$ serving as bias. The hidden layer has $`J=3`$ neurons, the minimum number necessary to represent the XOR function in the present architecture. $`K=2`$ output neurons represent the two possible outcomes, with only one of them active at a time. The initial values of the weights $`w`$ are uniformly chosen random numbers in $`[0,1]`$, all counters $`c`$ are set to $`0`$, and $`\delta =1`$. The four patterns of the XOR function are presented with equal probability. Fig. 1 shows learning curves for memory sizes $`\mathrm{\Theta }=0,1,2`$ with $`\beta =10`$, averaged over 10000 independent runs each. The displayed error $`E`$ is the fraction of simulation runs that have produced an incorrect output at the considered time step. We find that learning takes place with $`\mathrm{\Theta }\ge 1`$ only, where the error quickly converges to zero, whereas with $`\mathrm{\Theta }=0`$, as in the model of Chialvo and Bak, no learning takes place at all. The error remains roughly constant, barely below the “default” of $`0.5`$ (this holds for the whole simulation time of 100,000 trials, not shown here). The obviously dramatic effect of the synaptic memory can be understood in the following way: Any synapse that is involved in failure—meaning that pre- and postsynaptic firings have occurred prior to unsuccessful output of the network—is a candidate for decrement. In the case $`\mathrm{\Theta }=0`$ all such “failing” synapses are weakened, such that on repeated presentation of the same stimuli the activity is likely to be led to a different output neuron.
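For concreteness, here is a minimal sketch of the full dynamics in Python. The layer sizes, $`\beta =10`$, $`\mathrm{\Theta }=1`$ and $`\delta =1`$ follow the XOR experiment just described; the code organization itself is ours, not the authors'. It implements the stochastic winner-take-all rule for both layers and the counter/penalty updates of equations (1) and (2).

```python
import numpy as np

rng = np.random.default_rng(1)

I, J, K = 3, 3, 2            # input (incl. bias), hidden, output units
beta, theta, delta = 10.0, 1, 1.0

w1 = rng.random((J, I))      # input -> hidden weights, uniform in [0, 1]
w2 = rng.random((K, J))      # hidden -> output weights
c1 = np.zeros((J, I), dtype=int)
c2 = np.zeros((K, J), dtype=int)

def pick(h):
    """Stochastic winner-take-all: unit j fires with p_j proportional to exp(beta*h_j)."""
    p = np.exp(beta * (h - h.max()))   # max-shift only for numerical stability
    return rng.choice(len(h), p=p / p.sum())

def update(w, c, pre, post, r):
    """Eq. (1)/(2): subtract feedback from the synaptic counter; penalize on overflow."""
    cr = c[post, pre] - r
    if cr > theta:                     # error account exceeded -> weaken synapse
        w[post, pre] -= delta
        c[post, pre] = theta
    else:
        c[post, pre] = max(cr, 0)

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets  = [0, 1, 1, 0]                # index of the correct output unit (XOR)

errors = 0
for trial in range(30000):
    k = rng.integers(4)
    x = np.array([1.0, *patterns[k]])  # bias plus the two inputs
    j = pick(w1 @ x)                   # the single firing hidden neuron
    o = pick(w2[:, j])                 # the single firing output neuron
    r = +1 if o == targets[k] else -1
    errors += (r < 0)
    for i in np.nonzero(x)[0]:         # active input->hidden synapses
        update(w1, c1, i, j, r)
    update(w2, c2, j, o, r)            # the active hidden->output synapse
    if trial % 5000 == 4999:
        print(f"trials {trial+1}: running error {errors/(trial+1):.3f}")
```

Sweeping $`\beta `$ at fixed $`\mathrm{\Theta }`$ in this sketch shows the same sharp transition between non-learning and learning regimes that is discussed below.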
This is a simple and reasonable principle as long as our learning goal is the mapping of just one pattern of stimuli or a set of non-overlapping patterns. However, the task of learning a non-trivial logical operation such as the one we are facing here requires a more elaborate mechanism: The immediate weakening of all synapses that are involved in failure for a certain pattern eventually destroys a useful structure for the successful mapping of other patterns. This is avoided by the synaptic memory considered here: Only if a synapse is repeatedly involved in failure is its efficacy reduced. The idea of averaging over errors and updating the weights on a slower time scale than sample presentation is well known from batch learning methods. In those methods, errors are determined over a whole sweep through the pattern set and subsequently weights are updated synchronously. However, those algorithms fail to explain learning in biological neural systems as they rely on biologically implausible mechanisms such as, for example, back-propagating errors. In fact, what we wish to define here is a learning method based on purely local dynamics, where weight changes are based only on information that is locally available (the two adjacent neurons of a synapse) with nothing more than a single global reinforcement signal—exactly the information that is available to a biological synapse. A first step in this direction would be a trivial “localized” version of batch learning where weight changes are based on the global reinforcement signal only. Indeed, this works for single-layer networks; however, it fails for learning XOR-type problems in multi-layer networks. Here, our work proposes a solution, using a synaptic error account combined with asynchronous updating of the synaptic weights. It can be viewed as a generalization of the Hebbian learning rule: While the Hebb rule alone is not able to make a network learn the XOR, the above extension does so. The resulting network is a self-contained dynamical system with local dynamical rules defined in a way that the overall network dynamics results in adaptive learning of general logical functions including the XOR problem. Besides learning XOR as shown here, the algorithm also proved able to learn logical functions of higher dimensions and complexity. The aspect of protecting synapses from too quick changes has further implications with regard to the network’s robustness against noise. Fig. 2 demonstrates the effect of the inverse noise level $`\beta `$ on the mean residual error after 90,000 trials. For fixed memory length $`\mathrm{\Theta }`$ we find a sharp transition from a regime of non-learning, characterized by $`E=0.5`$, to a regime of effective learning with $`E\approx 0`$. We conclude that the network is able to learn just as long as the information gain provided by the feedback signal is larger than the information loss caused by the uncertainty of the stochastic neural dynamics. The effect of increasing the memory length $`\mathrm{\Theta }`$ is obvious: The critical point between the two regimes is shifted to lower values of $`\beta `$, i.e., higher noise. Synapses with larger memory can average out the uncertainty and therefore enable stochastically firing networks to adapt to their environment. Now let us briefly discuss the biological motivations for the choice of mechanisms used in the model above.
First, observations in experimental neurobiology show clear evidence that modulation of long-term potentiation (LTP) and depression (LTD) via external signals occurs (i.e., modulation of plasticity of weights). In one example from the hippocampus CA1 region, which is involved in learning and memory formation, modulation mediated by dopamine has been verified. In particular, when dopamine is applied during or shortly after LTD activity, one observes that LTD is suppressed (and LTP can appear instead). Learning activity can thereby receive feedback via dopamine, which then modulates synaptic plasticity, in particular LTD. Indeed, hormone signals are widely known to interfere with learning and memory formation. For example, adrenal hormones have been shown to enhance susceptibility for LTD, an effect which has even been found following behavioral stress in living animals. A broad class of other factors that modulate synaptic plasticity has been classified, sometimes summarized as “metaplasticity”. We believe that further research in this area will provide new insights into the computational mechanisms of biological nervous systems. Furthermore, progress has been made in exploring the mechanisms of retrograde feedback in LTP and LTD. Evidence accumulates in favor of some physiological mechanisms that feed back the postsynaptic activity to the presynaptic site. A possible mechanism recently proposed for LTD is the messenger nitric oxide evoking a specific presynaptic biochemical cascade which, eventually, interacts with the intracellular mechanisms for vesicle formation and loading. The subsequently reduced transmitter release establishes a long-term depression of this synaptic pathway. An interesting observation is the long time scale of this process, of the order of 15 minutes, especially when compared to that of neuronal firing packages. This opens up the possibility that considerable time averaging may occur in the course of inducing LTD. The effect of such synaptic averaging on learning has been simulated above by an internal counter associated with each synaptic weight. Further experimental research on the timing of externally induced LTD and the lifetimes of the biochemical agents involved in the retrograde signaling cascade may show to what extent synaptic averaging in the induction of plasticity changes is established in nature. To summarize, we studied a biologically motivated model for goal-directed learning in multilayer neural networks. In contrast to existing models, synaptic plasticity is based on a time-averaged individual failure rate of each synapse. Thereby, learning of general logical functions (including XOR) is possible on the basis of local synaptic plasticity alone, combined with homogeneous failure feedback. In particular, no error backpropagation is needed. The presented algorithm also works in the presence of noise, where internal errors are compensated for by the averaging of each synapse: only persistent failure is punished.
# A Fault-Tolerant Superconducting Associative Memory

The demand for high-density data storage with ultrafast accessibility motivates the search for new memory implementations. Ideally such storage devices should be robust to input error and to unreliability of individual elements; furthermore information should be addressed by its content rather than by its location. Here we present a concept for an associative memory whose key component is a superconducting array with natural multiconnectivity. Its intrinsic redundancy is crucial for the content-addressability of the resulting storage device and also leads to parallel image retrieval. Because patterns are stored nonlocally both physically and logically in the proposed device, information access and retrieval are fault-tolerant. This superconducting memory should exhibit picosecond single-bit acquisition times with negligible energy dissipation during switching and multiple non-destructive read-outs of the stored data. The key component of our proposed associative memory is a superconducting array with multiple interconnections (Figure 1), where each bit is represented by a wire and thus is physically delocalized. More specifically, this network consists of a stack of two perpendicular sets of $`N`$ parallel wires separated by a thin oxide layer. At low temperatures a superconductor-insulator-superconductor layered structure, known as a Josephson junction, exists at each node of this array; logically each pattern in our proposed memory is stored nonlocally in these $`N^2`$ interconnections. We note that in this network each horizontal/vertical wire is coupled to each vertical/horizontal one by a Josephson junction, so that in the thermodynamic limit ($`N\to \mathrm{}`$ for fixed area) the number of neighbors diverges. The important energy scales of this long-range array are those associated with the superconducting wires and with the Josephson junctions. Each superconducting wire is characterized by a macroscopic phase which is constant in equilibrium; here we assume that phase slips in each wire are energetically unfavorable. Application of a magnetic field results in the rotation of this phase, where the rotation rate is determined by the amplitude of the applied field. The interaction energy of a Josephson junction is minimized when the phase difference across its insulating layer is zero. In the absence of a field this condition is satisfied at each junction of the array. However, application of a field transverse to the network results in an overconstrained system, since the $`2N`$ phases and the $`N^2`$ Josephson junctions have competing energetic requirements. The identification of the ground state in such a system is a hard combinatorial optimization problem, as the number of metastable states scales exponentially with the number of wires, $`N`$. Because of its high connectivity, the long-range Josephson array is accessible to analytic treatment; furthermore it can be fabricated and studied in the laboratory. A detailed theoretical characterization of this network has been performed. At low temperatures the system has an extensive number of states, $`𝒩_{states}\sim e^{cN}`$ where $`c\sim O(1)`$, separated by free energy barriers that scale with the number of wires, $`N`$. Its specific low-temperature configuration is determined by sample history, a feature also observed in glassy materials.
Experiments have confirmed predicted static properties of this multi-connected array, though detailed dynamical investigations in the laboratory remain to be performed. The proposed superconducting network (Figure 1) has long-range temporal correlations (memory) and an extensive number of metastable states, and thus it is natural to explore its possible use for information storage. Indeed, high connectivity and nonlinear elements (e.g. Josephson junctions) are key features required for the construction of associative memories. Here one would like to store $`p`$ patterns in such a way that if the memory is exposed to another one ($`\xi _i`$) with a significant ($`\gtrsim 1/\sqrt{N}`$) overlap with a stored image ($`\stackrel{~}{\xi }_i`$), $`q=\frac{1}{N}\sum _i^N\xi _i\stackrel{~}{\xi }_i`$, then it produces $`\stackrel{~}{\xi }`$. A simple model for such a memory is based on an array of McCulloch-Pitts neurons (Figure 2). The patterns are stored in the couplings, $`J_{ij}`$. Each nonlinear element has multiple inputs, and the output is a nonlinear function of the weighted sum of the inputs. The McCulloch-Pitts network, with inputs $`n_i\in \{0,1\}`$, can be reformulated as a spin model where $`\xi _i=2n_i-1`$; then the output is
$$\xi _i=\mathrm{sgn}\left(\sum _jJ_{ij}\xi _j\right)$$ (1)
where
$$\mathrm{sgn}(x)=\{\begin{array}{ll}+1,& x\ge 0\\ -1,& x<0\end{array}$$ (2)
and the couplings $`J_{ij}`$ can have arbitrary sign. Clearly the output is robust to errors in the input due to the multiple connections present. In order to ensure that the McCulloch-Pitts array is content-addressable, the couplings must be chosen so that the stored images correspond to stable configurations of the network. Hopfield has proposed an algorithm where the desired patterns are local minima of an energy function, e.g. $`H=-\frac{1}{2}\sum _{ij}J_{ij}S_iS_j`$ where $`S_i\in \{-1,+1\}`$. The couplings are chosen so that the energy is minimized for maximal overlap of $`S_i`$, the array configuration, and the desired output, $`\xi _i`$. For one pattern, this condition is satisfied for $`J_{ij}=\frac{1}{N}\xi _i\xi _j`$ where $`N`$ is the number of elements in the array; then $`H=-\frac{1}{2N}\left(\sum _iS_i\xi _i\right)^2`$. We note that with this choice of weights $`J_{ij}`$ the output is
$$\xi _i=\mathrm{sgn}\left(\sum _{j=1}^{N}J_{ij}\xi _j\right)$$ (3)
for all $`i`$ since $`\xi _j^2=1`$, where $`\xi _j`$ and $`\xi _i`$ are the desired inputs and outputs respectively. We note that, according to (3) with the discussed choice of $`J_{ij}`$, a real input $`S_j\ne \xi _j`$ yields the desired output if it has errors in less than half its bits. In the Hopfield model, the couplings associated with several stored images are simple superpositions of the one-pattern case, such that
$$J_{ij}=\frac{1}{N}\sum _{\mu =1}^{p}\xi _i^\mu \xi _j^\mu $$ (4)
where $`\mu `$ labels each pattern. The total storage capacity of the network, $`p_{max}`$, is dependent on the acceptable error rate; in general $`p_{max}=\alpha N`$ where $`\alpha \approx 0.138`$ if the probability of an erroneous bit in each pattern is $`P_{error}<0.01`$. Here we discuss the Hopfield algorithm because of its simplicity, but we note that other, more efficient algorithms can also be implemented in this network. The long-range Josephson array (Fig. 1) can be adapted to become a superconducting analogue of a McCulloch-Pitts network.
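Before turning to the superconducting realization, the storage and recall prescription of equations (1)–(4) can be made concrete in a few lines. This is a generic Hopfield sketch under the stated rules; the values of $`N`$ and $`p`$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

N, p = 200, 10
xi = rng.choice([-1, 1], size=(p, N))     # p random patterns of N spins

# Hebb/Hopfield couplings, Eq. (4); no self-coupling
J = (xi.T @ xi).astype(float) / N
np.fill_diagonal(J, 0.0)

def recall(s, sweeps=10):
    """Asynchronous application of Eq. (1) until (approximately) fixed."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# corrupt 15% of the bits of pattern 0, then retrieve it
probe = xi[0] * np.where(rng.random(N) < 0.15, -1, 1)
out = recall(probe)
print("overlap q with the stored pattern:", (out @ xi[0]) / N)
```

The crossbar array described next realizes the same storage scheme, with superconducting phases in place of binary spins.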
It is described by the Hamiltonian
$$\mathcal{H}=-\mathrm{Re}\sum _{j\overline{j}}S_j^{\ast }J_{j\overline{j}}S_{\overline{j}}$$ (5)
with $`1\le j,\overline{j}\le N`$, where $`j`$ and $`\overline{j}`$ are the indices of the horizontal and vertical wires respectively; $`S_j`$ are effective complex spins with unit amplitude, $`S_j=e^{i\varphi _j}`$, where $`\varphi _j`$ is the phase of the $`j`$-th superconducting wire. The couplings are site-dependent and are related to the enclosed flux, $`\mathrm{\Phi }_{j\overline{j}}`$, in a given area whose edges are defined by wires $`j`$ and $`\overline{j}`$, such that
$$J_{j\overline{j}}=\frac{J}{\sqrt{N}}\mathrm{exp}\left(\frac{2\pi i\mathrm{\Phi }_{j\overline{j}}}{\mathrm{\Phi }_0}\right)$$ (6)
where $`\mathrm{\Phi }_0`$ is the flux quantum. For a uniform magnetic field $`H`$, $`\mathrm{\Phi }_{j\overline{j}}=H\overline{j}jl^2`$ where $`l`$ is the interwire spacing. We emphasize that the sign of $`J_{j\overline{j}}`$ in (6) can be both positive and negative depending on the value of $`\mathrm{\Phi }_{j\overline{j}}/\mathrm{\Phi }_0`$. In complete analogy with our previous discussion of the McCulloch-Pitts network, patterns are stored in this superconducting associative memory by appropriate choice of the weights, $`J_{j\overline{j}}`$. Because the $`J_{j\overline{j}}`$ are functions of the enclosed fluxes, $`\mathrm{\Phi }_{j\overline{j}}`$, they can be set to their desired values by appropriately tuning the local applied field $`H_{j\overline{j}}`$. In practice this writing procedure could be accomplished by an array of superconducting quantum interference devices (SQUIDs) superimposed on the multi-connected Josephson network. The stored patterns are encoded in the Josephson couplings of the long-range array and correspond to stable configurations of the $`2N`$ superconducting phases. A “fingerprint” of each image can then be determined using voltage pulses and the Josephson phase-voltage relation $`\mathrm{\Delta }\varphi =\frac{2\pi }{\mathrm{\Phi }_0}\int V𝑑t`$, where $`\mathrm{\Delta }\varphi `$ is the phase difference across the relevant junction. More specifically, a set of voltage pulses can be applied to a small subset ($`>\sqrt{N}`$) of the horizontal wires, a “key”, thereby altering the phase differences at the associated nodes. The phases of the vertical wires must readjust in order for the system to settle into a stable configuration, a process which results in the absence/presence of a voltage pulse. The set of key input and $`N`$ output voltage pulses therefore constitutes a signature of each stored image. Single-flux quantum (SFQ) voltage pulses, where $`\int V𝑑t=\mathrm{\Phi }_0`$, may be used for direct analogy with the McCulloch-Pitts array, where inputs $`n_{\overline{j}}\in \{0,1\}`$ now refer to the absence/presence of an SFQ pulse. Again it is convenient to describe the network in terms of the variables $`\xi _i=2n_i-1`$. Following the Hopfield algorithm, the local coupling associated with one pattern is $`J_{j\overline{j}}=\frac{1}{N}\xi _j\xi _{\overline{j}}`$ in accordance with (6), so that the desired output $`\xi _j=\sum _{\overline{j}}J_{j\overline{j}}S_{\overline{j}}`$ is robust for $`S_{\overline{j}}\approx \xi _{\overline{j}}`$. The weights coding many stored patterns are superpositions, (4), of the one-pattern cases.
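The following sketch, our own illustration rather than the authors' calculation, evaluates the energy of equation (5) for Hopfield-type crossbar couplings; the bipartite assignment of pattern bits to the two wire sets is an assumption made for simplicity. It shows that a stored pattern, encoded as 0/π phases, lies far below a random phase configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 64, 3
xh = rng.choice([-1, 1], size=(p, N))   # pattern bits on the horizontal wires
xv = rng.choice([-1, 1], size=(p, N))   # pattern bits on the vertical wires

# Hopfield-style couplings on the crossbar, cf. Eqs. (4) and (6)
J = np.einsum('pj,pk->jk', xh, xv) / N

def energy(Sh, Sv):
    """H = -Re sum_{j jbar} Sh_j^* J_{j jbar} Sv_jbar, Eq. (5)."""
    return -np.real(np.conj(Sh) @ J @ Sv)

# a stored pattern, encoded as phases 0/pi (S = e^{i phi} = +-1)...
print("stored-pattern energy:", energy(xh[0].astype(complex), xv[0].astype(complex)))

# ...sits far below a random phase configuration
Sh = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
Sv = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
print("random-phase energy  :", energy(Sh, Sv))
```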
From a practical standpoint, these couplings are “written” by local (uniform) fields applied to individual plaquettes of area $`l^2`$; for a cell with its lower left-hand corner defined by the Cartesian coordinates $`(j,\overline{j})`$, the plaquette flux, $`\mathrm{\Phi }_{plaquette}^{j\overline{j}}`$, is related to the weights by the expression
$$\mathrm{\Phi }_{plaquette}^{j\overline{j}}=\frac{\mathrm{\Phi }_0}{2}\mathrm{\Theta }\left\{-J_{j\overline{j}}J_{j+1\overline{j}}J_{j+1\overline{j}+1}J_{j\overline{j}+1}\right\}$$ (7)
where $`\mathrm{\Theta }(x)=1`$ if $`x\ge 0`$ and $`\mathrm{\Theta }(x)=0`$ otherwise. Again we note that other algorithms can be used to determine the couplings; here we use the Hopfield model as an illustrative example. In the proposed superconducting memory, each stored image is coded by a distinct set of superconducting phases associated with weak supercurrents and negligible induced fields. In conventional superconducting memories/logic, digital information is stored locally in trapped magnetic fluxes that are switched between SFQ states, $`\mathrm{\Phi }\in \{0,\mathrm{\Phi }_0\}`$, by Josephson junctions; therefore the associated supercurrents should be large and can lead to unwanted crosstalk between adjacent elements. However, in order to maintain their advantage in speed compared to other memory technologies, Josephson junction devices must use SFQ for both information storage and retrieval. This condition is satisfied by the design of our proposed memory, where the Josephson junctions switch fluxoids while the applied magnetic fluxes remain fixed; it is not the local fields but the supercurrents that store the information. The fault-tolerance of the long-range Josephson network discussed here is due to the nonlocal nature of its data storage both at the physical and the logical levels. In conventional planar superconducting arrays there are $`O(N^2)`$ individual short superconducting wires and the fluxoids are spatially confined to areas $`A\sim l^2`$, where $`l`$ is the internode spacing. By contrast, in the multi-connected network the phases reside on the $`2N`$ wires of length $`Nl`$; thus the fluxoids here are extended over the entire system. Data is coded nonlocally in configurations of these superconducting phases, similar to the situation in an optical holographic storage device. There the stored patterns are independent of the input, and an analogous superconducting holographic memory can be constructed. For example, let us consider the input wavefunction
$$\mathrm{\Psi }_{\overline{j}}^p=\mathrm{exp}\left(2\pi i\varphi _{\overline{j}}^p\right)=\mathrm{exp}\left(\frac{2\pi i\overline{j}p}{N}\right)$$ (8)
where $`\overline{j}`$ and $`p`$ are indices labelling the horizontal wires and the stored patterns respectively. Then the input voltage pulses would be $`\int V𝑑t=\left[\frac{\overline{j}p}{N}\right]\mathrm{\Phi }_0`$, where $`[\mathrm{}]`$ refers to the fractional part. Using the Hopfield algorithm, we have $`J_{j\overline{j}}=\frac{1}{N}\sum _p\xi _j^p\mathrm{\Psi }_{\overline{j}}^p`$, which yields the desired output $`\xi _j^p=\sum _{\overline{j}}J_{j\overline{j}}^{\ast }\mathrm{\Psi }_{\overline{j}}^p`$. We note that any orthogonal basis for the inputs will work; therefore this long-range Josephson array can be used as a key component of both an associative and a holographic memory.
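A minimal numerical check of the holographic variant, using the plane-wave inputs of equation (8) and the couplings and read-out rule just quoted (the values of $`N`$ and $`P`$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N, P = 32, 5
xi = rng.choice([-1, 1], size=(P, N))                        # stored patterns xi_j^p

jbar = np.arange(N)
Psi = np.exp(2j * np.pi * np.outer(np.arange(P), jbar) / N)  # Psi_jbar^p of Eq. (8)

J = np.einsum('pj,pk->jk', xi, Psi) / N   # J_{j jbar} = (1/N) sum_p xi_j^p Psi_jbar^p
out = J.conj() @ Psi[2]                   # xi_j^p = sum_jbar J*_{j jbar} Psi_jbar^p, p = 2
print(np.allclose(out.real, xi[2]))       # True: exact recall
```

The exact recall here reflects the orthogonality of the plane-wave input basis; as noted in the text, any orthogonal set of key pulses would work equally well.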
Each data retrieval event in the READ array must be followed by a reset operation, since the output signals correspond to phase differences with respect to a reference state. This reset circuit can be constructed from a series of double-junction SQUID loops connected to each horizontal wire of the multi-connected array with a variable coupling to ground; if finite, this coupling locks the relevant phase into a reference state, whereas if zero (e.g. in the presence of a control line current) the next data retrieval process can be performed. This memory cell can then be embedded in an environment with known input/output SFQ circuitry that includes DC/SFQ voltage pulse converters and SFQ transmission/amplification lines. The network parameters, the charging ($`E_C`$) and the coupling ($`E_J`$) energy scales, should be chosen to optimize performance, particularly to maximize the access rate, $`\omega \sim \mathrm{min}(\mathrm{\Delta },\sqrt{E_CE_J})`$, where $`\mathrm{\Delta }`$ refers to the superconducting gap. An additional constraint on $`E_C`$ ($`E_C\lesssim 0.01E_J`$) results from the condition that phase fluctuations remain weak and do not result in errors. Given these constraints, the minimal dissipation per bit ($`\sim E_J`$) is roughly $`10\mathrm{\Delta }`$, which is $`\sim 10^{-22}`$ joule for aluminum, in contrast to the value of $`\sim 10^{-15}`$ joule for conventional semiconducting electronics. In summary, we have proposed an associative memory device that is a superconducting analogue of a McCulloch-Pitts network. It is content-addressable, with a single-bit data access time, $`\tau _A`$, that is determined by the superconducting gap and the charging and coupling energy scales in the network. Moreover, because this memory is intrinsically parallel due to its crossbar design, an image of $`N`$ bits can be retrieved in a time (per bit) $`\tau _{DT}\sim \tau _A/N`$; by contrast, $`\tau _{DT}=\tau _A`$ in a conventional local memory. For example, an array of $`N=1000`$ wires with $`l=0.5`$ μm has a capacity of $`C=0.1N^2=10^5`$ bits; a set of such arrays on a typical 1 cm² chip would then have a capacity of 1 Gigabyte with an image access time (per bit) of $`\tau _{DT}=10^{-15}`$ seconds. The fault-tolerance of this superconducting memory enhances its appeal as a candidate for ultrafast high-density information storage without conventional problems of power dissipation and subsequent heat removal. As a point of comparison, we remark that this proposed device is faster but has lower absolute capacity than the best optical holographic memories; this is because the latter are intrinsically three-dimensional. We end by noting that we can tune the long-range array such that its stored images are maximally distant from each other in phase space. In this case the matrix elements associated with external noise will be negligible, and these patterns will have long decoherence times. Such orthogonal configurations could be promising as basis states for quantum computation.
# Fitting Precision Electroweak Data with Exotic Heavy Quarks

UCRHEP-T263, September 1999

Darwin Chang, We-Fu Chang, NCTS and Department of Physics, National Tsing-Hua University, Hsinchu 30043, Taiwan, ROC; Ernest Ma, Physics Department, University of California, Riverside, CA 92521, USA

## Abstract

The 1999 precision electroweak data from LEP and SLC persist in showing some slight discrepancies from the assumed standard model, mostly regarding $`b`$ and $`c`$ quarks. We show how their mixing with exotic heavy quarks could result in a more consistent fit of all the data, including two unconventional interpretations of the top quark.

Precision measurements of electroweak parameters at the $`Z`$ resonance have been available for many years. Their updated values in 1999, as reported at Tampere and at Stanford, are consistent with the expectations of the minimal standard model, including all radiative corrections to one-loop order. However, certain slight discrepancies persist, mostly regarding $`b`$ and $`c`$ quarks. In this note, we show how their mixing with exotic heavy quarks could result in a more consistent fit of all the data, including two unconventional interpretations of the top quark. The most telling sign that there may be something beyond the minimal standard model in precision electroweak measurements is the observation that the two most precise measurements of $`\mathrm{sin}^2\theta _{eff}`$ are 3.0 standard deviations apart. One is the left-right asymmetry $`A_{LR}`$ (which directly measures $`A_e`$) from SLC at SLAC, which gives
$$\mathrm{sin}^2\theta _{eff}(A_{LR})=0.23101\pm 0.00028,$$ (1)
and the other is the forward-backward asymmetry $`A_{FB}^{0,b}`$ of $`b`$ quarks from LEP at CERN, which gives
$$\mathrm{sin}^2\theta _{eff}(A_{FB}^{0,b})=0.23236\pm 0.00036.$$ (2)
We note that Eq. (1) is consistent with the forward-backward asymmetry of leptons measured at LEP, which gives
$$\mathrm{sin}^2\theta _{eff}(A_{FB}^{0,l})=0.23107\pm 0.00053,$$ (3)
whereas Eq. (2) is consistent with the $`A_b`$ measurement at SLC, i.e. $`A_b=0.905\pm 0.026`$ versus the extracted value of $`A_b=0.881\pm 0.020`$ from the value of $`A_{FB}^{0,b}`$ shown. This points to the possibility that there is new physics in the decay $`Z\to b\overline{b}`$. Specifically, consider the effective left-handed and right-handed couplings of the $`b`$ quark to the $`Z`$ boson in the standard model:
$$g_{bL}^{SM}=\left(1+\frac{ϵ_1}{2}\right)\left(-\frac{1}{2}(1+ϵ_b)+\frac{1}{3}\mathrm{sin}^2\theta _{eff}\right),$$ (4)
$$g_{bR}^{SM}=\left(1+\frac{ϵ_1}{2}\right)\frac{1}{3}\mathrm{sin}^2\theta _{eff},$$ (5)
where the radiative corrections $`ϵ_1`$ and $`ϵ_b`$ are functions of $`m_t`$ and $`m_H`$. Note the important fact that $`ϵ_b`$ (which has a strong quadratic dependence on $`m_t`$) contributes only to $`g_{bL}^{SM}`$. On the other hand, the measured quantity $`R_b\equiv \mathrm{\Gamma }(Z\to b\overline{b})/\mathrm{\Gamma }(Z\to \mathrm{hadrons})`$ is proportional to $`g_{bL}^2+g_{bR}^2`$, whereas $`A_{FB}^{0,b}`$ and $`A_b`$ are proportional to $`(g_{bL}^2-g_{bR}^2)/(g_{bL}^2+g_{bR}^2)`$.
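To see how such an extraction works, here is a small sketch (our own inversion, not the paper's full fit): holding the quadrature sum $`g_{bL}^2+g_{bR}^2`$ at its standard-model value, as suggested by the $`R_b`$ agreement, and inverting the asymmetry relation for the extracted $`A_b`$ reproduces couplings close to those quoted below.

```python
import numpy as np

# Tree-level relation: A_b = (g_bL^2 - g_bR^2) / (g_bL^2 + g_bR^2)
S  = 0.4208**2 + 0.0774**2   # SM value of g_bL^2 + g_bR^2 (see Eq. (8) below)
Ab = 0.881                   # A_b as extracted from A_FB^{0,b}

gL2 = 0.5 * S * (1 + Ab)
gR2 = 0.5 * S * (1 - Ab)
print(f"g_bL = {-np.sqrt(gL2):.4f},  g_bR = {np.sqrt(gR2):.4f}")
# close to the fitted values (-0.4163, 0.0996) of Eq. (7) below
```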
From the 1999 data reported at Tampere and at Stanford,
$$R_b=0.21642\pm 0.00073,\quad A_{FB}^{0,b}=0.0984\pm 0.0020,\quad A_b=0.905\pm 0.026,$$ (6)
the couplings $`g_{bL}`$ and $`g_{bR}`$ can be extracted:
$$g_{bL}=-0.4163\pm 0.0020,\quad g_{bR}=0.0996\pm 0.0076.$$ (7)
Using $`m_t=174`$ GeV, $`m_H=100`$ GeV, and $`\alpha (m_Z)^{-1}=128.9`$, the standard model yields
$$g_{bL}^{SM}=-0.4208,\quad g_{bR}^{SM}=0.0774.$$ (8)
Note that $`g_{bL}^2+g_{bR}^2`$ is almost exactly equal to $`(g_{bL}^{SM})^2+(g_{bR}^{SM})^2`$, but $`g_{bL}`$ and $`g_{bR}`$ are each over two standard deviations away from $`g_{bL}^{SM}`$ and $`g_{bR}^{SM}`$ respectively. As we already pointed out last year, since $`ϵ_b`$ depends only on the left-handed partner of the $`b`$ quark, this may be an indication that $`m_t`$ is actually much greater than 174 GeV and the observed “top” events are due to an exotic quark $`Q_4`$ of charge $`-4/3`$. In this scenario, the singlet $`b_R`$ mixes with the exotic quark $`Q_1`$ in the doublet $`(Q_1,Q_4)_R`$, so that
$$g_{bR}=\left(1+\frac{ϵ_1}{2}\right)\left[\frac{1}{3}\mathrm{sin}^2\theta _{eff}\mathrm{cos}^2\theta _b+\left(\frac{1}{2}+\frac{1}{3}\mathrm{sin}^2\theta _{eff}\right)\mathrm{sin}^2\theta _b\right]=\left(1+\frac{ϵ_1}{2}\right)\left(\frac{1}{3}\mathrm{sin}^2\theta _{eff}+\frac{1}{2}\mathrm{sin}^2\theta _b\right).$$ (9)
Since $`\frac{1}{3}\mathrm{sin}^2\theta _{eff}`$ is small to begin with, a reasonably small $`\mathrm{sin}^2\theta _b`$ is sufficient to make $`g_{bR}`$ fit the data. \[If radiative corrections to $`g_{bR}`$ from new physics were invoked, an unreasonably large effect of about 30% would be needed.\] In the following we will update our analysis using the 1999 data. We will also address the new possibility that slight discrepancies in $`Z\to c\overline{c}`$ may be due to yet another exotic quark and offer a second alternative interpretation of the “top” events. Using the 1999 $`Z\to l^{-}l^{+}`$ data assuming lepton universality, i.e.
$$\mathrm{\Gamma }_l=83.96\pm 0.09\mathrm{MeV},\quad A_{FB}^{0,l}=0.01701\pm 0.00095,$$ (10)
together with
$$m_W=80.394\pm 0.042\mathrm{GeV},\quad m_Z=91.1871\pm 0.0021\mathrm{GeV},$$ (11)
we find
$$ϵ_1=(4.7\pm 1.1)\times 10^{-3},\quad ϵ_2=(-7.2\pm 2.4)\times 10^{-3},\quad ϵ_3=(3.6\pm 1.7)\times 10^{-3},$$ (12)
which agree very well with previous values and also with the standard model, i.e.
$$ϵ_1^{SM}=5.4\times 10^{-3},\quad ϵ_2^{SM}=-7.6\times 10^{-3},\quad ϵ_3^{SM}=5.2\times 10^{-3}.$$ (13)
Using Eqs. (3), (4) and (7), we then obtain
$$ϵ_b=(-15.3\pm 4.0)\times 10^{-3}.$$ (14)
This implies that
$$m_t=271_{-38}^{+33}\mathrm{GeV},$$ (15)
where we have approximated $`ϵ_b`$ by its leading contribution, $`-G_Fm_t^2/4\pi ^2\sqrt{2}`$. To explain $`g_{bR}`$ of Eq. (7), and thus also Eq. (2), we use Eq. (9) and find
$$\mathrm{sin}^2\theta _b=0.045\pm 0.015.$$ (16)
In the standard model, $`ϵ_1`$ and $`ϵ_b`$ are fixed by $`m_t=174`$ GeV and $`\theta _b`$ is absent, so the experimental discrepancy from $`Z\to b\overline{b}`$ data is forced into a value of $`\mathrm{sin}^2\theta _{eff}`$ given by Eq. (2), which is 3.0 standard deviations away from the true value given by Eqs. (1) and (3). Our interpretation of the data so far is that $`b_R`$ is not purely $`I_3=0`$ as in the standard model, but has a small $`I_3=+1/2`$ component from mixing with the exotic $`(Q_1,Q_4)_R`$ doublet.
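The inversion of Eq. (14) into Eq. (15) is a one-liner worth checking; the following sketch uses only the leading-order approximation for $`ϵ_b`$ quoted in the text.

```python
import numpy as np

GF = 1.16637e-5   # Fermi constant in GeV^-2
# eps_b ~ -G_F m_t^2 / (4*sqrt(2)*pi^2): central value and the +-1 sigma band
for eps_b in (-15.3e-3, -19.3e-3, -11.3e-3):
    mt = np.sqrt(4 * np.sqrt(2) * np.pi**2 * abs(eps_b) / GF)
    print(f"eps_b = {eps_b:+.4f}  ->  m_t = {mt:5.0f} GeV")
# the central value gives m_t ~ 271 GeV, and the band reproduces the
# asymmetric +33/-38 GeV uncertainty of Eq. (15)
```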
We also take the viewpoint that $`b_L`$ is as given by the standard model, and that the measured $`g_{bL}`$ is a direct indication of the mass of its partner, defined as the $`t`$ quark. This results in Eq. (15). At this point, we need to revise our assessment of the agreement of Eq. (12) with Eq. (13), namely that in the presence of new physics, $`ϵ_{1,2,3}`$ receive additional contributions, hence a change in the value of $`m_t`$ may be suitably compensated. Details have already been discussed in our previous paper. Consider now the 1999 $`Z\to c\overline{c}`$ data:
$$R_c=0.1674\pm 0.0038,\quad A_{FB}^{0,c}=0.0691\pm 0.0037,\quad A_c=0.630\pm 0.026,$$ (17)
from which the couplings $`g_{cL}`$ and $`g_{cR}`$ can be extracted:
$$g_{cL}=0.341\pm 0.005,\quad g_{cR}=-0.164\pm 0.005,$$ (18)
whereas the standard model yields
$$g_{cL}^{SM}=0.347,\quad g_{cR}^{SM}=-0.155.$$ (19)
Although the deviations here are small, there is a hint that $`g_{cR}`$ may be too big in magnitude and $`g_{cL}`$ too small. To explain both, we take the analog of Eq. (9) and let $`c`$ mix with a heavy quark $`Q_2`$, where $`Q_{2L}`$ is a singlet but $`(Q_5,Q_2)_R`$ is an exotic doublet, so that
$$g_{cR}=\left(1+\frac{ϵ_1}{2}\right)\left(-\frac{2}{3}\mathrm{sin}^2\theta _{eff}-\frac{1}{2}\mathrm{sin}^2\theta _{cR}\right),$$ (20)
$$g_{cL}=\left(1+\frac{ϵ_1}{2}\right)\left(\frac{1}{2}-\frac{2}{3}\mathrm{sin}^2\theta _{eff}-\frac{1}{2}\mathrm{sin}^2\theta _{cL}\right).$$ (21)
Using Eqs. (3), (12) and (18), we then obtain
$$\mathrm{sin}^2\theta _{cR}=0.02\pm 0.01,\quad \mathrm{sin}^2\theta _{cL}=0.01\pm 0.01.$$ (22)
This opens up the possibility that $`Q_2`$ may also mix with $`t`$ (and not just with $`c`$), so that the Tevatron “top” events are due to $`Q_2`$ rather than $`t`$, which is heavier. This second interpretation is of course much more speculative, because it is not directly related to the data. Note that the $`ϵ_{1,2,3}`$ contributions of $`Q_2`$ and $`Q_5`$ may be handled in the same way as those of $`Q_1`$ and $`Q_4`$, as discussed in our previous paper. In conclusion, we have shown in this short note that the 1999 precision electroweak data at LEP and SLC still support the possibility that $`b_R`$ mixes with $`Q_{1R}`$ of the exotic heavy quark doublet $`(Q_1,Q_4)_R`$. Hence the “top” events may be due to $`Q_4`$, which has charge $`-4/3`$, whereas the true $`t`$ quark is heavier, as evidenced by the value of $`ϵ_b`$ extracted from $`g_{bL}`$. Experimentally, $`t\to bW^+`$ and $`\overline{Q}_4\to \overline{b}W^+`$ are not distinguishable at the Tevatron at present, because the $`b`$ or $`\overline{b}`$ jet charge is not easily measured, but that will become possible in the near future. We also propose here a second, more speculative idea: that the “top” events may be due to a heavy quark $`Q_2`$ of charge 2/3, where $`Q_{2L}`$ is a singlet but $`(Q_5,Q_2)_R`$ is an exotic doublet. In both scenarios, the lifetime of the “top” is enhanced by the inverse square of a reduced coupling, and the single production of “top” at the Tevatron is suppressed. ACKNOWLEDGEMENT We thank J. H. Field and P. B. Renton for valuable correspondence. This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837 and a grant from the National Science Council of the Republic of China.
# Constraints on Galaxy Formation from the Tully-Fisher Relation

## 1. Introduction

Understanding the formation of galaxies is intimately related to understanding the origin of the fundamental scaling relations. In particular, any successful theory for the formation of disk galaxies should be able to explain the slope, zero-point and small amount of scatter of the Tully-Fisher relation (TFR). The empirical TFR which most directly reflects the mass in stars and the total dynamical mass of the halo is the $`K`$-band TFR of Verheijen (1997), which uses the flat part of the rotation curve as velocity measure, and we use this relation to constrain our models.

## 2. Modeling the Formation of Disk Galaxies

We assume disks to form by the settling of baryonic matter in virialized dark halos described by the NFW density profile (Navarro, Frenk & White 1997). It is assumed that baryons conserve their angular momentum, thus settling into a disk (cf. Mo, Mao & White 1998). Adiabatic contraction of the dark halo is taken into account, as well as a recipe for bulge formation based on a self-regulating mechanism that ensures disks to be stable (van den Bosch 1998). Once the density distribution of the baryonic material is known, we compute the fraction of baryons converted into stars. Only gas with densities above the critical density given by Toomre’s stability criterion is considered eligible for star formation (cf. Kennicutt 1989). A simple recipe for supernovae feedback is included, which describes what fraction of the baryonic mass is prevented from becoming part of the disk/bulge system. The slope and scatter of the TFR depend strongly on the luminosity and velocity measures used. Therefore, it is essential that one extracts the same measures from the models as the ones in the TFR used to constrain those models. We improve upon previous studies by carefully doing so. Details of the models can be found in van den Bosch (1999) and van den Bosch & Dalcanton (1999).

## 3. Results

Within the framework of dark matter, simple dynamics predict a TFR of the form $`L\propto V_{\mathrm{rot}}^\gamma `$ with $`\gamma =3`$. The empirical $`K`$-band TFR, however, has $`\gamma \simeq 4.2`$. If we ignore feedback and the star formation threshold density, such that all the available baryons are transformed into a stable disk/bulge system, our models indeed yield a TFR with $`\gamma =3`$, but with a large amount of scatter (Figure 1, model L0). This scatter owes to the spread in the angular momenta, J, of proto-galaxies: halos with lower J yield more compact disks and, because of the adiabatic contraction, more concentrated halos. Consequently, less rapidly spinning proto-galaxies result in disks with higher rotation velocities. Taking the stability-related star formation threshold densities into account increases $`\gamma `$ from $`3.0`$ to $`3.6`$ (Figure 1, model L1). In addition, the scatter is strongly reduced. This owes to the fact that more compact disks have higher disk mass fractions that are eligible for star formation, resulting in brighter disks. The spread in J therefore induces a spread along the TFR, rather than perpendicular to it. Additional physics are required to further steepen the TFR to its observed slope of $`\gamma =4.2`$. In van den Bosch (1999) we argue that feedback is the only feasible mechanism to achieve this. We have tuned the model parameters that control the feedback from supernovae to tilt the TFR to its observed slope.
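The way angular momentum scatter can move galaxies along, rather than across, the relation is easy to caricature numerically. The following is a toy of our own construction, not the actual disk models: halo mass sets both $`V`$ and $`L`$, while a spin-dependent term raises $`V_{\mathrm{rot}}`$ (contraction) and, optionally, the star-forming fraction (threshold) in a correlated way.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
logM = rng.uniform(10, 12, n)        # halo mass, arbitrary toy units
u = rng.normal(0.0, 0.2, n)          # u = log10(lambda_bar / lambda): spin scatter

def tf(alpha_v, alpha_l):
    logV = logM / 3 + alpha_v * u    # low spin -> compact disk -> higher V_rot
    logL = logM + alpha_l * u        # low spin -> larger star-forming fraction
    g, c = np.polyfit(logV, logL, 1)
    return g, (logL - g * logV - c).std()

print("spin scatter only       : gamma=%.2f, scatter=%.2f dex" % tf(0.15, 0.0))
print("with threshold coupling : gamma=%.2f, scatter=%.2f dex" % tf(0.15, 0.45))
```

With the compactness-luminosity coupling switched on, the spin scatter slides galaxies along the $`\gamma \approx 3`$ relation instead of across it, which is the qualitative content of the L0 to L1 comparison above; tilting $`\gamma `$ toward 4.2 then requires an additional, feedback-like velocity dependence.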
The resulting model (L5) predicts an amount of scatter that is in excellent agreement with observations (see panel on the right in Figure 1). In order to assess the robustness of the resulting model, we now compare model L5 to other independent observations. In Figure 2 we plot the gas mass fractions, $`M_{\mathrm{HI}}/L_B`$, as function of both absolute magnitude and central surface brightness. The models are in excellent agreement with the data. This success owes mainly to the star formation recipe used, which yields lower gas mass fractions in more compact disks, as observed. The panels on the right in Figure 2 plot the characteristic mass-to-light ratio $`\mathrm{\Upsilon }_0`$ (see van den Bosch & Dalcanton 1999 for details) as function of the central surface brightness. Once again, the model is in good agreement with the data, nicely reproducing the observed $`\mathrm{\Upsilon }_0`$–$`\mathrm{\Sigma }_0`$ “conspiracy” (cf. McGaugh & de Blok 1998). McGaugh (1998) has shown that mass discrepancies in disk galaxies set in at a characteristic acceleration of $`10^{-10}\mathrm{m}\mathrm{s}^{-2}`$. In Figure 3 we plot the enclosed mass-to-light ratio of 40 randomly chosen galaxies from model L5 (each sampled at 15 different radii) as function of radius, orbital frequency, and local acceleration. As observed, the model galaxies reveal a narrow correlation between mass-to-light ratio and acceleration. This is a remarkable success for the dark matter model; there is no obvious reason why disks in dark halos would reveal a characteristic acceleration, unlike in the case of modified Newtonian dynamics, where it is integral to the theory.

## 4. Conclusions

We have shown that simple models for the formation of disk galaxies in a dark matter scenario can explain a wide variety of observations. After tuning the feedback parameters to fit the slope of the empirical $`K`$-band TFR, the model predicts gas mass fractions, characteristic accelerations, an $`\mathrm{\Upsilon }_0`$–$`\mathrm{\Sigma }_0`$ “conspiracy”, and global mass-to-light ratios that are all in excellent agreement with observations, without additional tweaking of the parameters. This strongly contrasts with the picture drawn by McGaugh (1999). Although the results presented here may appear a baby step (cf. McGaugh 1999) to some advocates of modified Newtonian dynamics, they can be considered a giant leap for those who believe in dark matter.

### Acknowledgments.

It is a pleasure to thank Julianne Dalcanton for a fruitful and enjoyable collaboration. Financial support has been provided by NASA through Hubble Fellowship grant # HF-01102.11-97.A.

## References

Kennicutt, R. C. Jr. 1989, ApJ, 344, 685
McGaugh, S. S. 1998, preprint (astro-ph/9812327)
McGaugh, S. S. 1999, these proceedings (astro-ph/9909452)
McGaugh, S. S., & de Blok, W. J. G. 1997, ApJ, 481, 689
McGaugh, S. S., & de Blok, W. J. G. 1998, ApJ, 499, 41
Mo, H. J., Mao, S., & White, S. D. M. 1998, MNRAS, 295, 319
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
van den Bosch, F. C. 1998, ApJ, 507, 601
van den Bosch, F. C. 1999, ApJ, in press (astro-ph/9909298)
van den Bosch, F. C., & Dalcanton, J. J. 1999, ApJ, submitted
Verheijen, M. A. W. 1997, PhD Thesis, University of Groningen
# Soliton regime in the model with no Lifshitz invariant

## 1 Introduction

When describing the incommensurate (IC) phases in ferroelectrics of the so-called type II, the phenomenological approach proposed by Y Ishibashi and H Shiba is widely used. According to this model, the system thermodynamic potential can be written in the form:
$$\mathrm{\Phi }=\mathrm{\Phi }_0+\frac{1}{L}\int _0^L[(\phi ^{\prime \prime })^2-g(\phi \phi ^{\prime })^2-\gamma (\phi ^{\prime })^2+q\phi ^2+\frac{p}{2}\phi ^4+\frac{h}{3}\phi ^6]dx$$ (1)
where $`\phi (x)`$ is a one-dimensional order parameter (e.g., the component $`P_y(x)`$ of the spontaneous polarization $`𝐏`$); $`\phi ^{\prime }(x)\equiv \partial \phi /\partial x`$, and $`L`$ is the crystal length in the direction of spatial modulation of the order parameter. In the expression (1) a scale transformation has been made in order to emphasize the physically relevant material parameters $`g`$, $`q`$, $`p`$ ($`h=0,1`$; $`\gamma \approx 1`$; we retain the notation “$`\gamma `$” to indicate the contribution of the invariant $`(\phi ^{\prime })^2`$, which favors the appearance of the IC state). For all known ferroelectrics of type II (sodium nitrite $`NaNO_2`$, thiourea $`SC(NH_2)_2`$, $`Sn_2P_2Se_6`$ and betaine calcium chloride dihydrate ($`BCCD`$)) the parameter $`g`$ is negative: $`g<0`$. For sodium nitrite and $`Sn_2P_2Se_6`$ the direct (virtual) disordered-to-commensurate phase transition is regarded to be of first order, thereby $`p<0`$ and $`h=1`$ for $`NaNO_2`$ and $`Sn_2P_2Se_6`$. In the case of thiourea and $`BCCD`$ the parameter $`p>0`$ and $`h=0`$. It is usually assumed that only the parameter $`q`$ depends on temperature $`T`$: $`q=q_0(T-T_0)`$, where $`q_0`$ and $`T_0`$ are constants. The model (1) fairly well describes many properties of the IC phase in ferroelectrics of type II. One such property is the predominantly sinusoidal character of the order parameter modulation wave. Experimental studies indicate that for type II ferroelectrics the higher-order satellites are of low intensity even in the close vicinity of the lock-in transition (see for review). This circumstance is a reason why the one-harmonic approximation
$$\phi (x)=a\mathrm{sin}(bx)$$ (2)
is often used when considering the IC order parameter configuration in, e.g., sodium nitrite or $`Sn_2P_2Se_6`$. On the other hand, recent experiments reveal that for thiourea the dependence of the order parameter on position $`x`$ contains a relatively large contribution of the higher harmonics and cannot be regarded as purely sinusoidal. The solitonic properties of the modulation functions in thiourea have been explained elsewhere. In order to describe the nonlinear features of the IC order parameter configuration, the authors of that work make use of a phenomenological approach other than (1). That approach is similar to the model developed for systems with a two-component order parameter, for which the Lifshitz invariant can be introduced. Examples of such systems are compounds of the $`A_2BX_4`$ family, e.g., $`Rb_2ZnCl_4`$. Although in the case of type II ferroelectrics the order parameter is usually considered as one-dimensional, the role of the second component is played by some other normal coordinate $`\xi (x)`$ (e.g., the $`xy`$-component of the elastic strain tensor). The function $`\xi (x)`$ transforms like the first derivative of the order parameter: $`\xi (x)\sim \phi ^{\prime }(x)`$. These transformation properties of $`\xi (x)`$ make it possible to construct a thermodynamic potential with a term analogous to the Lifshitz invariant.
The approach is expected to be more appropriate than the model (1) when interpreting nonsinusoidal configurations of the order parameter. To some extent, these expectations are grounded on the analogy between this approach and the theory developed for the type I systems. Indeed, the latter constitutes a powerful tool for the description of soliton structures in the compounds of the $`A_2BX_4`$ family.

In the present paper we show that the nonlinear contribution to the modulation wave observed in the experiments on thiourea can be explained within the framework of the model (1) as well. Our consideration is based on the nonlinear approximation for the IC order parameter configuration proposed earlier. Using it, we obtain that if the material parameters $`g`$ and $`p`$ of the system are such that $`gp^{-1}\approx -6.0`$ ($`h=0`$, $`p>0`$), then the ratio $`a_3/a_1`$ of the amplitudes of the third ($`a_3`$) and fundamental ($`a_1`$) harmonics of the modulation wave is $`a_3/a_1\approx 0.1`$. This result agrees completely with the experimental data for thiourea. Moreover, it is found that for some values of the parameters $`g`$ and $`p`$ (for systems with $`h=0`$, $`p>0`$ these values satisfy the relationship $`\gamma gp^{-1}=-16`$) the transition from the IC phase into the commensurate state is continuous. The order parameter configuration is domain-like in the proximity of such a transition.

The structure of the present paper is as follows. In section 2 we formulate the main features of the order parameter approximation used here. In section 3 we consider the nonlinear configurations of the order parameter in the case of thiourea; estimates of the nonlinear properties of the other known ferroelectrics of type II are given as well. The principal possibility of the existence of a strong soliton regime in the type II systems is investigated in section 4. A comparison of the obtained results with the properties of type I systems is presented in section 5.

## 2 Sn-approximation for the IC order parameter configuration

According to the analysis made earlier, the equilibrium configuration of the order parameter in the IC phase of the type II systems (1) can be approximated as

$$\phi (x)=a\,\mathrm{sn}(bx,k)$$ (3)

where $`\mathrm{sn}(x,k)`$ is the Jacobi elliptic sine. In (3) the amplitude $`a`$, the wave number $`b`$ and the elliptic modulus $`k`$ ($`0\le k\le 1`$) are determined by minimization of the thermodynamic potential (1) with respect to $`a`$, $`b`$, $`k`$. In contrast to the approach (2), the approximation (3) allows one to consider not only the linear regime of the IC phase but the nonlinear configurations of the order parameter as well. Indeed, if the elliptic modulus $`k`$ is small ($`k\to 0`$) the function (3) is close to the dependence (2): $`\mathrm{sn}(x,k\to 0)\to \mathrm{sin}(x)`$. But when $`k\to 1`$ the spatial behavior of the elliptic sine becomes domain-like: wide regions with almost constant values $`\phi (x)\approx \pm \phi _0`$ are separated by narrow regions where the function (3) changes abruptly.

In the model (1), (3) the elliptic modulus $`k`$ is equal to zero at the point $`q_I=\frac{1}{4}\gamma ^2`$ of the disordered-to-incommensurate phase transition. With decreasing temperature the elliptic modulus $`k`$ grows and takes its largest value $`k_c`$ at the lock-in transition point $`q_c`$. Preliminary investigations show that $`k_c`$ can be close to unity for some values of the material parameters. For example, if $`g=-10`$, $`p=1`$, $`h=0`$ then $`k_c=0.965`$.
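This minimization is easy to reproduce numerically. Below is a minimal sketch (ours, not from the paper) using scipy's Jacobi elliptic functions: the spatial average of the integrand of (1) is taken over one period of the ansatz (3), with the closed forms $`\phi ^{\prime }=ab\,\mathrm{cn}\,\mathrm{dn}`$ and $`\phi ^{\prime \prime }=-ab^2\mathrm{sn}(\mathrm{dn}^2+k^2\mathrm{cn}^2)`$. The temperature $`q`$ is set to the lock-in value $`-0.212`$ read from Table 1 for $`g=-10`$, $`p=1`$, $`h=0`$, so the optimizer should settle near $`k\approx 0.96`$; the starting point and bounds are our choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import ellipj, ellipk

gamma, g, p, h, q = 1.0, -10.0, 1.0, 0.0, -0.212   # q: lock-in value, Table 1

def free_energy(params):
    a, b, k = params
    m = k * k
    K = ellipk(m)
    u = np.linspace(0.0, 4.0 * K, 4001)      # one full period of sn(u,k)
    sn, cn, dn, _ = ellipj(u, m)
    phi = a * sn
    dphi = a * b * cn * dn                   # d/dx of a*sn(bx,k)
    d2phi = -a * b**2 * sn * (dn**2 + m * cn**2)
    f = (d2phi**2 - g * (phi * dphi)**2 - gamma * dphi**2
         + q * phi**2 + 0.5 * p * phi**4 + (h / 3.0) * phi**6)
    return np.trapz(f, u) / (4.0 * K)        # spatial average over one period

res = minimize(free_energy, x0=(0.5, 0.6, 0.9),
               bounds=[(1e-3, 5.0), (1e-3, 5.0), (1e-3, 0.999)])
a, b, k = res.x
print(f"a = {a:.3f}, b = {b:.3f}, k = {k:.3f}")
```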
Another important property of the order parameter approximation (3) is an additional (in comparison with the approach (2)) mechanism for the change of the modulation period. The continuous dependence of the modulation period $`P`$ on temperature is one of the most characteristic features of the IC phases. In the framework of the approach (2) the period of the IC structure is equal to $`P=2\pi /b`$. The wave number $`b`$ depends on temperature only when the $`(\phi \phi ^{\prime })^2`$ invariant is present in the expansion of the thermodynamic potential (1):

$$b^2=\frac{1}{2}\gamma +\frac{1}{8}ga^2.$$ (4)

If the material parameter $`g`$ is negative, the period $`P`$ then increases with decreasing temperature, as is observed in experiments.

The period of the IC order parameter configuration (3) is defined as $`P=4K(k)/b`$ ($`K(k)`$ is the complete elliptic integral of the first kind). It depends not only on the value of $`b`$ (as is the case in the one-harmonic approach (2)) but also on the elliptic modulus $`k`$, which characterizes the degree of nonlinearity of the modulation wave. As a consequence, the approximation (3) imposes weaker requirements on the material parameters, in particular on $`g`$ (e.g., the period $`P`$ grows even when $`g=0`$). Moreover, if the elliptic modulus $`k`$ is close to unity, $`k\to 1`$ (i.e. $`K(k)\to \mathrm{}`$), then the nonlinear mechanism of the increase of the modulation period becomes dominant and $`P`$ can be very large: $`P\to \mathrm{}`$. A numerical investigation of the variational equation for the functional (1) shows that the approximation (3) correctly reproduces the nonlinear properties of the equilibrium configuration of the order parameter in the IC phase.

## 3 Nonlinear configurations of the order parameter in thiourea

Let us now apply the model (1), (3) to thiourea. As mentioned above, in the case of thiourea the material parameter $`h`$ equals zero. For the sake of simplicity we also assume that $`p=1`$. Numerical minimization of the thermodynamic potential (1), (3) with respect to $`a`$, $`b`$, $`k`$ shows that if the material parameter $`g`$ is equal to $`g=-6.0`$, then the elliptic modulus takes the value $`k_c\approx 0.923`$ at the point $`q_c=-0.290`$ of the lock-in transition. Using the formulae for the Fourier expansion of the elliptic sine or the procedures of fast Fourier transformation (FFT), one can find that for this $`k_c`$ the ratio of the amplitudes of the third ($`a_3`$) and the fundamental ($`a_1`$) harmonics of the modulation wave (3) is $`a_3/a_1\approx 0.104`$, and also $`a_5/a_1\approx 0.012`$ ($`a_5`$ is the amplitude of the fifth harmonic). The spatial dependence of the order parameter $`\phi (x)`$ at the temperature $`q_c`$ is shown in figure 1 (full line). The obtained results are in good agreement with the experimental data.

Besides the order parameter $`\phi (x)`$, some other modulation functions are also discussed in the case of thiourea. If such a function $`\xi (x)`$ is coupled with the order parameter $`\phi (x)=a\,\mathrm{sn}(bx,k)\equiv a\,\mathrm{sin}[\theta (x)]`$ by the relation $`\xi (x)\sim \mathrm{cos}[\theta (x)]\sim \mathrm{cn}(bx,k)`$, then the dependence of $`\xi (x)`$ on the position $`x`$ has the form depicted in figure 1 by the broken line (cf. figure 3 in the experimental work). Of course, a complete description of $`\xi (x)`$ and the other modulation functions requires a modification of the thermodynamic potential (1) by including additional invariants which correspond to the interactions of $`\xi (x),\mathrm{}`$ with each other and with the order parameter. Such an analysis is, however, beyond the scope of the present paper.
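These harmonic ratios can be cross-checked without any FFT: the Fourier series of the elliptic sine gives the odd-harmonic amplitudes directly in terms of the nome $`Q=\mathrm{exp}(-\pi K^{\prime }/K)`$, namely $`a_n\sim Q^{n/2}/(1-Q^n)`$ for odd $`n`$. A minimal sketch (ours), which for $`k_c=0.923`$ indeed reproduces the values quoted above:

```python
import numpy as np
from scipy.special import ellipk

def harmonic_ratios(k):
    m = k * k
    K, Kp = ellipk(m), ellipk(1.0 - m)       # quarter periods K(k) and K'(k)
    Q = np.exp(-np.pi * Kp / K)              # the elliptic nome
    amp = lambda n: Q**(n / 2.0) / (1.0 - Q**n)  # odd harmonic amplitude,
                                                 # up to a common prefactor
    return amp(3) / amp(1), amp(5) / amp(1)

a31, a51 = harmonic_ratios(0.923)
print(f"a3/a1 = {a31:.3f}, a5/a1 = {a51:.3f}")   # ~0.103, ~0.012
```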
The nonlinear properties of the IC modulation wave in thiourea differ essentially from those of sodium nitrite. Taking for the estimates the material parameters given in the literature (in our notation they correspond to $`g=-9.51`$, $`p=-0.651`$, $`h=1`$), we find that for $`NaNO_2`$ the lock-in value of the elliptic modulus is $`k_c\approx 0.589`$, with $`a_3/a_1\approx 0.026`$ and $`a_5/a_1\approx 0.0007`$. Note that these results are very close to those obtained analytically elsewhere. As for the other ferroelectrics of type II, estimates show that for $`Sn_2P_2Se_6`$ ($`g=-1.37`$, $`p=-0.19`$, $`h=1`$) the characteristics of the modulation wave at the lock-in transition point are the following: $`k_c\approx 0.706`$, $`a_3/a_1\approx 0.041`$, $`a_5/a_1\approx 0.0018`$; and for $`BCCD`$ ($`g=-8.0`$, $`p=2.0`$, $`h=0`$): $`k_c\approx 0.887`$, $`a_3/a_1\approx 0.086`$ and $`a_5/a_1\approx 0.0082`$.

## 4 Soliton regime in the type II systems

The results given in the preceding section describe the nonlinear properties of the IC modulation in the four known compounds belonging to the type II ferroelectrics. However, these estimates do not answer a more general question: to what extent can the soliton regime develop in the model (1), (3) in principle? In order to clarify the situation we have investigated the behavior of the systems (1), (3) in the limit $`k\to 1`$ in more detail. When $`k\to 1`$ the thermodynamic potential (1), (3) acquires the form:

$$\begin{array}{cc}\mathrm{\Phi }\hfill & \approx a^2\left[q\left(1+\frac{1}{2}k^{\prime 2}\right)-\mathrm{\Lambda }^{-1}\left(q+\frac{2}{3}\gamma b^2-\frac{8}{15}b^4\right)-\frac{1}{2}k^{\prime 2}\mathrm{\Lambda }^{-1}\left(q+\frac{8}{15}b^4\right)\right]+\hfill \\ & a^4\left[\frac{1}{2}p\left(1+k^{\prime 2}\right)-\mathrm{\Lambda }^{-1}\left(\frac{2}{3}p+\frac{2}{15}gb^2\right)-k^{\prime 2}\mathrm{\Lambda }^{-1}\left(\frac{2}{3}p+\frac{1}{15}gb^2\right)\right]+\hfill \\ & a^6\frac{1}{3}h\left(1+\frac{3}{2}k^{\prime 2}-\frac{23}{15}\mathrm{\Lambda }^{-1}-\frac{23}{10}k^{\prime 2}\mathrm{\Lambda }^{-1}\right)\hfill \end{array}$$ (5)

where $`k^{\prime 2}=1-k^2`$ and $`\mathrm{\Lambda }=\mathrm{ln}(4/k^{\prime })`$; $`\mathrm{\Lambda }^{-1}\to 0`$ as $`k\to 1`$. The equilibrium wave number $`b`$ can be found from the equation $`\partial \mathrm{\Phi }/\partial b=0`$, which yields (cf. (4)):

$$b^2=\frac{5}{8}\gamma \left(1+\frac{1}{2}k^{\prime 2}\right)+\frac{1}{8}ga^2\left(1+k^{\prime 2}\right).$$ (6)

Taking (6) into account, the thermodynamic potential (5) can be rewritten as

$$\begin{array}{cc}\mathrm{\Phi }\hfill & \approx a^2\left(1+\frac{1}{2}k^{\prime 2}\right)\left[q-\mathrm{\Lambda }^{-1}\left(q+\frac{5}{24}\gamma ^2\right)\right]+\hfill \\ & a^4\left(1+k^{\prime 2}\right)\left[\frac{1}{2}p-\mathrm{\Lambda }^{-1}\left(\frac{2}{3}p+\frac{1}{12}\gamma g\right)\right]+\hfill \\ & a^6\left(1+\frac{3}{2}k^{\prime 2}\right)\left[\frac{1}{3}h-\mathrm{\Lambda }^{-1}\left(\frac{23}{45}h+\frac{1}{120}g^2\right)\right]\hfill \end{array}$$ (7)

The function $`\mathrm{\Lambda }^{-1}(k)`$ changes much more slowly than $`k^{\prime 2}`$ as the latter approaches zero (e.g., for $`k^{\prime 2}=0.1`$ the value of $`\mathrm{\Lambda }^{-1}(k)`$ is $`0.394`$, while $`\mathrm{\Lambda }^{-1}(k)\approx 0.1`$ is only reached at $`k^{\prime 2}=4\times 10^{-8}`$). Hence, it is reasonable to omit the terms proportional to $`k^{\prime 2}`$ in (7). The further analysis depends on the value of the material parameter $`h`$. For the systems with $`h=0`$, $`p>0`$ the results are as follows. The equation $`\partial \mathrm{\Phi }/\partial a=0`$ defines the equilibrium amplitude $`a`$.
Substituting its solution into the expression (7) and comparing the result with the thermodynamic potential of the commensurate state, $`\mathrm{\Phi }_c=-q^2/2p`$, we find the effective temperature $`q_c`$ of the lock-in transition:

$$q_c=5g^{-2}p^2\left[\gamma gp^{-1}-4+4\left(1-\frac{1}{2}\gamma gp^{-1}\right)^{1/2}\right]\approx \frac{5}{2}\gamma ^2\left(\gamma gp^{-1}-4\right)^{-1}.$$ (8)

In (8) the approximate (right-hand side) formula is derived for the case when the term proportional to $`a^6`$ is neglected in (7). This expression clearly demonstrates the main features of the dependence of $`q_c`$ on $`gp^{-1}`$.

Now we consider the conditions under which the elliptic modulus can be equal to unity at the lock-in transition point $`q_c`$. The thermodynamic potential (7) depends on the elliptic modulus $`k`$ only through the function $`\mathrm{\Lambda }^{-1}(k)`$ (recall that we omit the terms proportional to $`k^{\prime 2}`$ in (7)). It is therefore convenient to formulate the variational problem for $`k`$ in terms of $`\mathrm{\Lambda }^{-1}`$. In these terms the condition $`k_c=1`$ means $`\mathrm{\Lambda }^{-1}(k_c)=0`$, and the equation $`\partial \mathrm{\Phi }/\partial k=0`$ is equivalent to $`\partial \mathrm{\Phi }/\partial (\mathrm{\Lambda }^{-1})=0`$. Solving the latter at the temperature $`q_c`$ (8), we find that the function $`\mathrm{\Lambda }^{-1}(k_c)`$ is equal to zero if the material parameters satisfy the relation:

$$\gamma gp^{-1}=-16.$$ (9)

Therefore, for the systems whose material parameters $`g`$ and $`p`$ are related to each other according to (9) ($`h=0`$), the elliptic modulus of the order parameter modulation wave (3) is equal to unity at the point of the lock-in transition. This means that for such systems the transition from the IC phase into the commensurate state is continuous. Close to this transition the order parameter spatial configuration is domain-like, and the soliton density $`n_S=\pi /2K(k)`$ (see references therein) approaches zero as the temperature $`q`$ decreases towards its lock-in value $`q_c`$.

Numerical calculations confirm the results of the analytical investigation (see table 1). For the case $`gp^{-1}=-16`$ the elliptic modulus $`k_c`$ approaches unity, but we stopped the calculations at the value $`k_c=0.9999`$: the properties of the elliptic functions change abruptly at the point $`k=1`$, and it is numerically difficult to reproduce the point $`k=1`$ itself. As follows from table 1, the dependence of $`k_c`$ on $`gp^{-1}`$ has a maximum at $`gp^{-1}=-16`$. For small ($`gp^{-1}=-1`$) and large ($`gp^{-1}=-100`$) values of the parameter combination $`gp^{-1}`$ the contribution of the higher harmonics to the modulation wave is relatively small.

The analysis for the case of systems with $`h=1`$, $`p<0`$ can be made in a similar manner. Here we point out only the following. When $`h=1`$, $`p<0`$, the direct disordered-to-commensurate phase transition is of first order. As a consequence, for large enough values of the parameter $`|p|`$ the range of stability of the IC phase can be relatively small, as is the case for $`NaNO_2`$ ($`q_c\approx +0.07`$). Nevertheless, for any $`p`$ there exists some $`g`$ for which $`k_c=1`$. For example, for $`p=-0.65`$ (the case of sodium nitrite) the lock-in value $`k_c`$ of the elliptic modulus of the modulation wave (3) equals $`1`$ if the material parameter $`g`$ is $`g\approx -4.5`$.
However, due to the presence of the term proportional to $`a^6`$ in (1), i.e. due to $`h=1`$, the material parameters $`g`$ and $`p`$ are not as strongly correlated as in the case $`h=0`$ (in fact, when $`h=0`$ it is the ratio $`g/p`$ that is relevant rather than the parameters $`g`$ and $`p`$ themselves). As a consequence, for the systems with $`h=1`$, $`p<0`$ the dependence $`g(p)`$ providing $`k_c=1`$ is more complex than (9).

## 5 Discussion

In the present paper we have shown that different nonlinear configurations of the IC order parameter can be described in the framework of the phenomenological model with no Lifshitz invariant: an almost sinusoidal one, as in the case of sodium nitrite ($`a_3/a_1\approx 0.03`$); one with a larger contribution of higher harmonics, as in thiourea ($`a_3/a_1\approx 0.1`$); and strong soliton regimes in which the lock-in transition is continuous (no such compound is known at the moment). The nonlinear properties of a concrete system are determined by the values of the material parameters $`g`$ and $`p`$ (see, e.g., table 1).

The specific role of the $`(\phi \phi ^{\prime })^2`$ invariant should be emphasized when discussing the nonlinear features of the IC order parameter configurations. If this term is not included in the thermodynamic potential (1), the soliton structure does not develop. From this point of view, the $`(\phi \phi ^{\prime })^2`$ term is analogous to the Umklapp invariant of relatively low order (the anisotropy invariant) which favors the appearance of domain-like structures in the type I ferroelectrics. The difference between the $`(\phi \phi ^{\prime })^2`$ term in (1) and the anisotropy invariant is that the former is a part of the gradient contribution to the thermodynamic potential and cannot influence the characteristics of the commensurate phase. On the contrary, in the type I ferroelectrics the behavior of the IC and commensurate phases is correlated due to the anisotropy invariant (it belongs to the local interactions). In the case of type II systems an analogous correlation appears only for some specific values of the material parameters $`g`$ and $`p`$, i.e. only for certain correlated actions of the invariants $`(\phi \phi ^{\prime })^2`$ and $`\phi ^4`$, which define the low-temperature behavior of the IC and commensurate phases.

The existence of the IC state in the type I systems is caused by symmetry properties (the Lifshitz condition is not fulfilled). For the systems of type II such global reasons are absent, and the spatial modulation of the order parameter is a consequence of specific features of the interatomic interactions. As a result, in the type II ferroelectrics the appearance of the soliton regime has no systematic character, in contrast to the situation for, e.g., compounds of the $`A_2BX_4`$ family.

Acknowledgments

The author would like to thank V F Klepikov and Yu M Vysochanskii for fruitful discussions and support.

Figure Captions

Figure 1. The order parameter $`\phi (x)`$ (full line) and the modulation function $`\xi (x)`$ (broken line) as functions of the position $`x`$ at the temperature $`q_c=-0.29`$ (the lock-in transition point). The material parameters are $`g=-6.0`$, $`p=1.0`$, $`h=0`$. The amplitudes of the order parameter $`\phi (x)`$ and of the function $`\xi (x)`$ are arbitrary.

Table Caption

Table 1. Characteristics of the order parameter modulation wave in the type II systems (the case $`h=0`$, $`p>0`$) at the point $`q_c`$ of the lock-in transition for different values of the material parameters $`g`$ and $`p`$: $`k_c`$ is the lock-in value of the elliptic modulus, $`a_3/a_1`$ is the ratio of the third and first harmonics, and $`n_S`$ is the soliton density.
| $`g/p`$ | $`q_c`$ | $`k_c`$ | $`a_3/a_1`$ | $`n_S`$ | | --- | --- | --- | --- | --- | | -1.0 | -0.690 | 0.768 | 0.052 | 0.81 | | -10. | -0.212 | 0.965 | 0.136 | 0.57 | | -16. | -0.156 | 0.9999 | 0.262 | 0.27 | | -20. | -0.134 | 0.970 | 0.142 | 0.56 | | -100. | -0.048 | 0.766 | 0.052 | 0.81 |
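As an editorial cross-check (ours, not the author's), the closed-form lock-in temperature (8) can be evaluated directly for the parameter ratios of Table 1 ($`\gamma =1`$, $`h=0`$):

```python
import numpy as np

def q_c(r):   # r = gamma*g/p; full (left-hand) expression of Eq. (8)
    return 5.0 / r**2 * (r - 4.0 + 4.0 * np.sqrt(1.0 - 0.5 * r))

for r in (-1.0, -10.0, -16.0, -20.0, -100.0):
    print(f"g/p = {r:6.1f}   q_c = {q_c(r):+.3f}")
```

The middle rows come out as $`-0.210`$, $`-0.156`$, $`-0.134`$, in close agreement with the tabulated $`-0.212`$, $`-0.156`$, $`-0.134`$; the first and last rows, where $`k_c`$ is far from unity, deviate, as expected for a $`k\to 1`$ expansion.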
# Dark Matter in Groups and Clusters of Galaxies

## 1. History of the dark matter concept

The story of dark matter is a classical example of a scientific revolution (Kuhn 1970, Tremaine 1987). It is impossible in this review talk to discuss all aspects of dark matter. We start with a historical introduction, followed by a comparison of ordinary stellar populations and the nature of dark matter. Thereafter we consider dark matter in galaxies, groups and clusters of galaxies, and in voids; we also discuss the mean density of matter in the Universe.

The first hints of the presence of a mass paradox in galaxies and clusters of galaxies came over 60 years ago. Oort (1932) noticed that there may exist a discrepancy between the dynamical estimate of the local density of matter in the Solar neighborhood of the Galaxy and the density of luminous matter: the known stellar populations may be insufficient to explain the vertical gravitational attraction in the Galaxy, which causes the motions of stars perpendicular to the plane of the Galaxy. Zwicky (1933) measured radial velocities of galaxies in the Coma cluster and found that the mass of the cluster exceeds the summed mass of its galaxies more than tenfold. These studies raise two problems: that of local dark matter in the disk of the Galaxy, and that of global dark matter penetrating clusters of galaxies. In the 1930s astronomers were busy trying to understand the evolution of stars, and the dark matter problems escaped the attention of the astronomical community.

The next essential step in the dark matter story was made by Kahn & Woltjer (1959). They noticed that the Andromeda galaxy and our Galaxy approach each other, whereas almost all other galaxies recede from us. The total mass of the Local Group, inferred by ascribing the approach velocity to mutual attraction, exceeds the conventional mass of M31 and the Galaxy approximately tenfold. This discovery again did not attract much attention. During the discussion of the stability of clusters of galaxies in the 1960s, Ambartsumian suggested the opposite view, that clusters may be recently formed and expanding systems. This suggestion contradicts, however, the data on the ages of cluster galaxies; see van den Bergh (1999).

In the late 1960s and early 1970s it was realized that the mass paradox may be a global problem for all bright galaxies. Einasto (1969), Sizikov (1969) and Freeman (1970) noticed that the rotation velocities of galaxies decrease more slowly in the outskirts of galaxies than expected from the distribution of light. Two possibilities were discussed to explain this discrepancy – systematic deviations from circular motion, or the presence of some massive but invisible population in the outskirts of galaxies. One approach that led to the conclusion that dark matter is present around galaxies was the modeling of galaxies using a combination of all available observational data on stellar populations in galaxies of different morphological type. Such combined models were reported during the First European Astronomy Meeting in Athens in September 1972 (Einasto 1974). It was shown that ordinary stellar populations cannot explain the almost flat rotation curves of the outer parts of spiral galaxies. To explain the flat rotation curves, the presence of a new invisible population, a "dark corona", was suggested. Independent evidence for the presence of dark matter around galaxies was inferred by Ostriker and Peebles (1973) from disk stability arguments.
The available data were, however, not sufficient to determine the total mass and dimensions of the hypothetical dark population. To derive the mass distribution at larger distances from galactic centers, the teams at Tartu and Princeton investigated the dynamics of the companions of bright galaxies. They demonstrated that the internal mass, inferred from the motion of companion galaxies, increases with distance from the centers of bright galaxies up to several hundred kiloparsec, thus increasing the hitherto assumed dimensions and masses of galaxies by an order of magnitude (Einasto, Kaasik & Saar 1974, Ostriker, Peebles & Yahil 1974). These studies suggest that the presence of dark matter is a general property of galaxies and systems of galaxies, and that this matter makes the dominant contribution to the mass budget of the Universe. Difficulties connected with this interpretation of rotation curves and of the dynamics of companion galaxies were discussed by Burbidge (1975).

These three studies triggered the boom of dark matter studies. Dark matter was discussed during the Third European Astronomical Meeting in Tbilisi, in July 1975. This Meeting was the highlight of the dark matter discussion, where supporters (Bertola & Tullio 1976, Einasto et al. 1976) and opponents (Karachentsev 1976, Oleak 1976, Materne & Tammann 1976, Fesenko 1976) of the concept of dark matter had a heated debate. The majority of speakers argued against the dark matter concept; in the summary of the Meeting, Kharadze noted that the dark matter concept had not found support.

The next public discussion of the dark matter problem took place during the IAU General Assembly in Grenoble in August 1976. Here the focus was the nature of the dark population. Ostriker, Peebles & Yahil (1974) had assumed that dark halos consist of faint stars; this concept was discussed by Maarten Schmidt. Population studies led the group at Tartu to conclude that dark matter cannot be made of ordinary stars but must have a different origin (Jaaniste & Saar 1975). To make a clear distinction between the known halo population (which consists of old stars) and the new population, the term "corona" was suggested (Einasto 1974). The difference between ordinary galactic populations and the dark matter population was summarized by Einasto, Jõeveer & Kaasik (1976). Ivan King remarked from the audience: "perhaps really there are two halos in galaxies, stellar and dark".

Initially, hot gas was considered a possible candidate for the dark matter (Einasto 1974). However, subsequent studies by Komberg & Novikov (1975) and Chernin et al. (1976) demonstrated that only a fraction of the corona can be gaseous. X-ray observations have confirmed that the mass of hot gas in coronae is comparable with the mass of the stellar populations; however, hot gas is not sufficient to explain the total "missing" mass. Thus the origin of the coronae remained unclear. The final acceptance of the presence of dark matter around galaxies came after Morton Roberts, Vera Rubin and their collaborators had shown that the outer parts of practically all spiral galaxies have flat rotation curves (Roberts & Whitehurst 1975, Rubin, Ford & Thonnard 1978, 1980, Rubin 1987).

However, theorists accepted the presence of dark matter only after its role in the evolution of the structure of the Universe was realized. This illustrates Eddington's test: "No experimental result should be believed until confirmed by theory" (Turner 1999b). It was clear that, if nature created so much dark matter, it must have some purpose.
Rees (1977) noticed that neutrinos can be considered as dark matter particles, and Chernin (1981) showed that, if dark matter is non-baryonic, this helps to explain the paradox of the small temperature fluctuations of the microwave background radiation. Density perturbations of non-baryonic dark matter start growing already during the radiation-dominated era, whereas the growth of baryonic matter is damped by radiation. If non-baryonic dark matter dominates dynamically, the total density perturbation can have an amplitude of the order of $`10^{-3}`$ at the recombination epoch, which is needed for the formation of the observed structure of the Universe. Baryonic matter flows after recombination into the gravitational wells formed by the non-baryonic matter. Chernin considered neutrinos with non-zero rest mass as a possible candidate, but other non-baryonic particles do the job as well.

This result was discussed at a conference in Tallinn in spring 1981. In the summary speech of this conference Zeldovich concluded: "Observers work hard to collect data; theorists interpret observations, are often in error, correct their errors and try again; and there are only very rare moments of clarification. Today is one of such rare moments when we have a holy feeling of understanding Nature. Non-baryonic dark matter is needed to start structure formation early enough."

Soon it was realized that neutrino-dominated, or hot, dark matter generates almost none of the fine structure of the Universe – the galaxy filaments in superclusters (Zeldovich, Einasto & Shandarin 1982) – and that the structure forms too late (White, Davis & Frenk 1984). A much better candidate for dark matter is some sort of cold particles, such as axions (Blumenthal et al. 1984). The dark matter concept was incorporated in full detail, as a solid basis of contemporary cosmology, in a series of lectures by Primack (1984), and was discussed at the IAU Symposium on Dark Matter (Kormendy & Knapp 1987). Thus in the end it took over fifty years from the first discoveries by Oort and Zwicky until the new paradigm was generally accepted. However, the story is not over: the nature of the dark matter is still unclear – we do not know exactly what the cold dark matter is, nor whether it is mixed with hot dark matter (neutrinos).

## 2. Galactic populations

Dark matter is invisible, and the only way to determine its mass, radius and shape in galaxies and clusters of galaxies is by modeling the populations present in these systems, using all available observational data on the distribution of the populations and on the dynamics of the system. Thus our first task is to find the main parameters of the stellar populations in galaxies. Models of stellar populations in galaxies were constructed by Einasto (1974) and, more recently, by Einasto & Haud (1989), Bertola et al. (1993) and Persic, Salucci & Stel (1996), among others. The models use luminosity profiles of galaxies, rotation curves, velocity dispersions of central stellar clusters, and other relevant data. Parameters can be determined for the stellar halo, the bulge and the disk, and for the dark population. To determine the amount of dark matter in and around galaxies, the mass-to-luminosity ratio of the stellar population, $`M/L_B`$, is of prime importance. The available data show that the mean $`M/L_B`$ is surprisingly constant: for the stellar halo it is of the order of unity, for the bulge approximately 3, and only for the metal-rich cores of massive galaxies does the value approach 10.
The mean mass-to-luminosity ratio of all visible matter, weighted with the luminosities of galaxies, is $`M/L_B=4.1\pm 1.4`$. We can summarize the properties of ordinary and dark populations as follows:

1) Stellar populations have $`1\lesssim M/L_B\lesssim 10`$, while the dark population has $`M/L_B\gtrsim 1000`$.

2) There is a continuous transition of stellar populations from stellar halo to bulge, from bulge to old disk, and from old to young disk; intermediate populations are clearly seen in our Galaxy. All stellar populations contain a continuous sequence of stars of different mass; some of these stars have ages and masses which correspond to those of luminous red giants, so all stellar populations are visible, in contrast to the dark population.

3) The density of stellar populations rapidly increases toward the plane or the center of the galaxy, while the dark matter population shows a much lower concentration of mass toward the galactic plane and center.

These arguments show quantitatively that dark matter must have an origin different from that of the stellar populations. Since old and young stellar populations form a continuous sequence, the dark population must have originated much earlier. There must be a large gap between the formation time of the dark halo and that of the oldest visible stellar populations, since there appear to be no intermediate populations between the dark and stellar populations (Einasto, Jõeveer & Kaasik 1976).

## 3. Dark matter in galaxies

The possible existence of dark matter near the plane of the Galaxy was advocated by Oort (1932, 1960), who determined the density of matter in the Solar vicinity and found that there may be a discrepancy between the dynamical density and the density calculated from the sum of the densities of the known stellar populations. This discrepancy was studied by Kuzmin (1952), Eelsalu (1959) and Jõeveer (1972, 1974); all three independent analyses demonstrated that there is practically no local mass discrepancy in the Galaxy. A much higher value of the local dynamical density was found by Bahcall (1984a, 1984b, 1987). It is clear that non-baryonic dark matter cannot contract to the flat population needed to explain a local mass discrepancy; for this reason it is natural to expect that the local dark matter, if present, must be of stellar origin. The local dark matter problem has been analyzed by Gilmore (1990 and references therein). The most recent data suggest that the dynamically determined local mass density is approximately $`0.1M_{}pc^3`$, in good agreement with direct estimates of the density. Thus there is no firm evidence for the presence of local dark matter in our Galaxy.

Flat rotation curves of galaxies suggest that there must be another dark population in galaxies. This global dark matter must have a more or less spherical distribution to stabilize the flat population (Ostriker & Peebles 1973). As discussed above, the dark population has properties completely different from those of the known stellar populations; it is generally believed that this population is non-baryonic. The mass and volume occupied by dark matter halos around galaxies can be determined only on the basis of the relative motions of visible objects moving within these halos. Almost all bright galaxies are surrounded by dwarf companion galaxies, and in this respect they can be considered as poor groups of galaxies (Einasto et al. 1974). Such clouds of satellites have radii of 0.1 to 1 $`h^{-1}`$ Mpc.
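The mass estimate underlying such satellite studies is elementary. A minimal numerical sketch (ours; the velocity and radius below are illustrative round numbers, not measurements quoted in the text): for an isothermal halo the mass inside radius $`r`$ follows from the circular velocity of the satellites, $`M(<r)=v^2r/G`$.

```python
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def halo_mass(v_kms, r_kpc):
    # enclosed mass from a circular (or isotropic satellite) velocity
    return v_kms**2 * r_kpc / G

print(f"M(<200 kpc) = {halo_mass(200.0, 200.0):.1e} M_sun")  # ~1.9e12
```

With an assumed satellite velocity of 200 km/s at 200 kpc this gives about $`2\times 10^{12}M_{}`$, the value quoted below for the halo of our Galaxy; since $`M(<r)\propto r`$ for constant $`v`$, the enclosed mass grows linearly with radius, which is exactly the behavior described next.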
The relative motions of companion galaxies indicate that the total mass within the radius of the orbits grows approximately linearly with distance. This suggests that the dark halos of main galaxies have approximately isothermal density profiles. The outer radius of the isothermal halos of giant galaxies is, however, not well determined, since there are no objects with which to test the relative velocity at large distances from the main galaxy. The Local Group of galaxies offers a unique possibility to measure the relative radial velocity of two subgroups, located around our Galaxy and the Andromeda galaxy. These measurements show that the total mass of the Local Group is $`5\times 10^{12}M_{}`$ (Kahn & Woltjer 1959, Einasto & Lynden-Bell 1982). The masses determined from the velocities of companions within the two subgroups are $`2\times 10^{12}M_{}`$ and $`3\times 10^{12}M_{}`$ for our Galaxy and M31, respectively. In these determinations it is assumed that the dark halos of M31 and the Galaxy have external radii of about 200–300 kpc (Haud & Einasto 1989, Tenjes, Haud & Einasto 1994). We see that the individual masses are in good agreement with the total mass derived from the approach velocity; in other words, this agreement confirms that the estimated external radii and masses of the dark halos are correct.

## 4. Dark matter in clusters

The distribution of mass in clusters of galaxies can be determined by three independent methods: from the distribution of relative velocities of galaxies, from the distribution and temperature of the hot X-ray emitting gas, and from the gravitational lensing effect. All three methods can be applied in the case of clusters of galaxies and rich groups of galaxies, so we start our discussion with these systems.

### 4.1. Rich clusters of galaxies

The classical method to determine the mass distribution in clusters is based on measurements of the velocity dispersion of galaxies in clusters. The method may be biased since the number of clusters with measured redshifts is usually small and it is difficult to exclude foreground and background clusters, especially in regions of high galaxy density (superclusters).

During the last decade, X-ray measurements have supplied a more accurate method to determine the masses and mass profiles of clusters of galaxies. The method is based on the observation that the hot gas and the galaxies are in hydrostatic equilibrium within a common cluster potential, i.e. both move under gravity in the potential well of the cluster. The mass distribution of the cluster can be derived from the mean temperature of the gas and the radial gradients of the temperature and density (Watt et al. 1992, Mohr et al. 1999). The intensity of the X-ray emission gives information on the mass distribution of the hot gas; thus X-ray observations yield simultaneously the distribution of the total mass and of the gas mass in the cluster. Galaxies give additional information on the distribution of mass in galaxies, so that altogether three distributions can be found. ROSAT X-ray satellite data are presently available for many clusters and rich groups of galaxies. As an example of the integrated mass distribution in the Perseus cluster of galaxies we refer to Böhringer (1995); in the other clusters studied so far the distributions are rather similar.
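A hedged sketch of the hydrostatic estimate just described (the formula is the standard one; the isothermal beta-model profile and the parameter values are our illustrative assumptions, not fits to Perseus or any other cluster): $`M(<r)=-(k_BTr/G\mu m_p)(d\mathrm{ln}\rho /d\mathrm{ln}r+d\mathrm{ln}T/d\mathrm{ln}r)`$.

```python
G, m_p = 6.674e-11, 1.673e-27                 # SI units
kpc, keV, Msun = 3.086e19, 1.602e-16, 1.989e30
mu = 0.6                                      # mean molecular weight of the plasma

def hydrostatic_mass(r_kpc, T_keV, beta=2.0 / 3.0, rc_kpc=250.0):
    """Isothermal beta model: rho ~ [1+(r/rc)^2]^(-3*beta/2), dlnT/dlnr = 0."""
    dln_rho = -3.0 * beta * r_kpc**2 / (r_kpc**2 + rc_kpc**2)
    return -(T_keV * keV * r_kpc * kpc / (G * mu * m_p)) * dln_rho / Msun

print(f"M(<1 Mpc) = {hydrostatic_mass(1000.0, 7.0):.1e} M_sun")  # ~5e14
```

For an assumed 7 keV cluster this gives about $`5\times 10^{14}M_{}`$ within 1 Mpc, the right order of magnitude for a rich cluster.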
The main conclusions from these studies are the following:

1) the radial distributions of the total mass, the gas mass, and the galaxy mass are similar;

2) the intra-cluster hot gas constitutes $`14\pm 2`$ % of the total mass of clusters (for a Hubble constant $`h=0.65`$);

3) the mass in the visible populations of galaxies is $`3`$ % of the total mass of the cluster.

X-ray data also yield the mass-to-luminosity ratio of clusters: $`M/L_V=150hM_{}/L_{}`$ (David, Jones & Forman 1996). This mean value holds over the whole range of temperatures and masses of clusters and groups. Modern data based on the velocity dispersions of galaxies in clusters yield $`M/L_V=213\pm 60hM_{}/L_{}`$ (Carlberg et al. 1997). Gravitational lensing provides another independent method to derive the masses of clusters of galaxies. This method has been applied to several clusters, and the results are in agreement with the masses determined from X-ray data (Mellier, Fort & Kneib 1993, Schindler et al. 1995). ROSAT data have been used to investigate the mass distribution in a cluster filament in the core of the Shapley supercluster (Kull & Böhringer 1999). The data show that there exists continuous X-ray emission along the filament joining three rich clusters of galaxies. This emission indicates the presence of a potential well along the filament, filled with dark matter and hot gas. A similar distribution of galaxies along the main chain of the Perseus supercluster has been known for a long time (Jõeveer, Einasto & Tago 1978). These data indicate that galaxies, hot gas and dark matter form similar condensations along the filaments joining clusters and groups of galaxies.

### 4.2. Poor groups of galaxies

Most galaxies in the Universe belong to poor groups with one or a few bright galaxies and a number of faint dwarf companions. The Local Group is an example of a poor group with two major concentration centers. The basic difficulty in the study of the mass distribution in poor groups lies in the weakness of the X-ray emission and in the absence, in most groups, of bright companions with which to measure the relative velocity at large distances from the group center. The available X-ray data suggest that poor groups have a lower fraction of hot gas than do rich clusters of galaxies; the mass of hot gas is approximately ten times smaller than the stellar mass (Ponman & Bertram 1993, Pildis, Bregman & Evrard 1995). According to the X-ray data, dark matter extends significantly beyond the apparent configuration of bright galaxies, in good agreement with optical data on the distribution of faint companion galaxies; the total mass-to-luminosity ratio is also in agreement with optical data, $`M/L_B120hM_{}/L_{}`$ (Ponman & Bertram 1993). Galaxies in compact groups show signs of distortions which indicate that these groups formed as a result of orbital decay; the galaxies merge within a few billion years to form a giant elliptical galaxy in the center of the group, and this process is rather rapid. The presence of such groups indicates that there should exist fossil groups, consisting only of the central giant elliptical galaxy surrounded by the dark matter of the former group. Such fossil groups are actually observed; examples are NGC 315 in the Perseus supercluster chain – a massive radio galaxy with very large radio lobes (Jõeveer, Einasto & Tago 1978) – and NGC 1132 (Mulchaey & Zabludoff 1999).
### 4.3. Dynamics of main galaxies in groups and clusters

If the dominant galaxies in groups were formed by the merging of their former companions, one would expect the internal velocity dispersion of these galaxies to be comparable with the velocity dispersion of galaxies in the group before the merger event. In Fig. 1 we plot the velocity dispersion of central galaxies, $`\sigma _{gal}`$, in groups and clusters as a function of the velocity dispersion of galaxies in the respective systems, $`\sigma _{clust}`$. We see that the internal velocity dispersion of the dominant galaxies in rich clusters is much lower than the velocity dispersion in the clusters, but comparable to the velocity dispersion of galaxies in the subgroups often found in clusters. This observation suggests that the central galaxies formed already in the early stages of cluster evolution, before the subgroups merged into the presently observed cluster. A similar conclusion has been reached by Dubinski (1998) using N-body simulations of cluster evolution.

## 5. Dark matter in voids

In the mid-1970s it was discovered that field galaxies are not randomly distributed in space but form long filaments and chains between clusters and groups; clusters and groups themselves are concentrated in superclusters of galaxies (Jõeveer & Einasto 1978, Jõeveer, Einasto & Tago 1978). Between the galaxy filaments there are large volumes devoid of any visible form of matter. One of the first questions asked was: are these voids really empty, or do they contain some hidden matter? This problem was investigated by Einasto, Jõeveer & Saar (1980). The study was based on the well-known theory of the growth of density perturbations developed by Zeldovich (1970) and Press & Schechter (1974). According to Zeldovich, matter flows away from under-dense regions towards high-density ones until the over-dense regions collapse and form galaxies. The density of matter in under-dense regions decreases approximately exponentially and never reaches zero. In order to form a galaxy or a cluster, the over-density within a radius $`r`$ must exceed a certain limit, about 1.68 in the case of spherical perturbations (the Press-Schechter limit). On the basis of these considerations one can draw two important conclusions: first, there must exist some primordial matter in voids, and second, galaxy formation is a threshold phenomenon. Recent hydrodynamical simulations of the evolution of the density field and the formation of galaxies have confirmed these theoretical expectations (Cen & Ostriker 1992, 1999, Katz et al. 1992, 1996).

Quantitative estimates of the total fraction of matter in voids have been made by Einasto et al. (1994, 1999). These estimates are based on N-body calculations of the evolution of under- and over-density regions for a variety of cosmological models. The density field was calculated using a small smoothing length, about 1 $`h^{-1}`$ Mpc, which corresponds to the mean size of small groups of galaxies, the dominant structural elements of the Universe. A problem in these calculations is the identification of the present epoch of the simulations. This epoch can be determined using the $`\sigma _8`$ normalization of the density field: the present mean value of density perturbations in a sphere of radius 8 $`h^{-1}`$ Mpc can be determined directly from observations. Initially half of all matter was located in regions below the mean density (this follows from the simple fact that the initial density perturbations are very small). During the evolution, matter flows away from low-density regions (see Fig. 2).
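As a small aside, the collapse threshold of about 1.68 quoted above is the standard spherical-collapse value, $`\delta _c=(3/20)(12\pi )^{2/3}`$, and can be checked in one line (our check, not a computation from the text):

```python
import math
# linear overdensity at which a spherical perturbation collapses
print((3.0 / 20.0) * (12.0 * math.pi) ** (2.0 / 3.0))  # 1.6864...
```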
The present fraction of matter in voids is somewhat model-dependent, $`25\pm 10`$ % of the total amount of matter (Einasto et al. 1999).

## 6. Mean density of matter in the Universe

There are several independent methods to derive the mean density of matter in the Universe. The first method is based on primordial nucleosynthesis data, which indicate that the baryon density is $`\mathrm{\Omega }_bh^2=0.019\pm 0.002`$ (Schramm & Turner 1998, Turner 1999a). If we use a Hubble parameter of $`h=0.65\pm 0.05`$ and apply the ratio of baryon to overall density suggested by the X-ray data, we obtain for the mean density of matter $`\mathrm{\Omega }_M=0.31(h/0.65)^{-1/2}\pm 0.04`$. The second method uses the mass-to-luminosity ratios of groups and clusters, and the mean luminosity density; it gives the density of the clustered matter. If we add the density of matter in voids as suggested by the void-evacuation data, we get $`\mathrm{\Omega }_M=0.25\pm 0.05`$ (Bahcall 1997, Einasto et al. 1999). The distant supernova projects (Perlmutter et al. 1998, 1999, Riess et al. 1998) allow one to measure the curvature of the Universe and to distinguish between the matter density, $`\mathrm{\Omega }_M`$, and the cosmological constant parameter, $`\mathrm{\Omega }_\mathrm{\Lambda }`$; this method suggests that the Universe is dominated by the cosmological term, with a matter density $`\mathrm{\Omega }_M=0.28\pm 0.1`$. Similarly, the comoving maximum of the galaxy power spectrum allows one to measure the cosmological curvature (Broadhurst & Jaffe 1999), and prefers a Universe with $`\mathrm{\Omega }_M=0.4\pm 0.1`$. As demonstrated by Bahcall & Fan (1998) and Eke et al. (1998), the rate of evolution of the cluster abundance depends strongly on the mean density of the Universe; the cluster abundance method yields a density $`\mathrm{\Omega }_M=0.3\pm 0.1`$. Finally, the dynamics of the Local Group and its vicinity, analysed with the least-action method, also yields a low density value (Shaya et al. 1999).

The weighted mean of these independent methods is $`\mathrm{\Omega }_M=0.30\pm 0.05`$; i.e., the overall density of matter in the Universe is sub-critical by a wide margin. The quoted error is intrinsic; if we add possible systematic errors we get an error estimate of $`\pm 0.1`$. Supernova and CMB data exclude the possibility of an open Universe: the dominating component in the Universe is the dark energy – the cosmological constant term or some other term with negative pressure (Turner 1999b, Perlmutter et al. 1998, 1999).

## 7. Summary

The present knowledge of dark matter in the Universe can be summarized as follows.

1) The evidence for the presence of local dark matter in the disk of the Galaxy is not convincing; if present, it must be of baryonic origin, as non-baryonic matter cannot form a flat disk.

2) The mean mass-to-luminosity ratio of stellar populations in galaxies is $`M/L_B4M_{}/L_{}`$; the mean mass-to-luminosity ratio in groups and clusters of galaxies is $`100200hM_{}/L_{}`$.

3) The presence of dark matter halos around galaxies, and of dark common halos of groups and clusters, is well established; the bulk of the dark population consists of some sort of cold dark matter. About $`5`$ % of the mass in poor groups, and $`15`$ % in rich clusters, is in the form of hot X-ray emitting gas.

4) There exists dark matter in voids; the fraction of matter in voids is $`25`$ %, and in high-density regions $`75`$ %, of the total matter.
5) The total density of matter is $`\mathrm{\Omega }_M=0.3\pm 0.1`$, and the density of the dark energy (cosmological constant) is $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7\pm 0.1`$.

6) On all scales larger than the sizes of galaxies the dynamics is determined by dark matter.

#### Acknowledgments.

The authors would like to thank H. Andernach for suggestions on the presentation of this work.

## References

Bahcall, J.N. 1984a, ApJ, 276, 169
Bahcall, J.N. 1984b, ApJ, 287, 926
Bahcall, J.N. 1987, in Dark Matter in the Universe, eds. J. Kormendy & G.R. Knapp, Reidel, Dordrecht, p. 17
Bahcall, N. 1997, in Critical Dialogues in Cosmology, ed. N. Turok, World Scientific, Singapore, p. 221
Bahcall, N.A., & Fan, X. 1998, ApJ, 504, 1
Bertola, F., Pizzella, A., Persic, M., & Salucci, P. 1993, ApJ, 416, L45
Bertola, F., & Tullio, G. di 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 423
Blumenthal, G.R., Faber, S.M., Primack, J.R., & Rees, M.J. 1984, Nature, 311, 517
Böhringer, H. 1995, in Reviews in Modern Astronomy, 8, ed. G. Klare, Springer
Broadhurst, T., & Jaffe, A.H. 1999, ApJ (submitted) [astro-ph/9904348]
Burbidge, G. 1975, ApJ, 196, L7
Carlberg, R.G., Yee, H.K.C., & Ellingson, E. 1997, ApJ, 478, 462
Cen, R., & Ostriker, J.P. 1992, ApJ, 399, L113
Cen, R., & Ostriker, J.P. 1999, ApJ (in preparation)
Chernin, A.D. 1981, Astr. Zh., 58, 25
Chernin, A.D., Einasto, J., & Saar, E. 1976, Ap&SS, 39, 53
David, L.P., Jones, C., & Forman, W. 1996, ApJ, 473, 692
Dubinski, J. 1998, ApJ, 502, 141
Eelsalu, H. 1959, Tartu Astr. Obs. Publ., 33, 153
Einasto, J. 1969, Astrofizika, 5, 137
Einasto, J. 1974, in Stars and the Milky Way System, Vol. 2, ed. L.N. Mavridis, Springer, Berlin-Heidelberg-New York, p. 291
Einasto, J., Einasto, M., Tago, E., Müller, V., Knebe, A., Cen, R., Starobinsky, A.A., & Atrio-Barandela, F. 1999, ApJ, 519, 456
Einasto, J., & Haud, U. 1989, A&A, 223, 89
Einasto, J., Jõeveer, M., & Kaasik, A. 1976, Tartu Astr. Obs. Teated, 54, 3
Einasto, J., Jõeveer, M., Kaasik, A., & Vennik, J. 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 431
Einasto, J., Jõeveer, M., & Saar, E. 1980, MNRAS, 193, 353
Einasto, J., Kaasik, A., & Saar, E. 1974, Nature, 250, 309
Einasto, J., & Lynden-Bell, D. 1982, MNRAS, 199, 67
Einasto, J., Saar, E., Einasto, M., Freudling, W., & Gramann, M. 1994, ApJ, 429, 465
Einasto, J., Saar, E., Kaasik, A., & Chernin, A.D. 1974, Nature, 252, 111
Eke, V., Cole, S., Frenk, C.S., & Henry, J.P. 1998, MNRAS, 298, 1145
Fesenko, B.I. 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 486
Freeman, K.C. 1970, ApJ, 160, 811
Gilmore, G. 1990, in Baryonic Dark Matter, eds. D. Lynden-Bell & G. Gilmore, Kluwer, Dordrecht, p. 137
Haud, U., & Einasto, J. 1989, A&A, 223, 95
Jaaniste, J., & Saar, E. 1975, Tartu Astr. Obs. Publ., 43, 216
Jõeveer, M. 1972, Tartu Astr. Obs. Publ., 37, 3
Jõeveer, M. 1974, Tartu Astr. Obs. Teated, 46, 35
Jõeveer, M., & Einasto, J. 1978, in The Large Scale Structure of the Universe, eds. M.S. Longair & J. Einasto, Reidel, p. 409
Jõeveer, M., Einasto, J., & Tago, E. 1978, MNRAS, 185, 357
Kahn, F.D., & Woltjer, L. 1959, ApJ, 130, 705
Karachentsev, I.D. 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 439
Katz, N., Hernquist, L., & Weinberg, D.H. 1992, ApJ, 399, L109
Katz, N., Weinberg, D.H., & Hernquist, L. 1996, ApJS, 105, 19
Komberg, B.V., & Novikov, I.D. 1975, Pisma Astron. Zh., 1, 3
Kormendy, J., & Knapp, G.R. 1987, Dark Matter in the Universe, IAU Symp. No. 117, Reidel, Dordrecht
Kuhn, T.S. 1970, The Structure of Scientific Revolutions, Univ. of Chicago Press, Chicago
Kull, A., & Böhringer, H. 1999, A&A, 341, 23
Kuzmin, G.G. 1952, Tartu Astr. Obs. Publ., 32, 5
Materne, J., & Tammann, G.A. 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 455
Mellier, Y., Fort, B., & Kneib, J.-P. 1993, ApJ, 407, 33
Mohr, J.J., Mathiesen, B., & Evrard, A.E. 1999, ApJ, 517, 627
Mulchaey, J.S., & Zabludoff, A.I. 1999, ApJ, 514, 133
Oleak, H. 1976, in Stars and Galaxies from Observational Points of View, ed. E.K. Kharadze, Mecniereba, Tbilisi, p. 451
Oort, J.H. 1932, Bull. Astr. Inst. Netherlands, 6, 249
Oort, J.H. 1960, Bull. Astr. Inst. Netherlands, 15, 45
Ostriker, J.P., & Peebles, P.J.E. 1973, ApJ, 186, 467
Ostriker, J.P., Peebles, P.J.E., & Yahil, A. 1974, ApJ, 193, L1
Perlmutter, S., et al. 1998, Nature, 391, 51
Perlmutter, S., et al. 1999, ApJ, 517, 565
Persic, M., Salucci, P., & Stel, F. 1996, MNRAS, 281, 27
Pildis, R.A., Bregman, J.N., & Evrard, A.E. 1995, ApJ, 443, 514
Ponman, T.J., & Bertram, D. 1993, Nature, 363, 51
Press, W.H., & Schechter, P.L. 1974, ApJ, 187, 425
Primack, J.R. 1984, Dark Matter, Galaxies, and Large Scale Structure of the Universe, SLAC Publ. 3387
Rees, M. 1977, in Evolution of Galaxies and Stellar Populations, eds. B.M. Tinsley & R.B. Larson, New Haven, Yale Univ. Obs., p. 339
Riess, A.G., et al. 1998, AJ, 116, 1009
Roberts, M.S., & Whitehurst, R.N. 1975, ApJ, 201, 327
Rubin, V.C. 1987, in Dark Matter in the Universe, eds. J. Kormendy & G.R. Knapp, Reidel, Dordrecht, p. 51
Rubin, V.C., Ford, W.K., & Thonnard, N. 1978, ApJ, 225, L107
Rubin, V.C., Ford, W.K., & Thonnard, N. 1980, ApJ, 238, 471
Schindler, S., Guzzo, L., Ebeling, H., Böhringer, H., Chincarini, G., Collins, C.A., De Grandi, S., Neumann, D.M., Briel, U.G., Shaver, P., & Vettolani, G. 1995, A&A, 299, L9
Schramm, D.N., & Turner, M.S. 1998, Rev. Mod. Phys., 70, 303
Shaya, E.J., Peebles, P.J.E., Tully, R.B., & Phelps, S.D. 1999, this volume
Sizikov, V.S. 1969, Astrofizika, 5, 317
Tenjes, P., Haud, U., & Einasto, J. 1994, A&A, 286, 753
Tremaine, S. 1987, in Dark Matter in the Universe, eds. J. Kormendy & G.R. Knapp, Reidel, Dordrecht, p. 547
Turner, M.S. 1999a, Physica Scripta (in press) [astro-ph/9901109]
Turner, M.S. 1999b, [astro-ph/9904049]
van den Bergh, S. 1999, PASP, 111, 657
Watt, M.P., Ponman, T.J., Bertram, D., Eyles, C.J., Skinner, G.K., & Willmore, A.P. 1992, MNRAS, 258, 738
White, S.D.M., Davis, M., & Frenk, C.S. 1984, MNRAS, 209, 27P
Zeldovich, Ya.B. 1970, A&A, 5, 84
Zeldovich, Ya.B., Einasto, J., & Shandarin, S.F. 1982, Nature, 300, 407
Zwicky, F. 1933, Helv. Phys. Acta, 6, 110
# Surrogate time series

## 1 Introduction

A nonlinear approach to analysing time series data can be motivated by two distinct reasons. One is intrinsic to the signal itself, while the other is due to additional knowledge we may have about the nature of the observed phenomenon. As for the first motivation, it might be that the arsenal of linear methods has been exploited thoroughly but all the efforts left certain structures in the time series unaccounted for. As for the second, a system may be known to include nonlinear components and therefore a linear description seems unsatisfactory in the first place. Such an argument is often heard, for example, in brain research — nobody expects the brain to be a linear device. In fact, there is ample evidence for nonlinearity, in particular in small assemblies of neurons. Nevertheless, the latter reasoning is rather dangerous: the fact that a system contains nonlinear components does not prove that this nonlinearity is also reflected in a specific signal we measure from that system. In particular, we do not know if it is of any practical use to go beyond the linear approximation when analysing the signal. After all, we do not want our data analysis to reflect our prejudice about the underlying system but to represent a fair account of the structures that are present in the data. Consequently, the application of nonlinear time series methods has to be justified by establishing nonlinearity in the time series.

Suppose we had measured the signal shown in Fig. 1 in some biological setting. Visual inspection immediately reveals nontrivial structure in the serial correlations. The data fail a test for Gaussianity, thus ruling out a Gaussian linear stochastic process as their source. Depending on the assumptions we are willing to make about the underlying process, we might suggest different origins for the observed strong "spikyness" of the dynamics. Superficially, low-dimensional chaos seems unlikely due to the strong fluctuations, but maybe high-dimensional dynamics? A large collection of neurons could intermittently synchronise to give rise to the burst episodes; in fact, certain artificial neural network models show qualitatively similar dynamics. The least interesting explanation, however, would be that all the spikyness comes from a distortion by the measurement procedure and all the serial correlations are due to linear stochastic dynamics. Occam's razor tells us that we should be able to rule out such a simple explanation before we venture to construct more complicated models.

Surrogate data testing attempts to find the least interesting explanation that cannot be ruled out based on the data. In the above example, the data shown in Fig. 1, this would be the hypothesis that the data have been generated by a stationary Gaussian linear stochastic process (equivalently, an autoregressive moving average or ARMA process) that is observed through an invertible, static, but possibly nonlinear observation function:

$$s_n=s(x_n),\{x_n\}:\text{ARMA}(M,N).$$ (1)

Neither the orders $`M,N`$, the ARMA coefficients, nor the function $`s()`$ are assumed to be known. Without explicitly modeling these parameters, we still know that such a process would show characteristic linear correlations (reflecting the ARMA structure) and a characteristic single-time probability distribution (reflecting the action of $`s()`$ on the original Gaussian distribution).
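To make the null hypothesis (1) concrete, here is a minimal sketch (ours, not from the paper) of one process from this class — a Gaussian AR(1) sequence observed through a static, invertible cubic distortion. As it happens, these are exactly the coefficients of Eq. (2) below, the rule that generated the data of Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()  # linear (ARMA) dynamics
s = x**3                                           # static measurement distortion
s = (s - s.mean()) / s.std()                       # zero mean, unit variance
```

A surrogate test asks whether such data can be distinguished from maximally random sequences sharing their linear correlations and single-time distribution.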
Figure 2 shows a surrogate time series that is designed to have exactly these properties in common with the data but to be as random as possible otherwise. By a proper statistical test we can now look for additional structure that is present in the data but not in the surrogates. In the case of the time series in Fig. 1, there is no additional structure, since it has been generated by the rule

$$s_n=\alpha x_n^3,\quad x_n=0.9x_{n-1}+\eta _n$$ (2)

where $`\{\eta _n\}`$ are Gaussian independent increments and $`\alpha `$ is chosen so that the data have unit variance. (In order to simplify the notation in mathematical derivations, we will assume throughout this paper that the mean of each time series has been subtracted and that it has been rescaled to unit variance. Nevertheless, we will often transform back to the original experimental units when displaying results graphically.) This means that the strong nonlinearity that generates the bursts is due to the distorted measurement, which enhances ordinary fluctuations generated by linear stochastic dynamics.

In order to systematically exclude simple explanations for time series observations, this paper will discuss formal statistical tests for nonlinearity. We will formulate suitable null hypotheses for the underlying process or for the observed structures themselves. In the former case, the null hypotheses will be extensions of the statement that the data were generated by a Gaussian linear stochastic process. The latter situation may occur when it is difficult to properly define a class of possibly underlying processes but we want to check if a particular set of observables gives a complete account of the statistics of the data. We will attempt to reject a null hypothesis by comparing the value of a nonlinear parameter taken on by the data with its probability distribution. Since only exceptional cases allow for the exact or asymptotic derivation of this distribution unless strong additional assumptions are made, we have to estimate it by a Monte Carlo resampling technique. This procedure is known in the nonlinear time series literature as the method of surrogate data. Most of the body of this paper will be concerned with the problem of generating an appropriate Monte Carlo sample for a given null hypothesis.

We will also dwell on the proper interpretation of the outcome of such a test. Formally speaking, this is totally straightforward: a rejection at a given significance level means that if the null hypothesis is true, there is a certain small probability to still see the structure we detected. Non-rejection means even less: either the null hypothesis is true, or the discriminating statistic we are using fails to have power against the alternative realised in the data. However, one is often tempted to go beyond this simple reasoning and speculate either on the nature of the nonlinearity or non-stationarity that led to the rejection, or on the reason for the failure to reject. Since the actual quantification of nonlinearity turns out to be the easiest — or in any case the least dangerous — part of the problem, we will discuss it first. In principle, any nonlinear parameter can be employed for this purpose. Different parameters may, however, differ dramatically in their ability to detect different kinds of structures.
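Before turning to the choice of test statistics, it may help to see what the Monte Carlo resampling step mentioned above looks like in its simplest form. The sketch below (ours) generates a phase-randomised ("FT") surrogate: it preserves the periodogram, and hence the linear correlations, of the data; matching the single-time distribution demanded by the null hypothesis (1) as well requires an additional amplitude-adjustment step not shown here.

```python
import numpy as np

def ft_surrogate(s, rng=None):
    rng = rng or np.random.default_rng()
    a = np.fft.rfft(s)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=a.size)
    phases[0] = 0.0                 # keep the mean (zero-frequency bin) real
    if s.size % 2 == 0:
        phases[-1] = 0.0            # keep the Nyquist bin real
    return np.fft.irfft(np.abs(a) * np.exp(1j * phases), n=s.size)
```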
Unfortunately, selecting the most suitable parameter has to be done without making use of the data since that would render the test incorrect: if the measure of nonlinearity has been optimised formally or informally with respect to the data, a fair comparison with surrogates is no longer possible. Only information that is shared by data and surrogates, that is, for example, linear correlations, may be considered for guidance. If multiple data sets are available, one could use some sequences for the selection of the nonlinearity parameter and others for the actual test. Otherwise, it is advantageous to use one of the parameter free methods that can be set up with very little detailed knowledge of the data. Since we want to advocate the routine use of a nonlinearity test whenever nonlinear methods are planned to be applied, we feel that it is important to make a practical implementation of such a test easily accessible. Therefore, one branch of the TISEAN free software package is devoted to surrogate data testing. Appendix A will discuss the implementational aspects necessary to understand what the programs in the package do.

## 2 Detecting weak nonlinearity

Many quantities have been discussed in the literature that can be used to characterise nonlinear time series. For the purpose of nonlinearity testing we need quantities that are particularly powerful in discriminating between linear dynamics and weakly nonlinear signatures — strong nonlinearity is usually more easily detectable. An important objective criterion that can be used to guide the preferred choice is the discrimination power of the resulting test. It is defined as the probability that the null hypothesis is rejected when it is indeed false. It will obviously depend on how, and how strongly, the data actually deviates from the null hypothesis.

### 2.1 Higher order statistics

Traditional measures of nonlinearity are derived from generalisations of the two-point auto-covariance function or the power spectrum. The use of higher order cumulants as well as bi- and multi-spectra is discussed for example in Ref. One particularly useful third order quantity<sup>2</sup> is

$$\varphi ^{\mathrm{rev}}(\tau )=\frac{1}{N-\tau }\sum _{n=\tau +1}^{N}(s_n-s_{n-\tau })^3,$$ (3)

since it measures the asymmetry of a series under time reversal. (Remember that the statistics of linear stochastic processes is always symmetric under time reversal. This can be most easily seen when the statistical properties are given by the power spectrum, which contains no information about the direction of time.) Time reversibility as a criterion for discriminating time series is discussed in detail in Ref., where, however, a different statistic is used to quantify it. The concept itself is quite folklore and has been used for example in Refs. Time irreversibility can be a strong signature of nonlinearity. Let us point out, however, that it does not imply a dynamical origin of the nonlinearity. We will later (Sec. 7.1) give an example of time asymmetry generated by a measurement function involving a nonlinear time average. (<sup>2</sup>We have omitted the commonly used normalisation to second moments since, throughout this paper, time series and their surrogates will have the same second order properties and identical pre-factors do not enter the tests.)
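In code, Eq.(3) is essentially a one-liner; the following sketch (the function name is ours) averages the cubed increments at lag $`\tau `$ over the $`N-\tau `$ available terms.

```python
import numpy as np

def phi_rev(s, tau):
    """Time reversal asymmetry statistic of Eq.(3)."""
    s = np.asarray(s, dtype=float)
    d = s[tau:] - s[:-tau]       # increments s_n - s_{n-tau}
    return np.mean(d**3)         # third moment; changes sign under time reversal
```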
### 2.2 Phase space observables

When a nonlinearity test is performed with the question in mind whether nonlinear deterministic modeling of the signal may be useful, it seems most appropriate to use a test statistic that is related to a nonlinear deterministic approach. We have to keep in mind, however, that a positive test result only indicates nonlinearity, not necessarily determinism. Since nonlinearity tests are usually performed on data sets which do not show unambiguous signatures of low-dimensional determinism (like clear scaling over several orders of magnitude), one cannot simply estimate one of the quantitative indicators of chaos, like the fractal dimension or the Lyapunov exponent. The formal answer would almost always be that both are probably infinite. Still, some useful test statistics are at least inspired by these quantities. Usually, some effective value at a finite length scale has to be computed without establishing a scaling region or attempting to approximate the proper limits. In order to define an observable in $`m`$–dimensional phase space, we first have to reconstruct that space from a scalar time series, for example by the method of delays:

$$𝐬_n=(s_{n-(m-1)\tau },s_{n-(m-2)\tau },\ldots ,s_n).$$ (4)

One of the more robust choices of phase space observable is a nonlinear prediction error with respect to a locally constant predictor $`F`$, which can be defined by

$$\gamma (m,\tau ,ϵ)=\left(\frac{1}{N}\sum _n[s_{n+1}-F(𝐬_n)]^2\right)^{1/2}.$$ (5)

The prediction over one time step is performed by averaging over the future values of all neighbouring delay vectors closer than $`ϵ`$ in $`m`$ dimensions.

We have to consider the limiting case that the deterministic signature to be detected is weak. In that case, the major limiting factor for the performance of a statistical indicator is its variance, since possible differences between two samples may be hidden among the statistical fluctuations. In Ref., a number of popular measures of nonlinearity are compared quantitatively. The results can be summarised by stating that in the presence of time-reversal asymmetry, the particular quantity Eq.(3) that derives from the three-point autocorrelation function gives very reliable results. However, many nonlinear evolution equations produce little or no time-reversal asymmetry in the statistical properties of the signal. In these cases, simple measures like the prediction error of a locally constant phase space predictor, Eq.(5), performed best. It was found to be advantageous to choose embedding and other parameters so as to obtain a quantity that has a small spread of values for different realisations of the same process, even if at these parameters no valid embedding could be expected. Of course, prediction errors are not the only class of nonlinearity measures that has been optimised for robustness. Notable other examples are coarse-grained redundancies, and, at an even higher level of coarse-graining, symbolic methods. The very popular method of false nearest neighbours can be easily modified to yield a scalar quantity suitable for nonlinearity testing. The same is true for the concept of unstable periodic orbits (UPOs).
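The following sketch implements a locally constant predictor in the spirit of Eqs.(4) and (5); the parameter names and the decision to simply skip delay vectors without neighbours are our own choices.

```python
import numpy as np

def prediction_error(s, m, tau, eps):
    """Locally constant one-step prediction error, cf. Eqs.(4) and (5)."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    # Delay vectors (s_{n-(m-1)tau}, ..., s_n), Eq.(4); s_{n+1} must exist
    idx = np.arange((m - 1) * tau, N - 1)
    vecs = np.array([s[i - (m - 1) * tau : i + 1 : tau] for i in idx])
    errors = []
    for k, v in enumerate(vecs):
        dist = np.max(np.abs(vecs - v), axis=1)   # maximum norm distances
        neigh = dist < eps
        neigh[k] = False                          # exclude the vector itself
        if neigh.any():
            pred = s[idx[neigh] + 1].mean()       # locally constant prediction
            errors.append((s[idx[k] + 1] - pred) ** 2)
    return np.sqrt(np.mean(errors)) if errors else np.nan
```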
## 3 Surrogate data testing

All of the measures of nonlinearity mentioned above share a common property: their probability distribution on finite data sets is not known analytically – except maybe when strong additional assumptions about the data are made. Some authors have tried to give error bars for measures like predictabilities (e.g. Barahona and Poon) or averages of pointwise dimensions (e.g. Skinner et al.) based on the observation that these quantities are averages (mean values or medians) of many individual terms, in which case the variance (or quartile points) of the individual values yields an error estimate. This reasoning is however only valid if the individual terms are independent, which is usually not the case for time series data. In fact, it is found empirically that nonlinearity measures often do not even follow a Gaussian distribution. Also, the standard error given by Roulston for the mutual information is fully correct only for uniformly distributed data. His derivation assumes a smooth rescaling to uniformity. In practice, however, we have to rescale either to exact uniformity or by rank-ordering uniform variates. Both transformations are in general non-smooth and introduce a bias in the joint probabilities. In view of the serious difficulties encountered when deriving confidence limits or probability distributions of nonlinear statistics with analytical methods, it is highly preferable to use a Monte Carlo resampling technique for this purpose.

### 3.1 Typical vs. constrained realisations

Traditional bootstrap methods use explicit model equations that have to be extracted from the data and are then run to produce Monte Carlo samples. This typical realisations approach can be very powerful for the computation of confidence intervals, provided the model equations can be extracted successfully. The latter requirement is very delicate. Ambiguities in selecting the proper model class and order, as well as the parameter estimation problem, have to be addressed. Whenever the null hypothesis involves an unknown function (rather than just a few parameters) these problems become profound. A recent example of a typical realisations approach to creating surrogates in the dynamical systems context is given by Ref. There, a Markov model is fitted to a coarse-grained dynamics obtained by binning the two dimensional delay vector distribution of a time series. Then, essentially, the transfer matrix is iterated to yield surrogate sequences. We will offer some discussion of that work later in Sec. 7.

As discussed by Theiler and Prichard, the alternative approach of constrained realisations is more suitable for the purpose of hypothesis testing we are interested in here. It avoids the fitting of model equations by directly imposing the desired structures onto the randomised time series. However, the choice of possible null hypotheses is limited by the difficulty of imposing arbitrary structures on otherwise random sequences. In the following, we will discuss a number of null hypotheses and algorithms to provide the adequately constrained realisations. The most general method to generate constrained randomisations of time series is described in Sec. 5.

Consider as a toy example the null hypothesis that the data consists of independent draws from a fixed probability distribution. Surrogate time series can be simply obtained by randomly shuffling the measured data. If we find significantly different serial correlations in the data and the shuffles, we can reject the hypothesis of independence. Constrained realisations are obtained by creating permutations without replacement. The surrogates are constrained to take on exactly the same values as the data, just in random temporal order. We could also have used the data to infer the probability distribution and drawn new time series from it.
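Both variants are easily written down; the function names below are ours, and the typical-realisation variant is shown only for contrast with the constrained one.

```python
import numpy as np

def constrained_shuffle(s, rng):
    """Constrained realisation for the independence hypothesis:
    a permutation of the data, i.e. draws without replacement."""
    return rng.permutation(s)

def typical_draws(s, rng):
    """Typical realisation: new draws, with replacement, from the
    empirical distribution inferred from the data."""
    return rng.choice(s, size=len(s), replace=True)
```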
Such draws with replacement would then be what we called typical realisations. Obviously, independence is not an interesting null hypothesis for most time series problems. It becomes relevant when the residual errors of a time series model are evaluated. For example, in the BDS test for nonlinearity, an ARMA model is fitted to the data. If the data are linear, then the residuals are expected to be independent. It has been pointed out, however, that the resulting test is not particularly powerful for chaotic data.

### 3.2 The null hypothesis: model class vs. properties

From the bootstrap literature we are used to defining null hypotheses for time series in terms of a class of processes that is assumed to contain the specific process that generated the data. For most of the literature on surrogate data, this situation hasn't changed. One very common null hypothesis goes back to Theiler and coworkers and states that the data have been generated by a Gaussian linear stochastic process with constant coefficients. Constrained realisations are created by requiring that the surrogate time series have the same Fourier amplitudes as the data. We can clearly see in this example that what is needed for the constrained realisations approach is a set of observable properties that is known to fully specify the process. The process itself is not reconstructed. But this example is also exceptional. We know that the class of processes defined by the null hypothesis is fully parametrised by the set of ARMA$`(M,N)`$ models (autoregressive moving average, see Eq.(6) below). If we allow for arbitrary orders $`M`$ and $`N`$, there is a one-to-one correspondence between the ARMA coefficients and the power spectrum. The power spectrum is here estimated by the Fourier amplitudes. The Wiener–Khinchin theorem relates it to the autocorrelation function by a simple Fourier transformation. Consequently, specifying the class of processes and specifying the set of constraints are two ways to achieve the same goal. The only generalisation of this favourable situation that has been found so far is the null hypothesis that the ARMA output may have been observed through a static, invertible measurement function. In that case, constraining the single time probability distribution and the Fourier amplitudes is sufficient. If we want to go beyond this hypothesis, all we can do in general is to specify the set of constraints we will impose. We cannot usually say which class of processes this choice corresponds to. We will have to be content with statements that a given set of statistical parameters exhaustively describes the statistical properties of a signal. Hypotheses in terms of a model class are usually more informative, but specifying sets of observables gives us much more flexibility.

### 3.3 Test design

Before we go into detail about the generation of surrogate samples, let us outline how an actual test can be carried out. Many examples are known of nonlinearity measures that aren't even approximately normally distributed. It has therefore been advocated since the early days to use robust statistics rather than parametric methods for the actual statistical test. In other words, we discourage the common practice of representing the distribution of the nonlinearity measure by an error bar and deriving the significance from the number of “sigmas” by which the data lies outside these bounds. Such a reasoning implicitly assumes a Gaussian distribution. Instead, we follow Theiler et al. by using a rank–order test. First, we select a residual probability $`\alpha `$ of a false rejection, corresponding to a level of significance $`(1-\alpha )\times 100\%`$. Then, for a one–sided test (e.g. looking for small prediction errors only), we generate $`M=1/\alpha -1`$ surrogate sequences. Thus, including the data itself, we have $`1/\alpha `$ sets. Therefore, the probability that the data by coincidence has the smallest, say, prediction error is exactly $`\alpha `$, as desired. For a two–sided test (e.g. for time asymmetry, which can go both ways), we would generate $`M=2/\alpha -1`$ surrogates, resulting in a probability $`\alpha `$ that the data gives either the smallest or the largest value. For a minimal significance requirement of 95%, we thus need at least 19 or 39 surrogate time series for one– and two–sided tests, respectively. The conditions for rank based tests with more samples can be easily worked out. Using more surrogates can increase the discrimination power.
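Put together, a rank–order test takes only a few lines; the interface below (callables for surrogate generation and for the discriminating statistic) is our own scaffolding for illustration, not a fixed API.

```python
import numpy as np

def rank_order_test(data, make_surrogate, statistic, alpha=0.05,
                    two_sided=False, rng=None):
    """Rank-order surrogate test as described above.

    make_surrogate(data, rng) returns one surrogate; statistic(series)
    returns the discriminating quantity.
    """
    rng = rng or np.random.default_rng()
    # M = 1/alpha - 1 (one-sided) or 2/alpha - 1 (two-sided) surrogates
    M = int(round((2 if two_sided else 1) / alpha)) - 1
    t_data = statistic(data)
    t_surr = np.array([statistic(make_surrogate(data, rng)) for _ in range(M)])
    if two_sided:
        reject = t_data < t_surr.min() or t_data > t_surr.max()
    else:
        reject = t_data < t_surr.min()   # e.g. looking for small prediction errors
    return reject, t_data, t_surr
```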
## 4 Fourier based surrogates

In this section, we will discuss a hierarchy of null hypotheses and the issues that arise when creating the corresponding surrogate data. The simpler cases are discussed first in order to illustrate the reasoning. If we have found serial correlations in a time series, that is, rejected the null hypothesis of independence, we may ask of what nature these correlations are. The simplest possibility is to explain the observed structures by linear two-point autocorrelations. A corresponding null hypothesis is that the data have been generated by some linear stochastic process with Gaussian increments. The most general univariate linear process is given by

$$s_n=\sum _{i=1}^{M}a_is_{n-i}+\sum _{i=0}^{N}b_i\eta _{n-i},$$ (6)

where $`\{\eta _n\}`$ are Gaussian uncorrelated random increments. The statistical test is complicated by the fact that we do not want to test against one particular linear process only (one specific choice of the $`a_i`$ and $`b_i`$), but against a whole class of processes. This is called a composite null hypothesis. The unknown values $`a_i`$ and $`b_i`$ are sometimes referred to as nuisance parameters. There are basically three directions we can take in this situation. First, we could try to make the discriminating statistic independent of the nuisance parameters. This approach has not been demonstrated to be viable for any but some very simple statistics. Second, we could determine which linear model is most likely realised in the data by a fit for the coefficients $`a_i`$ and $`b_i`$, and then test against the hypothesis that the data has been generated by this particular model. Surrogates are simply created by running the fitted model. This typical realisations approach is the common choice in the bootstrap literature, see e.g. the classical book by Efron. The main drawback is that we cannot recover the true underlying process by any fit procedure. Apart from problems associated with the choice of the correct model orders $`M`$ and $`N`$, the data is by construction a very likely realisation of the fitted process. Other realisations will fluctuate around the data, which induces a bias against the rejection of the null hypothesis. This issue is discussed thoroughly in Ref., where also a calibration scheme is proposed. The most attractive approach to testing for a composite null hypothesis seems to be to create constrained realisations. Here it is useful to think of the measurable properties of the time series rather than its underlying model equations.
The null hypothesis of an underlying Gaussian linear stochastic process can also be formulated by stating that all structure to be found in a time series is exhausted by computing first and second order quantities: the mean, the variance and the auto-covariance function. This means that a randomised sample can be obtained by creating sequences with the same second order properties as the measured data, but which are otherwise random. When the linear properties are specified by the squared amplitudes of the (discrete) Fourier transform

$$|S_k|^2=\left|\frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}s_ne^{i2\pi kn/N}\right|^2,$$ (7)

that is, the periodogram estimator of the power spectrum, surrogate time series $`\{\overline{s}_n\}`$ are readily created by multiplying the Fourier transform of the data by random phases and then transforming back to the time domain:

$$\overline{s}_n=\frac{1}{\sqrt{N}}\sum _{k=0}^{N-1}e^{i\alpha _k}|S_k|e^{-i2\pi kn/N},$$ (8)

where $`0\le \alpha _k<2\pi `$ are independent uniform random numbers.
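A convenient way to implement Eq.(8) is via the real-valued FFT, which keeps the Hermitian symmetry of the spectrum so that the back-transform is real; apart from this choice, the sketch below follows Eq.(8) directly.

```python
import numpy as np

def ft_surrogate(s, rng):
    """Phase randomised surrogate in the spirit of Eq.(8)."""
    s = np.asarray(s, dtype=float)
    S = np.fft.rfft(s)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(S))
    phases[0] = 0.0              # keep the mean (k = 0 component) real
    if len(s) % 2 == 0:
        phases[-1] = 0.0         # Nyquist component must stay real
    return np.fft.irfft(np.abs(S) * np.exp(1j * phases), n=len(s))
```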
### 4.1 Rescaled Gaussian linear process

The two null hypotheses discussed so far (independent random numbers and Gaussian linear processes) are not what we want to test against in most realistic situations. In particular, the most obvious deviation from the Gaussian linear process is usually that the data do not follow a Gaussian single time probability distribution. This is quite obvious for data obtained by measuring intervals between events, e.g. heart beats, since intervals are strictly positive. There is however a simple generalisation of the null hypothesis that explains deviations from the normal distribution by the action of an invertible, static measurement function:

$$s_n=s(x_n),\quad x_n=\sum _{i=1}^{M}a_ix_{n-i}+\sum _{i=0}^{N}b_i\eta _{n-i}.$$ (9)

We want to regard a time series from such a process as essentially linear since the only nonlinearity is contained in the — in principle invertible — measurement function $`s()`$. Let us mention right away that the restriction that $`s()`$ must be invertible is quite severe and often undesired. The reason why we have to impose it is that otherwise we couldn't give a complete specification of the process in terms of observables and constraints. The problem is further illustrated in Sec. 7.1 below. The most common method to create surrogate data sets for this null hypothesis essentially attempts to invert $`s()`$ by rescaling the time series $`\{s_n\}`$ to conform with a Gaussian distribution. The rescaled version is then phase randomised (conserving Gaussianity on average) and the result is rescaled to the empirical distribution of $`\{s_n\}`$. The rescaling is done by simple rank ordering. Suppose we want to rescale the sequence $`\{s_n\}`$ so that the rescaled sequence $`\{r_n\}`$ takes on the same values as some reference sequence $`\{g_n\}`$ (e.g. draws from a Gaussian distribution). Let $`\{g_n\}`$ be sorted in ascending order and $`\text{rank}(s_n)`$ denote the ascending rank of $`s_n`$, e.g. $`\text{rank}(s_n)=3`$ if $`s_n`$ is the 3rd smallest element of $`\{s_n\}`$. Then the rescaled sequence is given by

$$r_n=g_{\mathrm{rank}(s_n)},\quad n=1,\ldots ,N.$$ (10)

The amplitude adjusted Fourier transform (AAFT) method was originally proposed by Theiler et al. It results in a correct test when $`N`$ is large, the correlation in the data is not too strong and $`s()`$ is close to the identity. Otherwise, there is a certain bias towards a too flat spectrum, to be discussed in the following section.

### 4.2 Flatness bias of AAFT surrogates

It is argued in Ref. that for short and strongly correlated sequences, the AAFT algorithm can yield an incorrect test since it introduces a bias towards a slightly flatter spectrum. In Fig. 3 we see power spectral estimates of a clinical data set and of 19 AAFT surrogates. The data is taken from data set B of the Santa Fe Institute time series contest. It consists of 4096 samples of the breath rate of a patient with sleep apnoea. The sampling interval is 0.5 seconds. The discrepancy of the spectra is significant. A bias towards a white spectrum is noted: power is taken away from the main peak to enhance the low and high frequencies. Heuristically, the flatness bias can be understood as follows. Amplitude adjustment attempts to invert the unknown measurement function $`s()`$ empirically. The estimate $`\widehat{s}^{-1}()`$ of the inverse obtained by the rescaling of a finite sample to values drawn from a Gaussian distribution is expected to be consistent, but it is not exact for finite $`N`$. The sampling fluctuations of $`\delta _n=\widehat{s}^{-1}(s_n)-s^{-1}(s_n)`$ will be essentially independent of $`n`$ and thus spectrally white. Consequently, Gaussian scaling amounts to adding a white component to the spectrum, which therefore tends to become flatter under the procedure. Since such a bias can lead to spurious results, surrogates have to be refined before a test can be performed.

### 4.3 Iteratively refined surrogates

In Ref., we propose a method which iteratively corrects deviations in spectrum and distribution from the goal set by the measured data. In an alternating fashion, the surrogate is filtered towards the correct Fourier amplitudes and rank-ordered to the correct distribution. Let $`\{|S_k|^2\}`$ be the Fourier amplitudes, Eq.(7), of the data and $`\{c_k\}`$ a copy of the data sorted in ascending order. At each iteration stage $`(i)`$, we have a sequence $`\{\overline{r}_n^{(i)}\}`$ that has the correct distribution (coincides with $`\{c_k\}`$ when sorted), and a sequence $`\{\overline{s}_n^{(i)}\}`$ that has the correct Fourier amplitudes given by $`\{|S_k|^2\}`$. One can start with $`\{\overline{r}_n^{(0)}\}`$ being either an AAFT surrogate, or simply a random shuffle of the data. The step $`\overline{r}_n^{(i)}\to \overline{s}_n^{(i)}`$ is a very crude “filter” in the Fourier domain: the Fourier amplitudes are simply replaced by the desired ones. First, take the (discrete) Fourier transform of $`\{\overline{r}_n^{(i)}\}`$:

$$\overline{R}_k^{(i)}=\frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\overline{r}_n^{(i)}e^{i2\pi kn/N}.$$ (11)

Then transform back, replacing the actual amplitudes by the desired ones, but keeping the phases $`e^{i\psi _k^{(i)}}=\overline{R}_k^{(i)}/|\overline{R}_k^{(i)}|`$:

$$\overline{s}_n^{(i)}=\frac{1}{\sqrt{N}}\sum _{k=0}^{N-1}e^{i\psi _k^{(i)}}|S_k|e^{-i2\pi kn/N}.$$ (12)

The step $`\overline{s}_n^{(i)}\to \overline{r}_n^{(i+1)}`$ proceeds by rank ordering:

$$\overline{r}_n^{(i+1)}=c_{\text{rank}(\overline{s}_n^{(i)})}.$$ (13)

It can be heuristically understood that the iteration scheme is attracted to a fixed point $`\overline{r}_n^{(i+1)}=\overline{r}_n^{(i)}`$ for large $`(i)`$. Since the minimal possible change equals the smallest nonzero difference $`c_n-c_{n-1}`$ and is therefore finite for finite $`N`$, the fixed point is reached after a finite number of iterations.
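A compact sketch of the iteration, Eqs.(11)-(13), follows; the initial shuffle, the convergence test by comparing successive sequences, and the iteration cap are implementation choices of ours.

```python
import numpy as np

def iaaft_surrogate(s, rng, max_iter=1000):
    """Iteratively refined amplitude adjusted surrogate, Eqs.(11)-(13)."""
    s = np.asarray(s, dtype=float)
    amplitudes = np.abs(np.fft.rfft(s))   # |S_k| of the data, Eq.(7)
    c = np.sort(s)                        # data values in ascending order
    r = rng.permutation(s)                # start from a random shuffle
    prev = None
    for _ in range(max_iter):
        # Filter step, Eqs.(11,12): impose |S_k|, keep the current phases
        R = np.fft.rfft(r)
        mag = np.abs(R)
        phases = np.where(mag > 0, R / np.where(mag > 0, mag, 1), 1.0)
        sbar = np.fft.irfft(amplitudes * phases, n=len(s))
        # Rank ordering step, Eq.(13): impose the distribution of the data
        r = c[np.argsort(np.argsort(sbar))]
        if prev is not None and np.array_equal(r, prev):
            break                         # fixed point reached
        prev = r.copy()
    return r
```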
The remaining discrepancy between $`\overline{r}_n^{(\infty )}`$ and $`\overline{s}_n^{(\infty )}`$ can be taken as a measure of the accuracy of the method. Whether the residual bias in $`\overline{r}_n^{(\infty )}`$ or $`\overline{s}_n^{(\infty )}`$ is more tolerable depends on the data and the nonlinearity measure to be used. For coarsely digitised data,<sup>3</sup> deviations from the discrete distribution can lead to spurious results, whence $`\overline{r}_n^{(\infty )}`$ is the safer choice. (<sup>3</sup>Formally, digitisation is a non-invertible, nonlinear measurement and thus not included in the null hypothesis. Constraining the surrogates to take exactly the same (discrete) values as the data seems to be reasonably safe, though. Since for that case we haven't seen any dubious rejections due to discretisation, we didn't discuss this issue as a serious caveat. This decision may of course prove premature.) If linear correlations are dominant, $`\overline{s}_n^{(\infty )}`$ can be more suitable. The final accuracy that can be reached depends on the size and structure of the data and is generally sufficient for hypothesis testing. In all the cases we have studied so far, we have observed a substantial improvement over the standard AAFT approach. Convergence properties are also discussed in Ref. In Sec. 5.5 below, we will say more about the remaining inaccuracies.

### 4.4 Example: Southern oscillation index

As an illustration, let us perform a statistical test for nonlinearity on a monthly time series of the Southern Oscillation Index (SOI) from 1866 to 1994 (1560 samples). For a reference on the analysis of Southern Oscillation data see Graham et al. Since a discussion of this climatic phenomenon is not relevant to the issue at hand, let us just consider the time series as an isolated data item. Our null hypothesis is that the data is adequately described by its single time probability distribution and its power spectrum. This corresponds to the assumption that an autoregressive moving average (ARMA) process is generating a sequence that is measured through a static monotonic, possibly nonlinear observation function. For a test at the 99% level of significance ($`\alpha =0.01`$), we generate a collection of $`1/\alpha -1=99`$ surrogate time series which share the single time sample probability distribution and the periodogram estimator with the data. This is carried out using the iterative method described in Sec. 4.3 above (see also Ref. ). Figure 4 shows the data together with one of the 99 surrogates. As a discriminating statistic we use a locally constant predictor in embedding space, using three dimensional delay coordinates at a delay time of one month. Neighbourhoods were selected at 0.2 times the rms amplitude of the data. The test is set up in such a way that the null hypothesis may be rejected when the prediction error is smaller for the data than for all of the 99 surrogates. But, as we can see in Fig. 5, this is not the case. Predictability is not significantly reduced by destroying possible nonlinear structure. This negative result can mean several things. The prediction error statistic may just not have any power to detect the kind of nonlinearity present. Alternatively, the underlying process may be linear and the null hypothesis true.
It could also be, and this seems the most likely option after all we know about the equations governing climate phenomena, that the process is nonlinear but the single time series at this sampling covers such a poor fraction of the rich dynamics that it must appear linear stochastic to the analysis. Of course, our test has been carried out disregarding any knowledge of the SOI situation. It is very likely that more informed measures of nonlinearity may be more successful in detecting structure. We would like to point out, however, that if such information is derived from the same data, or from literature published on it, a bias is likely to occur. Similarly to the situation of multiple tests on the same sample, the level of significance has to be adjusted properly. Otherwise, if many people try, someone will eventually, and maybe accidentally, find a measure that indicates nonlinear structure.

### 4.5 Periodicity artefacts

The randomisation schemes discussed so far all base the quantification of linear correlations on the Fourier amplitudes of the data. Unfortunately, this is not exactly what we want. Remember that the autocorrelation structure given by

$$C(\tau )=\frac{1}{N-\tau }\sum _{n=\tau +1}^{N}s_ns_{n-\tau }$$ (14)

corresponds to the Fourier amplitudes only if the time series is one period of a sequence that repeats itself every $`N`$ time steps. This is, however, not what we believe to be the case. Neither is it compatible with the null hypothesis. Conserving the Fourier amplitudes of the data means that the periodic auto-covariance function

$$C_p(\tau )=\frac{1}{N}\sum _{n=1}^{N}s_ns_{\mathrm{mod}(n-\tau -1,N)+1}$$ (15)

is reproduced, rather than $`C(\tau )`$. This seemingly harmless difference can lead to serious artefacts in the surrogates, and, consequently, spurious rejections in a test. In particular, any mismatch between the beginning and the end of a time series poses problems, as discussed e.g. in Ref. In spectral estimation, problems caused by edge effects are dealt with by windowing and zero padding. None of these techniques have been successfully implemented for the phase randomisation of surrogates since they destroy the invertibility of the transform. Let us illustrate the artefact generated by an end point mismatch with an example. In order to generate an effect that is large enough to be detected visually, consider 1500 iterates of the almost unstable AR(2) process $`s_n=1.9s_{n-1}-0.9001s_{n-2}+\eta _n`$ (upper trace of Fig. 6). The sequence is highly correlated and there is a rather big difference between the first and the last points. Upon periodic continuation, we see a jump between $`s_{1500}`$ and $`s_1`$. Such a jump has spectral power at all frequencies, but with delicately tuned phases. In surrogate time series conserving the Fourier amplitudes, the phases are randomised and the spectral content of the jump is spread in time. In the surrogate sequence shown as the lower trace in Fig. 6, the additional spectral power is mainly visible as a high frequency component. It is quite clear that the difference between the data and such surrogates will easily be picked up by, say, a nonlinear predictor, and can lead to spurious rejections of the null hypothesis. The problem of non-matching ends can often be overcome by choosing a sub-interval of the recording such that the end points do match as closely as possible. The possibly remaining finite phase slip at the matching points usually is of lesser importance.
It can become dominant, though, if the signal is otherwise rather smooth. As a systematic strategy, let us propose to measure the end point mismatch by

$$\gamma _{\mathrm{jump}}=\frac{(s_1-s_N)^2}{\sum _{n=1}^{N}(s_n-\langle s\rangle )^2}$$ (16)

and the mismatch in the first derivative by

$$\gamma _{\mathrm{slip}}=\frac{[(s_2-s_1)-(s_N-s_{N-1})]^2}{\sum _{n=1}^{N}(s_n-\langle s\rangle )^2}.$$ (17)

The fractions $`\gamma _{\mathrm{jump}}`$ and $`\gamma _{\mathrm{slip}}`$ give the contributions to the total power of the series of the mismatch of the end points and the first derivatives, respectively. For the series shown in Fig. 6, $`\gamma _{\mathrm{jump}}=0.45\%`$ and the end effect dominates the high frequency end of the spectrum. By systematically going through shorter and shorter sub-sequences of the data, we find that a segment of 1350 points starting at sample 102 yields $`\gamma _{\mathrm{jump}}=10^{-5}\%`$, or an almost perfect match. That sequence is shown as the upper trace of Fig. 7, together with a surrogate (lower trace). The spurious “crinkliness” is removed. In practical situations, the matching of end points is a simple and mostly sufficient precaution that should not be neglected. Let us mention that the SOI data discussed before is rather well behaved with little end-to-end mismatch ($`\gamma _{\mathrm{jump}}<0.004\%`$). Therefore we didn't have to worry about the periodicity artefact. The only method that has been proposed so far that strictly implements $`C(\tau )`$ rather than $`C_p(\tau )`$ is given in Ref. and will be discussed in detail in Sec. 5 below. The method is very accurate but also rather costly in terms of computer time. It should be used in cases of doubt and whenever a suitable sub-sequence cannot be found.
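A brute-force scan for a well matching sub-sequence can look as follows; restricting the candidate lengths, or weighing in $`\gamma _{\mathrm{slip}}`$ of Eq.(17) as well, are obvious refinements left out here for brevity.

```python
import numpy as np

def gamma_jump(seg):
    """End point mismatch of Eq.(16)."""
    seg = np.asarray(seg, dtype=float)
    return (seg[0] - seg[-1]) ** 2 / np.sum((seg - seg.mean()) ** 2)

def best_matching_segment(s, min_length):
    """Scan all sub-sequences of at least min_length points for the
    smallest end point mismatch; returns (mismatch, start, length)."""
    best = (np.inf, 0, len(s))
    for length in range(len(s), min_length - 1, -1):
        for start in range(len(s) - length + 1):
            g = gamma_jump(s[start:start + length])
            if g < best[0]:
                best = (g, start, length)
    return best
```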
### 4.6 Iterative multivariate surrogates

A natural generalisation of the null hypothesis of a Gaussian linear stochastic process is that of a multivariate process of the same kind. In this case, the process is determined by giving the cross-spectrum in addition to the power spectrum of each of the channels. In Ref., it has been pointed out that phase randomised surrogates are readily produced by multiplying the Fourier phases of each of the channels by the same set of random phases, since the cross-spectrum reflects relative phases only. The authors of Ref. did not discuss the possibility to combine multivariate phase randomisation with an amplitude adjustment step. The extension of the iterative refinement scheme introduced in Sec. 4.3 to the multivariate case is relatively straightforward. Since deviations from a Gaussian distribution are very common and may occur due to a simple invertible rescaling caused by the measurement process, we want to give the algorithm here. Recall that the iterative scheme consists of two procedures which are applied in an alternating fashion until convergence to a fixed point is achieved. The amplitude adjustment procedure by rank ordering, Eq.(13), is readily applied to each channel individually. However, the spectral adjustment in the Fourier domain has to be modified. Let us introduce a second index in order to denote the $`M`$ different channels of a multivariate time series $`\{s_{n,m},n=1,\ldots ,N,m=1,\ldots ,M\}`$. The change that has to be applied to the “filter” step, Eq.(12), is that the phases $`\psi _{k,m}`$ have to be replaced by phases $`\varphi _{k,m}`$ with the following properties. (We have dropped the superscript $`(i)`$ for convenience.) The replacement should be minimal in the least squares sense, that is, it should minimise

$$h_k=\sum _{m=1}^{M}\left|e^{i\varphi _{k,m}}-e^{i\psi _{k,m}}\right|^2.$$ (18)

Also, the new phases must implement the same phase differences exhibited by the corresponding phases $`e^{i\rho _{k,m}}=S_{k,m}/|S_{k,m}|`$ of the data:

$$e^{i(\varphi _{k,m_2}-\varphi _{k,m_1})}=e^{i(\rho _{k,m_2}-\rho _{k,m_1})}.$$ (19)

The last equation can be fulfilled by setting $`\varphi _{k,m}=\rho _{k,m}+\alpha _k`$. With this, we have $`h_k=\sum _{m=1}^{M}2-2\mathrm{cos}(\alpha _k-\psi _{k,m}+\rho _{k,m})`$, which is extremal when

$$\mathrm{tan}\alpha _k=\frac{\sum _{m=1}^{M}\mathrm{sin}(\psi _{k,m}-\rho _{k,m})}{\sum _{m=1}^{M}\mathrm{cos}(\psi _{k,m}-\rho _{k,m})}.$$ (20)

The minimum is selected by taking $`\alpha _k`$ in the correct quadrant.

As an example, let us generate a surrogate sequence for a simultaneous recording of the breath rate and the instantaneous heart rate of a human during sleep. The data is again taken from data set B of the Santa Fe Institute time series contest. The 1944 data points are an end-point matched sub-sequence of the data used as a multivariate example in Ref. In the latter study, which will be commented on in Sec. 6.2 below, the breath rate signal had been considered to be an input and had therefore not been randomised. Here, we will randomise both channels under the condition that their individual spectra as well as their cross-correlation function are preserved as well as possible, while matching the individual distributions exactly. The iterative scheme introduced above took 188 iterations to converge to a fixed point. The data and a bi-variate surrogate are shown in Fig. 8. In Fig. 9, the cross-correlation functions of the data and one surrogate are plotted, and, for comparison, the same for two individual surrogates of the two channels. The most striking difference between data and surrogates is that the coherence of the breath rate is lost. Thus, it is indeed reasonable to exclude the nonlinear structure in the breath dynamics from a further analysis of the heart rate by taking the breath rate as a given input signal. Such an analysis is however beyond the scope of the method discussed in this section. First of all, specifying the full cross-correlation function to a fixed signal plus the autocorrelation function over-specifies the problem and there is no room left for randomisation. In Sec. 6.2 below, we will therefore revisit this problem. With the general constrained randomisation scheme to be introduced below, it will be possible to specify a limited number of lags of the auto- and cross-correlation functions.
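For completeness, the phase choice of Eq.(20) with the correct quadrant is obtained directly from a two-argument arctangent; the array shapes and names below are our own conventions.

```python
import numpy as np

def optimal_phases(psi, rho):
    """Common phase offsets alpha_k of Eq.(20) for multivariate surrogates.

    psi and rho are arrays of shape (K, M) holding the current phases and
    the phases of the data for K frequencies and M channels; arctan2
    automatically selects the quadrant that minimises Eq.(18).
    """
    alpha = np.arctan2(np.sin(psi - rho).sum(axis=1),
                       np.cos(psi - rho).sum(axis=1))
    return rho + alpha[:, None]   # phi_{k,m} = rho_{k,m} + alpha_k, Eq.(19)
```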
## 5 General constrained randomisation

Randomisation schemes based on the Fourier amplitudes of the data are appropriate in many cases. However, there remain some flaws, the strongest being the severely restricted class of testable null hypotheses. The periodogram estimator of the power spectrum is about the only interesting observable that allows for the solution of the inverse problem of generating random sequences under the condition of its given value. In the general approach of Ref., constraints (e.g. autocorrelations) on the surrogate data are implemented by a cost function which has a global minimum when the constraints are fulfilled. This general framework is much more flexible than the Fourier based methods. We will therefore discuss it in some detail.

### 5.1 Null hypotheses, constraints, and cost functions

As we have discussed previously, we will often have to specify a null hypothesis in terms of a complete set of observable properties of the data. Only in specific cases (e.g. the two point autocorrelation function) is there a one-to-one correspondence to a class of models (here the ARMA process). In any case, if $`\{\overline{s}_n\}`$ denotes a surrogate time series, the constraints will most often be of (or can be brought into) the form

$$F_i(\{\overline{s}_n\})=0,\quad i=1,\ldots ,I.$$ (21)

Such constraints can always be turned into a cost function

$$E(\{\overline{s}_n\})=\left(\sum _{i=1}^{I}|w_iF_i(\{\overline{s}_n\})|^q\right)^{1/q}.$$ (22)

The fact that $`E(\{\overline{s}_n\})`$ has a global minimum when the constraints are fulfilled is unaffected by the choice of the weights $`w_i\ne 0`$ and the order $`q`$ of the average. The least squares or $`L^2`$ average is obtained at $`q=2`$, $`L^1`$ at $`q=1`$ and the maximum distance when $`q\to \infty `$. Geometric averaging is also possible (and can be formally obtained by taking the limit $`q\to 0`$ in a proper way). We have experimented with different choices of $`q`$ but we haven't found a choice that is uniformly superior to others. It seems plausible to give either uniform weights or to enhance those constraints which are particularly difficult to fulfil. Again, conclusive empirical results are still lacking.

Consider as an example the constraint that the sample autocorrelations of the surrogate, $`\overline{C}(\tau )=\langle \overline{s}_n\overline{s}_{n-\tau }\rangle `$ (data rescaled to zero mean and unit variance), are the same as those of the data, $`C(\tau )=\langle s_ns_{n-\tau }\rangle `$. This is done by specifying zero discrepancy as a constraint: $`F_\tau (\{\overline{s}_n\})=\overline{C}(\tau )-C(\tau ),\tau =1,\ldots ,\tau _{\mathrm{max}}`$. If the correlations decay fast, $`\tau _{\mathrm{max}}`$ can be restricted; otherwise $`\tau _{\mathrm{max}}=N-1`$ (the largest available lag). Thus, a possible cost function could read

$$E=\underset{\tau =0}{\overset{\tau _{\mathrm{max}}}{\mathrm{max}}}\left|\overline{C}(\tau )-C(\tau )\right|.$$ (23)

Other choices of $`q`$ and the weights are of course also possible. In all the cases considered in this paper, one constraint will be that the surrogates take on the same values as the data but in a different time order. This ensures that data and surrogates can equally likely be drawn from the same (unknown) single time probability distribution. This particular constraint is not included in the cost function but is identically fulfilled by considering only permutations without replacement of the data for minimisation.

By introducing a cost function, we have turned a difficult nonlinear, high dimensional root finding problem (21) into a minimisation problem (22). This leads to extremely many false minima, whence such a strategy is discouraged for general root finding problems. Here, the situation is somewhat different since we need to solve Eq.(21) only over the set of all permutations of $`\{s_n\}`$. Although this set is big, it is still discrete, and powerful combinatorial minimisation algorithms are available that can deal with false minima very well. We choose to minimise $`E(\{\overline{s}_n\})`$ among all permutations $`\{\overline{s}_n\}`$ of the original time series $`\{s_n\}`$ using the method of simulated annealing. Configurations are updated by exchanging pairs in $`\{\overline{s}_n\}`$. The annealing scheme will decide which changes to accept and which to reject.
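Before turning to the annealing procedure itself, note that Eq.(23) translates directly into code; since for permutation surrogates the lag-zero autocorrelation is preserved automatically, the maximum below starts at $`\tau =1`$.

```python
import numpy as np

def autocorr(s, tau):
    """Sample autocorrelation at lag tau for zero mean, unit variance data."""
    return np.dot(s[tau:], s[:-tau]) / (len(s) - tau)

def max_autocorr_cost(surr, data, tau_max):
    """Cost function of Eq.(23): maximum autocorrelation discrepancy.

    C(0) is identical for the data and any permutation of it, hence omitted.
    """
    return max(abs(autocorr(surr, t) - autocorr(data, t))
               for t in range(1, tau_max + 1))
```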
With an appropriate cooling scheme, the annealing procedure can reach any desired accuracy. Apart from simulated annealing, genetic algorithms have become very popular for this kind of problem and there is no reason why they couldn't be used for the present purpose as well.

### 5.2 Computational issues of simulated annealing

Simulated annealing is a very powerful method of combinatorial minimisation in the presence of many false minima. It has a rich literature; classical references are Metropolis et al. and Kirkpatrick, and more recent material can be found for example in Vidal. Despite its many successful applications, using simulated annealing efficiently is still a bit of an art. We will here discuss some issues we have found worth dealing with in our particular minimisation problem. Since the detailed behaviour will be different for each cost function, we can only give some general guidelines.

The main idea behind simulated annealing is to interpret the cost function $`E`$ as an energy in a thermodynamic system. Minimising the cost function is then equivalent to finding the ground state of a system. A glassy solid can be brought close to the energetically optimal state by first heating it and subsequently cooling it. This procedure is called “annealing”, hence the name of the method. If we want to simulate the thermodynamics of this tempering procedure on a computer, we notice that in thermodynamic equilibrium at some finite temperature $`T`$, system configurations should be visited with a probability according to the Boltzmann distribution $`e^{-E/T}`$ of the canonical ensemble. In Monte Carlo simulations, this is achieved by accepting changes of the configuration with a probability $`p=1`$ if the energy is decreased $`(\mathrm{\Delta }E<0)`$, and $`p=e^{-\mathrm{\Delta }E/T}`$ if the energy is increased $`(\mathrm{\Delta }E\ge 0)`$. This selection rule is often referred to as the Metropolis step. In a minimisation problem, the temperature is the parameter in the Boltzmann distribution that sets its width. In particular, it determines the probability to go “up hill”, which is important if we need to get out of false minima. In order to anneal the system to the ground state of minimal “energy”, that is, the minimum of the cost function, we want to first “melt” the system at a high temperature $`T`$, and then decrease $`T`$ slowly, allowing the system to be close to thermodynamic equilibrium at each stage. If the changes to the configuration that we allow connect all possible states of the system, the updating algorithm is called ergodic.
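A bare-bones version of the procedure can look as follows; the starting temperature, the cooling factor and the fixed step budget are placeholders, and a serious implementation would update the cost incrementally, as discussed below, rather than recompute it at every step.

```python
import numpy as np

def anneal_surrogate(s, cost, rng, T0=1.0, cooling=0.9999, n_steps=10**6):
    """Annealing over permutations by random pair exchanges.

    `cost` maps a candidate sequence to its energy E; exchanges are accepted
    by the Metropolis rule: always if E decreases, with probability
    exp(-dE/T) otherwise.
    """
    r = rng.permutation(s)
    E = cost(r)
    T = T0
    for _ in range(n_steps):
        i, j = rng.integers(0, len(r), size=2)
        r[i], r[j] = r[j], r[i]         # propose a pair exchange
        E_new = cost(r)                 # full recomputation: simple but slow
        dE = E_new - E
        if dE < 0 or rng.random() < np.exp(-dE / T):
            E = E_new                   # accept
        else:
            r[i], r[j] = r[j], r[i]     # reject: undo the exchange
        T *= cooling                    # exponential cooling
    return r
```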
Although some general rigorous convergence results are available, in practical applications of simulated annealing some problem-specific choices have to be made. In particular, apart from the constraints and the cost function, one has to specify a method of updating the configurations and a schedule for lowering the temperature. In the following, we will discuss each of these issues.

Concerning the choice of cost function, we have already mentioned that there is a large degeneracy in that many cost functions have an absolute minimum whenever a given set of constraints is fulfilled. The convergence properties can depend dramatically on the choice of cost function. Unfortunately, this dependence seems to be so complicated that it is impossible even to discuss the main behaviour in some generality. In particular, the weights $`w_i`$ in Eq.(22) are sometimes difficult to choose. Heuristically, we would like to reflect changes in the $`I`$ different constraints about equally, provided the constraints are independent. Since their scale is not at all set by Eq.(21), we can use the $`w_i`$ for this purpose. Whenever we have some information about which kind of deviation would be particularly problematic with a given test statistic, we can give it a stronger weight. Often, the shortest lags of the autocorrelation function are of crucial importance, whence we tend to weight autocorrelations by $`1/\tau `$ when they occur in sums. Also, the $`C(\tau )`$ with larger $`\tau `$ are increasingly ill-determined due to the fewer data points entering the sums. As an extreme example, $`C(N-1)=s_1s_N`$ consists of a single product and shows huge fluctuations due to the lack of self-averaging. Finally, there are many more $`C(\tau )`$ with larger $`\tau `$ than at the crucial short lags.

A way to efficiently reach all permutations by small individual changes is by exchanging randomly chosen (not necessarily close-by) pairs. How the interchange of two points can affect the current cost is illustrated schematically in Fig. 10. Optimising the code that computes and updates the cost function is essential since we need its current value at each annealing step — and these are expected to be many. Very often, an exchange of two points is reflected in a rather simple update of the cost function. For example, computing $`C(\tau )`$ for a single lag $`\tau `$ involves $`O(N)`$ multiplications. Updating $`C(\tau )`$ upon the exchange of two points $`i<j`$ only requires the replacement of the terms $`s_is_{i-\tau }`$, $`s_{i+\tau }s_i`$, $`s_js_{j-\tau }`$, and $`s_{j+\tau }s_j`$ in the sum. Note that cheap updates are a source of potential mistakes (e.g. avoid subtracting terms twice in the case that $`i=j-\tau `$) but also of roundoff errors. To ensure that the assumed cost is always equal to the actual cost, code carefully and monitor roundoff by computing a fresh cost function occasionally. Further speed-up can be achieved in two ways. Often, not all the terms in a cost function have to be added up before it is clear that the resulting change goes up hill by an amount that will lead to a rejection of the exchange. Also, pairs to be exchanged can be selected closer in magnitude at low temperatures because large changes are then very likely to increase the cost.

Many cooling schemes have been discussed in the literature. We use an exponential scheme in our work. We will give details on the — admittedly largely ad hoc — choices that have been made in the TISEAN implementation in Appendix A. We found it convenient to have a scheme available that automatically adjusts parameters until a given accuracy is reached. This can be done by cooling at a certain rate until we are stuck (no more accepted changes). If the cost is not yet low enough, we melt the system again and cool at a slower rate.
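The cheap update just described can be coded as follows for a set of lags; treating the coupled case $`|i-j|=\tau `$ by a full recomputation is a deliberately conservative choice of ours, made to avoid exactly the double-counting mistakes mentioned above.

```python
import numpy as np

def corr_sums(s, taus):
    """Unnormalised sums sum_n s[n] * s[n - tau] for each lag in taus."""
    return np.array([np.dot(s[tau:], s[:-tau]) for tau in taus])

def swap_and_update(s, sums, i, j, taus):
    """Exchange s[i] and s[j], updating the correlation sums in place.

    For a lag tau with |i - j| != tau, only the products involving
    positions i or j change, so O(1) corrections per lag suffice.
    """
    for k, tau in enumerate(taus):
        if abs(i - j) == tau:
            continue                    # coupled lag: recomputed after the swap
        for pos in (i, j):
            other = j if pos == i else i
            old, new = s[pos], s[other]
            if pos - tau >= 0:
                sums[k] += (new - old) * s[pos - tau]
            if pos + tau < len(s):
                sums[k] += (new - old) * s[pos + tau]
    s[i], s[j] = s[j], s[i]
    for k, tau in enumerate(taus):
        if abs(i - j) == tau:           # safe full recomputation for this lag
            sums[k] = np.dot(s[tau:], s[:-tau])
```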
### 5.3 Example: avoiding periodicity artefacts

Let us illustrate the use of the annealing method in the case of the standard null hypothesis of a rescaled linear process. We will show how the periodicity artefact discussed in Sec. 4.5 can be avoided by using a more suitable cost function. We prepare a surrogate for the data shown in Fig. 6 (the almost unstable AR(2) process) without truncating its length. We minimise the cost function given by Eq.(23), involving all lags up to $`\tau _{\mathrm{max}}=100`$. Also, we excluded the first and last points from the permutations as a cheap way of imposing the long range correlation. In Fig. 11 we show progressive stages of the annealing procedure, starting from a random scramble. The temperature $`T`$ is decreased by 0.1% after either $`10^6`$ permutations have been tried or $`10^4`$ have been successful. The final surrogate neither has spuriously matching ends nor the additional high frequency components we saw in Fig. 6. The price we had to pay was that the generation of one single surrogate took 6 h of CPU time on a Pentium II PC at 350 MHz. If we had taken care of the long range correlation by leaving the end points loose but taking $`\tau _{\mathrm{max}}=N-1`$, convergence would have been prohibitively slow. Note that for a proper test, we would need at least 19 surrogates. We should stress that this example with its very slow decay of correlations is particularly nasty — but still feasible. Obviously, sacrificing 10% of the points to get rid of the end point mismatch is preferable here to spending several days of CPU time on the annealing scheme. In other cases, however, we may not have such a choice.

### 5.4 Combinatorial minimisation and accuracy

In principle, simulated annealing is able to reach arbitrary accuracy at the expense of computer time. We should, however, remark on a few points. Unlike in other minimisation problems, we are not really interested in the solutions that put $`E=0`$ exactly. Most likely, these are the data set itself and a few simple transformations of it that preserve the cost function (e.g. a time reversed copy). On the other hand, combinatorics makes it very unlikely that we ever reach one of these few of the $`N!`$ permutations, unless $`N`$ is really small or the constraints grossly over-specify the problem. This can be the case, for example, if we include all possible lags of the autocorrelation function, which gives as many (nonlinear) equations as unknowns, $`I=N`$. These may close for small $`N`$ in the space of permutations. In such extreme situations, it is possible to include extra cost terms penalising closeness to one of the trivial transformations of the data. Let us note that if the surrogates are “too similar” to the data, this does not in itself affect the validity of the test. Only the discrimination power may be severely reduced.

Now, if we don't want to reach $`E=0`$, how can we be sure that there are enough independent realisations with $`E\approx 0`$? The theoretical answer depends on the form of the constraints in a complicated way and cannot be given in general. We can, however, offer a heuristic argument that the number of configurations with $`E`$ smaller than some $`\mathrm{\Delta }E`$ grows fast for large $`N`$. Suppose that for large $`N`$ the probability distribution of $`E`$ converges to an asymptotic form $`p(E)`$. Assume further that $`\tilde{p}(\mathrm{\Delta }E)=\text{Prob}(E<\mathrm{\Delta }E)=\int _0^{\mathrm{\Delta }E}p(E)\,dE`$ is nonzero but maybe very small. This is evidently true for autocorrelations, for example. While thus the probability to find $`E<\mathrm{\Delta }E`$ in a random draw from the distribution of the data may be extremely small, say $`\tilde{p}(\mathrm{\Delta }E)=10^{-45}`$ at 10 sigmas from the mean energy, the total number of permutations, figuring as the number of draws, grows as $`N!\approx (N/e)^N\sqrt{2\pi N}`$, that is, much faster than exponentially. Thus, we expect the number of permutations with $`E<\mathrm{\Delta }E`$ to be $`\tilde{p}(\mathrm{\Delta }E)N!`$. For example, $`10^{-45}\times 1000!\approx 10^{2522}`$.
In any case, we can always monitor the convergence of the cost function to avoid spurious results due to residual inaccuracy in the surrogates. As we will discuss below, it can also be a good idea to test the surrogates with a linear test statistic before performing the actual nonlinearity test.

### 5.5 The curse of accuracy

Strictly speaking, the concept of constrained realisations requires the constraints to be fulfilled exactly, a practical impossibility. Most of the research efforts reported in this article have their origin in the attempt to increase the accuracy with which the constraints are implemented, that is, to minimise the bias resulting from any remaining discrepancy. Since most measures of nonlinearity are also sensitive to linear correlations, a side effect of the reduced bias is a reduced variance of such estimators. Paradoxically, the enhanced accuracy may thus result in false rejections of the null hypothesis on the grounds of tiny differences in some nonlinear characteristics. This important point has recently been put forth by Kugiumtzis.

Consider the highly correlated autoregressive process $`x_n=0.99x_{n-1}-0.8x_{n-2}+0.65x_{n-3}+\eta _n`$, measured by the function $`s_n=s(x_n)=x_n|x_n|`$ and then normalised to zero mean and unit variance. The strong correlation together with the rather strong static nonlinearity makes this a very difficult data set for the generation of surrogates. Figure 12 shows the bias and variance for a linear statistic, the unit lag autocorrelation $`C_p(1)`$, Eq.(15), as compared to its goal value given by the data. The left part of Fig. 12 shows $`C_p(1)`$ versus the iteration count $`i`$ for 200 iterative surrogates, $`i=1`$ roughly corresponding to AAFT surrogates. Although the mean accuracy increases dramatically compared to the first iteration stages, the data consistently remains outside a $`2\sigma `$ error bound. Since nonlinear parameters will also pick up linear correlations, we have to expect spurious results in such a case. In the right part, annealed surrogates are generated with the cost function $`E=\mathrm{max}_{\tau =1}^{200}|\overline{C}_p(\tau )-C_p(\tau )|/\tau `$. The bias and variance of $`C_p(1)`$ are plotted versus the cost $`E`$. Since the cost function involves $`C_p(1)`$, it is not surprising that we see good convergence of the bias. It is also noteworthy that the variance is in any event large enough to exclude spurious results due to remaining discrepancy in the linear correlations.

Kugiumtzis suggests testing the validity of the surrogate sample by performing a test using a linear statistic for normalisation. For the data shown in Fig. 12, this would have detected the lack of convergence of the iterative surrogates. Currently, this seems to be the only way around the problem and we thus recommend following his suggestion. With the much more accurate annealed surrogates, we haven't so far seen examples of dangerous remaining inaccuracy, but we cannot exclude their possibility. If such a case occurs, it may be possible to generate unbiased ensembles of surrogates by specifying a cost function that explicitly minimises the bias. This would involve the whole collection of $`M`$ surrogates at the same time, including extra terms like

$$E_{\mathrm{ensemble}}=\sum _{\tau =0}^{\tau _{\mathrm{max}}}\left(\sum _{m=1}^{M}\overline{C}_m(\tau )-C(\tau )\right)^2.$$ (24)

Here, $`\overline{C}_m(\tau )`$ denotes the autocorrelation function of the $`m`$–th surrogate.
In any event, this will be a very cumbersome procedure, in terms of implementation as well as execution speed, and it is questionable whether it is worth the effort.

## 6 Various Examples

In this section, we want to give a number of applications of the constrained randomisation approach. If the constraints consist only of the Fourier amplitudes and the single time probability distribution, the iteratively refined, amplitude adjusted surrogates discussed in Sec. 4.3 are usually sufficient, provided the end point artefact can be controlled and convergence is satisfactory. Even the slightest extension of these constraints makes it impossible to solve the inverse problem directly, and we have to follow the more general combinatorial approach discussed in the previous section. The following examples are meant to illustrate how this can be carried out in practice.

### 6.1 Including non-stationarity

Constrained randomisation using combinatorial minimisation is a very flexible method since in principle arbitrary constraints can be realised. Although it is seldom possible to specify a formal null hypothesis for more general constraints, it can be quite useful to be able to incorporate into the surrogates any feature of the data that is understood already or that is uninteresting. Non-stationarity has been excluded so far by requiring the equations defining the null hypothesis to remain constant in time. This has a two-fold consequence. First, and most importantly, we must keep in mind that the test will have discrimination power against non-stationary signals as a valid alternative to the null hypothesis. Thus a rejection can be due to nonlinearity or non-stationarity equally well. Second, if we do want to include non-stationarity in the null hypothesis, we have to do so explicitly.

Let us illustrate how this can be done with an example from finance. The time series consists of 1500 daily returns (until the end of 1996) of the BUND Future, a derived German financial instrument. The data were kindly provided by Thomas Schürmann, WGZ-Bank Düsseldorf. As can be seen in the upper panel of Fig. 13, the sequence is non-stationary in the sense that the local variance and, to a lesser extent, also the local mean undergo changes on a time scale that is long compared to the fluctuations of the series itself. This property is known in the statistical literature as heteroscedasticity and is modelled by the so-called GARCH and related models. Here, we want to avoid the construction of an explicit model from the data and rather ask whether the data is compatible with the null hypothesis of a correlated linear stochastic process with time dependent local mean and variance. We can answer this question in a statistical sense by creating surrogate time series that show the same linear correlations and the same time dependence of the running mean and running variance as the data, and then comparing a nonlinear statistic between data and surrogates. The lower panel in Fig. 13 shows a surrogate time series generated using the annealing method. The cost function was set up to match the autocorrelation function up to five days and the moving mean and variance in sliding windows of 100 days duration. In Fig. 13, the running mean and variance are shown as points and error bars, respectively, in the middle trace. The deviation of these between data and surrogate has been minimised to such a degree that it can no longer be resolved.
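A cost function in the spirit of this example can be assembled as follows; the window width, the number of lags and the equal weighting of the three terms are arbitrary choices of ours rather than the exact settings used for Fig. 13.

```python
import numpy as np

def running_stats(s, width):
    """Mean and variance in sliding windows of the given width."""
    windows = np.lib.stride_tricks.sliding_window_view(s, width)
    return windows.mean(axis=1), windows.var(axis=1)

def nonstationary_cost(surr, data, max_lag=5, width=100):
    """Match short-lag autocorrelations plus running mean and variance."""
    c_data = [np.dot(data[t:], data[:-t]) / (len(data) - t)
              for t in range(1, max_lag + 1)]
    c_surr = [np.dot(surr[t:], surr[:-t]) / (len(surr) - t)
              for t in range(1, max_lag + 1)]
    m_d, v_d = running_stats(data, width)
    m_s, v_s = running_stats(surr, width)
    return (np.abs(np.array(c_surr) - np.array(c_data)).sum()
            + np.abs(m_s - m_d).mean() + np.abs(v_s - v_d).mean())
```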
A comparison of the time-asymmetry statistic Eq.(3) for the data and 19 surrogates did not reveal any discrepancy, and the null hypothesis could not be rejected.

### 6.2 Multivariate data

In Ref. , the flexibility of the approach was illustrated by a simultaneous recording of the breath rate and the instantaneous heart rate of a human subject during sleep. The interesting question was how much of the structure in the heart rate data can be explained by linear dependence on the breath rate. In order to answer this question, surrogates were made that had the same autocorrelation structure but also the same cross-correlation with respect to the fixed input signal, the breath rate. While the linear cross-correlation with the breath rate explained the coherent structure of the heart rate, other features, in particular its asymmetry under time reversal, remained unexplained. Possible explanations include artefacts due to the peculiar way of deriving heart rate from inter-beat intervals, nonlinear coupling to the breath activity, nonlinearity in the cardiac system, and others. Within the general framework, multivariate data can be treated very much the same way as scalar time series. In the above example, we chose to use one of the channels as a reference signal which was not randomised. The rationale behind this was that we were not looking for nonlinear structure in the breath rate itself and thus we didn't want to destroy any such structure in the surrogates. In other cases, we can decide either to keep or to destroy cross-correlations between channels. The former can be achieved by applying the same permutations to all channels. Due to the limited experience we have so far and the multitude of possible cases, multivariate problems have not been included in the TISEAN implementation yet.

### 6.3 Uneven sampling

Let us show how the constrained randomisation method can be used to test for nonlinearity in time series taken at time intervals of different length. Unevenly sampled data are quite common; examples include drill core data, astronomical observations, or stock price notations. Most observables and algorithms cannot easily be generalised to this case, which is particularly true for nonlinear time series methods. (See for material on irregularly sampled time series.) Interpolating the data to equally spaced sampling times is not recommendable for a test for nonlinearity since one could not a posteriori distinguish between genuine structure and nonlinearity introduced spuriously by the interpolation process. Note that zero padding is also a nonlinear operation in the sense that stretches of zeroes are unlikely to be produced by any linear stochastic process. For data that is evenly sampled except for a moderate number of gaps, surrogate sequences can be produced relatively straightforwardly by assuming the value zero during the gaps and minimising a standard cost function like Eq.(23) while excluding the gaps from the permutations tried. The error made in estimating correlations would then be identical for the data and surrogates and could not affect the validity of the test. Of course, one would have to modify the nonlinearity measure to avoid the gaps. For data sampled at incommensurate times, such a strategy can no longer be adopted. We then need different means to specify the linear correlation structure. Two different approaches are viable, one residing in the spectral domain and one in the time domain. Consider a time series sampled at times $`\{t_n\}`$ that need not be equally spaced.
The power spectrum can then be estimated by the Lomb periodogram, as discussed for example in Ref. . For time series sampled at constant time intervals, the Lomb periodogram yields the standard squared Fourier transformation. Except for this particular case, it does not have any inverse transformation, which makes it impossible to use the standard surrogate data algorithms mentioned in Sec. 4. In Ref. , we used the Lomb periodogram of the data as a constraint for the creation of surrogates. Unfortunately, imposing a given Lomb periodogram is very time consuming because at each annealing step, the $`O(N)`$ spectral estimator has to be computed at $`O(N_f)`$ frequencies with $`N_f\propto N`$. Press et al. give an approximation algorithm that uses the fast Fourier transform to compute the Lomb periodogram in $`O(N\mathrm{log}N)`$ time rather than $`O(N^2)`$. The resulting code is still quite slow. As a more efficient alternative to the commonly used but computationally costly Lomb periodogram, let us suggest using binned autocorrelations. They are defined as follows. For a continuous signal $`s(t)`$ (take $`\langle s\rangle =0`$, $`\langle s^2\rangle =1`$ for simplicity of notation here), the autocorrelation function is $`C(\tau )=\langle s(t)s(t-\tau )\rangle =(1/T)\int _0^T𝑑t^{\prime }s(t^{\prime })s(t^{\prime }-\tau )`$. It can be binned to a bin size $`\mathrm{\Delta }`$, giving $`C_\mathrm{\Delta }(\tau )=(1/\mathrm{\Delta })\int _{\tau -\mathrm{\Delta }}^\tau 𝑑\tau ^{\prime }C(\tau ^{\prime })`$. We now have to approximate all integrals using the available values of $`s(t_n)`$. In general, we estimate

$$\int _a^b𝑑t\,f(t)\approx (b-a)\frac{\sum _{n\in \mathcal{B}(a,b)}f(t_n)}{|\mathcal{B}(a,b)|}.$$ (25)

Here, $`\mathcal{B}(a,b)=\{n:a<t_n\leq b\}`$ denotes the bin ranging from $`a`$ to $`b`$ and $`|\mathcal{B}(a,b)|`$ the number of its elements. We could improve this estimate by some interpolation of $`f(\cdot )`$, as is customary with numerical integration, but the accuracy of the estimate is not the central issue here. For the binned autocorrelation, this approximation simply gives

$$C_\mathrm{\Delta }(\tau )\approx \frac{\sum _{(i,j)\in \mathcal{B}_{ij}(\tau -\mathrm{\Delta },\tau )}s(t_i)s(t_j)}{|\mathcal{B}_{ij}(\tau -\mathrm{\Delta },\tau )|}.$$ (26)

Here, $`\mathcal{B}_{ij}(a,b)=\{(i,j):a<t_i-t_j\leq b\}`$. Of course, empty bins lead to undefined autocorrelations. If we have evenly sampled data and unit bins, $`t_i-t_{i-1}=\mathrm{\Delta }`$, $`i=2,\mathrm{\dots },N`$, then the binned autocorrelations coincide with ordinary autocorrelations at $`\tau =i\mathrm{\Delta }`$, $`i=0,\mathrm{\dots },N-1`$. Once we are able to specify the linear properties of a time series, we can also define a cost function as usual and generate surrogates that realise the binned autocorrelations of the data. A delicate point, however, is the choice of bin size. If we take it too small, we get bins that are almost empty. Within the space of permutations, there may then be only a few ways to generate precisely that value of $`\overline{C}_\mathrm{\Delta }(\tau )`$; in other words, we over-specify the problem. If we take the bin size too large, we might not capture important structure in the autocorrelation function. As an application, let us construct randomised versions of part of an ice core data set, taken from the Greenland Ice Sheet Project Two (GISP2) . An extensive data base resulting from the analysis of physical and chemical properties of Greenland ice up to a depth of 3028.8 m has been published by the National Snow and Ice Data Center together with the World Data Center-A for Palaeoclimatology, National Geophysical Data Center, Boulder, Colorado .
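For reference, Eq.(26) translates into a few lines of code — a minimal sketch with names of our choosing; it is an $`O(N^2)`$ loop, whereas the TISEAN cost function updates the bin histogram incrementally at each annealing step:

```python
import numpy as np

def binned_autocorr(t, s, delta, tau_max):
    """Binned autocorrelation C_Delta(tau), Eq.(26), for samples s taken at
    arbitrary times t; bins are (tau - Delta, tau]. Empty bins give NaN."""
    s = (s - s.mean()) / s.std()            # enforce <s> = 0, <s^2> = 1
    nbins = int(np.ceil(tau_max / delta))
    sums = np.zeros(nbins)
    counts = np.zeros(nbins)
    for i in range(len(t)):
        for j in range(len(t)):
            lag = t[i] - t[j]
            if 0.0 < lag <= nbins * delta:
                b = int(np.ceil(lag / delta)) - 1
                sums[b] += s[i] * s[j]
                counts[b] += 1
    return np.where(counts > 0, sums / np.where(counts > 0, counts, 1), np.nan)
```

The same counting argument will reappear in a simpler form for spike trains in Sec. 6.4.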
A long ice core is usually cut into equidistant slices and initially, all measurements are made versus depth. Considerable expertise then goes into the dating of each slice . Since the density of the ice, as well as the annual total deposition, changes with time, the final time series data are necessarily unevenly sampled. Furthermore, often a few values are missing from the record. We will study a subset of the data ranging back 10000 years in time, corresponding to a depth of 1564 m, and continuing until 2000 years before present. Figure 14 shows the sampling rate versus time for the particular ice core considered. We use the $`\delta ^{18}`$O time series which indicates the deviation of the $`\alpha ={}_{}{}^{18}\mathrm{O}/^{16}\mathrm{O}`$ ratio from its standard value $`\alpha _0`$: $`\delta ^{18}\text{O}=10^3(\alpha -\alpha _0)/\alpha _0`$. Since the ratio of the condensation rates of the two isotopes depends on temperature, the isotope ratio can be used to derive a temperature time series. The upper trace in Fig. 15 shows the recording from 10000 years to 2000 years before present, comprising 538 data points. In order to generate surrogates with the same linear properties, we estimate autocorrelations up to a lag of $`\tau =1000`$ years by binning to a resolution of 5 y. A typical surrogate is shown as the lower trace in Fig. 15. We have not been able to detect any nonlinear structure by comparing this recording with 19 surrogates, neither using time asymmetry nor prediction errors. It should be admitted, however, that we haven't attempted to provide nonlinearity measures optimised for the unevenly sampled case. For that purpose, some interpolation is permissible since it is then part of the nonlinear statistic. Of course, in terms of geophysics, we are asking a very simplistic question here. We wouldn't really expect strong nonlinear signatures or even chaotic dynamics in such a single probe of the global climate. All the interesting information — and expected nonlinearity — lies in the interrelation between various measurements and the assessment of long term trends we have deliberately excluded by selecting a subset of the data.

### 6.4 Spike trains

A spike train is a sequence of $`N`$ events (for example neuronal spikes, or heart beats) occurring at times $`\{t_n\}`$. Variations in the events beyond their timing are ignored. Let us first note that this very common kind of data is fundamentally different from the case of unevenly sampled time series studied in the last section in that the sampling instances $`\{t_n\}`$ are not independent of the measured process. In fact, between these instances, the value of $`s(t)`$ is undefined and the $`\{t_n\}`$ contain all the information there is. Very often, the discrete sequence of inter-event intervals $`x_n=t_n-t_{n-1}`$ is treated as if it were an ordinary time series. We must keep in mind, however, that the index $`n`$ is not proportional to time any more. It depends on the nature of the process whether it is more reasonable to look for correlations in time or in event number. Since in the latter case we can use the standard machinery of regularly sampled time series, let us concentrate on the more difficult real time correlations. In particular the literature on heart rate variability (HRV) contains interesting material on the question of spectral estimation and linear modelling of spike trains, here usually inter-beat (RR) interval series, see e.g. Ref. . For the heart beat interval sequence shown in Fig.
16, spectral analysis of $`x_n`$ versus $`n`$ may reveal interesting structure, but even the mean periodicity of the heart beat would be lost and serious aliasing effects would have to be faced. A very convenient and powerful approach that uses the real time $`t`$ rather than the event number $`n`$ is to write a spike train as a sum of Dirac delta functions located at the spike instances:

$$s(t)=\sum _{n=1}^{N}\delta (t-t_n).$$ (27)

With $`\int 𝑑t\,s(t)e^{i\omega t}=\sum _{n=1}^{N}e^{i\omega t_n}`$, the periodogram spectral estimator is then simply obtained by squaring the (continuous) Fourier transform of $`s(t)`$:

$$P(\omega )=\frac{1}{2\pi }\left|\sum _{n=1}^{N}e^{i\omega t_n}\right|^2.$$ (28)

Other spectral estimators can be derived by smoothing $`P(\omega )`$ or by data windowing. It is possible to generate surrogate spike trains that realise the spectral estimator Eq.(28), although this is computationally very cumbersome. Again, we can take advantage of the relative computational ease of binned autocorrelations here (thanks to Bruce Gluckman for pointing this out to us). Introducing a normalisation constant $`\alpha `$, we can write $`C(\tau )=\alpha \int 𝑑t\,s(t)s(t-\tau )=\alpha \int 𝑑t\sum _{i,j=1}^{N}\delta (t-t_i)\delta (t-\tau -t_j)`$. Then again, the binned autocorrelation function is defined by $`C_\mathrm{\Delta }(\tau )=(1/\mathrm{\Delta })\int _{\tau -\mathrm{\Delta }}^\tau 𝑑\tau ^{\prime }C(\tau ^{\prime })`$. Now we carry out both integrals and thus eliminate both delta functions. If we choose $`\alpha `$ such that $`C(0)=1`$, we obtain:

$$C_\mathrm{\Delta }(\tau )=\frac{|\mathcal{B}_{ij}(\tau -\mathrm{\Delta },\tau )|}{N\mathrm{\Delta }}.$$ (29)

Thus, all we have to do is to count all possible intervals $`t_i-t_j`$ in a bin. The upper panel in Fig. 17 shows the binned autocorrelation function with bin size $`\mathrm{\Delta }=0.02`$ sec up to a lag of 6 sec for the heart beat data shown in Fig. 16. Superimposed is the corresponding curve for a surrogate that has been generated with the deviation from the binned autocorrelations of the data as a cost function. The two curves are practically indistinguishable. In this particular case, most of the structure is given by the mean periodicity of the data. The lower trace of the same figure shows that even a random scramble shows very similar (but not identical) correlations. Information about the main periodicity is already contained in the distribution of inter-beat intervals, which is preserved under permutation. As with unevenly sampled data, the choice of binning and the maximal lag are somewhat delicate and not that much practical experience exists. It is certainly again recommendable to avoid empty bins. The possibility of limiting the temporal range of $`C_\mathrm{\Delta }(\tau )`$ is a powerful instrument to keep computation time within reasonable limits.

## 7 Questions of interpretation

Having set up all the ingredients for a statistical hypothesis test of nonlinearity, we may ask what we can learn from the outcome of such a test. The formal answer is of course that we have, or have not, rejected a specific hypothesis at a given level of significance. How interesting this information is, however, depends on the null hypothesis we have chosen. The test is most meaningful if the null hypothesis is plausible enough so that we are prepared to believe it in the lack of evidence against it. If this is not the case, we may be tempted to go beyond what is justified by the test in our interpretation.
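Before turning to examples, it may help to fix the bare mechanics of such a test in code. The sketch below is purely illustrative (not TISEAN code) and performs a one-sided rank test with 99 surrogates, appropriate for a statistic — such as a nonlinear prediction error — that is expected to be smaller for the data if the null hypothesis is false:

```python
def surrogate_test(data, make_surrogate, statistic, n_surr=99):
    """One-sided rank test at the 1/(n_surr + 1) level: the null hypothesis
    is rejected if the data yield the smallest value of the statistic among
    the data and all surrogates."""
    t_data = statistic(data)
    t_surr = [statistic(make_surrogate(data)) for _ in range(n_surr)]
    reject = all(t_data < t for t in t_surr)
    return reject, t_data, sorted(t_surr)
```

For a two-sided statistic such as Eq.(3), one would instead reject when the data fall outside both extremes of the surrogate distribution, at a correspondingly adjusted significance level.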
Take as a simple example a recording of hormone concentration in a human. We can test for the null hypothesis of a stationary Gaussian linear random process by comparing the data to phase randomised Fourier surrogates. Without any test, we know that the hypothesis cannot be true since hormone concentration, unlike Gaussian variates, is strictly non-negative. If we failed to reject the null hypothesis by a statistical argument, we will therefore go ahead and reject it anyway by common sense, and the test was pointless. If we did reject the null hypothesis by finding a coarse-grained “dimension” which is significantly lower in the data than in the surrogates, the result formally does not give any new information, but we might be tempted to speculate on the possible interpretation of the “nonlinearity” detected. This example is maybe too obvious; it was meant only to illustrate that the hypothesis we test against is often not what we would actually accept to be true. Other, less obvious and more common, examples include signals which are known (or found by inspection) to be non-stationary (which is not covered by most null hypotheses), or signals which are likely to be measured in a static nonlinear, but non-invertible way. Before we discuss these two specific caveats in some more detail, let us illustrate the delicacy of these questions with a real data example. Figure 18 shows an intra-cranial recording of the neuronal electric field during an epileptic seizure, together with one iteratively generated surrogate data set that has the same amplitude distribution and the same linear correlations or frequency content as the data. We have eliminated the end-point mismatch by truncating the series to 1875 samples. A test was scheduled at the 99% level of significance, using nonlinear prediction errors (see Eq.(5), $`m=3`$, $`\tau =5`$, $`ϵ=0.2`$) as a discriminating statistic. The nonlinear correlations we are looking for should enhance predictability and we can thus perform a one-sided test for a significantly smaller error. In a test with one data set and 99 surrogates, the likelihood that the data would yield the smallest error by mere coincidence is exactly 1 in 100. Indeed, as can be seen in Fig. 19, the test just rejects the null hypothesis. Unfortunately, the test itself does not give any guidance as to what kind of nonlinearity is present, and we have to face notoriously ill-defined questions like what is the most natural interpretation. Similar spike-and-wave dynamics as in the present example has previously been reported as chaotic, but these findings have been questioned . Hernández and coworkers have suggested a stochastic limit cycle as a simple way of generating spike-and-wave-like dynamics. If we represent the data in time delay coordinates — which is what we would usually do with chaotic systems — the nonlinearity is reflected by the “hole” in the centre (left panel in Fig. 20). A linear stochastic process could equally well show oscillations, but its amplitude would fluctuate in a different way, as we can see in the right panel of the same figure for an iso-spectral surrogate. It is difficult to answer the question whether the nonlinearity could have been generated by a static mechanism like the measurement process (beyond the invertible rescaling allowed by the null hypothesis). Deterministic chaos in a narrower sense seems rather unlikely if we regard the prediction errors shown in Fig.
19: Although significantly lower than that of the surrogates, the absolute value of the nonlinear prediction error is still more than 50% of the rms amplitude of the data (which had been rescaled to unit variance). Not surprisingly, the correlation integral (not shown here) does not show any proper scaling region either. Thus, all we can hand back to the clinical researchers is a solid statistical result, but the insight into what process is generating the oscillations is limited. A recent suggestion for surrogates for the validation of unstable periodic orbits (UPOs) may serve as an example of the difficulty in interpreting results for fancier null hypotheses. Dolan and coworkers coarse-grain amplitude adjusted data in order to extract a transfer matrix that can be iterated to yield typical realisations of a Markov chain. (Contrary to what is said in Ref. , binning a two-dimensional distribution yields a first-order, rather than a second-order, Markov process, for which a three-dimensional binning would be needed to include the image distribution as well.) The rationale there is to test if the finding of a certain number of UPOs could be coincidental, that is, not generated by dynamical structure. Testing against an order $`D`$ Markov model removes dynamical structure beyond the “attractor shape” (AS) in $`D+1`$ dimensions. It is not clear to us what the interpretation of such a test would be. In the case of a rejection, they would infer a dynamical nature of the UPOs found. But that would most probably mean that in some higher dimensional space, the dynamics could be successfully approximated by a Markov chain acting on a sufficiently fine mesh. This is at least true for finite dimensional dynamical systems. In other words, we cannot see what sort of dynamical structure would generate UPOs but not show its signature in some higher order Markov approximation.

### 7.1 Non-dynamic nonlinearity

A non-invertible measurement function is, with current methods, indistinguishable from dynamic nonlinearity. The most common case is that the data are squared moduli of some underlying dynamical variable. This is supposed to be true for the celebrated sunspot numbers. Sunspot activity is generally connected with magnetic fields and is to first approximation proportional to the squared field strength. Obviously, sunspot numbers are non-negative, but also the null hypothesis of a monotonically rescaled Gaussian linear random process is to be rejected since taking squares is not an invertible operation. Unfortunately, the framework of surrogate data does not currently provide a method to test against null hypotheses involving non-invertible measurement functions. Yet another example is given by linearly filtered time series. Even if the null hypothesis of a monotonically rescaled Gaussian linear random process is true for the underlying signal, it is usually not true for filtered copies of it, in particular sequences of first differences; see Prichard for a discussion of this problem. The catch is that nonlinear deterministic dynamical systems may produce irregular time evolution, or chaos, and the signals generated by such processes will be easily found to be nonlinear by statistical methods. But many authors have confused cause and effect in this logic: deterministic chaos does imply nonlinearity, but not vice versa.
The confusion is partly due to the heavy use of methods inspired by chaos theory, leading to arguments like “If the fractal dimension algorithm has power to detect nonlinearity, the data must have a fractal attractor!” Let us give a very simple and commonplace example where such a reasoning would lead the wrong way. One of the most powerful indicators of nonlinearity in a time series is the change of statistical properties introduced by a reversal of the time direction: Linear stochastic processes are fully characterised by their power spectrum, which does not contain any information on the direction of time. One of the simplest ways to measure time asymmetry is by taking the first differences of the series to some power, see Eq.(3). Despite its high discrimination power for many, though not all, dynamical nonlinearities, this statistic has not been very popular in recent studies, probably since it is rather unspecific about the nature of the nonlinearity. Let us illustrate this apparent flaw by an example where time reversal asymmetry is generated by the measurement process. Consider a signal generated by a second order autoregressive (AR(2)) process $`x_n=1.6x_{n-1}-0.61x_{n-2}+\eta _n`$. The sequence $`\{\eta _n\}`$ consists of independent Gaussian random numbers with a variance chosen such that the data have unit variance. A typical output of 2000 samples is shown as the upper panel in Fig. 21. Let the measurement be such that the data are rescaled by the strictly monotonic function $`s_n=e^{x_n/2}`$. The resulting sequence (see the lower panel in Fig. 21) still satisfies the null hypothesis formulated above. This is no longer the case if we take differences of this signal, a linear operation that superficially seems harmless for a “linear” signal. Taking differences turns the up–down asymmetry of the data into a forward–backward asymmetry. As has been pointed out by Prichard, the static nonlinearity and the linear filtering are not interchangeable with respect to the null hypothesis, and the sequence $`\{z_n=s_n-s_{n-5}=e^{x_n/2}-e^{x_{n-5}/2}\}`$ must be considered nonlinear in the sense that it violates the null hypothesis. Indeed, such a sequence (see the upper panel in Fig. 22) is found to be nonlinear at the 99% level of significance using the statistic given in Eq.(3), but also using nonlinear prediction errors. (Note that the nature of the statistic Eq.(3) requires a two-sided test.) A single surrogate series is shown in the lower panel of Fig. 22. The tendency of the data to rise slowly but to fall fast is removed in the linear surrogate, as it should be.

### 7.2 Non-stationarity

It is quite common in bio-medical time series (and elsewhere) that otherwise harmless looking data once in a while are interrupted by a singular event, for example a spike. It is debatable whether such spikes can be generated by a linear process with nonlinear rescaling. We do not want to enter such a discussion here but merely state that a time series that covers only one or a few such events is not suitable for the statistical study of the spike generation process. The best working assumption is that the spike comes in by some external process, thus rendering the time series non-stationary. In any case, the null hypotheses we are usually testing against are not likely to generate such singular events autonomously. Thus, typically, a series with a single spike will be found to violate the null hypothesis, but, arguably, the cause is non-stationarity rather than non-linearity.
Let us discuss as a simple example the same AR(2) process considered previously, this time without any rescaling. Only at a single instant, $`n=1900`$, the system is kicked by a large impulse instead of the Gaussian variate $`\eta _{1900}`$. This impulse leads to the formation of a rather large spike. Such a sequence is shown in Fig. 23. Note that due to the correlations in the process, the spike covers more than a single measurement. When we generate surrogate data, the first observation we make is that it takes the algorithm more than 400 iterations in order to converge to a reasonable tradeoff between the correct spectrum and the required distribution of points. Nevertheless, the accuracy is quite good — the spectrum is correct within 0.1% of the rms amplitude. Visual inspection of the lower panel of Fig. 23 shows that the spectral content — and the assumed values — during the single spike are represented in the surrogates by a large number of shorter spikes. The surrogates cannot know of an external kick. The visual result can be confirmed by a statistical test with several surrogates, equally well (99% significance) by a time asymmetry statistic or a nonlinear prediction error. If non-stationarity is known to be present, it is necessary to include it in the null hypothesis explicitly. This is in general very difficult but can be undertaken in some well behaved cases. In Sec. 6.1 we discussed the simplest situation of a slow drift in the calibration of the data. It has been shown empirically that a slow drift in system parameters is not as harmful as expected . It is possible to generate surrogates for sliding windows and restrict the discriminating statistics to exclude the points at the window boundaries. It is quite obvious that special care has to be taken in such an analysis.

## 8 Conclusions: Testing a Hypothesis vs. Testing Against Surrogates

Most of what we have to say about the interpretation of surrogate data tests, and spurious claims in the literature, can be summarised by stating that there is no such thing in statistics as testing a result against surrogates. All we can do is to test a null hypothesis. This is more than a difference in words. In the former case, we assume a result to be true unless it is rendered obsolete by finding the same with trivial data. In the latter case, the only one that is statistically meaningful, we assume a more or less trivial null hypothesis to be true, unless we can reject it by finding significant structure in the data. As everywhere in science, we are applying Occam’s razor: We seek the simplest — or least interesting — model that is consistent with the data. Of course, as always when such categories are invoked, we can debate what is “interesting”. Is a linear model with several coefficients more or less parsimonious than a nonlinear dynamical system written down as a one line formula? People unfamiliar with spectral time series methods often find their use and interpretation at least as demanding as the computation of correlation dimensions. From such a point of view it is quite natural to take the nonlinearity of the world for granted, while linearity needs to be established by a test against surrogates. The reluctance to take surrogate data as what they are, a means to test a null hypothesis, is partly explainable by the choice of null hypotheses which are currently available for proper statistical testing.
As we have tried to illustrate in this paper, recent efforts on the generalisation of randomisation schemes broaden the repertoire of null hypotheses. The hope is that we can eventually choose one that is general enough to be acceptable if we fail to reject it with the methods we have. Still, we cannot prove that there is no dynamics in the process beyond what is covered by the null hypothesis. From a practical point of view, however, there is not much of a difference between structure that is not there and structure that is undetectable with our observational means.

## Acknowledgements

Most data sets shown in this paper are publicly available and the sources were cited in the text. Apart from these, K. Lehnertz and C. Elger at the University clinic in Bonn kindly permitted us to use their epilepsy data. Thomas Schürmann at the WGZ-Bank Düsseldorf is acknowledged for providing financial time series data. A fair fraction of the ideas and opinions expressed in this paper we had the opportunity to discuss intensively over the past few years with a number of people, most notably James Theiler, Danny Kaplan, Dimitris Kugiumtzis, Steven Schiff, Floris Takens, Holger Kantz, and Peter Grassberger. We thank Michael Rosenblum for explaining to us the spectral estimation of spike trains. Bruce Gluckman pointed out the computational advantage of binned correlations over spike train spectra. Finally, we acknowledge financial support by the SFB 237 of the Deutsche Forschungsgemeinschaft.

## Appendix A The TISEAN implementation

Starting with the publication of source code for a few nonlinear time series algorithms by Kantz and Schreiber , a growing number of programs have been put together to provide researchers with a library of common tools. The TISEAN software package is freely available in source code form and an introduction to the contained methods has been published in Ref. . More recent versions of the package ($`\geq 2.0`$) contain a comprehensive range of routines for the generation and testing of surrogate data. The general constrained randomisation scheme described in Sec. 5 is implemented as an extendable framework that allows for the addition of further cost functions with relatively little effort. With few exceptions, all the code used in the examples in this paper is publicly available as part of TISEAN 2.0.

### A.1 Measures of nonlinearity

A few programs in the package directly issue scalar quantities that can be used in nonlinearity testing. These are the zeroth order nonlinear predictors (predict and zeroth), which implement Eq.(5), and the time reversibility statistic (timerev), implementing Eq.(3). For a couple of other quantities, we have deliberately omitted a black box algorithm to turn the raw results into a single number. Typical examples are the programs for dimension estimation (d2, c2, c2naive, and c1) which compute correlation sums for ranges of length scales $`ϵ`$ and embedding dimensions $`m`$. For dimension estimation, these curves have to be interpreted with due care to establish scaling behaviour and convergence with increasing $`m`$. Single numbers issued by black box routines have led to too many spurious results in the literature. Researchers often forget that such numbers are not interpretable as fractal dimensions at all but only useful for comparison and classification.
Without genuine scaling at small length scales, a data set that gives $`\widehat{D}_2=4.2`$ by some ad hoc method to estimate $`\widehat{D}_2`$ cannot be said to have more degrees of freedom, or be more “complex”, than one that yields $`\widehat{D}_2=3.5`$. This said, users are welcome to write their own code to turn correlation integrals, local slopes (c2d), Takens’ estimator (c2t), or Gaussian Kernel correlation integrals (c2g) into nonlinearity measures. The same situation is found for Lyapunov exponents (lyap\_k, lyap\_r), entropies (boxcount) and other quantities. Since all of these have already been described in Ref. , we refer the reader there for further details.

### A.2 Iterative FFT surrogates

The workhorse for the generation of surrogate data within the TISEAN package is the program surrogates. It implements the iterative Fourier based scheme introduced in Ref. and discussed in Sec. 4.3. It has been extended to be able to handle multivariate data as discussed in Sec. 4.6. An FFT routine is used that can handle data sets of $`N`$ points if $`N`$ can be factorised using prime factors 2, 3, and 5 only. Routines that take arbitrary $`N`$ will end up doing a slow Fourier transform if $`N`$ is not factorisable with small factors. Occasionally, the length restriction results in the loss of a few points. The routine starts with a random scramble as $`\{\overline{r}_n^{(0)}\}`$, performs as many iterates as necessary to reach a fixed point, and then prints out $`\overline{r}_n^{(\mathrm{\infty })}`$ or $`\overline{s}_n^{(\mathrm{\infty })}`$, as desired. Further, the number of iterations and the residual root mean squared discrepancy between $`\overline{r}_n^{(\mathrm{\infty })}`$ and $`\overline{s}_n^{(\mathrm{\infty })}`$ are shown. The number of iterations can be limited by an option. In particular, $`i=0`$ gives the initial scramble as $`\{\overline{r}_n^{(0)}\}`$ or a non-rescaled FFT surrogate as $`\{\overline{s}_n^{(0)}\}`$. The first iterate, $`\{\overline{r}_n^{(1)}\}`$, is approximately (but not quite) equivalent to an AAFT surrogate. It is advisable to evaluate the residual discrepancy whenever the algorithm took more than a few iterations. In cases of doubt whether the accuracy is sufficient, it may be useful to plot the autocorrelation function (corr or autocor) of the data and $`\overline{r}_n^{(\mathrm{\infty })}`$, and, in the multivariate case, the cross-correlation function (xcor) between the channels. The routine can generate up to 999 surrogates in one call. Since the periodicity artefact discussed in Sec. 4.5 can lead to spurious test results, we need to select a suitable sub-sequence of the data before making surrogates. For this purpose, TISEAN contains the program endtoend. Let $`\{s_n^{(n_0)}=s_{n+n_0}\}`$ be a sub-sequence of length $`\stackrel{~}{N}`$ and offset $`n_0`$.
The program then computes the contribution of the end-to-end mismatch $`(s_1^{(n_0)}-s_{\stackrel{~}{N}}^{(n_0)})^2`$ to the total power in the sub-sequence:

$$\gamma _{\mathrm{jump}}^{(\stackrel{~}{N},n_0)}=\frac{(s_1^{(n_0)}-s_{\stackrel{~}{N}}^{(n_0)})^2}{\sum _{n=1}^{\stackrel{~}{N}}(s_n^{(n_0)}-\langle s^{(n_0)}\rangle )^2}$$ (30)

as well as the contribution of the mismatch in the first derivative

$$\gamma _{\mathrm{slip}}^{(\stackrel{~}{N},n_0)}=\frac{[(s_2^{(n_0)}-s_1^{(n_0)})-(s_{\stackrel{~}{N}}^{(n_0)}-s_{\stackrel{~}{N}-1}^{(n_0)})]^2}{\sum _{n=1}^{\stackrel{~}{N}}(s_n^{(n_0)}-\langle s^{(n_0)}\rangle )^2}$$ (31)

and the weighted average

$$\gamma ^{(\stackrel{~}{N},n_0)}=w\gamma _{\mathrm{jump}}^{(\stackrel{~}{N},n_0)}+(1-w)\gamma _{\mathrm{slip}}^{(\stackrel{~}{N},n_0)}.$$ (32)

The weight $`w`$ can be selected by the user and is set to 0.5 by default. For multivariate data with $`M`$ channels, $`(1/M)\sum _{m=1}^{M}\gamma _m^{(\stackrel{~}{N},n_0)}`$ is used. Now the program goes through a sequence of decreasing $`\stackrel{~}{N}=2^i3^j5^k`$, $`i,j,k\in 𝒩`$, and for each $`\stackrel{~}{N}`$ determines $`n_0^{\ast }`$ such that $`\gamma ^{(\stackrel{~}{N},n_0^{\ast })}`$ is minimal. The values of $`\stackrel{~}{N}`$, $`n_0^{\ast }`$, and $`\gamma ^{(\stackrel{~}{N},n_0^{\ast })}`$ are printed whenever $`\gamma `$ has decreased. One can thus easily find a sub-sequence that achieves negligible end point mismatch with the minimal loss of data.

### A.3 Annealed surrogates

For cases where the iterative scheme does not reach the necessary accuracy, or whenever a more general null hypothesis is considered, the TISEAN package offers an implementation of the constrained randomisation algorithm using a cost function minimised by simulated annealing, as introduced in Ref. and described in Sec. 5. Since one of the main advantages of the approach is its flexibility, the implementation more resembles a toolbox than a single program. The main driving routine randomize takes care of the data input and output and operates the simulated annealing procedure. It must be linked together with modules that implement a cooling schedule, a cost function, and a permutation scheme. Within TISEAN, several choices for each of these are already implemented but it is relatively easy to add individual variants or completely different cost functions, cooling or permutation schemes. With the development structure provided, the final executables will then have names reflecting the components linked together, in the form randomize\_$`A`$\_$`B`$\_$`C`$, where $`A`$ is a cost function module, $`B`$ a cooling scheme, and $`C`$ a permutation scheme. Currently, two permutation schemes are implemented. In general, one will use a scheme random that selects a pair at random. It is, however, possible to specify a list of points to be excluded from the permutations. This is useful when the time series contains artefacts or some data points are missing and have been replaced by dummy values. It is planned to add a temperature-sensitive scheme that selects pairs close in magnitude at low temperatures. For certain cost functions (e.g. the spike train spectrum), an update can only be carried out efficiently if two consecutive points are exchanged. This is implemented in an alternative permutation scheme event. The only cooling scheme supported in the present version of TISEAN (2.0) is exponential cooling (exp). This means that whenever a certain condition is reached, the temperature is multiplied by a factor $`\alpha <1`$.
Apart from $`\alpha `$ and the initial temperature $`T_0`$, two important parameters control the cooling schedule. Cooling is performed either if a maximal total number of trials $`S_{\mathrm{total}}`$ is exceeded, or if a maximal number $`S_{\mathrm{succ}}`$ of trials has been successful since the last cooling. Finally, a minimal number of successes $`S_{\mathrm{min}}`$ can be specified, below which the procedure is considered to be “stuck”. All these parameters can be specified explicitly. However, it is sometimes very difficult to derive reasonable values except by trial and error. Slow cooling is necessary if the desired accuracy of the constraint is high. It seems reasonable to increase $`S_{\mathrm{succ}}`$ and $`S_{\mathrm{total}}`$ with the system size, but also with the number of constraints incorporated in the cost function. It can be convenient to use an automatic scheme that starts with fast parameter settings and re-starts the procedure with slower settings whenever it gets stuck, until a desired accuracy is reached. The initial temperature can be selected automatically using the following algorithm. Start with an arbitrary small initial temperature. Let the system evolve for $`S_{\mathrm{total}}`$ steps (or $`S_{\mathrm{succ}}`$ successes). If fewer than 2/3 of the trials were successful, increase the initial temperature by a factor of ten to “melt” the system. This procedure is repeated until more than 2/3 successes are reached. This ensures that we start with a temperature that is high enough to leave all false minima. If the automatic scheme gets stuck (the low temperature allows too few changes to take place), it re-starts at the determined melting temperature. At the same time, the cooling rate is decreased by $`\alpha \to \sqrt{\alpha }`$, and $`S_{\mathrm{total}}\to \sqrt{2}S_{\mathrm{total}}`$. We suggest creating one surrogate with the automatic scheme and then using the final values of $`T_0`$, $`\alpha `$ and $`S_{\mathrm{total}}`$ for subsequent runs. Of course, other more sophisticated cooling schemes may be suitable depending on the specific situation. The reader is referred to the standard literature . Several cost functions are currently implemented in TISEAN. Each of them is of the general form (22) and the constraints can be matched in either the $`L^1`$, $`L^2`$, or the $`L^{\mathrm{\infty }}`$ (or maximum) norm. In the $`L^1`$ and $`L^2`$ norms, autocorrelations are weighted by $`w_\tau =1/\tau `$ and frequencies by $`w_\omega =1/\omega `$. Autocorrelations (auto, or a periodic version autop) are the most common constraints available. Apart from the type of average, one has to specify the maximal lag $`\tau _{\mathrm{max}}`$ (see e.g. Eq.(23)). This can save a substantial fraction of the computation time if only short range correlations are present. For each update, only $`O(\tau _{\mathrm{max}})`$ terms have to be updated. For unevenly sampled data (see Sec. 6.3), the cost function uneven implements binned autocorrelations as defined by Eq.(26). The update of the histogram at each annealing step takes a number of steps proportional to the number of bins. The user has to specify the bin size $`\mathrm{\Delta }`$ and the total lag time covered contiguously by the bins. For surrogate spike trains, either the spike train periodogram Eq.(28) or binned correlations Eq.(29) can be used. In the former case, the cost function is coded in spikespec. The user has to give the total number of frequencies and the frequency resolution. Internally, the event times $`t_n`$ are used.
A computationally feasible update is only possible if two consecutive intervals $`t_n-t_{n-1}`$ and $`t_{n+1}-t_n`$ are exchanged, that is, by $`t_n\to t_{n-1}+t_{n+1}-t_n`$ (done by the permutation scheme event). As a consequence, coverage of permutation space is quite inefficient. With binned autocorrelations spikeauto, intervals are kept internally and any two intervals may be swapped, using the standard permutation scheme random. The documentation distributed with the TISEAN package describes how to add further cost functions. Essentially, one needs to provide cost function specific option parsing and input/output functions, a module that computes the full cost function, and one that performs an update upon permutation. The latter should be coded very carefully. First, it is the single spot that uses most of the computation time, and second, it must keep the cost function consistent for all possible permutations. It is advisable to make extensive tests against freshly computed cost functions before entering production. In future releases of TISEAN, it is planned to include routines for cross-correlations in multivariate data, multivariate spike trains, and mixed signals. We hope that users take the present modular implementation as a starting point for the implementation of other null hypotheses.
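To make the interplay of cost function, cooling schedule, and permutation scheme concrete, here is a deliberately stripped-down sketch of such an annealing loop (Metropolis acceptance with exponential cooling). It mirrors the structure of randomize only loosely; all names and default parameters are ours, and the cost is recomputed in full, whereas an efficient module would update it after each swap:

```python
import numpy as np

def anneal_surrogate(data, cost, t0=1e-3, alpha=0.9,
                     s_total=20000, s_succ=2000, s_min=5, e_goal=1e-6):
    """Constrained randomisation by simulated annealing (cf. Sec. 5)."""
    rng = np.random.default_rng()
    surr = rng.permutation(data)        # scrambling preserves the distribution
    e, temp = cost(surr), t0
    while e > e_goal:
        successes = 0
        for _ in range(s_total):
            i, j = rng.integers(len(surr), size=2)
            surr[i], surr[j] = surr[j], surr[i]            # trial swap
            e_new = cost(surr)
            if e_new < e or rng.random() < np.exp(-(e_new - e) / temp):
                e, successes = e_new, successes + 1        # accept
            else:
                surr[i], surr[j] = surr[j], surr[i]        # undo the swap
            if successes >= s_succ:
                break
        if successes < s_min:   # "stuck": a full scheme would re-melt here
            break
        temp *= alpha           # exponential cooling
    return surr, e
```

A production version would add the automatic melting and re-starting logic described above.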
# HH 46/47: Also a parsec scale flow

Based on observations collected at the European Southern Observatory, Chile (ESO No. 62.I-0848)

## 1 Introduction

Observations over the past few years with large sensitive imaging cameras have shown that outflows and jets from young stars often extend over several parsecs, much further than previously suspected. Examples of these giant flows include well-known jets such as HH 34, HH 24, HH 111, HH 1/2, and the flow from T Tau (e.g. Bally & Devine 1994, 1997; Devine et al. 1997; Eislöffel & Mundt 1997; Reipurth et al. 1997, 1998b). The discovery that such outflows can be so large has significant implications for our understanding of their lifetimes and the cumulative impact they may have on their natal molecular clouds. HH 46/47 is a well-studied prototypical Herbig-Haro jet located in the Gum Nebula, driven by a young source (HH 47 IRS or IRAS 08242$-$5050; Graham & Elias 1983; Emerson et al. 1984; Cohen et al. 1984; Graham & Heyer 1989; Reipurth & Heathcote 1991) embedded in a southern Bok globule (GDC 1 = ESO 210$-$6A; Bok 1978; Reipurth 1983). The two brightest components of this outflow system, HH 46 and HH 47, were discovered by Schwartz (1977), and shown by Dopita et al. (1982; see also Graham & Elias 1983) to be shock-excited HH objects in a collimated bipolar flow. The blue-shifted lobe of the flow, with main components HH 46, HH 47 A, and HH 47 D, is prominent in optical emission as it escapes the globule, whereas the red-shifted lobe is mostly hidden from view as it runs through the globule, only to become visible again in the guise of HH 47 C as it leaves the globule; H<sub>2</sub> emission from the jet as it traverses the globule can however be seen in the near-infrared (Eislöffel et al. 1994). From HH 47 D in the north-east to HH 47 C in the south-west, the flow was thought to extend over 0.57 pc in projection at a distance of $`\sim `$450 pc. Ground-based optical imaging has been presented by Bok (1978), Reipurth & Heathcote (1991), Hartigan et al. (1993), Eislöffel & Mundt (1994), and Morse et al. (1994) among others, while HST images have been presented by Heathcote et al. (1996). A characteristic feature of the HH 46/47 jet is the apparent change in flow direction with time, from small-scale wiggles seen in the HST images (Heathcote et al. 1996) to a more gradual turning towards smaller position angles (Reipurth & Heathcote 1991). There is also an obvious misalignment of $`\sim `$17$`\mathrm{°}`$ between the jet and the counterjet seen near the driving source (Reipurth & Heathcote 1991). Morse et al. (1994) found that the outermost bowshock, HH 47 D, is running into previously accelerated material, and they presented an image showing an H$`\alpha `$ feature 50 arcsec ahead of HH 47 D, which they suggested showed that HH 46/47 extends over a greater length than previously known. In this paper, we report the discovery of two groups of Herbig-Haro type objects at distances of approximately 10 arcmin (1.3 pc) to the north-east and south-west of HH 47 IRS and, as a result, we argue that the HH 46/47 flow is indeed much larger than previously suspected, extending over several parsecs at least.

## 2 Observations and data reduction

The images presented here were taken with the Wide Field Imager on the MPG/ESO 2.2 m telescope on La Silla on Jan. 21 1999.
The camera uses a mosaic of 8 CCDs, each a 2k$`\times `$4k pixel array, providing a total detector size of 8k$`\times `$8k pixels with a field-of-view of $`\sim `$0.54$`\times `$0.54 degrees at an image scale of 0.25 arcsec/pixel. Images were taken through narrow-band filters at 658 nm (H$`\alpha `$) and 676 nm (\[S ii\]), and a medium-passband filter at 753 nm, which was used to measure continuum emission. Integration times were 20 min (H$`\alpha `$), 30 min (\[S ii\]), and 12.5 min (continuum), each split into 5 dithered exposures to compensate for the gaps between the CCDs, bad pixels, and cosmics. Data reduction was standard, starting with bias frame subtraction and division by a flat field frame (a combination of dome and sky flats); bad pixels were masked and excluded from further processing. Then mosaics were made from the single exposures as follows. First, a “positional reference frame” was constructed from the continuum exposures by accurately registering and combining the data for each chip separately, resulting in one average image for each chip. Large enough dithering steps had been chosen such that these images overlapped, allowing an accurate determination of the position of each chip with respect to the others. The averaged images for each of the 8 chips were then registered, rotated, shifted, and finally averaged into one large master image which served as positional reference frame for registering the other exposures. All individual exposures through all three filters were then registered to this positional reference frame. The final mosaics were then constructed by taking the median of the rotated and shifted single exposures to reject cosmic ray events. Further processing of the H$`\alpha `$ image was applied to show the newly discovered HH objects more clearly. The H$`\alpha `$ frame was lightly gaussian smoothed so that the PSF matched that of the continuum frame; the latter was then appropriately scaled and subtracted from the H$`\alpha `$ image. Since the wavelength difference between the two filters was quite large, different colours of stars led to many small residual images, which were removed by hand. Finally, a large-scale NW-SE gradient in the H$`\alpha `$ emission was subtracted from the image to enhance the contrast.

## 3 Results

Figure 1 shows a 21′ $`\times `$ 14′ ($`\sim `$2.7 pc $`\times `$ 1.8 pc) section of the H$`\alpha `$ image. The position of the young star driving the HH 46/47 flow (the only protostellar object known in the area) is marked by a cross. It is seen to be embedded in a globule (GDC 1) which is illuminated and evaporated from the north-west by the exciting stars of the Gum nebula H ii region, $`\zeta `$ Pup and $`\gamma ^2`$ Vel, several degrees (20-40 pc) to the north-west. In the south-eastern part of the image, a number of apparently connected illuminated globules, rims, and fingerlike structures can be seen: these may be the remains of a larger cloud that has been evaporated from the north-west. The edges of these globules are outlined by rims of H$`\alpha `$ emission. Smooth extended H$`\alpha `$ emission is seen to fill the spaces between the globules and to the north-west. The HH 46/47 jet system consists of much more compact, knotty, and filamentary H$`\alpha `$ structures. Superimposed on the smooth background, a number of additional compact, knotty patches of H$`\alpha `$ emission can also be seen in the north-eastern and south-western corners of the image (marked with arrows).
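The continuum- and gradient-subtracted frame discussed next (Fig. 2) was produced along the lines of Sect. 2. Schematically — and with the smoothing width and scale factor as purely illustrative placeholders that would have to be determined from the frames themselves — the subtraction step amounts to:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def continuum_subtract(halpha, continuum, sigma_match, scale):
    """PSF-match the H-alpha mosaic to the continuum mosaic by light Gaussian
    smoothing, subtract the scaled continuum, then remove a large-scale
    planar gradient by least squares (sigma_match, scale: data-dependent)."""
    line = gaussian_filter(halpha, sigma_match) - scale * continuum
    ny, nx = line.shape
    y, x = np.mgrid[0:ny, 0:nx]
    basis = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeff, *_ = np.linalg.lstsq(basis, line.ravel(), rcond=None)
    return line - (basis @ coeff).reshape(ny, nx)
```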
Figure 2 shows the same field with the continuum emission and large-scale H$`\alpha `$ gradient subtracted in order to show the compact H$`\alpha `$ features better. The main features are the rims around the globules, whose structure suggests illumination from the north-west, with bright emission at their north-eastern tips and streamers extending to the south-east. In addition to GDC 1 harbouring HH 47 IRS and a comparably-sized second globule, GDC 2, at the eastern edge of the image (Reipurth 1983), many smaller and less pronounced features are seen, including a very narrow finger pointing to the north-west. The features in the north-east and south-west corners marked by arrows in Fig. 1 also stand out clearly now. We argue that these are Herbig-Haro type structures as follows:

* The knots are emission line features, detected in H$`\alpha `$ and in \[S ii\] (albeit only faintly in the latter line), typical of Herbig-Haro objects;
* They display a much more compact structure than the emission from the background H ii region, suggesting emission from shocked gas rather than from the diffuse ionized gas;
* Their morphology differs from that of the rims around the globules: the latter are indicative of thin layers of gas evaporating from the denser globules, wrapping around the globules, while the candidate HH objects show up as compact blobs unassociated with any globules; the north-eastern group forms a bow-like structure heading north-east, whereas the rims around the globules head north-west;
* The morphology of the north-eastern group is reminiscent of a large fragmented bow shock, indicating a working surface in a flow heading north-east, pointing back to somewhere in the vicinity of HH 47 IRS;
* The south-western group is at the same distance from HH 47 IRS as the north-eastern group, suggesting that these may be matching shocks in a single flow, created by the same ejection event.

We will henceforth refer to the north-eastern and south-western groups of knots as HH 47 NE and HH 47 SW respectively. The approximate location of HH 47 NE is 8<sup>h</sup> 26<sup>m</sup> 40<sup>s</sup>, $`-`$50$`\mathrm{°}`$ 58<sup>′</sup> 15<sup>′′</sup> (J2000), some 9.9 arcmin ($`\sim `$1.3 pc) from HH 47 IRS at a position angle of $`\sim `$74$`\mathrm{°}`$. HH 47 SW is at roughly 8<sup>h</sup> 24<sup>m</sup> 55<sup>s</sup>, $`-`$51$`\mathrm{°}`$ 07<sup>′</sup> 00<sup>′′</sup> (J2000), 10.5 arcmin from HH 47 IRS at a position angle of $`\sim `$50$`\mathrm{°}`$. The discovery of these HH objects suggests that the previously known HH 46/47 jet is only the innermost portion of a much larger flow, as suggested by Morse et al. (1994) based on their analysis of the HH 47 D bow shock. The flow is now seen to extend over at least $`\sim `$2.6 pc in projection, or $`\sim `$3 pc when deprojected using an orientation of $`\sim `$30$`\mathrm{°}`$ out of the plane of the sky (see Eislöffel & Mundt 1994; however, we do not know if and how the flow direction changes with respect to the plane of the sky). Thus HH 46/47 can be added to the growing class of parsec-scale outflows from young low-mass stars. Assuming a tangential flow velocity of about 150 km/s (see, e.g. Reipurth & Heathcote 1991; Schwartz et al. 1984; Eislöffel & Mundt 1994; Micono et al. 1998), the dynamical timescale for the new outer bow shocks HH 47 NE and SW is about 9000 years. HH 47 NE is displaced from the current jet axis (as defined by the north-eastern lobe of HH 46/47 with respect to HH 47 IRS, at a position angle of $`\sim `$54$`\mathrm{°}`$) by about 20$`\mathrm{°}`$.
However, Reipurth & Heathcote (1991) found that, in addition to short timescale kinks and wiggles, the north-eastern lobe of the HH 46/47 jet appears to have been gradually changing its flow direction towards smaller position angles, by about 3$`\mathrm{°}`$ from HH 47 D to HH 47 B. The time between the ejection of these features is estimated to be about 1300 years; extrapolating this over the 9000 year dynamical timescale of HH 47 NE yields a change in flow direction of about 21$`\mathrm{°}`$. Thus, the observed displacement of HH 47 NE from the current flow axis is consistent with a steady change of flow direction over the last 9000 years at the rate currently observed for the inner part of the jet. While HH 47 SW is only a few degrees off the current axis of the flow as defined by the north-eastern lobe of the inner HH 46/47 jet, it lies some 8$`\mathrm{°}`$ off the axis of the south-western lobe (PA $`\sim `$58$`\mathrm{°}`$), as defined by HH 47 C and the counterjet seen near the source in \[S ii\] and H<sub>2</sub>. This also indicates a gradual change in flow direction in the south-western lobe, albeit smaller than seen in the north-eastern lobe. The different position angles of HH 47 NE and SW also indicate that the flow directions of the north-eastern and south-western lobes were not coincident at the time HH 47 NE and SW were ejected. This is consistent with the observation that the previously known inner jet and counterjet are not aligned at their origin (Reipurth & Heathcote 1991). The newly discovered binary nature of the outflow source (Reipurth, pers. comm.) may hold the key to understanding the wiggles, bends, and misalignment of the HH 46/47 jet and counterjet. HH 47 NE and SW are rather faint in the \[S ii\] filter, making it difficult to constrain their excitation mechanism. Their surface brightness seems to be of the same order as that of the leading bow in HH 47 D, while HH 47 D is brighter in H$`\alpha `$ than HH 47 NE and SW. This implies that the \[S ii\]:H$`\alpha `$ line ratio is greater for HH 47 NE and SW than for HH 47 D, suggesting that, like HH 47 D, HH 47 NE and SW are excited by shocks rather than by external ionization, consistent with the bow shock appearance of HH 47 NE. However, the signal-to-noise is rather poor in both images and there are uncertainties in the transmission curve of the \[S ii\] filter used in the Wide Field Imager. Thus we cannot yet exclude the possibility that part of the excitation is due to external irradiation, as recently found for some jets in Orion (Reipurth et al. 1998a). This would not be an unreasonable finding, since HH 47 NE and SW are located in the Gum Nebula H ii region and thus very likely exposed to the ionizing radiation of stars including $`\zeta `$ Pup and $`\gamma ^2`$ Vel responsible for exciting the H ii region. Clearly, spectroscopic observations are required to address the excitation mechanism of HH 47 NE and SW directly. In summary, we have found two groups of Herbig-Haro type features, HH 47 NE and HH 47 SW, which appear to delineate the flow driven by HH 47 IRS over a significantly greater length than previously suspected ($`\sim `$3 pc), increasing the dynamical age of the system to $`\sim `$9000 years. The positions of HH 47 NE and SW with respect to the driving source and the inner jet confirm that there have been long-term secular changes in flow direction.

###### Acknowledgements.

Thanks are due to the ESO staff (in particular Thomas Augusteijn, Dietrich Baade, Pablo Prado, Felipe Sanchez) for their support during the observations.
This work was supported under *Deutsche Forschungsgemeinschaft* project number Zi 242/9-1.
# Two-state shear diagrams for complex fluids in shear flow

## 1 Introduction

There is growing interest in the flow behavior of complex fluids, including worm-like micellar surfactant solutions , lamellar and “onion” surfactant systems , and liquid crystals . In these and other systems the microstructure is altered by flow such that a bulk transition, reminiscent of an equilibrium phase transition, can occur. Signatures of this transition are kinks, plateaus, or non-monotonic behavior in the measured “flow curve”, that is, the relationship between applied shear stress and mean strain rate; structural changes observable by X-ray , neutron , or light scattering; explicit measurements of macroscopically discontinuous flow profiles ; and visual confirmation of phase separation and coexistence . Despite the surfeit of experiments, theories have been limited to a few systems (micelles and liquid crystals ), and few have attempted to calculate an entire phase diagram for a complex fluid solution in flow . In a remarkable paper, Schmitt *et al.* classified the instabilities that can occur in complex fluid solutions, and clarified the relationship between the nature of the instability (primarily in either the concentration, “spinodal”, or the strain rate, “mechanical”), the shape of the flow curves, and the orientation of the interface that initially develops during the instability . However, as noted in , the orientation of an interface in an initial instability may or may not be relevant to the orientation of the macroscopic phase separated state. In this work I outline the different macroscopic phase diagrams that can occur in complex fluid solutions in planar shear flow and describe how phase diagrams determine “flow curves”, the relation between applied shear stress and measured mean strain rate. No attempt is made to pose or solve any specific dynamical models (see ), but rather to explore the consequences of possible phase diagrams and provide a phenomenological framework within which to understand the rapidly growing body of rheological experiments.

## 2 Phenomenology of Phase Coexistence in Shear Flow

Calculations of phase behavior require first determining the relevant microstructure of the quiescent and flow-induced phases, and deriving equations of motion for the appropriate variables (momentum, mass concentration $`\varphi `$, and variables such as liquid crystalline or crystalline order, or aggregate shape and size distributions). These equations of motion determine the steady state flow curves corresponding to different microstructures. For example, fig. 1 shows a hypothetical shear-thinning fluid I which can be sheared into either a less viscous fluid S<sub>1</sub> or a gel-like state S<sub>2</sub>. For this particular fluid the shear-induced phases are stabilized only in finite flow; systems near equilibrium phase transitions such as liquid crystals may have multiple stable flow curves in the limit of zero flow , while other systems such as dilute wormlike micelles and “onions” apparently have flow branches which are only stabilized in finite flow. The task of theory, and indeed of experiments, is to understand how material on different flow curves may coexist, and what controls the stability of one phase with respect to another. At coexistence a fluid partitions into phases with, in principle, different concentrations. In the fluid of fig.
the I and S<sub>1</sub> states could coexist at the same shear stress (which would require the interface between phases to lie in the velocity-vorticity plane); as could the S<sub>1</sub> and S<sub>2</sub> states; the I and S<sub>2</sub> phases, by contrast, could not coexist at a common shear stress. Phase coexistence among all three possible pairs of states is conceivable at a common strain rate (requiring an interface lying in the velocity–velocity-gradient plane). Phase diagrams may be constructed by determining the states on distinct stable flow curves for which the driving force for particle exchange vanishes, and for which a mechanically stable stationary interfacial solution between phases can be found. This procedure has been developed for dynamical models which incorporate both smooth and sharp interfaces. In the former case gradient terms are included in the equations of motion, while in the latter case a mechanical condition on interface stability is required. Complete phase diagrams have been calculated for a model system of rigid rod suspensions, for both common stress and common strain rate geometries . Unfortunately, we still lack methods for choosing from several candidate possibilities for phase coexistence. To begin, consider a complex fluid solution which possesses two distinct phases under shear flow: we denote these I and S (for the example fluid of fig. 1 these could be any of the I, S<sub>1</sub> and S<sub>2</sub> states). We assume this fluid exhibits steady-state macroscopic phase coexistence, as observed experimentally, and hence has a phase diagram which may, in principle, be calculated from the relevant dynamical equations of motion. For simplicity we ignore interesting complications associated with secondary-flow or dynamical instabilities, which can induce non-stationary oscillating or chaotic steady states; and we ignore any effects of curved geometries. Given such a fluid, there are two possible geometries for phase coexistence: (A) common stress phase separation, for which phases coexist in the flow gradient direction, and (B) common strain rate phase separation, for which phases coexist in the vorticity direction. For each case, flow can induce a transition to either a less viscous phase (A1, B1) or a more viscous phase (A2, B2). These geometries and flow curves are shown in fig. 2. To understand the concentration dependence of these flow curves we must examine the entire phase diagrams: some possibilities are shown in fig. 3. Upon increasing the concentration the S phase may be stabilized at either weaker flows (A1, B1, A2, *etc.*) or stronger flows (A1’, B1’, A2’, *etc.*). ## 3 Common Stress Phase Separation We first consider coexistence at a common stress (A1, A2, A1’, A2’), for which the mean strain rate $`\overline{\dot{\gamma }}`$ and mean concentration $`\overline{\varphi }`$ at stress $`\sigma `$ are partitioned according to $`\overline{\dot{\gamma }}`$ $`=`$ $`\alpha \dot{\gamma }_I(\varphi _I)+(1-\alpha )\dot{\gamma }_S(\varphi _S),`$ (1) $`\overline{\varphi }`$ $`=`$ $`\alpha \varphi _I+(1-\alpha )\varphi _S,`$ (2) with $`\alpha `$ the fraction of material in the I phase. Phase coexistence occurs in a region in the $`\sigma \varphi `$ plane, with horizontal tie lines connecting the coexisting points $`(\varphi _I,\dot{\gamma }_I)`$ and $`(\varphi _S,\dot{\gamma }_S)`$. This may also be represented in the $`\dot{\gamma }\varphi `$ plane.
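As a minimal numerical illustration of eqs. (1) and (2), the following sketch applies the lever rule to reconstruct the state of a banded system at a common stress. The constitutive branches and all parameter values here are invented for illustration only; in practice they would come from the underlying dynamical model or from fits to measured flow curves.

```python
import numpy as np

# Invented constitutive branches: strain rate as a function of
# concentration phi on the I and S branches at a common stress sigma.
def gammadot_I(phi, sigma):
    return sigma / (1.0 + 2.0 * phi)      # more viscous quiescent branch

def gammadot_S(phi, sigma):
    return sigma / (0.2 + 0.5 * phi)      # less viscous shear-induced branch

# Assumed binodal compositions at this stress.
phi_I, phi_S, phi_bar, sigma = 0.30, 0.10, 0.25, 1.0

# Eq. (2): the lever rule fixes the fraction alpha of I phase.
alpha = (phi_bar - phi_S) / (phi_I - phi_S)

# Eq. (1): mean strain rate of the two-phase (banded) state.
gdot_bar = alpha * gammadot_I(phi_I, sigma) + (1 - alpha) * gammadot_S(phi_S, sigma)
print(f"alpha = {alpha:.2f}, mean strain rate = {gdot_bar:.3f}")
```

Sweeping $`\sigma `$ through the biphasic window and repeating this construction traces out the composite flow curve discussed next.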
The lines $`\{\varphi _I(\sigma ),\varphi _S(\sigma ),\dot{\gamma }_I(\varphi _I(\sigma )),\dot{\gamma }_S(\varphi _S(\sigma ))\}`$ bound the phase coexistence regions. The slope of tie lines in the $`\dot{\gamma }\varphi `$ plane reflects the compositions of the two phases; vertical tie lines imply equal concentrations, while a finite slope implies different concentrations. The flow curves $`\sigma (\dot{\gamma },\overline{\varphi })`$ can be calculated from the phase diagrams and eqs. 1 and 2 by applying the lever rule. They are non-analytic at the stresses and strain rates which bound the biphasic region, and would be obtained in experiment at a given mean concentration $`\overline{\varphi }`$. In the biphasic region the stress increases by crossing successive tie lines in the $`\dot{\gamma }\varphi `$ plane, and varies according to the tie line spacing and splay . From eqs. 1 and 2 one can calculate the shape of the stress “plateau” : $$\frac{d\sigma }{d\overline{\dot{\gamma }}}|_{\overline{\varphi }}=\left[\frac{\alpha }{\eta _I}+\frac{1-\alpha }{\eta _S}-m(\sigma ,\overline{\varphi })\left\{\frac{1-\alpha }{\dot{\gamma }_S^{}\eta _S}+\frac{\alpha }{\dot{\gamma }_I^{}\eta _I}\right\}\right]^1,$$ (3) where $`m(\sigma ,\overline{\varphi })`$ is the slope of the tie line, $`\eta _k=\partial \sigma /\partial \dot{\gamma }`$ is the local viscosity of the $`k`$th branch, and $`\dot{\gamma }_k^{}=\partial \dot{\gamma }_k/\partial \varphi `$. Experiments on worm-like micelles have revealed that some systems can be “superstressed” to a metastable state, with the stress entering the biphasic region . For an A1 (or A1’) shear-thinning transition the strain rate increases as the stress is increased through the two phase region. The stress through the two-phase region increases less for mean concentrations $`\overline{\varphi }`$ for which the biphasic window is narrow (*e.g.* $`\overline{\varphi }=\varphi _1`$) or the tie lines are steep in the $`\dot{\gamma }\varphi `$ plane, and deviates more from constant for a wider biphasic region (*e.g.* $`\overline{\varphi }=\varphi _2`$) with flatter tie lines. Hence, a signature of a substantial concentration difference $`\delta \varphi \equiv \varphi _I-\varphi _S`$ between coexisting phases is a large increase in stress through the biphasic region. This is consistent with calculations on a model for rigid rods in shear flow , and measurements on shear-thinning wormlike micelles . In dilute micelles $`\delta \varphi `$ is typically negligible and the stress plateau is nearly flat; while $`\delta \varphi `$ increases for concentrated solutions near an underlying isotropic-nematic transition and the stress plateau acquires a slope. For the A1 fluid the S phase can have a larger ($`m(\sigma ,\overline{\varphi })>0`$) or smaller ($`m(\sigma ,\overline{\varphi })<0`$) strain rate than the I phase. In the latter case the S phase has a larger effective viscosity, and for a small enough $`\delta \varphi `$ the fluid crosses over to a characteristic shear-thickening fluid, A2. In this case the strain rate actually decreases as the stress is increased through the two phase region. That is, the system remains in the I phase below a critical strain rate at which a band of more viscous material S develops whose small strain rate *reduces* the measured strain rate. Hence the system enters the biphasic region in the $`\dot{\gamma }\varphi `$ plane at the high strain rate, and traverses from top to bottom.
Upon increasing the stress the system converts into the S phase with, in general, changing I and S concentrations, until the I phase disappears at the lower boundary of the two-phase envelope in the $`\dot{\gamma }\varphi `$ plane. For higher stresses the system follows the constitutive branch for the S phase and traces a vertical path in the $`\dot{\gamma }\varphi `$ plane. Hence the flow curve $`\sigma (\dot{\gamma })`$ has a distinct S shape. As with flow A1, the shape of the flow curve in the biphasic region depends on the slope of the tie lines (*i.e.* $`\delta \varphi `$) in the $`\dot{\gamma }\varphi `$ plane: vertical tie lines ($`\delta \varphi =0`$) imply a vertical jump in $`\sigma (\dot{\gamma })`$, while finite $`\delta \varphi `$ and flatter tie lines imply a gentler slope for $`\sigma (\dot{\gamma })`$. Flow A2’ has been seen in shear-thickening wormlike micelles , and calculated using a phenomenological model . Flow A2 (or A2’) has been seen in surfactant onion crystals under shear, although the band geometry has not been verified . We have drawn (A1, A2, A1’, A2’) with finite biphasic regions at zero stress, corresponding to a perturbation of an existing phase transition. However, fluids such as the example fluid in fig. 1 would have phase coexistence only above a finite stress. The construction of these phase diagrams implies that tie lines move to higher strain rates with increasing stress (“conventional”). Although intuitive, the reverse (“unconventional”) does not violate any physical laws. For example, a version of A2’ with unconventional positive-sloped tie lines would imply a $`\sigma \varphi `$ biphasic region moving to smaller $`\varphi `$ with increasing $`\sigma `$; a version of A2’ with negative-sloped tie lines, however, yields a nonsensical phase diagram. ## 4 Common Strain Rate Phase Separation For coexistence at a common strain rate the shear stress is partitioned according to $$\overline{\sigma }=\alpha \sigma _I(\varphi _I)+(1-\alpha )\sigma _S(\varphi _S),$$ (4) and the two main classes of flows are shown as B1 and B2. We do not show the analogous diagrams B1’ and B2’, for which the coexistence strain rate increases with increasing concentration. For flow B2 the stress increases through the two phase region and the transition is to either a thicker or (for large enough $`\delta \varphi `$, as with flow A1) thinner phase. The shape of the stress through the two phase region is given, from eqs. 2 and 4, by $$\frac{d\overline{\sigma }}{d\dot{\gamma }}|_{\overline{\varphi }}=\eta _I\alpha +\eta _S(1-\alpha )-m(\dot{\gamma },\overline{\varphi })\left\{\frac{\eta _S(1-\alpha )}{\sigma _S^{}}+\frac{\eta _I\alpha }{\sigma _I^{}}\right\},$$ (5) where $`m(\dot{\gamma },\overline{\varphi })`$ is the slope of the tie line and $`\sigma _k^{}=\partial \sigma _k/\partial \varphi `$. In the limit of no concentration difference ($`\delta \varphi =0`$ or $`m(\dot{\gamma },\overline{\varphi })=\mathrm{\infty }`$), $`\sigma (\dot{\gamma })`$ is vertical through the two phase region. Flow B1 is a shear thinning transition, and the stress decreases through the two-phase region. Banding geometries and flow curves similar to B1 were reported in an onion system , although this may not have been steady state. B2 has been reported in an onion system . ## 5 Applications The shear diagrams presented do not exhaust the possibilities; these flow behaviors can be combined smoothly in many ways. We present two possibilities in fig. 4.
Fluid C1 phase separates at a common stress, with a concentration difference $`\delta \varphi `$ that widens with increasing concentration, so that the behavior crosses over from shear thickening (A2) to shear thinning (A1) as the concentration increases. Fluid C2 phase separates at a common strain rate, with a coexistence width $`\delta \varphi `$ that narrows with increasing $`\varphi `$, combining B1’ and B2’ behavior. Typical experiments extract a series of steady state flow curves for different concentrations $`\{\varphi _i\}`$. Consider coexistence at common stress. The flow curves have kinks at the boundaries of the biphasic region, $`\{(\sigma _{Ii},\dot{\gamma }_{Ii},\varphi _i)\}`$ and $`\{(\sigma _{Si},\dot{\gamma }_{Si},\varphi _i)\}`$, which should be corroborated optically or otherwise. The strain rates and concentrations of the coexisting states may be determined as follows. One first fits curves $`\sigma _I(\varphi ),\sigma _S(\varphi ),\dot{\gamma }_I(\varphi ),\dot{\gamma }_S(\varphi )`$ to the values extracted from the kinks. A horizontal tie line connecting $`(\varphi _I,\varphi _S)`$ at a given stress $`\sigma _{\ast }`$ may then be read off the biphasic boundaries, $`\sigma _I(\varphi _I)=\sigma _S(\varphi _S)=\sigma _{\ast }`$, in the $`\sigma \varphi `$ plane; and the corresponding tie line in the $`\dot{\gamma }\varphi `$ plane connects the points $`\dot{\gamma }_I(\varphi _I)`$ and $`\dot{\gamma }_S(\varphi _S)`$. In this way the complete shear diagrams and the characteristics of coexisting states may be constructed. As a check, the shape of $`\sigma (\dot{\gamma })`$ through the two-phase region may be computed by traversing the biphasic regions of the $`\dot{\gamma }\varphi `$ and $`\sigma \varphi `$ planes and using the lever rule to construct the mean strain rate via eq. (1). This analysis is necessary if the coexisting concentrations cannot be determined directly and, in the two phase region, the flow curve has a slope appreciably different from zero (common stress phase separation) or infinity (common strain rate phase separation). Returning to the hypothetical fluid of fig. 1, we expect common stress phase separations I-S<sub>1</sub> and S<sub>1</sub>-S<sub>2</sub> to be of classes A1 and A2 (or A1’ and A2’), respectively, with the former disappearing at higher stresses; and common strain rate phase separations I-S<sub>1</sub>, I-S<sub>2</sub>, and S<sub>1</sub>-S<sub>2</sub> to be of classes B1, B2, and B2 (or B1’, B2’, B2’), respectively. Since flow is necessary to stabilize the S<sub>1</sub> and S<sub>2</sub> phases, the biphasic window in this system would appear at a finite stress or strain rate. Note that the accessibility and even the stability of composite flow curves depends on the control variable of the rheometer. For example, composite flow curves with negative slope $`d\sigma /d\dot{\gamma }<0`$ (*e.g.* A2 and B1) could be mechanically unstable , although ref. accessed such a curve under controlled stress conditions. Finally, kinetic possibilities such as metastability and hysteresis are sure to enrich the relatively simple scenarios proposed above. \*** It is a pleasure to thank David Lu, Armand Ajdari, Ovidiu Radulescu, Jacqueline Goveas, and Tom McLeish for encouragement and enjoyable discussions.
# Focus Points and Naturalness in Supersymmetry ## I Introduction An understanding of electroweak symmetry breaking is currently one of the most important objectives in high energy particle physics. Renormalizability requires that electroweak symmetry be spontaneously broken; in the minimal standard model, this is realized by the condensation of the elementary Higgs field. In such a theory, however, the squared Higgs mass receives quadratically divergent radiative corrections. The Higgs mass, and with it the weak scale, is therefore expected to be of order the cut-off scale, which is typically identified with the grand unified theory (GUT) or Planck scale. The fact that the weak scale is much smaller than the cut-off scale requires a large fine-tuning and is therefore considered unnatural in the minimal standard model . Supersymmetry removes quadratic divergences and therefore provides a framework for naturally explaining the stability of the weak scale with respect to radiative corrections . However, the requirement of naturalness constrains supersymmetric models, as, in these models, the weak scale is generated when electroweak symmetry breaking is induced by (negative) squared mass parameters for the Higgs scalars. The weak scale is therefore related to these supersymmetry breaking parameters. It is hoped that an understanding of the mechanism of supersymmetry breaking will shed light on the origin of the weak scale. Even without this knowledge, though, it is clear that naturalness in supersymmetric theories requires that the supersymmetry breaking parameters in the Higgs potential be not too far above the weak scale. As naturalness implies upper bounds on supersymmetry breaking parameters and superpartner masses, its implications are obviously of great importance for supersymmetry searches. These implications depend on the assumed structure of supersymmetry breaking. With respect to the scalar masses, broadly speaking, three possibilities exist. In the first, all supersymmetry breaking scalar mass parameters are roughly of the same order; for example, they may be generated with comparable values at some high scale and remain comparable when evolved through renormalization group (RG) equations to the weak scale. Naturalness then demands that scalar Higgs, squark, and slepton masses all be near the weak scale. Such light particles are within the discovery reach of the Large Hadron Collider (LHC) and future lepton colliders. Another possibility is that a hierarchy exists between the various scalar masses. This hierarchy may be present at the scale at which supersymmetry breaking parameters are generated or may be generated dynamically through RG evolution . In either case, one finds that naturalness bounds on the first and second generation squarks and sleptons are much weaker than those for the third generation . First and second generation sfermions may then be much heavier than 1 TeV and far beyond the reach of near future colliders. However, top and bottom squarks, for example, are still constrained to have masses of order the weak scale, and should be discovered by the LHC. A third possibility, however, is that the RG trajectories of the Higgs mass parameters may meet at a “focus point” , where their values are independent of their ultraviolet boundary values.<sup>1</sup><sup>1</sup>1Focus points are not to be confused with the well-known phenomena of fixed and quasi-fixed points .
As we will see, when RG trajectories have a focus point behavior, they do not asymptotically approach a limit curve, but rather meet and then disperse. If this focus point is near the weak scale, the Higgs potential at the weak scale may be insensitive to the ultraviolet values of certain supersymmetry breaking parameters, including the scalar masses. In this case, naturalness, while constraining (unphysical) Higgs mass parameters, may lead to very weak upper bounds on the squark, slepton, and heavy Higgs boson masses, and these scalars may all be beyond the reach of near future colliders. The last possibility is the subject of this study. We will show that it is realized in a class of models that includes minimal supergravity. Minimal supergravity is at present probably the single most widely-studied framework for evaluating the potential of new experiments to probe physics beyond the standard model. We therefore concentrate on this model and explore the implications of focus points in minimal supergravity for naturalness and the superpartner spectrum. The organization of this paper is as follows. In Sec. II, we analyze the focus point behavior of the RG evolution of supersymmetry breaking parameters. In Sec. III, we discuss the implications of the up-type Higgs focus point for naturalness in minimal supergravity. In particular, we will see that multi-TeV scalar masses are consistent with naturalness. The implications of these results for superpartner searches are considered in Sec. IV. Finally, we conclude in Sec. V with a summary of our results and some philosophical comments concerning the concept of naturalness. ## II Focus Points In this section, we explore the phenomenon of focus points in the RG evolution of supersymmetry breaking parameters. We will see that, in certain circumstances, the supersymmetry breaking up-type Higgs mass has such a focus point at the weak scale, where it becomes insensitive to its boundary value at the high scale (for example, the GUT scale). Since the supersymmetry breaking masses for the Higgses are related to the weak scale, this fact has implications for naturalness, as we will discuss in Sec. III. We start by considering the RG behavior of supersymmetry breaking scalar masses. Denoting the mass of the $`i`$-th scalar field by $`m_i`$, the one-loop RG evolution of scalar masses is given schematically<sup>2</sup><sup>2</sup>2In Eqs. (1)–(3) and (5), we neglect positive $`𝒪(1)`$ coefficients for each term. by the inhomogeneous equations $`{\displaystyle \frac{dm_i^2}{d\mathrm{ln}Q}}\sim {\displaystyle \frac{1}{16\pi ^2}}\left[-g_a^2M_a^2+{\displaystyle \underset{j}{\sum }}y_j^2m_j^2+{\displaystyle \underset{j}{\sum }}y_j^2A_j^2\right],`$ (1) where $`g_a`$ and $`y_j`$ are gauge and Yukawa coupling constants, respectively, $`Q`$ is the renormalization scale, and the summation is over all chiral superfields coupled to the $`i`$-th chiral superfield through Yukawa interactions. The gaugino masses $`M_a`$ and supersymmetry breaking trilinear scalar couplings $`A_j`$ have RG equations $`{\displaystyle \frac{dM_a}{d\mathrm{ln}Q}}`$ $`\sim `$ $`{\displaystyle \frac{1}{16\pi ^2}}g_a^2M_a,`$ (2) $`{\displaystyle \frac{dA_i}{d\mathrm{ln}Q}}`$ $`\sim `$ $`{\displaystyle \frac{1}{16\pi ^2}}\left[g_a^2M_a+{\displaystyle \underset{j}{\sum }}y_j^2A_j\right].`$ (3) As one can see, the evolution of the $`m_i^2`$ parameters depends on the gaugino masses and $`A`$ parameters, as well as on the scalar masses themselves.
On the other hand, the gaugino masses and $`A`$ parameters evolve independently of the scalar masses.<sup>3</sup><sup>3</sup>3Note that this implies that if a hierarchy $`M_a,A_j\ll m_i`$ is generated at some high scale, for example, by an approximate $`R`$-symmetry, it will not be destabilized by RG evolution. This structure implies that, if $`m_i^2|_\mathrm{p}`$ is a particular solution to Eq. (1) with fixed values of the gaugino masses and $`A`$ parameters, then for arbitrary constant $`\xi `$, $`m_i^2(Q)=m_i^2|_\mathrm{p}(Q)+\xi \mathrm{\Delta }_i^2(Q)`$ (4) is also a solution if the $`\mathrm{\Delta }_i^2`$ obey the following linear and homogeneous equation: $`{\displaystyle \frac{d\mathrm{\Delta }_i^2}{d\mathrm{ln}Q}}\sim {\displaystyle \frac{1}{16\pi ^2}}{\displaystyle \underset{j}{\sum }}y_j^2\mathrm{\Delta }_j^2.`$ (5) The evolution of the $`\mathrm{\Delta }_i^2`$ depends only on the $`\mathrm{\Delta }_i^2`$, and the $`\mathrm{\Delta }_i^2`$ are themselves solutions to the RG equations in the limit $`M_a,A_j\to 0`$. With a given boundary condition, $`\mathrm{\Delta }_i^2`$ may vanish for some $`i`$ at some renormalization scale $`Q_\mathrm{F}^{(i)}`$. At this scale, $`m_i^2`$ is given by $`m_i^2|_\mathrm{p}`$ irrespective of $`\xi `$, and the family of boundary conditions parameterized by $`m_i^2(Q_0)=m_i^2|_\mathrm{p}(Q_0)+\xi \mathrm{\Delta }_i^2(Q_0)`$, with various $`\xi `$, all yield the same value of $`m_i^2`$ at the scale $`Q_\mathrm{F}^{(i)}`$. (Here, $`Q_0`$ is the scale where the boundary condition is given.) We call $`Q_\mathrm{F}^{(i)}`$ the focus point scale or “focus point” for $`m_i^2`$. For large $`\xi `$, $`m_i^2(Q_0)\gg m_i^2|_\mathrm{p}(Q_0)`$, and if $`M_a^2(Q_0),A^2(Q_0)\lesssim m_i^2|_\mathrm{p}(Q_0)`$, a large hierarchy $`m_i^2(Q_0)\gg m_i^2(Q_\mathrm{F}^{(i)})`$ is generated. This concludes our general discussion of focus points. We now consider the existence of focus points in the minimal supersymmetric standard model. Before showing the results of detailed numerical calculations, we first analyze the focus point behavior by using one-loop RG equations. In the minimal supersymmetric standard model, the possibly large Yukawa couplings are those for the third generation quarks, $`y_t`$ and $`y_b`$.<sup>4</sup><sup>4</sup>4For large $`\mathrm{tan}\beta `$, the $`\tau `$ Yukawa coupling $`y_\tau `$ may be sizable, too. Here, for our simplified discussion, we neglect $`y_\tau `$, although its effects are included in our numerical calculations.
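To make the focus point mechanism of eqs. (4) and (5) concrete before specializing to the full third-generation system written out below, the following toy integration evolves the homogeneous solution for a truncated $`(\mathrm{\Delta }_{H_u}^2,\mathrm{\Delta }_{U_3}^2,\mathrm{\Delta }_{Q_3}^2)`$ sector with a constant Yukawa coupling; the coupling value and the range of scales are purely illustrative assumptions.

```python
import numpy as np

# Homogeneous system, eq. (5), truncated to the top-sector structure
# (written out in the text below); y_t is held constant for illustration.
A = np.array([[3., 3., 3.], [2., 2., 2.], [1., 1., 1.]])
yt = 1.0
t = np.linspace(0.0, -35.0, 2000)          # t = ln(Q/Q0), running down from Q0
dt = t[1] - t[0]

for xi in (0.5, 1.0, 2.0):                 # different universal boundary values
    delta = xi * np.ones(3)                # Delta_i^2(Q0) = xi, in units of m0^2
    hu = np.empty_like(t)
    for i in range(len(t)):                # simple Euler integration
        hu[i] = delta[0]
        delta = delta + dt * (yt**2 / (8 * np.pi**2)) * (A @ delta)
    # the scale where Delta_Hu^2 crosses zero is the focus point
    print(f"xi = {xi}: Delta_Hu^2 = 0 near ln(Q/Q0) = {t[np.argmin(np.abs(hu))]:.1f}")
```

All three boundary values give $`\mathrm{\Delta }_{H_u}^2=0`$ at the same scale, so every member of the family in eq. (4) passes through the same value of $`m_{H_u}^2`$ there; that scale is the focus point.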
The system of RG equations for the $`\mathrm{\Delta }_i^2`$ is then $`{\displaystyle \frac{d}{d\mathrm{ln}Q}}\left[\begin{array}{c}\mathrm{\Delta }_{H_u}^2\\ \mathrm{\Delta }_{U_3}^2\\ \mathrm{\Delta }_{Q_3}^2\\ \mathrm{\Delta }_{D_3}^2\\ \mathrm{\Delta }_{H_d}^2\end{array}\right]={\displaystyle \frac{y_t^2}{8\pi ^2}}\left[\begin{array}{ccccc}3& 3& 3& 0& 0\\ 2& 2& 2& 0& 0\\ 1& 1& 1& 0& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\end{array}\right]\left[\begin{array}{c}\mathrm{\Delta }_{H_u}^2\\ \mathrm{\Delta }_{U_3}^2\\ \mathrm{\Delta }_{Q_3}^2\\ \mathrm{\Delta }_{D_3}^2\\ \mathrm{\Delta }_{H_d}^2\end{array}\right]+{\displaystyle \frac{y_b^2}{8\pi ^2}}\left[\begin{array}{ccccc}0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 1& 1& 1\\ 0& 0& 2& 2& 2\\ 0& 0& 3& 3& 3\end{array}\right]\left[\begin{array}{c}\mathrm{\Delta }_{H_u}^2\\ \mathrm{\Delta }_{U_3}^2\\ \mathrm{\Delta }_{Q_3}^2\\ \mathrm{\Delta }_{D_3}^2\\ \mathrm{\Delta }_{H_d}^2\end{array}\right],`$ (31) where $`Q_3`$, $`U_3`$, and $`D_3`$ represent the third generation SU(2)<sub>L</sub> doublet, singlet up-type, and singlet down-type squarks, and $`H_u`$ and $`H_d`$ are the up- and down-type Higgses, respectively. All other $`\mathrm{\Delta }_i^2`$’s are not coupled to large Yukawa coupling constants and hence are (almost) scale independent. To find the focus point, it is simplest to begin by considering small or moderate values of $`\mathrm{tan}\beta `$, for which $`y_b`$ is negligible. In this case, $`\mathrm{\Delta }_{D_3}^2`$ and $`\mathrm{\Delta }_{H_d}^2`$ remain constant, but the RG equations for $`\mathrm{\Delta }_{H_u}^2`$, $`\mathrm{\Delta }_{U_3}^2`$, and $`\mathrm{\Delta }_{Q_3}^2`$ are solved by $`\left[\begin{array}{c}\mathrm{\Delta }_{H_u}^2(Q)\\ \mathrm{\Delta }_{U_3}^2(Q)\\ \mathrm{\Delta }_{Q_3}^2(Q)\end{array}\right]=\kappa _6\left[\begin{array}{c}3\\ 2\\ 1\end{array}\right]e^{6I(Q)}+\kappa _0\left[\begin{array}{c}1\\ 0\\ -1\end{array}\right]+\kappa _0^{}\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right],`$ (44) where $`I(Q)\equiv {\displaystyle \int _{\mathrm{ln}Q_0}^{\mathrm{ln}Q}}{\displaystyle \frac{y_t^2(Q^{})}{8\pi ^2}}d\mathrm{ln}Q^{}.`$ (45) The $`\kappa `$’s are constants determined by the boundary conditions at the scale $`Q_0`$ and are independent of the renormalization scale $`Q`$. For given $`\kappa `$’s, the focus point for $`m_{H_u}^2`$ is given by $`3\kappa _6e^{6I(Q_\mathrm{F}^{(H_u)})}+\kappa _0=0.`$ (46) For large $`\mathrm{tan}\beta `$, we cannot neglect $`y_b`$, and the above arguments do not apply. For a general large $`\mathrm{tan}\beta `$, the evolution of the parameters is complicated and will be studied numerically below. However, for the specific case $`\mathrm{tan}\beta \sim m_t/m_b`$, we can assume $`y_b=y_t`$ and follow an analysis similar to the one above.<sup>5</sup><sup>5</sup>5In fact, $`y_t`$ and $`y_b`$, even if initially identical, will be slightly split in their RG evolution by $`y_\tau `$ and the U(1)<sub>Y</sub> gauge interaction. In this discussion, we neglect this difference. This approximation is justified by the numerical calculations to follow.
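As a consistency check of eq. (44) (a verification by inspection, not part of the original derivation), the $`y_t`$ coefficient matrix of the $`(\mathrm{\Delta }_{H_u}^2,\mathrm{\Delta }_{U_3}^2,\mathrm{\Delta }_{Q_3}^2)`$ block of eq. (31) satisfies $$\left[\begin{array}{ccc}3& 3& 3\\ 2& 2& 2\\ 1& 1& 1\end{array}\right]\left[\begin{array}{c}3\\ 2\\ 1\end{array}\right]=6\left[\begin{array}{c}3\\ 2\\ 1\end{array}\right],\left[\begin{array}{ccc}3& 3& 3\\ 2& 2& 2\\ 1& 1& 1\end{array}\right]\left[\begin{array}{c}1\\ 0\\ -1\end{array}\right]=\left[\begin{array}{ccc}3& 3& 3\\ 2& 2& 2\\ 1& 1& 1\end{array}\right]\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right]=0,$$ so the single growing mode carries the $`e^{6I(Q)}`$ factor while the two null directions are scale independent, which is exactly the content of eq. (44); the decomposition in the $`y_b=y_t`$ case, to which we now return, can be verified in the same way.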
In this case, the $`\mathrm{\Delta }_i^2`$’s evolve according to $`\left[\begin{array}{c}\mathrm{\Delta }_{H_u}^2(Q)\\ \mathrm{\Delta }_{U_3}^2(Q)\\ \mathrm{\Delta }_{Q_3}^2(Q)\\ \mathrm{\Delta }_{D_3}^2(Q)\\ \mathrm{\Delta }_{H_d}^2(Q)\end{array}\right]=\kappa _7\left[\begin{array}{c}3\\ 2\\ 2\\ 2\\ 3\end{array}\right]e^{7I(Q)}+\kappa _5\left[\begin{array}{c}3\\ 2\\ 0\\ -2\\ -3\end{array}\right]e^{5I(Q)}+\kappa _0\left[\begin{array}{c}1\\ -1\\ 0\\ 0\\ 0\end{array}\right]+\kappa _0^{}\left[\begin{array}{c}0\\ 1\\ -1\\ 1\\ 0\end{array}\right]+\kappa _0^{\prime \prime }\left[\begin{array}{c}0\\ 0\\ 0\\ 1\\ -1\end{array}\right].`$ (77) The focus point for $`m_{H_u}^2`$ is then given by $`3\kappa _7e^{7I(Q_\mathrm{F}^{(H_u)})}+3\kappa _5e^{5I(Q_\mathrm{F}^{(H_u)})}+\kappa _0=0.`$ (78) The actual focus point depends on the boundary condition. Here, we first consider the well-studied case of minimal supergravity. In this framework, the supersymmetric Lagrangian is specified by five new fundamental parameters: the universal scalar mass $`m_0`$, the unified gaugino mass $`M_{1/2}`$, the supersymmetric Higgs mass $`\mu _0`$, the universal trilinear coupling $`A_0`$, and the bilinear Higgs scalar coupling $`B_0`$. These parameters are given at the GUT scale $`M_{\mathrm{GUT}}`$, which, in our analysis, is defined as the scale where the SU(2)<sub>L</sub> and U(1)<sub>Y</sub> gauge couplings meet. (Numerically, $`M_{\mathrm{GUT}}\simeq 2\times 10^{16}\mathrm{GeV}`$.) All the supersymmetry breaking scalar masses are universal at $`M_{\mathrm{GUT}}`$, and we may take<sup>6</sup><sup>6</sup>6We choose $`m_i^2|_\mathrm{p}(M_{\mathrm{GUT}})=0`$ so that $`m_i^2|_\mathrm{p}`$ is independent of $`m_0`$ and is always $`𝒪(M_{1/2}^2)`$ (or $`𝒪(A_0^2)`$) or smaller. $`m_i^2|_\mathrm{p}(M_{\mathrm{GUT}})`$ $`=`$ $`0,`$ (79) $`\xi \mathrm{\Delta }_i^2(M_{\mathrm{GUT}})`$ $`=`$ $`m_0^2.`$ (80) With these boundary conditions, the coefficients $`\kappa `$ are $`(\kappa _6,\kappa _0,\kappa _0^{})`$ $`=`$ $`m_0^2({\displaystyle \frac{1}{2}},-{\displaystyle \frac{1}{2}},0),y_b\ll y_t,`$ (81) $`(\kappa _7,\kappa _5,\kappa _0,\kappa _0^{},\kappa _0^{\prime \prime })`$ $`=`$ $`m_0^2({\displaystyle \frac{3}{7}},0,-{\displaystyle \frac{2}{7}},-{\displaystyle \frac{1}{7}},{\displaystyle \frac{2}{7}}),y_b=y_t,`$ (82) and the focus point scale is determined by $`e^{6I(Q)}`$ $`=`$ $`1/3,\mathrm{for}y_b\ll y_t,`$ (83) $`e^{7I(Q)}`$ $`=`$ $`2/9,\mathrm{for}y_b=y_t.`$ (84) Note that the $`I(Q)`$ in Eqs. (83) and (84) are not identical, as $`y_t`$ runs differently for small and large $`\mathrm{tan}\beta `$. Eqs. (83) and (84) determine the focus point scale in terms of the gauge couplings and the top quark Yukawa coupling, or equivalently, the top quark mass. Remarkably, for the physical gauge couplings and top quark mass $`m_t\simeq 174\mathrm{GeV}`$, both conditions yield focus points that are very close to the weak scale! Thus, in minimal supergravity, the weak scale value of $`m_{H_u}^2`$ is highly insensitive to the universal scalar mass $`m_0`$. We now show numerically that the focus point is near the weak scale for $`m_t\simeq 174\text{ GeV}`$. (For an analytical discussion, see the Appendix.) To study the focus point more carefully, we have evolved the supersymmetry breaking parameters with the full two-loop RG equations . The one-loop threshold corrections from supersymmetric particles to the gauge and Yukawa coupling constants are also included .
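As a rough numerical cross-check of the one-loop condition $`e^{6I(Q)}=1/3`$ of eq. (83), ahead of the full two-loop numerics, the sketch below integrates the one-loop MSSM equations for the gauge couplings and $`y_t`$ (neglecting $`y_b`$ and $`y_\tau `$). The weak-scale boundary values and the Euler integrator are rough assumptions rather than the actual inputs of this paper, and the resulting scale is exponentially sensitive to them, in particular to $`m_t`$:

```python
import numpy as np

# One-loop MSSM running of (g1, g2, g3, y_t); GUT-normalized g1.
b = np.array([33/5, 1.0, -3.0])            # gauge beta-function coefficients
c = np.array([13/15, 3.0, 16/3])           # gauge terms in the y_t RG equation
g = np.array([0.46, 0.65, 1.16])           # assumed couplings at Q = m_t
yt = 0.95                                  # assumed y_t(m_t), moderate tan(beta)
mt, MGUT = 174.0, 2.0e16

n = 40000
t = np.linspace(np.log(mt), np.log(MGUT), n)
dt = t[1] - t[0]
yts = np.empty(n)
for i in range(n):                         # simple Euler step upward in ln Q
    yts[i] = yt
    yt += dt * yt * (6 * yt**2 - c @ g**2) / (16 * np.pi**2)
    g += dt * b * g**3 / (16 * np.pi**2)

# I(Q) of eq. (45) with Q0 = MGUT; negative for Q < MGUT.
I = -(np.cumsum((yts**2)[::-1])[::-1]) * dt / (8 * np.pi**2)
QF = np.exp(t[np.argmin(np.abs(np.exp(6 * I) - 1/3))])
print(f"one-loop focus point estimate: Q_F ~ {QF:.3g} GeV")
```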
We take as inputs $`\alpha _{\mathrm{em}}^{-1}=137.0359895`$, $`G_F=1.16639\times 10^{-5}\mathrm{GeV}^{-2}`$, $`\alpha _s(m_Z)=0.117`$, $`m_Z=91.187\mathrm{GeV}`$, $`m_\tau ^{DR}(m_Z)=1.7463\mathrm{GeV}`$, bottom quark pole mass $`m_b=4.9\mathrm{GeV}`$, and, unless otherwise noted, top quark pole mass $`m_t=174\mathrm{GeV}`$. The scale dependence of $`m_{H_u}^2`$ for various values of $`m_0`$ in minimal supergravity is shown in Fig. 1. To high accuracy, all of the RG trajectories meet at $`Q\sim 𝒪(100\mathrm{GeV})`$. In fact, in this case, the weak scale value of $`m_{H_u}^2`$ is determined by the other fundamental parameters $`M_{1/2}`$ and $`A_0`$, and hence at least one of these parameters is required to be $`𝒪(100\mathrm{GeV})`$. In Fig. 1, two values of $`\mathrm{tan}\beta `$ were presented. In Fig. 2, we show the focus point scale of $`m_{H_u}^2`$ as a function of $`\mathrm{tan}\beta `$. The focus point is defined here as the scale where $`\partial m_{H_u}^2/\partial m_0=0`$. As noted above, we have included the low-energy threshold corrections to the gauge and Yukawa coupling constants, which depend on the soft supersymmetry breaking parameters. As a result, the RG trajectories do not all meet at one scale, and the focus point given in Fig. 2 has a slight dependence on $`m_0`$. For small values of $`\mathrm{tan}\beta `$, say, $`\mathrm{tan}\beta \lesssim 2\text{–}3`$, the focus point is at very large scales. However, the important point is that, for all values of $`\mathrm{tan}\beta \gtrsim 5`$, including both moderate values of $`\mathrm{tan}\beta `$ and large values where $`y_b`$ and $`y_\tau `$ are not negligible, $`Q_\mathrm{F}^{(H_u)}\sim 𝒪(100\mathrm{GeV})`$, and the weak scale value of $`m_{H_u}^2`$ is insensitive to $`m_0`$. So far, we have considered only the case of a universal scalar mass. However, the $`m_{H_u}^2`$ focus point remains at the weak scale for a much wider class of boundary conditions. For example, for small $`\mathrm{tan}\beta `$, Eq. (44) shows that the parameter $`\kappa _0^{}`$ does not affect the evolution of $`m_{H_u}^2`$. As a result, the focus point of $`m_{H_u}^2`$ does not change even if we vary $`\kappa _0^{}`$, and the weak scale focus point is realized with any boundary condition of the form $$(m_{H_u}^2,m_{U_3}^2,m_{Q_3}^2)\propto (1,1+x,1-x),$$ (85) with $`x`$ an arbitrary constant. Similarly, for the case of $`y_b=y_t`$, the possible variation is $$(m_{H_u}^2,m_{U_3}^2,m_{Q_3}^2,m_{D_3}^2,m_{H_d}^2)\propto (1,1+x,1-x,1+x-x^{},1+x^{}),$$ (86) with both $`x`$ and $`x^{}`$ arbitrary. Another possible modification of the boundary conditions may be seen by viewing the $`m_i^2|_\mathrm{p}(Q_0)`$ as perturbations. Since it was never necessary to specify the particular solution in the general focus point analysis, we may consider arbitrary $`m_i^2|_\mathrm{p}(Q_0)`$ without changing the focus point scale. The only constraint on $`m_i^2|_\mathrm{p}`$ is from naturalness. As will be discussed in the next section, $`m_{H_u}^2`$ is required to be $`𝒪((100\mathrm{GeV})^2)`$ at the weak scale for natural electroweak symmetry breaking. As a result, $`m_{H_u}^2|_\mathrm{p}`$, $`m_{U_3}^2|_\mathrm{p}`$, and $`m_{Q_3}^2|_\mathrm{p}`$ (and also $`m_{H_d}^2|_\mathrm{p}`$ and $`m_{D_3}^2|_\mathrm{p}`$ for large $`\mathrm{tan}\beta `$) are required to be of the order of the weak scale. Therefore, deviations from the boundary conditions of Eqs. (85) and (86) of order $`\delta m^2\sim 𝒪((100\mathrm{GeV})^2)`$ are acceptable and do not lead to fine-tuning problems. Similar arguments show that deviations $`M_a,A_j\sim 𝒪(100\mathrm{GeV})`$ are allowed.
In particular, the focus point is independent of gaugino mass or $`A`$ parameter universality, and may therefore be found in many other frameworks. For example, focus points also exist in anomaly-mediated supersymmetry breaking models with additional universal scalar masses . Before closing this section, we discuss another way of formulating the focus point, which was originally used in Ref. for the specific case of minimal supergravity. By dimensional analysis, the evolution of $`m_{H_u}^2`$ may be parameterized as $`m_{H_u}^2(Q)=\eta _{m_0^2}(Q)m_0^2+\eta _{M_{1/2}^2}(Q)M_{1/2}^2+\eta _{M_{1/2}A_0}(Q)M_{1/2}A_0+\eta _{A_0^2}(Q)A_0^2,`$ (87) where the coefficients $`\eta `$ are determined by the (dimensionless) gauge and Yukawa coupling constants, and are independent of the (dimensionful) supersymmetry breaking parameters. In this formulation, the focus point is given by the scale where $`\eta _{m_0^2}=0`$, since at that scale, the value of $`m_{H_u}^2`$ is insensitive to $`m_0`$.<sup>7</sup><sup>7</sup>7Notice that in this formulation, it is clear that all RG trajectories meet at a focus point to all orders in the RG equations. Of course, as noted above, threshold effects smear out the focus point slightly — see Fig. 2. Based on this observation, it was noted in Ref. that, for minimal supergravity, and neglecting $`y_b`$, $`\eta _{m_0^2}=0`$ at the weak scale, and the weak scale becomes very insensitive to the universal scalar mass $`m_0`$, for $`m_t\simeq 160\text{–}170\mathrm{GeV}`$. As may be seen from the general analysis of focus points above, however, this conclusion holds much more generally: it is valid even when $`y_b`$ is not negligible, holds for the more general boundary conditions given in Eqs. (85) and (86), and is independent of all other scalar masses. In addition, as noted above and as is evident from Eq. (87), the conclusion is valid also for non-universal gaugino masses and $`A`$ parameters, as long as they are not too much larger than the weak scale. ## III Naturalness In the previous section, we saw that the weak scale value of $`m_{H_u}^2`$ is highly insensitive to the high scale scalar mass boundary conditions in a class of models that includes minimal supergravity. This fact has important implications for the naturalness of the gauge hierarchy, since $`m_{H_u}^2`$ determines, to a large extent, the shape of the Higgs potential. In minimal supergravity, $`m_{H_u}^2`$ and other sfermion masses have the same origin, the universal scalar mass $`m_0`$, and it has typically been believed that naturalness constraints on $`m_{H_u}^2`$ also give similar bounds on the sfermion masses. Such beliefs have led to great optimism in the search for scalar superpartners at future colliders in the framework of minimal supergravity . However, as we have seen, the relation between $`m_{H_u}^2`$ and other sfermion masses is not as trivial as typically assumed. In the following, we therefore reconsider the naturalness bounds on the sfermion masses in the minimal supergravity model. To begin, it is instructive to start with the tree-level expression for the weak scale.
The $`Z`$ boson mass is determined by minimizing the tree-level Higgs potential to be $`{\displaystyle \frac{1}{2}}m_Z^2={\displaystyle \frac{m_{H_d}^2-m_{H_u}^2\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}}-\mu ^2.`$ (88) For all $`\mathrm{tan}\beta `$, $`|m_{H_u}^2|\gg m_Z^2`$ is disfavored by the naturalness criterion, as in that case, a large cancellation between $`m_{H_u}^2`$ and $`\mu ^2`$ is needed to arrive at the physical value of the weak scale. For moderate and large values of $`\mathrm{tan}\beta `$, however, $`m_{H_d}^2\gg m_Z^2`$ does not necessarily lead to fine-tuning. For more detailed discussions of naturalness, it is convenient to define a quantitative measure of fine-tuning . Following previous work, we use the sensitivity of the weak scale (i.e., $`m_Z`$) to fractional variations in the fundamental parameters as such a measure. In any discussion of naturalness, several subjective choices must be made. The choice of supersymmetry breaking framework is crucial. For example, in GUT models, the gaugino masses are all governed by one parameter, whereas in general, all three gaugino masses may be varied independently, and the sensitivity of the weak scale to each of them must be considered. In the following, we will specialize to minimal supergravity. As noted previously, minimal supergravity introduces five new fundamental parameters: $`m_0`$, $`M_{1/2}`$, $`\mu _0`$, $`A_0`$, and $`B_0`$. All quantities at the weak scale are fixed by these parameters. In particular, the vacuum expectation values of the Higgs bosons depend on these quantities. Therefore, some combination of them is constrained to yield the correct $`Z`$ boson mass. (At tree level, this constraint is that of Eq. (88). At one-loop, the Higgs masses squared are shifted by the corresponding tadpole contributions , as will be discussed below.) From the low energy point of view, it is therefore more convenient to consider $`\mathrm{tan}\beta `$ and $`\text{sign}(\mu )`$ as free parameters, instead of $`\mu _0`$ and $`B_0`$. We adopt the following procedure to calculate the magnitude of fine-tuning at all physically viable parameter points: (i) We consider the minimal supergravity framework with its $`4+1`$ input parameters $`\{P_{\mathrm{input}}\}=\{m_0,M_{1/2},A_0,\mathrm{tan}\beta ,\text{sign}(\mu )\}.`$ (89) Any point in the parameter space of minimal supergravity is specified by these parameters. (ii) The naturalness of each point is then calculated by first determining all the parameters of the theory (Yukawa couplings, soft supersymmetry breaking masses, etc.), consistent with low energy constraints. RG equations are used to relate high and low energy boundary conditions. In particular, using the relevant radiative breaking condition, $`|\mu _0|`$ and $`B_0`$ are determined consistent with the low energy constraints.
(iii) We choose to consider the following set of (GUT scale) parameters to be free, continuously valued, independent, and fundamental: $$\{a_i\}=\{m_0,M_{1/2},\mu _0,A_0,B_0\}.$$ (90) (iv) All observables, including the $`Z`$ boson mass, are then reinterpreted as functions of the fundamental parameters $`a_i`$, and the sensitivity of the weak scale to small fractional variations in these parameters is measured by the sensitivity coefficients $`c_a\equiv \left|{\displaystyle \frac{\partial \mathrm{ln}m_Z^2}{\partial \mathrm{ln}a}}\right|,`$ (91) where all other fundamental (not input) parameters are held fixed in the partial derivative.<sup>8</sup><sup>8</sup>8The sensitivity of $`v^2=v_u^2+v_d^2`$, where $`v_u`$ and $`v_d`$ are the vacuum expectation values of the up- and down-type Higgs scalars, respectively, may be a more accurate choice, especially if variations of the gauge coupling constants are considered. In this paper, however, we follow the literature and define sensitivity coefficients as in Eq. (91). (v) Finally, we form the fine-tuning parameter $`c\equiv \mathrm{max}\{c_{m_0},c_{M_{1/2}},c_{\mu _0},c_{A_0},c_{B_0}\},`$ (92) which is taken as a measure of the naturalness of point $`\{P_{\mathrm{input}}\}`$, with large $`c`$ corresponding to large fine-tuning. Among the choices made in the prescription above, the choice of fundamental parameters $`a_i`$ is of particular importance. This choice varies throughout the literature . Since we are interested in the naturalness of the supersymmetric solution to the gauge hierarchy problem, we find it most reasonable to include only supersymmetry breaking parameters (and $`\mu `$, as its origin is likely to be tied to supersymmetry breaking) and not standard model parameters, such as $`y_t`$ or the strong coupling. We will return to this issue in Sec. V. Given the prescription for defining fine-tuning described above, we may now present the numerical results. In minimizing the Higgs potential, we use the one-loop corrected Higgs potential, calculated with parameters evaluated with two-loop RG equations. Denoting the physical stop masses by $`m_{\stackrel{~}{t}_1}`$ and $`m_{\stackrel{~}{t}_2}`$, we choose to minimize the potential at the scale $`Q_{\stackrel{~}{t}}=(m_{\stackrel{~}{t}_1}m_{\stackrel{~}{t}_2})^{1/2},`$ (93) where the one-loop corrections to the Higgs potential tend to be smallest . In terms of the fundamental parameters, $`Q_{\stackrel{~}{t}}\simeq 0.5(m_0^2+4M_{1/2}^2)^{1/2}`$. In Figs. 3–5, we give contours of constant $`c_{m_0}`$, $`c_{M_{1/2}}`$, and $`c_{\mu _0}`$ in the $`(m_0,M_{1/2})`$ plane for $`\mathrm{tan}\beta =10`$ and 50. (The parameters $`c_{A_0}`$ and $`c_{B_0}`$ are typically negligible relative to these, and are so for the parameter ranges displayed.) Several features of these figures are noteworthy. First, as is evident from Fig. 3, the fine-tuning coefficient $`c_{m_0}`$ is small even for scalar masses as large as $`m_0\sim 2\text{ TeV}`$. This is a consequence of the focus point behavior of $`m_{H_u}^2`$: for moderate and large $`\mathrm{tan}\beta `$, $`m_{H_u}^2`$ is insensitive to $`m_0`$, and, therefore, so is $`m_Z^2`$. The deviation of $`c_{m_0}`$ from zero for very large $`m_0`$ is a consequence of the fact that the focus point does not coincide exactly with the electroweak scale, or, more precisely, with $`Q_{\stackrel{~}{t}}`$. We will explain this statement in more detail below.
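The sensitivity coefficients of eq. (91) are straightforward to evaluate once $`m_Z^2`$ is available as a function of the GUT scale inputs. The sketch below illustrates the finite-difference procedure on a toy stand-in for $`m_Z^2(a_i)`$; the function, the coefficients in it, and the chosen parameter point are placeholders, not the two-loop machinery actually used here.

```python
import numpy as np

def mz2_of(params):
    """Toy stand-in for m_Z^2(a_i); the real calculation runs the two-loop
    RG equations and minimizes the one-loop effective potential."""
    m0, M12, mu0, A0, B0 = params
    # illustrative quadratic form with a small m0 coefficient (focus point)
    return 0.01*m0**2 + 1.4*M12**2 - 2.0*mu0**2 + 0.1*A0**2 + 0.05*B0**2

def sensitivities(params, eps=1e-4):
    """c_a = |d ln m_Z^2 / d ln a| via symmetric finite differences, eq. (91)."""
    cs = []
    for i, a in enumerate(params):
        up, dn = list(params), list(params)
        up[i], dn[i] = a * (1 + eps), a * (1 - eps)
        cs.append(abs(np.log(abs(mz2_of(up))) - np.log(abs(mz2_of(dn)))) / (2 * eps))
    return cs

point = (2000.0, 300.0, 250.0, 100.0, 100.0)   # (m0, M12, mu0, A0, B0) in GeV
cs = sensitivities(point)
print(dict(zip(["c_m0", "c_M12", "c_mu0", "c_A0", "c_B0"], np.round(cs, 2))))
print("c =", round(max(cs), 2))                # eq. (92)
```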
On the other hand, large gaugino masses lead to large $`m_{H_u}^2`$ through RG evolution, and hence $`c_{M_{1/2}}`$ increases as $`M_{1/2}`$ increases, as shown in Fig. 4. Multi-TeV values of $`M_{1/2}`$ are therefore inconsistent with naturalness. The behavior of $`c_{\mu _0}`$, presented in Fig. 5, is also interesting. Since $`\mu \simeq \mu _0`$, Eq. (88) implies $`c_{\mu _0}\simeq 4\mu ^2/m_Z^2`$ (a short derivation is given below). In particular, $`c_{\mu _0}`$ is small in the region bordering the excluded right-hand region, as there $`\mu `$ is suppressed by a cancellation between the $`\eta _{m_0^2}`$ and $`\eta _{M_{1/2}^2}`$ terms in Eq. (87). However, in this region, $`c_{m_0}`$ and $`c_{M_{1/2}}`$ are large and this region is fine-tuned; the simple criterion of requiring low $`\mu `$ for naturalness fails here. In Fig. 6, we show the overall fine-tuning parameter $`c`$, the maximum of the $`c_a`$, in the $`(m_0,M_{1/2})`$ plane. The fine-tuning $`c`$ is determined by $`c_{\mu _0}`$, $`c_{M_{1/2}}`$, and $`c_{m_0}`$. For small $`m_0`$, $`c_{\mu _0}`$ is the largest. As $`m_0`$ increases, however, $`c_{M_{1/2}}`$ becomes dominant, and $`c`$ is therefore almost independent of $`m_0`$ in this region. Finally, for extremely large $`m_0`$, $`c_{m_0}`$ becomes important. (For large $`\mathrm{tan}\beta `$, large $`m_0`$ is excluded by the chargino mass limit before $`c_{m_0}`$ becomes dominant. As a result, we do not see the $`c_{m_0}`$ segment in Fig. 6b.) Note that, for fixed $`M_{1/2}`$ in Fig. 6, values of $`m_0\sim 1\text{ TeV}`$ are actually more natural than small $`m_0`$: for large $`m_0`$, the parameter $`\mu `$, and therefore $`c_{\mu _0}`$, is reduced. Of course, eventually as $`m_0`$ increases to extremely large values, either $`\mu `$ becomes so small that the chargino mass bound is violated or $`c_{m_0}`$ becomes large, and so very large $`m_0`$ is either excluded or disfavored. As one can see, regions of parameter space with $`m_0\sim 2\text{–}3\mathrm{TeV}`$ are as natural as regions with $`m_0\lesssim 1\mathrm{TeV}`$. As will be discussed more fully in Sec. IV, in the region of parameter space with $`m_0\sim 2\text{–}3\mathrm{TeV}`$, all squarks and sleptons have multi-TeV masses, and discovery of these scalars will be extremely challenging at near future colliders. On the other hand, the gaugino mass $`M_{1/2}`$ cannot be multi-TeV, since it generates unnaturally large $`m_{H_u}^2`$. Although the focus point mechanism allows multi-TeV scalars consistent with naturalness, the same conclusion does not apply to gauginos and Higgsinos. As discussed in Sec. II, we expect also that the $`A`$ parameters should be bounded by naturalness to be near the weak scale. In Fig. 7, we present contours of constant $`c`$ in the $`(m_0,A_0)`$ plane. As expected, large $`A`$ terms lead to large $`m_{H_u}^2`$, and $`A_0`$ is also required to be $`𝒪(100\mathrm{GeV})`$. In Fig. 7, for increasing $`m_0`$, $`c`$ is determined successively by $`c_{\mu _0}`$, $`c_{A_0}`$ and $`c_{m_0}`$. (The $`c_{m_0}`$ segments are missing for the $`A_0>0`$ contours in Fig. 7b.) The dependence of the fine-tuning parameter $`c`$ on $`\mathrm{tan}\beta `$ is illustrated in Fig. 8. From this figure, for a given $`\mathrm{tan}\beta `$ and maximal allowed $`c`$, we can determine the upper bound on $`m_0`$. The exact range of $`c`$ required for a natural model is, of course, subjective. However, taking as an example the requirement $`c\le 50`$, corresponding to $`\mu \lesssim 300\text{ GeV}`$ at parameter points where $`c=c_{\mu _0}`$, we find $`m_0\lesssim 2\mathrm{TeV}`$ for $`\mathrm{tan}\beta =10`$.
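For completeness, here is the one-line derivation of the estimate $`c_{\mu _0}\simeq 4\mu ^2/m_Z^2`$ quoted above. It is only a tree-level sketch, using eq. (88) in the moderate-to-large $`\mathrm{tan}\beta `$ limit together with $`\mu \simeq \mu _0`$; the contours in Fig. 5 come from the full one-loop numerics: $$\frac{1}{2}m_Z^2\simeq -m_{H_u}^2-\mu ^2\Rightarrow c_{\mu _0}=\left|\frac{\partial \mathrm{ln}m_Z^2}{\partial \mathrm{ln}\mu _0}\right|\simeq \frac{\mu }{m_Z^2}\left|\frac{\partial m_Z^2}{\partial \mu }\right|=\frac{4\mu ^2}{m_Z^2},$$ where the last step uses the fact that $`m_{H_u}^2`$ is independent of $`\mu _0`$ at this order.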
So far, we have assumed $`m_t=174\mathrm{GeV}`$ in our calculations. However, given the experimentally allowed range $`m_t=173.8\pm 5.2\mathrm{GeV}`$ , we now consider the top quark mass dependence of the naturalness constraint on $`m_0`$. This can be understood only after accounting for the one-loop corrections to the Higgs effective potential.<sup>9</sup><sup>9</sup>9In fact, the sensitivity coefficient $`c_{m_0}`$ can only be reliably calculated, and, formally, is only meaningful, at one-loop, since at tree-level there is no preferred scale at which to enforce Eq. (88). At one-loop, the relation between $`m_Z`$ and the Higgs mass parameters, Eq. (88), is modified to $`{\displaystyle \frac{1}{2}}m_Z^2`$ $`=`$ $`{\displaystyle \frac{(m_{H_d}^2-T_d/v_d)-(m_{H_u}^2-T_u/v_u)\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}}-\mu ^2-\mathrm{Re}\mathrm{\Pi }_{ZZ}^T(M_Z)`$ (94) $`\simeq `$ $`-m_{H_u}^2+T_u/v_u-\mu ^2,\mathrm{for}\mathrm{tan}\beta \gg 1,`$ (95) where $`T_u`$ and $`T_d`$ are the tadpole contributions to the effective potential and $`\mathrm{\Pi }_{ZZ}^T(p)`$, the transverse part of the $`Z`$ boson self-energy, is negligible. In minimal supergravity, the dominant terms in $`T_u`$ and $`T_d`$ are from third generation squark loops and have the generic form $$\frac{T_{u,d}}{v_{u,d}}\sim \frac{3y_{t,b}^2}{16\pi ^2}m_{\stackrel{~}{t},\stackrel{~}{b}}^2\left[\frac{1}{2}-\mathrm{ln}\left(\frac{m_{\stackrel{~}{t},\stackrel{~}{b}}}{Q}\right)\right].$$ (96) Using Eqs. (95), (87) and (96), we find $`{\displaystyle \frac{1}{2}}m_Z^2`$ $`\simeq `$ $`-\left\{\eta _{m_0^2}(Q)m_0^2+\mathrm{\cdots }\right\}+{\displaystyle \frac{3y_t^2}{16\pi ^2}}m_{\stackrel{~}{t}}^2\left[{\displaystyle \frac{1}{2}}-\mathrm{ln}\left({\displaystyle \frac{m_{\stackrel{~}{t}}}{Q}}\right)\right]+\mathrm{\cdots }-\mu ^2`$ (97) $`\simeq `$ $`-\left[\eta _{m_0^2}(Q)+\eta _{m_0^2}^{}(Q)\right]m_0^2-\mu ^2+\mathrm{\cdots },`$ (98) where $`\eta _{m_0^2}^{}(Q)`$ encodes the dependence on $`m_0`$ arising at one-loop through the tadpole. Recall that the focus point $`Q_\mathrm{F}^{(H_u)}`$ is defined by $$\eta _{m_0^2}(Q=Q_\mathrm{F}^{(H_u)})=0,$$ (99) while empirically we find $$\eta _{m_0^2}^{}(Q=Q_{\stackrel{~}{t}})\simeq 0,$$ (100) which may also be understood to a good approximation from Eq. (98). We have already seen from Fig. 2 that for $`m_t=174`$ GeV, $`Q_\mathrm{F}^{(H_u)}\sim 𝒪(100\text{ GeV})`$, well below the typical stop mass scale $`Q_{\stackrel{~}{t}}`$. The sensitivity of $`m_{H_u}^2`$ to $`m_0`$ may then be understood in two ways: either we may minimize the potential at $`Q_{\stackrel{~}{t}}`$, where the tadpole contributions are negligible but $`\eta _{m_0^2}`$ is non-vanishing, or we may choose to minimize the potential at $`Q_\mathrm{F}^{(H_u)}`$, in which case $`\eta _{m_0^2}=0`$, but $`m_0`$ dependence arises from non-vanishing tadpole contributions. Either way, there is some residual dependence on $`m_0`$, as can be seen in Fig. 3. If $`Q_\mathrm{F}^{(H_u)}`$ and $`Q_{\stackrel{~}{t}}`$ are identical, however, $`\eta _{m_0^2}(Q)+\eta _{m_0^2}^{}(Q)`$ vanishes at $`Q_{\stackrel{~}{t}}`$. This can happen in two ways: for a fixed top quark mass, $`Q_{\stackrel{~}{t}}`$ can be lowered to $`Q_\mathrm{F}^{(H_u)}`$ by lowering $`m_0`$, or, for fixed large $`m_0`$, $`Q_\mathrm{F}^{(H_u)}`$ may be raised to $`Q_{\stackrel{~}{t}}`$ by increasing $`m_t`$ (and thereby increasing the top Yukawa renormalization effect on $`m_{H_u}^2`$). In Fig. 9, we present contours of fine-tuning $`c`$ in the $`(m_0,m_t)`$ plane.
On the dotted contour, the focus point and $`Q_{\stackrel{~}{t}}`$ coincide, and $`c_{m_0}=0`$. This occurs for $`m_t`$ above 174 GeV, in accord with the discussion above. More generally, the sensitivity $`c_{m_0}`$ is indeed reduced for larger $`m_t`$. As a result, the upper bound on $`m_0`$ is increased for larger $`m_t`$, and for $`m_t\simeq 180\text{ GeV}`$, near the 1$`\sigma `$ upper bound, $`m_0\simeq 3.4\mathrm{TeV}`$ is allowed for $`c\le 50`$. For smaller top quark mass, the naturalness bound on $`m_0`$ becomes more stringent. Thus, while the focus point behavior persists for all $`m_t`$ within current experimental bounds, future improvements in top mass measurements may provide important information about the extent to which multi-TeV scalars are allowed in minimal supergravity. We may also consider variations in the high scale $`Q_0`$ where the supersymmetry breaking parameters are assumed to be generated. So far we have assumed $`Q_0=M_{\text{GUT}}`$. It is interesting to consider the effects of assuming that the boundary conditions are specified at a different scale, e.g., the string or Planck scale. To investigate this, we have taken a simple approach, and evolved the gauge couplings up to some fixed $`Q_0`$, set the supersymmetry breaking parameters at that scale, and then evolved them down to the weak scale. The minimal field content is assumed throughout the RG evolution; in particular, no additional GUT particle content is assumed for $`Q_0>M_{\text{GUT}}`$, and the unification of gauge couplings at $`M_{\text{GUT}}`$ is unexplained. In Fig. 10 we show contours of constant $`c`$ in the $`(m_0,Q_0)`$ plane. Just as in Fig. 6, the fine-tuning parameter $`c`$ is determined successively, for increasing $`m_0`$, by $`c_{\mu _0}`$, $`c_{M_{1/2}}`$ and $`c_{m_0}`$. We see that increasing the scale $`Q_0`$ also allows even larger scalar masses. For example, the requirements $`c\le 50`$ and $`Q_0<M_{\text{Pl}}`$ allow $`m_0`$ as large as 2.9 TeV. In Fig. 11 we show contours of constant $`c`$ in the $`(m_t,Q_0)`$ plane. As expected, smaller values of $`m_t`$ can be compensated for by larger evolution intervals, and vice versa. Notice, however, that varying $`m_t`$ within its current experimental uncertainty leads to changes in $`c`$ that are as large as those caused by varying $`Q_0`$ by several orders of magnitude. ## IV Implications for Supersymmetry Searches We have seen that the naturalness bound on $`m_0`$ (i.e., the typical sfermion mass) may be as large as a few TeV. In this section, we discuss the implications of these results for the superpartner spectrum and, in particular, the discovery prospects for scalar superpartners at future colliders. (Of course, it is clear that such heavy scalars also drastically reduce the size of supersymmetric effects in low energy experiments, but we will not address this further.) The implications for sleptons are fairly straightforward. Sleptons have small Yukawa couplings (with the possible exception of staus for large $`\mathrm{tan}\beta `$), and so their masses are virtually RG-invariant, with $`m_{\stackrel{~}{l}}\simeq m_0`$ in this scenario. Multi-TeV sleptons are beyond the kinematic limit $`m_{\stackrel{~}{l}}<\sqrt{s}/2`$ of all proposed linear colliders. They will also escape detection at hadron colliders, as they are not strongly produced, and will not be produced in large numbers in the cascade decays of strongly interacting superparticles. We now turn to squarks. Multi-TeV squarks will, of course, also evade proposed linear colliders.
Traditionally, however, it has been expected that future hadron colliders, particularly the LHC, will discover all squarks in the natural region of minimal supergravity parameter space . This conclusion is based on the expectation that all squarks have masses $`\lesssim 1\text{–}2\mathrm{TeV}`$. In Fig. 12, we present contours for $`m_{\stackrel{~}{u}_L}`$.<sup>10</sup><sup>10</sup>10One-loop corrections are included in all superpartner masses. (All first and second generation squarks are nearly degenerate.) In the same figure, we have also included contours of the fine-tuning parameter $`c`$. For $`c\le 50`$, we see that squark masses of $`m_{\stackrel{~}{u}_L}=2.2\mathrm{TeV}`$ are allowed, and more generally, the parameter space with multi-TeV squarks is as natural as parameter space with $`m_{\stackrel{~}{u}_L}\lesssim 1\mathrm{TeV}`$. Recall also that these mass limits may be extended to as large as $`\sim 3\text{ TeV}`$ for variations of $`m_t`$ within its current bounds. Squarks of mass $`\sim 2\text{ TeV}`$ may be detected at the LHC with large integrated luminosities of several $`100\text{ fb}^{-1}`$. However, squarks with masses significantly beyond 2 TeV are likely to escape detection altogether . Since the top squarks and left-handed bottom squark interact strongly through $`y_t`$, they are lighter than the other squarks. For small $`\mathrm{tan}\beta `$, and small $`M_{1/2}`$ and $`A_0`$ parameters, their masses at the focus point of $`m_{H_u}^2`$ are given by $`m_{U_3}^2\simeq {\displaystyle \frac{1}{3}}m_0^2,m_{Q_3}^2\simeq {\displaystyle \frac{2}{3}}m_0^2.`$ (101) In general, these squark masses, particularly the stop masses, may also be influenced by left-right mixing. However, because naturalness constrains the $`A`$ and $`\mu `$ parameters to be of order the weak scale, left-right mixing effects are sub-leading for multi-TeV $`m_0`$. As a result, the lighter (heavier) stop is mostly right-handed (left-handed), and the lighter (heavier) sbottom is mostly left-handed (right-handed). For large $`\mathrm{tan}\beta `$, $`y_b`$ also suppresses third generation squark masses. For example, for $`y_b=y_t`$, we obtain $`m_{U_3}^2\simeq m_{D_3}^2\simeq m_{Q_3}^2\simeq {\displaystyle \frac{1}{3}}m_0^2.`$ (102) In Figs. 13 and 14 we present the masses of the stops and sbottoms, respectively. By comparing with Fig. 12, we see that the stops are always lighter than the first and second generation squarks. The lighter sbottom and heavier stop are nearly degenerate, since they are (approximately) in the same SU(2)<sub>L</sub> doublet. The heavier sbottom, which is mostly the right-handed sbottom, may also become significantly lighter than the first two generation squarks for large $`\mathrm{tan}\beta `$. Therefore, in the multi-TeV $`m_0`$ scenario, stop and sbottom production will be the most promising modes for squark discovery at the LHC. In contrast to the sfermions, gauginos cannot be very heavy in this scenario. This fact is explicit in Fig. 4: for large $`M_{1/2}`$, the fine-tuning coefficient $`c_{M_{1/2}}`$ becomes unacceptably large, irrespective of $`\mathrm{tan}\beta `$. As a result, naturalness requires fairly light gauginos. For example, the constraint $`c\le 50`$ implies $`M_{1/2}\lesssim 400\mathrm{GeV}`$, corresponding to $`M_1\lesssim 160\mathrm{GeV}`$, $`M_2\lesssim 320\mathrm{GeV}`$, and $`M_3\lesssim 1.2\mathrm{TeV}`$. Such gauginos will be produced in large numbers at the LHC, and will be discovered in typical scenarios.<sup>11</sup><sup>11</sup>11In some scenarios, however, the detection of all gauginos may be challenging.
This is particularly true in scenarios with degeneracies, such as the Wino LSP scenario . To our knowledge, the detectability of all gauginos (not including the invisible LSP) in such scenarios at the LHC remains an open question. It is also interesting to consider the implications of the focus point for Higgs masses. In supersymmetric models with minimal field content, the lightest Higgs boson mass $`m_h`$ is bounded by $`m_Z`$ at tree level. However, this upper bound may be significantly violated by radiative corrections . In particular, top-stop loop contributions, approximately proportional to $`\mathrm{ln}(m_{\stackrel{~}{t}}/m_t)`$, may be important. Since the focus point allows heavy stops, one may wonder if this affects the upper bound on the lightest Higgs mass. To answer this question, we have calculated the one-loop radiative corrections to the lightest Higgs mass. The result is shown in Fig. 15; we emphasize the $`A_0`$ dependence, as the radiative correction is sensitive to left-right mixing through the $`A`$ parameter. For the region with small fine-tuning parameter, say, $`c\le 50`$, we find $`m_h<118\text{ GeV}`$. It is important that a large $`A`$ parameter is forbidden by naturalness, as this suppresses left-right stop mixing, which usually significantly increases $`m_h`$. Therefore, even in the focus point scenario with multi-TeV squarks, Run II of the Tevatron will probe much of parameter space in its search for the lightest Higgs boson . Discovery of the heavy Higgs scalars is more challenging. At tree-level, the masses of the heavy Higgs scalars are approximately given by $`m_A\simeq m_H\simeq m_{H^\pm }\simeq \sqrt{m_{H_u}^2+m_{H_d}^2+2\mu ^2}.`$ (103) Although $`m_{H_u}^2`$ and $`\mu ^2`$ are always bounded by naturalness, for moderate $`\mathrm{tan}\beta `$, $`m_{H_d}^2`$ is only weakly bounded by naturalness. For negligible $`y_b`$, $`m_{H_d}^2`$ does not participate in the focus point behavior and is roughly RG-invariant. For larger $`\mathrm{tan}\beta `$, on the other hand, $`m_{H_d}^2`$ may be significantly suppressed by $`y_b`$ during RG evolution. In particular, in the case of $`\mathrm{tan}\beta \sim m_t/m_b`$, there is an approximate symmetry under interchanging $`H_u\leftrightarrow H_d`$ and $`U_3\leftrightarrow D_3`$, and so $`m_{H_d}^2`$ also has a focus point near the weak scale. In Fig. 16, we present the pseudoscalar Higgs mass $`m_A`$ in the $`(m_0,M_{1/2})`$ plane. For $`\mathrm{tan}\beta =10`$, $`m_{H_d}^2\simeq m_0^2`$, $`m_A\simeq m_0`$, and detection of the heavy Higgses at the LHC or proposed linear colliders becomes impossible for large $`m_0`$. However, for large $`\mathrm{tan}\beta `$, $`m_{H_d}^2`$ is suppressed by $`y_b`$, and $`m_A\sim 𝒪(100\text{ GeV})`$. For large $`\mathrm{tan}\beta `$, heavy Higgses with masses of several hundred GeV may be found at the LHC through the decays $`H,A\to \tau \overline{\tau }`$ . ## V Summary and Discussion In this paper, we have explored the existence of focus points in the RG behavior of supersymmetry breaking parameters and their implications for naturalness and experimental searches for supersymmetry. For the experimentally measured top quark mass, the supersymmetry breaking up-type Higgs mass parameter $`m_{H_u}^2`$ has a focus point at the scale $`Q\sim 𝒪(100\mathrm{GeV})`$ in a class of models which includes minimal supergravity. The value of $`m_{H_u}^2`$ at the weak scale is therefore highly insensitive to the universal scalar mass $`m_0`$ at the GUT scale. We have also seen that this focus point behavior exists for all values of $`\mathrm{tan}\beta \gtrsim 5`$.
Since $`m_{H_u}^2`$ plays an important role in the determination of the weak scale, this focus point behavior affects the naturalness of electroweak symmetry breaking in minimal supergravity. In particular, because a large $`m_0`$ can result in a reasonably small $`m_{H_u}^2`$, naturalness constraints on $`m_0`$ are not as severe as typically expected. To discuss this issue quantitatively, we have calculated the fine-tuning parameter $`c`$, which is determined by the sensitivity of the weak scale to fractional variations of the fundamental parameters. As we have seen, in regions of parameter space with $`m_023\mathrm{TeV}`$ this fine-tuning parameter may be as small as in regions with $`m_01\mathrm{TeV}`$. As a result, multi-TeV sfermions are as natural as sfermions lighter than $`1\mathrm{TeV}`$. We note that the region of multi-TeV scalars and light gauginos and Higgsinos is also somewhat preferred by gauge coupling unification in minimal SU(5) , as well as $`b`$-$`\tau `$ Yukawa unification at moderate to large $`\mathrm{tan}\beta `$ . The discovery of squarks and sleptons at the LHC and proposed linear colliders may therefore be extremely challenging, and may require some even more energetic machines, such as muon or very large hadron colliders. In our analysis, we did not include the Yukawa couplings, notably $`y_t`$, and gauge couplings in the calculation of the the fine-tuning parameter $`c`$. In Fig. 17, we present the sensitivity coefficient $`c_{y_t}`$ for the (GUT scale) top Yukawa coupling. For $`m_01\mathrm{TeV}`$, $`c_{y_t}`$ is always larger than 70, and so if $`c_{y_t}`$ is included in the calculation of the fine-tuning parameter $`c`$, the naturalness bound on $`m_0`$ becomes much more stringent and $`m_01\mathrm{TeV}`$ is disfavored. The inclusion of $`y_t`$ and other standard model parameters in the fine-tuning calculation would thus lead to significantly different conclusions. A definitive resolution to this question of whether or not to include variations of standard model parameters in fine-tuning calculations cannot, we believe, be achieved without a more complete understanding of the fundamental theories of flavor and supersymmetry breaking. Without such knowledge, any discussion necessarily becomes somewhat philosophical. Nevertheless, several remarks are in order. First, it is sometimes argued that one should not consider variations with respect to parameters that have been measured or are highly correlated with measured quantities. According to this view, standard model parameters such as the strong coupling constant, and possibly also $`y_t`$, should not be included among the $`a_i`$. We do not subscribe to this view.<sup>12</sup><sup>12</sup>12Note that the exclusion of standard model parameters from the fundamental parameters $`a_i`$ does not imply that current experimental data are ignored in the calculation of fine-tuning. All experimental data are used in step (ii) of the fine-tuning prescription to specify the physical hypersurface of parameter space. If in the future the Higgsino mass $`\mu `$ is measured to be 10 TeV, given our current notions of naturalness, we believe this should be considered fine-tuned, irrespective of the accuracy with which the Higgsino mass is measured. Of course, if this were the case, the fact that a 10 TeV $`\mu `$ parameter is realized in nature would be a strong motivation to consider alternative, and perhaps more fundamental, theoretical frameworks in which a 10 TeV $`\mu `$ parameter is not unnatural. 
There are, however, other considerations which favor the exclusion of standard model parameters from the list of $`a_i`$. As noted in Sec. III, we are interested in the naturalness of the supersymmetric explanation of the gauge hierarchy. We should not require that supersymmetry also solve the problem of flavor. In fact, in many supergravity frameworks, the supersymmetry breaking parameters and the Yukawa couplings are expected to be determined independently. For example, in hidden sector scenarios, the supersymmetry breaking parameters are determined in one sector, while the Yukawa couplings are fixed in some other sector and by a completely independent mechanism. In this case, it seems reasonable to assume that $`y_t`$ is fixed to its observed value in some sector not connected to supersymmetry breaking, and we therefore should not consider variations with respect to it. Finally, it is worth noting that there are several possible scenarios in which it is clear that $`c_{y_t}`$ should not be included in $`c`$ or is at least negligibly small. One possibility is that $`y_t`$ may evolve from some higher scale, such as the Planck scale, to a fixed or focus point at the GUT scale. The weak scale is then highly insensitive to variations in the truly fundamental parameter, i.e., $`y_t`$ at the Planck scale. Alternatively, the top Yukawa coupling may arise as a renormalizable operator with coefficient determined by a correlation function of string vertex operators. The coupling $`y_t`$ would then be fixed to its current value (or possibly one of a discrete set of values), and it is again inappropriate to consider continuous variations with respect to $`y_t`$. Note that in both of these scenarios, $`y_t`$ may receive additional contributions from non-renormalizable operators of the form $`\delta y_tgϵ`$, where $`g`$ is a coupling constant, and $`ϵ`$ is some small expansion parameter, such as $`v/M_{Pl}`$, where $`v`$ is some vacuum expectation value. In this case, $`c_g`$ and $`c_ϵ`$ should be included in the definition of fine-tuning, but they will be negligible for small $`ϵ`$. ## Acknowledgments We thank J. Bagger, G. Giudice, and C. Wagner for discussions. This work was supported in part by the Department of Energy under contracts DE–FG02–90ER40542 and DE–AC02–76CH03000, by the National Science Foundation under grant PHY–9513835, through the generosity of Frank and Peggy Taplin (JLF), and by a Marvin L. Goldberger Membership (TM). ## Determination of Focus Point Scale for $`𝒚_𝒃\mathbf{}𝒚_𝒕`$ and $`𝒚_𝒃\mathbf{=}𝒚_𝒕`$ In this appendix, we discuss the focus point analytically at one-loop for the two cases $`y_by_t`$ and $`y_b=y_t`$. Solutions to the RG equations are well-known for these two cases . For both cases, we derive a closed form expression involving the gauge couplings and $`m_t`$ that must be satisfied if the focus point scale $`Q_\mathrm{F}`$ is to be at the weak scale. We also show that if the focus point is at the weak scale for $`y_by_t`$, it is also at the weak scale for $`y_b=y_t`$. In the analysis of the $`y_b=y_t`$ case, we neglect the effects of $`y_\tau `$ and the hypercharge differences in the $`y_t`$ and $`y_b`$ RG equations, so $`y_t`$ and $`y_b`$ remain degenerate throughout their RG evolution. The validity of these approximations is verified only by the numerical results in Sec. II. In addition, the intermediate case, where $`y_by_t`$ but $`y_b`$ may not be neglected, is not considered. 
However, the following analysis may be helpful in understanding the numerical results, and in particular, the behavior of Figs. 1 and 2. Let us define $`\stackrel{~}{\alpha }_a`$ $``$ $`{\displaystyle \frac{g_a^2}{16\pi ^2}},a=1,2,3,`$ (104) $`\stackrel{~}{\alpha }_y`$ $``$ $`{\displaystyle \frac{y_t^2}{16\pi ^2}}.`$ (105) These quantities obey the following RG equations: $`{\displaystyle \frac{d\stackrel{~}{\alpha }_a}{d\mathrm{ln}Q}}`$ $`=`$ $`2b_a\stackrel{~}{\alpha }_a^2,`$ (106) $`{\displaystyle \frac{d\stackrel{~}{\alpha }_y}{d\mathrm{ln}Q}}`$ $`=`$ $`2\left(s\stackrel{~}{\alpha }_y{\displaystyle \underset{a}{}}r_a\stackrel{~}{\alpha }_a\right)\stackrel{~}{\alpha }_y.`$ (107) In the minimal supersymmetric standard model, $`(b_1,b_2,b_3)=(11,1,3)`$, $`(r_1,r_2,r_3)=(13/9,3,16/3)`$, and $`s=6`$ and 7 for $`y_by_t`$ and $`y_b=y_t`$, respectively. The solution for $`\stackrel{~}{\alpha }_y`$ is $`\stackrel{~}{\alpha }_y(Q)={\displaystyle \frac{\stackrel{~}{\alpha }_y(Q_0)E(Q)}{12s\stackrel{~}{\alpha }_y(Q_0)F(Q)}},`$ (108) where $`E(Q)`$ $`=`$ $`{\displaystyle \underset{a}{}}\left[12b_a\stackrel{~}{\alpha }_a(Q_0)\mathrm{ln}(Q/Q_0)\right]^{r_a/b_a},`$ (109) $`F(Q)`$ $`=`$ $`{\displaystyle _{\mathrm{ln}Q_0}^{\mathrm{ln}Q}}E(Q^{})d\mathrm{ln}Q^{}.`$ (110) We find then that $`e^{sI(Q)}\mathrm{exp}\left(2s{\displaystyle _{\mathrm{ln}Q_0}^{\mathrm{ln}Q}}\stackrel{~}{\alpha }_y(Q^{})d\mathrm{ln}Q^{}\right)={\displaystyle \frac{1}{12s\stackrel{~}{\alpha }_y(Q_0)F(Q)}}=1+{\displaystyle \frac{2s\stackrel{~}{\alpha }_y(Q)F(Q)}{E(Q)}}.`$ (111) For a universal scalar mass, the conditions for the focus point (see Sec. II) are $`e^{sI}=\{\begin{array}{c}1/3,\mathrm{for}y_by_t\hfill \\ 2/9,\mathrm{for}y_b=y_t\hfill \end{array}.`$ (114) We see that these are simultaneously satisfied at $`Q=m_t`$ if $`{\displaystyle \frac{\stackrel{~}{\alpha }_y(m_t)F(m_t)}{E(m_t)}}={\displaystyle \frac{1}{18}}.`$ (115) In terms of the top quark mass, this requirement corresponds to $`{\displaystyle \frac{1}{16\pi ^2}}\left[{\displaystyle \frac{m_t(m_t)}{v\mathrm{sin}\beta }}\right]^2{\displaystyle \frac{F(m_t)}{E(m_t)}}={\displaystyle \frac{1}{18}},`$ (116) where $`m_t(m_t)`$ is the running top quark mass, and $`v=174\mathrm{GeV}`$. Given fixed gauge coupling constants, Eq. (116) specifies which top quark mass will place the focus point at the weak scale. For $`Q_0=M_{\mathrm{GUT}}=2\times 10^{16}\mathrm{GeV}`$ and $`\alpha (M_{\mathrm{GUT}})=1/24`$, we obtain $`F130`$ and $`E13`$ at $`Q=174\mathrm{GeV}`$. For $`\mathrm{sin}\beta 1`$, the requirement is then $`m_t(m_t)160\mathrm{GeV}`$, which is very close to the running mass corresponding to the physical pole mass $`m_t174\mathrm{GeV}`$. Therefore, for the experimentally measured top quark mass, the focus point of $`m_{H_u}^2`$ is close to the weak scale for $`y_b=y_t`$ and also for $`y_by_t`$.
no-problem/9909/math-ph9909008.html
ar5iv
text
# Nonabelian Toda equations associated with classical Lie groups ## 1 Introduction Toda equations arise in many problems of modern theoretical and mathematical physics. There is a lot of papers devoted to classical and quantum behaviour of abelian Toda equations. From the other hand, nonabelian Toda equations have not yet received a due attention. From our point of view, this is mainly caused by the fact that despite of their formal exact integrability till recent time there were no nontrivial examples of nonabelian Toda equations for which one can write the general solution in a more or less explicit form. Moreover, even the form of nonabelian Toda equations was known only for a few partial cases. In our recent paper we described some class of nonabelian Toda equations called there maximally nonabelian. These equations have a very simple structure and their general solution can be explicitly written. Shortly after that we realised that the approach used in allows to describe the explicit form of all nonabelian Toda equations associated with classical Lie groups. This is done in the present paper. ## 2 $``$-gradations and Toda equations ### 2.1 Toda equations From the point of view of the group-algebraic approach Toda equations are specified by a choice of a real or complex Lie group whose Lie algebra is endowed with a $``$-gradation. Recall that a Lie algebra $`𝔤`$ is said to be $``$-graded, or endowed with a $``$-gradation, if there is given a representation of $`𝔤`$ as a direct sum $$𝔤=\underset{m}{}𝔤_m,$$ where $`[𝔤_m,𝔤_n]𝔤_{m+n}`$ for all $`m,n`$. Let $`G`$ be a real or complex Lie group, and $`𝔤`$ be its Lie algebra. For a given $``$-gradation of $`𝔤`$ the subspace $`𝔤_0`$ is a subalgebra of $`𝔤`$. The subspaces $$𝔤_{<0}=\underset{m<0}{}𝔤_m,𝔤_{>0}=\underset{m>0}{}𝔤_m$$ are also subalgebras of $`𝔤`$. Denote by $`G_0`$, $`G_{<0}`$ and $`G_{>0}`$ the connected Lie subgroups of $`G`$ corresponding to the subalgebras $`𝔤_0`$, and $`𝔤_{<0}`$ and $`𝔤_{>0}`$ respectively. Let $`M`$ be either the real manifold $`^2`$ or the complex manifold $``$. For $`M=^2`$ we denote the standard coordinates by $`z^{}`$ and $`z^+`$. In the case of $`M=`$ we use the notation $`z^{}`$ for the standard complex coordinate and $`z^+`$ for the complex conjugate of $`z^{}`$. Denote the partial derivatives over $`z^{}`$ and $`z^+`$ by $`_{}`$ and $`_+`$ respectively. Consider a Lie group $`G`$ whose Lie algebra $`𝔤`$ is endowed with a $``$-gradation. Let $`l`$ be a positive integer, such that the grading subspaces $`𝔤_m`$ for $`l<m<0`$ and $`0<m<l`$ are trivial, and $`c_{}`$ and $`c_+`$ be some fixed mappings from $`M`$ to $`𝔤_l`$ and $`𝔤_{+l}`$, respectively, satisfying the relations $$_+c_{}=0,_{}c_+=0.$$ Restrict ourselves to the case when $`G`$ is a matrix Lie group. In this case the Toda equations are matrix partial differential equations of the form $$_+(\gamma ^1_{}\gamma )=[c_{},\gamma ^1c_+\gamma ],$$ (2.1) where $`\gamma `$ is a mapping from $`M`$ to $`G_0`$. If the Lie group $`G_0`$ is abelian we say that we deal with abelian Toda equations, otherwise we call them nonabelian Toda equations. There is a constructive procedure of obtaining the general solution to Toda equations . It is based on the use of the Gauss decomposition related to the $``$-gradation under consideration. Here the Gauss decomposition is the representation of an element of the Lie group $`G`$ as a product of elements of the subgroups $`G_{<0}`$, $`G_{>0}`$ and $`G_0`$ taken in an appropriate order. 
Another approach is based on the theory of representations of Lie groups . ### 2.2 $``$-gradations of complex semisimple Lie algebras Let $`q`$ be an element of a Lie algebra $`𝔤`$ such that the linear operator $`\mathrm{ad}q`$ is semisimple and has integer eigenvalues. Defining $$𝔤_m=\{x𝔤[q,x]=mx\}$$ we get a $``$-gradation of $`𝔤`$. This gradation is said to be generated by the grading operator $`q`$. If $`𝔤`$ is a finite dimensional complex semisimple Lie algebra, then any $``$-gradations of $`𝔤`$ is generated by a grading operator. Here up to the action of the group of the automorphisms of $`𝔤`$ all $``$-gradations of $`𝔤`$ can be obtained with the help of the following procedure. Let $`\mathrm{\Delta }`$ be the set of roots of a complex semisimple Lie algebra $`𝔤`$ with respect to a Cartan subalgebra $`𝔥`$, and $`\mathrm{\Pi }=\{\alpha _1,\mathrm{},\alpha _r\}`$ be a base of $`\mathrm{\Delta }`$. Assign to the vertices of the Dynkin diagram of $`𝔤`$ nonnegative integer labels $`q_i`$, $`i=1,\mathrm{},r`$, and define $$q=\underset{i,j=1}{\overset{r}{}}h_i(k^1)_{ij}q_j$$ (2.2) where $`h_i`$ are the corresponding Cartan generators and $`k=(k_{ij})`$ is the Cartan matrix of $`𝔤`$. It is clear that $`q`$ is a grading operator of some $``$-gradation of $`𝔤`$. Here the subspace $`𝔤_m`$ for $`m0`$ is the direct sum of the root subspaces $`𝔤^\alpha `$ corresponding to the roots $`\alpha =_{i=1}^rn_i\alpha _i`$ with $`_{i=1}^rn_iq_i=m`$. The subspace $`𝔤_0`$, besides the root subspaces corresponding to the roots $`\alpha =_{i=1}^rn_i\alpha _i`$ with $`_{i=1}^rn_iq_i=0`$, includes the Cartan subalgebra $`𝔥`$. If all labels $`q_i`$ are different from zero, then the subgroup $`𝔤_0`$ coincides with the Cartan subalgebra of $`𝔥`$. In this case the subgroup $`G_0`$ is abelian. In all other cases the subgroup $`G_0`$ is nonabelian. The maximally nonabelian Toda equations arise in the case when only one of the labels $`q_i`$ is different from zero. ### 2.3 Conformal invariance Let again $`G`$ be a real or complex Lie group, $`𝔤`$ be its Lie algebra, and $`M`$ be either the real manifold $`^2`$ or the complex manifold $``$. Since $`M`$ is simply connected a connection on the trivial principal $`G`$-bundle $`M\times G`$ can be identified with a $`𝔤`$-valued 1-form $`\omega `$ on $`M`$. Here the connection is flat if and only if $$\mathrm{d}\omega +\omega \omega =0.$$ (2.3) We call this relation the zero curvature condition. It can be shown that the Toda equations coincide with the zero curvature condition for the connection $$\omega =\mathrm{d}z^{}(\gamma ^1_{}\gamma +c_{})+\mathrm{d}z^+\gamma ^1c_+\gamma .$$ (2.4) Let $`\xi _\pm `$ be some mappings from $`M`$ to $`G_0`$, satisfying the condition $$_+\xi _{}=0,_{}\xi _+=0,$$ and $`\gamma `$ be a solution of the Toda equations (2.1). It is easy to get convinced that the mapping $$\gamma ^{}=\xi _+^1\gamma \xi _{}$$ (2.5) satisfies the Toda equations (2.1) with the mappings $`c_\pm `$ replaced by the mappings $$c_\pm ^{}=\xi _\pm ^1c_\pm \xi _\pm .$$ In this sense, the Toda equations determined by the mappings $`c_\pm `$ and $`c_\pm ^{}`$ which are connected by the above relation, are equivalent. If the mappings $`\xi _\pm `$ are such that $$\xi _\pm ^1c_\pm \xi _\pm =c_\pm ,$$ then transformation (2.5) is a symmetry transformation for the Toda equations. 
Let us show that if the $``$-gradation under consideration is generated by a grading operator and $`c_{}`$ and $`c_+`$ are constant mappings, then the corresponding Toda equations are conformally invariant. Let $`F:MM`$ be a conformal transformation. It means that for the functions $`F^{}=z^{}F`$ and $`F^+=z^+F`$ one has $$_+F^{}=0,_{}F^+=0.$$ For the connection $`\omega `$, given by (2.4), we get $$F^{}\omega =\mathrm{d}z^{}[(\gamma F)^1_{}(\gamma F)+_{}F^{}c_{}]+\mathrm{d}z^+(\gamma F)^1_+F^+c_+(\gamma F).$$ If the connection $`\omega `$ satisfies the zero curvature condition (2.3), then the connection $`F^{}\omega `$ also satisfies this condition. So if the mapping $`\gamma `$ satisfies the Toda equations, then the mapping $`\gamma F`$ satisfies the equations $$_+[(\gamma F)^1_{}(\gamma F)]=_{}F^{}_+F^+[c_{},(\gamma F)^1c_+(\gamma F)].$$ It is always possible to compensate the factor $`_{}F^{}_+F^+`$ in the right hand side of the above equation with the help of transformation (2.5). Indeed, defining $$\xi _{}=\mathrm{exp}\left(ql^1\mathrm{ln}_{}F^{}\right),\xi _+=\mathrm{exp}\left(ql^1\mathrm{ln}_+F^+\right),$$ one obtains $$\xi _{}^1c_{}\xi _{}=(_{}F^{})^1c_{},\xi _+^1c_+\xi _+=(_+F^+)^1c_+.$$ Therefore, the mapping $$\gamma ^{}=\mathrm{exp}\left(ql^1\mathrm{ln}_+F^+\right)(\gamma F)\mathrm{exp}\left(ql^1\mathrm{ln}_{}F^{}\right).$$ (2.6) satisfies the initial Toda equations. Thus, transformation (2.6) is a symmetry transformation for the Toda equations. Such transformations define an action of the group of conformal transformations on the space of solutions of the Toda equations under consideration. ## 3 Complex general linear group We begin the consideration of nonabelian Toda systems associated with classical Lie groups with the case of the Lie group $`\mathrm{SL}(r+1,)`$. Actually it is convenient to consider the Lie group $`\mathrm{GL}(r+1,)`$ whose Lie algebra $`𝔤𝔩(r+1,)`$ is endowed with $``$-gradations induced by $``$-gradations of the Lie algebra $`𝔰𝔩(r+1,)`$. The Lie algebra $`𝔰𝔩(r+1,)`$ is of type $`A_r`$. The Cartan matrix is $$k=\left(\begin{array}{cccccccc}\hfill 2& \hfill 1& \hfill 0& \mathrm{}& \hfill 0& \hfill 0& & \hfill 0\\ \hfill 1& \hfill 2& \hfill 1& \mathrm{}& \hfill 0& \hfill 0& & \hfill 0\\ \hfill 0& \hfill 1& \hfill 2& \mathrm{}& \hfill 0& \hfill 0& & \hfill 0\\ \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& & \hfill \mathrm{}\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 2& \hfill 1& & \hfill 0\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 1& \hfill 2& & \hfill 1\\ & & & & & & & \\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 0& \hfill 1& & \hfill 2\end{array}\right).$$ For the inverse matrix one obtains the expression $$k^1=\frac{1}{r+1}\left(\begin{array}{ccccccccccc}r& & r1& & r2& \mathrm{}& 3& & 2& & 1\\ & & & & & & & & & & \\ r1& & 2(r1)& & 2(r2)& \mathrm{}& 6& & 4& & 2\\ & & & & & & & & & & \\ r2& & 2(r2)& & 3(r2)& \mathrm{}& 9& & 6& & 3\\ \mathrm{}& & \mathrm{}& & \mathrm{}& \mathrm{}& \mathrm{}& & \mathrm{}& & \mathrm{}\\ 3& & 6& & 9& \mathrm{}& 3(r2)& & 2(r2)& & r2\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r1)& & r1\\ & & & & & & & & & & \\ 1& & 2& & 3& \mathrm{}& r2& & r1& & r\end{array}\right).$$ Let $`d`$ be a fixed integer such that $`1dr`$. Consider the $``$-gradation of $`𝔰𝔩(r+1,)`$ arising when we choose the labels of the corresponding Dynkin diagram equal to zero except the label $`q_d`$ which is chosen equal to 1. 
From relation (2.2) it follows that the corresponding grading operator, which we denote by $`\stackrel{\left(d\right)}{𝑞}`$, has the form $$\stackrel{\left(d\right)}{𝑞}=\frac{1}{r+1}\left[(r+1d)\underset{i=1}{\overset{d1}{}}ih_i+d\underset{i=d}{\overset{r}{}}(r+1i)h_i\right].$$ It is convenient to take as a Cartan subalgebra of $`𝔰𝔩(r+1,)`$ the subalgebra consisting of diagonal $`(r+1)\mathrm{\times }(r+1)`$ matrices with zero trace. Here the standard choice of the Cartan generators is $$h_i=e_{i,i}e_{i+1,i+1},$$ where the matrices $`e_{i,j}`$ are defined by $$(e_{i,j})_{kl}=\delta _{ik}\delta _{jl}.$$ (3.1) With such a choice of Cartan generators we obtain $$\stackrel{\left(d\right)}{𝑞}=\frac{1}{r+1}\left[(r+1d)\underset{i=1}{\overset{d}{}}e_{i,i}d\underset{i=d+1}{\overset{r+1}{}}e_{i,i}\right].$$ Thus, the grading operator has the following block matrix form: $$\stackrel{\left(d\right)}{𝑞}=\frac{1}{r+1}\left(\begin{array}{cc}k_2I_{k_1}& 0\\ 0& k_1I_{k_2}\end{array}\right),$$ where $`k_1=d`$ and $`k_2=r+1d`$, so that $`k_1+k_2=r+1`$. Here and henceforth $`I_k`$ denotes the unit $`k\mathrm{\times }k`$ matrix. The grading operator corresponding to the general $``$-gradation of $`𝔰𝔩(r+1,)`$ is a linear combination of the operators $`\stackrel{\left(d\right)}{𝑞}`$ with nonnegative integer coefficients. The explicit matrix form of the grading operators is depicted as follows. A general set of grading labels $`q_i`$ can be represented as $$(\underset{k_11}{\underset{}{0,\mathrm{},0}},m_1,\underset{k_21}{\underset{}{0,\mathrm{},0}},m_2,0,\mathrm{},0,m_{p1},\underset{k_p1}{\underset{}{0,\mathrm{},0}}),$$ where $`k_1,\mathrm{},k_p`$ and $`m_1,\mathrm{},m_{p1}`$ are positive integers. It is convenient to consider an arbitrary matrix $`x`$ of $`𝔰𝔩(r+1,)`$ as a $`p\mathrm{\times }p`$ block matrix $`(x_{ab})`$, where $`x_{ab}`$ is a $`k_a\mathrm{\times }k_b`$ matrix. The grading operator corresponding to the above set of labels has the following block matrix form: $$q=\left(\begin{array}{ccccc}\rho _1I_{k_1}& 0& \mathrm{}& 0& 0\\ 0& \rho _2I_{k_2}& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& \mathrm{}& \rho _{p1}I_{k_{p1}}& 0\\ 0& 0& \mathrm{}& 0& \rho _pI_{k_p}\end{array}\right),$$ (3.2) where $$\rho _a=\frac{1}{r+1}\left(\underset{b=1}{\overset{a1}{}}m_b\underset{c=1}{\overset{b}{}}k_c+\underset{b=a}{\overset{p1}{}}m_b\underset{c=b+1}{\overset{p}{}}k_c\right).$$ We will use grading operator (3.2) to define a $``$-gradation of the Lie algebra $`𝔤𝔩(r+1,)`$. It is easy to describe the arising grading subspaces of $`𝔤𝔩(r+1,)`$ and the relevant subgroups of $`\mathrm{GL}(r+1,)`$. For fixed $`ab`$, the block matrices $`x`$ having only the block $`x_{ab}`$ different from zero belong to the grading subspace $`𝔤_m`$ with $$m=\underset{c=a}{\overset{b1}{}}m_c,a<b,m=\underset{c=b}{\overset{a1}{}}m_c,a>b.$$ The block diagonal matrices form the subalgebra $`𝔤_0`$. The subalgebras $`𝔤_{<0}`$ and $`𝔤_{>0}`$ are formed by all block strictly lower and upper triangular matrices respectively. It is not difficult to describe the corresponding subgroups. The subgroup $`G_0`$ consists of all block diagonal nondegenerate matrices, and the subgroups $`G_{<0}`$ and $`G_{>0}`$ consist, respectively, of all block lower and upper triangular matrices with unit matrices on the diagonal. Note that the subgroup $`G_0`$ is isomorphic to the Lie group $`\mathrm{GL}(k_1,)\times \mathrm{}\times \mathrm{GL}(k_p,)`$. Proceed now to the consideration of the corresponding Toda equations. 
Assume that all integers $`m_a`$ are equal to one. In this case one has $$\rho _a=\frac{1}{r+1}\underset{b=1}{\overset{p}{}}bk_ba.$$ The elements $`c_{}`$ and $`c_+`$ should belong to the subspaces $`𝔤_1`$ and $`𝔤_{+1}`$ respectively. The general form of such elements is $$c_{}=\left(\begin{array}{ccccc}0& 0& \mathrm{}& 0& 0\\ C_1& 0& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& \mathrm{}& 0& 0\\ 0& 0& \mathrm{}& C_{(p1)}& 0\end{array}\right),c_+=\left(\begin{array}{ccccc}0& C_{+1}& \mathrm{}& 0& 0\\ 0& 0& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& \mathrm{}& 0& C_{+(p1)}\\ 0& 0& \mathrm{}& 0& 0\end{array}\right),$$ (3.3) where for each $`a=1,\mathrm{},p1`$ the mapping $`C_a`$ takes values in the space of $`k_{a+1}\mathrm{\times }k_a`$ complex matrices, and the mapping $`C_{+a}`$ takes values in the space of $`k_a\mathrm{\times }k_{a+1}`$ complex matrices. Besides, these mappings should satisfy the relations $$_+C_a=0,_{}C_{+a}=0.$$ Parametrise the mapping $`\gamma `$ as $$\gamma =\left(\begin{array}{ccccc}\beta _1& 0& \mathrm{}& 0& 0\\ 0& \beta _2& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \\ 0& 0& \mathrm{}& \beta _{p1}& 0\\ 0& 0& \mathrm{}& 0& \beta _p\end{array}\right),$$ (3.4) where the mappings $`\beta _a`$ take values in the groups $`\mathrm{GL}(m_a,)`$. In this parametrisation Toda equations (2.1) take the form $`_+(\beta _1^1_{}\beta _1)=\beta _1^1C_{+1}\beta _2C_1,`$ (3.5) $`_+(\beta _a^1_{}\beta _a)=\beta _a^1C_{+a}\beta _{a+1}C_a+C_{(a1)}\beta _{a1}^1C_{+(a1)}\beta _a,1<a<p,`$ (3.6) $`_+(\beta _p^1_{}\beta _p)=C_{(p1)}\beta _{p1}^1C_{+(p1)}\beta _p.`$ (3.7) The consideration of more general $``$-gradations gives nothing new. Indeed, the $``$-gradations with all integers $`m_a`$ equal to 1 exhaust all possible subgroups $`G_0`$. Furthermore, the mappings $`c_\pm `$ corresponding to a general $``$-gradations should take values in subalgebras $`𝔤_{\pm l}`$, where $`l`$ is less or equal to the minimal value of the positive integers $`m_a`$. It is clear that the blocks $`(c_\pm )_{ab}`$ are nonzero only if $`|ab|=1`$. Therefore, the general form of the mappings $`c_\pm `$ is again given by (3.3), where the mappings $`C_{\pm a}`$ corresponding to the grading indexes greater than $`l`$ should be zero mappings. ## 4 Complex orthogonal group It is convenient for our purposes to define the complex orthogonal group $`\mathrm{O}(n,)`$ as the Lie subgroup of the Lie group $`\mathrm{GL}(n,)`$ formed by matrices $`a\mathrm{GL}(n,)`$ satisfying the condition $$\stackrel{~}{I}_na^t\stackrel{~}{I}_n=a^1,$$ (4.1) where $`\stackrel{~}{I}_n`$ is the antidiagonal unit $`n\mathrm{\times }n`$ matrix, and $`a^t`$ is the transpose of $`a`$. The corresponding Lie algebra $`𝔬(n,)`$ is the subalgebra of $`𝔤𝔩(n,)`$ which consists of the matrices $`x`$ satisfying the condition $$\stackrel{~}{I}_nx^t\stackrel{~}{I}_n=x.$$ (4.2) For a $`k_1\mathrm{\times }k_2`$ matrix $`a`$ we will denote by $`a^T`$ the matrix defined by the relation $$a^T=\stackrel{~}{I}_{k_2}a^t\stackrel{~}{I}_{k_1}.$$ Using this notation, we can rewrite conditions (4.1) and (4.2) as $`a^T=a^1`$ and $`x^T=x`$. The Lie algebra $`𝔬(n,)`$ is simple. For $`n=2r+1`$ it is of type $`B_r`$, while for $`n=2r`$ it is of type $`D_r`$. Discuss these two cases separately. Consider the $``$-gradation of $`𝔬(2r+1,)`$ arising when we choose $`q_d=1`$ for some fixed $`d`$ such that $`1dr`$, and put all other labels of the Dynkin diagram be equal to zero. 
The Cartan matrix for the Lie algebra $`𝔬(2r+1,)`$ is given by $$k=\left(\begin{array}{cccccccc}\hfill 2& \hfill 1& \hfill 0& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill 1& \hfill 2& \hfill 1& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill 0& \hfill 1& \hfill 2& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& \mathrm{}& \hfill \mathrm{}& & \hfill \mathrm{}& \hfill \mathrm{}\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 2& & \hfill 1& \hfill 0\\ & & & & & & & \\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 1& & \hfill 2& \hfill 2\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 0& & \hfill 1& \hfill 2\end{array}\right),$$ and for its inverse one has the expression $$k^1=\frac{1}{2}\left(\begin{array}{ccccccccccc}2& & 2& & 2& \mathrm{}& 2& & 2& & 2\\ & & & & & & & & & & \\ 2& & 4& & 4& \mathrm{}& 4& & 4& & 4\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 6& & 6& & 6\\ \mathrm{}& & \mathrm{}& & \mathrm{}& \mathrm{}& \mathrm{}& & \mathrm{}& & \mathrm{}\\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r2)& & 2(r2)\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r1)& & 2(r1)\\ & & & & & & & & & & \\ 1& & 2& & 3& \mathrm{}& r2& & r1& & r\end{array}\right).$$ Using relation (2.2), one gets $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \underset{i=1}{\overset{r1}{}}}ih_i+{\displaystyle \frac{1}{2}}rh_r,d=r,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \underset{i=1}{\overset{r1}{}}}ih_i+{\displaystyle \frac{1}{2}}(r1)h_r,d=r1,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \underset{i=1}{\overset{d}{}}}ih_i+d{\displaystyle \underset{i=d+1}{\overset{r1}{}}}h_i+{\displaystyle \frac{1}{2}}dh_r,1d<r1.`$ It is convenient to choose the following Cartan generators of $`𝔬(2r+1,)`$: $`h_i=e_{i,i}e_{i+1,i+1}+e_{2r+1i,2r+1i}e_{2r+2i,2r+2i},1i<r,`$ $`h_r=2(e_{r,r}e_{r+2,r+2}),`$ where the matrices $`e_{i,j}`$ are defined by (3.1). Using these expressions one obtains $$\stackrel{\left(d\right)}{𝑞}=\underset{i=1}{\overset{d}{}}e_{i,i}\underset{i=1}{\overset{d}{}}e_{2r+2i,2r+2i}.$$ Denoting $`k_1=d`$ and $`k_2=2(rd)+1`$, we write $`q`$ in block matrix form, $$\stackrel{\left(d\right)}{𝑞}=\left(\begin{array}{ccc}I_{k_1}& 0& 0\\ 0& 0& 0\\ 0& 0& I_{k_1}\end{array}\right),$$ (4.3) where zero on the diagonal stands for the $`k_2\mathrm{\times }k_2`$ block of zeros. 
The Cartan matrix for the Lie algebra $`𝔬(2r,)`$ has the form $$k=\left(\begin{array}{cccccccc}\hfill 2& \hfill 1& \hfill 0& \mathrm{}& & \hfill 0& \hfill 0& \hfill 0\\ \hfill 1& \hfill 2& \hfill 1& \mathrm{}& & \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 1& \hfill 2& \mathrm{}& & \hfill 0& \hfill 0& \hfill 0\\ \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& \mathrm{}& & \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}\\ & & & & & & & \\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& & \hfill 2& \hfill 1& \hfill 1\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& & \hfill 1& \hfill 2& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& & \hfill 1& \hfill 0& \hfill 2\end{array}\right),$$ and its inverse is $$k^1=\frac{1}{4}\left(\begin{array}{cccccccccccc}4& & 4& & 4& \mathrm{}& & 4& & 2& & 2\\ & & & & & & & & & & & \\ 4& & 8& & 8& \mathrm{}& & 8& & 4& & 4\\ & & & & & & & & & & & \\ 4& & 8& & 12& \mathrm{}& & 12& & 6& & 6\\ \mathrm{}& & \mathrm{}& & \mathrm{}& \mathrm{}& & \mathrm{}& & \mathrm{}& & \mathrm{}\\ & & & & & & & & & & & \\ 4& & 8& & 12& \mathrm{}& & 4(r2)& & 2(r2)& & 2(r2)\\ & & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& & 2(r2)& & r& & r2\\ & & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& & 2(r2)& & r2& & r\end{array}\right).$$ In this case one obtains $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r2}{}}}ih_i+{\displaystyle \frac{1}{4}}(r2)h_{r1}+{\displaystyle \frac{1}{4}}rh_r,d=r,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r2}{}}}ih_i+{\displaystyle \frac{1}{4}}rh_{r1}+{\displaystyle \frac{1}{4}}(r2)h_r,d=r1,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \underset{i=1}{\overset{d}{}}}ih_i+d{\displaystyle \underset{i=d+1}{\overset{r2}{}}}h_i+{\displaystyle \frac{1}{2}}d(h_{r1}+h_r),1d<r1.`$ Choose as the Cartan generators of $`𝔬(2r,)`$ the elements $`h_i=e_{i,i}e_{i+1,i+1}+e_{2ri,2ri}e_{2r+1i,2r+1i},1i<r,`$ $`h_r=e_{r1,r1}+e_{r,r}e_{r+1,r+1}e_{r+2,r+2}.`$ Then it is easy to see that $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r}{}}}e_{i,i}{\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r}{}}}e_{2r+1i,2r+1i},d=r,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r1}{}}}e_{i,i}{\displaystyle \frac{1}{2}}e_{r,r}+{\displaystyle \frac{1}{2}}e_{r+1,r+1}{\displaystyle \frac{1}{2}}{\displaystyle \underset{i=1}{\overset{r1}{}}}e_{2r+1i,2r+1i},d=r1,`$ $`\stackrel{\left(d\right)}{𝑞}={\displaystyle \underset{i=1}{\overset{d}{}}}e_{i,i}{\displaystyle \underset{i=1}{\overset{d}{}}}e_{2r+1i,2r+1i},1d<r1.`$ Note that the grading operators corresponding to the cases $`d=r`$ and $`d=r1`$ are connected by the automorphism $`\sigma `$ of $`𝔬(2r,)`$ defined by the relation $`\sigma (x)=axa^1`$, where $$a=\left(\begin{array}{cccccccc}1& \mathrm{}& 0& 0& 0& 0& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 1& 0& 0& 0& \mathrm{}& 0\\ 0& \mathrm{}& 0& 0& 1& 0& \mathrm{}& 0\\ 0& \mathrm{}& 0& 1& 0& 0& \mathrm{}& 0\\ 0& \mathrm{}& 0& 0& 0& 1& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 0& 0& 0& 0& \mathrm{}& 1\end{array}\right).$$ There is the corresponding automorphism of the Lie group $`\mathrm{O}(2r,)`$, which is defined by the same formula. Thus, the cases $`d=r`$ and $`d=r1`$ leads actually to the same $``$-gradation. 
For the case $`d=r`$ the grading operator has the following block form $$\stackrel{\left(r\right)}{𝑞}=\frac{1}{2}\left(\begin{array}{cc}I_k& 0\\ 0& I_k\end{array}\right),$$ (4.4) where we denoted $`k=r`$. In the case $`1d<r2`$ denoting $`k_1=d`$ and $`k_2=2(rd)`$ one sees that the grading operator $`q`$ has form (4.3). The grading operator of a general $``$-gradation of the Lie algebra $`𝔬(n,)`$ is again a linear combination of the grading operators $`\stackrel{\left(d\right)}{𝑞}`$ with non-negative integer coefficients. Using the explicit form of the operators $`\stackrel{\left(d\right)}{𝑞}`$ and taking into account the existence of the automorphism of $`𝔬(2r,)`$ described above, we come to the following explicit description of the $``$-gradations of $`𝔬(n,)`$. A $``$-gradation of $`𝔬(n,)`$ is determined first by a fixation of block matrix representation of the elements of $`𝔬(n,)`$. Here any element $`x`$ is seen as a $`p\mathrm{\times }p`$ block matrix $`(x_{ab})`$, where $`pn`$ and $`x_{ab}`$ is a $`k_a\mathrm{\times }k_b`$ matrix. Now the positive integers $`k_a`$ are not arbitrary. They are restricted by the relation $$k_a=k_{pa+1}.$$ To get a concrete $``$-gradation, one also have to fix a set of positive integers $`m_a`$, $`a=1,\mathrm{},p1`$, subjected to the constraint $$m_a=m_{pa}.$$ The corresponding grading operator has the form (3.2) with $$\rho _a=\frac{1}{2}\left(\underset{b=1}{\overset{a1}{}}m_b+\underset{b=a}{\overset{p1}{}}m_b\right).$$ The structure of the subalgebras $`𝔤_0`$, $`𝔤_{<0}`$, $`𝔤_{>0}`$ and the corresponding subgroups is the same as in the case of general linear group with the exception that we should use only those block matrices which belong to $`\mathrm{SO}(n,)`$. It is clear that the subgroup $`G_0`$ for an odd $`p=2s+1`$ is isomorphic to the Lie group $`\mathrm{GL}(k_1,)\times \mathrm{}\times \mathrm{GL}(k_s,)\times \mathrm{SO}(k_{s+1},)`$ while for an even $`p=2s`$ it is isomorphic to the Lie group $`\mathrm{GL}(k_1,)\times \mathrm{}\times \mathrm{GL}(k_s,)`$. Note that the latter is possible only if $`n`$ is even. Consider now the corresponding Toda equations. As for the case of the general linear group it suffices to consider only the $``$-gradations for which all integers $`m_a`$ are equal to 1. In this case one has $$\rho _a=\frac{p+1}{2}a.$$ The general form of the mappings $`c_\pm `$ is given by (3.3) where $$C_{\pm a}^T=C_{\pm (pa)}.$$ (4.5) We will use the parametrisation of the mapping $`\gamma `$ given by (3.4), where $$\beta _a^T=\beta _{pa+1}^1.$$ (4.6) So for the case $`p=2s+1`$ we have $`s+1`$ independent mappings $`\beta _a`$ and in the case $`p=2s`$ there are $`s`$ independent mappings. The Toda equations has form (3.5)–(3.7), where the mappings $`C_{\pm a}`$ and $`\beta _a`$ satisfy relations (4.5) and (4.6). In the case $`p=2s+1`$ for the independent mappings $`\beta _1,\mathrm{},\beta _{s+1}`$ one can write $`_+(\beta _1^1_{}\beta _1)=\beta _1^1C_{+1}\beta _2C_1,`$ $`_+(\beta _a^1_{}\beta _a)=\beta _a^1C_{+a}\beta _{a+1}C_a+C_{(a1)}\beta _{a1}^1C_{+(a1)}\beta _a,1<as,`$ $`_+(\beta _{s+1}^1_{}\beta _{s+1})=\beta _{s+1}^TC_{+s}^T\beta _s^{1T}C_s^T+C_s\beta _s^1C_{+s}\beta _{s+1}.`$ Note that in this case $`\beta _{s+1}^T=\beta _{s+1}^1`$. 
In the case $`p=2s`$ the independent equations are $`_+(\beta _1^1_{}\beta _1)=\beta _1^1C_{+1}\beta _2C_1,`$ (4.7) $`_+(\beta _a^1_{}\beta _a)=\beta _a^1C_{+a}\beta _{a+1}C_a+C_{(a1)}\beta _{a1}^1C_{+(a1)}\beta _a,1<a<s,`$ (4.8) $`_+(\beta _s^1_{}\beta _s)=\beta _s^1C_{+s}\beta _s^{1T}C_s+C_{(s1)}\beta _{s1}^1C_{+(s1)}\beta _s,`$ (4.9) where $`C_s^T=C_s`$ and $`C_{+s}^T=C_{+s}`$. ## 5 Complex symplectic group We define the complex symplectic group $`\mathrm{Sp}(2r,)`$ as the Lie subgroup of the Lie group $`\mathrm{GL}(2r,)`$ which consists of the matrices $`a\mathrm{GL}(2r,)`$ satisfying the condition $$\stackrel{~}{J}_ra^t\stackrel{~}{J}_r=a^1,$$ where $`\stackrel{~}{J}_r`$ is the matrix given by $$\stackrel{~}{J}_r=\left(\begin{array}{cc}0& \stackrel{~}{I}_r\\ \stackrel{~}{I}_r& 0\end{array}\right).$$ The corresponding Lie algebra $`𝔰𝔭(r,)`$ is defined as the subalgebra of the Lie algebra $`𝔰𝔩(2r,)`$ formed by the matrices $`x`$ which satisfy the condition $$\stackrel{~}{J}_rx^t\stackrel{~}{J}_r=x.$$ The Lie algebra $`𝔰𝔭(r,)`$ is simple, and it is of type $`C_r`$. Therefore, the Cartan matrix of $`𝔰𝔭(r,)`$ is the transpose of the Cartan matrix of $`𝔬(2r,)`$, and the same is true for the inverse of the Cartan matrix of $`𝔰𝔭(r,)`$. Thus, the explicit form of the Cartan matrix is $$k=\left(\begin{array}{cccccccc}\hfill 2& \hfill 1& \hfill 0& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill 1& \hfill 2& \hfill 1& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill 0& \hfill 1& \hfill 2& \mathrm{}& \hfill 0& & \hfill 0& \hfill 0\\ \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& \mathrm{}& \hfill \mathrm{}& & \hfill \mathrm{}& \hfill \mathrm{}\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 2& & \hfill 1& \hfill 0\\ & & & & & & & \\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 1& & \hfill 2& \hfill 1\\ \hfill 0& \hfill 0& \hfill 0& \mathrm{}& \hfill 0& & \hfill 2& \hfill 2\end{array}\right),$$ and for its inverse one has $$k^1=\frac{1}{2}\left(\begin{array}{ccccccccccc}2& & 2& & 2& \mathrm{}& 2& & 2& & 1\\ & & & & & & & & & & \\ 2& & 4& & 4& \mathrm{}& 4& & 4& & 2\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 6& & 6& & 3\\ \mathrm{}& & \mathrm{}& & \mathrm{}& \mathrm{}& \mathrm{}& & \mathrm{}& & \mathrm{}\\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r2)& & r2\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r1)& & r1\\ & & & & & & & & & & \\ 2& & 4& & 6& \mathrm{}& 2(r2)& & 2(r1)& & r\end{array}\right).$$ For any fixed integer $`d`$ such that $`1dr`$, consider the $``$-gradation of $`𝔰𝔭(r,)`$ arising when we choose all the labels of the corresponding Dynkin diagram equal to zero, except the label $`q_d`$, which we choose be equal to $`1`$. Using relation (2.2), we obtain the following expressions for the grading operator, $$\stackrel{\left(d\right)}{𝑞}=\frac{1}{2}\underset{i=1}{\overset{r}{}}ih_i,d=r,\stackrel{\left(d\right)}{𝑞}=\underset{i=1}{\overset{d}{}}ih_i+d\underset{i=d+1}{\overset{r}{}}h_i,1d<r.$$ Using the following choice of the Cartan generators, $`h_i=e_{i,i}e_{i+1,i+1}+e_{2ri,2ri}e_{2r+1i,2r+1i},1i<d,`$ $`h_r=e_{r,r}e_{r+1,r+1},`$ one sees that the grading operator for the case $`d=r`$ has form (4.4) with $`k=r`$, and for the case $`1d<r`$ it has form (4.3) with $`k_1=d`$ and $`k_2=2(rd)`$. So we have the same grading operators and, therefore, the same structure of grading subspaces as we had in the case of the Lie algebra $`𝔰𝔬(2r,)`$. 
In the case of odd $`p=2s+1`$ the subgroup $`G_0`$ is isomorphic to the Lie group $`\mathrm{GL}(k_1,)\times \mathrm{}\times \mathrm{GL}(k_s,)\times \mathrm{Sp}(k_{s+1},)`$. Note that here $`k_{s+1}`$ is even. In the case $`p=2s`$ the subgroup $`G_0`$ is isomorphic to $`\mathrm{GL}(k_1,)\times \mathrm{}\times \mathrm{GL}(k_s,)`$. Without any loss of generality we assume that all integers $`m_a`$ characterising the $``$-gradation are equal to 1. The general form of the mappings $`c_\pm `$ is given by (3.3), where in the case $`p=2s+1`$ one has $`C_a^T=C_{(pa)},C_{+a}^T=C_{+(pa)},as,`$ $`\stackrel{~}{I}_{k_s}C_s^t\stackrel{~}{J}_{k_{s+1}/2}=C_{(s+1)},\stackrel{~}{J}_{k_{s+1}/2}C_{+s}^t\stackrel{~}{I}_{k_s}=C_{+(s+1)}.`$ In the case $`p=2s`$ the mappings $`C_{\pm a}`$ should satisfy the relations $`C_a^T=C_{(pa)},C_{+a}^T=C_{+(pa)},as,`$ $`C_s^T=C_s,C_{+s}^T=C_{+s}.`$ To write the Toda equations in an explicit form we use again the parametrisation (3.4), where in the case $`p=2s+1`$ one has $`\beta _a^T=\beta _{pa+1}^1,as+1,`$ $`\stackrel{~}{J}_{k_{s+1}/2}\beta _{s+1}^t\stackrel{~}{J}_{k_{s+1}/2}=\beta _{s+1}^1,`$ whereas in the case $`p=2s`$ $$\beta _a^T=\beta _{pa+1}^1$$ for any $`a=1,\mathrm{},2s`$. The independent Toda equations in the case $`p=2s+1`$ are $`_+(\beta _1^1_{}\beta _1)=\beta _1^1C_{+1}\beta _2C_1,`$ $`_+(\beta _a^1_{}\beta _a)=\beta _a^1C_{+a}\beta _{a+1}C_a+C_{(a1)}\beta _{a1}^1C_{+(a1)}\beta _a,1<as,`$ $`_+(\beta _{s+1}^1_{}\beta _{s+1})=\beta _{s+1}^1\stackrel{~}{J}_{k_{s+1}/2}C_{+s}^t\beta _s^{1t}C_s^t\stackrel{~}{J}_{k_{s+1}/2}+C_s\beta _s^1C_{+s}\beta _{s+1}.`$ In the case $`p=2s`$ one has equations (4.7)–(4.9), where $`C_s^T=C_s`$ and $`C_{+s}^T=C_{+s}`$. ## 6 Concluding remarks To construct the general solution for the equations described in the present paper one can apply the method based on the Gauss decomposition. For some partial cases this is done in our paper . The method based on the representation theory was applied to this problem by A. N. Leznov . One can also use the methods considered by A. N. Leznov and E. A. Yusbashjan and by P. Etingof, I. Gelfand and V. Retakh which lead to some very simple forms of the solution but, unfortunately, cannot be applied in general situation. It is worth to note that all nonabelian Toda equations associated with the Lie groups $`\mathrm{SO}(n,)`$ and $`\mathrm{Sp}(n=2m,)`$ can be obtained by reduction of appropriate equations associated with the Lie group $`\mathrm{GL}(n,)`$. Actually this fact can be proved without using concrete matrix realisation of the Lie groups and Lie algebras under consideration.<sup>1</sup><sup>1</sup>1We are thankful to A. N. Leznov for the discussion of this point. The results obtained above can be generalised to the case of higher grading Toda equations and multidimensional Toda-type equations . From the point of view of physical applications it is interesting to investigate possible reductions to real Lie groups. Some results in this direction valid for $``$-gradations generated by the Cartan generator of some $`\mathrm{SL}(2,)`$-subgroup of $`G`$ are obtained by J. M. Evans and J. O. Madsen . We believe that nonabelian Toda equations are quite relevant for a number of problems of theoretical and mathematical physics, and in a near future their role for the description of nonlinear phenomena in many areas will be not less than that of the abelian Toda equations. ## Acknowledgements It is a pleasure to thank J.-L. Gervais and Yu. I. Manin for many fruitful discussions. 
The research program of the authors was supported in part by the Russian Foundation for Basic Research under grant # 98–01–00015 and by INTAS grant# 96–690.
no-problem/9909/cond-mat9909133.html
ar5iv
text
# Mesoscopic fluctuations of the Coulomb drag ## Abstract We consider mesoscopic fluctuations of the Coulomb drag coefficient $`\rho _D`$ in the system of two separated two-dimensional electron gases. It is shown that at low temperatures sample to sample fluctuations of $`\rho _D`$ exceed its ensemble average. It means that in such a regime the sign of $`\rho _D`$ is random and the temperature dependence almost saturates $`\rho _D1/\sqrt{T}`$. Draft: When two electronic layers are brought close to each other to form a bi-layer system, a current flowing through one of the layers(the active layer) is known to induce a voltage $`V_D=\rho _DI`$ in the other (passive) layer . The effect, which is called the drag, was first predicted theoretically in the model where the carriers in the two spatially separated layers interacted via long-range Coulomb interaction. Experimentally the Coulomb drag was first observed in a three-dimensional electron gas layer while the current was driven through a two-dimensional electron gas (2DEG) . Subsequent experiments studied the effect in 2DEG bi-layers , electron-hole , and normal-metal-superconductor systems . More recently, the drag was studied in the 2DEG bi-layer system in high magnetic field . The quantity, which is studied theoretically is the transconductance $`\sigma _D`$. To the lowest non-vanishing order in the interlayer interaction it is proportional to the drag coefficient $`\rho _D`$ ($`\sigma _i`$ is the Drude conductance of the $`i`$-th layer) $`\rho _D{\displaystyle \frac{\sigma _D}{\sigma _1\sigma _2}}.`$ As a function of temperature the observed $`\sigma _D`$ roughly follows the quadratic law $`\sigma _DT^2`$, although the ratio $`\sigma _D/T^2`$ deviates from the constant value . The $`T^2`$ dependence of the Coulomb drag coefficient follows from the Fermi liquid phase space argument. To create a current in the passive layer, it is necessary to create a pair of electron-like (filled states with energy greater than the Fermi energy $`ϵ>ϵ_F`$) and hole-like excitations (empty states $`ϵ<ϵ_F`$) in a state with non-zero momentum. The energy and momentum of the pair come from an electron in the active layer, which is moving with the driving current. In each layer, the scattering states are limited to the energies of order $`T`$ relative to the Fermi level, which gives two powers of $`T`$ to the drag coefficient. However, the momentum is transferred equally to electrons and holes, therefore in the case of electron-hole symmetry the drag of the electrons cancels that of the holes. Thus the effect is non-zero only due to the electron-hole asymmetry. Similarly, the asymmetry is necessary for the electron and hole system in the active layer to have non-zero total momentum in the first place. The asymmetry can be expressed as a derivative of the density of states $`\nu `$ and/or the diffusion constant $`D`$ with respect to the chemical potential $`\mu `$. This can be obtained rigorously in the diagrammatic formalism . For the case of diffusive layers the disorder-averaged transconductance is $$\sigma _D=\frac{e^2}{\mathrm{}}\frac{\pi ^2}{3}\frac{(\mathrm{}T)^2}{g^2(\kappa d)^2}\left(\frac{}{\mu }\left(\nu D\right)\right)^2\mathrm{ln}\frac{T_0}{2T},$$ (1) where for simplicity we take the layers to be identical, so that they have the same chemical potential, diffusion constant and the dimensionless conductance $`g=25.8k\mathrm{\Omega }/R_{\mathrm{}}`$. 
The logarithm is cut at the scale $`T_0=D\kappa /d`$ and $`\kappa =2\pi e^2\nu `$ is the inverse Thomas-Fermi screening length. Such effects of the electron-hole asymmetry are well-known, for instance the thermopower in disordered electronic systems or adiabatic pumping . As these effects are due to the electron-hole asymmetry, the average quantities are small, since each derivative with respect to the chemical potential brings one power of the Fermi energy $`E_F`$ to the denominator. On the other hand, the typical energy scale of mesoscopic effects is the Thouless energy $`E_T=\mathrm{}D/L^2`$ ($`L`$ is the sample size), which is much smaller than the Fermi energy. Therefore the effects mentioned above exhibit mesoscopic fluctuations, much larger than the average. In this Letter we show that the mesoscopic fluctuations of the Coulomb drag coefficient can indeed be larger than the average Eq. (1), even if the electron systems in both layers are good metals ($`g1`$). To characterize the fluctuations, we calculate the average square of the (random) transconductance. The result of lengthy albeit straightforward calculations shown below is given by $`\sigma _D^{\alpha \beta }\sigma _D^{\alpha ^{}\beta ^{}}=(\delta ^{\alpha \alpha ^{}}\delta ^{\beta \beta ^{}}+\delta ^{\alpha \beta ^{}}\delta ^{\alpha ^{}\beta })\sigma _D^2,`$ (3) (4) $`\sigma _D^2={\displaystyle \frac{\gamma }{18\pi ^3}}\left({\displaystyle \frac{32\mathrm{ln}214}{3}}\right){\displaystyle \frac{e^4}{\mathrm{}^2}}{\displaystyle \frac{E_T\tau _\phi \mathrm{ln}\kappa d}{g^4(\kappa d)^3}},`$ (5) where the numerical factor $`\gamma =1.0086`$ is the value of the integral $`\gamma `$ $`={\displaystyle \frac{1}{2}}{\displaystyle \underset{0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{dx_1dx_2x_1x_2}{(x_1^2+x_2^2)}}\left[J_0(x_1)J_0(x_2)+J_2(x_1)J_2(x_2)\right]`$ $`\left(K_0(x_1)K_0(x_2)+\left[{\displaystyle \frac{2}{x_1^2}}K_2(x_1)\right]\left[{\displaystyle \frac{2}{x_2^2}}K_2(x_2)\right]\right),`$ where $`J_i(x)`$ and $`K_i(x)`$ are the Bessel functions . Here $`\tau _\phi (E_T)^1`$ is the dephasing time. This result is valid in the most relevant regime $`L_\phi =\sqrt{D\tau _\phi }L`$ and $`\kappa d1`$. If $`\kappa d1`$, then the average square of the conductance is $`\sigma _D^2\frac{e^4}{\mathrm{}^2}\frac{E_T\tau _\phi }{g^4}`$ with the coefficient of order unity. In what follows we first discuss the experimental consequences of our results, then explain it qualitatively and finally give the rigorous calculation. The fluctuations Eq. (Mesoscopic fluctuations of the Coulomb drag) depend on temperature only through the dephasing time $`1/\tau _\phi T/g`$ and at low enough temperatures they should dominate the behavior of the transconductance. Therefore the $`T^2`$ decrease of $`\sigma _D`$ should at some small temperature $`T_{}`$ be almost saturated at a sample-dependent value. Let us estimate $`T_{}`$ for the samples used in existing experiments using the reported parameters of the samples. Collecting the numerical factors, we write the ratio of the square of the average transconductance Eq. (1) and the averaged square Eq. 
(Mesoscopic fluctuations of the Coulomb drag) as $`{\displaystyle \frac{\sigma _D^2}{\sigma _D^2}}=\left({\displaystyle \frac{gT}{E_F}}\right)^4{\displaystyle \frac{1}{E_T\tau _\phi }}{\displaystyle \frac{20}{\kappa d\mathrm{ln}\kappa d}}.`$ We take the interlayer spacing to be $`d=200\AA `$ ; the screening length in $`GaAs`$ is $`\kappa ^1=100\AA `$; the Thouless energy is given by $`E_T=g/(2\pi \nu L^2)`$; and the dephasing time $`\tau _\phi ^1T\mathrm{ln}g/g`$ . Then we estimate $`T_{}`$ as the temperature at which the ratio $`\sigma _D^2/\sigma _D^2`$ is equal to unity $`T_{}=E_F\left(16\pi g^2nL^2\right)^{1/5}0.2K,`$ where the Fermi energy in the samples $`E_F60K`$, the electron density $`n=1.5\times 10^{11}cm^2`$, the size of the sample $`L200\mu m`$, and the conductance is calculated from the sheet resistance of the sample $`R=10\mathrm{\Omega }/\mathrm{}`$. The estimated $`T_{}`$ is lower than the temperature range for the existing data , therefore there is no trace of the fluctuations Eq. (Mesoscopic fluctuations of the Coulomb drag) in the data. However, if one takes a dirtier sample, with the sheet resistance, for instance, $`1k\mathrm{\Omega }/\mathrm{}`$, then the estimate for $`T_{}`$ becomes $`2K`$ and the effect of the fluctuations becomes observable. To push $`T_{}`$ even higher, the sample size can be also reduced. Let us now explain Eq. (Mesoscopic fluctuations of the Coulomb drag) qualitatively. First, consider the lowest temperature regime $`TE_T`$, so that the sample is effectively zero-dimensional (0D). The mesoscopic fluctuations of the usual conductance are universal $`\delta \sigma e^2/\mathrm{}`$. The transconductance is associated with the interlayer interactions, thus possessing additional smallness. The value of such smallness can be estimated by the Golden rule argument which is comprised by (i) phase volume; (ii) matrix elements; (iii) electron-hole asymmetry (dependence of the density of states on the energy). Matrix elements in 0D do not depend on energy and give smallness $`1/g^2`$ . Therefore the phase volume is limited by temperature only, which gives the factor $`T^2`$. Finally, the electron-hole asymmetry $`_\mu (\mathrm{ln}\nu (\mu ))`$ is the random quantity with the typical value $`1/E_T`$. Putting everything together, we arrive to the estimate $`\mathrm{r}.m.s.\delta \sigma _D{\displaystyle \frac{e^2}{\mathrm{}}}{\displaystyle \frac{T^2}{(E_Tg)^2}}`$ (6) At higher temperatures the averaging should be performed by dividing the sample into patches of the size $`L_\phi \times L_\phi `$, since on larger scales the phase coherence is destroyed. The contribution of each patch $`\delta \sigma _D^2(L_\phi )`$ is approximately the same and they can be simply comboned as a network of random resistors $`\delta \sigma _D^2=\delta \sigma _D^2(L_\phi )\left({\displaystyle \frac{L_\phi }{L}}\right)^2`$ (7) Now, $`\delta \sigma _D^{\mathrm{𝑟𝑚𝑠}}(L_\phi )`$ can be found similar to Eq. (6) with two important differences: (i) the fluctuations of the density of states are summed from the scale of order $`T`$, rather than $`E_T(L_\phi )`$. This suppresses the fluctuations in each layer by the factor $`\sqrt{E_T(L_\phi )/T}`$. (ii) The matrix elements become energy dependent on the energy scale larger than $`E_T`$, decreasing with the transmitted energy $`\omega `$ as $`|M|^21/\omega ^2`$. 
As a result, the transmitted energy is limited by $`\omega E_T(L_\phi )=1/\tau _\phi `$, rather then by $`T`$, so in the estimate of the phase volume we should replace $`T^2`$ by $`T/\tau _\phi `$. So, we find $`\delta \sigma _D(L_\phi ){\displaystyle \frac{T\left(\frac{1}{\tau _\phi }\right)}{(E_T(L_\phi )g)^2}}\left(\sqrt{{\displaystyle \frac{E_T(L_\phi )}{T}}}\right)^2g^2`$ (8) Finally, to estimate the total magnitude of the transconductance fluctuations we substitute Eq. (8) into Eq. (7) to obtain $`\delta \sigma _D^2g^4\left({\displaystyle \frac{L_\phi }{L}}\right)^2g^4E_T\tau _\phi .`$ This estimate yields the same result as Eq. (Mesoscopic fluctuations of the Coulomb drag) up to numerical factors and the dependence on $`\kappa d`$. Our results suggest the following picture of the Coulomb drag. If one starts measuring the drag coefficient at high $`T`$ and proceeds by lowering the temperature, then at first the transresistance will decrease roughly as $`T^2`$, as follows from Eq. (1). At the temperature $`T_{}`$, estimated above, the transresistance will appear to saturate ($`\sigma _D1/\sqrt{T}`$), as the fluctuations Eq. (Mesoscopic fluctuations of the Coulomb drag) will start to dominate. The particular value of the prefactor will be sample dependent and, what is more important, will have random sign. If the temperature will be decreased further, then at very low temperatures $`T<E_T`$ the sample will effectively become zero dimensional and the $`T^2`$ decrease will be restored (also with a random coefficient), so that at $`T=0`$ the drag coefficient vanishes. Let us now present the calculation. The electrons in both layers interact via the Coulomb interaction. The interaction propagators corresponding to the dynamically screened Coulomb interaction can be obtained within the RPA scheme (see Fig. 1B) using the Green’s functions of non-interacting electrons. For our purposes we only need the propagator of the interlayer interactions, which is given by (here we set the layers to be identical, so that they have the same density of states $`\nu `$ and diffusion coefficient $`D`$) $$𝒟^R(\omega ,Q)=\frac{1}{2\nu DQ^2}\frac{(i\omega +DQ^2)^2}{i\omega +(1+\kappa d)DQ^2}.$$ (9) The transresistance in the disordered two-layer system can be expressedin terms of the exact Green’s functions of non-interacting, disordered electron system and the interaction propagators Eq. (9). To the lowest non-vanishing order in the interlayer interaction the transresistance is given by the diagram Fig. 1A. The left and right triangles correspond to the two layers in the system and the wavy line is the interlayer interaction propagator Eq. (9). As the electron Greens’s functions now depend on disorder, this $`\sigma _D`$ is a random quantity and its moments should be averaged over disorder. Before averaging, the expression for $`\sigma _D`$ corresponding to the diagram on Fig. 1 can be written as $$\sigma _D^{\alpha \beta }=\frac{1}{4V}\frac{d\omega }{2\pi }\left(\frac{}{\omega }\mathrm{coth}\frac{\omega }{2T}\right)𝒟_{12}^R\mathrm{\Gamma }_{23}^\alpha 𝒟_{34}^A\mathrm{\Gamma }_{41}^\beta ,$$ (10) where the indices indicate the spatial coordinates. Points $`1,2`$ belong to one layer and $`3,4`$ to the other. 
The triangular vertices $`\mathrm{\Gamma }^\alpha `$ are given by $$\mathrm{\Gamma }_{12}^\alpha (\omega )=\int \frac{dϵ}{2\pi }\left[J_{12}^\alpha (\omega ,ϵ)+J_{21}^\alpha (\omega ,ϵ)+I_{12}^\alpha (\omega ,ϵ)\right],$$ (12) where $`J_{12}^\alpha (\omega ,ϵ)=\left(\mathrm{tanh}{\displaystyle \frac{ϵ-\omega }{2T}}-\mathrm{tanh}{\displaystyle \frac{ϵ}{2T}}\right)\left[G_{12}^R(ϵ-\omega )-G_{12}^A(ϵ-\omega )\right]\left[G^R(ϵ)j^\alpha G^A(ϵ)\right]_{21};`$ (15) $`I_{12}^\alpha (\omega ,ϵ)=\left(\mathrm{tanh}{\displaystyle \frac{ϵ-\omega }{2T}}-\mathrm{tanh}{\displaystyle \frac{ϵ}{2T}}\right)(r_1-r_2)^\alpha \left[G_{12}^R(ϵ)G_{21}^R(ϵ-\omega )-G_{12}^A(ϵ)G_{21}^A(ϵ-\omega )\right].`$ (18) The exact electronic Green’s functions used in Eqs. (15) and (18) can be written in terms of the exact wavefunctions of the system as $`G_{12}^{R(A)}(ϵ)={\displaystyle \underset{j}{\sum }}{\displaystyle \frac{\mathrm{\Psi }_j^{*}(\stackrel{}{r}_1)\mathrm{\Psi }_j(\stackrel{}{r}_2)}{ϵ-ϵ_j\pm ı0}},`$ where $`j`$ labels the exact eigenstates of the system and $`ϵ_j`$ are the exact eigenvalues. The known result for the averaged transconductance Eq. (1) is obtained by averaging the triangular vertices $`\mathrm{\Gamma }^\alpha `$ independently for each layer (for the effect of correlated disorder see Ref. ). However, as we discussed above, in the intermediate temperature range the fluctuations exceed the average, and the temperature dependence saturates. To characterize the fluctuations we average the square of the transconductance. The mesoscopic fluctuations of the interaction propagators can be neglected, because they produce only a small fluctuating coefficient in Eq. (10). Therefore, we have to average the product of two triangular vertices $`\mathrm{\Gamma }^\alpha `$ (in the same layer; note that in this temperature regime Eq. (18) does not contribute). The corresponding diagrams are shown in Fig. 2. After averaging, each diagram contains six diffusons and one Hikami box (see Fig. 3). The calculation of the resulting 14-dimensional integral is greatly simplified by the following observation. The dominant contribution to the frequency integrals Eq. (12) comes from the region where the external frequencies \[which are the frequencies of the interaction propagators Eq. (9)\] satisfy $`\omega _1+\omega _2\sim \kappa d/\tau _\phi `$ and $`\omega _1-\omega _2\sim 1/\tau _\phi `$, so that the frequency difference is small compared to the sum. The momentum integral is dominated by the region $`1/\tau _\phi \lesssim DQ^2\lesssim \kappa d/\tau _\phi `$. Since we are in the intermediate temperature regime $`T>E_T`$, the energy transfer $`\omega `$ is smaller than the temperature, and the vertices Eq. (12) can be expanded in the inverse temperature. Now the dimensional analysis gives the resulting expression for the fluctuations, Eq. (5), which depends on temperature only through $`\tau _\phi `$, as we discussed above. The numerical coefficient can now be obtained by performing the integration without further approximations. The factor $`\gamma `$ comes from the angle integration and the numerical factor in Eq. (5) from the integration over the small frequency difference. In conclusion, we have described the mesoscopic fluctuations of the Coulomb drag coefficient, or the transconductance. The fluctuations are characterized by the average square of the random, disorder-dependent transconductance, Eq. (5). 
Compared to the averaged transconductance Eq. (1), the fluctuations Eq. (5) are determined by the Thouless energy rather than the Fermi energy. Therefore, there exists an intermediate temperature regime where the fluctuations are greater than the average, which results in the weak ($`1/\sqrt{T}`$) temperature dependence of the transconductance in this regime. Moreover, in this regime $`\sigma _D`$ is a random, sample-dependent quantity, so that the sign of the measured value is also random. Since the average transconductance Eq. (1) grows as $`T^2`$, at higher temperatures ($`T>T_{*}`$) the fluctuations are small and the measured $`\sigma _D`$ is roughly equal to the average. This was the case in the existing experiments . For the samples used in the existing experiments we estimated the crossover temperature $`T_{*}\approx 0.2\mathrm{K}`$, which was below the temperature range used in these experiments. To observe the effect of the fluctuations, one needs to take a dirtier sample of smaller size. Then $`T_{*}`$ can be equal to several Kelvin, and the saturation of $`\sigma _D`$ to a value with random sign can be observed. Finally, we notice that the Coulomb drag coefficient may also be presented as the product of two random numbers, $`\rho _D\sim a_1a_2`$, where $`a_1`$, $`a_2`$ characterize each layer. If the disorder is correlated, the average $`\langle a_1a_2\rangle `$ appears, which leads to the results of Ref. . In this respect, the results of Ref. are just a particular manifestation of the mesoscopic fluctuations of $`\rho _D`$ discussed in this Letter. We acknowledge helpful conversations with B.L. Altshuler, Fei Zhou, and especially with A. Kamenev. The work at Ruhr-Universität Bochum was supported by SFB 237 “Unordnung und große Fluktuationen”. I.A. is an A.P. Sloan and Packard research fellow.
# Slow Blow Up in the (2+1)-dimensional $`S^2`$ Sigma Model ## 1 Introduction In this paper we study a hyperbolic partial differential equation that develops a singularity in finite time. The two-dimensional $`S^2`$ sigma model has been studied extensively over the past few years. It is a good toy model for studying two-dimensional analogues of elementary particles in the framework of classical field theory. Elementary particles are described by classical extended solutions of this model, called solitons. The model is then extended to (2+1) dimensions; the previous solitons persist as static, time-independent solutions, and the dynamics of these solitons are then studied. Since this model is not integrable in (2+1) dimensions, studies are performed numerically, and analytic estimates are made by cutting off the model outside a radius $`R`$. The model can also be regarded as the continuum limit of an array of Heisenberg ferromagnets. The $`S^2`$ sigma model displays both slow blow up and fast blow up. In slow blow up, all relevant speeds go to zero as the singularity is approached. In fast blow up the relevant speeds do not go to zero as the singularity is approached. The charge 1 sector of the $`S^2`$ sigma model exhibits logarithmic slow blow up, whereas the charge 2 sector and the similar (4+1)-dimensional Yang-Mills model both exhibit fast blow up. The static Lagrangian density for the $`S^2`$ sigma model is given by $$L=|\nabla \stackrel{}{\varphi }|^2,$$ where $`\stackrel{}{\varphi }`$ is a unit vector field. In the dynamic version of this problem, where $`\varphi :\mathbb{R}^{2+1}\to S^2`$, the Lagrangian is $$L=\int _{\mathbb{R}^2}|\partial _t\stackrel{}{\varphi }|^2-|\nabla \stackrel{}{\varphi }|^2.$$ Identifying $`S^2=\mathbb{C}P^1=\mathbb{C}\cup \{\mathrm{\infty }\}`$ we can rewrite this in terms of a complex scalar field $`u`$: $$L=\int _{\mathbb{R}^2}\frac{|\partial _tu|^2}{(1+|u|^2)^2}-\frac{|\nabla u|^2}{(1+|u|^2)^2}.$$ (1) The calculus of variations on this Lagrangian, in conjunction with integration by parts, yields the following equation of motion for the $`\mathbb{C}P^1`$ model: $$(1+|u|^2)(\partial _t^2u-\partial _x^2u-\partial _y^2u)=2\overline{u}\left((\partial _tu)^2-(\partial _xu)^2-(\partial _yu)^2\right)$$ (2) Here $`\overline{u}`$ represents the complex conjugate of $`u`$. The first thing to identify in this problem are the static solutions determined by equation (2). These are outlined in the literature. The space of static solutions can be broken into finite-dimensional manifolds $`M_n`$ consisting of the harmonic maps of degree $`n`$. If $`n`$ is a positive integer, then $`M_n`$ consists of the set of all rational functions of $`z=x+iy`$ of degree $`n`$. For this paper, we restrict our attention to $`M_1`$, the charge one sector, on which all static solutions have the form $$u=\alpha +\beta (z+\gamma )^{-1}.$$ (3) In order to simplify, consider only solutions of the form $$\beta z^{-1}.$$ The geodesic approximation says that for slow velocities solutions should evolve close to $$\frac{\beta (t)}{z}.$$ Instead of $`\beta (t)`$, we look for a real, radially symmetric function $`f(r,t)`$, and find the evolution of $$\frac{f(r,t)}{z}.$$ The difference between the evolution of $`f(0,t)`$ and that predicted in the geodesic approximation, together with the deviation of $`f(r,T)`$, with $`T`$ fixed, from a horizontal line, gives us a means to gauge how good the geodesic approximation is. It is straightforward to calculate the evolution equation for $`f(r,t)`$. It is: $$\partial _t^2f=\partial _r^2f+\frac{3\partial _rf}{r}-\frac{4r\partial _rf}{f^2+r^2}+\frac{2f}{f^2+r^2}\left((\partial _tf)^2-(\partial _rf)^2\right).$$ (4) The static solutions for $`f(r,t)`$ are the horizontal lines $`f(r,t)=c`$. 
Here $`c`$ sets the length scale. In the adiabatic limit, motion under small velocities should progress from line to line, i.e., $`f(r,t)=c(t)`$. $`f(r,t)=0`$ is a singularity of this system, where the instantons are not well defined. We use this to form a numerical approximation to the adiabatic limit and observe the progression from $`f(r,0)=c_0>0`$ towards this singularity. ## 2 Numerics for the $`\mathbb{C}P^1`$ charge 1 sector model A finite difference method is used to compute the evolution of (4) numerically. Centered differences are used consistently, except for $$\partial _r^2f+\frac{3\partial _rf}{r}.$$ (5) In order to avoid serious instabilities in (4), this term is modeled in a special way. Let $$\mathrm{\Lambda }f=r^{-3}\partial _r\left(r^3\partial _rf\right)=\partial _r^2f+\frac{3\partial _rf}{r}.$$ This operator has negative real spectrum, hence it is stable. The naive central differencing scheme for (5) results in unbounded growth at the origin, but the natural differencing scheme for this operator does not. It is $$\mathrm{\Lambda }f\approx r^{-3}\left[\frac{\left(r+{\displaystyle \frac{\delta }{2}}\right)^3\left({\displaystyle \frac{f(r+\delta )-f(r)}{\delta }}\right)-\left(r-{\displaystyle \frac{\delta }{2}}\right)^3\left({\displaystyle \frac{f(r)-f(r-\delta )}{\delta }}\right)}{\delta }\right].$$ Questions have arisen in earlier work about the stability of the solitons for this equation, where it was found that a soliton would shrink without perturbation from what should be a resting state. This seems to be an artifact of the numerical scheme used in those experiments. The numerical scheme used here has no such stability problems: experiments show that a stationary solution is indeed stationary unless perturbed by the addition of an initial velocity. A full analysis of the stability of this numerical scheme is given elsewhere. With the differencing explained, we want to derive $`f(r,t+\mathrm{\Delta }t)`$. We always have an initial guess at $`f(r,t+\mathrm{\Delta }t)`$. In the first time step it is $`f(r,t+\mathrm{\Delta }t)=f(r,t)+v_0\mathrm{\Delta }t`$, with $`v_0`$ the initial velocity given in the problem. On subsequent time steps it is $`f(r,t+\mathrm{\Delta }t)=2f(r,t)-f(r,t-\mathrm{\Delta }t)`$. This can be used to compute $`\partial _tf(r,t)`$ on the right hand side of (4). We then solve for a new and improved $`f(r,t+\mathrm{\Delta }t)`$ in the differencing for the second derivative $`\partial _t^2f(r,t)`$, and iterate this procedure several times to get increasingly accurate values of $`f(r,t+\mathrm{\Delta }t)`$. There remains the question of boundary conditions. At the origin $`f(r,t)`$ is presumed to be an even function, and this gives $$f(0,t)=\frac{4}{3}f(\mathrm{\Delta }r,t)-\frac{1}{3}f(2\mathrm{\Delta }r,t).$$ At the $`r=R`$ boundary we presume that the function is horizontal, so $`f(R,t)=f(R-\mathrm{\Delta }r,t)`$. 
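The scheme just described is straightforward to implement. The following is a minimal sketch in Python (NumPy), with a simplified single-pass time step in place of the iterated update described above; all parameter values here are illustrative choices, not the ones used for the figures below.

```python
import numpy as np

# Illustrative parameters (assumptions, not the production values):
R, dr, dt = 30.0, 0.01, 0.001
f0, v0 = 1.0, -0.01                  # initial height; negative v0 = shrinking

r = np.arange(0.0, R + dr, dr)
f_old = np.full_like(r, f0)          # f(r, 0)
f = f_old + v0 * dt                  # first step from the initial velocity

def rhs(f, ft, r, dr):
    """Right-hand side of Eq. (4) on the interior grid points."""
    out = np.zeros_like(f)
    i = slice(1, -1)
    rp = (r[i] + dr / 2.0) ** 3
    rm = (r[i] - dr / 2.0) ** 3
    # conservative differencing of  f_rr + 3 f_r / r = r^-3 d_r(r^3 d_r f)
    lam = (rp * (f[2:] - f[1:-1]) - rm * (f[1:-1] - f[:-2])) / (dr**2 * r[i]**3)
    fr = (f[2:] - f[:-2]) / (2.0 * dr)
    denom = f[i]**2 + r[i]**2
    out[i] = lam - 4.0 * r[i] * fr / denom + 2.0 * f[i] * (ft[i]**2 - fr**2) / denom
    return out

for step in range(20000):
    ft = (f - f_old) / dt                          # backward-difference velocity
    f_new = 2.0 * f - f_old + dt**2 * rhs(f, ft, r, dr)
    f_new[0] = (4.0 * f_new[1] - f_new[2]) / 3.0   # even-function condition at r = 0
    f_new[-1] = f_new[-2]                          # horizontal at r = R
    f_old, f = f, f_new
    if f[0] <= 0.0:                                # reached the singularity
        break
```

In this sketch the boundary rules above are applied after each step; the actual runs refine `f_new` a few times per step, as described, before accepting it.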
## 3 Predictions of the Geodesic Approximation In earlier work, the time evolution of the shrinking of solitons was studied by arbitrarily cutting off the Lagrangian outside of a ball of radius $`R`$, to prevent logarithmic divergence of the integral for the kinetic energy, and then analyzing what happens in the $`R\to \mathrm{\infty }`$ limit. In another approach, the problem of the logarithmic divergence in the kinetic energy integral is solved by investigating the model on the sphere $`S^2`$; the radius of the sphere determines a size parameter analogous to the parameter $`R`$ for the size of the ball on which the Lagrangian is evaluated. Our calculations here follow the cutoff approach. Equation (1) gives us the Lagrangian for the general version of this problem. In the geodesic approximation or adiabatic limit we have $$u=\frac{\beta }{z}$$ for our evolution. If we restrict the Lagrangian to this space we get an effective Lagrangian. The integral of the spatial derivatives of $`u`$ gives a constant, the Bogomol’nyi bound, and hence can be ignored. If one integrates the kinetic term over the entire plane, one sees that it diverges logarithmically, so if $`\beta `$ is a function of time, the soliton has infinite energy. Nonetheless, this is what we wish to investigate. We cannot address the entire plane in our numerical procedure either, hence we presume that the evolution takes place in a ball around the origin of size $`R`$. If $`\beta =f(r,t)`$ shrinks to $`0`$ in time $`T`$, we need $`R>T`$. Under these assumptions, up to a multiplicative constant, the effective or cutoff Lagrangian becomes $$L=\int _0^Rr\,dr\frac{r^2(\partial _tf)^2}{(r^2+f^2)^2},$$ which integrates to $$L=\frac{(\partial _tf)^2}{2}\left[\mathrm{ln}\left(1+\frac{R^2}{f^2}\right)-\frac{R^2}{f^2+R^2}\right].$$ Since the potential energy is constant, so is the purely kinetic Lagrangian, and $$\frac{(\partial _tf)^2}{2}\left[\mathrm{ln}\left(1+\frac{R^2}{f^2}\right)-\frac{R^2}{f^2+R^2}\right]=\frac{c^2}{2},$$ with $`c`$ (and hence $`c^2/2`$) a constant. Solving for $`\partial _tf`$ we obtain $$\partial _tf=-\frac{c}{\sqrt{\mathrm{ln}\left(1+{\displaystyle \frac{R^2}{f^2}}\right)-{\displaystyle \frac{R^2}{f^2+R^2}}}}.$$ (6) Since we are starting at some value $`f_0`$ and evolving toward the singularity at $`f=0`$, this gives: $$-\int _{f_0}^{f(0,t)}df\sqrt{\mathrm{ln}\left(1+\frac{R^2}{f^2}\right)-\frac{R^2}{f^2+R^2}}=\int _0^tc\,dt.$$ (7) The integral on the right gives $`ct`$. The integral on the left can be evaluated numerically for given values of $`R`$, $`f_0`$ and $`f(0,t)`$. A plot can then be generated of $`ct`$ vs. $`f(0,t)`$. What we really are concerned with is $`f(0,t)`$ vs. $`t`$, but once the value of $`c`$ is determined this is easily obtained. One such plot, with $`f_0=1.0`$ and $`R=100`$, of $`f(0,t)`$ vs $`ct`$ is given in Figure 1. This curve is not quite linear, as seen by comparison with the best fit line to this data, which is also plotted in Figure 1. The best fit line is obtained by a least squares method. 
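For reference, the left-hand side of Eq. (7) is easily evaluated with standard numerical quadrature. A minimal sketch (Python, assuming SciPy is available), using $`f_0=1.0`$ and $`R=100`$ as in Figure 1:

```python
import numpy as np
from scipy.integrate import quad

R, f0 = 100.0, 1.0

def integrand(x, R):
    # sqrt( ln(1 + R^2/x^2) - R^2/(x^2 + R^2) ), the integrand of Eq. (7)
    return np.sqrt(np.log(1.0 + (R / x) ** 2) - R**2 / (x**2 + R**2))

def ct_of_f(f, R=R, f0=f0):
    """Value of c*t at which f(0,t) has fallen from f0 to f."""
    val, _ = quad(integrand, f, f0, args=(R,))
    return val

for f in (0.8, 0.5, 0.2, 0.05):
    print(f, ct_of_f(f))
```

Dividing the resulting values by $`c`$ converts the curve to $`f(0,t)`$ vs. $`t`$ once $`c`$ is known.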
## 4 Results: Evolution of the Origin The model is run with initial conditions $`f(r,0)=f_0`$ and $`\partial _tf(r,0)=v_0`$, for various small initial velocities $`v_0`$; the other input parameters are $`R=r_{\mathrm{max}}`$, $`\mathrm{\Delta }r`$ and $`\mathrm{\Delta }t`$. The primary concern with the evolution of the horizontal line is the way in which the singularity at $`f(0,t)=0`$ is approached, because, once again, as $`r\to \mathrm{\infty }`$ equation (4) reduces to the linear wave equation $$\partial _t^2f=\partial _r^2f,$$ and so we expect the interesting behavior to occur near $`r=0`$. The evolution of the initial horizontal line remains largely flat and horizontal, although there is some downward slope as time increases. This is shown in Figure 2. We track $`f(0,t)`$ as it heads toward the singularity, and find that its trajectory is not quite linear, as seen in Figure 3. This is suggestive of the result obtained in the predictions for this model. We want to check the legitimacy of the result from equation (7) in Section 3. This requires a determination of the parameters $`R`$, the cutoff, and $`c`$. We already have $`f_0`$ and $`f(0,t)`$. To determine $`R`$ and $`c`$, observe from equation (6) that: $$\frac{1}{(\partial _tf)^2}=\frac{\mathrm{ln}\left(1+{\displaystyle \frac{R^2}{f^2}}\right)-{\displaystyle \frac{R^2}{f^2+R^2}}}{c^2}.$$ (8) Since $`R`$ is large and $`f`$ is small, $$\mathrm{ln}\left(1+\frac{R^2}{f^2}\right)\approx \mathrm{ln}\left(\frac{R^2}{f^2}\right)$$ and $$\frac{R^2}{f^2+R^2}\approx 1.$$ Consequently we can rewrite equation (8) as $$\frac{1}{(\partial _tf)^2}\approx \frac{\mathrm{ln}(R^2)-\mathrm{ln}(f^2)-1}{c^2}.$$ The plot of $`1/(\partial _tf)^2=1/(\partial _tf(0,t))^2`$ against $`\mathrm{ln}(f)=\mathrm{ln}(f(0,t))`$ should thus be linear, with slope $`m=-2/c^2`$ and intercept $`b=(2\mathrm{ln}(R)-1)/c^2`$. Such a plot is easily obtained from the model, and given the slope and intercept, the parameters $`c`$ and $`R`$ are easily obtained. Figure 4 is such a plot, with initial conditions $`\mathrm{\Delta }r=0.01`$, $`\mathrm{\Delta }t=0.001`$, $`f_0=1.0`$ and $`v_0=0.01`$. It is easily seen that although the plot is nearly straight, it is not quite a straight line. This may indicate that the values of $`R`$ and $`c`$ are changing with time. The best fit line $`y=mx+b`$ has slope $`m=-2810`$ and intercept $`b=10200`$. We have $$c=\sqrt{-\frac{2}{m}}$$ and $$R=\mathrm{exp}\left(\frac{1}{2}-\frac{b}{m}\right).$$ This gives the values $`c=0.0267`$ and $`R=62.1`$. Using these values of $`c`$ and $`R`$ in the calculation of equation (7), we obtain the plot of $`f(0,t)`$ vs $`t`$ given in Figure 5. This is overlaid with the model data for $`f(0,t)`$ vs. $`t`$ for comparison. The two are virtually identical. This shows that the phenomenon of cutting off the Lagrangian outside of a ball of radius $`R`$ is not just an artifact of necessity because the full Lagrangian is divergent, but an inherent feature of this system. Table 1 contains the data for $`c`$ and $`R`$ as the initial velocity $`v_0`$ is varied, under the initial conditions $`f_0=1.0`$, $`\mathrm{\Delta }r=0.01`$ and $`\mathrm{\Delta }t=0.001`$. The data for $`R`$ as a function of $`1/v_0`$ fit well to the line $$y=0.5407x+6.032.$$ This fit is shown in Figure 6. A linear fit makes sense, since as the velocity tends toward zero, we expect the cutoff to head toward infinity. Table 2 contains the data for $`c`$ and $`R`$ as the initial height $`f_0`$ is varied. The parameter $`R`$ varies close to linearly with $`f_0`$, while the parameter $`c`$ remains nearly constant, changing by less than $`7\%`$ over the course of the runs. Characterization of time slices $`f(r,T)`$: making a closer inspection of the time profiles $`f(r,T)`$, with $`T`$ fixed, as in Figure 2, one may observe that the initial part of the data is close to a hyperbola, as seen in Figure 7. The best hyperbolic fit is determined by a least squares method. The equation for the hyperbola is $$\frac{(y-k)^2}{b^2}-\frac{x^2}{a^2}=1.$$ One would naturally ask about the evolution of the hyperbolic parameters $`a`$ and $`b`$ with time; however, neither of these is particularly edifying. A simple calculation shows that $`k`$ should follow $`f(0,t)`$ closely if $`b`$ is small, as it is. The evolution of $`b/a`$ gives the slope of the asymptote to the hyperbola, and this evolution is close to linear, as seen in Figure 8. ## 5 Conclusions In earlier numerical studies, solitons were found to be numerically unstable: a solution of the form $$\frac{\beta }{z}$$ shrinks spontaneously under those numerical procedures. This does not occur in our numerical implementation of the $`S^2`$ sigma model. 
The static solutions do not evolve in time unless given an initial rate of shrinking. Further, a stability and convergence analysis of two of the numerical procedures is provided elsewhere. We use a method, analogous to earlier treatments, of cutting off the Lagrangian outside of a ball of radius $`R`$, and we find an explicit integral for the shrinking of the soliton, dependent on two parameters: $`c`$, which is a function of the kinetic energy, and $`R`$, the size of the ball on which we evaluate the Lagrangian. Once these are specified, this integral gives the theoretical trajectory of the soliton; when $`R`$ is calculated (along with the parameter $`c`$), the shrinking seen in the numerical model matches that predicted. The dependence of $`R`$ on the initial conditions appears to be linear in the inverse initial velocity $`1/v_0`$. This demonstrates the validity of the geodesic approximation with a cutoff $`R`$, and the cutoff is an essential part of this system. In addition, the shape of a time slice $`f(r,T)`$, with $`T`$ fixed, is characterized by hyperbolic bumps at the origin. ## 6 Acknowledgments I would like to thank my dissertation supervisor, Lorenzo Sadun, for his constructive comments and suggestions for additional lines of research and improvements to this manuscript and my dissertation.
LU TP 99–26 RAL-TR-1999-065 hep-ph/9909346 September 1999 PYTHIA and HERWIG for Linear Collider Physics<sup>1</sup> Torbjörn Sjöstrand<sup>2</sup> Department of Theoretical Physics, Lund University, Lund, Sweden and Michael H. Seymour<sup>3</sup> Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire, OX11 0QX, U.K. <sup>1</sup>To appear in the Proceedings of the International Workshop on Linear Colliders, Sitges (Barcelona), Spain, April 28 – May 5, 1999. <sup>2</sup>torbjorn@thep.lu.se <sup>3</sup>M.Seymour@rl.ac.uk Abstract An overview is given of general-purpose event generators, especially Pythia and HERWIG. The current status is summarized, some recent physics improvements are described, and planned future projects are outlined. In order to produce events that can be used for Linear Collider physics and detector studies, the structure of the basic generation process is 1) selection of the hard subprocess kinematics, 2) resonance decays that (more or less) form part of the hard subprocess (such as $`W`$, $`Z`$, $`t`$ or $`h`$), 3) evolution of QCD parton showers (or, alternatively, the use of higher-order matrix elements), 4) hadronization, and 5) normal decays (of hadrons and $`\tau `$’s mainly). Additional aspects, of interest for linear colliders, include 6) beamstrahlung (often handled by an interface to CIRCE ), 7) initial-state QED radiation, e.g. formulated in shower language, 8) the hadronic behaviour of photons (involving topics such as the subdivision into direct and resolved photons, VMD and anomalous ones, parton distributions of real and virtual photons, initial-state QCD radiation, beam remnants of resolved photons and even the possibility of multiple interactions in those remnants), and 9) QCD interconnection effects, e.g. modeled by colour rearrangement and Bose–Einstein effects . Finally, since a chain is never stronger than its weakest link, one must add 10) the forgotten or unexpected. The historical reason for developing general-purpose generators has often been an interest in QCD physics: initial- and final-state cascades, hadronization, underlying events, and so on. However, once these tools have been developed for simple processes such as $`\gamma ^{*}/Z^0`$ production, their generalization to other processes appears a natural task. There exist three commonly used general-purpose generators: Pythia , HERWIG and ISAJET . Their main limitation is that normally only leading-order processes are included, with higher-order QCD and QED corrections included by showers, but no weak corrections at all. Furthermore, the nonperturbative QCD sector is not solved, so hadronization aspects are based on models rather than on theory. Over the years, a long list of physics processes has been added to the programs. These cover topics such as hard and soft QCD, heavy flavours, DIS and $`\gamma \gamma `$, electroweak production of $`\gamma ^{*}/Z^0`$ and $`W^\pm `$ (singly or in pairs), production of a light or a heavy Standard Model Higgs, or of various Higgs states e.g. in Supersymmetric (SUSY) models, SUSY particle production (sfermions, gauginos, etc.), technicolor, new gauge bosons, compositeness, leptoquarks, and so on. The most basic processes are included in all the generators, while the selection diverges for exotic physics. Even when a process formally is the same, generators may be based on different theory frameworks (e.g. for the calculation of SUSY parameters) or approximation schemes, and are thus not expected to agree completely with each other. 
Comparisons between several generators are thus helpful to assess uncertainties (and, of course, also to find bugs). The Pythia 6.1 program was released in March 1997, based on a merger of Jetset 7.4, Pythia 5.7 and SPythia . Main authors are T. Sjöstrand and S. Mrenna. New subversions are released once every few months — the current one is 6.129, with a size of about 49 000 lines of code. The code itself, including manuals and sample main programs, can be found on http://www.thep.lu.se/~torbjorn/Pythia.html. Relative to previous versions, the main news in Pythia 6.1 are the transition to double precision throughout and the new treatment of supersymmetric processes and particles. Also many other processes have been added, e.g. for Higgs and technicolor. Colour rearrangement options for $`W^+W^{-}`$ are now included in the code, and the Bose–Einstein routine has been expanded with many new options. A new machinery is being built up for real and virtual photon fluxes and cross sections . An alternative description of popcorn baryon production is available . New standard interfaces are available that should ease the task of matching to external generators of two, four and six fermions. Among other points, of less relevance for $`e^+e^{-}`$, one may note the addition of QED radiation off an incoming muon, newer parton distributions, and an energy-dependent $`p_{\perp \mathrm{min}}`$ for multiple interactions. The current HERWIG 5.9 is from July 1996, and has a size of about 21 400 lines. Authors are G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour and L. Stanco. Code, manuals and related programs may be found on http://hepwww.rl.ac.uk/theory/seymour/herwig/. The new version 6.1 is just about to be released, with G. Corcella, S. Moretti, K. Odagiri and P. Richardson added to the list of collaborators. The main new improvement is the introduction of supersymmetric processes within a general MSSM framework, so far only for hadron collisions, however. Mass and decay spectra are not generated intrinsically; instead they are read from a data file, e.g. generated by ISAJET/ISASUSY. All $`R`$-parity conserving $`2\to 2`$ sparticle production subprocesses are available and, unlike in Pythia and ISAJET, also all resonant $`R`$-parity violating $`2\to 2`$ subprocesses and decays. Sparton showering is not yet included. Most resonances decay isotropically, i.e. spin correlations are not systematically included. Among other news one may note a comprehensively expanded set of $`2\to 1`$, $`2\to 2`$ and $`2\to 3`$ Higgs production subprocesses. An $`e^+e^{-}\to 4`$ jets matrix-element option has been added, the JIMMY generator for multiparton scattering has been incorporated and improved, the treatment of $`\gamma ^{*}\gamma ^{*}`$ and $`\gamma `$ remnants has been improved, and beamstrahlung is included by an interface to CIRCE. Generator progress is in many directions, and the growth is largely organic. One main theme in recent times, which will continue to be of importance, is the gradual improvement of the matching between higher-order matrix-element information and the parton-shower language. This is required to obtain an accurate description of event properties, since each approach has its advantages and disadvantages: the former is favoured for the emission of a few widely separated partons, while the latter is likely to do better for multiple emissions at small separations. 
One example is the improvement of the description of initial-state photon radiation in single-$`\gamma ^{*}/Z^0`$ production in Pythia, which is a by-product of the study of $`W^\pm /\gamma ^{*}/Z^0`$ production in hadron colliders . The basic idea is to map the kinematics between the parton-shower and matrix-element descriptions, and to find a correction factor that can be applied to hard emissions in the shower so as to bring agreement with the matrix-element expression. Some simple algebra shows that, with the Pythia shower kinematics definitions, the two emission rates disagree by a factor $$R_{ee\gamma Z}(\widehat{s},\widehat{t})=\frac{(\mathrm{d}\widehat{\sigma }/\mathrm{d}\widehat{t})_{\mathrm{ME}}}{(\mathrm{d}\widehat{\sigma }/\mathrm{d}\widehat{t})_{\mathrm{PS}}}=\frac{\widehat{t}^2+\widehat{u}^2+2m_Z^2\widehat{s}}{\widehat{s}^2+m_Z^4},$$ where $`\widehat{s}`$, $`\widehat{t}`$ and $`\widehat{u}`$ are the standard Mandelstam variables and $`m_Z`$ represents the (actual) mass of the $`s`$-channel resonance. This factor is always between $`1/2`$ and 1. The shower can therefore be improved in two ways, relative to the old description. Firstly, the maximum virtuality of emissions is raised from $`Q_{\mathrm{max}}^2\approx m_Z^2`$ to $`Q_{\mathrm{max}}^2=s`$, i.e. the shower is allowed to populate the full phase space. Secondly, the emission rate for the first (which normally also is the hardest) emission on each side is corrected by the factor $`R(\widehat{s},\widehat{t})`$ above, so as to bring agreement with the matrix-element rate in the hard-emission region. Another example of a shower improvement is the description of gluon radiation in top decay in HERWIG . The showering of top decay is done in the top rest frame, where the $`W`$ and $`b`$ go out back-to-back. In this frame, the gluon emission off the $`b`$ should be smoothly suppressed at large angles relative to the $`b`$ direction, but in HERWIG this is approximated by a sharp step at $`90^{\circ }`$. Thus the $`W`$ hemisphere is left completely empty of gluons, while the $`b`$ one is fully populated. In this kind of “dead cone approximation”, the total amount of radiation is about right, but the angular distribution can be badly wrong. The HERWIG improvement consists of two parts. A hard correction is applied in the “dead” region, where tree-level matrix elements are used to populate it (corresponding to roughly 3% of the decays). A soft correction is applied to the populated region, by a reweighting of emissions, to ensure that the kinematical distribution of the hardest emission in the parton shower agrees with the tree-level matrix elements . These corrections can be very important, especially close to threshold . Matrix-element corrections to top production can also be important, and work here is in progress. Finally, a word about the future. Both Pythia and HERWIG continue to be developed and supported. On the physics side, there is a continuous need to increase and improve the support given to different physics scenarios, new and old, and many areas of the general QCD machinery for parton showers and hadronization may require further improvements. On the technical side, the main challenge is a transition from Fortran to more modern computer languages, in practice meaning C++. There are several arguments for such a transition. One is that the major labs, such as SLAC, Fermilab and CERN, have decided to discontinue Fortran support and go over to C++ as the main language. 
Another is that C++ offers educational and professional continuity for students: they may know it before they begin physics, and they can use it after they quit. For experts, C++ is a better programming language. For the rest of us, user-friendly interfaces should still make life easier. Studies have now begun. The Pythia 7 project was formally started in January 1998, with L. Lönnblad as the person mainly responsible. What exists today is a strategy document , and code for the event record and the particle object. The particle data and other database handling is in progress, as is the event generation handler structure. The first piece of physics, the string fragmentation scheme, is being implemented by M. Bertini. The hope is to have a “proof of concept” version soon, and much of the current Pythia functionality up and running by the end of 2000. It will, however, take some further time after that to provide a program that offers both more and better physics than the current Pythia version. HERWIG is currently lagging behind, but a plan has been formulated for a C++ version that would simultaneously offer a significantly improved physics content. Recently the PPARC in the U.K. approved an application for two postdoc-level positions devoted full-time to this project, which therefore will start soon. A copy of the transparencies of this talk, including all the figures not shown here (for space reasons), may be found on http://www.thep.lu.se/~torbjorn/talks/sitges99mc.ps. The talk by F. Paige contains complementary information on SUSY simulation .
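As a schematic illustration of how a matrix-element correction factor such as $`R_{ee\gamma Z}`$ quoted earlier can be applied in practice, the sketch below uses it as an accept/reject weight on a trial shower emission. This is a stand-alone toy in Python, not Pythia's actual (Fortran) implementation, and the kinematics bookkeeping is deliberately reduced to the invariants that enter the factor.

```python
import random

def r_ee_gamma_z(shat, that, mz2):
    """Matrix-element/parton-shower ratio for initial-state radiation in
    single gamma*/Z0 production.  shat, that are the Mandelstam invariants
    of the emission and mz2 is the (actual) resonance mass squared; u-hat
    follows from s + t + u = m_Z^2 for the 2 -> 2 kinematics."""
    uhat = mz2 - shat - that
    return (that**2 + uhat**2 + 2.0 * mz2 * shat) / (shat**2 + mz2**2)

def accept_emission(shat, that, mz2):
    # Since 1/2 <= R <= 1, R can serve directly as an accept probability:
    # trial emissions generated with the (overestimating) shower rate are
    # kept with probability R, reproducing the matrix-element rate.
    return random.random() < r_ee_gamma_z(shat, that, mz2)
```

Note that at $`\widehat{t}=0`$ the factor reduces to unity, so soft/collinear emissions are left untouched and only the hard region is reweighted.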
# Acceleration effect caused by the Onsager reaction term in a frustrated coupled oscillator system ## Abstract The role of the Onsager reaction term (ORT) is not yet well understood in frustrated coupled oscillator systems, since the Thouless-Anderson-Palmer (TAP) and replica methods cannot be directly applied to these non-equilibrium systems. In this paper, we consider two oscillator associative memory models, one with symmetric and one with asymmetric dilution of coupling. These two systems are ideal for evaluating the effect of the ORT, because, with the exception of the ORT, they have the same order parameter equations. We found that the two systems have identical macroscopic properties, except for the acceleration effect caused by the ORT. This acceleration effect does not exist in any equilibrium system. Coupled oscillators are of intrinsic interest in many branches of physics, chemistry and biology. Simple coupled-oscillator models involving uniform and global coupling have been investigated in some detail, and it has been found that they can be used to model many types of chemical reactions in solution kuramoto0 . However, in the modeling of more complicated phenomena, including those studied in the field of neuronal systems, it becomes necessary to consider coupled oscillators with frustrated couplings. The Onsager reaction term (ORT), which describes the effective self-interaction, is of great importance in obtaining a physical understanding of frustrated random systems, because the presence of such an effective self-interaction is one of the characteristics that distinguish frustrated from non-frustrated systems. In the case of equilibrium systems, we can rigorously evaluate the effect of the ORT in the Thouless-Anderson-Palmer (TAP) framework Mezard , and/or using the replica method fukai2 . However, we cannot directly apply these systematic methods to non-equilibrium coupled-oscillator systems. We can define a formal Hamiltonian function for such systems; however, Perez et al. proved that the ground states of this Hamiltonian are not stationary states of the dynamics Perez . Therefore, it is impossible to construct a theory based on a free energy for this class of systems. For this reason, in order to evaluate the macroscopic quantities in such systems that include an ORT, the self-consistent signal-to-noise analysis (SCSNA), which can be applied to systems without a Hamiltonian function, has been used fukai3 . The mathematical treatment of this method is similar to that of the cavity method Mezard . Some results obtained from the SCSNA are consistent with results from the replica method, but this method includes a few heuristic steps. While the SCSNA has yielded some interesting results, it is not sufficient to give a complete understanding of frustrated systems, and for this reason many fundamental theoretical questions remain in the study of such systems. In fact, even the existence of the type of self-interaction that can be described by the ORT is the subject of some debate aonishi2 ; yoshioka . In this paper, we discuss an effect of the ORT that exists only in frustrated globally coupled oscillator systems, and in particular cannot be found in equilibrium systems. In order to make this effect clear, it would be ideal for us to compare two frustrated systems that, with the exception of the quantity of the ORT, have the same order parameter equations. 
In addition, it is desirable for these systems to have a clear correspondence with an equilibrium system, because the effects of the ORT are well understood in equilibrium systems. In consideration of the above-mentioned points, a system of the form $`{\displaystyle \frac{d\varphi _i}{dt}}=\omega _i+{\displaystyle \underset{j\ne i}{\overset{N}{\sum }}}J_{ij}\mathrm{sin}(\varphi _j-\varphi _i+\beta _{ij}+\beta _0)`$ (1) is ideal. In fact, such systems are well known as models of coupled oscillator systems kuramoto0 ; Park . Here, $`\varphi _i`$ is the phase of the $`i`$th oscillator (out of a total of $`N`$) and $`\omega _i`$ represents its quenched natural frequency. The natural frequencies are randomly distributed with a density $`g(\omega )`$. We restrict $`g(\omega )`$ to a unimodal symmetric distribution, to satisfy the condition that there exists one large cluster of synchronous oscillators. Also in Eq. (1), $`J_{ij}`$ and $`\beta _{ij}`$ denote the amplitude of the coupling from unit $`j`$ to unit $`i`$ and its delay, respectively. In the present study, we have selected the following two generalized Hebb learning rules with random dilution aoyagi2 to determine $`J_{ij}`$ and $`\beta _{ij}`$: $`J_{ij}\mathrm{exp}(i\beta _{ij})={\displaystyle \frac{c_{ij}}{cN}}{\displaystyle \underset{\mu =1}{\overset{p}{\sum }}}\xi _i^\mu \overline{\xi }_j^\mu ,\xi _i^\mu =\mathrm{exp}(i\theta _i^\mu ),`$ (2) $`c_{ij}=\{\begin{array}{cc}1\hfill & \mathrm{with}\mathrm{probability}c\hfill \\ 0\hfill & \mathrm{with}\mathrm{probability}1-c,\hfill \end{array}`$ (5) where the overline denotes the complex conjugate. $`\{\theta _i^\mu \}_{i=1,\mathrm{},N,\mu =1,\mathrm{},p}`$ are the phase patterns to be stored in the present model and are assigned random numbers with uniform probability on the interval $`[0,2\pi )`$. $`\mu `$ is an index of stored patterns and $`p`$ is the total number of stored patterns. We define a parameter $`\alpha `$ (the loading rate) by $`\alpha =p/N`$. When $`\alpha \sim O(1)`$, the system has frustration. The quantity $`c_{ij}`$ is the dilution coefficient: $`c_{ij}=1`$ if there is a non-zero coupling from unit $`j`$ to unit $`i`$, and $`c_{ij}=0`$ otherwise. The number of fan-in (fan-out) connections is restricted to $`O(N)`$, i.e., $`c\sim O(1)`$. Here, we consider both the case of symmetric dilution (i.e., $`c_{ij}=c_{ji}`$) and that of asymmetric dilution (i.e., $`c_{ij}`$ and $`c_{ji}`$ are independent random variables) okada . The quantity $`\beta _0`$ in Eq. (1) represents a uniform bias. Due to the effect of this bias, the mutual interaction between a pair of oscillators is asymmetric, even if $`J_{ij}=J_{ji}`$ and $`\beta _{ij}=\beta _{ji}`$. Such an unbalanced mutual interaction is the essence of the acceleration (deceleration) effect meunier ; Hansel1 ; kurata . In the case of $`g(\omega )=\delta (\omega -\omega _0)`$, $`\beta _0=0`$ and $`c_{ij}=c_{ji}`$, this system can be mapped to an XY-spin system aoyagi2 ; cook ; okuda . In this way, we can make a bridge between the frustrated coupled oscillator system and an equilibrium system. Let us consider steady states of the system in the limit $`t\to \mathrm{\infty }`$. Our theory is based on the condition that there exists one large cluster of oscillators synchronously locked at frequency $`\mathrm{\Omega }`$, the number of oscillators in this cluster scaling as $`O(N)`$. 
Under such a condition, Daido demonstrated through a scaling plot obtained from numerical simulations that the variation of the order parameter scales as $`O(1/\sqrt{N})`$ in ferromagnetic systems with one large synchronous cluster daido2 . In accordance with this result, we assume that the self-averaging property holds in our system and that the order parameters are constant in the limit $`N\to \mathrm{\infty }`$. These assumptions of our theory were also introduced by Sakaguchi and Kuramoto (SK) kuramoto . Redefining $`\varphi _i`$ according to $`\varphi _i\to \varphi _i+\mathrm{\Omega }t`$ and substituting this into Eq. (1), we obtain $`{\displaystyle \frac{d\varphi _i}{dt}}=\omega _i-\mathrm{\Omega }-\mathrm{sin}(\varphi _i)h_i^R+\mathrm{cos}(\varphi _i)h_i^I,`$ (6) where $`h_i`$ represents the so-called “local field”, which is given by $`h_i=h_i^R+ih_i^I=e^{i\beta _0}\left({\displaystyle \underset{\mu }{\overset{p}{\sum }}}\xi _i^\mu m^\mu +{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu }{\overset{p}{\sum }}}{\displaystyle \underset{j\ne i}{\overset{N}{\sum }}}{\displaystyle \frac{c_{ij}-c}{c}}\xi _i^\mu \overline{\xi }_j^\mu s_j-\alpha s_i\right).`$ (7) For convenience, we write $`s_i=\mathrm{exp}(i\varphi _i)`$. The order parameter $`m^\mu `$, which is the overlap between the system state $`\{s_i\}_{i=1,\mathrm{},N}`$ and the embedded pattern $`\{\xi _i^\mu \}_{i=1,\mathrm{},N}`$, is defined as $`m^\mu ={\displaystyle \frac{1}{N}}{\displaystyle \underset{j=1}{\overset{N}{\sum }}}\overline{\xi }_j^\mu s_j.`$ (8) The effect of the second term of Eq. (7), i.e., $`\frac{1}{N}\sum _\mu ^p\sum _{j\ne i}^N\frac{c_{ij}-c}{c}\xi _i^\mu \overline{\xi }_j^\mu s_j`$, is equivalent to that of additive coupling noise aoyagi2 ; Sompolinsky2 ; okada . In the limit $`c\to 0`$, with $`\alpha /c`$ kept finite, our system reduces to a glass oscillator. Therefore, the theory proposed here covers two types of frustrated systems: the oscillator associative memory and the glass oscillator. In general, the fields $`h_i^R`$ and $`h_i^I`$ involve the ORT, corresponding to an effective self-feedback fukai3 . We must eliminate the reaction term from these fields. Here, we assume that the local field splits into a “pure” effective local field, $`\stackrel{~}{h}_i=\stackrel{~}{h}_i^R+i\stackrel{~}{h}_i^I`$, and the ORT, $`\mathrm{\Gamma }s_i`$: $`h_i=\stackrel{~}{h}_i+\mathrm{\Gamma }s_i.`$ (9) Here we have neglected the complex conjugate term of the ORT, which would lead to a higher-harmonic term of the response function aonishi . This can be done in the present model because we employ generalized Hebb learning rules. Hence, by substituting Eq. (9) into Eq. (6), we obtain the equations $`{\displaystyle \frac{d\varphi _i}{dt}}=\omega _i-\stackrel{~}{\mathrm{\Omega }}-\mathrm{sin}(\varphi _i)\stackrel{~}{h}_i^R+\mathrm{cos}(\varphi _i)\stackrel{~}{h}_i^I,`$ (10) $`\stackrel{~}{\mathrm{\Omega }}=\mathrm{\Omega }-\left|\mathrm{\Gamma }\right|\mathrm{sin}(\psi ),\psi =\mathrm{Arg}\left(\mathrm{\Gamma }\right),`$ (11) which do not contain the reaction term. The quantity $`\stackrel{~}{\mathrm{\Omega }}`$ represents the effective frequency of the synchronous oscillators. We can regard $`\stackrel{~}{\mathrm{\Omega }}`$ as the renormalized version of $`\mathrm{\Omega }`$, from which the ORT has been pulled out, and therefore $`\stackrel{~}{\mathrm{\Omega }}`$ in general takes a different value from the observable $`\mathrm{\Omega }`$. Thus, $`\mathrm{\Omega }-\stackrel{~}{\mathrm{\Omega }}`$ represents the contribution of the ORT to the acceleration (deceleration) effect. 
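Before setting up the self-consistent theory, we note that this contribution can also be probed directly: one can integrate Eq. (1) numerically for the two dilution schemes and compare the rotation rate of the condensed overlap. The following is a minimal sketch (Python/NumPy); the system size, integration time and parameter values are illustrative only, and finite-size fluctuations at this $`N`$ are substantial.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, c, beta0, sigma = 400, 8, 0.5, np.pi / 20, 0.2

def couplings(symmetric):
    xi = np.exp(1j * 2 * np.pi * rng.random((N, p)))     # stored phase patterns
    if symmetric:
        m = np.triu(rng.random((N, N)) < c, 1)
        mask = m | m.T                                   # c_ij = c_ji
    else:
        mask = rng.random((N, N)) < c                    # c_ij, c_ji independent
    C = mask * (xi @ xi.conj().T) / (c * N)              # C_ij = J_ij e^{i beta_ij}
    np.fill_diagonal(C, 0.0)
    return xi, C

def run(symmetric, T=200.0, dt=0.02):
    xi, C = couplings(symmetric)
    omega = sigma * rng.standard_normal(N)               # Gaussian g(omega)
    phi = np.angle(xi[:, 0]) + 0.1 * rng.standard_normal(N)  # start near pattern 1
    m_prev, rates = None, []
    for step in range(int(T / dt)):
        s = np.exp(1j * phi)
        field = np.exp(1j * beta0) * (C @ s)
        phi += dt * (omega + np.imag(field * np.conj(s)))    # Euler step of Eq. (1)
        m = np.mean(np.conj(xi[:, 0]) * np.exp(1j * phi))    # overlap m^1
        if m_prev is not None and step > int(0.5 * T / dt):
            rates.append(np.angle(m * np.conj(m_prev)) / dt)
        m_prev = m
    return abs(m), np.mean(rates)

for sym in (True, False):
    m1, Om = run(sym)
    print("symmetric" if sym else "asymmetric", "|m1| =", round(m1, 3), "Omega =", round(Om, 4))
```

Here the drift rate of $`\mathrm{arg}\,m^1`$ serves as a proxy for the locked frequency $`\mathrm{\Omega }`$, so the two runs directly expose the dilution dependence of the acceleration effect.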
$`\stackrel{~}{\mathrm{\Omega }}`$ is one of the order parameters of our theory. In the analysis that follows, $`\stackrel{~}{h}_i`$ and $`\mathrm{\Gamma }`$ are obtained in a self-consistent manner. Under the above assumption, expressed by Eq. (9), applying SK theory kuramoto to Eq. (10) yields the average of $`s_i`$ over $`\omega _i`$: $`\langle s_i\rangle _{\omega _i}=\stackrel{~}{h}_i{\displaystyle \int _{-\pi /2}^{\pi /2}}d\varphi \,g\left(\stackrel{~}{\mathrm{\Omega }}+|\stackrel{~}{h}_i|\mathrm{sin}\varphi \right)\mathrm{cos}\varphi \,\mathrm{exp}(i\varphi )+i\stackrel{~}{h}_i{\displaystyle \int _0^{\pi /2}}d\varphi \,{\displaystyle \frac{\mathrm{cos}\varphi (1-\mathrm{cos}\varphi )}{\mathrm{sin}^3\varphi }}\left\{g\left(\stackrel{~}{\mathrm{\Omega }}+{\displaystyle \frac{|\stackrel{~}{h}_i|}{\mathrm{sin}\varphi }}\right)-g\left(\stackrel{~}{\mathrm{\Omega }}-{\displaystyle \frac{|\stackrel{~}{h}_i|}{\mathrm{sin}\varphi }}\right)\right\}.`$ (12) Equation (6) implies that $`s_i`$ is a function of $`h_i`$, $`\omega _i-\mathrm{\Omega }`$ and $`t`$, and to make this explicit we write $$s_i=X(h_i,\omega _i-\mathrm{\Omega },t).$$ (13) Note that $`s_i`$ is not a function of the renormalized $`\stackrel{~}{h}_i`$ and $`\stackrel{~}{\mathrm{\Omega }}`$ but, rather, of the bare $`h_i`$ and $`\mathrm{\Omega }`$ appearing in Eq. (6). We can properly evaluate the ORT with this careful treatment. Here, we assume that microscopic memory effects can be neglected in the $`t\to \mathrm{\infty }`$ limit. In this analysis, we focus on memory retrieval states in which the configuration has appreciable overlap with the condensed pattern $`𝝃^1`$ ($`m^1\sim O(1)`$) and tiny overlaps with the uncondensed patterns $`𝝃^\mu `$ for $`\mu >1`$ ($`m^\mu \sim O(1/\sqrt{N})`$). Under this assumption, we estimate the contribution of the uncondensed patterns using the SCSNA fukai3 , and determine $`\stackrel{~}{h}_i`$ in a self-consistent manner. Finally, the equations relating the order parameters $`|m^1|`$, $`U`$ and $`\stackrel{~}{\mathrm{\Omega }}`$ are obtained using the self-consistent local field: $`|m^1|e^{i\beta _0}=\langle \stackrel{~}{X}(x_1,x_2;\stackrel{~}{\mathrm{\Omega }})\rangle _{x_1,x_2},`$ (14) $`Ue^{i\beta _0}=\langle F_1(x_1,x_2;\stackrel{~}{\mathrm{\Omega }})\rangle _{x_1,x_2},`$ (15) where $`\langle \mathrm{}\rangle _{x_1,x_2}`$ is the Gaussian average over $`x_1`$ and $`x_2`$, $`\langle \mathrm{}\rangle _{x_1,x_2}=\int Dx_1Dx_2\mathrm{}`$. The quantity $`U`$ corresponds to the susceptibility, which is the measure of the sensitivity to external fields. Since the present system possesses rotational symmetry with respect to the phase $`\varphi _i`$, we can safely set the condensed pattern to $`\xi _i^1=1`$ $`(i=1,\mathrm{},N)`$. Now, $`\stackrel{~}{h}`$, $`\stackrel{~}{X}`$, $`F_1`$ and $`Dx_1Dx_2`$ can be expressed as follows: $`Dx_1Dx_2={\displaystyle \frac{dx_1dx_2}{2\pi \rho ^2}}\mathrm{exp}\left(-{\displaystyle \frac{x_1^2+x_2^2}{2\rho ^2}}\right),`$ (16) $`\rho ^2={\displaystyle \frac{\alpha }{2}}\left({\displaystyle \frac{1}{|1-U|^2}}+{\displaystyle \frac{1-c}{c}}\right),`$ (17) $`\stackrel{~}{h}=|m^1|+x_1+ix_2,`$ (18) $`\stackrel{~}{X}(x_1,x_2;\stackrel{~}{\mathrm{\Omega }})=\langle s\rangle _\omega ,`$ (19) $`F_1(x_1,x_2;\stackrel{~}{\mathrm{\Omega }})={\displaystyle \frac{\partial \langle s\rangle _\omega }{\partial \stackrel{~}{h}}},`$ (20) where $`\langle s\rangle _\omega `$ in Eqs. (19) and (20) is written as Eq. (12). 
In the case of the symmetric diluted system, $`\mathrm{\Gamma }`$ can be expressed as $`\mathrm{\Gamma }e^{i\beta _0}={\displaystyle \frac{\alpha U}{1-U}}+{\displaystyle \frac{\alpha (1-c)}{c}}U.`$ (21) In the case of the asymmetric diluted system, on the other hand, we have $`\mathrm{\Gamma }e^{i\beta _0}={\displaystyle \frac{\alpha U}{1-U}}.`$ (22) $`\stackrel{~}{h}`$ and $`\stackrel{~}{\mathrm{\Omega }}`$ are the renormalized versions of $`h`$ and $`\mathrm{\Omega }`$, respectively, from which the ORT has been pulled out, and thus $`\stackrel{~}{h}`$ and $`\stackrel{~}{\mathrm{\Omega }}`$ are independent of the ORT. Therefore, the two models we consider have identical order parameter equations, (14) and (15), written in terms of the renormalized quantities $`\stackrel{~}{h}`$ and $`\stackrel{~}{\mathrm{\Omega }}`$. According to Eq. (11), the difference between the ORTs in Eqs. (21) and (22) leads to a different value of the observable $`\mathrm{\Omega }`$ only when $`\beta _0\ne 0`$. In this way we are able to cleanly separate the effect of the ORT, and therefore, by observing the macroscopic parameter $`\mathrm{\Omega }`$ of these two systems, we can analyze the effect of the ORT qualitatively and quantitatively. The distribution of resultant frequencies $`\overline{\omega }`$ in the memory retrieval state, denoted $`p(\overline{\omega })`$, becomes $`p(\overline{\omega })=r\delta (\overline{\omega }-\mathrm{\Omega })+{\displaystyle \int Dx_1Dx_2\frac{g\left(\stackrel{~}{\mathrm{\Omega }}+(\overline{\omega }-\mathrm{\Omega })\sqrt{1+\frac{|\stackrel{~}{h}|^2}{(\overline{\omega }-\mathrm{\Omega })^2}}\right)}{\sqrt{1+\frac{|\stackrel{~}{h}|^2}{(\overline{\omega }-\mathrm{\Omega })^2}}}},`$ (23) $`r={\displaystyle \int Dx_1Dx_2\,|\stackrel{~}{h}|\int _{-\pi /2}^{\pi /2}d\varphi \,g\left(\stackrel{~}{\mathrm{\Omega }}+|\stackrel{~}{h}|\mathrm{sin}\varphi \right)\mathrm{cos}\varphi }.`$ (24) The $`\delta `$-function in Eq. (23) corresponds to the cluster of oscillators synchronously locked at frequency $`\mathrm{\Omega }`$. The value $`r`$ is the ratio of the number of synchronous oscillators to the total number of oscillators $`N`$. The second term in Eq. (23) represents the distribution of asynchronous oscillators. If $`\beta _0=0`$ and $`g(\omega )`$ is symmetric, our theory reduces to the previously proposed theory aonishi2 . In the case $`g(\omega )=\delta (\omega )`$, $`\beta _0=0`$ and $`c_{ij}=c_{ji}`$, where the present model reduces to XY-spin systems (frustrated equilibrium systems), our theory coincides with the replica theory aoyagi2 ; cook and the SCSNA okuda . In addition, in the limit $`\alpha \to 0`$, where the present model reduces to uniform non-equilibrium systems, our theory reproduces the SK theory kuramoto . In the numerical simulations we now discuss, we chose the distribution of natural frequencies to be $`g(\omega )=(2\pi \sigma ^2)^{-1/2}\mathrm{exp}(-\omega ^2/2\sigma ^2)`$. Also, we set $`\sigma =0.2`$, $`\beta _0=\pi /20`$, and $`c=0.5`$. Figure 1(a) displays $`\mathrm{\Omega }`$ as a function of $`\alpha `$ in the memory retrieval states, where the solid curves were obtained theoretically and the data points with error bars represent results obtained by numerical simulation. As seen from Figure 1(a), the oscillators' rotation for the symmetric diluted system is faster than that for the asymmetric diluted system. 
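To make the size of this effect concrete: given a solution $`U`$ of the order parameter equations (14)–(15), the predicted shift $`\mathrm{\Omega }-\stackrel{~}{\mathrm{\Omega }}=|\mathrm{\Gamma }|\mathrm{sin}\psi `$ follows directly from Eqs. (11), (21) and (22). A small sketch (Python; the value of $`U`$ below is a placeholder, which in practice would come from solving Eqs. (14)–(15)):

```python
import cmath, math

alpha, c, beta0 = 0.02, 0.5, math.pi / 20   # loading rate (placeholder), dilution, bias
U = 0.3 + 0.05j                              # placeholder susceptibility

def shift(Gamma):
    # Omega - Omega~ = |Gamma| sin(Arg Gamma) = Im(Gamma), by Eq. (11)
    return Gamma.imag

base = alpha * U / (1.0 - U)                                 # Eq. (22), asymmetric dilution
Gamma_asym = cmath.exp(-1j * beta0) * base
Gamma_sym = cmath.exp(-1j * beta0) * (base + alpha * (1.0 - c) / c * U)  # Eq. (21)

print("Omega shift, asymmetric:", shift(Gamma_asym))
print("Omega shift, symmetric: ", shift(Gamma_sym))
```

The extra term $`\alpha (1-c)U/c`$ present only for symmetric dilution is what produces the different rotation speeds seen in Figure 1(a).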
As previously discussed, $`\stackrel{~}{\mathrm{\Omega }}`$ in Figure 1(a), which represents the effective frequency of the synchronous oscillators, does not depend on the type of dilution, while the observed $`\mathrm{\Omega }`$ depends strongly on it. This dependence is due to the existence of the ORT $`\mathrm{\Gamma }`$. If the local field $`h`$ did not contain the ORT yoshioka , the plots obtained from the numerical simulations for both models would fit the curve of $`\stackrel{~}{\mathrm{\Omega }}`$. Therefore, the dependence of the observed $`\mathrm{\Omega }`$ on the type of dilution is strong evidence for the existence of the ORT in the present system. In this figure, we have shifted the numerical values of $`\mathrm{\Omega }`$ at $`\alpha =0`$ (in the computer simulation) to their corresponding theoretical values at $`\alpha =0`$, in order to cancel fluctuations of the mean value of $`g(\omega )`$ caused by the finite size effect. Figures 1(b) and (c) display the distributions of the resultant frequencies for the symmetric dilution and asymmetric dilution systems, respectively. As these figures reveal, the theory (solid curve) is in good agreement with the simulation results (histogram). According to the results given in Figures 1(b) and (c), the distribution of the resultant frequencies for the symmetric diluted system is identical to that for the asymmetric diluted system, except for a slight difference in position caused by the ORT. From this result we can conclude that the mean field $`\stackrel{~}{h}`$ of the symmetric diluted system is identical to that of the asymmetric diluted system, since $`\stackrel{~}{h}`$ determines the distribution of resultant frequencies, as represented by Eq. (23). Figures 2(a) and (b) display $`|m^1|`$ as a function of $`\alpha `$ for the symmetric and asymmetric diluted systems, respectively. Here, the solid curves were obtained theoretically, and the data points represent the results obtained from the numerical simulations. According to the results given in Figures 2(a) and (b), the critical memory capacity of the asymmetric diluted system obtained from numerical simulation is slightly smaller than that of the symmetric diluted system, because asymmetric dilution breaks detailed balance, which weakens the memory state. At this stage, there is no theory that rigorously treats a system with asymmetric interactions; almost all theoretical studies of asymmetric systems are based on the naive assumption that there exist steady states as in symmetric systems okada . In conclusion, we have found that the symmetric and asymmetric diluted systems have the same macroscopic properties, with the exception of the acceleration (deceleration) effect caused by the ORT. The quantity of the ORT depends on the type of dilution, and this dependence leads to a difference in the rotation speed of the oscillators for the two cases, as shown in Fig. 1(a). The acceleration (deceleration) effect caused by the ORT is a phenomenon peculiar to non-equilibrium systems, since this effect only exists when $`\beta _0\ne 0`$. There has been fundamental disagreement regarding the existence of the ORT in a typical system corresponding to our model with $`\beta _0=0`$ and a symmetric $`g(\omega )`$ aonishi2 ; yoshioka . In this work we have reached the following conclusion in this regard. Even if $`\beta _0=0`$ and $`g(\omega )`$ is symmetric, the ORT exists in the bare local field given by Eq. (7). 
In this case, the effect of the ORT is invisible, since it cancels out of Eq. (10).
# Hard Burst Emission from the Soft Gamma Repeater SGR 1900+14 ## 1 Introduction Soft gamma repeaters (SGRs) constitute a group of high-energy transients named for the observed characteristics which set them apart from classical Gamma-Ray Bursts (GRBs). SGRs emit brief ($`\sim `$ 0.1 sec), intense (up to 10<sup>3</sup> – 10<sup>4</sup> L<sub>Edd</sub>) bursts of low-energy $`\gamma `$-rays with recurrence times which range from seconds to years (Kouveliotou 1995). The vast majority of SGR burst spectra ($`\gtrsim `$ 20 keV) can be fit by an Optically-Thin Thermal Bremsstrahlung (OTTB) model with temperatures between 20 and 35 keV (Fenimore, Laros, & Ulmer 1994; Göğüş et al. 1999). These spectra show little or no variation over a wide range of time scales (Fenimore et al. 1994), both within individual bursts, and between source active periods which cover years. There have been some exceptions, however, where modest hard-to-soft evolution within bursts from SGR 1806$`-`$20 was detected (Strohmayer & Ibrahim 1997). During the past 20 years, two giant flares have been recorded from two of the four known SGRs: one from SGR 0526$`-`$66 on 5 March 1979 (Mazets et al. 1979), and one from SGR 1900$`+`$14 on 27 August 1998 (Hurley et al. 1999a). These flares differ from the more common bursts in many ways. They are far more energetic (by 3 orders of magnitude in peak luminosity), persist for hundreds of seconds during which their emission is modulated at a period that reflects the spin of an underlying neutron star, and have harder initial spectra. The hard spectra of the peak of these flares have OTTB temperatures of 200 – 500 keV (Fenimore et al. 1991; Hurley et al. 1999a; Mazets et al. 1999a), although this model should not necessarily be taken as a valid description of these spectra, as severe dead-time problems for most instruments limited the efficacy of spectral deconvolution. Feroci et al. (1999) require that the first $`\sim `$ 67 s of the August 27<sup>th</sup> flare be fit with a two-component spectrum, consisting of an OTTB (31 keV) and a power-law (– 1.47 photon index). Hard burst emission has also been detected during the brightest burst recorded from the newly discovered SGR 1627$`-`$41 (Woods et al. 1999a; Mazets et al. 1999b). Further evidence for hard emission from SGRs comes from RXTE observations of SGR 1806$`-`$20. For a small fraction of the more common short events, high OTTB spectral temperatures in the range 50 – 170 keV were measured (Strohmayer & Ibrahim 1997). Here, we present strong evidence for spectrally hard burst emission from SGR 1900$`+`$14 during its recent active episode, which started in May 1998. Two events recorded with BATSE, which are positionally consistent with SGR 1900$`+`$14 and temporally coincident with the recent active period of the source, show temporal and spectral signatures quite distinct from typical SGR burst emissions. We show that although the time-integrated spectrum of each event resembles a classical GRB spectrum, spectral evolution is found which obeys a hardness/intensity anti-correlation, never before seen in GRBs. ## 2 Burst Association with SGR 1900+14 On 22 October 1998, during a period of intense burst activity of SGR 1900$`+`$14 (Woods et al. 1999b), BATSE triggered at 15:40:47.4 UT on a $`\sim `$ 1 s burst with a smooth, FRED-like (Fast Rise Exponential Decay) temporal profile, which is commonly seen in GRB light curves but is rare for SGR events. 
This burst was located near SGR 1900+14 (Figure 1), but was longer than typical bursts from this source and its spectrum was much harder (Figure 2a); the similarities of this event to GRBs in spectrum and temporal structure were noted independently by D. Fargion (1999). Using Ulysses and BATSE, an IPN annulus was constructed, and the joint BATSE/IPN error box (Figure 1) contained the known source location of SGR 1900+14 (Frail, Kulkarni & Bloom 1999). Without invoking any assumptions about source activity, we estimate the probability that a GRB within the BATSE database with an IPN arc would contain any known SGR by chance. For our purposes, we will assume GRBs are isotropic and that the angular sizes of the known SGR error boxes are small compared to the joint BATSE/IPN error box. The probability $p$ then reduces to $p \simeq (1/4\pi)NA$, where $N$ is the number of known SGRs and $A$ is the area of the burst error box in steradians. With four known SGRs, we find a chance probability of 3 × 10<sup>-6</sup> that a GRB with the given error box area will overlap a known SGR. We now normalize this probability by multiplying by the number of trials (i.e. the number of BATSE/Ulysses IPN arcs as of March 1999, which is 641), which gives the probability of a chance association of 2 × 10<sup>-3</sup>. The burst was detected during a period when SGR 1900+14 was burst active (a fact not used in the probability calculation), further strengthening the association. Ten weeks later, on 10 January 1999 at 08:39:01.4 UT, a strikingly similar burst (Figure 2b) was recorded with BATSE, which was again located near SGR 1900+14. This burst also triggered Ulysses, so an IPN annulus was constructed which again contained the position of SGR 1900+14 (Figure 1). Using the same arguments as before, we find an upper limit to the probability that the burst and any known SGR are related by chance coincidence of 3 × 10<sup>-3</sup>. The combined probability (the product of the two individual probabilities) that these two events are GRBs with BATSE/IPN error boxes that are consistent with a known SGR by chance coincidence is 6 × 10<sup>-6</sup>. An alternative possibility is that these two bursts constitute two gravitationally lensed images of the same GRB (Paczyński 1986). However, we consider this highly unlikely given the positional coincidence with SGR 1900+14, the temporal correlation with a known burst active period for the source, and the anti-correlation between hardness and intensity (see section 3). We conclude that these two bursts originated from SGR 1900+14.

## 3 Burst Spectra

To fit the time-integrated spectrum of each burst, we used High Energy Resolution Burst (HERB) data, which have 128 energy channels covering 20 – 2000 keV (see Fishman et al. 1989 for a description of BATSE data types). We fit a third-order polynomial to approximately 300 sec of pre-burst and post-burst data and interpolated between these intervals to estimate the background at the time of the burst. This background was subtracted, and we fit the resulting burst spectrum within WINGSPAN (WINdow Gamma SPectral ANalysis) to three models: an OTTB (dN/dE $\propto$ E<sup>-1</sup> exp\[$-$E/$kT$\]), a simple power law, and Band's GRB function (Band et al. 1993). We find that for each burst, the spectrum is not well characterized by the OTTB model, based upon the large value of $\chi_\nu^2$ (Table 1).
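Since both spectral forms recur throughout this section, a minimal numerical sketch of the two competing photon models may be useful; the normalizations and the Band parameters below are illustrative assumptions, not the fitted values of Table 1.

```python
import numpy as np

def ottb(E, kT, A=1.0):
    """OTTB photon spectrum, dN/dE ~ E**-1 * exp(-E/kT) (E and kT in keV)."""
    return A * E**-1.0 * np.exp(-E / kT)

def band(E, alpha, beta, E0, A=1.0):
    """Band et al. (1993) GRB function: a power law with an exponential
    cutoff, joined smoothly to a steeper high-energy power law."""
    Eb = (alpha - beta) * E0                       # break energy, keV
    low = A * (E / 100.0)**alpha * np.exp(-E / E0)
    high = (A * (Eb / 100.0)**(alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0)**beta)
    return np.where(E < Eb, low, high)

E = np.logspace(np.log10(20.0), np.log10(2000.0), 300)   # HERB band, keV
typical_sgr = ottb(E, kT=25.7)        # weighted-mean SGR 1900+14 temperature
grb_like = band(E, alpha=-1.0, beta=-2.2, E0=100.0)      # illustrative parameters
```

The two pieces of the Band function are constructed to join continuously at the break energy, which is why the high-energy branch carries the exp(beta - alpha) factor.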
Using a $\Delta\chi^2$ test between the Band and OTTB models, we find the Band function is strongly favored over the OTTB, with probabilities of 4.6 × 10<sup>-7</sup> and 7.2 × 10<sup>-11</sup> that these $\chi^2$ differences would occur by chance for the respective bursts. Furthermore, the Band function is favored over the simple power law for the second, brighter event at a significance level of 1.3 × 10<sup>-3</sup>, although inclusion of a low-energy cutoff with the power-law model eliminates this advantage. Figure 3 shows the data and folded Band model for each burst. For comparison with past results, we have included the OTTB fit parameters (Table 1) to clearly demonstrate the difference between the spectra of these bursts and typical SGR burst emissions. Specifically, for SGR 1900+14 during its recent active episode, a weighted mean of 25.7 ± 0.8 keV was found for the OTTB temperature of 22 events detected with BATSE (Göğüş et al. 1999). Clearly, the temperatures found for these two bursts are much higher than these typical values. Using data with coarser spectral resolution, but finer time resolution, we fit multiple segments of each burst in order to search for spectral evolution. For the initial rise of each event, we used Time-Tagged Event (TTE) data, which have only 4 energy channels but 2 $\mu$s time resolution. For the peak and tail, we used Medium-Energy Resolution (MER) data, which have 16 energy channels and 16 ms time resolution. In order to maintain good signal-to-noise for a reasonable parameter determination, the bursts were divided into 8 and 9 intervals, respectively. Guided by our fit results for the time-integrated spectra and our limited number of energy channels, we chose to fit the power-law model to these spectra. We find significant spectral evolution through each burst as the power-law photon index varies between $-$1.5 and $-$2.4 (Figures 2c and 2d). We note a general soft-to-hard trend in the time evolution of the spectra of these bursts. This appears to be a consequence of a relatively faster temporal rise than decay and an intensity/hardness anti-correlation for these events (Figure 4). To quantify the significance of this correlation, we calculated the Spearman rank-order correlation coefficient ($\rho = -0.86$) between the energy flux and photon index. The probability of obtaining a coefficient of this magnitude from a random data set is 8.3 × 10<sup>-6</sup>, thus the anti-correlation is significant. The peak fluxes and fluences of the two events are not exceptional when compared to other burst emissions from this source. We find peak fluxes (0.064 s timescale) of (2.94 ± 0.15) and (4.96 ± 0.18) × 10<sup>-6</sup> ergs cm<sup>-2</sup> s<sup>-1</sup> and fluences (25 – 2000 keV) of (1.14 ± 0.04) and (1.85 ± 0.04) × 10<sup>-6</sup> ergs cm<sup>-2</sup> for the bursts of 981022 and 990110, respectively. For a distance of 7 kpc (Vasisht et al. 1994), these correspond to peak luminosities of 1.7 and 2.9 × 10<sup>40</sup> ergs s<sup>-1</sup> and burst energies of 0.65 and 1.1 × 10<sup>40</sup> ergs (assuming isotropic emission). The ranges of peak fluxes and fluences for SGR 1900+14 bursts observed recently with BATSE are 0.3 – 20 × 10<sup>-6</sup> ergs cm<sup>-2</sup> s<sup>-1</sup> and 0.02 – 25 × 10<sup>-6</sup> ergs cm<sup>-2</sup>, respectively (Göğüş et al. 1999).
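As a quick arithmetic check of the quoted luminosities and energies, a minimal sketch assuming the 7 kpc distance and isotropic emission adopted in the text:

```python
import numpy as np

KPC_CM = 3.086e21                        # centimeters per kiloparsec
area = 4.0 * np.pi * (7.0 * KPC_CM)**2   # 4*pi*d^2 for d = 7 kpc, in cm^2

bursts = {"981022": (2.94e-6, 1.14e-6), "990110": (4.96e-6, 1.85e-6)}
for name, (peak_flux, fluence) in bursts.items():
    # peak flux (erg/cm^2/s) -> peak luminosity; fluence (erg/cm^2) -> energy
    print(name,
          "L_peak ~ %.1e erg/s" % (area * peak_flux),   # ~1.7e40 and ~2.9e40
          "E ~ %.1e erg" % (area * fluence))            # ~0.7e40 and ~1.1e40
```

The hardness/intensity anti-correlation itself could be quantified in the same way as in the text with, e.g., scipy.stats.spearmanr applied to the per-interval energy fluxes and photon indices.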
The measured values for these bursts with harder spectra are well within the corresponding observed ranges.

## 4 A Comparison with Observed GRB Characteristics

For each event, we have calculated two quantities that are traditionally used to delineate between the two classes (Kouveliotou et al. 1993) of GRBs in the BATSE catalog, specifically t<sub>90</sub> and spectral hardness (Ch 3/Ch 2). We find t<sub>90</sub> durations of 1.2 ± 0.2 and 0.9 ± 0.2 s, and fluence hardness ratios (3/2) of 1.9 ± 0.1 and 2.1 ± 0.1 for the bursts of 981022 and 990110, respectively. When plotted together with the reported values of the 4Br catalog (Paciesas et al. 1999), we find that these two bursts fall outside the main concentrations of each distribution (i.e. the long, soft class and the short, hard class), but nearer the centroid of the short, hard class. We are currently looking more closely at events in the same region of this diagram that were classified as GRBs. Given the spectral similarities between these two events and a fair fraction of GRBs, the time-integrated spectrum is not sufficient to distinguish these bursts from GRBs. Many GRBs show a strong hardness/intensity correlation within individual bursts (Ford et al. 1995; Preece et al. 1999), but to the best of our knowledge, a consistent anti-correlation throughout a GRB has never been seen. The two bursts from SGR 1900+14 do, however, show a strong hardness/intensity anti-correlation (Figure 4). If this behavior is inherent to all hard SGR events, it would be a useful diagnostic (but secondary to location) with which to select them.

## 5 Discussion

We have shown strong evidence for hard emission from SGR 1900+14 during two bursts of average intensity as observed with BATSE. It is clear that this type of burst emission in SGR 1900+14 is rare (1% of the SGR 1900+14 events acquired with BATSE during the 1998-1999 active period). Their occurrence following the August 27 flare may suggest a causal relationship, but this is difficult to pin down given the rarity of hard events and the globally enhanced burst activity at the time. The clear distinction of these two events from typical soft SGR bursts in hardness, spectral form, spectral evolution, duration, and lack of temporal variability suggests these bursts are created either by physical processes different from those which produce the more common, soft events, or in a region whose ambient properties affecting the emitted radiation (e.g. magnetic field, optical depth, etc.) are different. The proposed identification of SGRs with very strongly magnetized neutron stars (Duncan & Thompson 1992) has received strong support from the discovery that both SGR 1806-20 and SGR 1900+14 are X-ray pulsars spinning down at rapid rates (Kouveliotou et al. 1998, 1999; Hurley et al. 1999b). In this magnetar model, the typically soft SGR bursts are explained in the following way. The internal magnetic field is strong enough to diffuse rapidly through the core, thereby stressing the crust (Thompson & Duncan 1996). A sudden fracture injects a pulse of Alfvén radiation directly into the magnetosphere, which cascades to high wavenumber and creates a trapped fireball (Thompson & Duncan 1995; Thompson & Blaes 1998). The soft spectrum arises from a combination of photon splitting and Compton scattering in the cool, matter-loaded envelope of the fireball.
The relative hardness of the two reported events, combined with luminosities in excess of $10^{40}$ erg s<sup>-1</sup>, points directly to an emission region of shallower scattering depth, $\tau_{\rm es} < 1$, situated outside $10^3\,(L_X/10^{40}\ {\rm erg\ s^{-1}})$ neutron star radii. Inverse Compton emission is suggested by the lack of a correspondence between the spectral break energy and a cooling energy at this large a radius. For example, bulk Alfvén motions are an effective source of Compton heating even in the absence of Coulomb coupling between electrons and ions (Thompson 1994). In certain circumstances, one expects that Alfvén radiation will disperse rapidly throughout the magnetosphere: if the initial impulse occurs on extended dipole field lines, or if it involves a buried fracture of the crust. This rapid dispersal is made possible by the strong coupling between external Alfvén modes and internal seismic waves (cf. Blaes et al. 1989). The wave energy is then distributed logarithmically with radius, with the wave amplitude $\delta B/B$ approaching unity at the Alfvén radius $R_A/R_* \simeq 900\,(B_*/4.4\times 10^{14}\ {\rm G})^{1/2}(L_A/10^{40}\ {\rm erg\ s^{-1}})^{-1/4}$ (Thompson & Blaes 1998). Here $B_*$ is the polar dipole magnetic field, $L_A$ is the luminosity in escaping waves, and $R_* \simeq 10$ km. Damping by leakage onto open field lines (which extend beyond the Alfvén radius) occurs on a timescale $t_{\rm damp} = 3(2R_A/R_*)(R_*/c) = 0.2\,(B_*/4.4\times 10^{14}\ {\rm G})^{1/2}(L_A/10^{40}\ {\rm erg\ s^{-1}})^{-1/4}$ s. This lies close to the FWHM of the two reported bursts. Wave excitations near the neutron star undergo a turbulent cascade on a similar timescale, and will generate softer seed X-ray photons (Thompson et al. 1999). Independent of the detailed physical mechanism, these observations demonstrate that a Galactic source – probably a strongly magnetized neutron star with a large velocity – is capable of producing a burst of $\gamma$-rays whose time-integrated spectrum resembles that of a classical GRB. The similarity is remarkable in light of the difference in peak luminosities of $10^{11}$ (modulo beaming factors). However, the two bursts from SGR 1900+14 presented here are by no means 'typical' GRBs, given their low E<sub>peak</sub> values, unusual spectral evolution, and their position in the duration-hardness plane. Furthermore, we know the currently active SGRs in our Galaxy lie close to the Galactic plane ($z_{\rm rms} \simeq 66$ pc), so their contribution to the BATSE GRB catalog must be minimal on account of the isotropic spatial distribution of GRBs. A larger contribution from older magnetars that have moved away from the Galactic plane is conceivable, but it would require that the ratio of hard bursts to soft bursts increases tremendously with age. Acknowledgements – We thank Chip Meegan for useful discussions. PMW acknowledges support under the cooperative agreement NCC 8-65. JvP acknowledges support under NASA grants NAG 5-3674 and NAG 5-7060. KH is grateful for support under JPL Contract 958056 and NASA grant NAG5-7810. CT acknowledges support from the Alfred P. Sloan Foundation.
no-problem/9909/physics9909045.html
ar5iv
text
## 1 Introduction

External magnetic fields are well known to have a strong impact on the properties of particle systems. A number of intriguing phenomena in different areas of physics, like, for example, the quantum Hall effect in solid state physics or the interplay of regularity and chaos in few-body atomic systems, have their origin in the combination of the magnetic and Coulomb forces. By nature the Coulomb potential together with the paramagnetic and diamagnetic interactions represents a nonseparable and nonlinear problem already on the one-particle level. As a consequence, both the dynamics as well as the electronic structure of atoms or molecules are subject to severe changes in the presence of a strong magnetic field. In the present article we review selected recent developments for Rydberg atoms in strong laboratory magnetic fields and outline future perspectives. We also provide concrete suggestions for how to experimentally access and observe the corresponding atomic states and processes. Of course, similar phenomena are to be expected also for astrophysical field strengths, but the focus in this work is on highly excited atoms exposed to laboratory field strengths. Our central issue is the phenomena and effects arising due to the nonseparability of the center of mass and internal motion for two-body systems in external magnetic fields and/or crossed electric and magnetic fields. Due to the smallness of the ratio of the electron to nuclear mass one might, at first glance, be tempted to believe that the effects of the coupling of the collective and internal motion provide only tiny corrections to the overall dynamics. This is however in general not correct, and indeed we will provide a number of different physical situations in which even the correct qualitative behaviour cannot be obtained without including the interaction of the center of mass and relative motion. Prominent effects due to this interaction are the chaotic classical diffusion of the center of mass, the existence of giant dipole states for neutral atoms, and dynamical phenomena like the self-ionization process for rapidly moving ions in strong magnetic fields. As we shall see in the following, the ongoing research in this area reveals more and more of the beautiful phenomena created by the competition of magnetic and Coulomb forces. In detail we proceed as follows. Section I first gives a brief account of the separation of the CM motion for neutral particle systems in magnetic fields and subsequently investigates a class of decentred magnetically dressed Rydberg states possessing a huge electric dipole moment. A possible experimental set-up for the preparation of these states as well as applications to the positronium atom are discussed. Section II reviews recent developments for ionic systems, like the classical self-ionization process for rapidly moving ions in magnetic fields. A number of most recently discovered quantum properties of ions in fields are outlined and clearly indicate that quantization introduces a number of new features.

## 2 Neutral atoms in magnetic fields

### 2.1 Giant dipole states in crossed fields

In the absence of an external magnetic field the CM and electronic motions of an atom separate due to the conservation of the total canonical momentum $\mathbf{P}$. $\mathbf{P}$ is the generator of coordinate space translations, which represent a symmetry for any system of particles interacting through a potential which depends only on the distances of the particles.
In a homogeneous magnetic field this symmetry is lost since the Hamiltonian depends explicitly on the vector potential associated with the external field. Nevertheless, there exists a so-called phase space translation group which represents a symmetry in the presence of the external field. This group is generated by the so-called pseudomomentum. Historically the pseudomomentum was implicitly used by Lamb in order to perform a so-called pseudoseparation of the CM motion for the hydrogen atom in a homogeneous magnetic field. In the late seventies a more profound mathematical treatment of the pseudoseparation for two-body systems was achieved. In the eighties and nineties the pseudoseparation for many-body systems has been reviewed (see ref. and references therein) and adapted to the needs of molecular physics. In order to perform the pseudoseparation transformation as mentioned above, one starts with the Hamiltonian of the atom in the laboratory frame, which reads for a two-body system

$$\mathcal{H}_L = \sum_{i=1}^{2}\left[\frac{1}{2m_i}\left(\mathbf{p}_i - e_i\mathbf{A}_i\right)^2 - e_i\,\mathbf{E}\cdot\mathbf{r}_i\right] + V(|\mathbf{r}_1 - \mathbf{r}_2|) \qquad (1)$$

where $e_i, m_i, \mathbf{A}_i$ and $\mathbf{E}$ denote the charges, masses, vector potential and electric field vector, respectively. $\{\mathbf{r}_i, \mathbf{p}_i\}$ are the Cartesian coordinates and momenta in the laboratory coordinate system. Subsequently one chooses a certain gauge for the vector potential $\mathbf{A}_i$ belonging to the magnetic field $\mathbf{B}$. Most commonly this is done by introducing the symmetric gauge $\mathbf{A}_i = \frac{1}{2}\mathbf{B}\times\mathbf{r}_i$. As a next step a coordinate transformation from laboratory to relative and center of mass coordinates follows. Finally the pseudomomentum is introduced as a canonical momentum of the center of mass coordinate by performing a corresponding transformation in momentum space, which gives the final transformed Hamiltonian. Choosing, according to the above, a specific gauge in the laboratory Hamiltonian has, however, a serious drawback: it is not possible to discern between gauge dependent and gauge invariant terms in the transformed Hamiltonian. In order to extract the physics, which is inherently gauge-independent, from the transformed Hamiltonian, it is a central issue to identify its gauge invariant parts. The necessity of a gauge-invariant approach was realized in the nineties by different authors, who chose different approaches in order to obtain a gauge-independent formalism and results for the problem of two interacting particles. Most importantly, it was finally shown that there exists a potential picture (see below) due to a number of gauge-invariant terms emerging from the generalized pseudoseparation. Let us consider the final Hamiltonian resulting from the afore-mentioned gauge independent pseudoseparation

$$\mathcal{H} = \mathcal{T} + \mathcal{V} \qquad (2)$$

where

$$\mathcal{T} = \frac{1}{2\mu}\left(\mathbf{p} - q\mathbf{A}(\mathbf{r})\right)^2 \qquad (3)$$

and

$$\mathcal{V} = \frac{1}{2M}\left(\mathbf{K} - e\mathbf{B}\times\mathbf{r}\right)^2 + V(r) + \frac{M}{2}\mathbf{v}_D^2 + \mathbf{K}\cdot\mathbf{v}_D \qquad (4)$$

with the charge $q = \frac{e\mu}{\widehat{\mu}}$, where $\mu = \frac{mM_0}{M}$ and $\widehat{\mu} = \frac{mM_0}{M_0 - m}$ are different reduced masses. $\mathbf{K}$ is now the constant vector of the pseudomomentum, which contains a term linear in the external electric field. $\{\mathbf{r}, \mathbf{p}\}$ denote the canonical pair of variables for the internal relative motion. $\mathbf{v}_D = \frac{\mathbf{E}\times\mathbf{B}}{B^2}$ is the drift velocity of free charged particles in crossed external fields. The latter is independent of the charges and masses of the particles. The Hamiltonian (2) is the sum of two terms: the kinetic energy $\mathcal{T}$ of the relative motion and the potential $\mathcal{V}$.
The explicit form of the kinetic energy $\mathcal{T}$ depends on the chosen gauge via the vector potential $\mathbf{A}$. The important novelty with respect to our Hamiltonian $\mathcal{H}$ is the appearance of the potential term $\mathcal{V}$. Apart from the Coulomb potential $V$ and the constant term $\frac{M}{2}\mathbf{v}_D^2 + \mathbf{K}\cdot\mathbf{v}_D$, $\mathcal{V}$ contains an additional potential term

$$V_o = \frac{1}{2M}\left(\mathbf{K} - e\mathbf{B}\times\mathbf{r}\right)^2 \qquad (5)$$

The latter is gauge independent, i.e. does not contain the vector potential, and therefore fully deserves the interpretation of an additional potential for the internal motion with the kinetic energy (3). Apart from the constant $\frac{\mathbf{K}^2}{2M}$, the potential $V_o$ contains two coordinate dependent parts. The term linear in the coordinates, $-\frac{e}{M}(\mathbf{K}\times\mathbf{B})\cdot\mathbf{r}$, consists of two Stark terms: one which is due to the external electric field $\mathbf{E}$ and a second one which is a motional Stark term with an induced constant electric field $\frac{1}{M}\left((\mathbf{K} + M\mathbf{v}_D)\times\mathbf{B}\right)$. The latter electric field is always oriented perpendicular to the magnetic one and arises due to the collective motion of the atom through the homogeneous magnetic field. Besides the linear terms there exists a quadratic, i.e. diamagnetic, term $\frac{e^2}{2M}(\mathbf{B}\times\mathbf{r})^2$ in the potential $V_o$, which is of great importance for the main subject of the present section, i.e. the existence of giant dipole states of two-body systems in external fields. In the following we discuss the qualitative properties of the potential $\mathcal{V}$. Figure 1 shows, for the choice $\mathbf{B} = (0,0,B)$, $\mathbf{K} = (0,K,0)$ and a vanishing external electric field, a two-dimensional intersection of the potential $\mathcal{V}$ for $z = 0$. In the neighbourhood of the coordinate origin the Coulomb potential dominates. This regime corresponds in figure 1 to the tube around the origin of the coordinate plane $(x,y)$. With increasing distance from the origin along the $x$-axis ($y = 0$) the Stark term $\frac{e}{M}BKx$ increases and eventually becomes comparable with the Coulomb potential. This means that we encounter an approximately linear rise of the potential for positive values of the $x$-coordinate. For negative values of the $x$-coordinate a saddle point eventually emerges. In this coordinate regime the diamagnetic term of eq. (5) represents a small correction. For even larger distances the Coulomb potential becomes small, and the shape of the potential $\mathcal{V}$ is more and more determined by the diamagnetic term $\frac{e^2}{2M}B^2(x^2+y^2)$. Due to the competition of the Stark and diamagnetic terms, our potential $\mathcal{V}$ develops an outer minimum and a corresponding potential well. The existence of both the saddle point and the outer minimum/well depends, of course, on the values of the magnetic field strength and the motional/external electric field strength. For a derivation and discussion of the corresponding conditions we refer the reader to the literature. We emphasize that the potential $V_o$ is inseparably connected with the finite nuclear mass. Assuming an infinite nuclear mass simply yields $V_o \equiv 0$, and the described features of the total potential $\mathcal{V}$ disappear. The above-discussed properties of our potential $\mathcal{V}$ have important implications for the dynamical behaviour of the atom.
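A minimal numerical sketch of this potential landscape, for hydrogen along a one-dimensional cut in atomic units, may make the saddle/outer-well structure concrete. The sign conventions and the value of $K$ below are illustrative assumptions (the well is placed at positive x here for simplicity), chosen so that the outer minimum lands near the $\sim 10^5$ Bohr-radii scale discussed below.

```python
import numpy as np

M = 1837.0      # total mass of hydrogen in atomic units
B = 1.0e-5      # ~2.35 T expressed in atomic units
K = 1.0         # pseudomomentum; assumed value, places the well near x ~ K/B

def V(x):
    """Cut of the internal potential along the x-axis (y = z = 0): the
    gauge-invariant term (K - B x)^2 / (2M) plus the Coulomb attraction."""
    return (K - B * x)**2 / (2.0 * M) - 1.0 / np.abs(x)

x = np.linspace(1.0e3, 2.0e5, 400000)
v = V(x)
mask_max = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])   # the saddle point
mask_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])   # the outer minimum
x_s, x_m = x[1:-1][mask_max][0], x[1:-1][mask_min][0]
print("saddle at x ~ %.2e a.u., outer minimum at x ~ %.2e a.u." % (x_s, x_m))
print("well depth ~ %.2e a.u." % (V(x_s) - V(x_m)))
# The minimum sits near 1e5 a.u., i.e. ~5e-6 m, matching the scale quoted below.
```

With these numbers the saddle appears near $10^4$ a.u. and the outer minimum near $10^5$ a.u., with a well depth of order $10^{-4}$ hartree.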
First of all, we observe that the ionization of the atom can take place only in the direction of the magnetic field axis: in the direction perpendicular to the magnetic field vector the diamagnetic term $\frac{e^2}{2M}(\mathbf{B}\times\mathbf{r})^2$ dominates at large distances $\rho = (x^2+y^2)^{1/2}$ and causes a confining behaviour. The second important observation is the fact that the existence of the outer well leads to new weakly bound states in this well. Let us provide some main characteristics of these states. In refs. an explicit approximate formula has been given for the position of the outer minimum: $x_0 \approx -\frac{K}{B} + \frac{KM}{K^3 - 2MB}$, $y = z = 0$. Hence, for a laboratory field strength $B \approx 2.35$ Tesla and a motional/external electric field of the order of $E \approx 2.8\times 10^3\ \frac{\rm V}{\rm m}$, the minimum is located at a distance of about $5.3\times 10^{-6}$ m from the Coulomb singularity. For states in the outer well the electron and proton are therefore separated about 100,000 times as much as they are in the ground state of the hydrogen atom without external fields, i.e. we encounter a strongly delocalized atom of mesoscopic size. Since the well exists only on the half-hyperplane with a negative Stark term (see fig. 1), these states possess a huge permanent electric dipole moment. This is in contrast to the well-known Rydberg states in a pure magnetic field, which do not exhibit a permanent dipole moment. For energies close to the minimum the outer well is approximately an anisotropic harmonic potential. Low-lying quantum states can therefore be described by an anharmonic oscillator in a magnetic field. The field-dependent kinetic energy (see eq. (3)) hereby determines the extension of the wave function in the plane perpendicular to the magnetic field. The deviation of the exact energies from those of the harmonic approximation changes, as expected, significantly with increasing degree of excitation. It grows stepwise, and a major contribution to the anharmonicity comes from the quantum number $n_z$, i.e. the excitation in the direction parallel to the magnetic field. This can also be seen in perturbation theory for higher terms of the expansion of the Coulomb potential, where the major contributions to the energy corrections are due to those terms containing high powers of $z$. For the computational techniques used to obtain energies and wavefunctions for a large number of states in the outer well, as well as a detailed discussion of their properties, we refer the reader to ref.

### 2.2 Experimental preparation

At this point it is natural to pose the question how one can experimentally prepare hydrogen atoms in crossed fields in their giant dipole states. This question has been investigated and discussed in detail in ref. We provide here the key ideas of the approach developed in this work. The important parameter which controls the formation of the outer well, and which is at the disposal of the experimentalist, is the external electric field strength. The preparation scheme consists of a sequence of steps which correspond to different electric field configurations. The first step excites the atom, in the presence of a magnetic field but no electric field, from the ground state with a laser pulse to a highly excited Rydberg state. In the second step an electric field is switched on within a time period of a few ns to a value $E = E_c$ which corresponds to the existence of a shallow outer well.
Subsequently, during the third step, the electric field is kept constant for a time period $\Delta t$. The Rydberg states prepared in step one are localized in the Coulomb well. After the switching of the electric field to $E_c$ (step two) their energy is above the saddle point. Keeping the electric field constant during the third time step serves to achieve a significant spreading of the prepared state over the shallow outer well. Thereafter, in a fourth step, the electric field is switched to its final value $E_f$, which corresponds to a deep outer well. This last step of the preparation procedure captures the wave packet in the outer well, and a significant portion of the hydrogen atoms therefore ends up in low-energy states of the outer well. The second switching of the electric field, to its final value $E_f$, is much slower than the first fast switching to the value $E_c$. The reason for this is the broadening of the final energy distribution during the trapping of the hydrogen atoms in the outer well. An adiabatic second switching is therefore obligatory in order to end up with a narrow final energy distribution. Figure 2 shows the final energy distribution in the outer well resulting from calculations for an ensemble of classical trajectories which simulate the behaviour of the highly excited Rydberg states during the above-discussed steps of preparation. It can clearly be seen that the maximum of the energy distribution is below the ionization threshold (vertical dashed line in figure 2), and therefore a significant part of the prepared outer well states is strictly bound. For further details of the experimental setup and preparation, like, for example, the specific switching procedures of the electric field or the influence of electric stray fields on the preparation scheme, we refer the reader to ref. Let us provide some remarks with respect to the experimental detection of the giant dipole states of the hydrogen atom in crossed electric and magnetic fields. Direct state-to-state transitions for bound states in the outer well should be observable in the radio-frequency regime. The energy gap between the ground and first excited state corresponds, for the above-given field strengths, to a frequency of the order of a few tens of MHz. Alternatively, the large electric dipole moment suggests itself for detection, which could be achieved through deflection of the atoms by a slightly inhomogeneous electric field. Even though the binding energies of the states in the outer well are relatively small, they should be stable as long as collisional interaction is prevented. We have discussed the existence and properties as well as the experimental preparation and detection of giant dipole states for the hydrogen atom in crossed fields. The electric field can hereby be either an external one or due to the collective motion of the hydrogen atom through the magnetic field. For these states the magnetic interaction dominates the Coulomb attraction in the plane perpendicular to the magnetic field, whereas parallel to the magnetic field we exclusively encounter Coulomb forces. The electric field causes the decentred character of these states. Of course this kind of state does not exist only for one-electron atoms but should occur also for more-electron systems. Indeed it is a challenging question to ask for the existence and properties of decentred multielectron atoms in crossed external fields.
Moreover, one can imagine giant dipole molecules which, due to the presence of several heavy particles, might possess completely different features compared to atoms.

### 2.3 Application to Positronium

Besides the above-drawn general perspective there exists an intriguing application of the above results to exotic two-body systems, namely the positronium atom. Since the distance of the minimum of the outer well from the Coulomb singularity is approximately proportional to the total mass of the two-body system ($x_0 \sim \frac{EM}{B^2}$), we expect that the giant dipole states of positronium are significantly less extended than those of hydrogen. At the same time the critical electric field strength necessary for the existence of the outer well scales with $M^{-2/3}$. As a consequence the typical reduction of size for fixed field strength is of the order of 10, i.e. for $B \approx 2.35$ T the extension of the Rydberg state is several thousand Bohr radii. The giant dipole states located in the outer well and the traditional Rydberg states located in the Coulomb well are separated by a wide and high potential barrier. For the positronium atom this has important consequences: the potential barrier prevents the particles from contact and therefore decreases the annihilation rate by many orders of magnitude. Indeed, for typical laboratory field strengths the lifetime can become many years, and low-lying outer well states of positronium can, for all practical purposes, be considered stable. Crossed fields therefore offer a unique opportunity for the stabilization of matter-antimatter two-body systems. For a detailed investigation of the positronium atom, including dipole transition rates, tunneling probabilities as well as spectra and wavefunctions resulting from both perturbation theoretical and finite element calculations, we refer the reader to refs. We conclude with an important result of ref. which should be of relevance to astrophysical situations: for sufficiently strong fields the energetically lowest decentred outer well state becomes the global ground state of the atom. This statement holds for both the hydrogen atom and the positronium atom. As a consequence the ground state of isolated positronium in strong crossed external fields is prevented from annihilation and represents a long-lived state.

## 3 Atomic ions in magnetic fields

### 3.1 Basic properties

For a charged atom in a magnetic field the interaction of its collective motion with the electronic motion is more intricate than for neutral species. From a physical point of view it is evident that the crudest picture for the CM motion describes the ion as an entity in the magnetic field with a charge and mass identical to the net charge and total mass of the ion. For neutral systems the net charge of the system is zero, and the crudest picture for the behaviour of the CM is a free straight-lined motion through the magnetic field. However, as we shall see below, we encounter coupling terms for the ion's CM and electronic motions which can mix them up heavily, thereby causing a number of interesting energy transfer processes between the degrees of freedom. From a formal point of view the maximum number of commuting constants of motion is the same for neutral as well as charged systems. For neutral species, however, one can choose the three components of the total pseudomomentum $\mathbf{K}$, which are exclusively associated with the CM motion of the system.
As a consequence the CM coordinates are cyclic and can be completely eliminated from the Hamiltonian by the so-called pseudoseparation (see section 2.1). For charged systems the only maximal set of commuting constants of motion is $(\mathbf{K}_{\perp}^2, \mathcal{L}_{\parallel}, \mathbf{K}_{\parallel})$, where $\mathbf{K}_{\perp}$ is the component of the pseudomomentum perpendicular to the magnetic field and $\mathcal{L}_{\parallel}$ is the component of the total angular momentum parallel to the field. This set of quantities is not exclusively associated with the CM motion but involves both the CM and internal degrees of freedom. Unlike for the neutral system, the CM coordinates cannot be completely eliminated from the Hamiltonian. To simplify the Hamiltonian one therefore uses a transformation which possesses a certain analogy to the transformations in the neutral case, thereby arriving at the following final form

$$\mathcal{H} = \mathcal{H}_1 + \mathcal{H}_2 + \mathcal{H}_3 \qquad (6)$$

where

$$\mathcal{H}_1 = \frac{1}{2M}\left(\mathbf{P} - \frac{Q}{2}\mathbf{B}\times\mathbf{R}\right)^2 \qquad (7)$$

$$\mathcal{H}_2 = e\frac{\alpha}{M}\left(\mathbf{B}\times\left(\mathbf{P} - \frac{Q}{2}\mathbf{B}\times\mathbf{R}\right)\right)\cdot\mathbf{r} \qquad (8)$$

$$\mathcal{H}_3 = \frac{1}{2m}\left(\mathbf{p} - \frac{e}{2}\mathbf{B}\times\mathbf{r} + \frac{Q}{2}\frac{m^2}{M^2}\mathbf{B}\times\mathbf{r}\right)^2 + \frac{1}{2M_0}\left(\mathbf{p} + \left(\frac{e}{2} - \frac{Q}{2M}\frac{m}{M}\left(M + M_0\right)\right)\mathbf{B}\times\mathbf{r}\right)^2 + V \qquad (9)$$

where $m$, $M_0$ and $M$ are the electron, nuclear and total mass, respectively, $\alpha = (M_0 + Zm)/M$, and $V$ is the Coulomb potential. For the vector potential $\mathbf{A}$ we have adopted the symmetric gauge. The magnetic field vector $\mathbf{B}$ is again assumed to point along the z-axis. $(\mathbf{R}, \mathbf{P})$ and $(\mathbf{r}, \mathbf{p})$ are the canonical pairs of variables for the CM and relative motion, respectively. The Hamiltonian $\mathcal{H}$ consists of three parts which introduce different types of interaction. The part $\mathcal{H}_1$ in eq. (7), which involves solely the CM degrees of freedom, describes the cyclotron motion of a free pseudoparticle with mass $M$ and charge $Q$ in a homogeneous magnetic field (see the comments at the beginning of this subsection). This zeroth order picture is, in general, not sufficient to describe the CM motion of the ion. In fact the behaviour of the CM can deviate strongly from the motion given by the Hamiltonian $\mathcal{H}_1$ and exhibits a variety of different phenomena depending on the parameter values (energy, field strength, CM velocity). The origin of this rich dynamics lies in particular in the Hamiltonian $\mathcal{H}_2$ in eq. (8), which describes the coupling of the CM and electronic degrees of freedom. It represents a motional Stark term with a rapidly oscillating electric field which is determined by the dynamics of the system. Because of this "dynamical" electric field the collective and internal motion will, in general, mix up heavily. $\mathcal{H}_3$ in eq. (9) contains only the electronic degrees of freedom and describes, to zeroth order, the relative motion of the electron with respect to the nucleus.

### 3.2 Mixing and localization properties for the collective and electronic motions

The Hamiltonian $\mathcal{H}_2$ in eq. (8) destroys the picture of a decoupled CM and electronic motion and causes an interaction of these motions whose strength depends on several parameters (CM and internal energy, field strength). In the following we outline both classical and quantum effects and phenomena due to this interaction.
We hereby pursue the track of an increasing strength of the coupling of the collective and internal motion, which can in particular be achieved by increasing the energy of the CM motion. In order to investigate the significance and effects of the coupling Hamiltonian $\mathcal{H}_2$ one can use the following method for formally solving the total Schrödinger equation $\mathcal{H}\Psi = E\Psi$. The most natural way is to expand the total wave function $\Psi$ in a series of products

$$\Psi(\mathbf{R};\mathbf{r}) = \sum_{p,q} c_{pq}\,\Phi_p^L(\mathbf{R})\,\psi_q(\mathbf{r}) \qquad (10)$$

where $c_{pq}$ are the coefficients of the product expansion. The functions $\{\Phi_p^L\}$ obey the Schrödinger equation $\mathcal{H}_1\Phi_p^L = E_p^L\Phi_p^L$ for a free particle with charge $Q$ and mass $M$ in a homogeneous magnetic field, i.e. they are the corresponding Landau orbitals. One proper choice for the functions $\{\psi_q\}$ in the expansion (10) (see also below) are the eigenfunctions of the electronic Hamiltonian, i.e. $\mathcal{H}_3\psi_q = E_q^I\psi_q$ ($q$ stands collectively for all electronic quantum numbers). If we insert the product expansion (10) for the total wave function $\Psi$ into the total Schrödinger equation and project onto a simple product $\Phi_{p'}^L\psi_{q'}$, we arrive at the following set of coupled equations for the coefficients $\{c_{pq}\}$:

$$\left(\underline{\mathcal{H}}_2 + \underline{E}^L + \underline{E}^I\right)\mathbf{c} = E\mathbf{c} \qquad (11)$$

where $\mathbf{c}$ is the column vector with components $\{c_{pq}\}$. $\underline{E}^L$ and $\underline{E}^I$ are the diagonal matrices which contain the Landau energies $\{E_p^L\}$ and internal energies $\{E_q^I\}$, respectively. The original problem of investigating the significance and effects of the couplings among the CM and electronic degrees of freedom is now reduced to the solution of the eigenvalue problem (11). $\underline{\mathcal{H}}_2$ contains the matrix elements of the coupling Hamiltonian $\mathcal{H}_2$. These elements can, via certain commutation relations, be transformed into pure dipole transition matrix elements of the CM as well as internal degrees of freedom (see ref. for details)

$$\left(\underline{\mathcal{H}}_2\right)_{\{p'q'\}\{pq\}} = ie\alpha\left(E_{p'}^L - E_p^L\right)\left\langle\Phi_{p'}^L\right|\mathbf{R}\left|\Phi_p^L\right\rangle\cdot\left[\mathbf{B}\times\left\langle\psi_{q'}\right|\mathbf{r}\left|\psi_q\right\rangle\right] \qquad (12)$$

Obviously the coupling matrix elements $(\underline{\mathcal{H}}_2)_{\{p'q'\}\{pq\}}$ vanish if the two Landau states $\Phi_{p'}^L$ and $\Phi_p^L$ possess the same energy, $E_{p'}^L = E_p^L$. Transitions therefore occur only between total states which involve collective-motion states of different energy. The expansion (10), with $\{\psi_q\}$ being the eigenfunctions of the electronic Hamiltonian $\mathcal{H}_3$, is particularly adequate if the coupling is sufficiently weak, which means that the series contains only a few dominant terms. Otherwise, i.e. in the case that many terms contribute significantly to the sum, the mixing of the CM and electronic motion is strong and may require for its proper and efficient description either a different choice for the functions $\{\psi_q\}$ or an alternative approach (see below) to the expansion (10). Let us evaluate the importance of the coupling terms for different physical situations at laboratory field strengths. We hereby concentrate on the $He^+$ ion moving in a magnetic field.
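Before turning to those estimates, a toy numerical version of the eigenvalue problem (11) may make the structure of the coupled equations concrete. The Landau and internal energies, the dipole matrix elements, and the coupling scale below are all fictitious stand-ins, not the real $He^+$ quantities.

```python
import numpy as np

n_L, n_I = 6, 4                          # numbers of Landau / internal states
omega_c = 0.1                            # fictitious Landau spacing
E_L = omega_c * (np.arange(n_L) + 0.5)
E_I = -0.5 / np.arange(1, n_I + 1)**2.0  # hydrogen-like internal ladder
kappa = 0.05                             # fictitious coupling strength

rng = np.random.default_rng(0)
dL = rng.normal(size=(n_L, n_L)); dL = dL + dL.T   # stand-in CM dipoles <R>
dI = rng.normal(size=(n_I, n_I)); dI = dI + dI.T   # stand-in internal dipoles <r>

N = n_L * n_I
H = np.diag((E_L[:, None] + E_I[None, :]).ravel()).astype(complex)
for p in range(n_L):
    for q in range(n_I):
        for pp in range(n_L):
            for qq in range(n_I):
                if p != pp:  # elements between degenerate Landau states vanish, eq. (12)
                    H[p * n_I + q, pp * n_I + qq] += (
                        1j * kappa * (E_L[p] - E_L[pp]) * dL[p, pp] * dI[q, qq])

evals, evecs = np.linalg.eigh(H)         # H is Hermitian by construction
print("lowest eigenvalues:", np.round(evals[:4], 4))
print("basis-state weights of the ground state:", np.round(np.abs(evecs[:, 0])**2, 3))
```

The imaginary prefactor mimics the $i(E_{p'}^L - E_p^L)$ structure of eq. (12), which makes the coupling matrix Hermitian even though the energy difference is antisymmetric in $p$ and $p'$.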
The simplest way to treat the couplings is to assume that the electronic wave functions are, to zeroth order, well described by the field-free hydrogenic wave functions and to take into account the electronic diamagnetic interaction via perturbation theory. This description is appropriate for laboratory field strengths for not too high electronic excitations, i.e. for a field strength of a few Tesla typically up to $n = 10 - 20$. The relevant indicator for the mixing of CM and electronic motions is the quotient of the coupling $\kappa$ and the energy spacing $\Delta$ of the corresponding diagonal matrix elements in $(\underline{E}^L + \underline{E}^I)$. For electronic states belonging to the same principal quantum number $n$ (the latter being not too large), i.e. for the case of dominant intramanifold coupling, we obtain

$$(\kappa/\Delta) \sim \frac{\sqrt{NB}}{M}\,n^2 \qquad (13)$$

where $N$ is the principal quantum number of the Landau orbitals. For $n = 10$ and a typical laboratory field strength $B = 2.35$ T, $N$ has to be of the order of magnitude $N \sim 10^8$ to make the coupling $\kappa$ as large as the energy spacing $\Delta$. The corresponding energy of the CM motion is some 10 eV. For the case of intermanifold coupling, i.e. the coupling of states belonging to different $n$-manifolds, we obtain

$$(\kappa/\Delta) \sim \frac{\sqrt{NB}}{M}\,B\,n^5 \qquad (14)$$

Choosing $B = 10$ T and $n = 10$, the requirement that $\kappa$ should be of the order of magnitude of $\Delta$ yields $N \sim 10^{10}$, i.e. a CM energy of the ion of a few keV. This means that at these energies the couplings become not only dominant for states within the same $n$-manifold but also important for states belonging to adjacent $n$-manifolds. The above considerations provide an idea of how important the coupling of the CM and electronic motion is for different values of the parameters $n$ (internal energy), $N$ (CM energy), and $B$ (field strength). In the case $\kappa/\Delta \gtrsim 1$ we encounter a strong mixing of the Landau orbitals $\Phi_p^L$ and the electronic functions $\psi_q$. The expansion (10) of an eigenfunction $\Psi$ of $\mathcal{H}$ then involves a large number of wave functions $\{\psi_q\}$, which means that the typical spatial extension of $\Psi$ with respect to the electronic coordinates is much larger than that of the individual functions $\psi_q$. A perturbation theoretical approach with respect to the coupling Hamiltonian $\mathcal{H}_2$ is not appropriate in this case. Instead, as already mentioned above, an efficient description of the wave function might either be achieved by a better (intuitive) choice for the functions $\{\psi_q\}$ in the expansion (10) or by pursuing a conceptually different approach (see below). On the other hand, it is interesting to consider the situation of increasing internal energy $\mathcal{H}_3$ for fixed field strength. It is well known that the fixed-nucleus one-electron atom undergoes a classical transition from regularity to chaos with increasing excitation energy, and the quantized atom shows the corresponding quantum signatures of chaos. Perturbation theory with respect to the magnetic interaction terms is only applicable for sufficiently small field strengths/internal energies, which corresponds, in a classical language, to the regime for which phase space is dominated by regular structures.
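A minimal sketch that reproduces these order-of-magnitude estimates in atomic units; the $He^+$ mass and the unit conversions (1 hartree = 27.211 eV, 1 a.u. of field = 2.35 × 10<sup>5</sup> T) are the only inputs:

```python
AU_B = 2.35e5        # Tesla per atomic unit of magnetic field strength
HARTREE_EV = 27.211
M = 4.0 * 1836.0     # approximate total mass of He+ in electron masses

def N_for_unit_ratio(B_tesla, n, inter=False):
    """Landau quantum number N at which kappa/Delta of eq. (13) or (14) reaches 1."""
    B = B_tesla / AU_B
    target = M / (B * n**5) if inter else M / n**2   # value sqrt(N B) must reach
    return target**2 / B

def cm_energy_eV(N, B_tesla, Q=1.0):
    """CM energy ~ N * omega_c = N * Q B / M (atomic units), converted to eV."""
    return N * Q * (B_tesla / AU_B) / M * HARTREE_EV

N1 = N_for_unit_ratio(2.35, 10)               # ~5e8
print(N1, cm_energy_eV(N1, 2.35))             # -> ~20 eV, i.e. "some 10 eV"
N2 = N_for_unit_ratio(10.0, 10, inter=True)   # ~7e10
print(N2, cm_energy_eV(N2, 10.0))             # -> ~11 keV, i.e. "a few keV"
```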
Treating the ion in the intermediate case of a mixed regular and chaotic classical phase space is a difficult task. However, for another limiting case, namely the situation of a completely chaotic phase space (Coulomb and diamagnetic interaction are of equal strength), a statistical approach to the matrix elements and spectrum of the Hamiltonian $\mathcal{H}_3$ seems appropriate. Such an approach has been developed in ref. and allows one to extract important statistical properties of the mixing of the electronic (due to $\mathcal{H}_3$) and CM (due to $\mathcal{H}_1$) wave functions. The Hamiltonian $\mathcal{H}_3$ is hereby represented by a random matrix ensemble, i.e. the Gaussian orthogonal ensemble (GOE), which is the appropriate semiclassical description of a completely chaotic system. The GOE provides the fluctuations of the chaotic levels. The mean level density (MLD) as a function of energy, field strength and, in particular, electronic angular momentum $L_z$ is a key ingredient for the specification of the random matrix ensemble and can in our case be obtained via the semiclassical Thomas-Fermi formula. It represents the density of irregular states. The size of the matrix elements of $\mathcal{H}_2$ can be determined from a semiclassical relation between the off-diagonal matrix elements of an operator and the Fourier transform of its classical autocorrelation function. For more details on the concrete appearance of the statistical-semiclassical model for the moving ion we refer the reader to ref. and report here only some major results. Of particular interest are the properties of the eigenvectors which are obtained through diagonalization of the total matrix Hamiltonian consisting of the parts $\mathcal{H}_1$, $\mathcal{H}_2$ and $\mathcal{H}_3$. For sufficiently low CM energies we encounter an exponential localization of the components of the eigenvectors around some maximum component. With increasing CM energy the localization length $\lambda$, which reflects the typical length of the mixing process in the space of the quantum states, increases. In ref. it was, however, shown that the eigenvectors of this statistical model possess a finite length $L_c$. At some critical CM energy the localization length therefore becomes larger than $L_c$, i.e. larger than the size of the system in the space of the quantum states, and we encounter a crossover from localization to delocalization. We conclude with the remark that there exists an intriguing analogy between a simplified version of the above statistical model and models for transport in disordered finite-size wires, whose localization lengths and related properties are known exactly.

### 3.3 Energy transfer processes for rapidly moving atomic ions

In the present subsection we focus on the situation of a rapidly moving, highly excited ion. The corresponding CM energy is much larger than the initial electronic binding energy. Since the coupling of the collective and electronic motion will be large, energy exchange processes between the CM and electronic motion are extremely relevant and provide some intriguing new phenomena. The study of both the classical as well as the quantum properties provides additional insight into our understanding of the effects of quantization on the classical energy flow.
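Returning briefly to the statistical picture of section 3.2, the localization/delocalization crossover can be caricatured with a banded random matrix; all parameters below are fictitious and serve only to show how the inverse participation ratio measures the mixing length.

```python
import numpy as np

rng = np.random.default_rng(1)
N, coupling, bandwidth = 400, 0.6, 8      # fictitious model parameters

# Diagonal: ladder of unperturbed (Landau + internal) energies; off-diagonal:
# GOE-like couplings within a band, mimicking the dipole selection structure.
H = np.diag(np.arange(N, dtype=float))
for i in range(N):
    for j in range(i + 1, min(i + bandwidth, N)):
        H[i, j] = H[j, i] = coupling * rng.normal()

evals, evecs = np.linalg.eigh(H)
ipr = np.sum(evecs**4, axis=0)            # inverse participation ratio per eigenvector
print("typical number of admixed basis states ~ %.1f" % (1.0 / np.median(ipr)))
# Increasing `coupling` (playing the role of the CM energy) pushes this mixing
# length up until it reaches the matrix size N: the crossover to delocalization.
```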
The classical energy exchange equation for the CM energy $E_{cm}$ and the internal energy $E_{int} = \frac{\mu}{2}\dot{\mathbf{r}}^2 + V$ reads as follows:

$$\frac{d}{dt}E_{cm} = -\frac{d}{dt}E_{int} = e\alpha\left(\mathbf{B}\times\dot{\mathbf{R}}\right)\cdot\dot{\mathbf{r}} \qquad (15)$$

This equation shows that a permanent flow of energy from the CM to the electronic degrees of freedom, and vice versa, has to be expected. Let us consider a typical classical trajectory corresponding to the above-described situation of a rapidly moving, highly excited Rydberg atom. After a transient time of bound oscillations in the internal motion (energy), a strong flow of energy from the CM to the internal motion takes place. The internal energy is hereby increased above the threshold for ionization, $E_{th} = 0$, and the ion eventually ionizes, i.e. the electron escapes in the direction parallel to the magnetic field. Note that the motion of the electron is confined in the direction perpendicular to the magnetic field. Figure 3 provides a prototype example of such an ionizing trajectory. Subfigures 3a and 3b illustrate the time dependences of the CM energy and the z-component of the internal relative coordinate, respectively. After the above-mentioned initial phase of oscillations there occurs, at approximately $T = 7\times 10^6$ a.u. ($1.7\times 10^{-10}$ s), a sudden loss of CM kinetic energy, simultaneously accompanied by an increase in the internal energy which causes the electron to move away from the nucleus in the positive z-direction. The transferred energy, which is in the case of figure 3 approximately $6\times 10^{-3}$ a.u. (0.2 eV), corresponds to a small fraction of the total initial CM energy, which is for our example about 12.2677 a.u. (333.8 eV). This self-ionization process is only possible due to the presence of the coupling term $\mathcal{H}_2$ in the Hamiltonian (8), which involves both the internal and CM degrees of freedom. The ionization time for an individual trajectory depends, apart from its intrinsic dynamics, on the field strength and in particular on the CM kinetic energy of the ion. In order to gain an idea of a statistical measure for the ionization process, it is instructive to consider, for an ensemble of trajectories, the fraction of ionized orbits as a function of time. The initial internal energy is chosen to correspond to a completely chaotic phase space of the $He^+$ ion if the nuclear mass were infinite. In figure 4 we have illustrated the fraction of ionized orbits as a function of time up to $T = 10^{10}$ a.u. for a series of different CM energies and for a strong laboratory field strength $B = 23.5$ T. For an initial CM energy of $E_{cm} = 0.053$ a.u., which corresponds to an initial CM velocity of $V_{cm} = 8.4\times 10^3\ \frac{\rm m}{\rm s}$, about 70% of the trajectories are ionized within a time of $T = 10^9$ a.u. ($2.4\times 10^{-8}$ s), which is a tenth of the integration time. In contrast to this, for $E_{cm} = 0.01$ a.u. only about 30% of the orbits are ionized within the total integration time of $T = 10^{10}$ a.u. ($2.4\times 10^{-7}$ s). The ionization process therefore depends very sensitively on the initial CM kinetic energy of the ion. The above investigations have shown the existence of the self-ionization process, through energy transfer from the CM to the electronic motion, for the classically moving ion in a magnetic field. For the limit of very highly excited electronic states and large CM energies it is expected that the above-described behaviour reflects the true dynamics of the ion.
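For readers who want to see this energy exchange directly, here is a minimal lab-frame sketch in atomic units: a classical electron and He nucleus in a uniform field $\mathbf{B} = B\hat{z}$, with illustrative initial conditions (a genuine self-ionization run needs the far longer integration times quoted above).

```python
import numpy as np
from scipy.integrate import solve_ivp

Z, m_e, M_n = 2.0, 1.0, 4.0 * 1836.0    # He nucleus charge and masses, a.u.
Bvec = np.array([0.0, 0.0, 1.0e-4])     # ~23.5 T expressed in atomic units

def deriv(t, y):
    r_e, v_e, r_n, v_n = y[0:3], y[3:6], y[6:9], y[9:12]
    d = r_e - r_n
    f_c = -Z * d / np.linalg.norm(d)**3             # Coulomb force on the electron
    a_e = (f_c - np.cross(v_e, Bvec)) / m_e         # electron charge -1
    a_n = (-f_c + Z * np.cross(v_n, Bvec)) / M_n    # nuclear charge +Z
    return np.concatenate([v_e, a_e, v_n, a_n])

# Electron on a circular Rydberg orbit, ion CM drifting across the field:
r0 = 625.0
y0 = np.concatenate([[r0, 0, 0], [0, np.sqrt(Z / r0), 0], [0, 0, 0], [0.004, 0, 0]])
sol = solve_ivp(deriv, (0.0, 1.0e6), y0, max_step=20.0, rtol=1e-9)

def energies(y):
    r_e, v_e, r_n, v_n = y[0:3], y[3:6], y[6:9], y[9:12]
    M = m_e + M_n
    v_cm = (m_e * v_e + M_n * v_n) / M
    v_rel = v_e - v_n
    mu = m_e * M_n / M
    E_cm = 0.5 * M * v_cm @ v_cm
    E_int = 0.5 * mu * v_rel @ v_rel - Z / np.linalg.norm(r_e - r_n)
    return E_cm, E_int

print("initial (E_cm, E_int):", energies(sol.y[:, 0]))
print("final   (E_cm, E_int):", energies(sol.y[:, -1]))
# The sum E_cm + E_int stays fixed (magnetic forces do no work); the split
# between the two shifts, which is exactly the content of eq. (15).
```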
The natural question now arises how quantization changes or enriches this picture for the typical energies accessible experimentally. It is well known that quantization can alter the effects observed in classical dynamics (see ref. and references therein). Very recently an approach which consists of a combination of semiclassical and wave packet propagation techniques has been developed which is appropriate for the description of the quantum dynamics of the highly excited and rapidly moving quantum ion in a magnetic field. The results of the corresponding investigations show that the quantum self-ionization obeys a time scale which is orders of magnitude larger than that of the corresponding classical process. The typical ionization times are, for both the classical as well as the quantum process, much smaller than the lifetimes of the highly excited Rydberg states. In addition, the ionization signal is seemingly affected by quantum coherence phenomena which yield strong fluctuations in the time dependence of the ionization rate. More studies have to be performed in order to elucidate the dynamics in the semiclassical and deep quantum regimes, both theoretically as well as experimentally. Let us comment on a very recent application of the effects of the coupling and mixing of the collective and internal motion in exotic atoms. It was shown that one can stimulate, by using present-day laboratory magnetic fields, transitions between the $lm$ sub-levels of fast $\mu He^+$ ions forming in muon catalyzed fusion. This gives a possibility to drive the population of the $lm$ sub-levels by applying a field of a few Tesla, which affects the reactivation rate of the fusion process and is especially important for its $K\alpha$ X-ray production. To conclude, we have discussed a number of intriguing phenomena for two-body atoms in magnetic fields which have their origin in the nonseparability of the CM and internal motion causing an interaction of the collective and internal motion. It has to be expected that this interaction will induce even more phenomena for multi-electron atoms or molecules. For the case of heteronuclear molecules the relevant coupling Hamiltonian contains both the vibrational and rotational degrees of freedom, and new vibrational or rotational structures as well as dynamical processes might therefore arise once the coupling becomes significant, i.e. comparable to the spacing of the vibrational or rotational energy levels.

## Figure captions

Figure 1: A two-dimensional intersection of the potential $\mathcal{V}$ in the plane perpendicular to the magnetic field.

Figure 2: Distribution over energy in the outer well after the preparation is completed. In the calculation a magnetic field strength $B = 14$ T was used. The ionization threshold is shown by a vertical dashed line. The inset shows the spreading of the trajectories over the saddle point into the outer well during the intermediate time step of a constant electric field $E_c$.

Figure 3: (a) The CM energy as a function of time. (b) The z-component of the internal relative coordinate as a function of time. The total and initial internal energy of the ionizing trajectory are $E = 333.68$ eV and $E_{int} = -8.16\times 10^{-2}$ eV, respectively. The field strength is $B = 23.5$ T.

Figure 4: The ionized fraction for an ensemble of 250 trajectories as a function of time. From top to bottom the CM energies belonging to the ionization curves are $E_{CM} = 1.45, 0.63, 0.47, 0.34, 0.27$ eV, respectively.
The initial internal energy is always $E_{int} = -9.25\times 10^{-3}$ eV. The field strength is $B = 23.5$ T.
no-problem/9909/astro-ph9909274.html
ar5iv
text
# X-ray Properties of the Abell 644 Cluster of Galaxies

## 1 Introduction

Cooling flows occur in clusters of galaxies where the age of the cluster has exceeded the cooling time scale of the intracluster gas. In the more luminous clusters, this cooling time is comparable to the Hubble time (and thus to cluster ages). Thus we should expect to see these flows, provided the clusters have remained in hydrostatic equilibrium since formation. This result has been validated by studies with Einstein, Ginga, and ROSAT which indicate that cooling flows are a common and long-lived phenomenon among nearby clusters (Edge et al. 1992; Peres et al. 1998). Abell 644 (A644), with a bolometric luminosity exceeding $10^{45}$ ergs s<sup>-1</sup> and lying at a redshift of $z = 0.0704$, is one of the brighter clusters in the local universe. However, A644 is only a richness class 0 cluster (Abell, Corwin, & Olowin 1989), and thus has a higher than average X-ray luminosity for its richness class (typically $L_X = 10^{42-44}$ ergs s<sup>-1</sup>; Abramopoulos & Ku 1983; Briel & Henry 1993). This cluster has been previously observed in X-rays with the Einstein, Ginga, and ROSAT observatories. The Einstein observations of A644 implied a significant cooling flow, with an accretion rate of 326 $M_{\odot}$ yr<sup>-1</sup> (Edge et al. 1992), and an average gas temperature of 7.2 keV (David et al. 1993). More recent analyses of the ROSAT PSPC and HRI data imply slightly lower values of $\sim$200 $M_{\odot}$ yr<sup>-1</sup> and 6.6 keV (Peres et al. 1998; White, Jones, & Forman 1997). We present new ASCA X-ray spectral observations of A644 (§ 2). The use of ASCA spectra to determine the spatial variation of the spectral properties of the X-ray emitting gas in A644 requires that the effects of the energy-dependent point-spread function (PSF) be corrected. We also analyze archival ROSAT PSPC observations of the X-ray image and spectrum of A644. These spectra allow us to constrain global parameters (§ 3) as well as temperature variations and abundance gradients in the intracluster gas (§ 4). We use the ASCA-derived temperature gradients and ROSAT-derived X-ray surface brightness to determine the radial variation of the total gravitational mass and gas mass in the cluster (§ 6). We test the hydrostatic assumption by searching for evidence of asymmetries in the temperature distribution (§ 5), which would indicate recent dynamical activity, such as a subcluster merger. We also estimate the cooling rate (§ 7). We summarize our results and discuss their implications in § 8. Throughout this paper, we assume a Hubble constant $H_o = 50$ km s<sup>-1</sup> Mpc<sup>-1</sup> and a cosmological deceleration parameter of $q_o = 0.5$. At a redshift of $z = 0.0704$, the angular diameter distance to the cluster is 375 Mpc, and $1'$ corresponds to 109 kpc. Unless otherwise stated, all errors are 90% confidence intervals for one parameter of interest.

## 2 Observations and Data Selection

A644 was observed with ASCA on 1995 April 18-20 for a total of 66.3 ksec with the two Gas Imaging Spectrometer (GIS) detectors and 58.3 ksec with the two Solid-State Imaging Spectrometer (SIS) detectors. The SIS observations were taken in 1-CCD faint mode, with the cluster center placed near the center of chip 1 of SIS0 and chip 3 of SIS1. We applied standard cleaning procedures (Day et al. 1995), and selected data with a minimum cutoff rigidity of 6 GeV/c, a minimum elevation angle of 10°, and a minimum angle from the sunlit Earth of 25°.
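As a quick check of the adopted scale, a minimal sketch using the Mattig relation for a $q_o = 1/2$ matter-dominated universe (the closed form is standard, but treat the exact cosmological conventions here as an assumption):

```python
import numpy as np

c, H0, z = 2.998e5, 50.0, 0.0704       # km/s, km/s/Mpc, cluster redshift

# Mattig relation for q0 = 1/2: comoving distance, then angular diameter distance.
D_C = (2.0 * c / H0) * (1.0 - 1.0 / np.sqrt(1.0 + z))   # Mpc
d_A = D_C / (1.0 + z)                                   # Mpc
kpc_per_arcmin = d_A * 1.0e3 * np.pi / (180.0 * 60.0)
print("d_A = %.0f Mpc, 1' = %.0f kpc" % (d_A, kpc_per_arcmin))   # ~375 Mpc, ~109 kpc
```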
The screened exposure times were 49.5 ksec for the SIS detectors and 57.6 ksec for the GIS detectors. We also retrieved the ROSAT PSPC observation of A644 from the archive (RP800379N00; PI: Böhringer). A644 was observed for 10.2 ksec on 1993 April 28-29. The PSPC data were filtered for periods of high background and other problems, and corrected for non-X-ray background, vignetting, and exposure using the computer programs developed by Snowden (Plucinsky et al. 1993; Snowden 1995). After filtering, the live exposure was 8.3 ksec. The average Master Veto Rate for the filtered data was 83.7. The analysis of the global cluster spectrum and the ROSAT PSPC spectrum was done primarily with the XSPEC package (version 10.0), and the analysis of the PSPC image was done using the PROS package within IRAF. The deconvolution of the spatial variation of the spectrum from the energy-dependent Point Spread Function (PSF) of ASCA was performed using the algorithms of Markevitch (Markevitch 1996; Markevitch et al. 1998).

## 3 Global X-Ray Properties

Figure 8 shows a contour plot of the ROSAT PSPC X-ray image, superposed on an optical image from the Digital Sky Survey (Lasker et al. 1990). The image was corrected for background, vignetting, and exposure, and was smoothed with an adaptive kernel which gave a minimum signal to noise ratio of 5 per smoothing beam (Huang & Sarazin 1996). The cluster is fairly regular, but shows an asymmetric extension to the SSW in both the X-ray and the optical images; a similar extension was seen in the Einstein image (Mohr et al. 1995). The cluster emission is strongly peaked on the position of the cD galaxy, and the elongated X-ray structure is aligned with the cD major axis, an effect often seen in richer clusters. The global X-ray spectrum of the cluster was determined by accumulating the spectra from the ROSAT PSPC and the ASCA GIS instruments in a circular region with a radius of 10′. Beyond this radius, the X-ray surface brightness rapidly diminishes. We excluded the SIS instruments because the field of view (FOV) is only 11′ square in 1 CCD mode, and does not cover the entire region used for the global cluster spectrum. However, including the SIS instruments does not significantly change the global results. The background for the ASCA observation was obtained from long exposure background fields at similar galactic latitude, and was extracted and cleaned identically to the data. The energy range adopted for the GIS was 0.6-10.0 keV. The background for the ROSAT PSPC spectrum was extracted from the same observation, using an annulus of 34′-44′ and masking out any point sources. The X-ray background was corrected for vignetting. Only energies from 0.24-2.5 keV were used for the fit to the PSPC data. All of the channels used in the spectral fitting had more than 20 counts, so that the $`\chi ^2`$ distribution should be applicable. Initially we fit both the ASCA and ROSAT data to a single temperature Raymond-Smith optically thin thermal emission model (Raymond & Smith 1977), with the photoelectric absorption component (Morrison & McCammon 1984) fixed at the Galactic neutral hydrogen column density, $`N_H^{gal}=8.45\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992). We will refer to this model as the Single Temperature Model. A MEKAL Mewe-Kaastra thin-thermal plasma model (Mewe, Gronenschild, & van den Oord 1985; Kaastra 1992) was fit as an alternative, yielding nearly identical results.
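To make the fitting step concrete, the sketch below fits a toy single-temperature spectrum by $`\chi ^2`$ minimization. It is only schematic: the exponential continuum stands in for a Raymond-Smith model folded through the detector response (which the actual analysis handles inside XSPEC), and all numbers (band, normalization, temperature) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def model_counts(E, kT, norm):
    # Toy thermal continuum ~ exp(-E/kT); a stand-in for a Raymond-Smith
    # model folded through the instrument response.
    return norm * np.exp(-E / kT)

def chi2(params, E, counts, err):
    kT, norm = params
    return np.sum(((counts - model_counts(E, kT, norm)) / err) ** 2)

rng = np.random.default_rng(0)
E = np.linspace(0.6, 10.0, 60)            # GIS band, keV
counts = rng.poisson(model_counts(E, 7.6, 400.0)).astype(float)
err = np.sqrt(np.maximum(counts, 1.0))    # Gaussian errors; >20 cts/bin

res = minimize(chi2, x0=[5.0, 300.0], args=(E, counts, err),
               method="Nelder-Mead")
kT_fit, norm_fit = res.x
print(f"kT = {kT_fit:.2f} keV, chi2/dof = {res.fun / (len(E) - 2):.2f}")
```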
To begin, we determined the temperature and overall heavy element abundance by fitting the ASCA and ROSAT spectra separately. The relative abundance ratios of heavy elements were fixed at the solar values given by Anders & Grevesse (1989). The result of the single temperature fit to the ASCA GIS detectors is shown in the first row of Table 8 in the columns labeled “Single Temperature Model.” This is an acceptable fit, and gives reasonably well-determined values for the temperature and heavy element abundance. The result of the single temperature fit to the ROSAT PSPC spectrum is shown in the second row of Table 8. This fit is poorly constrained by the data. In addition, it gives a value for the abundance which is inconsistent with the value given by ASCA, and much higher than values found for similar clusters. The third row of Table 8 gives the result of a joint ASCA and ROSAT PSPC fit, and is strongly skewed towards the ASCA results because the ASCA fit was more strongly constrained than that of the ROSAT data. Given the nearly disjoint energy ranges of ASCA and ROSAT, the conflict between the two spectral fits and the poor fit to the ROSAT spectrum suggest that the Single Temperature Model lacks a soft X-ray emission or absorption component which is present in the cluster. We consider two possibilities: an excess soft X-ray absorption or a cooling flow emission component. First, we allowed the soft X-ray absorption to vary. This improved the fit somewhat, particularly for the ROSAT spectrum. In Table 8, we show the results of including an excess absorber with a column density of $`\mathrm{\Delta }N_H`$ and a covering factor of unity at the redshift of the cluster (Excess Absorption Model). The combined fit to the data gives an overall temperature of 7.59 keV, an overall heavy element abundance of 0.30 relative to solar, and an intrinsic absorption column of $`\mathrm{\Delta }N_H<1.26\times 10^{20}`$ cm<sup>-2</sup>. The $`\chi ^2`$ for the fit with excess absorption is improved by 2.4 for one extra fitting parameter, which is significant at the 85$`\%`$ level for the f-test. However, the excess column is only about 15$`\%`$ of the nominal Galactic column of $`N_H^{gal}=8.45\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992), and thus could easily result from interpolating the H I data to the position of the cluster. Moreover, for Galactic lines of sight with columns $`N_H^{gal}\gtrsim 5\times 10^{20}`$ cm<sup>-2</sup>, such as that of A644, there is a significant molecular component which increases the X-ray absorbing column above that of H I (e.g., Arabadjis & Bregman 1999). Thus, it seems likely that the small excess absorption required to fit the X-ray spectrum is Galactic in origin. However, the fact that the best-fit PSPC temperature is still completely inconsistent with the ASCA temperature indicates that a soft emission component is still missing. Second, we added a cooling flow spectrum to the model, assuming the gas cooled isobarically, subject to its own radiation, from the ambient temperature in the central region to a very low temperature. The cooling gas had the same abundances as the ambient gas. The results are listed in the third column of Table 8 (Cooling Flow Model). The global fit to the ASCA GIS is significantly better. The ROSAT temperature, while barely bounded, is now consistent with ASCA, and the abundances match. The fit to the ROSAT PSPC spectra is almost unchanged, indicating that these spectra do not require a cooling flow component.
We derive a cooling rate of $`214_{-91}^{+100}`$ $`M_{\odot }`$ yr<sup>-1</sup>. In the combined fit, the addition of the cooling flow (with one more fitting parameter) decreases $`\chi ^2`$ by 16.9, which is a significant decrease at the 99.9$`\%`$ confidence level for the f-test. Thus, there is evidence for a cooling flow, at roughly the same rate as previous estimates. The flux and luminosity derived from ASCA in the 2.0-10 keV energy range are $`5.45\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> and $`1.24\times 10^{45}`$ ergs s<sup>-1</sup>, respectively, for a circular region with a radius of 10′ centered on A644. Previous measurements with the Einstein MPC gave a cluster temperature of $`7.2_{-1.2}^{+1.8}`$ keV, and a flux and luminosity of $`4.13\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> and $`9.55\times 10^{44}`$ ergs s<sup>-1</sup>, using the 2-10 keV energy band and a circular aperture of 45′ (David et al. 1993). The previous temperature, flux, and luminosity results compare quite well with our global values. The global fits show that ASCA constrains both the thermal plasma temperature and the heavy element abundance better than ROSAT because of ASCA’s larger energy range and superior spectral resolution. Likewise, ROSAT determines the absorbing column and cooling rates better than ASCA, because of its lower energy range.

## 4 Radial Variation of the X-Ray Spectrum

### 4.1 Deconvolved ASCA Spectra

We determined the radial variation of the X-ray spectrum by accumulating ASCA spectra in 3 concentric annular regions centered on the cD galaxy with radii of 0′-1.5′, 1.5′-5′, and 5′-10′. In Figure 8, the annuli are shown superposed on a contour plot of the ROSAT PSPC image. All of the annuli lie within the GIS FOV, while only the inner two regions, 0′-1.5′ and 1.5′-5′, fall fully within the SIS FOV. Thus we exclude the SIS from the analysis of the outer region, which only partially overlaps with the SIS detector. The annuli were chosen such that each was larger than the ASCA XRT+GIS PSF. We use methods developed by Markevitch (1996) to deconvolve the emission spectrum of the gas from the energy-dependent PSF of ASCA. Because of its higher spatial resolution, we use the ROSAT PSPC image (Figure 8) to constrain the relative emission measure distribution between the regions. However, because of the possible contributions of a cooling flow or excess absorption in the central region, we do not fix the relative emission measure of this region from the ROSAT image, but allow it to vary. Due to the poor calibration of the ASCA telescope at low energies, we restrict our analysis to energies in the range 1.5-10 keV, excluding the gold edge from 2.0-2.5 keV. In addition to statistical errors, we include estimates of the systematic uncertainties of the background (20$`\%`$, 1 $`\sigma `$), the PSF model’s core (5$`\%`$) and wings (15$`\%`$), and the effective area of ASCA (5$`\%`$), as suggested by Markevitch (1996). We also include the effect of the uncertainty in the offset between the ASCA and ROSAT images. The spectra from each annulus and instrument were binned to ensure that there were at least 20 counts per energy bin. As evidenced by Figure 8, there is a moderately bright point source in the ROSAT PSPC image roughly 11′ east of Abell 644. To ensure that this source does not contaminate the cluster emission in the outer regions, given ASCA’s broad PSF, we made an energy redistribution matrix which included this source as a separate region.
For this source to be a true contaminant, it would have to contribute a significant fraction of the flux received from the other regions. Our analysis indicates that this source contributes less than 1$`\%`$ of the flux in any of the regions used to accumulate the cluster spectrum. The results of fitting the annular spectra are listed in Table 8. The values of $`\chi ^2`$ given in the first row of the Table are for the simultaneous fit to all of the annuli. The ASCA temperatures are also plotted as a function of radius in Figure 8. The minimum values of $`\chi ^2`$ for all of the deconvolution fits of the ASCA spectra are unrealistically low because of the inclusion of systematic errors (Markevitch 1996), which we have estimated pessimistically. For the Single Temperature Model, the temperature in the central region is lower than that in the middle annulus, leading to a non-monotonic radial temperature variation. This low central temperature could be due to the reduction of the mean temperature in the central region by a cooling flow. We also consider the possibility that there is excess absorption toward the cooling flow. The inclusion of excess absorption alone in the spectrum of the central 0′-1.5′ region improved the fits somewhat, but the excess column is consistent with zero at the 90$`\%`$ confidence level. The best-fit value for the excess absorption of $`\mathrm{\Delta }N_H=1.32\times 10^{20}`$ cm<sup>-2</sup> is again consistent with interpolation errors in the Galactic absorption value or with the additional molecular component of the absorbing column. Given the previous evidence from the X-ray surface brightness and spectra for a cooling flow at the center of A644, we included a cooling flow in the spectrum of the central 0′-1.5′ region in addition to the extra, intrinsic absorption. As in § 3, the gas was assumed to cool isobarically, subject to its own radiation, from the ambient temperature in that region to a very low temperature, and had the same abundances as the ambient gas. The results of the Cooling Flow Model fits are also listed in Table 8. The inclusion of the central cooling flow increased the ambient gas temperature in the central region, but had no significant effect on the temperatures in the outer two annuli. The values of $`\chi ^2`$ were not reduced significantly from those of the Excess Absorption Model. The cooling rate was poorly determined. We used both fixed ($`Z=0.30Z_{\odot }`$) and variable abundance models, and found that they are not statistically different. The results of the variable abundance model are given in Figure 8 and Table 8. The middle annulus has a lower abundance, but it is possible that this is an artifact produced by the energy-dependent PSF of ASCA. In any case, the abundance is constant within the errors, even with the lower abundance in the middle annulus.

### 4.2 ROSAT PSPC Annular Spectra

The ROSAT PSPC spectra were accumulated in the same annuli as were used for the ASCA spectra. The results of fits to these spectra are shown in Table 8. As noted in the discussion of the global spectrum (§ 3), the PSPC spectra can strongly constrain the absorption or the cooling rate, but are relatively insensitive to the ambient gas temperature or the iron abundance. The central temperatures in the Single Temperature Model or the model with Excess Absorption disagreed strongly with the ASCA central temperature. For the outer two annuli, the ROSAT and ASCA temperatures were in reasonable agreement.
The iron abundances, on the other hand, were rather high and almost unconstrained. Allowing for excess absorption did not bring the ROSAT and ASCA temperatures into agreement. The excess absorption did not increase toward the center of the cluster, as has been found in other cooling flow clusters. This suggests that the excess absorption simply indicates that the Galactic absorption toward this cluster is underestimated by the interpolation of the H I measurements. Finally, we included a cooling flow component in the spectra of the two inner regions; the results appear in the rightmost columns of Table 8. This model improved the fit of the innermost region as well as the intrinsic absorption model did, but did not significantly change the fit of the 1.5′-5′ region. The temperatures, although very poorly constrained, are now consistent with the ASCA values. We derive a cooling rate of 146 $`(<380)`$ $`M_{\odot }`$ yr<sup>-1</sup> within the inner 0′-1.5′ region and 93 $`(<392)`$ $`M_{\odot }`$ yr<sup>-1</sup> for the 1.5′-5′ region.

## 5 Azimuthal Variations in the X-ray Spectrum

In order to search for azimuthal variations in the X-ray spectrum of A644, we further divided the second and third annular regions into four and two sectors, respectively. The geometry of the sectors is shown in Figure 8. The spectra were simultaneously fit to Single Temperature models, with the abundances allowed to vary (fixing the abundance at 0.30 yielded results which were not statistically different from the variable abundance fits). The results of the variable abundance model are shown in Table 8. The temperatures and abundances are also plotted in Figures 8 and 8, respectively. We also did fits with central excess absorption and a central cooling flow, but these did not affect the parameters in the outer sectors. Table 8 and Figure 8 show that the high temperature found in the second annulus in the previous analysis (Table 8 and Figure 8) is the result of a very hot region to the west of the cluster center. The temperatures are inconsistent with an isothermal or azimuthally symmetric distribution at the $`>99.9\%`$ level. However, the temperatures are roughly consistent with an isothermal distribution if we exclude regions 3 and 5. On the other hand, the abundances are poorly determined, and there is no significant evidence for any spatial variation. There is no point source in the ROSAT PSPC or HRI images in the high temperature western region, so it is unlikely that this spectral component is due to a background AGN or some other source. We also examined the ASCA GIS and SIS images of this region in the 7-10 keV band, and found no evidence of a point source. This test was done to check whether the high temperature might be due to a strongly absorbed or highly variable AGN, which might have been unobservable by ROSAT. The hard X-ray emission which gives the western region of the cluster its very high temperature appears to be extended in the hard band GIS images.

## 6 X-ray Surface Brightness Profile and Masses

The ROSAT PSPC counts were extracted in annular regions of increasing size (1′-5′) such that each region had an adequate number of counts. Bright point sources were excluded from these regions, and the X-ray background was subtracted from an annular region where cluster emission was negligible (34′-44′).
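As a concrete illustration of this extraction step, the sketch below bins a counts image into annuli about a chosen center and converts to surface brightness. The image, center, bin edges, and exposure are hypothetical placeholders, and in practice point sources would first be masked.

```python
import numpy as np

def radial_profile(image, x0, y0, r_edges, exposure):
    # Mean surface brightness (counts / pixel / s) in concentric annuli
    # about (x0, y0); r_edges are annulus boundaries in pixels.
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0)
    sb = []
    for rin, rout in zip(r_edges[:-1], r_edges[1:]):
        mask = (r >= rin) & (r < rout)
        sb.append(image[mask].sum() / mask.sum() / exposure)
    return np.array(sb)

# Illustrative use: a flat Poisson field, 8.3 ks exposure, with the
# background estimated from the outermost annulus and subtracted.
rng = np.random.default_rng(1)
img = rng.poisson(2.0, size=(512, 512)).astype(float)
sb = radial_profile(img, 255.5, 255.5, np.arange(0, 240, 20), 8300.0)
print(sb - sb[-1])
```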
We used the results of the Cooling Flow Model fits to the global PSPC spectrum (Table 8) to convert the surface brightness into a physical flux. We also corrected the surface brightness for the nominal Galactic absorbing column of $`N_H^{gal}=8.45\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992). The resulting X-ray surface brightness is shown in Figure 8. The ROSAT PSPC X-ray surface brightness was de-projected to determine the X-ray emissivity as a function of radius, on the assumption that the cluster is spherically symmetric (see below). The gas density $`\rho `$ was determined from the X-ray emissivity and the ASCA temperature profile obtained from the annular regions for the Single Temperature model (Figure 8). Because the ASCA temperatures were only determined in three radial bins, we linearly interpolated the temperature to determine its values at all radii. The gas density was integrated over the interior volume to give the gas mass as a function of radius, $`M_{gas}(r)`$. The assumption of hydrostatic equilibrium allows the gravitational mass $`M_{tot}`$ within a radius $`r`$ to be determined from

$$M_{tot}(<r)=-\frac{rkT(r)}{\mu m_HG}\left[\frac{d\mathrm{ln}\rho (r)}{d\mathrm{ln}r}+\frac{d\mathrm{ln}T(r)}{d\mathrm{ln}r}\right],$$

where $`T`$ is the gas temperature, and $`\mu `$ is the mean particle mass in units of the mass of hydrogen, $`m_H`$. Because of the enhanced X-ray surface brightness to the SSW (Figure 8) and the very high temperature in region 3 to the west (Figure 8), it seems unlikely that the cluster is well-relaxed, at least in these areas. On the other hand, the northern side of the cluster appears more regular. Thus, we only use sectors 1, 4, 5, and 7 (see Figure 8) to determine the gas density in the de-projection. We used this modified gas density to calculate the gas and gravitational masses. Figure 8 shows the accumulated gravitational mass (filled squares) and gas mass (crosses) profiles. The statistical errors for these masses cannot be easily assigned, since the errors in the emissivities at different radii are correlated by the de-projection. Thus, errors were generated by a Monte Carlo technique (Arnaud 1988; Irwin & Sarazin 1995). We randomly selected X-ray count values for each radial bin from a Poisson distribution with a mean value equal to the number of counts in the actual data. We also generated a simulated temperature profile from randomly selected temperature values for each radial bin from a Gaussian distribution with a mean value equal to the actual temperature derived in that radial bin. We used these two profiles to simulate electron density, gas mass, and gravitational mass profiles. We ran this Monte Carlo simulation 1000 times, sorted the values, and chose the 50th and 950th values to define the 90$`\%`$ confidence region. Figure 8 shows that the gas mass (the lower profile) accounts for 15$`\%`$ of the gravitational mass (the upper profile) at a radius of 300 kpc, increasing to 20$`\%`$ at 1.2 Mpc.

## 7 Cooling Mass Deposition Rate

The cooling radius, $`r_c`$, is defined as the radius at which the integrated cooling time is equal to the age of the cluster, for which we assume an age of 10<sup>10</sup> yr. For every radius at which the de-projection gave a value of the gas density, we calculated the integrated cooling time of the gas. The heavy element abundance was fixed at 30$`\%`$ of solar.
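A minimal sketch of the cooling-radius estimate follows. The beta-model density profile, the interpolated temperatures, and the pure-bremsstrahlung cooling function $`\mathrm{\Lambda }(T)T^{1/2}`$ are all assumptions for illustration; the real calculation uses the de-projected densities and a Raymond-Smith emissivity at 0.3 solar abundance.

```python
import numpy as np

# Assumed de-projected profiles: beta-model electron density (cm^-3)
# and temperatures interpolated between the ASCA annuli (keV).
r_kpc = np.linspace(20.0, 1200.0, 120)
n_e = 4.0e-2 * (1.0 + (r_kpc / 150.0) ** 2) ** -0.9
kT = np.interp(r_kpc, [0.0, 160.0, 550.0, 1100.0], [7.5, 7.5, 9.0, 7.0])

K_B = 1.38e-16                            # erg / K
kT_erg = kT * 1.602e-9                    # keV -> erg
Lam = 2.4e-27 * np.sqrt(kT_erg / K_B)     # bremsstrahlung Lambda(T)

# Isobaric cooling time: t ~ (5/2) n_tot kT / (n_e n_H Lambda),
# with n_tot ~ 1.9 n_e and n_H ~ n_e / 1.2 for an ionized plasma.
t_cool_yr = 2.5 * 1.9 * 1.2 * kT_erg / (n_e * Lam) / 3.156e7

r_cool = np.interp(1.0e10, t_cool_yr, r_kpc)   # t_cool rises with radius
print(f"cooling radius ~ {r_cool:.0f} kpc")
```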
From the derived density profile, we obtained a cooling radius of $`r_c=261_{-64}^{+376}`$ kpc. We determined the cooling rate profile using the technique outlined in Arnaud (1988), which assumes a steady-state cooling flow. Using the de-projected X-ray emissivity, we determine the X-ray luminosity output from each de-projected spherical shell. The luminosity from a given shell is assumed to result from the combination of gas cooling out of the flow in that shell, and gas flowing into that shell but not cooling below X-ray emitting temperatures. The rate at which mass drops out of the flow in shell $`i`$, $`\mathrm{\Delta }\dot{M}_i`$, is taken to be

$$\mathrm{\Delta }\dot{M}_i=\frac{\mathrm{\Delta }L_i-\dot{M}_{i-1}(\mathrm{\Delta }H_i+\mathrm{\Delta }\varphi _i)}{H_i+f_i\mathrm{\Delta }\varphi _i},(2)$$

where $`\mathrm{\Delta }L_i`$ is the change in bolometric luminosity across shell $`i`$, $`\dot{M}_{i-1}`$ is the rate at which mass passes through shell $`i`$ but does not cool out of the flow, $`H_i=5kT(r)/2\mu m_H`$ is the enthalpy per unit mass in shell $`i`$, $`\mathrm{\Delta }H_i`$ is the change in $`H_i`$ across that shell, $`\mathrm{\Delta }\varphi _i`$ is the change in potential across shell $`i`$, and $`f_i`$ is a geometrical factor to allow for the fact that mass cools out of the flow in a volume-averaged way. We determine the gravitational potential assuming hydrostatic equilibrium, as discussed in § 6. This gives

$$\varphi (r)-\varphi (0)=-\int _0^r\frac{kT(r)}{\mu m_H}\left[d\mathrm{ln}\rho (r)+d\mathrm{ln}T(r)\right],(3)$$

where $`\varphi (0)`$ is the potential at the center of the cluster. The total integrated cooling rate at a distance $`r_n`$ is simply the sum of the mass dropping out of the flow in all the shells at radii less than $`r_n`$:

$$\dot{M}(<r_n)=\sum _{i=1}^n\mathrm{\Delta }\dot{M}(r_i).(4)$$

Figure 8 shows the resulting integrated cooling rate profile, along with the 5th and 95th percentile values from the 1000 Monte Carlo simulations. As noted above, this technique assumes a steady-state cooling flow, and thus may only apply out to $`r_c=261`$ kpc. The total cooling rate out to this radius is 393 $`M_{\odot }`$ yr<sup>-1</sup>. This is consistent with, but somewhat larger than, the values derived spectroscopically.

## 8 Conclusions

The global X-ray spectrum of A644 is well fit by a model with a gas temperature of $`7.69_{-0.20}^{+0.22}`$ keV and an abundance of $`0.30\pm 0.03`$ $`Z_{\odot }`$. The global spectrum suggests that there is a cooling flow in the cluster with a cooling rate of $`214_{-91}^{+100}`$ $`M_{\odot }`$ yr<sup>-1</sup>. The spectrum also suggests an absorbing column which is slightly larger than the interpolated Galactic H I column, but this may be due to small scale variations in the H I column or to molecular material. The excess absorption is not concentrated toward the cluster center, so it is probably not intrinsic to the cooling flow or the cluster. We also determined the spatial variation of the temperature, iron abundance, and cooling rate. There is no significant evidence for any spatial variation in the iron abundance. We find evidence for a region of extremely hot gas (10-25 keV) located 1.5′-5′ to the west of the cluster center. This region is perpendicular to the enhanced emission in the ROSAT PSPC X-ray image. We suggest that the cluster is undergoing or has recently undergone a merger. The combination of a moderate cooling flow and evidence for a merger makes this cluster an interesting case for testing the disruption of cooling flows by mergers.
Data with better spatial resolution (e.g., from Chandra or XMM) are needed to confirm the merger in A644 and to determine its geometry. We determined the X-ray surface brightness profile excluding the hot region to the west and the extended region to the SSW. We used the X-ray surface brightness and temperature profiles to determine the gas and gravitational masses as a function of radius. The total gravitating mass within 1.2 Mpc is $`6.2\times 10^{14}M_{\odot }`$, of which 20$`\%`$ is in the form of hot gas. Since we have not included the mass in individual galaxies, this result gives a lower limit on the baryonic fraction within the cluster, which is well above the upper limit from cosmic nucleosynthesis of 0.06 for an $`\mathrm{\Omega }=1`$ universe (e.g., Walker et al. 1991). The baryonic fraction of A644 is consistent with those of other clusters and groups (e.g., Allen & Fabian 1994; David et al. 1994). This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. F. E. B. and C. L. S. were supported in part by NASA ASCA grants NAG 5-4516 and NAG 5-8390. C. L. S. was also supported in part by NASA Astrophysical Theory Program grant NAG 5-3057. We would like to thank Maxim Markevitch for many helpful comments on the analysis of ASCA data. F. E. B. would like to thank Jimmy Irwin for his many helpful comments.
no-problem/9909/astro-ph9909260.html
ar5iv
text
# The $`HST`$ Key Project on the Extragalactic Distance Scale XXVIII. Combining the Constraints on the Hubble Constant<sup>1</sup>

<sup>1</sup>Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA Contract No. NAS 5-26555.

## 1 Introduction

The goal of the Hubble Space Telescope (HST) Key Project on the Extragalactic Distance Scale was announced in 1984 by the newly formed Space Telescope Science Institute to be the determination of the Hubble Constant to an accuracy $`\lesssim 10`$%. The recommended approach was classical: to use Cepheid distances to calibrate secondary distance indicators. A plan was developed (Aaronson & Mould 1986), and our proposal was selected, but the project did not get into top gear until HST’s spherical aberration had been corrected. This observing program, which is now complete, has been described in detail by Kennicutt, Freedman & Mould (1995). The accompanying four papers (Sakai et al. 2000; Gibson et al. 2000; Kelson et al. 2000; Ferrarese et al. 2000a) show how Cepheid distances to 18 spirals within 25 Mpc are used to calibrate the Tully-Fisher relation for spiral galaxies (TF), the fundamental plane for elliptical galaxies (FP), surface brightness fluctuations (SBF), and (with the 6 additional galaxies of the SN calibration project) Type Ia supernovae. Each of these distance indicators is able to penetrate to a sufficient distance (10<sup>4</sup> km s<sup>-1</sup>) that perturbations in the Hubble flow are small compared with the expansion of the Universe. None of them is free from implicit assumptions about the stellar populations of the galaxies whose distances are being measured. That is the case whether we are assuming constancy of the mass-to-light ratio between the galaxies whose Cepheid distances we have measured (“the calibrators”) and galaxies in, say, the Coma cluster, or whether we are assuming that supernova progenitors are essentially similar in the calibrators and the Calán/Tololo survey galaxies. It therefore seems more prudent to combine constraints from four separate secondary distance indicators with different systematics than to investigate one alone. The purpose of the present paper is to show how these constraints on H<sub>0</sub> can be combined to yield the local Hubble Constant to an accuracy $`\lesssim 10`$% at the 1-$`\sigma `$ confidence level. In a subsequent paper (Freedman et al. 2000) we examine the extrapolation to a sufficiently large volume to step from the local value of H<sub>0</sub> to the global expansion rate.

## 2 The Key Project Distance Database and Velocity Field Model

The primary product of the Key Project has been the discovery of Cepheids in a set of galaxies within 25 Mpc and the measurement of their characteristics. The galaxy distances inferred by means of period-luminosity relations are collected by Ferrarese et al. (2000b). Secondary distance indicators extend the range of measurement into the redshift range (2000, 10000) km s<sup>-1</sup>, and it is then necessary to relate the recession velocities of these objects to the smooth Hubble flow. One of the major remaining uncertainties in the determination of the Hubble Constant is the correction of the observed velocities of our tracers for large scale motions. Twenty years ago, the “cosmic” velocities of objects were simply taken to be their velocities corrected for galactic rotation and sometimes additionally corrected to the centroid of the Local Group.
Slightly more than 20 years ago, the apparent motion of the Milky Way and, by inference, the Local Group with respect to the Cosmic Microwave Background (CMB) was discovered, and it has now been exquisitely measured with COBE (Kogut et al. 1993). Slightly less than 20 years ago, the infall of the Local Group into the core of the Local Supercluster that had been predicted by de Vaucouleurs (1958; 1972), Peebles (1976), Silk (1974) and others was detected (Tonry & Davis 1980; Davis et al. 1980; Aaronson et al. 1982a; Aaronson et al. 1980). Soon after, larger scale flows were seen (Burstein et al. 1986; Lynden-Bell et al. 1988). It is now clear (cf. Strauss & Willick 1995) that there are motions on scales of tens of Mpc with amplitudes up to of order the Milky Way’s motion with respect to the CMB. However, the exact nature of these motions with respect to the CMB is still unclear (Lauer & Postman 1994; Riess, Press & Kirshner 1995), as are the precise causes of our motion. Recently, several groups have chosen to treat this problem by correcting apparent velocities to the CMB frame. This is generally done by simply applying a correction equal to our CMB velocity ($`\sim `$630 km s<sup>-1</sup>) times the cosine of the angle between the object and the direction of our motion with respect to the CMB. At large velocities, cz $`\sim `$ 10,000 km s<sup>-1</sup>, unless there are peculiar velocities on much larger scales or with much larger amplitudes than hitherto seen, this correction is both small and probably proper. Not correcting for it, in fact, can introduce a bias in the determination of H<sub>0</sub> which could be as large as V<sub>CMB</sub>/V<sub>object</sub>, or 6%. In fact, for any sample of objects not uniformly distributed w.r.t. $`\mathrm{cos}\theta _{CMB}`$, such a bias could be a non-negligible contribution to the error in H<sub>0</sub>. Worse than that, at smaller redshifts the flow field is much more complicated (cf. Dekel et al. 1999); near the centers of rich clusters the infall amplitudes and/or the velocity dispersion can be very large ($`\gtrsim `$1,000 km s<sup>-1</sup>), and for nearby objects peculiar motions can be a substantial part of the observed velocity. It is also clear that it is a mistake to correct the velocities of very nearby objects by a simple $`\mathrm{cos}\theta _{CMB}`$ term, because nearby objects (e.g. M31 or the M81 group) are closer to being at rest with respect to the Local Group frame than to the CMB frame. To treat this problem for our various determinations of H<sub>0</sub> via several different samples of groups, clusters and individual galaxies, we have developed a simple linear multiattractor model based on the Han and Mould (1990) and Han (1992) models, and similar to the multiattractor model advocated by Marinoni et al. (1998a). The model is linear and assumes (1) a flow towards each attractor (e.g. Virgo, the Great Attractor) that is independent of the others, so the corrections for each are additive, (2) flows described by a fiducial infall velocity at the position of the Local Group towards each attractor (cf. Peebles 1976; Schechter 1980), and (3) an essentially cylindrical (section of a cone) masked volume around each attractor where objects are forced to the attractor’s velocity. This last procedure collapses the cluster cores and avoids our having to deal with regions where the flow field is certainly non-linear and usually multi-valued for any observed velocity.
We add one additional simplifying assumption: (4) to first order, peculiar velocities are small enough that an object’s apparent velocity in the Local Group frame is the estimate of its distance. Again, this assumption is generally justified for objects far from our attractors. With these assumptions, it is trivial to include additional attractors (e.g. the Shapley Supercluster, Scaramella et al. 1989) if desired. The simple linear infall model has been described by a number of authors, most notably Schechter (1980). In this model, the estimated radial component (with respect to the Local Group) of the peculiar velocity induced by an attractor is, by the law of cosines,

$$V_{infall}\simeq V_{fid}\mathrm{cos}\theta +V_{fid}\left(\frac{V_o-V_a\mathrm{cos}\theta }{r_{oa}}\right)\left(\frac{r_{oa}}{V_a}\right)^{1-\gamma },(1)$$

where $`V_{fid}`$ is the amplitude of the infall pattern toward that attractor at the Local Group, $`V_o`$ is the observed velocity of the object (in the LG frame), $`V_a`$ is the observed distance of the attractor expressed as a velocity, $`\gamma `$ is the slope of the attractor’s density profile, $`\rho (r)\propto r^{-\gamma }`$, $`\theta `$ is the projected angle between the object and the attractor, and $`r_{oa}`$ is the estimated distance of the object from the attractor, expressed as a velocity,

$$r_{oa}=\sqrt{V_o^2+V_a^2-2V_oV_a\mathrm{cos}\theta }.(2)$$

The first term in equation (1) is the projection of the LG infall velocity toward the attractor ($`V_{fid}`$), and the second term is the projection of the object’s infall toward the attractor. Note that we have modified the normal form of this relation, which uses the true relative distances of the objects in question, to instead express distances as velocities. To produce our simple flow field corrections, rather than solve for the actual relative distances of the objects in question, we have assumed that, to first order, the apparent radial velocity of an object (in the Local Group frame) represents its distance. We fix and use the cosmic velocities of the attractors, after solving for their own motions with respect to the other attractors. A more complete treatment would iteratively solve for the true velocity of each source, and an even more complete treatment would be to use the actual observed density field (Marinoni et al. 1998b), but since our goal is to provide just a first order flow field correction to investigate and eliminate significant flow field biases in our H<sub>0</sub> determinations, we stop here. The details are given in Appendix A. This seems reasonable given the uncertainty in the absolute distances and locations of the main attractors, our significant lack of knowledge of the flow field at distances much beyond 4500 km/s, and the other simplifying assumptions that are generally made, such as assuming spherical attractors. We will test the above assumption in future work (Huchra et al. 2000), and implement an iterative solution for the flow field corrections if it is warranted. For the present, we note that if we abandoned our flow field model and assumed instead that only the observer was in motion relative to the smooth Hubble flow, the maximum change in our result for H<sub>0</sub> would be a 4% increase in the SBF result. There would be a 2% decrease in the result from supernovae and smaller effects from TF and FP.
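A minimal sketch of the single-attractor correction of equations (1) and (2) follows; the object velocity, angle, attractor distance, and fiducial infall used here are illustrative stand-ins, while the parameters actually adopted are those of Appendix A (Table A1).

```python
import numpy as np

def infall_velocity(V_o, theta, V_a, V_fid, gamma=2.0):
    # Radial peculiar velocity induced by one attractor, eqs. (1)-(2):
    # LG infall projection plus the object's own infall projection.
    r_oa = np.sqrt(V_o ** 2 + V_a ** 2 - 2.0 * V_o * V_a * np.cos(theta))
    return (V_fid * np.cos(theta)
            + V_fid * ((V_o - V_a * np.cos(theta)) / r_oa)
                    * (r_oa / V_a) ** (1.0 - gamma))

# Illustrative numbers: an object at 4000 km/s in the LG frame, 30 deg
# from an attractor at ~1000 km/s with a 200 km/s fiducial infall.
v_in = infall_velocity(4000.0, np.radians(30.0), 1000.0, 200.0)
print(f"infall correction: {v_in:.0f} km/s")
```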
## 3 The Virtual Key Project

The cosmic distance ladder is a notable example of the concatenation of measurement uncertainties in a multi-step experiment (Rowan-Robinson 1986). Careful distinction between random and systematic error is required (e.g. Madore et al. 1998), and bias is a concern (Sandage 1996). For the purposes of this paper, we have developed a simulation code which recreates the Key Project in the computer, allowing the uncertainties and parameter dependences to be followed extensively and investigated rigorously. One fundamental assumption of the Key Project is that the distance of the Large Magellanic Cloud is 50 kpc (m-M = 18.50 $`\pm `$ 0.13 mag). Indeed, our result might best be expressed in units of km/sec/LMC-distance. Nevertheless, a 6.5% uncertainty in the distance of the LMC is incorporated in the project error budget. In $`\mathrm{\S }`$5 we also explore use of a literature survey (Figure 1) as a probability distribution function for the LMC distance. Westerlund’s (1996) survey has been updated, as discussed in more detail by Freedman et al. (2000). The LMC Cepheid period-luminosity (PL) relation in the simulation also has a 0.02 mag zeropoint uncertainty (Madore & Freedman 1991; Tanvir 1997). This estimate will be tested when photometry of a larger sample of LMC Cepheids is complete (Sebo et al. 2000). A second systematic error in the Key Project arises from the residual uncertainty of WFPC2’s correction for Charge Transfer Efficiency (see Appendix B) and calibration onto the (V,I) system. This is amplified to a 0.09 mag uncertainty in distance modulus by the approach we have adopted to reddening correction, because each galaxy’s absolute distance modulus is a linear combination of the apparent moduli: $`\mu _0`$ = 2.45 $`\mu _I`$ – 1.45 $`\mu _V`$. The metallicity dependence of the PL relation has proved difficult to constrain (Kennicutt et al. 1998). For most of the galaxies for which we have measured Cepheid distances, however, measurements of oxygen abundances in HII regions in or near the Cepheid fields are also available. Sakai et al. (2000), Gibson et al. (2000), Kelson et al. (2000) and Ferrarese et al. (2000a) present results, both neglecting PLZ, and also correcting the galaxy distances published in papers I–XXI by the coefficient $`\gamma _{VI}`$ = d(m-M)/d[O/H] = –0.24 $`\pm `$ 0.16 mag/dex (Kennicutt et al. 1998), and this is followed in the simulation. In each simulation a value of $`\gamma `$ is drawn from a normal distribution for this purpose. Normal distributions are employed throughout these simulations, except where otherwise noted. For each of the galaxies in the Virtual Key Project a Cepheid distance is generated assuming a 0.05 mag intercept uncertainty in its PLV relation and a similar intercept uncertainty in PLI. These are typical values; some of the real galaxies have more Cepheids and better determined distances (e.g. NGC 925), and others fewer Cepheids (e.g. NGC 4414), and hence larger uncertainties. Implicitly, we have assumed that the reddening law is universal, adopting an uncertainty in its slope: R<sub>V</sub> = 3.3 $`\pm `$ 0.3. As a follow-on to the Key Project, this will be tested with NICMOS observations of HST Cepheids for some galaxies. A primary, and long awaited (Aaronson & Mould 1986), outcome of the Key Project is a calibration of the TF relation. Sakai et al. (2000) find an $`rms`$ scatter about this relation, and this is included in the 18 galaxy calibration simulation. The calibration is then applied to a sample of 5 clusters with $`cz`$ $`>`$ 5000 km s<sup>-1</sup>. (And this is all then realized half a million times.) Sakai et al.
analyze a larger sample than this, but their final result is based on the most distant members of the cluster dataset. Comparison of the simulated and input Hubble relations yields an H<sub>0</sub> error from the TF calibration component of the Key Project. In the simulation, velocities are drawn from a normal distribution with $`\sigma `$ = 300 km s<sup>-1</sup> (Giovanelli et al. 1998). The real flow field is more complex, and the model adopted by Sakai et al. (2000), Gibson et al. (2000), Kelson et al. (2000) and Ferrarese et al. (2000a) is specified in Appendix A. We have incorporated the Tully-Fisher error budget given by Sakai et al. in their Table 6. Noting the discrepancy they report between cluster distances based on I band photometry and those based on H band photometry, we adopt an uncertainty of 0.18 mag to allow for systematics in the galaxy photometry. Second, the Key Project provides a very direct calibration of the SBF relation. Following Ferrarese et al. (2000a), the simulation takes six galaxies and derives a zeropoint for the relation between SBF magnitude and color. This relation, which is assumed to have an $`rms`$ scatter of 0.11 mag (Tonry et al. 1997), is then applied to the four galaxies in the redshift range 3000 – 5000 km s<sup>-1</sup> with HST Planetary Camera SBF measurements. We have omitted Coma and NGC 4373, just as Ferrarese did. Comparison of the simulated and input Hubble relations yields an H<sub>0</sub> error from the SBF calibration component of the Key Project. The approach here follows the error budget in Table 5 of Ferrarese et al. (2000a). The third, and in some respects strongest, component of the distance scale calibrated by HST’s Cepheid database is the relation between the maximum luminosity of the SNIa light curve and the supernova decline rate (Hamuy et al. 1996; Phillips et al. 1999). Uncertainties in the observed magnitudes and reddening of the supernovae and in their Cepheid distances affect the calibration. In the simulation we have adopted the uncertainties quoted by Gibson et al. (2000) for the six calibrators and incorporated the error budget given in their Table 7. The calibration was then applied to the 27 supernovae of Hamuy et al. between 6,000 and 30,000 km s<sup>-1</sup>. Finally, to simulate the calibration of the fundamental plane, we have employed the error budget in Table 3 of Kelson et al. and assumed that the Leo ellipticals lie within 1 Mpc of the respective mean distances of their Cepheid-bearing associates. In the case of Virgo and Fornax, we have assumed elongation of the cluster along the line of sight, described by Gonzales & Faber (1997) as an exponential fall-off with a 2.5–4 Mpc scale length. The calibration is then applied to 8 clusters, ranging from Hydra to Abell 3381 in distance. We have assumed that the clusters have the same 300 km s<sup>-1</sup> $`rms`$ noise that is seen in the TF sample.
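The simulation logic can be illustrated with a stripped-down sketch: shared systematic modulus errors (LMC distance, photometric zeropoint), per-galaxy Cepheid scatter beaten down by averaging over the calibrators, and 300 km/s cluster peculiar velocities, repeated over many realizations. The distributions and cluster geometry here are schematic stand-ins for the fuller error budgets of Tables 3, 5, 6, and 7.

```python
import numpy as np

rng = np.random.default_rng(42)
H0_TRUE, N_CAL, N_CLUS, N_REAL = 70.0, 18, 5, 20000

def one_realization():
    # Systematic modulus errors shared by every calibrator ...
    dmu = rng.normal(0.0, 0.13) + rng.normal(0.0, 0.09)  # LMC + WFPC2/CTE
    # ... plus per-galaxy Cepheid scatter, averaged down over calibrators.
    dmu += rng.normal(0.0, 0.08) / np.sqrt(N_CAL)
    # Distant clusters: Hubble velocities plus 300 km/s peculiar noise.
    d_true = rng.uniform(70.0, 140.0, N_CLUS)            # Mpc
    v = H0_TRUE * d_true + rng.normal(0.0, 300.0, N_CLUS)
    d_est = d_true * 10.0 ** (0.2 * dmu)  # modulus error scales all distances
    return np.mean(v / d_est)

H0 = np.array([one_realization() for _ in range(N_REAL)])
pct = 100.0 * (H0 / H0_TRUE - 1.0)
print(f"bias = {pct.mean():+.1f}%, width = {pct.std():.1f}% (rms)")
```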
## 4 The Error Distributions

Based on 5 $`\times `$ 10<sup>5</sup> realizations, Figure 2a shows that the simulated TF error distribution, which gives the probability that H<sub>0</sub> determined by Sakai et al. (2000) alone has a given percentage error, is rather normal looking, biased at no more than the 1% level, and has $`\sigma _{TF}\sim 12`$%. In fact, Sakai et al. have produced four realizations of the TF H<sub>0</sub> measurement at different wavelengths, obtaining 74 km s<sup>-1</sup> Mpc<sup>-1</sup> from an I band calibration and 67 km s<sup>-1</sup> Mpc<sup>-1</sup> from an H band calibration. The probability that this 10% discrepancy would arise by chance, especially when the same calibrator distances have been assumed in both cases, is small. Sakai et al. consider systematics in the linewidths, I band extinction corrections, and H band aperture/diameter ratios as possible contributors to this discrepancy. Uncertain homogeneity in galaxy diameters leads to lower weight for the H band result. The other panels in Figure 2 show the SBF error distribution, the SN error distribution, and the FP error distribution. The narrowest is the SN error distribution (9% $`rms`$, compared with 12% in the other two cases). Figure 3 shows the covariance between the SN and TF calibration errors, which occurs because a number of the SNIa calibrators are also TF calibrators. Cepheid bearing galaxies represent only a selected sample (or, in the FP case, merely neighbors) of the population of galaxies to which the calibration we have derived is applied. Stellar population effects are calibrated empirically in the case of SBF, but we make no allowance for parameters (beyond decline rate and reddening) which may still remain hidden in the SN case.

## 5 Combining The Constraints

The results of the previous section will aid us in optimally combining the four secondary distance indicators. We can compute $`<H_0>`$ from a straight mean of the four measurements and compare this with a weighted mean. We choose weights for H<sub>0</sub><sup>TF</sup> and H<sub>0</sub><sup>SBF</sup> which are the inverse of $`\sigma _{TF}^2`$ and $`\sigma _{SBF}^2`$, respectively. Effectively, this weights the SN distance indicator 1.5 times relative to the other three. Combining H<sub>0</sub><sup>TF</sup> = 71 $`\pm `$ 4 (random) $`\pm `$ 7 (systematic) (Sakai et al. 2000) with H<sub>0</sub><sup>SBF</sup> = 69 $`\pm `$ 4 $`\pm `$ 6 (Ferrarese et al. 2000a), H<sub>0</sub><sup>FP</sup> = 78 $`\pm `$ 8 $`\pm `$ 10 (Kelson et al. 2000), and H<sub>0</sub><sup>SNIa</sup> = 68 $`\pm `$ 2 $`\pm `$ 5 (Gibson et al. 2000), we obtain H<sub>0</sub> = 71 $`\pm `$ 6 km s<sup>-1</sup> Mpc<sup>-1</sup>, without the weighting influencing the outcome a great deal.
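The combination amounts to an inverse-variance weighted mean. A minimal sketch, using the simulated percentage widths of § 4 as stand-in weights (the published combination propagates the full random and systematic budgets):

```python
import numpy as np

H0  = np.array([71.0, 69.0, 78.0, 68.0])   # TF, SBF, FP, SNIa
wid = np.array([0.12, 0.12, 0.12, 0.09])   # fractional widths from Fig. 2
w   = 1.0 / (wid * H0) ** 2                # inverse-variance weights

print(f"straight mean: {H0.mean():.1f} km/s/Mpc")
print(f"weighted mean: {np.sum(w * H0) / np.sum(w):.1f} km/s/Mpc")
```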
The error distribution for the combined constraints is shown in Figure 4. The width of this distribution is $`\pm `$9% (1$`\sigma `$). A dimension which each of the error distributions shares is dependence on the assumed distance of the LMC. This is illustrated in Figure 5, which shows the 67% probability contours for each of the four secondary distance indicators. The only comparable distance indicator in the local volume which does not depend on the LMC distance (but is still influenced by SN1987A) is the Expanding Photospheres Method applied to supernovae of Type II. This yields H<sub>0</sub> = 73 $`\pm `$ 12 km s<sup>-1</sup> Mpc<sup>-1</sup> (Schmidt et al. 1994) at 95% confidence. If we adopt Figure 1 as the probability distribution of the distance of the LMC, the error distributions broaden and reflect the skew seen in Figure 1. The combined constraint is shown in Figure 6. The uncertainty in H<sub>0</sub> grows to 12%, and the bias, the amount by which H<sub>0</sub> is underestimated through our assumption of a 50 kpc distance, becomes 4.5%. It is likely that Figure 1 exaggerates the probability of LMC distance moduli as low as 18.1 mag, as it weights recent estimates based on the brightness of the “red clump” almost as highly as it weights Cepheids. A critical literature review on the LMC distance is provided by Freedman et al. (2000). Expressing our result as a self-contained experiment, we obtain H<sub>0</sub> = 3.5 $`\pm `$ 0.2 km s<sup>-1</sup> per LMC distance. We conclude that the expansion rate within the area mapped by the secondary distance indicators we have calibrated is 71 $`\pm `$ 6 km s<sup>-1</sup> Mpc<sup>-1</sup>. The distribution of galaxies in Figure 7 renders this result relatively immune to a low amplitude bulk flow of the sort detected by Giovanelli et al. (1998). Similarly, velocity perturbations due to a Coma-centric bubble or a Local Void (Tully & Fisher 1987) would tend to generate a dipole in Giovanelli’s results, which is not seen, at least in the Arecibo sky sample available to date. Finally, we note that adoption of the metallicity dependence of the Cepheid PL relation described in $`\mathrm{\S }`$3 reduces the combined H<sub>0</sub> by 4% to 68 $`\pm `$ 6 km s<sup>-1</sup> Mpc<sup>-1</sup>.

## 6 Future Work

A large scale, locally centered bubble would require that this local H<sub>0</sub> be corrected for the density anomaly. Tammann (1998) and Zehavi et al. (1998) have estimated that this amounts to a few percent, and this deserves careful evaluation focussed on the volume sampled in the accompanying papers. Truly large scale density perturbations are correspondingly unlikely (Shi & Turner 1998). To complete the Key Project, we intend to examine this matter and several other limitations of the current work, which have been identified here and in the accompanying papers. These include the limited LMC Cepheid PL relation, the excessively large uncertainty in the photometric calibration we have adopted for WFPC2, and the comparison of results from this classical approach to the Extragalactic Distance Scale with recent progress in the analysis of gravitationally lensed quasar time delays and the X-ray gas in rich clusters of galaxies (the Sunyaev-Zeldovich effect). This work is in progress (Freedman et al. 2000). The Key Project’s error analysis will also be developed in more detail than we have presented here. These additional steps should secure the Key Project’s goal – a 10% Hubble Constant – to a higher level of confidence than the 1$`\sigma `$ level reported here.

###### Acknowledgements.

The work presented in this paper is based on observations with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute, which is operated by AURA, Inc. under NASA contract No. 5-26555. Support for this work was provided by NASA through grant GO-2227-87A from STScI. SMGH and PBS are grateful to NATO for travel support via a Collaborative Research Grant (960178). Collaborative research on HST data at Mount Stromlo was supported by a major grant from the International S & T program of the Australian Government’s Department of Industry, Science and Resources. LF acknowledges support by NASA through Hubble Fellowship grant HF-01081.01-96A. SS acknowledges support from NASA through the Long Term Space Astrophysics Program, NAS-7-1260. We are grateful to the Lorentz Center of Leiden University for its hospitality in 1998, when this series of papers was planned. We would like to thank Riccardo Giacconi for instigating the HST Key Projects.

Appendix A. The Local Flow Field

The model outlined in $`\mathrm{\S }`$2 employs a five step procedure to convert heliocentric velocities to velocities characteristic of the expansion of the Universe.

1) Correction of the observed heliocentric velocity of our objects to the centroid of the Local Group.
We use here the Yahil, Tammann and Sandage (1977) prescription (YST) for consistency, but note that use of other prescriptions (e.g. the IAU 300 sin(l)cos(b)) generally does not make a large difference beyond halfway to Virgo. The YST correction to the Local Group centroid is

$$V_{LG}=V_H-79\mathrm{cos}(l)\mathrm{cos}(b)+296\mathrm{sin}(l)\mathrm{cos}(b)-36\mathrm{sin}(b).(A1)$$

As indicated in $`\mathrm{\S }`$2, we set $`V_0=V_{LG}`$.

2) Correction for Virgo infall. Note that the Virgo cosmic velocity is derived by correcting the observed heliocentric velocity (Huchra 1995) to the LG centroid, for our infall velocity, and for its infall into the GA. Note, again, that the correction for Virgo infall includes two components: the change in velocity due to the infall of the object into Virgo, plus the vector contribution due to the Local Group’s peculiar velocity into Virgo. That second term is just $`V_{fid}\mathrm{cos}(\theta _v)`$.

3) Correction for GA infall, as in 2).

4) Correction for Shapley supercluster infall. The correction adopted is set so that it reproduces the amplitude of the CMB dipole as $`V\to \mathrm{\infty }`$.

5) Correction for other concentrations as necessary.

Since we have set the solution to be additive, the final corrected “Cosmic” velocity w.r.t. the LG is then

$$V_{Cosmic}=V_H+V_{c,LG}-V_{in,Virgo}-V_{in,GA}-V_{in,Shap}-\mathrm{},(A2)$$

where $`V_H`$ is the observed heliocentric velocity and $`V_{c,LG}`$ is the correction to the Local Group centroid described above. Note that the YST correction to the Local Group centroid is not the same as the IAU correction, so some of the models and assumptions made in earlier Virgo flow fits have to be modified. We have used the YST prescription primarily because it is what was used to derive the corrected CMB dipole. For our initial attempt at a detailed flow field correction, we include just three attractors: the Local Supercluster, the Great Attractor and the Shapley Supercluster. The parameters we use for the attractors are given in Table A1 and are taken (and estimated) from a variety of sources, including AHMST, Han (1992), Faber & Burstein (1989), Shaya, Tully & Pierce (1992) and Huchra (1995). For simplicity, we also assume $`\gamma `$ = 2. For this first cut model, we assume an infall into Virgo of 200 km s<sup>-1</sup> at the LG, an infall into the GA of 400 km s<sup>-1</sup>, and an infall into Shapley of 85 km s<sup>-1</sup>. These numbers give good agreement with the amplitude of the CMB dipole, but with only these three attractors, the direction of maximum LG motion is 27 degrees away from the CMB direction.

Appendix B: The Photometric Zeropoint

The status and calibration of the Wide Field Planetary Camera 2 (WFPC2) have been reviewed by Gonzaga et al. (1999), who find that photometric accuracies of a few percent are routinely possible. The baseline photometric calibration for WFPC2 is given by Holtzman et al. (1995). The standard calibration for papers IV to XXI in the Key Project series is that of Hill et al. (1998), and accounts for the principal systematic CTE effect, the so-called long vs. short exposure effect (Wiggs et al. 1999). Photometric stability has been satisfactory over the duration of the project, with fluctuations of $`\lesssim `$2% peak-to-peak over 4 years at the wavelengths observed here (Heyer et al. 1999). Images obtained with the CCDs in WFPC2 are known to be subject to charge loss during readout, presumably due to electron traps in the silicon of the detectors.
Approximate corrections for this charge loss have been published by Whitmore & Heyer (1997) and Stetson (1998), but these corrections are based on comparatively short exposures of comparatively bright stars, so the observations exhibit a comparatively narrow range of apparent sky brightness. Furthermore, the zero points of the WFPC2 photometric system are primarily determined from comparatively bright stars, since those are the ones for which the ground-based photometry is most reliable. Since the amount of charge lost from a stellar image appears to be a function of both the brightness of the star image and the apparent brightness of the sky, these dependences must be quite well determined to enable reliable extrapolation from bright stars observed against a faint sky (standard stars in relatively uncrowded fields in short exposures) to faint stars observed against a bright sky (distant Cepheids projected against galactic disks in long exposures). The study of Stetson (1998) was intended to provide that extrapolation, based upon comparatively short exposures of the nearby globular cluster $`\omega `$ Centauri, and both short and long exposures of the remote globular cluster NGC 2419. These data seemed to yield consistent charge-loss corrections and zero points, but various tests suggested that there remained some uncontrolled systematic effects which might amount to of order $`\pm `$0.02 mag in each of the $`V`$ and $`I`$ filters. Improved correction for Charge Transfer Efficiency effects in the WFPC2 CCDs has been presented by Stetson (1998). Presumably because of different illumination levels, correction of our photometry affects the V magnitudes and the I magnitudes differently. In the mean, and based on the reference star photometry published in papers IV to XXI, distance moduli on the Stetson (1998) system are 0.07 $`\pm `$ 0.02 mag closer than on the Hill et al. system. In attempting to improve upon this situation, Stetson (work in progress) has added ground-based and WFPC2 data for the nearby globular cluster M92 (= NGC 6341) to the solution. The WFPC2 data for M92 are intermediate in exposure time between those for $`\omega `$ Cen and the short exposures of NGC 2419 on the one hand, and the long exposures of NGC 2419 on the other. As in Stetson (1998), the $`\omega `$ Cen, NGC 2419, and M92 data were all combined into a single solution to determine the optimum coefficients relating the amount of charge lost from a stellar image to its position on the detector, its brightness, and the surface brightness of the local sky. When these corrections are applied to determine the optimum photometric zero points from the data for each globular cluster, it is found that the results for $`\omega `$ Cen and NGC 2419 are consistent, as before, but the zero points implied by the M92 data are substantially different: $`+0.054\pm 0.003`$ mag in $`V`$, and $`-0.038\pm 0.003`$ in $`I`$. Adoption of a zeropoint based on M92 would move the Key Project galaxies 0.14 mag closer than the Hill et al. reference point. On the other hand, Saha (in preparation) finds different results from analysis of Cycle 7 calibration data. He has determined that CTE correction will yield Cepheid colors bluer by $`\sim `$0.02 mag, corresponding to distance moduli $`more`$ $`distant`$ by 0.05 mag. Given these uncertainties, we continue to adopt the Hill et al. (1998) calibration, but we note that CTE effects render our distance moduli more uncertain than we have previously estimated.
The modulus uncertainty adopted here is $`\pm `$0.09 mag. That makes Stetson’s M92 results a 1.5$`\sigma `$ anomaly. Physically, the M92 results seem anomalous, since CTE correction should be intrinsically grey. Further investigation of the zeropoint for the Cepheid photometry database is required. Figure Captions Figure 1. Distribution of published LMC distance moduli from the literature. Values from 1983–1995 are from the review by Westerlund (1996). Values up to the end of 1998 have been collated by Freedman (1999). Figure 2. The distribution of uncertainties in H<sub>0</sub> for each of the four secondary distance indicators calibrated and applied in the Virtual Key Project. Figure 3. Percentage error contours for the supernova and Tully-Fisher measurements of H<sub>0</sub>. The outer contour encloses 80% of the realizations. Figure 4. The uncertainty distribution for the combined constraints on H<sub>0</sub>. Figure 5. The Key Project calibration constrains H<sub>0</sub> in four ways, but these are each, in turn, dependent on the assumed distance of the Large Magellanic Cloud which provides the reference Cepheid PL relation for the project. Figure 6. The corresponding distribution calculated from the probability distribution of LMC distances in Figure 1. Figure 7. The distribution of secondary distance indicators in projection on the supergalactic plane. Open circles: TF clusters from Sakai et al. (2000); solid circles: SBF clusters from Ferrarese et al. (2000a); asterisks: SNeIa from Hamuy et al. (1996); crosses: FP clusters from Kelson et al. (2000).
no-problem/9909/hep-ph9909277.html
ar5iv
text
# Weak Matrix Elements without Quark Masses on the Lattice ## 1 Introduction Since the original proposals of using lattice QCD to study hadronic weak decays , substantial theoretical and numerical progress has been made: the main theoretical aspects of the renormalization of composite four-fermion operators are well understood ; the calculation of $`K^0`$–$`\overline{K}^0`$ mixing, relevant to the prediction of the CP-violation parameter $`ϵ`$, has reached a level of accuracy which is unmatched by any other approach ; increasing precision has been gained in the determination of the electro-weak penguin amplitudes necessary to the prediction of the CP-violation parameter $`ϵ^{\prime }/ϵ`$ ; finally, matrix elements of $`\mathrm{\Delta }S=2`$ operators which are relevant to the study of FCNC effects in SUSY models have been computed . Methods and symbols used in this talk and all the results we report are fully described in . ## 2 Matrix elements without quark masses The analysis of $`K^0`$–$`\overline{K}^0`$ mixing with the most general $`\mathrm{\Delta }S=2`$ effective Hamiltonian requires the knowledge of the matrix elements $`\overline{K}^0|O_i|K^0`$ of the parity conserving parts of the following operators $`O_1`$ $`=`$ $`\overline{s}^\alpha \gamma _\mu (1-\gamma _5)d^\alpha \overline{s}^\beta \gamma _\mu (1-\gamma _5)d^\beta ,`$ $`O_2`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\alpha \overline{s}^\beta (1-\gamma _5)d^\beta ,`$ $`O_3`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\beta \overline{s}^\beta (1-\gamma _5)d^\alpha ,`$ (1) $`O_4`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\alpha \overline{s}^\beta (1+\gamma _5)d^\beta ,`$ $`O_5`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\beta \overline{s}^\beta (1+\gamma _5)d^\alpha .`$ On the lattice, matrix elements of weak four-fermion operators are computed from first principles. But, following the common lore, they are usually given in terms of the so-called $`B`$-parameters, which measure the deviation of their values from those obtained in the Vacuum Saturation Approximation (VSA). For the operators in (1), the $`B`$-parameters are usually defined as $`\overline{K}^0|O_1(\mu )|K^0`$ $`=`$ $`{\displaystyle \frac{8}{3}}M_K^2f_K^2B_1(\mu ),`$ (2) $`\overline{K}^0|O_i(\mu )|K^0`$ $`=`$ $`{\displaystyle \frac{C_i}{3}}\left({\displaystyle \frac{M_K^2f_K}{m_s(\mu )+m_d(\mu )}}\right)^2B_i(\mu ),`$ where $`C_i=-5,1,6,2`$ for ($`i=2,\mathrm{},5`$). In (2), $`\overline{K}^0|O_1|K^0`$ is parameterized in terms of well-known experimental quantities and $`B_1(\mu )`$ ($`B_K(\mu )\equiv B_1(\mu )`$). On the contrary, the parameterization (2) makes $`\overline{K}^0|O_i|K^0`$ ($`i=2,\mathrm{},5`$) depend quadratically on the quark masses, while these matrix elements are expected to remain finite in the chiral limit and to depend only linearly on the quark masses. Contrary to $`f_K`$, $`M_K`$, etc., quark masses cannot be directly measured by experiments, and the present accuracy in their determination is still rather poor. Therefore, whereas for $`O_1`$ we introduce $`B_K`$ as an alias of the matrix element, by using (2) we replace each of the SUSY matrix elements with two unknown quantities, i.e. the $`B`$-parameter and $`m_s+m_d`$.
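The point is easily quantified: in eq. (2) the matrix element scales as $`(m_s+m_d)^2`$ in the denominator, so a relative error on the quark-mass combination propagates roughly doubled into $`\overline{K}^0|O_i|K^0`$. A minimal sketch (all input numbers are placeholders, not our lattice results):

```python
# Sensitivity of the conventional parameterization (eq. 2) to the quark masses.
# All numbers are illustrative placeholders, not lattice results.
M_K, f_K = 0.4976, 0.160      # GeV
B_4 = 1.0                      # a B-parameter, placeholder value
C_4 = 6

def O4_conventional(m_s_plus_m_d):
    """<Kbar|O_4|K> from eq. (2), in GeV^4."""
    return (C_4 / 3.0) * (M_K**2 * f_K / m_s_plus_m_d)**2 * B_4

central = O4_conventional(0.130)          # m_s + m_d = 130 MeV, say
shifted = O4_conventional(0.130 * 1.10)   # a 10% heavier quark-mass combination
print(shifted / central - 1.0)            # ~ -17%: twice the quark-mass error
```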
To overcome these problems, we propose the following new parameterization of $`\mathrm{\Delta }S=2`$ operators $`\overline{K}^0|O_1(\mu )|K^0`$ $`=`$ $`{\displaystyle \frac{8}{3}}M_K^2f_K^2B_1(\mu ),`$ (3) $`\overline{K}^0|O_i(\mu )|K^0`$ $`=`$ $`M_{K^{*}}^2f_K^2\stackrel{~}{B}_i(\mu ).`$ The $`\stackrel{~}{B}_i(\mu )`$ parameters are still dimensionless quantities and can be computed on the lattice by studying appropriate ratios of three- and two-point functions . By simply using them, we have eliminated any fictitious reference to the quark masses, hence reducing the systematic errors on the corresponding physical amplitudes. An alternative parameterization, which has not been used in our numerical analysis but may be very useful in the future, can be found in . The VSA and $`B`$-parameters are also used for matrix elements of operators which enter the $`\mathrm{\Delta }S=1`$ effective Hamiltonian. Notice that this ”conventional” parameterization is solely responsible for the apparent quadratic dependence of $`ϵ^{\prime }/ϵ`$ on the quark masses. This introduces a redundant source of systematic error which can be avoided by parameterizing the matrix elements in terms of measured experimental quantities; consequently, a better determination of the strange quark mass $`m_s(\mu )`$ will not by itself improve our theoretical knowledge of $`ϵ^{\prime }/ϵ`$. In this work we have computed the matrix elements $`\pi |O_i^{3/2}|K`$ of the four-fermion operators $`O_i^{3/2}`$ ($`i=7,8,9`$) which contribute to the $`\mathrm{\Delta }I=3/2`$ sector of $`ϵ^{\prime }/ϵ`$. In fact, in the chiral limit $`\pi \pi |O_i^{3/2}|K`$ can be obtained, using soft pion theorems, from $`\pi ^+|O_i^{3/2}|K^+`$. For degenerate quark masses, $`m_s=m_d=m`$, and in the chiral limit, we find $`\underset{m\to 0}{lim}\pi ^+|O_7^{3/2}|K^+`$ $`=`$ $`M_\rho ^2f_\pi ^2\underset{m\to 0}{lim}\stackrel{~}{B}_5(\mu )`$ $`\underset{m\to 0}{lim}\pi ^+|O_8^{3/2}|K^+`$ $`=`$ $`M_\rho ^2f_\pi ^2\underset{m\to 0}{lim}\stackrel{~}{B}_4(\mu )`$ $`\underset{m\to 0}{lim}\pi ^+|O_9^{3/2}|K^+`$ $`=`$ $`{\displaystyle \frac{8}{3}}M_\pi ^2f_\pi ^2\underset{m\to 0}{lim}B_1(\mu ).`$ In the limit $`m_s=m_d`$, complicated subtractions of lower dimensional operators are avoided for $`\mathrm{\Delta }I=3/2`$ operators. This is not the case for $`\mathrm{\Delta }I=1/2`$ operators, which enter the determination of $`ϵ^{\prime }/ϵ`$ : in this case the mixing with lower dimensional operators makes the computation much more involved. A reliable lattice estimate of these matrix elements is still missing, but encouraging preliminary results with domain-wall fermions have been presented in ref. . ## 3 Renormalization Group Invariant Operators Physical amplitudes can be written as $$F|_{eff}|I=F|\stackrel{}{O}(\mu )|I\stackrel{}{C}(\mu ),$$ (4) where $`\stackrel{}{O}(\mu )\equiv (O_1(\mu ),\mathrm{},O_N(\mu ))`$ is the operator basis (for example the basis defined in (1) for the $`\mathrm{\Delta }S=2`$ effective Hamiltonian) and $`\stackrel{}{C}(\mu )`$ the corresponding Wilson coefficients represented as a column vector. $`\stackrel{}{C}(\mu )`$ is expressed in terms of its counterpart, computed at a large scale $`M`$, through the renormalization-group evolution matrix $`\widehat{W}[\mu ,M]`$ $$\stackrel{}{C}(\mu )=\widehat{W}[\mu ,M]\stackrel{}{C}(M),$$ (5) where the initial conditions $`\stackrel{}{C}(M)`$ are obtained by perturbative matching of the full theory to the effective one at the scale $`M`$ where all the heavy particles have been removed.
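For a single operator, eq. (5) collapses to a scalar relation with the familiar leading-order running; a minimal sketch (one-loop coupling, with placeholder values of the anomalous dimension and of $`\mathrm{\Lambda }_{QCD}`$, chosen only for illustration):

```python
import math

def alpha_s(mu, Lambda=0.3, nf=4):
    """One-loop strong coupling; Lambda in GeV is a placeholder value."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (b0 * math.log(mu**2 / Lambda**2))

def C_at_mu(C_at_M, mu, M, gamma0=4.0, nf=4):
    """Leading-order RG evolution of a single Wilson coefficient,
    C(mu) = (alpha_s(M)/alpha_s(mu))**(gamma0/2b0) * C(M)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    U = (alpha_s(M) / alpha_s(mu)) ** (gamma0 / (2.0 * b0))
    return U * C_at_M

print(C_at_mu(1.0, mu=2.0, M=80.0))   # evolve from M ~ m_W down to 2 GeV
```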
$`\widehat{W}[\mu ,M]`$ can be written as (see for example ) $`\widehat{W}[\mu ,M]=\widehat{M}[\mu ]\widehat{U}[\mu ,M]\widehat{M}^{-1}[M],`$ (6) where $`\widehat{U}=(\alpha _s(M)/\alpha _s(\mu ))^{(\gamma _O^{(0)T}/2\beta _0)}`$ is the leading-order evolution matrix and $`\widehat{M}(\mu )`$ is an NLO matrix defined in , which can be obtained by solving the Renormalization Group Equations (RGE) at next-to-leading order. The Wilson coefficients $`\stackrel{}{C}(\mu )`$ and the renormalized operators $`\stackrel{}{O}(\mu )`$ are usually defined in a given scheme, at a fixed renormalization scale $`\mu `$, and they depend on the renormalization scheme and scale in such a way that only $`H_{eff}`$ is scheme and scale independent. This is a source of confusion in the literature, especially when (perturbative) coefficients and (non-perturbative) matrix elements are computed using different techniques, regularizations, schemes and renormalization scales. To simplify the matching procedure, we propose a Renormalization Group Invariant (RGI) definition of Wilson coefficients and composite operators which generalizes what is usually done for $`B_K`$ and for quark masses . We define $$\widehat{w}^{-1}[\mu ]\equiv \widehat{M}[\mu ]\left[\alpha _s(\mu )\right]^{-\widehat{\gamma }_O^{(0)T}/2\beta _0},$$ (7) and using Eqs. (6) and (7) we obtain $$\widehat{W}[\mu ,M]=\widehat{w}^{-1}[\mu ]\widehat{w}[M].$$ (8) The effective Hamiltonian (4) can be written as $`_{eff}=\stackrel{}{O}^{RGI}\stackrel{}{C}^{RGI},`$ (9) where $$\stackrel{}{C}^{RGI}=\widehat{w}[M]\stackrel{}{C}(M),\stackrel{}{O}^{RGI}=\stackrel{}{O}(\mu )\widehat{w}^{-1}[\mu ].$$ (10) $`\stackrel{}{C}^{RGI}`$ and $`\stackrel{}{O}^{RGI}`$ are scheme and scale independent at the order we are working. Therefore the effective Hamiltonian is split into terms which are individually scheme and scale independent. This procedure is generalizable to any effective weak Hamiltonian. The $`\stackrel{~}{B}`$-parameters defined in eqs. (3) satisfy the same RGE as the corresponding operators, and the RGI $`\stackrel{~}{B}`$-parameters can be defined as $$\stackrel{~}{B}_i^{RGI}=\underset{j}{\sum }\stackrel{~}{B}_j(\mu )w^{-1}(\mu )_{ji}.$$ (11) ## 4 Numerical results All details concerning the extraction of matrix elements from correlation functions and the computation of the non-perturbative renormalization constants of lattice operators can be found in . In this talk we report the results obtained in . The simulations have been performed at $`\beta =6.0`$ (460 configurations) and $`6.2`$ (200 configurations) with the tree-level Clover action, for several values of the quark masses and for different meson momenta. The physical volume is approximately the same on the two lattices. Statistical errors have been estimated with the jackknife method. The main results we have obtained for $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }I=3/2`$ matrix elements, and their comparison with the results in , are reported in Tables 1 and 2. In Figure 1 we show the strong dependence of $`\overline{K}^0|O_4|K^0`$ on the strange quark mass when the ”conventional” parameterization (2) is used, to be compared with the results obtained with the new parameterization. It is also evident that with the same set of data the new parameterization allows one to determine the matrix elements with smaller systematic uncertainties.
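As an aside to §3 above, the RGI construction is easy to verify numerically in the one-operator (scalar) case, where the NLO matrix reduces to $`M[\mu ]=1+(\alpha _s(\mu )/4\pi )J`$. The sketch below checks that eq. (8) reproduces eq. (6); $`J`$ and $`\gamma ^{(0)}`$ are placeholder numbers, not values from this work:

```python
import math

b0, gamma0, J = 25.0 / 3.0, 4.0, 1.5   # beta0 for nf = 4; gamma0, J are placeholders

def alpha_s(mu, Lambda=0.3):
    return 4.0 * math.pi / (b0 * math.log(mu**2 / Lambda**2))

def M_nlo(mu):
    # scalar stand-in for the NLO matrix M[mu]
    return 1.0 + alpha_s(mu) / (4.0 * math.pi) * J

def w_inv(mu):
    # eq. (7): w^{-1}[mu] = M[mu] * alpha_s(mu)**(-gamma0 / 2 b0)
    return M_nlo(mu) * alpha_s(mu) ** (-gamma0 / (2.0 * b0))

mu, M = 2.0, 80.0
U = (alpha_s(M) / alpha_s(mu)) ** (gamma0 / (2.0 * b0))
print(M_nlo(mu) * U / M_nlo(M))   # eq. (6): W[mu, M]
print(w_inv(mu) / w_inv(M))       # eq. (8): w^{-1}[mu] w[M], the same number
```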
Although we have data at two different values of the lattice spacing, the statistical errors, and the uncertainties in the extraction of the matrix elements, are too large to enable any extrapolation to the continuum limit $`a\to 0`$: within the precision of our results we cannot study the dependence of the $`\stackrel{~}{B}`$-parameters on $`a`$. For this reason, we estimate our best values of the $`B`$-parameters by averaging the results obtained at the two values of $`\beta `$ . Since the results at $`\beta =6.0`$ have smaller statistical errors but suffer from larger discretization effects, we do not weight the averages with the quoted statistical errors but simply take the sum of the two values divided by two. As far as the errors are concerned, we take the larger of the two statistical errors. Our best results are reported in Table 3 and are shown in fig. 2. It is interesting to note, as expected from chiral perturbation theory, that matrix elements of $`\mathrm{\Delta }S=2`$ SUSY operators are enhanced with respect to the SM one ($`\widehat{O}_1`$) by a factor of 2–12 at $`\mu =2`$ GeV. Therefore, low energy QCD effects can enhance contributions beyond the Standard Model to $`ϵ_K`$, which, compared with the other SM predictions, becomes a promising observable to detect signals of new physics at low energy. The results for the analogous $`\mathrm{\Delta }C=2`$ and $`\mathrm{\Delta }B=2`$ matrix elements are reported in .
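The averaging procedure described above is simple enough to state as code; a sketch (the inputs are placeholder numbers, not the values of Tables 1–3):

```python
def combine_betas(value_60, err_60, value_62, err_62):
    """Unweighted average of the beta = 6.0 and 6.2 results;
    the larger of the two statistical errors is kept."""
    return 0.5 * (value_60 + value_62), max(err_60, err_62)

print(combine_betas(0.75, 0.02, 0.81, 0.04))   # -> (0.78, 0.04)
```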
no-problem/9909/cond-mat9909109.html
ar5iv
text
# Evidence for a critical velocity in a Bose–Einstein condensed gas ## Abstract We have studied dissipation in a Bose–Einstein condensed gas by moving a blue detuned laser beam through the condensate at different velocities. Strong heating was observed only above a critical velocity. Macroscopic quantum coherence and collective excitations are key features in our understanding of the phenomenon of superfluidity. The superfluid velocity is proportional to the gradient of the phase of a macroscopic wavefunction. Collective excitations determine a critical velocity below which the flow is dissipationless. This velocity is given by Landau’s criterion, $$v_c=\mathrm{min}\left(\frac{\epsilon (p)}{p}\right)$$ (1) where $`\epsilon `$ is the energy of an excitation with momentum $`p`$. Critical velocities for the breaking of Cooper pairs in <sup>3</sup>He and the generation of rotons and vortices in <sup>4</sup>He have been extensively studied. Bose-Einstein condensed gases (BEC) are novel quantum fluids. Previous work has explored some aspects related to superfluidity such as the macroscopic phase and the phonon nature of low-lying collective excitations. In this Letter we report on the measurement of a critical velocity for the excitation of a trapped Bose-Einstein condensate. In analogy with the well known argument by Landau and the vibrating wire experiments in superfluid helium , we study dissipation when an object is moved through the fluid. Instead of a massive macroscopic object we used a blue detuned laser beam which repels atoms from its focus to create a moving boundary condition. The experiment was conducted in a new apparatus for the production of Bose-Einstein condensates of sodium atoms. The cooling procedure is similar to previous work; the new features have been described elsewhere . Briefly, laser cooled atoms were transferred into a magnetic trap in the Ioffe-Pritchard configuration and further cooled by rf evaporative cooling for 20 seconds, resulting in condensates of between 3 and 12 $`\times 10^6`$ atoms. After the condensate was formed, we reduced the radial trapping frequency to obtain condensates which were considerably wider than the laser beam used for stirring. This decompression was not perfectly adiabatic, and heated the cloud to a final condensate fraction of about 60%. The final trapping frequencies were $`\nu _r=65`$ Hz in the radial and $`\nu _z=18`$ Hz in the axial direction. The resulting condensate was cigar-shaped with Thomas-Fermi diameters of 45 and 150 $`\mu `$m in the radial and axial directions, respectively. The final chemical potential, transition temperature $`T_c`$ and peak density $`n_0`$ of the condensate were 110 nK, 510 nK and $`1.5\times 10^{14}\text{cm}^{-3}`$, respectively. The laser beam for stirring the condensate had a wavelength of 514 nm and was focused to a Gaussian $`1/e^2`$ beam diameter of $`2w=13\mu `$m. The repulsive optical dipole force expelled the atoms from the region of highest laser intensity. A laser power of 400 $`\mu `$W created a 700 nK barrier, resulting in a cylindrical hole 13 $`\mu `$m in diameter within the condensate. The laser barrier created a soft boundary, since the Gaussian beam waist was more than 10 times wider than the healing length $`\xi =(8\pi an_0)^{-1/2}=0.3\mu `$m, $`a`$ being the two-body scattering length. The laser was focused on the center of the cloud. Using an acousto-optic deflector, it was scanned back and forth along the axial dimension of the condensate (fig. 1).
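As a parenthetical check of the quoted healing length: the sodium scattering length is not given above, so the value used below is our assumption (a ≈ 2.75 nm, about 52 Bohr radii).

```python
import math

a = 2.75e-9      # sodium scattering length in m (assumed, ~52 Bohr radii)
n0 = 1.5e20      # peak density in m^-3 (the quoted 1.5e14 cm^-3)

xi = (8.0 * math.pi * a * n0) ** -0.5
print(xi * 1e6)  # ~0.31 micron, consistent with the quoted 0.3 micron
```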
We ensured a constant beam velocity by applying a triangular waveform to the deflector. The beam was scanned over distances up to $`60\mu `$m, much less than the axial extent of the condensate of $`150\mu `$m. Therefore, axial density variations were small within the scan range. The scan frequencies were varied by a factor of 3 between 56 and 167 Hz. Scan velocities close to the speed of sound required scan frequencies much larger than the axial trapping frequency $`\nu _z`$. After exposing the atoms to the scanning laser for 900 ms, we allowed the cloud to equilibrate for 100 ms, then turned off the magnetic trap and recorded the time-of-flight distribution after 35 ms on a CCD camera using near-resonant absorption imaging. The condensate fraction $`N_0/N`$ was determined from fits to the bimodal velocity distribution. We found that the decreasing condensate fraction was a more robust measure of the heating induced by the moving laser beam than the temperature extracted from the wings of the thermal cloud. No heating was observed when the laser beam was kept stationary for the entire 900 ms. Figure 2 shows the effect of the moving laser on the condensate for three different scan rates $`f=56,83`$ and $`167`$ Hz. The heating rate is higher (larger final thermal fraction) for larger drive amplitudes $`d`$ and higher scan frequencies $`f`$. When the same data are replotted as a function of the velocity of the laser beam $`2df`$ (fig. 3), two features emerge immediately. First, all the three data sets collapse onto one universal curve, indicating that the heating of the condensate depends primarily on the velocity of the beam, and not on either frequency or amplitude independently. This suggests that the observed dissipation is not strongly affected by the trapping potential and discrete resonances, but rather, reflects bulk properties of the condensate. The second feature is that we can distinguish two regimes of heating separated by the dashed line in fig. 3. For low velocities, the dissipation rate was low and the condensate appeared immune to the presence of the scanning laser beam. For higher velocities, the heating increased, until at a velocity of about 6 mm/s the condensate was almost completely depleted for a 900 ms exposure time. The cross-over between these two regimes was quite pronounced and occurred at a velocity of about $`1.6`$ mm/s. This velocity should be compared with the speed of sound in the condensate. Since a condensate released from the magnetic trap expands with a velocity proportional to the speed of sound, we could determine its value directly from time-of-flight absorption images to be 6.2 mm/s (at the peak density), almost a factor of 4 larger than the observed critical velocity. To rule out the possibility of heating through the non-condensed fraction, a control experiment was performed on clouds at two different temperatures above $`T_c`$, 800 nK and 530 nK, the latter quite close to the transition temperature. No heating of the cloud was observed in either case, for scan velocities of up to 14 mm/s; this reflects the small overlap between the scanning laser and the large thermal cloud. Since these clouds are typically not in the hydrodynamic regime of collisions, we used a single particle model for heating based on collisions with a moving wall to scale our measurements from above to below $`T_c`$. We obtained a conservative upper bound of 15% of the observed temperature rise which could be attributed to the non-condensed fraction. 
What determines the critical velocity? In a quasi-homogeneous dilute Bose gas near zero temperature, the critical velocity for phonon excitation is the speed of Bogoliubov sound, which depends on the density $`n(r)`$ through $$c_B(r)=\sqrt{\frac{4\pi \hbar ^2a}{M^2}n(r)}$$ (2) where $`M`$ is the atomic mass. Alternatively, one may expect that the rapid flow around the laser beam generates vorticity in the fluid. For the excitation of vortex pairs in a channel of diameter D, eq. (1) leads to a critical velocity $$v_c=\frac{\hbar }{MD}\mathrm{ln}\left(\frac{D}{\xi }\right).$$ (3) Since this velocity scales inversely with the size of the system, it is lower than the value predicted by eq. 2. For our situation eq. 3 can only give an approximate estimate due to the inhomogeneous density and the different geometry. For a channel width of $`15\mu `$m and a healing length of $`0.3\mu `$m it yields a critical velocity of about 0.7 mm/s, a factor of 2 below our measurement. The criterion given in eq. 3 is based only on considerations of energy and momentum and provides a lower bound to $`v_c`$. In addition, one must consider how to produce the excitations dynamically. Several mechanisms for creating vortex lines have been discussed, including remanent vorticity and vorticity pinned to the surface . In the latter case, the critical velocity depends on the surface roughness. The gaseous condensates are confined to magnetic traps which provide perfectly smooth boundary conditions. Therefore, we expect that the critical velocity is determined by nucleation dynamics rather than by purely energetic arguments. The relevant criterion for the onset of dissipation in a Bose condensed gas obeying the nonlinear Schrödinger equation has been discussed in several papers. According to these theories, dissipation ensues when the relative velocity between the object and the fluid exceeds the speed of sound $`c_B`$ locally . For an incompressible flow around a cylindrical object this velocity peaks at the side, reaching twice the object’s speed. The hydrodynamic equations for the compressible condensate imply that the faster the velocity of the flow field, the lower is the condensate density. This effect lowers the critical velocity for a cylindrical object even further: $$v_c=\sqrt{\frac{2}{11}}c_{max}=0.42c_{max}$$ (4) This result is independent of the size of the object and was corroborated by numerical simulations of the nonlinear Schrödinger equation in a homogeneous gas . For our conditions, this estimate yields a critical velocity of $`2.6`$ mm/s, or $`1.6`$ times the observed threshold. However, the finite size of the condensate in our experiments, its inhomogeneous density distribution and the soft boundary of the laser beam are not accounted for by the theory. All these effects should lower the critical velocity. What happens to the condensate above $`v_c`$? Numerical simulations of the nonlinear Schrödinger equation were used to study the flow field around an object moving through a homogeneous condensate. These studies show that above a critical velocity given by Eq. (4) the superfluid flow becomes unstable against the formation of quantized vortex lines, which signals the onset of a new, dissipative regime. Pairs of vortices with opposite circulation are generated at opposite sides of the object in order to reduce the high local flow speed. The rate of heating can be estimated from the energy of vortices and the vortex shedding frequency.
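Before turning to that estimate, the two characteristic velocities above follow directly from eqs. (2) and (3); a quick numerical check (the sodium mass and scattering length are our assumed inputs, not quantities quoted in the text):

```python
import math

hbar = 1.0546e-34       # J s
M = 3.82e-26            # mass of 23Na in kg (assumed)
a = 2.75e-9             # sodium scattering length in m (assumed)
n0 = 1.5e20             # peak density in m^-3, as quoted

c_B = math.sqrt(4.0 * math.pi * hbar**2 * a * n0 / M**2)
print(c_B * 1e3)        # ~6.3 mm/s, consistent with the measured 6.2 mm/s

D, xi = 15e-6, 0.3e-6   # channel width and healing length in m
v_c = hbar / (M * D) * math.log(D / xi)
print(v_c * 1e3)        # ~0.7 mm/s, as quoted for vortex-pair excitation
```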
The energy of a vortex pair $`\epsilon _{vp}`$ is estimated with the assumption that the vortices are separated by the object diameter $`2w`$, $$\epsilon _{vp}=2\pi D\frac{n\hbar ^2}{M}\mathrm{ln}\frac{2w}{\xi }$$ (5) where $`D`$ is the radial width of the condensate and $`n`$ is the density. This yields a value of about 3.4 mK or 1.3 nK/atom. Numerical simulations have shown that for $`v>v_c`$ the rate of vortex pair shedding is proportional to $`v-v_c`$. The proportionality constant, besides a numerical factor, is the mean field energy in frequency units divided by the speed of sound. Thus the rate of change in temperature should have the form $`\dot{T}=\kappa (v-v_c)`$. Using the estimate for the vortex energy, the model predicts $`\kappa 160`$ nK/mm, in rough agreement with our measured heating rate near the threshold, which gives $`\kappa =62`$ nK/mm. In conclusion, we have established a method for studying dissipation in a Bose condensate by implementing a scanning “hole” induced by a far-off-resonant laser beam. Both the laser beam and the trapped condensate provide clean boundary conditions. This and the simplicity of the system make it amenable to theoretical treatments of vortex nucleation and dissipative dynamics. We found evidence for a critical velocity for excitation of the condensate. Both the onset and the magnitude of the observed heating are in qualitative agreement with model calculations based on the non-linear Schrödinger equation which predict dissipation when the flow field becomes locally supersonic. In contrast, a similar study on rotational stirring concluded that the speed of sound near the stirrer is irrelevant for vortex nucleation . In further studies, we plan to vary the geometry and density in order to distinguish between different predictions for critical velocities which depend only on geometry (eq. 3), or only on the density (eq. 4). All the calculations were done in two dimensions and at zero temperature. In the experiment, the laser beam passes through the surface of the condensate where the density vanishes. Because of this and the non-zero temperature, we expect finite dissipation even at low velocities and a smooth crossover between low and high dissipation. More precise measurements of the heating should allow us to study these finite-size and finite-temperature effects. We thank A. Chikkatur and A. Görlitz for experimental assistance and L. Pitaevskii, J.C. Davis and G. Pickett for useful discussions on critical velocities in liquid helium. This work was supported by the ONR, NSF, ARO, NASA, and the David and Lucile Packard Foundation. M.K. also acknowledges support from Studienstiftung des Deutschen Volkes.
no-problem/9909/gr-qc9909026.html
ar5iv
text
# On the Stability of the Iterated Crank-Nicholson Method in Numerical Relativity ## I Introduction There is currently a large worldwide effort underway attempting to solve Einstein’s equations numerically for astrophysically interesting scenarios. The problem is extremely challenging technically. Among the difficulties is that of finding a finite-difference scheme that allows a stable time evolution of the system. It is well-known that implicit differencing schemes tend to be stable. However, the difficulty of solving the resulting implicit algebraic equations, especially in three spatial dimensions, has led most researchers to stay with explicit methods and their potential instabilities. Several years ago, Choptuik proposed solving the implicit Crank-Nicholson scheme by iteration. This would effectively turn it into an explicit scheme, but hopefully by iterating until some convergence criterion was met one would preserve the good stability properties of Crank-Nicholson. The iterated Crank-Nicholson scheme has subsequently become one of the standard methods used in numerical relativity. In this note, we point out that when using iterated Crank-Nicholson, one should do exactly two iterations and no more. While the limit of an infinite number of iterations is the implicit Crank-Nicholson method, it can in fact be worse to do more than two iterations, and it never helps. ## II Iterated Crank-Nicholson To understand this paradoxical result, consider differencing the simple advective equation $$\frac{\partial u}{\partial t}=\frac{\partial u}{\partial x}.$$ (1) (Many equations in numerical relativity are generalizations of this form, and the differencing techniques are similar.) A simple first-order accurate differencing scheme is FTCS (Forward Time Centered Space): $$\frac{u_j^{n+1}-u_j^n}{\mathrm{\Delta }t}=\frac{u_{j+1}^n-u_{j-1}^n}{2\mathrm{\Delta }x}.$$ (2) Here $`n`$ labels the time levels and $`j`$ the spatial grid points. It is a standard textbook result that this scheme is unconditionally unstable. One sees this with a von Neumann stability analysis: Put $$u_j^n=\xi ^ne^{ikj\mathrm{\Delta }x}$$ (3) and find that the amplification factor $`\xi `$ is $$\xi =1+i\alpha \mathrm{sin}k\mathrm{\Delta }x,$$ (4) where $`\alpha =\mathrm{\Delta }t/\mathrm{\Delta }x`$. Since $`|\xi |^2>1`$ for any choice of $`\alpha `$, the method is unconditionally unstable. Backwards differencing gives a stable scheme: $$\frac{u_j^{n+1}-u_j^n}{\mathrm{\Delta }t}=\frac{u_{j+1}^{n+1}-u_{j-1}^{n+1}}{2\mathrm{\Delta }x},$$ (5) for which $$\xi =\frac{1}{1-i\alpha \mathrm{sin}k\mathrm{\Delta }x}.$$ (6) Now $`|\xi |^2<1`$ for any choice of $`\alpha `$, and so the method is unconditionally stable. The Crank-Nicholson scheme is a second-order accurate method obtained by averaging equations (2) and (5). Now one finds $$\xi =\frac{1+\frac{1}{2}i\alpha \mathrm{sin}k\mathrm{\Delta }x}{1-\frac{1}{2}i\alpha \mathrm{sin}k\mathrm{\Delta }x}.$$ (7) Since $`|\xi |^2=1`$, the method is stable. It is the presence of the quantities $`u^{n+1}`$ on the right hand side of equation (5) that makes the method implicit.
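The three amplification factors (4), (6) and (7) are easy to verify numerically; a minimal sketch:

```python
import numpy as np

s = np.linspace(-1.0, 1.0, 5)               # values of alpha * sin(k dx)
xi_ftcs = 1 + 1j * s                         # eq. (4): |xi| > 1 whenever s != 0
xi_impl = 1 / (1 - 1j * s)                   # eq. (6): |xi| < 1 whenever s != 0
xi_cn = (1 + 0.5j * s) / (1 - 0.5j * s)      # eq. (7): |xi| = 1 exactly

for name, xi in [("FTCS", xi_ftcs), ("implicit", xi_impl), ("CN", xi_cn)]:
    print(name, np.round(np.abs(xi) ** 2, 3))
```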
The first iteration of iterated Crank-Nicholson starts by calculating an intermediate variable $`{}_{}{}^{(1)}\stackrel{~}{u}`$ using equation (2): $$\frac{{}_{}{}^{(1)}\stackrel{~}{u}_{j}^{n+1}-u_j^n}{\mathrm{\Delta }t}=\frac{u_{j+1}^n-u_{j-1}^n}{2\mathrm{\Delta }x}.$$ (8) Then another intermediate variable $`{}_{}{}^{(1)}\overline{u}`$ is formed by averaging: $${}_{}{}^{(1)}\overline{u}_{j}^{n+1/2}=\frac{1}{2}({}_{}{}^{(1)}\stackrel{~}{u}_{j}^{n+1}+u_j^n).$$ (9) Finally the timestep is completed by using equation (2) again with $`\overline{u}`$ on the right-hand side: $$\frac{u_j^{n+1}-u_j^n}{\mathrm{\Delta }t}=\frac{{}_{}{}^{(1)}\overline{u}_{j+1}^{n+1/2}-{}_{}{}^{(1)}\overline{u}_{j-1}^{n+1/2}}{2\mathrm{\Delta }x}.$$ (10) (Iterated Crank-Nicholson can alternatively be implemented by averaging the right-hand side of equation (1). For linear equations, this is completely equivalent.) Iterated Crank-Nicholson with two iterations is carried out in the same way. After steps (8) and (9), we calculate $`{\displaystyle \frac{{}_{}{}^{(2)}\stackrel{~}{u}_{j}^{n+1}-u_j^n}{\mathrm{\Delta }t}}`$ $`=`$ $`{\displaystyle \frac{{}_{}{}^{(1)}\overline{u}_{j+1}^{n+1/2}-{}_{}{}^{(1)}\overline{u}_{j-1}^{n+1/2}}{2\mathrm{\Delta }x}},`$ (11) $`{}_{}{}^{(2)}\overline{u}_{j}^{n+1/2}`$ $`=`$ $`\frac{1}{2}({}_{}{}^{(2)}\stackrel{~}{u}_{j}^{n+1}+u_j^n).`$ (12) Then the final step is computed analogously to equation (10): $$\frac{u_j^{n+1}-u_j^n}{\mathrm{\Delta }t}=\frac{{}_{}{}^{(2)}\overline{u}_{j+1}^{n+1/2}-{}_{}{}^{(2)}\overline{u}_{j-1}^{n+1/2}}{2\mathrm{\Delta }x}.$$ (13) Any number of iterations can be carried out in the same way. Now consider the stability of these iterated schemes. If we define $`\beta =(\alpha /2)\mathrm{sin}k\mathrm{\Delta }x`$, and call the FTCS scheme (2) the zeroth-order method, then direct calculation shows that the amplification factors are $`{}_{}{}^{(0)}\xi `$ $`=`$ $`1+2i\beta ,`$ (14) $`{}_{}{}^{(1)}\xi `$ $`=`$ $`1+2i\beta -2\beta ^2,`$ (15) $`{}_{}{}^{(2)}\xi `$ $`=`$ $`1+2i\beta -2\beta ^2-2i\beta ^3,`$ (16) $`{}_{}{}^{(3)}\xi `$ $`=`$ $`1+2i\beta -2\beta ^2-2i\beta ^3+2\beta ^4,`$ (17) and so on. As one would expect, these are exactly the same values one gets by expanding equation (7) in powers of $`\beta `$ and truncating at the appropriate point. To check stability, compute $`|\xi |^2`$ for each of these expressions. You find an alternating pattern. Levels 0 and 1 are unstable; levels 2 and 3 are stable provided $`\beta ^2\le 1`$; levels 4 and 5 are unstable; levels 6 and 7 are stable provided $`\beta ^2\le 1`$; and so on. Since the stability requirement must hold for all wave numbers $`k`$, it translates into $`\alpha ^2/4\le 1`$, or $`\mathrm{\Delta }t\le 2\mathrm{\Delta }x`$. This is just the Courant condition (the 2 occurs because of the 2 in eqn. ). Now we see the resolution of the paradox: while the magnitude of the amplification factor for iterated Crank-Nicholson does approach 1 as the number of iterations becomes infinite, the convergence is not monotonic. The magnitude oscillates above and below 1 with ever decreasing oscillations. All the cases above 1 are unstable, although the instability might be very slowly growing for a large number of iterations. The accuracy of the scheme is determined by the truncation error. This remains second order in $`\mathrm{\Delta }t`$ and $`\mathrm{\Delta }x`$ from the first iteration on. Doing more iterations changes the stability behavior, but not the accuracy.
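A direct implementation confirms the alternating pattern. The sketch below is our transcription of eqs. (8)–(13) on a periodic grid, not code from this paper; it drives a single Fourier mode with $`\beta =0.75`$ for 200 steps:

```python
import numpy as np

def icn_step(u, alpha, n_iter):
    """One iterated Crank-Nicholson step for u_t = u_x (eqs. 8-13), periodic grid."""
    ddx = lambda v: 0.5 * (np.roll(v, -1) - np.roll(v, 1))  # (v_{j+1} - v_{j-1}) / 2
    ubar = u
    for _ in range(n_iter):
        utilde = u + alpha * ddx(ubar)    # predictor, eqs. (8)/(11)
        ubar = 0.5 * (utilde + u)         # average, eqs. (9)/(12)
    return u + alpha * ddx(ubar)          # final update, eqs. (10)/(13)

N = 64
u0 = np.sin(2 * np.pi * 16 * np.arange(N) / N)   # mode with sin(k dx) = 1
for n_iter in (1, 2, 3):                          # alpha = 1.5, so beta = 0.75
    u = u0.copy()
    for _ in range(200):
        u = icn_step(u, alpha=1.5, n_iter=n_iter)
    print(n_iter, np.max(np.abs(u)))   # blows up for 1 iteration; bounded for 2 and 3
```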
Since the smallest number of iterations for which the method is stable is two, there is no point in carrying out more iterations than this. Note that there was nothing special about using the advective equation (1) for this analysis. Similar behavior is found for the wave equation, written in first-order form $`{\displaystyle \frac{\partial u}{\partial t}}`$ $`=`$ $`v,`$ (18) $`{\displaystyle \frac{\partial v}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{\partial ^2u}{\partial x^2}},`$ (19) with the standard centered difference formula for the second derivative term. One recovers the usual Courant condition (without the factor of 2) for the stable cases. ## III Acknowledgements This work was supported in part by NSF grant PHY 99-00672 and NASA Grant NAG5-7264 to Cornell University.
no-problem/9909/cond-mat9909403.html
ar5iv
text
# Features of level broadening in a ring-stub system ## Abstract When a one dimensional (1D) ring-stub system is coupled to an electron reservoir, the states acquire a width (or broadening, characterized by poles in the complex energy plane) due to finite-lifetime effects. We show that this broadening is limited by anti-resonances due to the stub. The difference in level broadening in the presence and absence of anti-resonances is exemplified by comparison to a 1D ring coupled to an infinite reservoir. We also show that the anti-resonances due to the stub have an anchoring effect on the poles when a magnetic flux through the ring is varied. This will have implications for the change in the distribution of the poles in the disordered multichannel situation as the magnetic flux is varied. PACS numbers: 72.10.-d; 73.20.Dx Dephasing in mesoscopic systems is of current research interest, and there are various models for dephasing in situations where there is a spatial separation between elastic and inelastic processes. Dissipation is difficult to model theoretically because dissipative systems are open systems, which are in general harder to deal with than closed systems. Real systems, on the other hand, are most of the time dissipative, and it is extremely difficult to obtain a non-dissipative sample in the laboratory. A pioneering idea of Landauer offered a very simple approach to dissipation in electronic systems when he showed that an electron reservoir can introduce some basic features of dissipation like time irreversibility and resistance . This idea stimulated a lot of research and resulted in the Landauer-Büttiker conductance formula, which has now been experimentally verified . The idea was later extended by Büttiker to introduce level broadening in a ring (or any finite system) penetrated by magnetic flux, and there is at least one experiment which verifies the magnitude of the persistent current in such an open system . A more precise testing ground for persistent currents in open systems could be the experimental observation of their directional dependence, i.e., if a persistent-current loop is coupled to a DC-current-carrying quantum wire, then the magnitude of the persistent current will depend on the direction of the DC current . Note that neither the noise nor the DC current itself depends on direction. It was also shown that, within this theoretical framework, a quantum current magnification effect is possible with a DC bias voltage or with a temperature difference, whereby one can have large circulating currents in the ring in the absence of an Aharonov-Bohm flux. In the present work, we apply this model for level broadening to study a simple system that consists of a one dimensional (1D) ring coupled to a 1D side branch (or stub). This elementary system has received some attention in the past, where it was shown that one can have persistent currents without parity effect , Coulomb blockade , bistability , etc. The parity effect of persistent currents in single channel rings means that consecutive states have opposite slopes , whereas in the ring-stub system there are two kinds of eigenstates: parity conserving and parity violating . There are $`l/u`$ consecutive states forming a group with the same slope, after which there are $`l/u`$ consecutive states with slope opposite to those in the first group . Here $`l`$ is the length of the stub and $`u`$ is the length of the ring.
Two consecutive states with the same slope are called a parity violating pair, whereas two consecutive states with opposite slope are called a parity conserving pair. When the side branch is adjusted one can have situations where the length of the ring is smaller than the phase coherence length $`l_\varphi `$ while the length of the side branch is smaller than, comparable to, or greater than $`l_\varphi `$. When the side branch is much larger than $`l_\varphi `$, the situation is the same as a ring coupled to a reservoir. One can also apply the known mechanism of level broadening of a ring-stub system by coupling to a reservoir, which is completely physical as long as the stub length is smaller than $`l_\varphi `$ but can be much larger than the ring length. We show that in this regime the presence of anti-resonances, which always occur in quantum wires, has drastic effects on level broadening. Only in the absence of such anti-resonances is the persistent current in a ring coupled to a large stub with some level broadening similar to the persistent current in a ring coupled to a reservoir. These anti-resonances will also affect the transition between different universality classes in Random Matrix Theory in disordered multichannel situations. The ring-stub system coupled to a reservoir is shown in Fig. 1. A flux $`\varphi `$ penetrates the ring. An electron reservoir can be attached in three ways, as shown in (a), (b) and (c). In (b) and (c) there are two junctions in the set up. One is the reservoir-system junction (X) and the other is the ring-stub junction (Y). In (a) the two junctions are merged into one and there is only a single junction (Z). While (a) is a ring-stub system coupled to an electron reservoir, (b) and (c) will have some additional features due to multiple scattering and resonance between points X and Y. However, (b) and (c) will behave similarly to each other, which leads us to exclude the situation in (c). In (b) the length of the ring is $`u`$, the distance between X and Y is $`v`$ and the distance between X and the dead end of the stub is $`w`$. Hence the length of the stub is $`l=v+w`$. $`u`$ is taken as the unit of length, and other lengths are quoted as dimensionless numbers in this unit. When $`v\to 0`$ the system in (b) continuously goes over to the system in (a). The free-particle quantum mechanical wave function can be written down in the different regions of the system in (b) and can be matched at the junctions using the Griffith boundary conditions or the three-way-splitter S-matrix, where $`ϵ`$ determines the strength of coupling . An analytical expression for the persistent current $`dI`$ in a wave-vector interval $`dk`$, obtained using the S-matrix approach for the system in Fig. 1(b), is given below. $$dI/dk=\frac{e\hbar k}{2\pi m}(-1)ϵ_2ϵ_1\mathrm{sin}(\alpha )\mathrm{sin}(ku)\mathrm{sin}^2(kw)/D$$ $$D=4a_2^2\mathrm{sin}^2(kw)A^2+b_2^2B^2$$ $$A=b_1\mathrm{sin}(kv)[\mathrm{cos}(\alpha )-\mathrm{cos}(ku)]+a_1\mathrm{sin}(ku)\mathrm{cos}(kv)$$ $$B=b_1\mathrm{sin}(kl)[\mathrm{cos}(\alpha )-\mathrm{cos}(ku)]+a_1\mathrm{sin}(ku)\mathrm{cos}(kl).$$ (1) Here $$a_i=\frac{1}{2}(\sqrt{1-2ϵ_i}-1),b_i=\frac{1}{2}(\sqrt{1-2ϵ_i}+1)$$ (2) with $`i=1`$ for the ring-stub junction and 2 for the reservoir-stub junction. Here $`\alpha =2\pi \varphi /\varphi _0`$, $`\varphi _0`$ being the flux quantum. There can be two kinds of processes at the junctions that can lead to reflection in an electron waveguide. The first is due to diffraction, as the wave front splits up at the junction; this contribution disappears for $`ϵ=0.5`$.
For smaller values of $`ϵ`$ this contribution to reflection is always present. $`ϵ=4/9`$ corresponds exactly to Griffith boundary conditions for a free junction. The second is reflection due to a potential scatterer at the junction, which leads to weak coupling. However, there is a third way of getting a reflection at the junction (X) in (b), and that is due to an interference effect that produces an anti-resonance. Such anti-resonances occur very generally in a quantum wire of finite width due to evanescent modes, and can be mapped exactly onto the 1D stub model . An electron coming from the reservoir, on reaching junction X, can go towards the ring, or can go towards the dead end of the stub, get reflected back and then go towards the ring. Interference between these two paths can be constructive or destructive depending on the wave vector and can lead to reflection from the junction X. We are here discussing first order reflection from the junction X, which determines the strength of coupling between the reservoir and the system. Besides this there is always second order reflection (reflections from other junctions in the system partly flow out of the junction X towards the reservoir) that makes the total current in the lead always zero. When the states of the ring-stub system are broadened by the reservoir, each broadened state will have a pole in the reflection amplitude at junction X that behaves similarly to the eigenenergies as the flux is varied, and the persistent current at the broadened areas can be described using the on-shell scattering matrix . In Fig. 2 we plot the persistent current in an infinitesimal energy range ($`\frac{2\pi \hbar }{e}\frac{dI}{dE}`$) versus incident wave vector $`ku`$ for this system for two different flux values. The solid curve is for $`u`$=1, $`v`$=0.01, $`w`$=9.99, $`\alpha =2\pi \varphi /\varphi _0`$=0.1 and for Griffith boundary conditions at the free junctions. Here $`\varphi `$ is the flux through the ring and $`\varphi _0`$ is the flux quantum. The dotted curve is for $`\alpha =1.5`$, with the other parameters remaining the same. We have only plotted up to $`ku`$=$`2\pi `$ because at higher energies the curve repeats itself qualitatively. It can be noted that in the solid curve there are 10 consecutive diamagnetic peaks ($`l/u`$ being 10), followed by 10 paramagnetic peaks. As the flux is increased we get the dotted curve, in which the diamagnetic peaks shift to higher energy and the paramagnetic peaks shift to lower energy compared to the solid curve. Diamagnetic states (broadened by the coupling to the reservoir) group together because of a discontinuous phase change as the Fermi energy crosses the zero (or anti-resonance) in the persistent current between each broadened peak, which arises because of total first order reflection at X due to the interference effect discussed above. Level broadening in this case is limited by the presence of zeroes (or anti-resonances), and the peaks do not overlap with each other. As a result, when the stub is made very long, the persistent current in the ring coupled to a stub does not bear any similarity to that of a ring coupled to a reservoir. This is shown in Fig. 3, where the dotted curve is for $`u`$=1, $`v`$=0.01, $`w`$=99.99 and $`\alpha `$=1.0 with Griffith boundary conditions. The thick curve is the persistent current in a ring of length $`u`$ coupled to an infinite reservoir for $`\alpha =1.0`$. The two curves have no similarity at all.
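Curves like those in Figs. 2 and 3 follow directly from eq. (1); the sketch below is our transcription of eqs. (1)–(2), assuming the Griffith value ε₁ = ε₂ = 4/9 and quoting dI/dk in units of eℏk/2πm:

```python
import numpy as np

def persistent_current(k, u=1.0, v=0.01, w=9.99, alpha=0.1, eps1=4/9, eps2=4/9):
    """dI/dk from eqs. (1)-(2), in units of e*hbar*k/(2*pi*m)."""
    a1, b1 = 0.5 * (np.sqrt(1 - 2 * eps1) - 1), 0.5 * (np.sqrt(1 - 2 * eps1) + 1)
    a2, b2 = 0.5 * (np.sqrt(1 - 2 * eps2) - 1), 0.5 * (np.sqrt(1 - 2 * eps2) + 1)
    l = v + w
    bracket = np.cos(alpha) - np.cos(k * u)
    A = b1 * np.sin(k * v) * bracket + a1 * np.sin(k * u) * np.cos(k * v)
    B = b1 * np.sin(k * l) * bracket + a1 * np.sin(k * u) * np.cos(k * l)
    D = 4 * a2**2 * np.sin(k * w)**2 * A**2 + b2**2 * B**2
    return -eps2 * eps1 * np.sin(alpha) * np.sin(k * u) * np.sin(k * w)**2 / D

k = np.linspace(0.05, 2 * np.pi, 2000)
print(persistent_current(k).max())   # the peaks mark the broadened levels of Fig. 2
```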
Also, this is a very simple example that shows that the zeroes anchor the poles: the poles cannot move freely as the magnetic field is varied. The magnetic field has no effect on the zeroes because the zeroes are determined by the localized states of the stub. This anchoring effect of the zeroes on the poles can drastically change the distribution of the poles in disordered systems like that considered in Ref. , i.e., a disordered multichannel ring threaded by a magnetic flux. The magnetic flux is known to make the eigenenergies rigid, which in turn leads to a transition between different universality classes of level statistics in Random Matrix Theory. The anchoring effect of the zeroes, which are unaffected by the magnetic flux, will substantially add color to this rigidity phenomenon. The effect of the anti-resonances on transport currents has been studied to some extent . Here we have shown their effects on persistent currents. Transport currents, or the transmission coefficient, being independent of magnetic flux and bounded by unity, do not exhibit the drastic effects shown in Figs. 2 and 3. To exemplify this further, let us study a situation where there are no anti-resonances. The features of level broadening and level statistics when a finite size system is coupled to a reservoir are studied in Ref. for the case of no anti-resonances. So the features that we will obtain in the following are in accordance with the theory developed in Ref. , but completely different from the situation in Figs. 2 and 3. Essentially, in the following we will get a situation that can continuously go over to a ring-reservoir system when the stub length becomes large. The anti-resonances can be removed by a different boundary condition at the dead end of the stub instead of the hard-wall boundary condition used in this work, or by a magnetic field in the stub region if the stub is quasi-one-dimensional, etc. In order to show this we use a simple trick to remove the first order total reflection at the junction X. We make a special choice of parameters, namely $`u`$=1, $`v`$=9.99 and $`w`$=0.01, with Griffith boundary conditions. In this case the first order total reflection occurs at $`ku`$=0, 100$`\pi `$, 200$`\pi `$ and so on. At $`ku`$=50$`\pi `$, 150$`\pi `$, 250$`\pi `$, etc., the first order reflection at X is zero. Hence in an energy regime like $`ku`$=20$`\pi `$ to 22$`\pi `$ the first order reflection coefficient at X is approximately 0.6 and almost independent of energy. So in this energy window of 2$`\pi `$ the ring-stub system is weakly coupled to the reservoir for such parameters. In Fig. 4 the thin solid curve is the persistent current versus $`ku`$ for $`\alpha `$=1.5 and the thick solid curve is that for $`\alpha `$=0.1. The broadening is already enough to make the resonances overlap with each other. In the absence of the anchoring effect on the resonances, some resonances can shift a lot with the magnetic field as compared to those in Fig. 2. The dashed curve and the dotted curve are the persistent current in a ring coupled to an infinite reservoir at $`\alpha `$=1.5 and 0.1, respectively. Keeping all parameters the same as in Fig. 4, we plot the same quantities in Fig. 5 in a different energy window (40$`\pi `$ to 42$`\pi `$), where the first order reflection at the junction X is 0.18. Hence this is a situation of strong coupling between the system and the reservoir, and the peaks have broadened further compared to Fig. 4. Curve conventions are the same as in Fig. 4.
Now, keeping the ring length ($`u`$=1) the same, we make the stub very long, i.e., $`v`$=99.99 and $`w`$=0.01, and plot the persistent current versus $`ku`$ in the same energy interval as in Fig. 5, for a magnetic field that gives $`\alpha `$=1.0, in Fig. 6. Here we have just interchanged the values of $`v`$ and $`w`$ as compared to Fig. 3, and we have exactly the same number of poles as in Fig. 3. The persistent current of a ring of length $`u`$=1 connected to an infinite reservoir is shown by the solid curve. From Figs. 4, 5 and 6 it is seen that as the length of the stub is made longer, and the broadening of the states of the ring-stub system is made larger, the persistent current oscillates rapidly around a mean value, which turns out to be the persistent current in a ring coupled to an infinite reservoir. The rapid oscillations are resonance effects that are again due to the fact that there are no inelastic processes in the stub. A finite number of inelastic processes (which will arise when the stub length is comparable to $`l_\varphi `$) will smear out these oscillations, and then the situation in Fig. 6 will be similar to that of a ring coupled to a reservoir. But the situation in Fig. 3 is different because of the limiting effect of the anti-resonances. Most of the large current-carrying peaks occur around a region where the thick solid curve is very small. Inelastic processes are expected to affect the different peaks equally, and for a finite number of inelastic processes a difference between the two situations in Figs. 3 and 6 will continue to exist. However, when the stub length becomes much larger than $`l_\varphi `$, there will be a saturation of the broadening effect it produces on the resonances. Once this saturation sets in, the broadening will affect the larger resonances more than the smaller ones. Such situations cannot be studied in this model. Finally, in both cases one will reach the situation where the stub becomes a part of the reservoir and the persistent current in the ring will be that of a ring coupled to a reservoir. One can design similar problems with a transport current across the stub but, as mentioned before, the transmission coefficient being independent of flux and bounded by unity, the features are not so prominent. We hope that further work on this model will help us to understand how a finite number of inelastic processes in a spatially separated region compete with elastic processes (like interference and resonance) and affect persistent currents, which show similar scaling behavior as transport currents. We have therefore shown that Fig. 1(b) gives us a situation where we can easily switch the anti-resonances on or off, and also couple a reservoir strongly or weakly, and thus study the exact effects produced by anti-resonances. The presence or absence of anti-resonances is, however, a universal feature of finite-width quantum wires. In summary, we have shown that anti-resonances can drastically limit the broadening of eigenenergies by an electron reservoir and, as a result, when the stub length is made large the system bears no resemblance to a ring-reservoir system. In the absence of the anti-resonances, the system can continuously go over to a ring-reservoir system as the stub length is made large. Although this is strictly valid when there is a complete spatial separation of inelastic and elastic processes, the two situations demonstrated in Figs.
3 and 6 are so dramatically different that some of this effect will survive even in the presence of a finite number of inelastic processes in the stub. This effect may be relevant to the dephasing effects observed in the experiment of Ref. , where the zeroes were also observed. We have also shown that the anti-resonances have an anchoring effect on the poles that can modify the distribution of poles in situations similar to that considered in Ref. . Figure captions Fig. 1 Three ways of attaching a reservoir to a system of a ring coupled to a stub. Fig. 2 Persistent current versus incident energy at two different fields in a ring coupled to a 10 times longer stub. The reservoir is attached close to the ring-stub junction. Fig. 3 Persistent current versus incident energy in a ring coupled to a 100 times longer stub (dotted curve). The reservoir is attached close to the ring-stub junction. The thick solid curve gives the persistent current in the ring if it were attached to an infinite reservoir. Fig. 4 Persistent current versus incident energy at two different fields in a ring coupled to a 10 times longer stub (thick and thin solid curves). The reservoir is attached weakly close to the dead end of the stub. The thick and thin solid curves show oscillations about the dotted and dashed curves, respectively, which are the persistent currents in the same ring coupled to an infinite reservoir. Fig. 5 Same as in Fig. 4 but the reservoir is coupled strongly. Fig. 6 Same as in Fig. 5 but the stub length is made 100 times the ring length.
no-problem/9909/astro-ph9909249.html
ar5iv
text
# Cluster Reconstruction from Combined Strong and Weak Lensing ## 1 Introduction The different regimes of cluster lensing—the inner region where multiple images, including giant arcs, are found; further out, where highly sheared but singly imaged arclets are found; and the outer regions showing statistical shear—have in the past been approached with quite different modeling methods. But these apparently separate regimes can be studied in a unified way. The key is to express lensing information in terms of the arrival surface. In earlier work (see AbdelSalam et al. 1998) we combined the multiple-image and arclet regimes and argued that extending to statistical shear was a straightforward algorithmic issue. This paper makes that extension. ## 2 The arrival-time surface The creature we will mostly be concerned with is the scaled arrival-time surface $`\tau (\theta )`$, and it is expressed as follows: $`\tau (\theta )`$ $`=`$ $`\frac{1}{2}(\theta -\beta )^2-{\displaystyle \frac{(D_{\mathrm{ls}}/D_\mathrm{s})}{\pi }}{\displaystyle \int \mathrm{ln}|\theta -\theta ^{}|\kappa (\theta ^{})d^2\theta ^{}}`$ (1) $`=`$ $`\frac{1}{2}(\theta -\beta )^2-2(D_{\mathrm{ls}}/D_\mathrm{s})\nabla ^{-2}\kappa (\theta ).`$ (2) As usual, $`\theta `$ and $`\beta `$ are angular locations on the image and source planes respectively, and the $`D`$’s are angular diameter distances in units of $`c/H_0`$. However, $`\kappa `$ is the convergence for sources at infinity (hence the factors of $`D_{\mathrm{ls}}/D_\mathrm{s}`$), to make dealing with multiple source redshifts more convenient. The form of Eq. (1) is then familiar, while Eq. (2) is just a shorthand for the same thing with $`\nabla ^{-2}`$ denoting an ‘inverse Laplacian operator’ in two dimensions. To get back to physical arrival time and surface density we use time $`=`$ $`h^{-1}\stackrel{~}{z}\times 80\mathrm{days}\mathrm{arcsec}^{-2}\times \tau (\theta )`$ (3) surface density $`=`$ $`h^{-1}\stackrel{~}{z}\times 1.2\times 10^{11}M_{}\mathrm{arcsec}^{-2}\times \kappa (\theta )`$ (4) where $`\stackrel{~}{z}=(1+z_\mathrm{l})D_\mathrm{l}`$, which is $`\simeq z_\mathrm{l}`$ for small lens redshifts. Lensing data constrain the arrival-time surface and hence the mass distribution $`\kappa (\theta )`$ in various ways. We can identify three types of constraints: on the values of the arrival-time surface at particular points, on the first derivative, and on the second derivative. The first type is when we know the height difference between two or more points on the arrival-time surface. This is all-important in time-delay quasars and $`H_0`$ measurements, because time delays supply just this height-difference information. But it is not relevant to cluster lensing, at least not yet, though we can always hope for a supernova in a giant arc. The second type of constraint comes from stating Fermat’s principle at each image location, i.e., $$\nabla \tau (\theta _{\mathrm{image}})=0.$$ (5) Since image positions $`\theta _{\mathrm{image}}`$ can generally be measured very accurately, Eq. (5) tells us that the arrival-time surface has zero gradient at some known point. This type of constraint is only useful if we have multiple images; in that case we have 2 × (number of images) equations of the form (5) but only two unknown source coordinates per source to solve for, giving us some net constraints on $`\kappa (\theta )`$. In general there are 2 × (number of images − number of sources) constraints. The third type of constraint involves the curvature or second derivative of the arrival-time surface, and may come from either arclets or from statistical distortions.
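Before detailing the curvature constraints, it may help to see that eq. (1) is directly computable once κ is discretized. The sketch below is our illustration only: it approximates each pixel by a point of constant κ (whereas a constant-density tile can be integrated exactly, as is done in §3 below), and all positions and κ values are made up.

```python
import numpy as np

def tau(theta, beta, pix_pos, pix_kappa, dls_over_ds=1.0, pix_area=1.0):
    """Scaled arrival time of eq. (1) for point-like mass pixels:
    tau = (theta - beta)^2 / 2 - (Dls/Ds)/pi * sum_k ln|theta - theta_k| kappa_k dA."""
    geom = 0.5 * np.sum((theta - beta) ** 2)
    r = np.linalg.norm(theta[None, :] - pix_pos, axis=1)
    grav = (dls_over_ds / np.pi) * np.sum(np.log(r) * pix_kappa) * pix_area
    return geom - grav

# A toy 2-pixel "cluster"; positions (arcsec) and kappa values are illustrative.
pix_pos = np.array([[0.0, 0.0], [1.0, 0.0]])
pix_kappa = np.array([1.0, 0.5])
print(tau(np.array([2.0, 1.0]), np.array([0.2, 0.1]), pix_pos, pix_kappa))
```

Because tau is linear in the pixel values, each image position then supplies two linear equations through eq. (5), which is the basis of the inversion described below.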
Consider first a situation like Fig. 1, where an arclet is observed elongated with position angle $`\omega `$. If we are confident that the arclet has been elongated by a factor of at least $`k`$, we can consider the coordinate system $`(\theta _x^{},\theta _y^{})`$ aligned with the elongation and write $$k\left|\frac{\partial ^2}{\partial \theta _x^{\prime 2}}\tau (\theta _{\mathrm{image}})\right|\le \left|\frac{\partial ^2}{\partial \theta _y^{\prime 2}}\tau (\theta _{\mathrm{image}})\right|.$$ (6) Further, since $`\omega `$ is known, we can transform Eq. (6) to the unrotated $`(\theta _x,\theta _y)`$ coordinates. It is not necessary to have highly accurate values of the elongation or its orientation $`\omega `$—all we have to do is set $`k`$ conservatively enough that the inequality (6) is valid. The arclet used may itself be part of a multiple-image system; if so we will have to guess the parity to remove the absolute value signs in (6). The same idea can be applied to statistical distortion. In this case the data provide an estimate and uncertainty $$\frac{\gamma _i}{1-\kappa }=g_i\pm \mathrm{\Delta }g_i,$$ (7) where $`\gamma _i`$ and $`\kappa `$ are components of the second derivative of $`\tau (\theta )`$. We can rewrite this in various ways, such as $$(1-\kappa (\theta _{\mathrm{image}}))g_i-\mathrm{\Delta }g_i\le \gamma _i(\theta _{\mathrm{image}})\le (1-\kappa (\theta _{\mathrm{image}}))g_i+\mathrm{\Delta }g_i.$$ (8) (We have assumed that $`\kappa \mathrm{\Delta }g_i`$ is negligible here.) A key feature of the constraint equations (5), (6), and (8) is that they are all linear in the unknowns $`\beta `$ and $`\kappa (\theta )`$. Which is to say<sup>1</sup> $$\left(\begin{array}{c}Lensing\\ data\end{array}\right)=\left(\begin{array}{ccccc}A& messy& but& linear& operator\\ also& involving& the& same& data\end{array}\right)\left(\begin{array}{c}The\\ lens^{\prime }s\\ projected\\ mass\\ distribution\end{array}\right)$$ (9) and moreover this is true for all lensing regimes: multiple-image, arclet, and statistical shear. <sup>1</sup>With one exception: scalar magnification data are quadratic in $`\kappa `$ and so (9) does not apply. Taylor and collaborators (see e.g., Dye & Taylor 1998) have developed reconstruction techniques for this case. However, if magnification information is present along with shear information, together they give (at least in principle) the tensor magnification, which is linear in $`\kappa `$, and so (9) again applies. ## 3 Mass reconstruction Eq. (9) summarizes the advantage of casting the observational information as constraints on the arrival-time surface. Lens reconstruction is reduced to a linear inversion problem. It is (as symbolized by the $`2\times 5`$ matrix) a highly underdetermined problem, so in order to reconstruct the lens it is necessary to add extra information (sometimes called a prior). Also, as indicated in the $`2\times 5`$ matrix, the data enter into the linear operator—an unusual complication. However, as we discussed above, the data enter there either as image positions, which are known very accurately, or as inequalities, which can be set conservatively; so this issue does not introduce new difficulties. At least four different possibilities for lens reconstruction now suggest themselves: * Put $`\tau (\theta )`$ on a grid and regularize. This may be the simplest approach. * Use basis function expansions for $`\kappa (\theta )`$ and $`\tau (\theta )`$.
A Fourier-Bessel expansion

$$\tau (\theta )=\sum _{mn}c_{mn}J_m(k_{mn}\theta )\mathrm{exp}(im\varphi )$$

is particularly attractive, as it would trivially relate $`\tau `$ and $`\kappa `$.

* Pixellate $`\kappa (\theta )`$ and regularize. This is the one we have implemented. The mass is distributed on square tiles, each having constant but adjustable density, and $`\tau (\theta )`$ comes from computing Eq. (1) exactly. Note that although the mass distribution is discontinuous, the arrival-time surface is smooth. Unlike in the two previous possible approaches, pixellating the mass makes it easy to enforce non-negativity of the mass distribution.

* Pixellate $`\kappa (\theta )`$ and use maximum entropy. See Bridle et al. (this volume) for an example of this in a somewhat different context.

The regularization we applied is to minimize

$$\int \left(\kappa -\left\langle \frac{\kappa }{L}\right\rangle L\right)^2d^2\theta +\epsilon ^4\int (\nabla ^2\kappa )^2\,d^2\theta $$ (10)

while of course enforcing all the lensing constraints. The first term in (10) tends to minimize mass-to-light variation, since $`\left\langle \kappa /L\right\rangle `$ is a mean $`M/L`$. The second term tends to smooth, with $`\epsilon `$ a sort of smoothing scale. By regularizing with respect to different light distributions we can get an estimate of the uncertainty. Since the regularizing functional (10) is quadratic in $`\kappa `$, minimizing it subject to the linear lensing constraints is readily implemented through quadratic programming algorithms. The disadvantage is that the storage needed grows as the square of the number of pixels, limiting us to of order 5000 pixels. The solution is to have adaptive pixellation (much like tree codes in $`N`$-body simulations), with smaller pixels for the inner parts of the cluster and large pixels outside.

## 4 The mass-sheet degeneracy

After enthusing about how easy lens reconstruction is, it is well to make a cautionary remark about the main source of uncertainty. Here again, the arrival-time surface is very useful. If we multiply $`\tau (\theta )`$ by a constant factor, we just stretch the surface vertically, and geometrically it is clear that neither image locations nor their relative magnifications change. More formally, we can rearrange Eq. (2) and drop the irrelevant term $`\frac{1}{2}\beta ^2`$ to get

$$\tau (\theta )=2\nabla ^{-2}\left(1-\frac{D_{\mathrm{ls}}}{D_\mathrm{s}}\kappa \right)-\theta \cdot \beta .$$ (11)

If we now multiply both $`\left(1-D_{\mathrm{ls}}/D_\mathrm{s}\,\kappa \right)`$ and $`\beta `$ by some constant $`a`$, the image structure will be unchanged; only time delays and total magnification will get multiplied by $`a`$ and $`a^{-1}`$ respectively (the latter because rescaling $`\beta `$ rescales all the sources). Therefore lens reconstruction from image structure (without absolute magnifications) leaves $`\left(1-D_{\mathrm{ls}}/D_\mathrm{s}\,\kappa \right)`$ uncertain by a constant factor. This is the mass-sheet degeneracy.

It may appear at first that the mass-sheet degeneracy can be eliminated by requiring $`\kappa `$ to vanish at large distances. But outside the observed region, the monopole part of $`\kappa `$ is completely unconstrained. So the mass-sheet degeneracy is equally effective with a mass disk larger than the observed region. Figure 2 illustrates.

## 5 Reconstruction of Cl 1358+62 (The Seahorse Cluster)

This cluster seemed the obvious test case for combined strong and weak lensing reconstruction in a relatively large field. The cluster itself is at $`z=0.33`$ and its inner part lenses a $`z=4.92`$ galaxy into a red arc; this was identified by Franx et al.
(1997), who also presented a strong lens model. Hoekstra et al. (1998) measured the shear field in an HST WFPC2 mosaic and reconstructed a mass map from weak lensing. The present work combines both regimes.

Figure 3 illustrates the data on this cluster that we have used: a shear field derived by binning the individual polarization measurements kindly provided by Hoekstra and collaborators, assuming a constant $`z=1`$ for the background galaxies, and a smoothed $`V_z`$ light distribution. Figure 4 shows the pixellation we used. Figure 5 is our mass map, computed by regularizing with respect to the light and a smoothing scale $`\epsilon `$ changing from $`5^{\prime \prime }`$ in the center to $`1^{\prime }`$ at the edge. Our estimated uncertainty is derived from an ensemble of reconstructions where we rotated the light map by arbitrary angles, shifted it randomly by up to $`100^{\prime \prime }`$, and regularized with respect to these altered light maps.

The mass map resembles Fig. 15 of Hoekstra et al. (1998) but tends to be smoother. Also the overall normalization is somewhat higher; this is probably due to different treatments of the boundary, together with the mass-sheet degeneracy. (The red arc at $`z=4.92`$, compared to the $`z=1`$ assumed for the weakly lensed galaxies, reduces the effect of the mass-sheet degeneracy, but does not eliminate it.) Our central density is higher, but this is as expected, since inclusion of a multiply-imaged system immediately forces $`\kappa >1`$. There is some indication that the mass peak is offset (by some 10s of kpc) to the south of the light peak, a natural thing to expect if the cluster is asymmetric and galaxy formation is biased. But this offset is tentative; in Abell 2218, where the inner region is much richer in lensing and better constrained, the evidence for an offset is more compelling (AbdelSalam et al. 1998).

Figure 6 shows the enclosed mass out to different radii and the reconstructed shear. It is surprising that the enclosed mass, after angular averaging, looks so 'isothermal' even though the mass map is very asymmetric. We obtain $`M=(10\pm 1)\times 10^{14}M_{\odot }\,\mathrm{Mpc}^{-1}`$, corresponding to a formal Einstein radius of $`43^{\prime \prime }`$ and a formal isothermal los dispersion of $`(990\pm 70)\,\mathrm{km}\,\mathrm{s}^{-1}`$. The estimated mass-to-light is $`(380\pm 60)\,h\,M_{\odot }\,L_V^{-1}`$. The CNOC survey (Carlberg et al. 1997) measured $`910\,\mathrm{km}\,\mathrm{s}^{-1}`$ for the los velocity dispersion and estimated $`M/L=229\,h\,M_{\odot }\,L_V^{-1}`$.

## 6 Discussion

We have developed a mass reconstruction technique that deals with all the cluster lensing regimes simultaneously, from multiple images in the central regions to weak shear in the outer regions, and moreover with adaptive resolution. Code implementing this is available from the authors. Variants of our technique (suggested in Section 3) may also be of interest in future work.

At this stage it appears that cluster mass reconstruction is very good at recovering features—a good example is offsets between mass and light peaks in clusters, which may be indirect evidence for biased galaxy formation. But the calibration of the mass (even with perfect shear data) is still problematic—the mass-sheet degeneracy is a vicious effect. We suggest that recovering the redshift distribution and the absolute magnification of the background galaxies (even with large uncertainties) is the best hope of breaking this degeneracy, and is likely to be very rewarding in future work.

###### Acknowledgements.
We are grateful to Henk Hoekstra and collaborators for supplying us with their shear data on Cl 1358+62.
# Q-ball Formation through Affleck-Dine Mechanism

## Abstract

We present the full nonlinear calculation of the formation of a Q-ball through the Affleck-Dine (AD) mechanism by numerical simulations. It is shown that large Q-balls are actually produced by the fragmentation of the condensate of a scalar field whose potential is very flat. We find that the typical size of a Q-ball is determined by the most developed mode of linearized fluctuations, and almost all the initial charges which the AD condensate carries are absorbed into the formed Q-balls, whose sizes and charges depend only on the initial charge densities.

A Q-ball is a non-topological soliton with some conserved global charge in scalar field theory. A Q-ball solution exists if the energy minimum develops at non-zero $`\varphi `$ with fixed charge $`Q`$: in terms of the effective potential of the field $`\varphi `$, $`V(\varphi )/\varphi ^2`$ takes its minimum at $`\varphi \ne 0`$. The Q-ball naturally appears in the spectrum of the minimal supersymmetric standard model (MSSM). In particular, very large Q-balls could exist in a theory with a very flat potential, such as the MSSM, which has many flat directions, consisting of squarks and sleptons, carrying baryonic and/or leptonic charges. This has attracted interesting attention in phenomenology and astrophysics.

Cosmologically, it is interesting that large Q-balls carrying baryonic charge (B-balls) can be promising candidates for the dark matter of the universe, and/or the source for baryogenesis. Moreover, they can explain why the energy density of baryons is as large as that of the dark matter (at least within a few orders of magnitude). If the effective potential of the field $`\varphi `$ carrying the baryonic charge is very flat at large $`\varphi `$, as in the theory in which supersymmetry (SUSY) breaking occurs at low energy scales (gauge-mediated SUSY breaking), the B-ball energy per unit charge decreases as the charge increases. For large enough charge, such as $`B\gtrsim 10^{12}`$, the B-ball cannot decay into nucleons, and is completely stable, which implies that B-balls themselves can be the dark matter with charges $`B=10^{14}-10^{26}`$, while baryons are created by the conventional Affleck-Dine (AD) mechanism.

In the case of the gravity-mediated SUSY breaking scenario, the B-balls can decay into quarks or nucleons, with the decay (evaporation) rate of the Q-ball proportional to the surface area, and if they decay after the electroweak phase transition, there are some advantages over the conventional AD baryogenesis. For example, B-balls can protect the baryon asymmetry from the effects of lepton-number violating interactions above the electroweak scale when anomalous $`B+L`$ violation is in thermal equilibrium. Another one is that Q-balls with $`B-L`$ charge survive the sphaleron effects to create the same amounts of baryon and lepton numbers. In either case, it is necessary for Q-balls to have large charges, such as $`Q=10^{22}-10^{28}`$. In this scenario, the dark matter is the lightest supersymmetric particle (LSP), which arises from Q-ball decay, and parameters of the MSSM could be constrained by investigating Q-ball cosmology. Note that it is also possible to have stable Q-balls in the gravity-mediated SUSY breaking theory, depending on the details of the features of the hidden sector. Those large Q-balls are expected to be created through the AD mechanism in the inflationary universe.
The coherent state of the AD scalar field, which consists of some flat direction in the MSSM, becomes unstable and instabilities develop. These fluctuations grow large to form Q-balls. The formation of large Q-balls has been studied analytically only with linear theory, and numerical simulations were done on one-dimensional lattices. Both of them are based on the assumption that the Q-ball configuration is spherical, so that we cannot really tell that the Q-ball configuration is actually accomplished. Recently, some aspects of the dynamics of the AD scalar and Q-ball formation were studied in Ref. , but the whole evolution was not investigated, which is important for the investigation of Q-ball formation. (In the context of a non-relativistic Bose gas, the dynamics of drops of Bose-Einstein condensate, which are non-topological solitons, are studied in Ref. .)

In this Rapid Communication, we study the dynamics of a complex scalar field with a very flat potential numerically on one-, two-, and three-dimensional lattices, without assuming a spherical Q-ball configuration. On one-dimensional lattices, the system is independent of the other two dimensions, so that we are observing plane-like objects. We call them Q-walls. Likewise, string-like objects, which we call Q-strings, appear on two-dimensional lattices.

First we show where the instabilities of a scalar field come from. To be concrete, let us assume that the complex scalar field $`\mathrm{\Phi }`$ has an effective potential of the form

$$V(\mathrm{\Phi })=m^4\mathrm{ln}\left(1+\frac{|\mathrm{\Phi }|^2}{m^2}\right)-cH^2|\mathrm{\Phi }|^2+\frac{\lambda ^2}{M^2}|\mathrm{\Phi }|^6,$$ (1)

where $`m`$ is the mass of the field, $`H`$ is the Hubble parameter, $`\lambda `$ is a dimensionless coupling constant, $`M\simeq 2.4\times 10^{18}`$ GeV is the (reduced) Planck mass, and $`c`$ is a constant. Hereafter, we assume the matter-dominated universe, where $`H=2/(3t)`$. This form of the potential arises naturally in the gauge-mediated SUSY breaking scenario in the MSSM. In addition to Eq. (1), an A-term (e.g., $`A(\mathrm{\Phi }^4+\mathrm{\Phi }^{*4})`$) is necessary for baryogenesis, since it makes the AD field rotate to create baryon numbers. Here we assume that the A-term does not crucially affect the other terms of the potential, and take ad hoc initial conditions neglecting the A-term. The AD field thus has an initial charge density. Notice that the effective potential for the flat direction has a very similar form in the gravity-mediated SUSY breaking scenario, and the features of Q-ball formation are expected to be the same.

If we note that the field is dominated by the logarithmic potential and the homogeneous mode has only a real part (i.e., no rotational motion), neglecting the second and third terms in Eq. (1), the instability band is approximately estimated as

$$\frac{k}{a}\lesssim \frac{m^2}{|\mathrm{\Phi }|},$$ (2)

for large field value $`|\mathrm{\Phi }|`$, which is exactly the same as the result of Ref.  (note that there are additional instability bands which come from parametric resonance effects because of the oscillation of the homogeneous field, but they are subdominant). Therefore, the instability band grows as time goes on, since the amplitude of the homogeneous mode $`|\mathrm{\Phi }|`$ decreases as $`a^{-3}`$ for the logarithmic potential. These fluctuations originate from the negative curvature of the logarithmic potential, which produces negative pressure. It is expected that Q-balls with corresponding scales are formed.
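As a rough numerical illustration of Eq. (2), one can tabulate how the comoving edge of the instability band, $`k_{\mathrm{band}}=am^2/|\mathrm{\Phi }|`$, widens as the homogeneous amplitude redshifts away. This is a sketch in the rescaled units introduced below (with $`m=1`$); the initial amplitude is a placeholder rather than a value from our runs:

```python
import numpy as np

# Comoving instability band edge of Eq. (2): k < a * m^2 / |Phi|.
# Matter domination (a ∝ t^(2/3)) and |Phi| ∝ a^(-3) imply the band
# edge grows as a^4, so ever shorter comoving scales become unstable.
tau = np.logspace(2, 5, 7)            # rescaled time, tau = m t
a = (tau / tau[0]) ** (2.0 / 3.0)     # scale factor, a(tau[0]) = 1
phi0 = 1.0e6                          # placeholder initial amplitude / m
phi = phi0 * a ** (-3)                # homogeneous oscillation amplitude
k_band = a / phi                      # band edge in comoving units (m = 1)
for t, k in zip(tau, k_band):
    print(f"tau = {t:10.1f}   k_band = {k:.3e}")
```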
As we mentioned, we calculate the dynamics of the complex scalar field on one-, two-, and three-dimensional lattices. We formulate the equations of motion for the real and imaginary parts of the field: $`\mathrm{\Phi }=\frac{1}{\sqrt{2}}(\varphi _1+i\varphi _2)`$. Rescaling variables with respect to $`m`$, we have the dimensionless variables

$$\phi =\frac{\varphi }{m},\quad h=\frac{H}{m},\quad \tau =mt,\quad \xi =mx.$$ (3)

We have exhaustively calculated the dynamics of the scalar field and Q-balls for various parameters, and find that Q-balls are actually formed through the Affleck-Dine mechanism on three-dimensional lattices. They have thick-wall profiles which are approximately spherical, and their charge is conserved as time goes on. Here we show one example. Figure 1 shows the configuration of the Q-ball at $`\tau =10^6`$ on $`64^3`$ three-dimensional lattices with $`\mathrm{\Delta }\xi =1.0`$ in the matter-dominated universe. Initial conditions are $`\phi _1(0)=2.5\times 10^6,\phi _1^{\prime }(0)=0,\phi _2(0)=0,\phi _2^{\prime }(0)=4.0\times 10^4,\tau (0)=100`$. We can see more than 30 Q-balls in the box. The charge of the largest Q-ball is $`Q\simeq 1.96\times 10^{16}`$, evaluated by

$$Q=\int d^3\xi \,q=\int d^3\xi \,\frac{1}{2}(\phi _1\phi _2^{\prime }-\phi _2\phi _1^{\prime }).$$ (4)

Though the best way to investigate the nature of Q-ball formation is to calculate in three dimensions, we must use a large box to take into account low-momentum effects, and a large number of lattice points in order to have enough resolution. Therefore, we have also calculated in one and two dimensions, and use these results complementarily with the results in three dimensions. Let us now compare the evolutions of Q-balls in one, two, and three dimensions. As mentioned, a Q-ball is a non-trivial configuration of the scalar field, which we can obtain by finding the energy minimum with finite charge $`Q`$ fixed. From Eq. (4) we can approximately estimate the charge of a Q-ball as

$$Q=a^3Q_D\sim a^3R^D\stackrel{~}{q}\simeq \mathrm{const}.,$$ (5)

where $`\stackrel{~}{q}=\varphi _1\dot{\varphi }_2-\dot{\varphi }_1\varphi _2`$ is the charge density, and $`Q_D`$ is the charge in $`D`$ dimensions. Charge conservation tells us that $`Q`$ is constant. If we assume the form of a Q-ball as

$$\varphi (\mathbf{x},t)=\varphi (\mathbf{x})\mathrm{exp}(i\omega t),$$ (6)

the energy of a Q-ball can be calculated as

$$E=\int d^3x\left[\frac{1}{2}(\nabla \varphi )^2+V(\varphi )-\frac{1}{2}\omega ^2\varphi ^2\right]+\omega Q$$ (7)

$$\phantom{E}=\int d^3x\,[E_{\mathrm{grad}}+V_1+V_2]+\omega Q,$$ (8)

where

$$E_{\mathrm{grad}}\sim \frac{\varphi ^2}{a^2R^2},$$ (9)

$$V_1\sim m^4\mathrm{log}\left(1+\frac{\varphi ^2}{2m^2}\right)\sim \mathrm{const}.,$$ (10)

$$V_2\sim \omega ^2\varphi ^2.$$ (11)

When the energy takes its minimum value, equipartition is achieved: $`E_{\mathrm{grad}}\sim V_1`$ and $`E_{\mathrm{grad}}\sim V_2`$. From these equations and charge conservation, we obtain the following evolutions:

$$R\propto a^{-4/(D+1)},$$ (12)

$$\varphi \propto a^{(D-3)/(D+1)},$$ (13)

$$\omega \propto a^{-(D-3)/(D+1)},$$ (14)

behaviour which we have confirmed approximately in our numerical results. Therefore, the physical size $`R_{\mathrm{phys}}=Ra`$ of the Q-balls shrinks for $`D=1,2`$, while remaining constant for $`D=3`$. Thus, stable Q-balls (three-dimensional objects) can be formed, but Q-walls and Q-strings shrink because of charge conservation.
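The charge bookkeeping of Eqs. (4) and (5) is straightforward to reproduce on a lattice; a minimal sketch (the threshold-based lump detector is an illustrative stand-in, not the analysis code used for the figures):

```python
import numpy as np
from scipy import ndimage

def lattice_charges(phi1, phi2, dphi1, dphi2, dxi, threshold):
    """Total comoving charge of Eq. (4) on a 3-d lattice, plus the
    charges of individual lumps, found by labelling connected regions
    where the charge density q exceeds `threshold`."""
    q = 0.5 * (phi1 * dphi2 - phi2 * dphi1)        # charge density
    Q_total = q.sum() * dxi ** 3
    labels, n_lumps = ndimage.label(q > threshold)
    Q_lumps = np.asarray(ndimage.sum(q, labels,
                                     index=range(1, n_lumps + 1))) * dxi ** 3
    return Q_total, np.sort(Q_lumps)[::-1]

# toy fields on a 64^3 grid, only to exercise the bookkeeping
rng = np.random.default_rng(0)
shape = (64, 64, 64)
phi1, phi2 = rng.normal(size=shape), rng.normal(size=shape)
dphi1, dphi2 = rng.normal(size=shape), rng.normal(size=shape)
Q, lumps = lattice_charges(phi1, phi2, dphi1, dphi2, dxi=1.0, threshold=3.0)
print(Q, lumps[:5])                                # total charge, largest lumps
```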
Figure 2 shows the power spectra for the case of one-dimensional lattices with box size $`L=4096`$, and of the linearized fluctuations, at $`\tau =4.5\times 10^5`$ and $`5\times 10^5`$. As expected, both spectra are very similar at $`\tau =4.5\times 10^5`$, since fluctuations have not fully developed yet. After they are fully developed ($`\tau =5\times 10^5`$), the spectrum becomes smooth and broad because of rescattering (panel 2(c)). But, even at this time, the most developed mode is the same as that of the linearized fluctuations (compare panels 2(c) and 2(d)). Therefore, we conclude that the size of Q-balls is determined by the most developed mode of the linearized fluctuations at the time when the amplitude of fluctuations becomes as large as that of the homogeneous mode, $`\delta \varphi ^2\sim \varphi ^2`$. For the case of Fig. 2, the typical scale is $`k_{max}\simeq 0.04`$, which implies that $`R_{\mathrm{phys}}\simeq a(\tau _f)/k_{max}\simeq 7.3\times 10^3`$, where $`\tau _f=5\times 10^5`$ is the Q-ball formation time. At this time, the ratio to the horizon size is $`\sim 10^{-2}`$, which corresponds to the results of Ref. , but this value may not have an important meaning, since the sizes of the horizon and of Q-balls have different time evolutions. This size is consistent with the actual sizes appearing on three-dimensional lattices, where the largest Q-ball has size $`R_{\mathrm{phys}}\simeq 1.1\times 10^4`$, and the average size of the second to fifth largest Q-balls is $`5.2\times 10^3`$.

We also observed that almost all the initial charge carried by the condensate of the AD field is absorbed into the Q-balls formed from the fragmentation of the condensate, and the amplitude of the homogeneous mode is highly damped, which means that it carries only a small fraction of the total initial charge. In the case of Fig. 1, more than 95% of the charge is stored in the Q-balls. Actually, the charges and sizes of Q-balls depend on the initial value of the charge carried by the AD condensate. Since the initial charge density of the AD scalar is written as $`q(0)=\phi _1(0)\phi _2^{\prime }(0)`$, the larger the initial amplitude or angular velocity of the AD condensate, the larger the charge stored in Q-balls.

The dependences of the charge and the (comoving) size of the (largest) Q-ball on the initial charge density are shown in Fig. 3. Here we use one-dimensional lattices, so actually we observe Q-walls. We thus estimate their conserved charges as $`Q_{max}=a^3\int dx\,q(x)`$, according to Eq. (5). We find that one large Q-ball and a few small Q-balls are formed in most of the cases, while a single Q-ball is formed in some cases in which the initial charge density $`q(0)`$ is relatively small. Open circles denote the dependence on $`\phi _2^{\prime }(0)`$ for fixed $`\phi _1(0)`$, while solid triangles denote the dependence on $`\phi _1(0)`$ for fixed $`\phi _2^{\prime }(0)`$, with $`L=256`$. Since both results show the same dependence, the only relevant variable which determines the charge and the size is the initial charge density $`q(0)`$. The dashed line represents the fitted relation, $`Q_{max}=94.3(q(0))^{1.03}`$. Crosses are obtained on $`L=512`$ lattices. They seem to be a little larger, and some box-size effect might remain. Notice that Q-balls with negative charge can be produced when the initial angular velocity of the AD condensate (or, in our simulations, $`\phi _2^{\prime }(0)`$) is small enough. Since the (comoving) size of the Q-balls changes as time goes on, as we mentioned above, the actual values of the size at $`\tau =5\times 10^7`$ in one dimension are not so important.
What is important is how the size depends on $`q(0)`$. The dotted line shows the fitted relation, $`R=9.46\times 10^7(q(0))^{0.464}`$, though we expect the relation $`R\propto (q(0))^{1/3}`$, which means that the charge is proportional to the volume, as in Eq. (5). One reason for the discrepancy may be that the values of $`R`$ for small-charge Q-balls carry considerable error because of the poor resolution of the spatial lattices.

In conclusion, we have considered the full nonlinear equations of motion of the Affleck-Dine scalar field in order to see the formation of Q-balls through the Affleck-Dine mechanism by numerical simulations. It is shown that large Q-balls are actually produced by the fragmentation of the condensate of a scalar field whose potential is very flat, as in the supersymmetric standard theory. We find that the typical size of Q-balls is determined by that of the most developed mode of the linearized fluctuations at the time when the amplitude of fluctuations grows as large as that of the homogeneous mode: $`\delta \varphi ^2\sim \varphi ^2`$. Almost all the initial charge carried by the AD condensate is absorbed into the Q-balls formed, leaving only a small fraction in the form of the remaining coherently oscillating AD condensate. We can thus constrain parameters of the MSSM through the fraction of baryons in Q-balls ($`f_B`$) in the context of Q-ball decay producing LSP dark matter. Moreover, the actual sizes and charges stored within Q-balls depend on the initial charge density of the AD condensate, which is in good agreement with the condition for the existence of the Q-ball; that is, it exists if the scalar field can take a non-trivial energy-minimum configuration with a fixed charge. Therefore, Q-balls with the huge charges necessary for Q-balls to be dark matter could be produced if the initial charge density that the AD condensate carries is large enough. We also find that the evolution of Q-balls crucially depends on their dimensionality, and stable Q-balls can only exist in the form of three-dimensional objects. Smaller-dimensional objects such as Q-walls and Q-strings shrink as the universe expands.

Finally, we mention the Q-axiton (the higher-energy-state Q-ball), which was studied in Ref. . In our simulations, Q-balls are actually formed. The field orbits in the complex plane inside Q-balls are almost complete circles. Moreover, even if the initial AD field orbit is extremely oblique, such that the initial angular velocity is very small so as to create negatively charged Q-balls, circular orbits can be seen inside both positive and negative Q-balls. It thus seems that Q-axitons may appear, if ever, only at the very beginning of Q-ball formation.

M.K. is supported in part by the Grant-in-Aid, Priority Area "Supersymmetry and Unified Theory of Elementary Particles" ($`\mathrm{\#}707`$).
# Mass - radius relations for white dwarf stars of different internal compositions

## 1 Introduction

It is a well known fact that about 90% of stars will end their lives as white dwarf (WD) stars. At present we know different routes that drive stellar objects to such a fate. It is widely accepted, for instance, that low mass WDs with stellar masses $`M\lesssim 0.45M_{\odot }`$ are composed of helium and that they have had time enough to evolve to such a state as a result of binary evolution. For intermediate mass WDs, stellar evolution theory predicts an internal composition dominated by carbon and oxygen. Finally, for the high mass tail of the WD mass distribution, theory predicts interiors made up of neon and magnesium.

Over the years, it has been customary to employ mass - radius relations to confront theoretical predictions about the internal composition of WDs with observational data. This is so because, as has been well known since Hamada & Salpeter (1961) (hereafter HS; see also Shapiro & Teukolsky 1983), zero - temperature configurations are sensitive to the internal composition. One of the effects that allow us to discriminate the WD internal composition for a given stellar mass is related to the dependence of the non - ideal contributions to the equation of state (EOS) of degenerate matter (such as Coulomb interactions and Thomas - Fermi corrections) on the chemical composition. These contributions to the EOS are larger the higher the atomic number $`Z`$ of the chemical constituent. Another very important effect is that, in the case of heavy elements like iron, nuclei are no longer symmetric ($`Z=26,A=56`$ for iron), yielding a mean molecular weight per electron higher than 2. Accordingly, for a fixed mass value, the WD radius is a decreasing function of $`Z`$.

Recently, Provencal et al. (1998) (and other references cited therein) have presented the Hipparcos parallaxes for a handful of WDs. These parallaxes have made it possible to significantly improve the mass and radius determinations of some WDs, thus allowing for a direct confrontation with the predictions of WD theory. In particular, the suspicion that some WDs would fall on the zero - temperature mass - radius relation consistent with iron cores (see Koester & Chanmugam 1990) has been placed on firm observational ground by these satellite - based measurements (see Provencal et al. 1998) (however, see below). Indeed, some WDs have much smaller radii than expected if their interiors were made of carbon and oxygen, suggesting that at least two of the observed WDs have iron - rich cores. Specifically, the present determinations indicate that Procyon B and EG 50 have radii and masses consistent with zero - temperature iron WDs. Obviously, such results are in strong contradiction with the standard predictions of stellar evolutionary calculations, which allow for an iron - rich interior only in the case of presupernova objects. Although these conclusions are based on the HS zero - temperature mass - radius relations (note that EG 50 has an effective temperature of $`T_{\mathrm{eff}}\approx 21000`$ K), it is clear that, unless the observational determinations are incorrect, the interiors of the above - mentioned WDs are much denser than expected before.

Before the above - mentioned determinations, an iron composition had been considered quite unexpected. In fact, the only attempt at proposing a physical process able to account for the formation of iron WDs is, to our knowledge, that of Isern et al. (1991).
In their calculations, Isern et al. find that an explosive ignition of electron - degenerate ONeMg cores may, depending critically upon the ignition density and the velocity of the burning front, give rise to the formation of neutron stars, thermonuclear supernovae or iron WDs. It is therefore not surprising that, apart from the study carried out long ago by Savedoff et al. (1969), who did not consider the effects of electrostatic corrections, convection and crystallization in their calculations, very little attention has been paid to the study of the evolution of iron WDs.

We should warn the reader that the existence of WDs with an iron - rich interior is still under debate. In particular, despite recent claims of an iron - rich interior for Procyon B, in the report of this work our referee has told us about a new reanalysis of the observational data which, at a preliminary stage, seems to indicate an interior composition for this object consistent with a carbon one. Another interesting possibility is that these objects may contain some extremely compact core, as proposed by Glendenning et al. (1995a, b). They suggested the existence of stellar objects composed of strange quark matter (with a density of $`5\times 10^{14}`$ g cm⁻³) surrounded by an extended, normal matter envelope. These configurations have been called "strange dwarfs". It is presently known that these objects have, for a given mass and chemical composition of the normal matter layers, a much smaller radius than a standard WD, and also that they evolve in a way very similar to standard WDs (Benvenuto & Althaus 1996a, b). However, at present, it is difficult to account for the formation of a strange quark matter core inside a WD star.

In view of the above considerations, we present in this paper a detailed set of mass - radius relations for WD models with different assumed internal compositions, with the emphasis placed on models with iron - rich composition. Despite the fact that many researchers have addressed the problem of theoretical mass - radius relations for WDs made of helium (Vennes et al. 1995; Benvenuto & Althaus 1998; Hansen & Phinney 1998 and Driebe et al. 1998) and of carbon and oxygen (Koester & Schönberner 1986; Wood 1995 amongst others), we judge it worthwhile to extend our computations to models with these compositions in the interest of presenting a homogeneous set of mass - radius relations. In particular, we shall consider internal layers made up of helium (⁴He), carbon (¹²C), oxygen (¹⁶O), silicon (²⁸Si) and iron (⁵⁶Fe), surrounded by a helium layer with a thickness of $`10^{-2}M_{\ast }`$ (where $`M_{\ast }`$ is the stellar mass). We considered models with an outermost hydrogen layer of $`10^{-5}M_{\ast }`$ ($`3\times 10^{-4}M_{\ast }`$ in the case of helium core models) and also models without any hydrogen envelope. In doing so, we shall employ a full stellar evolution code, updated in order to compute the properties of iron - rich, degenerate plasmas properly.

This paper is organized as follows. In Sect. 2, we present the general structure of the computer code that we have employed and the main improvements we have incorporated in it. In Sect. 3, we describe the strategy employed in the computations, and the numerical results. Finally, in Sect. 4, we discuss the main implications of our results.
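Since much of the composition dependence discussed above enters through the mean molecular weight per electron, $`\mu _e=A/Z`$, it is worth noting how little $`\mu _e`$ varies among the 'standard' compositions and how strongly iron departs from them. A minimal sketch follows; the scaling $`R\propto \mu _e^{-5/3}`$ at fixed mass is the textbook result for an ideal, non-relativistic, fully degenerate configuration (an $`n=3/2`$ polytrope), quoted here only for orientation, since the full models of course include the non-ideal corrections:

```python
# Mean molecular weight per electron, mu_e = A/Z, and the relative
# low-mass radius scaling R ∝ mu_e^(-5/3) of an ideal, non-relativistic,
# fully degenerate configuration (n = 3/2 polytrope). Illustrative only:
# Coulomb and Thomas-Fermi corrections are neglected here.
compositions = {            # isotope: (atomic mass A, charge Z)
    "4He":  (4.002602, 2),
    "12C":  (12.000000, 6),
    "16O":  (15.994915, 8),
    "28Si": (27.976927, 14),
    "56Fe": (55.934939, 26),
}
mu_e_ref = 12.0 / 6.0       # carbon as reference
for name, (A, Z) in compositions.items():
    mu_e = A / Z
    ratio = (mu_e / mu_e_ref) ** (-5.0 / 3.0)
    print(f"{name:>5}: mu_e = {mu_e:.6f}   R/R(12C) = {ratio:.3f}")
```

For iron this crude estimate already gives a radius roughly 11% below that of a carbon configuration of the same mass; the Coulomb corrections, which grow with $`Z`$, push it further down.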
## 2 The computer code

The WD evolutionary code we employed in this study is fully described in Althaus & Benvenuto (1997, 1998), and we refer the reader to those works for a general description. Briefly, the code is based on the technique developed by Kippenhahn et al. (1967) for calculating stellar evolution, and it includes detailed and updated constitutive physics appropriate to WD stars. In particular, the EOS for the low - density regime is that of Saumon et al. (1995) for hydrogen and helium plasmas, whilst the treatment of the completely ionized, high - density regime includes ionic contributions, Coulomb interactions, partially degenerate electrons, electron exchange and Thomas - Fermi contributions at finite temperature. Radiative opacities for the high - temperature regime ($`T\gtrsim 6000`$ K) with metallicity $`Z=0`$ are those of OPAL (Iglesias & Rogers 1993), whilst for lower temperatures we use the Alexander & Ferguson (1994) molecular opacities. High - density conductive opacities and the various mechanisms of neutrino emission for different chemical compositions (⁴He, ¹²C, ¹⁶O, ²⁰Ne, ²⁴Mg, ²⁸Si, ³²S, ⁴⁰Ca and ⁵⁶Fe) are taken from the works of Itoh and collaborators (see Althaus & Benvenuto 1997 for details). In addition to this, we include conductive opacities and Bremsstrahlung neutrinos for the crystalline lattice phase following Itoh et al. (1984a) and Itoh et al. (1984b; see also the erratum, Itoh et al. 1987), respectively. The latter becomes relevant for WD models with iron cores, since these models begin to develop a crystalline core at high luminosities (up to two orders of magnitude higher than the luminosity at which a carbon - oxygen WD of the same mass begins to crystallize). With respect to the energy transport by convection, for the sake of simplicity, we adopt the mixing length prescription usually employed in most WD studies. This choice has no effect on the radius of the models. Finally, we consider the release of latent heat during crystallization in the same way as in Benvenuto & Althaus (1997).

As in our previous works on WDs, we started the computations from initial models at a far higher luminosity than that corresponding to the most luminous models considered as meaningful in this paper. The procedure we follow to construct the initial models of different stellar masses and internal chemical compositions is based on an artificial evolutionary procedure described in our previous papers cited above. In particular, to produce luminous enough initial models, we considered an artificial energy release. After such "heating", models experience a transitory relaxation to the desired WD structure. Obviously, the initial evolution of our WD models is affected by this procedure but, for the range of luminosity and $`T_{\mathrm{eff}}`$ values considered in this paper, this is no longer relevant (see below) and our mass - radius relations are completely meaningful.
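For orientation, the zero-temperature limit against which the evolutionary sequences are later compared (the HS configurations) can be caricatured with the ideal fully degenerate electron gas alone. A minimal sketch in cgs units (no Coulomb or Thomas-Fermi corrections, so it reproduces Chandrasekhar-type rather than true HS models; the surface cut and central densities are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-8        # gravitational constant [cgs]
A_P = 6.002e22      # Chandrasekhar pressure scale [dyn cm^-2]
B_RHO = 9.82e5      # density scale [g cm^-3] per unit mu_e

def rhs(r, y, B):
    x, m = y                                  # x = p_F/(m_e c); m = m(r)
    if x <= 0.0:
        return [0.0, 0.0]
    rho = B * x ** 3
    dPdx = 8.0 * A_P * x ** 4 / np.sqrt(1.0 + x ** 2)
    return [-G * m * rho / (r ** 2 * dPdx), 4.0 * np.pi * r ** 2 * rho]

def mass_radius(rho_c, mu_e):
    B = B_RHO * mu_e
    x_c = (rho_c / B) ** (1.0 / 3.0)
    r0 = 1.0e4                                # start just off-centre
    m0 = 4.0 / 3.0 * np.pi * r0 ** 3 * B * x_c ** 3
    surface = lambda r, y, *args: y[0] - 1.0e-4   # x -> 0 marks the surface
    surface.terminal = True
    sol = solve_ivp(rhs, [r0, 5.0e10], [x_c, m0], args=(B,),
                    events=surface, rtol=1e-8, max_step=1.0e7)
    return sol.t[-1], sol.y[1, -1]            # radius [cm], mass [g]

Msun = 1.989e33
for mu_e in (2.0, 2.1513):                    # ~C/O core versus a 56Fe core
    for rho_c in (1e6, 1e7, 1e8):             # central density [g cm^-3]
        R, M = mass_radius(rho_c, mu_e)
        print(f"mu_e = {mu_e:<7} rho_c = {rho_c:.0e}  "
              f"M = {M / Msun:5.3f} Msun  R = {R / 1e9:5.3f} x 10^9 cm")
```

At a given mass, the higher $`\mu _e`$ of iron alone already yields a markedly smaller radius, before the $`Z`$-dependent Coulomb terms (included in our full EOS) reduce it further.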
## 3 Numerical results

In order to compute accurate mass - radius relations, we evolved WD models with masses ranging from 0.15 $`M_{\odot }`$ to 0.5 $`M_{\odot }`$ at intervals of 5% for helium core WDs; from 0.45 $`M_{\odot }`$ to 1.2 $`M_{\odot }`$ at intervals of 0.01 $`M_{\odot }`$ for carbon, oxygen and silicon cores; and finally from 0.45 $`M_{\odot }`$ to 1.0 $`M_{\odot }`$, also at intervals of 0.01 $`M_{\odot }`$, for the case of an iron core. The evolutionary sequences were computed down to $`\mathrm{log}L/L_{\odot }=-5`$. Mass - radius relations for interior compositions of ¹²C, ¹⁶O, ²⁸Si and ⁵⁶Fe are presented for $`T_{\mathrm{eff}}`$ values ranging from 5000 K to 55000 K with steps of 10000 K, and from 70000 K to 145000 K with steps of 15000 K. For the helium WD models we considered $`T_{\mathrm{eff}}`$ values from 4000 K to 20000 K with steps of 4000 K. To explore the sensitivity of our results to a hydrogen envelope, we considered two values: $`M_\mathrm{H}/M_{\ast }=10^{-5}`$ ($`3\times 10^{-4}`$ in the case of He core models) and $`M_\mathrm{H}/M_{\ast }=0`$. For the sake of comparison, for each of the considered core compositions we have also computed the zero - temperature HS models. It is worth noting that all the models included in the present work have densities below the neutronization threshold for each chemical composition ($`1.37\times 10^{11}`$ g cm⁻³ for helium, $`3.90\times 10^{10}`$ g cm⁻³ for carbon, $`1.90\times 10^{10}`$ g cm⁻³ for oxygen, $`1.97\times 10^9`$ g cm⁻³ for silicon, and $`1.14\times 10^9`$ g cm⁻³ for iron). Such densities represent the end of the WD sequences, because electron capture softens the EOS and the stellar structure becomes unstable against gravitational collapse (see Shapiro & Teukolsky 1983 for further details).

In recent years, both observational (Marsh 1995; Moran et al. 1997; Landsman et al. 1997; Edmonds et al. 1999 amongst others) and theoretical (Althaus & Benvenuto 1997, Benvenuto & Althaus 1998, Hansen & Phinney 1998, Driebe et al. 1998) efforts have been devoted to the study of helium WDs. It is now accepted that these objects would be the result of the evolution of certain binary systems, in which mass transfer episodes would lead to the formation of helium degenerates within a Hubble time (see, e.g., Iben & Tutukov 1986; Alberts et al. 1996; Ergma & Sarna 1996). Connected with the age determination for millisecond pulsars from WD cooling is the existence or not of hydrogen flashes in helium WDs. In particular, detailed calculations predict that hydrogen flashes do not occur on WDs of mass less than 0.2 $`M_{\odot }`$ (see also Driebe et al. 1998); instead, such low - mass helium degenerates experience long - lasting phases of hydrogen burning (but see Sarna et al. 1998).

With regard to the main topic of the present work, it is worth mentioning that Vennes, Fontaine & Brassard (1995) presented a set of static mass - radius relations for hot WDs. However, these authors considered a linear relation between the internal luminosity and the mass, thus avoiding the computation of evolutionary sequences. This approximation is equivalent to neglecting neutrino emission, which is not a good assumption for their hottest models. In a recent paper, Driebe et al.
(1998) have computed the evolution of low mass stars from the main sequence up to the helium WD stage. In that work, binary evolution was mimicked by applying, at appropriate positions, large mass loss rates to a single star. More importantly, diffusion was neglected throughout the entire evolution. In this connection, gravitationally induced diffusion is expected to lead to noticeable changes in the surface gravity of their helium WD models, the envelopes of which at the end of the mass loss phase are a mixture of helium and hydrogen. Indeed, during their evolution, WDs should modify the chemical composition of their outer layers, essentially making the bulk of the hydrogen float to the surface and the helium sink out of the surface layers. In this way, this effect causes the composition of the outer layers to approach pure compositions, the case we assumed in the present paper. Preliminary results to be presented below indicate this to be the actual case, as we suggested previously (Benvenuto & Althaus 1999).

To address the problem of diffusion in helium WDs, we have developed a code which solves the equations describing gravitational settling and chemical and thermal diffusion. Here, we present some details of our code, deferring a thorough description to a further publication. In broad outline, we have solved the diffusion and heat flow equations presented by Burgers (1969) for the case of a multicomponent medium appropriate to the case we are studying here (see also Muchmore 1984 for an application of the set of Burgers's equations to the study of diffusion in WDs). The resistance coefficients are from Paquette et al. (1986). To solve the continuity equation we have generalized the semi - implicit finite difference method presented by Iben & McDonald (1985) to include the effects of thermal diffusion. We have followed the evolution of the isotopes ¹H, ³He, ⁴He, ¹²C and ¹⁶O. The diffusion code has been coupled to our evolutionary code to follow the chemical evolution of our models self - consistently.

Let us first compare our models with those of Driebe et al. (1998) in the case when diffusion is neglected. In Fig. 1 we show the surface gravity in terms of $`T_{\mathrm{eff}}`$ for 0.195 and 0.3 $`M_{\odot }`$ helium WD models. In order to make a direct comparison with Driebe et al.'s predictions, we have adopted for these models the same envelope mass and hydrogen surface abundance as quoted by these authors. The initial models were generated in the same fashion as described previously. Despite the assertions by Driebe et al., note that the gravity values after the relaxation phase of our models are very similar to those predicted by these authors. We should remark that their "contracting models" are very different from our initial ones. In fact, they start with a homogeneous main sequence model in which nuclear energy release has been suppressed. It is then not surprising that they get contracting models with gravities comparable to those obtained with evolutionary models only when they are very cool (at $`T_{\mathrm{eff}}\approx 3000`$ K for a 0.2 $`M_{\odot }`$ model). On the contrary, in our previous works on helium WDs with hydrogen envelopes, we generated our initial models from a cool helium WD model, adding to it an artificial energy release up to the moment at which the model is very luminous. Then, we switch it off smoothly, getting a model very close to the cooling branch. Thus, notwithstanding Driebe et al.'s
comments, our artificial procedure gives rise to mass - radius relations in good agreement with those found from a fully evolutionary computation of the stages previous to the WD phase. A further comparison performed with low - mass helium WD models calculated by Hansen & Phinney (1998) with thick hydrogen envelopes reinforces our assertion. However, for more massive models some divergences appear between our results and those of Hansen & Phinney. Such differences are the result of the fact that the massive models of Hansen & Phinney do not converge to the HS predictions for zero - temperature configurations, a limit to which our models do tend.

Now, let us consider what happens when diffusion is considered. To this end, we have computed the evolution of a 0.195 $`M_{\odot }`$ helium WD model with an envelope characterized by a mass fraction $`M_{\mathrm{env}}/M_{\ast }=6\times 10^{-3}`$ and an abundance by mass of hydrogen ($`X_\mathrm{H}`$) of 0.538, as quoted in Driebe et al. (1998). We begin by examining Fig. 2, in which we show the evolution of the hydrogen and helium profiles as a function of the internal mass fraction for various selected values of $`T_{\mathrm{eff}}`$. Even in this case of low surface gravity, diffusion proceeds on a very short timescale, giving rise to pure hydrogen outermost layers. If we start the computations at $`\mathrm{log}T_{\mathrm{eff}}\approx 4.1`$, we find that at the $`T_{\mathrm{eff}}`$ value corresponding to the WD companion to PSR J1012 + 5307, our model is characterized by a pure hydrogen envelope of $`M_{\mathrm{env}}/M_{\ast }\approx 4\times 10^{-4}`$ (curve e). Needless to say, this will affect the surface gravity as compared with the case when diffusion is neglected. Thus, in order to accurately estimate the mass of that WD we do need to account for the diffusion process. This expectation is borne out by Fig. 1, in which we have included the results corresponding to the situation when diffusion is included (solid line) and to the case of a model with assumed pure hydrogen outer layers (dot - dashed lines) throughout its entire evolution (i.e. the conditions we would have if diffusion were instantaneous). In both situations we assumed that the total initial amount of hydrogen is the same. The differences in the value of the surface gravity compared with the case of no diffusion are noticeable. Finally, note that the track asymptotically merges with that corresponding to complete separation of hydrogen and helium, the structure we assumed in our previous works. These results clearly justify the assumptions we made in our referred papers. We should also note that in the case of a somewhat thicker hydrogen envelope, hydrogen burning increases significantly, thus making the evolution considerably slower. Thus, in the plane surface gravity versus $`T_{\mathrm{eff}}`$, the asymptotic conditions of total separation of hydrogen and helium would be reached far earlier in the evolution. Diffusion is therefore a fundamental ingredient if we want a solid surface gravity versus $`T_{\mathrm{eff}}`$ relation.

In Fig. 3, we show the mass - radius relation for helium core models. We considered models with masses up to 0.5 $`M_{\odot }`$ because higher mass objects should be able to ignite helium during previous evolutionary stages and should not end their lives as helium WDs. As is well known, models have a larger radius the higher the $`T_{\mathrm{eff}}`$. The effect on the stellar radius induced by the presence of an outer hydrogen envelope is also clearly noticeable.
These effects are particularly important for low mass models. Let us consider the case of a 0.3 $`M_{\odot }`$ helium WD model. In the case of no hydrogen envelope, at the highest $`T_{\mathrm{eff}}`$ considered here, the object has a radius $`\approx `$50% larger than that corresponding to the HS model. If we include the hydrogen envelope, the radius is $`\approx `$80% larger than the HS one (let us remind the reader that in this case we have included a hydrogen layer 30 times more massive than in the case of the other core compositions). Notice that for $`T_{\mathrm{eff}}\to 0`$, the radii of the objects tend to the HS values, as expected.

In Figs. 4 to 7 we show the results for carbon, oxygen and silicon interiors respectively. Although the effects due to finite temperature and the presence of an outer hydrogen envelope are also noticeable, they are not as large as in the case of the low mass helium WD models shown in the previous figure. For example, for 1.2 $`M_{\odot }`$ models, both effects together are able to inflate the star only by up to $`\approx `$19%. This is expected because, as the mass increases, the internal density (and the electron chemical potential $`\mu _e`$) also increases. Thus, as thermal effects enter the EOS of the degenerate gas as a correction $`(T/\mu _e)^2`$, the EOS gets closer to the zero - temperature behaviour, i.e. to the HS structure. As the thickness of the hydrogen layer scales as $`g^{-1}`$ ($`g`$ is the surface gravity), it also tends to zero for very massive models. Note that carbon, oxygen and silicon have a mean molecular weight per electron very near 2 ($`\mu _e=`$ 2.001299, 2.000000, 1.999364, and 1.998352 for helium, carbon, oxygen and silicon respectively); thus, the differences in radii for a given stellar mass are almost entirely due to the non - ideal, corrective terms of the EOS. Because of this, the differences in radii are small, of the order of a few percent. Also, in each figure we included the corresponding HS sequence. Notice that, for a given mass value, HS models have smaller radii and that there exist some minute differences even for the lowest $`T_{\mathrm{eff}}`$ models. This is simply due to the presence of a helium (and hydrogen, if present) layer, the effect of which on the model radius was not considered in our computation of the HS sequences.

In Fig. 4 we also included the radii corresponding to strange dwarf models of 0.4, 0.55 and 0.8 $`M_{\odot }`$ for $`T_{\mathrm{eff}}`$ from 10000 K to 50000 K with steps of 10000 K. These objects were computed assuming a carbon - oxygen composition for the normal matter envelope but, regardless of the precise chemical profile, it is clear that they have much smaller radii than WD models of the same mass.

In Fig. 8, we show the mass - radius sequences corresponding to iron. These are noticeably different from those previously shown, due mainly to the higher mean molecular weight per electron ($`\mu _e=2.151344`$) and also to the much higher atomic number ($`Z=26`$), which indicates a much more strongly interacting degenerate gas compared to the case of a standard composition. In the case of an iron core, for a fixed mass value, the mean density is almost twice that corresponding to carbon and oxygen cores. Thus, it is not surprising that, for the range of $`T_{\mathrm{eff}}`$'s considered here, thermal effects are less important than in the standard case.
For example, for the 0.45 $`M_{\odot }`$ iron model at $`T_{\mathrm{eff}}\approx 25000`$ K, thermal effects inflate its radius only by $`\approx `$17%. For the iron core case, we have only considered models up to a mass value of 1.0 $`M_{\odot }`$. Higher mass objects are very near the mass limit for such a composition (i.e. the central density becomes very near the neutronization threshold; see also Koester & Chanmugam 1990) and would have internal densities so high that our description of the EOS would not be accurate enough for our purposes. The evolutionary sequences corresponding to an iron core composition are the most detailed and accurate computed to date, and a thorough discussion of them will be deferred to a separate publication.

Finally, in Fig. 9 we compare the results of our calculations for carbon WD models with a hydrogen envelope against the computations performed by Wood (1995). As we mentioned, the mass - radius relation has been the subject of many works, amongst others Koester (1978), Iben & Tutukov (1984), Mazzitelli & D'Antona (1986) and Wood (1995). Here, we shall compare with Wood's models, since they have been thoroughly employed by the WD community. Note that the general trend of our results and those of Wood is very similar. As Wood considered more massive hydrogen envelopes ($`M_\mathrm{H}/M_{\ast }=10^{-4}`$) than we did, we have recomputed models with 0.6, 0.7 and 0.8 $`M_{\odot }`$ and with that hydrogen mass. We find that, for the same value of $`M_\mathrm{H}/M_{\ast }`$, Wood's models have slightly larger radii than ours (of course, for the case of $`M_\mathrm{H}/M_{\ast }=10^{-5}`$ the differences are much larger). Note that as models evolve, such differences become smaller.

## 4 Discussion and conclusions

In this work, we have computed accurate and detailed mass - radius relations for white dwarf (WD) stars with different core chemical compositions. In particular, we have considered interiors made up of helium, carbon, oxygen, silicon and iron, surrounded by a helium layer containing 1% of the stellar mass. With regard to the presence of a hydrogen envelope, we have considered two extreme values: $`M_\mathrm{H}/M_{\ast }=10^{-5}`$ ($`M_\mathrm{H}/M_{\ast }=3\times 10^{-4}`$ for helium core models) and $`M_\mathrm{H}/M_{\ast }=0`$. The first three interior compositions are standard (according to stellar evolution theory), whereas iron - rich interiors have recently been suggested on the basis of new parallax determinations for some objects (Provencal et al. 1998). For computing each sequence we employed a full stellar evolutionary code which incorporates most of the physical processes currently considered relevant to the physics of WDs. We computed a set of evolutionary sequences for each considered core composition, employing a small step in the stellar mass. We believe that these calculations may be valuable for the interpretation of future observations of this type of WDs.

We have also investigated the effect of gravitational settling and chemical and thermal diffusion on low - mass helium WDs with envelopes made up of a mixture of hydrogen and helium. To this end, we included in our evolutionary code a set of routines which solve the diffusion and heat flow equations for a multicomponent medium. For the case analysed in this paper, we found that diffusion gives rise to appreciable changes in the theoretical mass - radius relation, as compared with the case when diffusion is not considered (Driebe et al. 1998). In Figs.
4 - 8 we included the data for 40 Eri B ($`T_{\mathrm{eff}}`$=16700 K), EG 50 ($`T_{\mathrm{eff}}`$=21000 K), Procyon B ($`T_{\mathrm{eff}}`$=8688 K) and GD 140 ($`T_{\mathrm{eff}}`$=21700 K) taken from Provencal et al. (1998). In the case of 40 Eri B, for instance, the observed mass, radius and $`T_{\mathrm{eff}}`$ are consistent with models having a carbon, oxygen or silicon interior and a thin hydrogen envelope. It is worth noting that the observed determinations are also consistent with iron core models with a hydrogen envelope, but only for models with $`T_{\mathrm{eff}}\approx 55000`$ K (models without a hydrogen envelope would need to be even hotter). Since this temperature is far larger than the observed one, we should discard an iron core for this object. On the contrary, the other considered objects fall clearly below the standard composition sequences, indicating a denser interior. If we assume GD 140 and Procyon B to have an iron core, we find that they fall on a sequence of a $`T_{\mathrm{eff}}`$ compatible with the observed value. Nevertheless, the mean radius of EG 50 is smaller than predicted for an iron core object at the observed $`T_{\mathrm{eff}}`$. Thus, on the basis of the current observational determinations for EG 50, this WD seems to be even denser than an iron WD.

At present, it seems that the physics that determines the radius of a WD star is fairly well understood; thus, an indication of an iron core should not be expected to be due to some error in the treatment of the equation of state of a degenerate plasma. Accordingly, if the observations are confirmed to be accurate enough, we should seriously consider some physical process capable of producing an iron core in such low mass objects.

Detailed tabulations of the results presented in this paper are available upon request from the authors at their e-mail address.

## Acknowledgments

O.G.B. wishes to acknowledge Jan - Erik Solheim and the LOC of the 11th European Workshop on White Dwarfs held at Tromso (Norway) for their generous support, which allowed him to attend the meeting where he became aware of the observational results that motivated the present work. We also acknowledge our referee for his remarks and comments, which significantly improved the original version of this work.
# The PSCz catalogue

## 1 Introduction

The PSCz survey was started in 1992, when the accumulation of IRAS redshifts had made a complete redshift survey of the Point Source Catalog (Beichman et al. 1984, ES) possible. Our goals were (a) to maximise sky coverage, and (b) to obtain the best possible completeness and flux uniformity within well-defined area and redshift ranges. The availability of digitised optical information allowed us to use conservative IRAS selection criteria, and to use optical identification as part of the selection process. The sky coverage is essentially limited only by requiring that optical extinction be small enough to allow complete identifications. The PSC was used as starting material because of its superior sky coverage and treatment of confused and extended sources as compared with the Faint Source Survey (Moshir et al. 1989, FSS). The depth of the survey ($`0.6\mathrm{Jy}`$) derives from the depth to which the PSC is complete over most of the sky.

## 2 Construction of the Catalogue

The basis for the PSCz was the QIGC (Rowan-Robinson et al. 1990). However, many additions and changes were made to improve the completeness, uniformity and sky coverage.

Sky coverage. The mask includes the coverage gaps, areas flagged as High Source Density at $`12`$, $`25`$ or $`60\mu \mathrm{m}`$, and the LMC and SMC. We also masked all areas with $`I_{100}>25\mathrm{MJy}/\mathrm{ster}`$ or extinction $`A_B>2^m`$, as derived from Rowan-Robinson et al. (1991) and Boulanger and Perault (1988), and including a simple model for dust temperature variations across the Galaxy. The overall PSCz coverage is 84% of the sky. For statistical studies of the IRAS galaxy distribution, where uniformity is more important than sky coverage, we made a 'high $`|b|`$' mask, as above but including all areas with $`A_B>1^m`$, leaving 72% of the sky.

PSC selection criteria. Our aim was to include virtually all galaxies, even at low latitude, purely from their IRAS properties, while keeping contamination by Galactic sources to a reasonable level. Our final selection criteria were:

| 1. | $`\mathrm{log}(S_{60}/S_{25})`$ | $`>`$ | $`0.3`$ |
| --- | --- | --- | --- |
| 2. | $`\mathrm{log}(S_{25}/S_{12})`$ | $`<`$ | $`1.0`$ |
| 3. | $`\mathrm{log}(S_{100}/S_{25})`$ | $`>`$ | $`0.3`$ |
| 4. | $`\mathrm{log}(S_{60}/S_{12})`$ | $`>`$ | $`0.0`$ |
| 5. | $`\mathrm{log}(S_{100}/S_{60})`$ | $`<`$ | $`0.75`$ |

Upper limits were used only where they guaranteed inclusion or exclusion. We made no constraint as to Correlation Coefficient or identification in the PSC. In total, we selected 16885 sources from the PSC.
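The colour criteria above translate directly into code; a minimal sketch (array names are illustrative, detected fluxes are assumed, and the handling of upper limits used in the real selection is not reproduced):

```python
import numpy as np

def pscz_colour_cuts(S12, S25, S60, S100):
    """Apply the five colour criteria listed above to arrays of
    12/25/60/100 micron fluxes in Jy (detections only)."""
    lg = np.log10
    return ((lg(S60 / S25) > 0.3)
            & (lg(S25 / S12) < 1.0)
            & (lg(S100 / S25) > 0.3)
            & (lg(S60 / S12) > 0.0)
            & (lg(S100 / S60) < 0.75))

# toy usage: a galaxy-like SED (rising to 60-100 um) and a star-like one
S12  = np.array([0.10, 2.00])
S25  = np.array([0.15, 1.00])
S60  = np.array([0.80, 0.40])
S100 = np.array([1.60, 0.30])
print(pscz_colour_cuts(S12, S25, S60, S100))   # [ True  False]
```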
The corresponding PSC source was flagged and a new source created using the addscan fluxes. We find that addscan fluxes are systematically 10% larger than PSC, even for small galaxies; the addscan fluxes quoted in the catalogue have been arbitrarily decreased by 10%, to bring them statistically into line with the PSC fluxes. 1466 new sources associated with large galaxies entered the catalogue, and 1291 corresponding PSC sources were flagged for deletion. Local Group galaxies were excised from the catalogue, and a separate Local Group catalogue was made. Other problem sources. In all other cases, we opted to use PSC fluxes. We were particularly concerned to avoid introducing any latitude-dependent biases into the catalogue. The confusion flag in the PSC is set very conservatively, and in any case the PSC processing in general deals better with confusion than addscans do. A poor Correlation Coefficient generally results from a source being (a) extended, and hence dealt with above, or (b) low S/N and/or confused with cirrus, in which case the PSC is an unbiased estimator, while addscanning risks noise-dependent biases. Supplementary sources. Two separate hours-confirmed detections (HCONs) are required for a source to be included in the PSC. In the 25% of the sky scanned only twice, the PSC completeness is not guaranteed to $`0.6\mathrm{Jy}`$ (ES). We supplemented the catalogue with 1HCON sources with galaxy-like colours from the Point Source Catalog Reject File (ES), where there is a corresponding source in the FSS. This revealed many sources where two individual HCON detections had failed to be merged in the PSC processing, as well as sources which failed at least one HCON for whatever reason. New sources were created or merged with existing ones, and the Flux Correction Factors (ES) recalculated accordingly. We also searched for additional sources with SES flags 1113, 1122 and 1121. Altogether we found an additional 490 galaxies, and 143 sources were deleted through merging. Optical identifications. Optical material for virtually all sources was obtained from COSMOS or APM data, including new APM scans taken of 150 low-latitude POSS-I E plates. In general, we used red plates at $`|b|<10^{\circ }`$ and blue otherwise. The magnitudes from the digitised sky survey plates are not in general more accurate than $`0.5^m`$. At this stage, non-galaxies were weeded out by a combination of optical appearance, IRAS colours and addscan profiles, NVSS maps (Condon et al. 1998), SIMBAD and other literature data, and new $`K^{\prime }`$-band imaging data. We found a total of 1527 confirmed non-galaxies. We are left with 15257 confirmed galaxies, and a further 175 where no identification is known but the IRAS properties are not obviously Galactic. The source density amounts to 1460 galaxies per steradian at high latitudes. The redshift survey. We maintained a large database of redshift information from the literature, NED, LEDA and ZCAT, and also other survey work in progress. Of the $`\sim `$15,000 galaxies in the sample at the outset, two-thirds had redshifts that were either already known or expected from other surveys. We were allocated spectroscopic time on the INT (7 weeks), AAT (6 nights), CTIO 1.5m (18 nights) and INAOE 2.1m (2 weeks) telescopes over a total of 4 years. 4600 redshifts were obtained in this time, mostly from blended $`\mathrm{H}\alpha `$/$`\mathrm{NII}`$ lines at low dispersion. The final external error averages $`120\mathrm{km}\mathrm{s}^{-1}`$. ## 3 Reliability, completeness, uniformity, flux accuracy Reliability.
The major sources of unreliability in the catalogue are (a) spurious identifications with galaxies nearby in angular position, and (b) incorrect redshift determination. Selection of targets for spectroscopy used the likelihood methods of Sutherland and Saunders (1992), and the number of incorrect identifications is likely to be very small. For 101 sources, our own redshifts are of marginal quality, and some fraction will be erroneous. Literature redshifts were selected originally on the basis of simple $`2^{\prime }`$ proximity (Rowan-Robinson et al. 1990); later cross-correlation used the methods of Sutherland and Saunders, but still depended on quoted positions in the literature. For 143 sources, we have both literature and our own redshifts; 5 are badly discrepant ($`\mathrm{\Delta }V>1000\mathrm{km}\mathrm{s}^{-1}`$), suggesting that about 3% of the literature redshifts (300 in total) are seriously in error. Completeness. 1) In the 25% of sky covered by only 2 HCONs, the PSC incompleteness is estimated as 20% (differential) and 5% (cumulative) at $`0.6\mathrm{Jy}`$ (ES). At $`|b|>10^{\circ }`$ we estimate that we have recovered $`\frac{3}{4}`$ of these as supplementary sources (figure 1a). Lower-latitude 2HCON sky (2% of the area) retains the PSC incompleteness. 2) The PSC is confusion limited in the Plane and also affected by noise lagging. However, the mask excludes the worst regions, and the source counts shown in figure 1b show no evidence for low-latitude incompleteness to $`0.6\mathrm{Jy}`$. 3) Some galaxies are excluded by our colour criteria. Comparison with the 1.2Jy survey suggests that in total, about 50 galaxies from the PSC have been excluded. 4) No attempt was made to systematically obtain redshifts for galaxies with $`b_J>19.5^m`$. The distance beyond which this causes incompleteness depends on extinction; we estimate that there is incompleteness for $`\mathrm{log}z>-1-0.2(A_B+0.1A_B^2)`$ (a short numerical sketch of this threshold is given at the end of this section). 5) Redshifts are still unknown for 175 galaxies with $`b_J<19.5^m`$ (1.1% of the sample). Flux accuracy. The error quoted in ES for PSC $`60\mu \mathrm{m}`$ fluxes for genuine point sources is 10%. We are more concerned with any absolute error component, since this will lead to Malmquist-type biases. Lawrence et al. (1999) find an additional absolute average error of $`0.059\pm 0.007\mathrm{Jy}`$. This error leads to Malmquist biases in the source densities which depend on noise level and hence position, especially latitude and 2 vs 3HCON sky. We estimate that the resulting non-uniformity introduced into the catalogue is less than 5% (differential at the flux limit) and 2% (cumulative), and this agrees with the source counts (figure 1a). However, note that at some level Malmquist effects will be masked by incompleteness. Flux uniformity. 1) The absolute calibration of the third HCON was revised by a few percent after the release of the PSC (and FSS). The effect of this revision on PSC fluxes would be to change those in the 75% of the sky covered by 3 HCONs by 1–2%. 2) Regions affected by the South Atlantic Anomaly may suffer from calibration problems. However, the source counts for declinations $`-40\mathrm{deg}<\delta <-10\mathrm{deg}`$, where the data are most likely to be affected, show no evidence for any variation (figure 1a) above the few % level in the source counts. 3) Whenever the satellite crossed the Galactic Plane, or other very bright sources, the detectors suffered from hysteresis. Strauss et al.
(1990) found that the likely error is typically less than 1% and always less than 2.2%; this is in agreement with the source counts for identified galaxies in the PSCz as a function of $`I_{100}`$ (figure 1b). Summary. Overall, variations in source density across the sky due to incompleteness, Malmquist effects and sensitivity variations are not believed to be greater than a few percent anywhere at high latitudes for $`z<0.1`$. Tadros et al. (1999) found an upper limit of 3% to the amplitude of large scale, high latitude harmonic components of the density field of the PSCz, confirming our estimate. At lower latitudes, variations are estimated to be no greater than 10% for $`z<0.05`$.
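For completeness point 4 above, a minimal numerical sketch (assuming the threshold relation $`\mathrm{log}z=-1-0.2(A_B+0.1A_B^2)`$ quoted there; the function name is ours) of the redshift beyond which the $`b_J>19.5^m`$ identification limit introduces incompleteness:

```python
def z_incomplete(A_B):
    """Redshift beyond which the b_J > 19.5 identification limit causes
    incompleteness, from log z = -1 - 0.2*(A_B + 0.1*A_B**2)."""
    return 10.0**(-1.0 - 0.2*(A_B + 0.1*A_B**2))

for A_B in (0.0, 1.0, 2.0):
    print(f"A_B = {A_B:.0f} mag -> complete only to z ~ {z_incomplete(A_B):.3f}")
```

The resulting thresholds ($`z\sim 0.10`$ for unextinguished sky, falling to $`z\sim 0.03`$ at the $`A_B=2^m`$ mask boundary) are broadly consistent with the high- and low-latitude uniformity statements in the Summary.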
no-problem/9909/astro-ph9909092.html
ar5iv
text
# A search for the submillimetre counterparts to Lyman break galaxies ## 1 Introduction Using a variety of techniques in different wavebands, the detailed study of young galaxies is being pushed to higher redshifts. The method of selecting high redshift galaxies using multi-colour broadband observations of the rest-frame UV stellar continuum has been successfully applied to ground-based surveys, and also to the Hubble Deep Field (HDF). In particular, the absence of emission in the $`U`$-band at $`z\sim 3`$ due to the presence of the Lyman break feature has been very effective \[Steidel & Hamilton 1993, Steidel, Pettini & Hamilton 1995, Steidel et al. 1998\], and is usually referred to as the Lyman break technique (e.g. Steidel et al. 1996a). A full characterization of the properties of the population of galaxies chosen in this way, called Lyman break galaxies (LBGs), will likely provide answers to some key questions of galaxy formation and evolution, particularly those dealing with the murky role of dust at high redshift. The colours of Lyman break galaxies are observed to be redder than expected for dust-free star-forming objects. The spectral slope of the ultraviolet continuum and the strength of the H$`\beta `$ emission line suggest that some interstellar dust is already present in these young galaxies, and that it attenuates their UV luminosities by a factor of $`\sim 4.5`$, although factors of as much as $`100`$ are implied for some LBGs (Steidel et al. 1998). In the most extreme objects this implies star formation rates (SFRs) of several times $`100\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, reaching as high as $`1000\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ in some cases. The resulting revisions to the global star formation rate contributed by galaxies at redshifts $`z>2`$ can be significant. However, the prescription for a ‘correct’ de-reddening is still unknown (see for example Meurer et al. 1997). This leads to an uncertainty in the estimates of dust obscuration in LBGs derived from rest-frame ultraviolet spectra (see e.g. Pettini et al. 1998; Calzetti et al. 1996). The role dust plays in these young galaxies is not clear cut, since redder spectra can also arise from an ageing population, or from an initial mass function (IMF) deficient in massive stars (see e.g. Bruzual & Charlot 1993). Recent studies of nearby star-forming galaxies (Tresse and Maddox 1998) have concluded that the extinction at a wavelength of $`2000`$Å is typically 1.2 mag. The situation at earlier epochs is unclear, but there is no need to assume that at higher redshifts an increasing fraction of the star formation activity takes place in highly obscured galaxies. The precise amount of reddening required, and its interpretation, are therefore still open questions. Observations using a sensitive new bolometric array, SCUBA on the JCMT (described in section 2), have discovered a population of submillimetre-detected galaxies \[Smail et al. 1997, Hughes et al. 1998, Barger et al. 1998, Eales et al. 1999, Holland et al. 1998, Blain et al. 1999b\] which might push up the global SFR at $`z\sim 3`$ by a factor of perhaps five (see for example Smail et al. 1997).
Identifying the optical counterparts to these galaxies, however, is a non-trivial matter for two main reasons: 1) the SCUBA beamsize at 850$`\mu `$m (the optimal wavelength for these studies) is $`\sim 15`$ arcsec, with pointing errors for the telescope of order 2 arcsec; 2) the large, negative K-corrections of dusty star-forming galaxies at these wavelengths (i.e. the increase in flux density as the objects are redshifted, because of the steep spectrum of cool dust emission) imply that sub-mm observations detect such objects at $`z>1`$ in an almost distance-independent manner (Blain & Longair 1993). Thus several candidate optical galaxies are often present within the positional uncertainty of the sub-mm detection, and there is a very strong possibility that the actual counterpart is at much higher redshift and undetectable with current optical imaging. Because of these difficulties, it is presently unknown whether these newly discovered sub-mm galaxies are drawn from a population similar to the LBGs, or whether the two methods select entirely different types of object at similar redshifts. Attention has so far focussed on constraining SCUBA source counts and using follow-up observations in other wavebands to identify the sub-mm galaxies. Direct estimates of the source counts have recently come from several survey projects covering small regions on the sky to 3$`\sigma `$ sensitivities ranging from $`0.5`$mJy to $`8`$mJy (Smail et al. 1997; Hughes et al. 1998; Barger et al. 1998; Holland et al. 1998; Eales et al. 1999; Blain et al. 1999b; Chapman et al. 1999b). In addition, statistical upper limits can be put on the source counts at even fainter limits through fluctuation analysis of ‘blank field’ data \[Hughes et al. 1998, Borys, Chapman & Scott 1999\]. There is currently some debate about what fraction of detected sources lie at redshifts above $`z\sim 2`$, and what fraction may be at more modest distances, $`z<1`$ (see Hughes et al. 1998; Smail et al. 1998; Lilly et al. 1999; Barger et al. 1999). It is fair to say that the numbers are still so small, and the optical identification procedure still sufficiently ambiguous, that this debate is currently unresolved. Furthermore, the importance of AGN-fuelled star formation, i.e. the fraction of SCUBA-bright sources which are active galaxies, is also still an open issue (see e.g. Ivison et al. 1999a; Almaini et al. 1999). A statistical estimate of the number density of LBGs whose UV spectra imply SFR $`>400\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, after correcting for extinction and including photometric errors (Adelberger & Steidel 2000), results in a comoving number density that is roughly comparable to the comoving number density of SCUBA sources with this SFR (2000 per square degree – e.g. Eales et al. 1999; Blain et al. 1999b). This raises the question of whether the populations are the same, or at least related, and whether, if the uncertainties in the LBG population were better constrained and more near-IR data were available, it would be relatively easy to select star-forming galaxies detectable with SCUBA. In order to address these issues, we have targetted a sample of LBGs with high UV-estimated SFRs for photometry in the sub-mm. The full sample of more than 700 LBGs which have spectroscopic redshifts (Steidel et al. in preparation) has a range of extinction implied by far-UV models from zero to a factor of more than 100.
We have chosen a small number of candidates from this group whose UV properties indicate that they are the most likely to be detectable with SCUBA. The sub-mm observations provide an estimate of the global dust mass and SFR which is unaffected by the obscuration uncertainties inherent in the UV continuum, but which, on the other hand, suffers from many model dependencies. Relating the SFRs obtained from the two wavelength regimes may help to elucidate the energetics of star formation in luminous star-forming galaxies. The bulk of this paper therefore focuses on comparison of SCUBA flux densities with UV-predicted 850$`\mu `$m emission, or equivalently of the SFRs estimated from the rest-frame UV and far-IR wavebands. ## 2 Observations A total of 16 LBGs were targetted in three different regions: the HDF flanking fields \[Williams et al. 1996\], the Westphal 14 hour field (also known as the ‘Groth Strip’, Groth et al. 1994), and several deep redshift survey fields at 22 hours (Steidel et al. in preparation). These are three of the regions for which extensive study of LBGs has already been carried out (see e.g. Steidel et al. 1996a). Our initial choice of targets was based on the qualitative assumption that high UV-derived SFRs and bright magnitudes might imply large sub-mm fluxes. Hence the LBGs chosen for sub-mm observation had the largest SFRs inferred from the UV and optical data available at the time of observation, ranging from 360 to $`860\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ (corresponding to 1.8–4.3 mJy at 850$`\mu `$m). Subsequent careful analysis of the errors through Monte Carlo simulations (Adelberger & Steidel 2000) has shown that it is actually the brighter galaxies, with relatively small corrections to their UV SFRs, that are more likely to have large sub-mm fluxes. In addition, optical spectral and imaging data obtained subsequent to the sub-mm observations also tended to lead to reductions in the initial estimates; in fact a few of the objects which had the most extreme reddening estimates turned out to be at much lower redshift. As a result, half of the original 16 LBGs targetted no longer have implied SFRs which would make them detectable with SCUBA. Nevertheless, it is still interesting to study carefully what the observed sample may be telling us. Those objects with implied SFRs which place them at a SCUBA flux density level of at least 0.5 mJy are included in Table 1. (In fact all have implied 850$`\mu `$m flux densities $`>`$1 mJy.) The remaining objects are included in Table 2, and they can be regarded, in a sense, as a control sample. There are no redshift estimates for the objects in Table 2. Object names in the tables denote catalogue entries for these fields, as discussed in Steidel et al. (in preparation). ### 2.1 Submillimetre observations The observations were conducted with the Submillimetre Common-User Bolometer Array (SCUBA, Holland et al. 1999) on the James Clerk Maxwell Telescope. The data set was obtained over six nights from observing runs in May and June 1998. We operated the 91-element Short-wave array at 450$`\mu `$m and the 37-element Long-wave array at 850$`\mu `$m simultaneously in photometric mode, giving half-power beam widths of 7.5 and 14.7 arcsec respectively. Conditions were generally reasonably good in May, with $`\tau _{850}`$ ranging from 0.31 to 0.40, and very good in June ($`\tau _{850}`$=0.13–0.16). Observations were divided into scans lasting about $`40`$min for $`100\times 18`$s integrations.
The usual 9-point jiggle pattern was employed to reduce the impact of pointing errors, and thereby produce greater photometric accuracy, by averaging the source signal over a slightly larger area than the beam. Whilst jiggling, the secondary was chopped by 45 arcsec in azimuth at the standard 7.8125 Hz. This mode allows deeper integration on the central pixel for a fixed observing time than in mapping mode. Pointing errors were below 2 arcsec, checked on nearby blazars every $`80`$min. The total exposure times and resulting rms sensitivities are listed in Tables 1 and 2. The data were reduced using both the Starlink package SURF (Scuba User Reduction Facility, Jenness & Lightfoot 1998), and independently with our own routines (see Borys, Chapman & Scott 1999). Spikes were rejected from the double difference data, which were then corrected for atmospheric opacity and sky emission using the median of all the pixels except for the central pixel and any obviously bad pixels. We improved the effective sensitivity to source detection by incorporating the flux density in the negative off-beam pixels of the LBG source, as described in the following sub-section, and by removing possible sources in the field before estimating background flux levels; any bolometer that had a signal $`>2\sigma `$ was removed from the sky estimation. Whether or not the negative-beam bolometers were used in the sky subtraction made no discernible difference. We have written routines which extend the capabilities of SURF by analysing the photometry of each bolometer independently, weighting each scan by its variance. This is crucial for data on the same object taken over several nights of observing. Our routines also allow us to characterise the noise spectrum and to check for residual gradients across the array. In practice, however, we found it was unnecessary to subtract gradients from this data set. Analysis of the entire array used in photometry mode is essential to extract meaningful numbers at faint signal levels, since we are approaching the noise levels for which source counts begin to make a sizable contribution to the background noise; the confusion limit is at $`\sim 1`$mJy (Hughes et al. 1998). We discuss this further in Section 2.1.2. We calibrated our data against Uranus and IRC+10216. The calibrations agree with each other, and with the gains found by other observers using SCUBA at around the same time, to within 10 per cent at 850$`\mu `$m and to 20 per cent at 450$`\mu `$m, remaining stable night to night. Folding this calibration uncertainty into our error budget has little effect at the low signal-to-noise ratio levels of these observations. #### 2.1.1 Folding in the off-beams The signal from the off-beams that chop on to the source can be used to improve the estimate of the flux density from the source. For a source with flux density $`S`$, measured with an efficiency $`ϵ`$ and measurement error $`\sigma `$, the probability that the measured value is $`x`$ is given by: $$P(x)\propto \mathrm{exp}\left[-\frac{1}{2}\left(\frac{x-ϵS}{\sigma }\right)^2\right].$$ (1) By maximizing the joint probability of $`N`$ measurements, the maximum likelihood estimator of the source flux density is $$\overline{S}=\frac{\sum _i^Nx_i^{\prime }\sigma _i^{\prime \,-2}}{\sum _i^N\sigma _i^{\prime \,-2}},$$ (2) where $`x^{\prime }=x/ϵ`$ and $`\sigma ^{\prime }=\sigma /ϵ`$. As the sky rotates during an experiment, the effective efficiency, $`ϵ`$, for the off-beams varies. Fig.
1 shows an illustration of this effect during our observations of W-MMD11. For our double-difference observations there are instantaneously $`N=3`$ beams, with the central beam having an efficiency of unity and the two off-beams having $$ϵ=-0.5\mathrm{exp}\left(-\frac{d^2}{2\sigma _\mathrm{b}^2}\right),$$ (3) where $`d`$ is the angular distance of the off-beam centre from the source, and $`\sigma _\mathrm{b}`$ is the Gaussian width of the beam. In the case of W-MMD11, our detection level increases from $`3.0\sigma `$ to $`3.9\sigma `$ after folding in the negative flux density from the outer pixels. Our sub-mm flux density measurements for all of our targets are presented in Tables 1 and 2, including the upper limits at 450$`\mu `$m as well as the 850$`\mu `$m data. #### 2.1.2 Confusion noise Given that our 850$`\mu `$m integrations go relatively faint, we need to be concerned about the issue of confusion noise \[Scheuer 1957, Scheuer 1974, Condon 1974, Wall et al. 1982\]. In other words, we need to consider to what extent the fluctuations due to undetected sources contribute to our error bars. This is additionally complicated by our use of double-difference data, rather than data from fully-sampled maps. Blain et al. (1998) quote a value of $`0.44`$mJy for the rms confusion noise, derived from their source counts. We find that any reasonable fit to the counts, including extrapolation to low fluxes (with the constraint of not over-producing the far-IR background), leads to an rms of no more than $`\sigma _{conf}\sim 0.5`$mJy. This, then, is the confusion noise for a single bolometer observation of the sub-mm sky. Since the JCMT gives a triple-beam response, i.e. $`\mathrm{On}-(\mathrm{Off}_1+\mathrm{Off}_2)/2`$, the rms in our photometry observations is expected to be $`\sqrt{3/2}`$ times higher than this, for the simplest configuration. But, since we chopped in azimuth while the sky rotated, in practice we are taking the central value minus the average of several other sky positions, so the rms of our photometry observations ends up being only a little higher than for an individual bolometer measurement. We checked with Monte Carlo simulations (of sources drawn randomly from reasonable count models) that the confusion noise for our observations was unlikely to be higher than $`0.55`$mJy. This represents the expected error on our fluxes due to the presence of undetected sources in the beam. This confusion noise is already contained, at least in part, in our quoted error bars, and should not be added separately to the error budget: our noise estimates come from the variance among the different flux samples of each target, and since the off-beams were sampling different positions on the sky, the variation in the flux of an object throughout the observation period was partly due to confusion. We believe that the error bar we quote on the flux of each target is a reasonable estimate of the uncertainty in the flux at that central sky position. Each of our target measurements had a final estimated uncertainty of about $`1`$mJy. Since confusion noise and noise from the atmosphere or the instrument add in quadrature, the contribution from confusion noise is sub-dominant – contributing less than 30% to the variance.
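To make the off-beam folding of Section 2.1.1 concrete, here is a minimal sketch of the estimator of equations (1)–(3); the sample values are invented for illustration and are not our W-MMD11 data:

```python
import numpy as np

def fold_offbeams(x, sigma, eps):
    """Maximum-likelihood flux from eq. (2): an inverse-variance weighted
    mean of the efficiency-corrected samples x' = x/eps, sigma' = sigma/|eps|."""
    x, sigma, eps = (np.asarray(a, dtype=float) for a in (x, sigma, eps))
    xp, sp = x/eps, sigma/np.abs(eps)
    w = sp**-2
    return np.sum(w*xp)/np.sum(w), np.sqrt(1.0/np.sum(w))

sigma_b = 14.7/2.355                    # Gaussian beam width from the 14.7" HPBW
d = np.array([0.0, 0.0, 3.0, 8.0])      # source-to-beam distances (arcsec);
                                        # the first two samples are on-beam
eps = np.where(d == 0.0, 1.0,           # on-beam efficiency is unity,
               -0.5*np.exp(-d**2/(2.0*sigma_b**2)))   # off-beams follow eq. (3)
x = np.array([5.2, 6.1, -2.4, -1.9])    # measured double-difference fluxes (mJy)
s = np.full(4, 1.5)                     # per-sample errors (mJy)
print("S = %.2f +/- %.2f mJy" % fold_offbeams(x, s, eps))
```

Samples whose off-beam falls far from the source acquire a large effective $`\sigma ^{\prime }`$ and hence negligible weight, so folding in the off-beams helps most while sky rotation keeps the source close to an off-beam centre.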
Note also that when we average our measurements together (in Section 4.1) we can reach, in principle, below both the individual sky/instrumental noise and the confusion noise in each measurement. ### 2.2 Optical spectroscopy Spectra for these objects were obtained using the LRIS instrument on the Keck telescope in the course of following up the $`U`$-band dropout candidate objects in the survey fields, to confirm their redshifts. Details of the observations can be found elsewhere \[Steidel et al. 1996b, Adelberger et al. 2000\]. Spectra typically show weak Ly$`\alpha `$ emission and several absorption lines from the interstellar medium within these galaxies. None of our targetted objects has a remarkable optical spectrum; strong emission lines, for example, are absent. Fig. 2 shows the spectra of two representative objects, including the possible sub-mm detection. ## 3 Comparing the UV with the far-IR In order to compare optical (rest-frame UV) observations to our sub-mm (rest-frame far-IR) data, we can calculate a far-IR flux consistent with the UV extinction and compare that to our direct SCUBA measurements. Alternatively, we can calculate the SFR implied by each of the data sets (UV and sub-mm) and compare those. For our one detected source, the second, perhaps less direct, comparison may help elucidate the underlying physical processes. We list both sets of estimates in Table 1, and now describe how we arrived at the values given there. ### 3.1 Estimates of star formation rates from the sub-mm data To estimate the SFR from the measured sub-mm flux density we follow an approach which has become conventional for dealing with IRAS galaxies, for example. In essence this involves estimating the $`60\mu `$m flux, which is approximately linearly related to the SFR. We first calculate the quantity $$\mathcal{L}_{\mathrm{rest}}\equiv \left(\nu L_\nu \right)_{\mathrm{rest}}=4\pi D_\mathrm{L}^2\nu _{\mathrm{obs}}S_{\mathrm{obs}},$$ (4) where $`L_\nu `$ is the luminosity density and $`D_\mathrm{L}`$ is the usual luminosity distance. In the rest frame we are observing the galaxy at $`850/(1+z)\mu `$m. We then assume that the dust can be described by a mass absorption coefficient of the form $$k_\mathrm{d}=0.14(\lambda /850\mu \mathrm{m})^{-\beta _\mathrm{d}}\mathrm{m}^2\mathrm{kg}^{-1}$$ (5) (see Hughes et al. 1997 and references therein), with a typical value of $`\beta _\mathrm{d}=1.5`$ for the index. Note that other authors (e.g. Lisenfeld, Isaak & Hills 2000 and references therein) suggest values of $`k_\mathrm{d}`$ which are smaller by a factor $`\sim 1.5`$; our dust masses (and other derived quantities) are thus fairly conservative. Since $$L_\nu \propto S_\nu \propto k_{\mathrm{d},\nu }B_\nu ,$$ (6) with $`B_\nu (T_\mathrm{d})`$ the Planck function, we can obtain $`\mathcal{L}(60\mu \mathrm{m})`$ from $`\mathcal{L}_{\mathrm{rest}}`$ using the ratio of $`\nu k_{\mathrm{d},\nu }B_\nu `$ at the two wavelengths. We also assume for definiteness that $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }_0=1.0`$. Both $`L_{\mathrm{FIR}}`$ and SFR (as well as inferred properties such as dust mass) scale as $`h^{-2}`$, and at these redshifts the values are about 2 times higher in an open $`\mathrm{\Omega }_0=0.3`$ universe. As an intermediate step we could also estimate the dust mass at this point, assuming that the dust is optically thin.
The dust mass estimate is just $$M_\mathrm{d}=\frac{S_{\mathrm{obs}}D_\mathrm{L}^2}{k_\mathrm{d}^{\mathrm{rest}}B(\nu ^{\mathrm{rest}},T_\mathrm{d})(1+z)}$$ (7) (Hughes et al. 1997), with $`B(\nu ^{\mathrm{rest}},T_\mathrm{d})`$ the Planck function evaluated in the rest frame, assuming the dust to have a temperature $`T_\mathrm{d}`$. We take $`T_\mathrm{d}=50`$K as our standard value. Note that we are assuming that the full sub-mm flux is due to thermal dust emission; a strong synchrotron flux would cause our mass estimates to be too high. Furthermore, if the dust is not entirely optically thin this would also affect the estimate of $`M_\mathrm{d}`$ and correspondingly $`\mathcal{L}`$. We can proceed from the estimate of $`\mathcal{L}(60\mu \mathrm{m})`$ to the SFR by firstly applying a bolometric correction, and then using a standard conversion factor. Rowan-Robinson et al. (1997) suggest that $`L_{\mathrm{FIR}}\simeq 1.7\mathcal{L}(60\mu \mathrm{m})`$, where $`L_{\mathrm{FIR}}`$ means the total luminosity over, say, $`1\mu `$m–$`1000\mu `$m. The far-IR luminosity is expected to be directly proportional to the star formation rate $`\mathrm{SFR}`$, i.e. $$L_{\mathrm{FIR}}=K\times \mathrm{SFR},$$ (8) where the coefficient $`K`$ is estimated from well-studied local objects to be $`K=2.2\times 10^9\mathrm{L}_{\odot }\mathrm{M}_{\odot }^{-1}`$yr (Rowan-Robinson et al. 1997). Estimates in the literature range from $`1.5\times 10^9`$ to $`4.2\times 10^9\mathrm{L}_{\odot }\mathrm{M}_{\odot }^{-1}`$ yr (Thronson & Telesco 1986; Scoville & Young 1983). Significant differences between local and distant galaxies could of course change this scaling factor. Note the limitations of this procedure in assuming an isothermal distribution; dust components with higher temperature, or AGN contributions to the flux, could lead to higher $`L_{\mathrm{FIR}}`$ for the same $`M_\mathrm{d}`$. The estimated far-IR flux and SFR will also depend strongly on the assumed form of the grey body. Bolometric corrections will typically vary as $`T_\mathrm{d}^{4+\beta _\mathrm{d}}`$ (see e.g. Blain et al. 1999c), and so different dust temperatures or emissivity indices can give significantly different results. We make an empirical estimate of the total uncertainty involved in the dust models by fitting Eq. (6), for $`T_\mathrm{d}=50`$K, to the long wavelength data for several vigorous star-forming galaxies for which we have rest-frame IR data. The discrepancy between our fits at $`60\mu `$m and the actual data at $`\lambda =850\mu \mathrm{m}/(1+z)\sim 200\mu \mathrm{m}`$ spans a factor of 3. We adopt this as the uncertainty in $`L_{\mathrm{FIR}}`$ inferred from our SCUBA data (see Hughes & Dunlop 1998 for a recent discussion of the various uncertainties in such estimates). Fig. 3 provides an illustration of the uncertainties involved in estimating $`L_{\mathrm{FIR}}`$ from measurements at a single wavelength. Dashed curves in Fig. 3 are for model dust emission spectra with $`T_\mathrm{d}`$ = 30, 50, and 70 K, normalized to our measurement. Notice that these three curves differ by a factor of three at $`\lambda =60\mu \mathrm{m}`$, the wavelength at which one ordinarily infers the SFR for nearby galaxies. In principle we could use our own $`450\mu \mathrm{m}`$ data to remove the uncertainty in dust parameters, but none of our data are precise enough to be very useful. For W-MMD11 we can infer that $`T_\mathrm{d}\lesssim 90`$K, with much weaker limits for the other objects.
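The chain from measured flux density to $`L_{\mathrm{FIR}}`$, SFR and dust mass (equations 4–8) is short enough to write out. In the sketch below the inputs ($`S_{850}=5.5`$mJy at $`z=3`$, $`T_\mathrm{d}=50`$K) are round illustrative numbers rather than our measured values, and the cosmology is the $`H_0=50`$, $`\mathrm{\Omega }_0=1`$ choice stated above:

```python
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8   # SI units
L_sun, M_sun = 3.846e26, 1.989e30

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0*h*nu**3/c**2 / np.expm1(h*nu/(k_B*T))

def kappa(lam, beta_d=1.5):
    """Mass absorption coefficient of eq. (5) [m^2 kg^-1]."""
    return 0.14 * (lam/850e-6)**(-beta_d)

def nu_kappa_B(lam, T_d):
    """nu * k_d * B_nu, used to scale (nu L_nu) between wavelengths (eq. 6)."""
    nu = c/lam
    return nu * kappa(lam) * planck(nu, T_d)

# Illustrative inputs (not our measured values): S_850 = 5.5 mJy at z = 3
z, S_obs, T_d = 3.0, 5.5e-29, 50.0              # flux in W m^-2 Hz^-1
H0 = 50e3/3.086e22                              # 50 km/s/Mpc in s^-1
D_L = (2.0*c/H0)*(1.0 + z - np.sqrt(1.0 + z))   # EdS luminosity distance [m]

lam_rest = 850e-6/(1.0 + z)
L_rest = 4.0*np.pi*D_L**2 * (c/850e-6) * S_obs  # eq. (4), in W

L_60  = L_rest * nu_kappa_B(60e-6, T_d)/nu_kappa_B(lam_rest, T_d)
L_FIR = 1.7*L_60                                # bolometric correction
SFR   = (L_FIR/L_sun)/2.2e9                     # eq. (8) [M_sun/yr]
M_d   = S_obs*D_L**2/(kappa(lam_rest)*planck(c/lam_rest, T_d)*(1.0 + z))  # eq. (7)

print(f"L_FIR ~ {L_FIR/L_sun:.1e} L_sun, SFR ~ {SFR:.0f} M_sun/yr, "
      f"M_d ~ {M_d/M_sun:.1e} M_sun")
```

Rerunning with $`T_\mathrm{d}=30`$ or 70 K shows directly the strong temperature sensitivity discussed above.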
The data for our one detected source, W-MMD11, and for our highest sensitivity non-detection, W-DD20, are shown in Fig. 3, along with the redshifted spectrum of a representative star-forming galaxy, M82 (using the model fit from Efstathiou, Rowan-Robinson & Siebenmorgen 1999). W-MMD11 is fairly well approximated by the M82 SED, except for the larger $`K`$-band flux. W-DD20, on the other hand, has a much lower sub-mm flux than M82 when normalized to the $`K`$ and $`R_\mathrm{s}`$ magnitudes; W-DD20 is clearly not an analogue of nearby starburst galaxies. ### 3.2 Estimate of the sub-mm flux density directly from the UV To estimate the $`850\mu `$m flux, $`S_{850\mathrm{UV}}`$, we use the Meurer et al. (1999) empirical relationship between UV slope $`\beta `$ and the ratio $`L_{\mathrm{FIR}}/\mathcal{L}(1600\mathrm{\AA })`$, where $`\mathcal{L}(1600\mathrm{\AA })`$ is $`\nu L_\nu `$ at $`1600`$Å and $`L_{\mathrm{FIR}}`$ the approximately bolometric dust luminosity as estimated from IRAS 60 and $`100\mu `$m fluxes. From this we can directly estimate the flux at $`850\mu `$m, assuming an isothermal modified blackbody with a dust temperature of $`50`$K and an emissivity index of 1.5 (see for example Rowan-Robinson et al. 1990). This approach is consistent with the Calzetti law applied to observations of local starburst galaxies (Meurer et al. 1997; Calzetti et al. 1996). Ouchi et al. (1999) have also demonstrated that such a procedure is consistent with the empirical relation for $`L_{\mathrm{FIR}}/L_{\mathrm{UV}}`$ discussed by Meurer et al. (1997, 1999). Our UV-based predictions for sub-mm emission are listed in Table 1. Note that the value of $`K`$ (from equation (8)) that we use is approximately a factor of 3 lower than the effective value used by Meurer et al. (1999). Our results are therefore conservative, in the sense that other reasonable estimates could predict even higher sub-mm fluxes, and hence make our non-detections even harder to understand. ### 3.3 Estimates of star formation rates from the UV spectra It is worth describing the UV-based SFR estimates in more detail, to reveal various sources of potential uncertainty. The star formation rate is estimated from the rest-frame UV by calculating an implied bolometric luminosity ($`L_{\mathrm{bol}}`$) and extrapolating to the SFR using an assumed initial mass function (IMF). Dust opacity leads to both a global dimming and a reddening of a galaxy’s SED, with the UV region being most affected. We also need to take into account inter-galactic dimming, due to intervening Ly$`\alpha `$ blanketing and photoelectric absorption (Madau et al. 1996). The largest uncertainty in the SFR calculation from the rest-frame UV results from the dust extinction. Extinction curves show some environmental dependence, e.g. differences between the Milky Way, the LMC, the SMC and M31 (see Calzetti 1997 for a discussion). With extended objects the effective obscuration is a function of the dust distribution and geometry (Calzetti et al. 1996). We have focussed on the dust effects modelled by Calzetti (1997), which will be referred to hereafter as the Calzetti attenuation law (see also Calzetti et al. 1995, 1996; Calzetti & Heckman 1999; Meurer, Heckman & Calzetti 1999). Meurer et al. (1999) found that their general procedure produced results which were in good agreement with radio data (Richards 2000). The correction factors used to obtain the UV-corrected SFRs in Tables 1 and 2 are based on the Calzetti attenuation curve.
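A minimal sketch of such a UV-side estimate is given below, assuming the published Meurer et al. (1999) calibration $`A_{1600}=4.43+1.99\beta `$ and the Madau, Pozzetti & Dickinson (1998) conversion $`\mathrm{SFR}=L_\nu (1600\mathrm{\AA })/(8.0\times 10^{27}\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1})`$ for a Salpeter IMF; the exact constants entering our Table 1 values follow the procedure summarized at the end of this sub-section and may differ in detail:

```python
import numpy as np

def sfr_uv_corrected(m_R, z, beta, H0=50.0):
    """Dust-corrected UV star-formation rate [M_sun/yr], assuming
    A_1600 = 4.43 + 1.99*beta (Meurer et al. 1999) and
    SFR = L_nu(1600A)/8.0e27 erg/s/Hz (Madau et al. 1998); Omega_0 = 1."""
    c = 2.998e5                                  # km/s
    D_L_mpc = (2.0*c/H0)*(1.0 + z - np.sqrt(1.0 + z))  # EdS distance, Mpc
    D_L_cm = D_L_mpc*3.086e24
    # R_s roughly samples rest-frame ~1600A at z ~ 3; AB magnitude -> f_nu
    f_nu = 10.0**(-0.4*(m_R + 48.6))             # erg/s/cm^2/Hz
    L_nu = 4.0*np.pi*D_L_cm**2 * f_nu/(1.0 + z)  # rest-frame L_nu
    A_1600 = max(4.43 + 1.99*beta, 0.0)          # attenuation in mag
    return L_nu * 10.0**(0.4*A_1600) / 8.0e27

# e.g. an R_s ~ 24 LBG at z = 3 with observed UV slope beta = -0.5:
print(f"{sfr_uv_corrected(24.0, 3.0, -0.5):.0f} M_sun/yr")
```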
The slope of the UV continuum, usually referred to as $`\beta `$, can be estimated for $`z\sim 3`$ objects from broad band colours covering the $`G`$- and $`R_\mathrm{s}`$-bands \[Steidel et al. 1996a\]. An unreddened UV stellar continuum slope of $`\beta _0=-2.1`$, thought to be appropriate for high redshift stellar populations \[Calzetti et al. 1995\], is used throughout; the difference between $`-2.1`$ and the measured slope indicates the total dust column density. The mean dust correction for the entire LBG sample is $`\sim 4.5`$. The extinction corrections are computed using a $`10^9`$ yr starburst, which is likely to be a conservative assumption, since younger starbursts would give bluer intrinsic UV spectra, implying larger reddening corrections and more dust. The SFR can vary considerably with differences in the assumed reddening curve. The Calzetti model results in larger corrections, by a factor of $`\sim 2`$, than those derived from the SMC (Bouchet et al. 1985). However, reddening curves derived from resolved stars, such as in the SMC, do not include scattered light along the line-of-sight, as galaxy-derived curves do. Meurer et al. (1999) have recently shown that, under the assumption of an SMC-type reddening curve, the far-IR/far-UV relation observed in local UV-selected star-forming galaxies would not be reproduced, because of this effect. Our corrections to the SFR are based on the properties of local starburst galaxies (which themselves have a far-IR/far-UV flux relation containing considerable scatter; Meurer et al. 1997). This relationship could of course be different at high redshift. In summary then, we estimate the quantity $`\mathrm{SFR}_{\mathrm{UV}\mathrm{corr}}`$ as follows. Meurer et al. (1999) express their empirical $`\beta `$–$`L_{\mathrm{FIR}}/\mathcal{L}_{1600}`$ relation also as a $`\beta `$–$`A_{1600}`$ relation, where $`A_{1600}`$ is the estimated extinction in magnitudes at $`1600`$Å. The SFRs are calculated using the $`A_{1600}`$ value from this relationship to estimate the dust-corrected $`1600`$Å luminosity of the LBGs, and then using another relationship (from Madau, Pozzetti & Dickinson 1998) between $`\mathcal{L}_{1600}`$ and SFR to estimate the star-formation rates. This is a somewhat different method from that used in estimating SFRs from the SCUBA fluxes. We chose to follow two slightly different procedures since one is more familiar in the optical literature and the other in the far-IR literature; the extent to which they differ should also indicate the level of uncertainty. ### 3.4 Bias caused by uncertainties in the UV-estimated SFR There is an obvious concern that in selecting those objects with the highest implied SFRs we may have preferentially selected statistical outliers which have the biggest positive excursions from the true SFR, which may be considerably lower. In other words, it is more likely that we have overestimated the dust correction factor than underestimated it, due to a Malmquist-type bias. The number density of LBGs with small correction factors (a factor $`\sim 4`$ or so) is much higher than the number density with large corrections. As a result, a galaxy which appears to require a large correction factor may actually be an object with a small correction factor and large photometric errors. We have estimated the size of this effect through Monte Carlo estimates of error bars on UV-derived values.
In detail, we estimate the intrinsic distributions of dust-obscured luminosities and dust extinctions from our full sample. We can then draw mock LBGs from these distributions, calculating the true colours of the objects, and then measuring their colours using the same procedure as for the real LBGs. From this procedure we can estimate the likely distribution of dust-corrected SFRs for an object with a given redshift and given $`G`$ and $`R_\mathrm{s}`$ magnitudes. This accounts for Malmquist bias, photometric errors and most other potential sources of bias in our selection. The resulting distributions are not symmetric, so we simply calculate the rms scatter in the SFRs for each LBG. These uncertainties in $`\mathrm{SFR}_{\mathrm{UV}\mathrm{corr}}`$ and $`S_{850\mathrm{UV}}`$ are quoted in Tables 1 and 2. The full procedure is described in Adelberger & Steidel (2000). Further, the uncertainties in the SFR correction factor scale roughly with the magnitude of the correction (Steidel et al. 1998), and there is some danger that the largest implied SFRs are more uncertain than is typical. On the other hand, while it is true that the SFR correction factors for our sample are somewhat larger than average for LBGs, in fact the objects with the largest corrected SFRs also have higher than average uncorrected SFRs; our targets do not have the steepest UV slopes of the whole Lyman break sample. The errors in inferred SFR could conceivably be relatively large compared to the variation in UV parameters – UV luminosity and continuum slope in particular – especially at fainter magnitudes. Monte Carlo simulations (Adelberger & Steidel 2000) reveal that photometric errors of order 0.2 mag in $`G-R_\mathrm{s}`$ affect the corrected SFR by a factor of up to 3. ## 4 Detection of sources At 850$`\mu \mathrm{m}`$, there is one likely detection in our current sample, Westphal-MMD11. We have tried different weightings and edits of the data, and find that the detection is independent of the details of the data analysis. The flux density estimate (the weighted average of all scans, including the contribution from off-beams) is significant at about $`4\sigma `$. At 450$`\mu \mathrm{m}`$, we marginally detect Westphal-MM27 at $`3\sigma `$. The complete absence of flux at 850$`\mu \mathrm{m}`$ indicates that this is most likely a low redshift foreground object, and we do not consider it further. No other galaxy among the 16 we observed is detected. ### 4.1 Statistical results It is important to ask whether there is statistically a detection of the collective emission from the eight galaxies listed in Table 1. If we combine the 850$`\mu `$m flux densities for all the objects, with inverse variance weighting, the mean is $`0.51\pm 0.39`$mJy, which is consistent with zero. Omitting W-MMD11, the result is $`-0.08\pm 0.41`$mJy. There is little evidence that, on average, we obtain positive flux density when we point at our targets. This is to be contrasted with the mean signal of $`1.46\pm 0.36`$ mJy predicted from the UV using a dust temperature of 50 K (as shown in Fig. 4), taking into account simulations of photometric uncertainties. This group of LBGs thus does not form a large part of the population of sub-mm sources. This result points out the power of our approach (see also Scott et al. 2000): statistical analysis of photometry allows us to measure contributions to the source counts from targetted objects below $`0.5`$mJy sensitivity (around the confusion limit for mapping), and for only a modest amount of telescope time.
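The inverse-variance combination used above is simple to state exactly; in this sketch the flux values are invented stand-ins for the Table 1 measurements (which are not reproduced here), with one bright outlier playing the role of W-MMD11:

```python
import numpy as np

def weighted_mean(flux, err):
    """Inverse-variance weighted mean and its 1-sigma error (Section 4.1)."""
    w = 1.0/np.asarray(err, dtype=float)**2
    m = np.sum(w*np.asarray(flux, dtype=float))/np.sum(w)
    return m, np.sqrt(1.0/np.sum(w))

# Illustrative values only (the real measurements are in Table 1):
flux = [5.5, 0.4, -1.1, 0.9, -0.3, 0.2, -0.8, 0.6]   # mJy at 850um
err  = [1.4, 1.1,  1.2, 1.0,  1.1, 1.2,  1.0, 1.1]
print("all eight: %.2f +/- %.2f mJy" % weighted_mean(flux, err))
print("excluding the outlier: %.2f +/- %.2f mJy"
      % weighted_mean(flux[1:], err[1:]))
```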
In order to clarify the constraints for the sample as a whole, in Fig. 4 we plot the actual measured sub-mm flux densities against the predictions for $`T_\mathrm{d}=50`$K (solid squares), and also show the predicted lines for 30, 50 and 70 K dust models. A best-fitting slope is consistent with zero, and certainly inconsistent with the measured flux densities being equal to the UV-predicted ones (the dashed line labelled ‘50 K’, for which $`\chi ^2=21.6`$ for the 7 undetected objects, using the vertical error bars). Another statistical approach to the data set is to ask how much lower the SFR has to be compared with the UV-based estimates. We constrain the factor $`f`$ in the relation $`\mathrm{SFR}_{\mathrm{submm}}=f\mathrm{SFR}_{\mathrm{UV}\mathrm{corr}}`$ by minimizing the variance for the entire sample. We find that $`f=0.04\pm 0.24`$ if we exclude the detected object, and the straight line is a reasonable fit; this uses only the observational (vertical) error bars, and if we were to take into account the estimated horizontal error bars as well, we would obtain $`f=0.07\pm 0.39`$. However, if we include the detection we obtain a high value for $`f`$ but a terrible $`\chi ^2`$, indicating that W-MMD11 is very much an outlier compared with the other seven galaxies. Excluding W-MMD11, then, the corresponding 95 per cent upper limit is $`f<0.50`$, indicating that for our undetected objects the sub-mm flux densities are overestimated by a factor of at least 2. This could be due either to a higher temperature, or to one of the other factors assumed in converting from UV to sub-mm properties. Many of these uncertainties are dealt with in our Monte Carlo studies, as indicated by the rms errors in Table 1. If these (horizontal) error bars are included in the fit, then $`f=1`$ cannot be very strongly excluded. Focussing specifically on dust temperature, if we assume that all the other parameters are held fixed, the seven galaxies (aside from W-MMD11) in Table 1 are best described by very hot dust, at or even above $`100`$K. The 95 per cent confidence lower limit for $`T_\mathrm{d}`$ is $`65`$K. ### 4.2 Lyman break galaxies in the HDF The HDF has been mapped deeply with SCUBA \[Hughes et al. 1998\]. There are two LBGs from our catalogue lying within the Hubble Deep Field which also have a high UV-implied SFR ($`\sim 150\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$), corrected using the Calzetti attenuation curve. None of the 5 sources detected in the deep HDF SCUBA map corresponds with the positions of these two LBGs. When combined with our results (Table 1), this increases to 9 the number of high SFR LBGs which are undetected at a $`3\sigma `$ level of $`\sim 4`$mJy. What we see from the HDF upper limits (Fig. 4) is that the UV-based predictions of the far-IR flux density continue to overestimate that measured by SCUBA (even for objects with somewhat less extreme intrinsic luminosities than those in our sample). Thus the HDF non-detections support our conjecture that the simplest UV-predicted sub-mm flux densities are off by at least a factor $`\sim 2`$. ### 4.3 Westphal MMD11 The first question to address is whether it is very improbable to have found one or more $`\sim 4\sigma `$ detections purely by chance, given that we searched a relatively large number of objects.
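The chance-superposition estimate invoked in the next paragraph can be checked numerically; the count level and beam radius below are round assumed numbers, not fitted quantities:

```python
import numpy as np

n = 1000.0/3600.0**2                  # ~1000 sources deg^-2 brighter than
                                      # 5.5 mJy, converted to arcsec^-2
theta = 14.7/2.0                      # beam radius (arcsec) from the 14.7" HPBW
P_beam = 1.0 - np.exp(-np.pi*n*theta**2)   # P = 1 - exp(-pi n theta^2)
P_any = 1.0 - (1.0 - P_beam)**16           # >=1 interloper among 16 pointings
print(f"per pointing: {100*P_beam:.1f} per cent; "
      f"any of 16: {100*P_any:.0f} per cent")
```

With these round inputs the sketch returns of order 1 per cent per pointing and $`\sim 15`$–20 per cent for the sample, consistent with the quoted values given the uncertainty in the counts and the effective beam size.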
The simplest statistical estimate is that, for our full sample of 16 targets, there is roughly a 5 per cent chance of obtaining a spurious $`4\sigma `$ detection by statistical fluctuation alone (2.5 per cent for our revised sample of 8 presented in Table 1). Next we should assess the possible contamination by the sorts of galaxies which turn up in blank field observations with SCUBA. The probability that a random galaxy lies within the beam is given by $`P=1-\mathrm{exp}(-\pi n\theta ^2)`$, where $`n(>S)`$ is the cumulative surface density for the population in question, and $`\theta `$ is the beam radius. For a flux density of $`5.5`$mJy the source counts are about 1000 per square degree (e.g. Blain et al. 1999b), and so $`P\sim 1`$ per cent per pointing. The chance that at least one of our 16 observations yields a $`5.5`$mJy source is then $`\sim 15`$ per cent (or about half of that if only our revised sample of 8 likely sub-mm candidates is considered). It is then statistically conceivable that the positive flux density for W-MMD11 could be completely unassociated with the LBG, and instead due to a random sub-mm-bright galaxy within the same SCUBA beam. As we argue below, circumstantial evidence suggests that the LBG is indeed the source of the sub-mm emission, but without high resolution sub-mm observations of this region we cannot discount the interloper possibility entirely. There is, in fact, a foreground galaxy which has slightly corrupted the UV measurements of W-MMD11. Fig. 2 (top panel) presents a Keck spectrum of W-MMD11, with some of the usual lines identified. Because of a brighter object partially on the same slit, the continuum of W-MMD11 is over-subtracted. The resulting spectrum then has a slope which is inconsistent with the very red $`R_\mathrm{s}-K`$ colour. For comparison, the bottom panel of Fig. 2 shows the optical spectrum of another of our targetted LBGs, W-MMD109, which is typical of the remainder of the galaxies in the sample (note that the vertical axis of the figure depicts only relative flux). The spectrum of W-MMD11 is similar, except for contamination by the foreground object (4.0 arcsec North and 0.5 arcsec East of W-MMD11), with little indication of strong emission lines, for example. W-MMD11 is, however, bright in the $`K`$-band ($`K\sim 19.6`$), with $`R_\mathrm{s}-K=4.45`$, which is about 1.5 mag redder than average for those LBGs for which near-IR photometry has been obtained. In fact, out of 70 LBGs for which we currently have $`K`$-band photometry, only one is redder than W-MMD11. Following recent discoveries that at least some $`z>2`$ SCUBA sources can be identified with galaxies having extremely red near-IR colours (Smail et al. 1999a), this argues in favour of the identification of the SCUBA detection with the LBG, W-MMD11. We discount the possibility that the particular foreground object is actually the sub-mm source. This nearby source is at $`z=0.532`$, with the \[O ii\] line having a rest equivalent width of $`49`$Å, which is somewhat on the high side for general field galaxies. However, it has magnitudes $`R_\mathrm{s}=22.16`$, $`G-R_\mathrm{s}=0.88`$ and $`U_\mathrm{n}-G=0.54`$, indicating a normal, somewhat sub-$`L^{\ast }`$ galaxy, which would not be expected to emit strongly at sub-mm wavelengths.
We estimate that it could contribute at most 0.6 mJy to the integrated flux density within the SCUBA beam centred on W-MMD11, based on nearby spiral galaxies \[Krugel et al. 1998\]. The most likely explanation for the positive flux density is that we have detected the LBG; this is reinforced by the red $`R-K`$ colour of this LBG relative to the others in our sample. We treat it as a detection in the discussion below. ### 4.4 Comparing LBGs to sub-mm selected galaxies The LBG which is detected with SCUBA, W-MMD11, has a large $`R_\mathrm{s}-K`$ colour in addition to its extreme rest-frame UV properties. Moreover, only about 10 per cent of the whole current sample of LBGs ($`>700`$ objects) are redder than W-MMD11 in terms of $`G-R_\mathrm{s}`$ colour, and only 10 per cent are brighter in $`R_\mathrm{s}`$ magnitude. It is worth exploring how these features compare with other known high redshift SCUBA sources. We have obtained or derived filter-matched data in the $`R_\mathrm{s}`$, $`G`$ and $`U_\mathrm{n}`$ filter set (Steidel et al. 1998) for the two high-$`z`$ sub-mm selected galaxies with secure redshifts: SMM J02399$`-`$0136 (Ivison et al. 1998b) and SMM J14011+0252 (Ivison et al. 1999b), with the aim of understanding whether our selection techniques successfully predict the measured sub-mm flux densities (explicitly, we made filter corrections for the first object, and reobserved the second object). Table 3 directly compares these three sub-mm bright galaxies. The conclusion is that SMM J02399$`-`$0136 and SMM J14011+0252 are very similar to W-MMD11, although possibly more luminous (the lensing amplification by a foreground cluster may be uncertain by up to 50 per cent). We also note that SMM J02399$`-`$0136 is thought to have an AGN component (Ivison et al. 1998b), and thus would probably have been excluded from our sample of LBGs to be studied with SCUBA. Other LBGs with large UV-derived star-formation rates ($`>100\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$), but rather modest $`R-K`$ colours, were not detected with SCUBA at the $`\sim 0.5`$mJy rms level (when averaged together). This implies that $`K`$-band information may help significantly to pre-select sub-mm bright LBGs. However, photometric errors conspire to make it difficult to know whether any particular UV-selected object is truly a prodigious star former, at the level of the SCUBA-selected sources. For the two SCUBA-selected objects in Table 3, the sub-mm flux densities and their ratio ($`S_{450}/S_{850}`$) are consistent with what we would predict from the UV magnitudes and slope (see section 3.2) using a dust temperature of $`\sim 50`$K and emissivity index $`\beta _\mathrm{d}=1.5`$. SMM 02399 is better modelled by a lower dust temperature ($`T_\mathrm{d}\sim 40`$K), while SMM 14011 is better modelled by a higher dust temperature ($`T_\mathrm{d}\sim 60`$K). This implies that the technique of predicting the dust emission from the UV parameters is reasonably sound if the dust temperature distribution is known. However, the 850$`\mu `$m flux density for W-MMD11 is under-predicted, unless the dust temperature used is much lower (say $`T_\mathrm{d}=32`$K). This is in strong contrast to the results for the rest of the LBGs, which would indicate quite a hot $`T_\mathrm{d}`$ if we fixed all the conversion factors at typical values.
Of course, different temperatures are not the only explanation for why we detect W-MMD11 alone. Another possibility is that the spatial distribution of the dust could have given rise to a detection in emission, but with less effect on the absorption properties. Although our targetted sample are not particularly bright in the sub-mm, recent simulations (Adelberger & Steidel 2000) suggest that some of the LBGs in the total sample must have SFRs large enough ($`>400\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$) to be detected with SCUBA. The galaxies most likely to be detected might be those with brighter apparent magnitudes and smaller implied dust corrections, since errors in $`R_\mathrm{s}`$ are smaller than errors in $`G-R_\mathrm{s}`$, and the corrected star-formation rates are less sensitive to errors in $`R_\mathrm{s}`$ than in $`G-R_\mathrm{s}`$. Although some galaxies in our sample are already of this type (our detection W-MMD11 being one of them), this possibility should be further checked in future with separate samples. Also, as deep $`K`$-band photometry becomes available for the LBG samples, the $`R-K`$ colour may prove to be a useful selection criterion for sub-mm follow-up. Indeed, several SCUBA-selected galaxies, beyond those with known redshifts discussed in section 4.4, are now believed to have optical counterparts with very red $`R-K`$ colours (Smail et al. 1999a), classifying them as Extremely Red Objects (see for example Dey et al. 1998). ## 5 Discussion There is currently a great deal of debate on reconciling the various techniques for estimating the SFR using different wavelength regimes, such as the sub-mm, UV continuum or H$`\alpha `$ (see for example: Kennicutt 1998; Ouchi et al. 1999; Cowie, Songaila & Barger 1999). Regardless of the precise relationships between SFR estimators, we can certainly say that the LBGs with the largest expected SFRs are apparently not excessively bright sub-mm emitters, and are less luminous than the typical sub-mm sources discovered in a SCUBA ‘blank field’ survey \[Smail et al. 1997, Barger et al. 1998, Hughes et al. 1998, Lilly et al. 1999\]. As discussed in section 4.2, if we assume that the SCUBA flux density is proportional to the UV-predicted flux density, then our data can be used to infer that the UV-predicted flux densities are on average too high by factors of a few, depending on the dust temperature. For $`T_\mathrm{d}\lesssim 50`$K this factor would be even higher, while for $`T_\mathrm{d}\gtrsim 70`$K the UV predictions are consistent with the measured SCUBA flux densities, within uncertainties. By contrast, the detection (W-MMD11) in our sample is at a level above that implied by the UV calculations, even for unusually cool dust temperatures. Since we currently have only a small sample of sufficiently deep integrations, including one probable detection, we cannot conclude from our results that all LBGs are likely to be weak sub-mm emitters. As discussed in section 3.4, recent results (Adelberger & Steidel 2000) suggest that the LBGs with the largest SFR correction factors (e.g. W-MMD109 in Table 1) are probably subject to larger UV photometric and Malmquist-type errors, and have lower corrected SFRs when the uncertainties are taken into account. This could lead to a situation where only a few of the galaxies in this sample truly have large SFRs, which would also be consistent with our observations.
On the other hand, we should emphasize that there is no reason a priori to assume that the high redshift galaxies have a relationship between far-IR/far-UV flux and UV continuum slope similar to that of the local starburst population, which itself has large scatter (Meurer et al. 1999). This may well play a role in any discrepancy between our sub-mm data and predictions based on the UV. A reasonable conclusion is therefore that the simplest predictions of SCUBA flux densities for our targets are on average overestimates, by a factor of order a few. The precise reason for this is difficult, at present, to ascertain. ### 5.1 Relationship with sub-mm selected sources The central question remains: is there any overlap between samples selected as LBGs and SCUBA-detected blank field sources, when they are at the same redshift? The difficulty in obtaining accurate optical counterparts and redshifts for the sub-mm sources means that the volume density and luminosity function of sub-mm ‘blank field’ sources are still essentially unknown. Initial spectroscopic follow-up (Barger et al. 1999; Lilly et al. 1999) has suggested that a significant fraction of the SCUBA population does not lie at $`z>2`$. However, use of the Carilli and Yun (1999) relation between radio and far-IR flux to predict the redshifts for a large sample of SCUBA sources (Smail et al. 1999b) has suggested that the population may have a median redshift lying between $`z=2.5`$ and 3. It is worth noting that the detection of W-MMD11 in our sample currently represents the highest redshift source detected with SCUBA which is not an AGN. This one detection suggests that with reasonable SCUBA integrations we might expect to detect just those few LBGs that are brightest in the far-IR. This would be consistent with ‘blank field’ sources lying over a wider range of redshifts than that probed by the $`U`$-band drop-out selection technique, with a few extreme specimens at any given epoch radiating strongly in the sub-mm. The fact that our high SFR Lyman break sample is on average undetected in the sub-mm implies that probably only the very highest star formers would constitute part of the blank-field sub-mm sources. LBGs might also be harder to detect if they had higher dust temperatures (as depicted in Fig. 4). We now consider scenarios which are consistent with our data and in which the population of LBGs is indeed related to the population of sub-mm selected sources. If none of these proves plausible, one could always consider the possibility that the relationship between far-IR and far-UV flux and UV continuum slope is different at high redshift than it is for local starburst galaxies. If many of the blank field sub-mm sources are in fact star-forming, merging galaxies at even higher redshifts ($`z\sim 4`$–5), as yet undiscovered in optical surveys, they might represent an era when the star formation was much more vigorous and short lived. Recent results (Steidel et al. 1998) have revealed more $`z\sim 4`$ LBGs than implied by number counts and modelling of data from the HDF. These are not expected to be strong sub-mm emitters, since a similar colour range to the $`z\sim 3`$ population is observed. Massive merging fragments are now thought to be responsible for the prodigious SFRs observed in many SCUBA sources (Blain et al. 1999a), perhaps implying more SCUBA-bright sources during the period of most merging activity.
But certainly, accurate prediction of sub-mm properties from optical properties will await a more complete understanding of the galaxy formation process. Recent observations have revealed possible analogues of the sub-mm sources discovered in ‘blank field’ surveys [Ivison et al. 1998b, Chapman et al. 1999], where the sub-mm flux density is likely to be dominated by AGN-heated dust. It is possible that spectroscopic follow-up of ‘blank field’ sources may reveal that more are AGN dominated, implying a large population of high redshift dusty quasars. Indeed, consideration of X-ray results suggests that a significant fraction of the sub-mm sources could involve active galaxies [Almaini, Lawrence & Boyle 1999]. If the primary engines powering these sub-mm sources are AGNs, then there may indeed be little relation between these galaxies and the Lyman break population. Both the LBGs and SCUBA-selected sources are thought to be associated with elliptical galaxies in the process of formation. They can be reconciled if the dust content of young galaxies is coupled to their mass or luminosity, with more massive galaxies being dustier [Dickinson 1998]. The most massive young elliptical galaxies could then be associated with the sub-mm sources, while the Lyman break population would be identified with less massive ellipticals and bulges. To verify this possibility, accurate identifications and dynamical mass estimates of the sub-mm galaxies are required, which may have to wait for the next generation of sub-mm interferometers to detect CO lines (see e.g. Stark et al. 1998).

### 5.2 Contribution to the far-IR background

One assessment of the significance of the sub-mm emission of LBGs is to estimate their contribution to the IR background at 850$`\mu`$m. We can calculate this using the average implied SFRs obtained with the same methods outlined here, together with the surface density estimate for $`z\sim 3`$ LBGs. We find that typical redshift 3 LBGs then account for about 0.2 per cent of the 850$`\mu`$m background estimates [Fixsen et al. 1998, Lagache et al. 1999]. From our SCUBA observations, we find no evidence that the LBG population contributes more than this. However, W-MMD11 is actually much more SCUBA-bright than predicted, and so the full contribution will depend on how common such objects are in the LBG population, which we certainly cannot estimate from our small sample. A significant problem with this estimate is that there is a strong bias, in present samples, against selecting just the sorts of LBG candidates which might dominate the far-IR background. This bias arises from several causes: highly reddened objects have colours satisfying our $`UGR`$ selection criteria over a much shorter range of redshifts than bluer objects; at fixed SFR a much smaller fraction of reddened than unreddened objects will satisfy the $`R_\mathrm{s}<25.5`$ magnitude cut; and red objects tend to be so faint in $`G`$ that they are difficult to detect and recognize as LBGs down to the $`R_\mathrm{s}`$ limit. We certainly know that highly reddened LBGs are very underrepresented in the current sample, and it is impossible to accurately estimate the 850$`\mu`$m flux for the whole LBG population without fully understanding the relevant selection biases. Because of these biases, it is still possible that the $`z\sim 3`$ LBG population contributes a significantly higher portion of the background radiation at these wavelengths.
## 6 Conclusions

1. On average we find that the 850$`\mu`$m flux density must be at least 2 times lower than the simplest predictions obtained from the UV colours. This could be accounted for by a combination of photometric errors, uncertainties in $`T_\mathrm{d}`$, $`\beta_\mathrm{d}`$, or in the estimates of $`L_{\mathrm{bol}}`$ from the rest-frame UV and far-IR wavelengths, or from the scatter in the UV-slope/far-IR relation. Our sample also had some bias against the most highly reddened objects.

2. In the case of the detection of W-MMD11, the flux density is a factor $`\sim 5`$ greater than predicted by the UV for dust temperatures $`T_\mathrm{d}>50`$ K.

3. The similarities in the properties of W-MMD11 and the SCUBA-selected sources of known high redshift, SMM J02399$`-`$0136 and SMM J14011+0252, suggest that a large UV-implied SFR in conjunction with a red $`R-K`$ colour may be a good indicator of significant sub-mm flux density. The comparison may indicate that the prediction of far-IR flux density from UV colours is fairly reliable, even for quite reddened objects, provided that parameters such as the dust temperature are known reasonably well.

4. Estimates of the contribution of the $`z\sim 3`$ LBGs to the far-IR background give around 0.2 per cent. Our non-detections certainly provide no evidence that the contribution is significantly higher than this. However, given that our one detection has considerably larger sub-mm flux than predicted, and that there are selection biases against highly reddened objects, it is difficult to estimate precisely the overall contribution to the far-IR background.

Our detection of W-MMD11 certainly indicates that there is some overlap between $`z\sim 3`$ LBGs and SCUBA galaxies. However, our small sample makes it hard to draw any firmer conclusions about how great the overlap might be. It also remains to be seen whether the galaxies contributing to the bulk of the far-IR background, over the full range of redshifts, would also be selected by the UV-dropout technique. Further progress will require significantly more telescope time to improve the sub-mm limits and detections. We have shown that targeted SCUBA photometry is a useful approach here. With larger sub-mm data sets, and selection of LBG samples in different ways, it should be possible to test the predictive power of the far-UV for luminous star forming galaxies at high redshift, and to more fully investigate the role of dust in the galaxy formation process.

## ACKNOWLEDGMENTS

This work was supported by the Natural Sciences and Engineering Research Council of Canada. The James Clerk Maxwell Telescope is operated by The Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. We would like to thank the staff at JCMT for facilitating these observations. We are also grateful to Remo Tilanus and his colleagues for their willingness to discuss their own LBG observations.
# Final state interactions: from strangeness to beauty

(Invited plenary talk at the Chicago Conference on Kaon Physics (Kaon’99), June 21-26, 1999, Chicago, IL.)

## 1 Motivation

Final state interactions (FSI) play an important role in meson decays. The presence of FSI forces one to consider several coupled channels, so their net effect might be significant, especially if one is interested in rare decays. This obvious observation, of course, does not exhaust the list of motivations for a better understanding of FSI. Many important observables that are sensitive to New Physics could also receive contributions from final state rescatterings. An excellent example is provided by the $`T`$-violating lepton polarizations in $`K`$ decays (such as $`K\to\pi l\nu`$ and $`K\to\gamma l\nu`$) that are not only sensitive to New Physics but could also be induced by electromagnetic FSI. However, the most phenomenologically important effect of FSI is in the decays of $`B`$ and $`D`$ mesons used for studies of direct $`CP`$-violation, where one compares the rate of a $`B`$ or $`D`$ meson decay with that of the charge-conjugated process. The corresponding asymmetries, in order to be non-zero, require two different final states produced by different weak amplitudes which can go into each other by strong-interaction rescattering, and therefore depend on both the weak CKM phase and the strong rescattering phase provided by the FSI. Thus, FSI directly affect the asymmetries, and their size can be interpreted in terms of fundamental parameters only if these FSI phases are calculable. In all of these examples FSI complicate the interpretation of experimental observables in terms of fundamental parameters. In this talk I review the progress in understanding FSI in meson decays. The difference of the physical picture at the energy scales relevant to $`K`$, $`D`$ and $`B`$ decays calls for a specific description for each class of decays. For instance, the relevant energy scale in $`K`$ decays is $`m_K<1`$ GeV. With such a low energy release only a few final state channels are available. This significantly simplifies the theoretical understanding of FSI in kaon decays. In addition, chiral symmetry can also be employed to assist the theoretical description of FSI in $`K`$ decays. In $`D`$ decays, the relevant scale is $`m_D\sim 2`$ GeV. This region is populated by the light quark resonances, so one might expect their significant influence on the decay rates and $`CP`$-violating asymmetries. No model-independent description of FSI is available, but it is hinted at experimentally that the number of available channels is still limited, allowing for a modeling of the relevant QCD dynamics. Finally, in $`B`$ decays, where the relevant energy scale $`m_B\gg 1`$ GeV is well above the resonance region, the heavy quark limit might prove useful.

## 2 Some formal aspects of FSI

Final state interactions in $`A\to f`$ arise as a consequence of the unitarity of the $`𝒮`$-matrix, $`𝒮^{\dagger}𝒮=1`$, and involve the rescattering of physical particles in the final state. The $`𝒯`$-matrix, $`𝒯=i\left(1-𝒮\right)`$, obeys the optical theorem:

$$\mathcal{D}isc\,\mathcal{T}_{A\to f}\equiv\frac{1}{2i}\left[\langle f|\mathcal{T}|A\rangle-\langle f|\mathcal{T}^{\dagger}|A\rangle\right]=\frac{1}{2}\sum_i\langle f|\mathcal{T}^{\dagger}|i\rangle\langle i|\mathcal{T}|A\rangle,$$ (1)

where $`\mathcal{D}isc`$ denotes the discontinuity across the physical cut.
Using $`CPT`$ in the form $`\langle\bar f|\mathcal{T}|\bar A\rangle^{*}=\langle\bar A|\mathcal{T}^{\dagger}|\bar f\rangle=\langle f|\mathcal{T}^{\dagger}|A\rangle`$, this can be transformed into

$$\langle\bar f|\mathcal{T}|\bar A\rangle^{*}=\sum_i\langle f|\mathcal{S}^{\dagger}|i\rangle\langle i|\mathcal{T}|A\rangle.$$ (2)

Here, the states $`|i\rangle`$ represent all possible final states (including $`|f\rangle`$) which can be reached from the state $`|A\rangle`$ by the weak transition matrix $`𝒯`$. The right hand side of Eq. (2) can then be viewed as a weak decay of $`|A\rangle`$ into $`|i\rangle`$ followed by a strong rescattering of $`|i\rangle`$ into $`|f\rangle`$. Thus, we identify $`\langle f|\mathcal{S}^{\dagger}|i\rangle`$ as an FSI rescattering of particles. Notice that if $`|i\rangle`$ is an eigenstate of $`𝒮`$ with a phase $`e^{2i\delta}`$, we have

$$\langle\bar i|\mathcal{T}|\bar A\rangle^{*}=e^{-2i\delta_i}\langle i|\mathcal{T}|A\rangle,$$ (3)

which implies equal rates for the charge-conjugated decays (this fact will be important in the studies of $`CP`$-violating asymmetries, as no $`CP`$ asymmetry is generated in this case). Also

$$\langle i|\mathcal{T}|A\rangle=e^{i\delta}T_i,\qquad \langle\bar i|\mathcal{T}|\bar A\rangle=e^{i\delta}T_i^{*}.$$ (4)

The matrix elements $`T_i`$ are assumed to be the “bare” decay amplitudes and have no rescattering phases. This implies that the transition matrix elements between charge-conjugated states are just complex conjugates of each other. Eq. (4) is known as Watson’s theorem. Note that the problem of unambiguous separation of “true bare” amplitudes from the “true FSI” ones (known as the Omnès problem) was solved only for a limited number of cases.

### 2.1 $`K`$ decays

The low scale associated with $`K`$ decays suggests an effective theory approach of integrating out heavy particles and making use of the chiral symmetry of QCD. This theory has been known for a number of years as chiral perturbation theory ($`\chi`$PT), which makes use of the fact that kaons and pions are the Goldstone bosons of chiral $`SU(3)_L\times SU(3)_R`$ broken down to $`SU(3)_V`$, and are the only relevant degrees of freedom at this energy. $`\chi`$PT allows for a consistent description of the strong and electromagnetic FSI in the kaon system. The discussion of strong FSI is naturally included in the $`\chi`$PT calculations of kaon decay processes at one or more loops. In addition, the kaon system is rather unique for its sensitivity to electromagnetic final state interaction effects. Normally, one expects this class of corrections to be negligibly small. However, in some cases they are still very important. For instance, it is known that in non-leptonic $`K`$ decays the $`\mathrm{\Delta }I=1/2`$ isospin amplitude is enhanced compared to the $`\mathrm{\Delta }I=3/2`$ amplitude by approximately a factor of 22. Since electromagnetism does not respect isospin symmetry, one might expect that electromagnetic FSI could contribute to the $`\mathrm{\Delta }I=3/2`$ amplitude at the level of $`22/137\sim 20\%`$! Of course, some cancellations might actually lower the impact of this class of FSI. There is a separate class of observables that is directly affected by electromagnetic FSI. It includes the $`T`$-violating transverse lepton polarizations in the decays $`K\to\pi l\nu`$ and $`K\to l\nu\gamma`$,

$$P_l^{\perp}=\frac{\vec{s}_l\cdot(\vec{p}_i\times\vec{p}_l)}{|\vec{p}_i\times\vec{p}_l|},$$ (5)

where $`i=\gamma,\pi`$. Observation of these polarizations in the currently running experiments implies an effect induced by New Physics. A number of parameters of various extensions of the Standard Model can be constrained via these measurements.
It is, however, important to realize that polarizations as high as $`10^{-3}`$ ($`10^{-6}`$) could be generated by the electromagnetic rescattering of the final state lepton and pion, or due to other intermediate states. These corrections have been estimated for a number of experimentally interesting final states.

### 2.2 $`D`$ decays

The relatively low mass of the charm quark puts the $`D`$ mesons in the region populated by the higher excitations of the light quark resonances. It is therefore natural to assume that the final state rescattering is dominated by the intermediate resonance states. Unfortunately, no model-independent description exists at this point. Yet, the wealth of experimental results allows for the introduction of testable models of FSI. These models are very important in the studies of direct $`CP`$-violating asymmetries

$$A_{CP}=\frac{\Gamma(D\to f)-\Gamma(\bar D\to\bar f)}{\Gamma(D\to f)+\Gamma(\bar D\to\bar f)}\propto\sin\theta_w\,\sin\delta_s,$$ (6)

which explicitly depend on the values of both the weak ($`\theta_w`$) and strong ($`\delta_s`$) phases. In most models of FSI in $`D`$ decay, the phase $`\delta_s`$ is generated by the width of a nearby resonance, by calculating the imaginary part of the loop integral with the final state particles coupled to that resonance. It is important to realize that the large final state interactions and the presence of nearby resonances in the $`D`$ system have an immediate impact on the $`D`$–$`\bar D`$ mixing parameters. It is well known that the short distance contribution to $`\mathrm{\Delta }m_D`$ and $`\mathrm{\Delta }\mathrm{\Gamma }`$ is very small, of the order of $`10^{-18}`$ GeV. Nearby resonances can enhance them by one or two orders of magnitude. In addition, they provide a source of quark-hadron duality violations, as they populate the gap between the QCD scale and the scale set by the mass of the heavy quark that is normally required for the application of heavy quark expansions.

### 2.3 $`B`$ decays

In the $`B`$ system, where the density of the available resonances is large due to the increased energy, a different approach must be employed. One can use the fact that the $`b`$-quark mass is large compared to the QCD scale and investigate the behavior of final state phases in the $`m_b\to\infty`$ limit. The significant energy release in $`B`$ decays allows studies of inclusive quantities, for instance inclusive $`CP`$-violating asymmetries of the form of Eq. (6). There, one can use duality arguments to calculate final state phases for charmless $`B`$ decays using perturbative QCD. Indeed, the $`b\to c\bar{c}s`$ process, with subsequent final state rescattering of the two charmed quarks into the final state (penguin diagram), does the job: for the energy release of order $`m_b>2m_c`$ available in $`b`$ decay, the rescattered $`c`$-quarks can go on-shell, generating a $`CP`$-conserving phase and thus $`\mathcal{A}_{CP}^{dir}`$, which is usually quite small for the experimentally feasible decays, $`𝒪(1\%)`$. It is believed that larger asymmetries can be obtained in exclusive decays. However, a simple picture is lost because of the absence of the duality argument. It is known that scattering of high energy particles may be divided into ‘soft’ and ‘hard’ parts. Soft scattering occurs primarily in the forward direction, with a limited transverse momentum distribution which falls exponentially with a scale of order $`0.5`$ GeV.
At higher transverse momentum one encounters the region of hard scattering, which can be described by perturbative QCD. In exclusive $`B`$ decay one faces the difficulty of separating the two. It might prove useful to employ unitarity in trying to describe FSI in exclusive $`B`$ decays. It is easiest to investigate the elastic channel first. The inelastic channels have to share a similar asymptotic behavior in the heavy quark limit due to the unitarity of the $`𝒮`$-matrix. The choice of the elastic channel is convenient because of the optical theorem, which connects the forward (imaginary) invariant amplitude $`\mathcal{M}`$ to the total cross section,

$$\mathcal{I}m\,\mathcal{M}_{ff}(s,t=0)=2k\sqrt{s}\,\sigma_{f\to\mathrm{all}}\simeq s\,\sigma_{f\to\mathrm{all}},$$ (7)

where $`s`$ and $`t`$ are the usual Mandelstam variables. The asymptotic total cross sections are known experimentally to rise slowly with energy and can be parameterized by the form $`\sigma(s)=X\left(s/s_0\right)^{0.08}+Y\left(s/s_0\right)^{-0.56}`$, where $`s_0=𝒪(1)`$ GeV$`^2`$ is a typical hadronic scale. Considering only the imaginary part of the amplitude, and building in the known exponential fall-off of the elastic cross section in $`t`$ ($`t<0`$), by writing

$$i\,\mathcal{I}m\,\mathcal{M}_{ff}(s,t)\simeq i\beta_0\left(\frac{s}{s_0}\right)^{1.08}e^{bt},$$ (8)

one can calculate its contribution to the unitarity relation for a final state $`f=ab`$ with kinematics $`p_a^{\prime}+p_b^{\prime}=p_a+p_b`$ and $`s=(p_a+p_b)^2`$:

$$\mathcal{D}isc\,\mathcal{M}_{B\to f}=\frac{i}{8\pi^2}\int\frac{d^3p_a^{\prime}}{2E_a^{\prime}}\frac{d^3p_b^{\prime}}{2E_b^{\prime}}\,\delta^{(4)}(p_B-p_a^{\prime}-p_b^{\prime})\,\mathcal{I}m\,\mathcal{M}_{ff}(s,t)\,\mathcal{M}_{B\to f}=\frac{1}{16\pi}\frac{i\beta_0}{s_0 b}\left(\frac{m_B^2}{s_0}\right)^{0.08}\mathcal{M}_{B\to f},$$ (9)

where $`t=(p_a-p_a^{\prime})^2\simeq -s(1-\cos\theta)/2`$, and $`s=m_B^2`$. One can refine the argument further, since the phenomenology of high energy scattering is well accounted for by Regge theory. In the Regge model, scattering amplitudes are described by the exchanges of Regge trajectories (families of particles of differing spin), with the leading contribution given by the Pomeron exchange. Calculating the Pomeron contribution to the elastic final state rescattering in $`B\to\pi\pi`$, one finds

$$\mathcal{D}isc\,\mathcal{M}_{B\to\pi\pi}\big|_{\mathrm{Pomeron}}=i\epsilon\,\mathcal{M}_{B\to\pi\pi},\qquad \epsilon\simeq 0.21.$$ (10)

It is important that the Pomeron-exchange amplitude is seen to be almost purely imaginary. However, of chief significance is the identified weak dependence of $`ϵ`$ on $`m_B`$ – the $`(m_B^2)^{0.08}`$ factor in the numerator is attenuated by the $`\mathrm{ln}(m_B^2/s_0)`$ dependence in the effective value of $`b`$. The analysis of the elastic channel suggests that, at high energies, FSI phases are mainly generated by inelastic effects, which follows from the fact that the high energy cross section is mostly inelastic. This also follows from the fact that the Pomeron elastic amplitude is almost purely imaginary. Since the study of elastic rescattering has yielded a $`𝒯`$-matrix element $`\mathcal{T}_{ab\to ab}=2i\epsilon`$, i.e. $`\mathcal{S}_{ab\to ab}=1-2\epsilon`$, and since the constraint of unitarity of the $`𝒮`$-matrix implies that the off-diagonal elements are $`𝒪(\sqrt{ϵ})`$, with $`ϵ`$ approximately $`𝒪(m_B^0)`$ in powers of $`m_B`$ and numerically $`ϵ<1`$, the inelastic amplitude must also be $`𝒪(m_B^0)`$, with $`\sqrt{ϵ}>ϵ`$. Similar conclusions follow from the consideration of the final state unitarity relations.
This complements the old Bjorken picture of heavy meson decay (the dominance of the matrix element by the formation of a small hadronic configuration which grows into the final state pion “far away” from the point where it was produced and does not interact with the soft gluon fields present in the decay) by allowing for the rescattering of multiparticle states, the production of which is favored in the $`m_b\to\infty`$ limit, into the two-body final state. Analysis of the final-state unitarity relations in their general form is complicated due to the many contributing intermediate states, but we can illustrate the systematics of inelastic scattering in a two-channel model. It involves a two-body final state $`f_1`$ undergoing elastic scattering and a final state $`f_2`$ which represents ‘everything else’. As before, the elastic amplitude is purely imaginary, which dictates the following one-parameter form for the $`S`$ matrix:

$$S=\left(\begin{array}{cc}\cos 2\theta & i\sin 2\theta \\ i\sin 2\theta & \cos 2\theta \end{array}\right),\qquad T=\left(\begin{array}{cc}2i\sin^2\theta & \sin 2\theta \\ \sin 2\theta & 2i\sin^2\theta \end{array}\right),$$ (11)

where we identify $`\sin^2\theta\equiv\epsilon`$. The unitarity relations become

$$\mathcal{D}isc\,\mathcal{M}_{B\to f_1}=-i\sin^2\theta\,\mathcal{M}_{B\to f_1}+\frac{1}{2}\sin 2\theta\,\mathcal{M}_{B\to f_2},$$
$$\mathcal{D}isc\,\mathcal{M}_{B\to f_2}=\frac{1}{2}\sin 2\theta\,\mathcal{M}_{B\to f_1}-i\sin^2\theta\,\mathcal{M}_{B\to f_2}.$$ (12)

Denoting by $`\mathcal{M}_1^0`$ and $`\mathcal{M}_2^0`$ the decay amplitudes in the limit $`\theta\to 0`$, an exact solution to Eq. (12) is given by

$$\mathcal{M}_{B\to f_1}=\cos\theta\,\mathcal{M}_1^0+i\sin\theta\,\mathcal{M}_2^0,\qquad \mathcal{M}_{B\to f_2}=\cos\theta\,\mathcal{M}_2^0+i\sin\theta\,\mathcal{M}_1^0.$$ (13)

Thus, the phase is given by the inelastic scattering, with a result of order

$$\mathcal{I}m\,\mathcal{M}_{B\to f}/\mathcal{R}e\,\mathcal{M}_{B\to f}\simeq\sqrt{\epsilon}\left(\mathcal{M}_2^0/\mathcal{M}_1^0\right).$$ (14)

Clearly, for physical $`B`$ decay we no longer have a simple one-parameter $`𝒮`$ matrix, and, with many channels, cancellations or enhancements are possible for the sum of many contributions. However, the main feature of the above result is expected to remain: inelastic channels cannot vanish, and they provide the FSI phase, which is systematically of order $`\sqrt{ϵ}`$ and thus does not vanish in the large $`m_B`$ limit. A contrasting point of view is taken in a recent calculation. The argument is based on the perturbative factorization of currents (i.e. the absence of infrared singularities) in the matrix elements of four quark operators in the Bjorken setup. It is claimed that the leading contribution is given by the naive factorization result, with non-leading corrections suppressed by $`\alpha_s`$ or $`1/m_b`$ (see, however, the objections raised to this claim). However, the role of multihadron intermediate states is not yet clear in this approach. Moreover, even accepting this result, it would be premature to claim that the theory of exclusive $`B`$ decays to light mesons is free of hadronic uncertainties. In fact, many important long distance final state rescattering effects involve the exchange of global quantum numbers, such as charge or strangeness, and thus are suppressed by $`1/m_B`$. These were shown to be important and can be quite large. This is easy to see in the Regge description of FSI, where this exchange is mediated by the $`\rho`$ and/or higher lying trajectories. This fact raises the question whether the scale $`m_b\simeq 5`$ GeV is large enough for the asymptotic limit to set in.
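The two-channel algebra above is compact enough to verify mechanically. The sketch below (Python with sympy; an illustration added here, not part of the original text) checks that the $`S`$ matrix of Eq. (11) is unitary and that the amplitudes of Eq. (13) satisfy the discontinuity relations of Eq. (12), with $`\mathcal{D}isc\,\mathcal{M}_f`$ computed from the definition $`(1/2i)[\mathcal{M}_f-\bar{\mathcal{M}}_f^{*}]`$ and the bar amplitudes obtained by conjugating the bare inputs:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
M10, M20 = sp.symbols('M10 M20')    # bare amplitudes (may carry weak phases)

S = sp.Matrix([[sp.cos(2*theta), sp.I*sp.sin(2*theta)],
               [sp.I*sp.sin(2*theta), sp.cos(2*theta)]])

# S must be unitary
assert sp.simplify(S * S.H - sp.eye(2)) == sp.zeros(2)

# Dressed amplitudes of Eq. (13); bar amplitudes use conjugated bare inputs
M    = sp.Matrix([sp.cos(theta)*M10 + sp.I*sp.sin(theta)*M20,
                  sp.cos(theta)*M20 + sp.I*sp.sin(theta)*M10])
Mbar = M.subs({M10: sp.conjugate(M10), M20: sp.conjugate(M20)})

# Disc M = (1/2i) [M - conjugate(Mbar)] must reproduce Eq. (12)
disc_lhs = (M - Mbar.conjugate()) / (2*sp.I)
disc_rhs = sp.Matrix([-sp.I*sp.sin(theta)**2*M[0] + sp.Rational(1,2)*sp.sin(2*theta)*M[1],
                       sp.Rational(1,2)*sp.sin(2*theta)*M[0] - sp.I*sp.sin(theta)**2*M[1]])
assert sp.simplify(disc_lhs - disc_rhs) == sp.zeros(2, 1)
print("Eq. (13) satisfies the two-channel unitarity relations of Eq. (12).")
```

Both assertions pass identically in $`\theta`$, confirming that the rescattering phase of each channel is fed entirely by the other (inelastic) channel, as stated in Eq. (14).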
(i) Bounds on the FSI Corrections. In view of the large theoretical uncertainties involved in the calculation of the FSI contributions, it would be extremely useful to find a phenomenological method by which to bound the magnitude of the FSI contribution. The observation of a larger asymmetry would then be a signal for New Physics. Here the application of flavor $`SU(3)`$ symmetry provides powerful methods to obtain a direct upper bound on the FSI contribution. The simplest example involves bounding FSI in $`B\to\pi K`$ decays using $`B^{\pm}\to K^{\pm}K`$ transitions.

(ii) Direct Observation. Another interesting way of studying FSI involves rare weak decays for which the direct amplitude $`A(B\to f)`$ is suppressed compared to $`A(B\to i)`$. They offer a tantalizing possibility of the direct observation of the effects of FSI. One of the possibilities involves dynamically suppressed decays which proceed via weak annihilation diagrams. It has been argued that final state interactions, if large enough, can modify the decay amplitudes, violating the expected hierarchy of amplitudes. For instance, it was shown that rescattering from the dominant tree level amplitude leads to a suppression of the weak annihilation amplitude by only $`\lambda\simeq 0.2`$, compared to $`f_B/m_B\sim\lambda^2`$ obtained from the naive quark diagram estimate. Alternatively, one can study OZI-violating modes, i.e. the modes which cannot be realized in terms of quark diagrams without annihilation of at least one pair of quarks, like $`\bar B_d^0\to\varphi\varphi`$, $`D^0\varphi`$ and $`J/\psi\,\varphi`$. Unitarity implies that these decays can also proceed via an OZI-allowed weak transition followed by final state rescattering into the final state under consideration. In $`B`$ decays these OZI-allowed steps involve multiparticle intermediate states and might provide a source for violation of the OZI rule. For instance, the FSI contribution can proceed via $`\bar B_d^0\to\eta^{(\prime)}\eta^{(\prime)}\to\varphi\varphi`$, $`\bar B_d^0\to D^0\eta^{(\prime)}\to D^0\varphi`$ and $`\bar B_d^0\to\psi^{(\prime)}\eta^{(\prime)}\to J/\psi\,\varphi`$. The intermediate states also include additional pions. The weak decay into the intermediate state occurs at tree level, through the $`(u\bar{u}+d\bar{d})/\sqrt{2}`$ component of the $`\eta^{(\prime)}`$ wavefunction, whereas the strong scattering into the final state involves the $`s\bar{s}`$ component. Hence the possibility of using these decay modes as direct probes of the FSI contributions to $`B`$ decay amplitudes. It is, however, possible to show that there exist strong cancellations among the various two-body intermediate channels. In the example of $`\bar B_d^0\to\varphi\varphi`$, the cancellation between $`\eta`$ and $`\eta'`$ is almost complete, so the effect is of second order in the $`SU(3)`$-breaking corrections,

$$\mathcal{D}isc\,\mathcal{M}_{B\to\varphi\varphi}=O(\delta^2,\Delta^2,\delta\Delta)\,f_{\eta}F_0 A,\qquad \delta=f_{\eta'}-f_{\eta},\quad \Delta=F_0'-F_0,$$ (15)

with $`A\sim s^{\alpha_0-1}e^{-i\pi\alpha_0/2}/8b`$. This implies that the OZI-suppressed decays provide an excellent probe of the multiparticle FSI. Given the very clear signature, these decay modes could be probed at the upcoming $`B`$-factories. In conclusion, I have reviewed the physics of final state interactions in meson decays. One of the main goals of the physics of $`CP`$ violation and meson decay is to correctly extract the underlying parameters of the fundamental Lagrangian that are responsible for these phenomena.
The understanding of final state interactions is very important for the success of this program.
# Electrostatically-Driven Granular Media: Phase Transitions and Coarsening

## Abstract

We report an experimental and theoretical study of electrostatically driven granular material. We show that the charged granular medium undergoes a hysteretic first order phase transition from the immobile condensed state (granular solid) to a fluidized dilated state (granular gas) with a changing applied electric field. In addition we observe a spontaneous precipitation of dense clusters from the gas phase and subsequent coarsening – coagulation of these clusters. Molecular dynamics simulations show qualitative agreement with the experimental results.

Despite extensive study over the preceding decade, a fundamental understanding of the dynamics of granular materials still poses a challenge for physicists and engineers. Peculiar properties of granular materials can be attributed to strong contact interactions and inelastic collisions between grains. Fascinating collective behavior appears when small particles acquire an electric charge and respond to competing long-range electromagnetic and short range contact forces. The electrostatic excitation of granular media offers unique new opportunities compared to the traditional vibration techniques which have been developed to explore granular dynamics. It enables one to deal with extremely fine powders which are not easily controlled by mechanical methods. Fine particles are more sensitive to electrostatic forces which arise through particle friction or space charges in the particle environment. Their large surface-to-volume ratio amplifies the effect of water or other surfactants. These effects intervene in the dynamics, causing agglomeration, charging, etc., making mechanical experiments uncontrollable. Electrostatic driving makes use of these bulk forces, and allows not only for the removal of these “side effects,” but also for control by long-range electric forces. In this Letter we report an experimental and theoretical study of electrostatically driven granular material. It is shown that the charged granular medium undergoes a first order phase transition from the immobile condensed state (granular solid) to a fluidized dilated state (granular gas) with a changing applied electric field. A spontaneous precipitation of dense clusters from the gas phase, and subsequent coarsening – coagulation of these clusters – is observed in a certain region of electric field values. We find that the rate of coarsening is controlled by the amplitude and frequency of the applied electric field. We have also performed molecular dynamics simulations of electrostatically driven particles. These simulations show qualitative agreement with the experiments. The experimental cell is shown in Fig. 1. Conducting particles are placed between the plates of a large capacitor which is energized by a constant or alternating electric field. To provide optical access to the cell, the upper conducting plate is made transparent. We used $`4\times 6`$ cm capacitor plates with a spacing of 1.5 mm and 35 $`\mu`$m copper powder. The field amplitude varied from 0 to 10 kV/cm and the frequency over the interval 0 to 250 Hz. The number of particles in the cell was about $`10^7`$. The experiments were performed both at atmospheric pressure and in vacuum ($`5\times 10^{-6}`$ Torr). When the conducting particles are in contact with the capacitor plate they acquire a surface charge.
As the magnitude of the electric field in the capacitor exceeds the critical value $`E_1`$, the resulting (upward) electric force overcomes the gravitational force $`mg`$ ($`m`$ is the mass of the particle, $`g`$ is the acceleration due to gravity) and pushes the charged particles upward. When the grains hit the upper plate, they deposit their charge and fall back. Applying an alternating electric field $`E=E_0\mathrm{sin}(2\pi ft)`$ and adjusting its frequency $`f`$, one can control the particle elevation by effectively turning the particles back before they collide with the upper plate. The phase diagram is shown in Fig. 2. We have found that at amplitudes of the electric field above a second threshold value, $`E_2>E_1`$, the granular medium forms a uniform gas-like phase (granular gas). This second field $`E_2`$ is 50–70% larger than $`E_1`$ in nearly the whole range of the parameters used. In the field interval $`E_1<E<E_2`$, a remarkable phenomenon analogous to coalescence dynamics in systems exhibiting first order phase transitions was observed. Upon decreasing the field below $`E_2`$, the gas phase loses its stability and small clusters of immobile particles surrounded by the granular gas form. These clusters then evolve via coarsening dynamics: small clusters disappear and large clusters grow. Eventually the clusters assume an almost perfect circular form (Figs 3a-c). In the process of coarsening the electric current also changes in time, due to the decrease in the number of particles in the gas phase. A close-up image of one of the clusters is shown in Fig. 3a. Surprisingly, we can even see a larger particle near the center of the cluster which served as the nucleation point at the early stages of the aggregation. Due to its increased mass, the large particle is the first to become immobile in the lower field. After a very long time ($`t\sim 30000`$ sec) a single cluster containing about $`10^6`$ grains survives. At the final stage a dynamic equilibrium between the granular solid and the surrounding gas persists – not all the particles join the last cluster. For the cell at atmospheric pressure (open cell) we found that both fields $`E_1`$ and $`E_2`$ grow as a function of frequency for large $`f`$ and show non-monotonic behavior for $`f\lesssim 12`$ Hz. This indicates a characteristic time of the order of $`100`$ msec. We suggest that cohesion may be responsible for this relatively large time. Indeed, due to the humidity of the air a surface coating should exist, requiring a characteristic time $`\tau`$ for the grain to detach from the capacitor plate. In order to reduce the cohesion we evacuated the cell to $`5\times 10^{-6}`$ Torr. As demonstrated in Fig. 2, the frequency dependence is indeed substantially reduced and becomes almost flat. Small oscillations in the dependence at low frequency are probably due to a residual coating on the particles which does not completely evaporate in vacuum. We have measured the number of clusters $`N`$ and the averaged radius of the clusters $`R`$ as a function of time $`t`$ (see Fig. 4), using images taken in time-lapse by a digital camera and then processed on a computer. The values of $`R`$ for both open and evacuated cells are consistent with a power law $`R\propto\sqrt{t}`$. However, there is a substantial difference in the behavior of $`N`$. In open cells (no vacuum) we observed slow coarsening, $`N\propto 1/\sqrt{t}`$, whereas the coarsening for the evacuated cell is much faster, $`N\propto 1/t`$.
Remarkably, for the evacuated cell the exponents for $`N`$ and $`R`$ coincide with the exponents for two-dimensional bi-stable systems in the interface-controlled regime of Ostwald ripening. We speculate that the slow-down in coarsening for open cells can be a result of cohesion between the particles and with the capacitor plate. This cohesion reduces the mobility of particles at the edges of the clusters, resulting in “pinning” of the front between the granular solid and gas. Let us compare the forces exerted on an isolated particle and on a particle which is held within a cluster at the same field. The force between an isolated sphere and the capacitor plate in contact can be found as a limit of the problem of two spheres when the radius of one sphere goes to infinity. This problem was contemplated by several outstanding physicists, including Kelvin, Poisson, Liouville and Kirchhoff. Building on this classical technique, we arrive at the force $`F_e`$ in question:

$$F_e=ca^2E^2,$$ (1)

where $`a`$ is the radius of the sphere and $`E`$ is the field in the capacitor. The constant $`c=\zeta(3)+1/6\simeq 1.36`$ comes from summing the infinite series derived for the two-sphere problem. The electric force $`F_e`$ has to counterbalance the gravitational force $`G=4/3\,\pi\rho g a^3`$, where $`\rho`$ is the density of the material. Comparing $`F_e`$ and $`G`$ we find the first critical field $`E_1`$:

$$E_1=\sqrt{\frac{4\pi\rho g a}{3\cdot 1.36}}.$$ (2)

Our theory indicates no frequency dependence of $`E_1`$. For the parameters of our experiments we obtain $`E_1=2.05`$ kV/cm. The experimental critical field is $`E_1\simeq 2.4`$ kV/cm. This discrepancy seems reasonable since we did not take into account additional molecular and/or contact forces, which would increase the critical field. If the spheres are close to each other, the surface charge will redistribute: each sphere in the cluster acquires a smaller charge than that of an individual sphere, due to screening of the field by its neighbors. An exact derivation of the force acting on a particle within the cluster is not available at present. An upper bound can be obtained by replacing the square lattice of spheres of radius $`a`$ by a lattice of squares with area $`4a^2`$. Since the charge density of the corresponding flat layer is $`\sigma=E/4\pi`$, the total electric force on the square is $`F_2=4a^2\sigma E/2`$. The ratio of the fields needed to lift an individual particle ($`E_1`$) and a particle in the square lattice ($`E_2`$) is $`E_2/E_1=\sqrt{2\pi\cdot 1.36}=2.92`$. For a close-packed hexagonal lattice one obtains a slightly higher value, $`E_2/E_1\simeq 3.14`$. As one sees, the ratio of the critical fields is independent of particle size, density, and the frequency $`f`$. This is consistent with the experimental findings, see Fig. 2, although it exceeds the experimental value by a factor of 2. A more accurate account of the surface shape will improve the ratio.
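As a quick cross-check of these estimates (a back-of-the-envelope sketch in Python, Gaussian units; the copper density and grain radius below are our assumed values for a 35 $`\mu`$m powder), Eqs. (1)–(2) reproduce the quoted numbers:

```python
import math

# Assumed parameters: copper density, grain radius (35 um diameter powder)
rho = 8.96        # g/cm^3
g   = 981.0       # cm/s^2
a   = 17.5e-4     # cm (grain radius)
c_  = 1.36        # zeta(3) + 1/6, from the image-charge series

# Eq. (2): field needed to lift an isolated grain (Gaussian units)
E1_statvolt = math.sqrt(4 * math.pi * rho * g * a / (3 * c_))  # statvolt/cm
E1_kV = E1_statvolt * 0.29979                                   # 1 statvolt/cm ~ 0.3 kV/cm
print(f"E1 ~ {E1_kV:.2f} kV/cm")      # ~2.06 kV/cm, cf. the quoted 2.05 kV/cm

# Ratio of lifting fields, square-lattice screening estimate
print(f"E2/E1 ~ {math.sqrt(2 * math.pi * c_):.2f}")   # ~2.92
```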
We have also performed molecular dynamics simulations of conducting spheres in an applied electric field. Several simplifications have been implemented: the sphere polarization was neglected; the charge was assumed to be at the center of the sphere; all collisions were assumed to be inelastic. These simplifications are justified by the fact that particle-particle collisions are rather rare, in contrast to the deep vibrated granular layers studied previously. Interactions between the spheres are treated as being between point charges if the distance is smaller than the plate spacing. For larger distances we assumed no interaction, due to screening of the far charge field by other particles (compare with Debye screening). In our molecular dynamics simulations we implemented explicitly that the charge acquired by each sphere from the conducting plate decreases gradually with the number of nearest neighbors: it is maximal for an isolated sphere and decreases by a factor of two for a sphere inside the cluster. We simulated 3000 copper spheres of $`35`$ $`\mu`$m diameter in a domain of $`7\times 7`$ mm ($`200\times 200`$ particle diameters) with periodic boundary conditions. The capacitor spacing was $`1.5`$ mm, as in the experiment. Although a quantitative correspondence between the molecular dynamics simulations and the experiment was not attempted (the number of particles and the size of the system are too small), the simulations turn out to be in qualitative agreement with the experiment. We obtained a gas phase at high field levels, and spontaneous formation of clusters and coarsening at lower field levels, see Fig. 5. However, detailed simulations with a much larger number of particles and a more realistic account of sphere polarization effects are necessary to achieve quantitative agreement with experiment. We have reported a hysteretic first-order phase transition accompanied by coarsening behavior in an electrostatically-driven conducting granular medium. The origin of the coarsening dynamics and hysteresis is the screening of the electric field in dense particle clusters. Our results show a surprisingly high sensitivity of the phase boundaries to surface coating of the grains due to humidity. Simplified molecular dynamics simulations with conducting particles demonstrate qualitative agreement with the experiment: the existence of two critical fields, and coarsening. We thank Sid Nagel, Leo Kadanoff, Thomas Witten, Lorenz Kramer and Baruch Meerson for useful discussions. This research is supported by US DOE, grant W-31-109-ENG-38, and by NSF, STCS #DMR91-20000.
# High-energy accelerators above pulsar polar caps

## Abstract

Similar to the terrestrial collision accelerators of $`e^{\pm}`$, another kind of accelerator operates above a positively or negatively charged pulsar polar cap. In the case of pulsars with magnetic axis parallel (anti-parallel) to the rotational axis, relativistic $`e^{+}`$ ($`e^{-}`$) with Lorentz factor $`\gamma\sim 10^6`$ hit the electrons in the polar caps. These scenarios are investigated both for pulsars being BSSs (bare strange stars) and for pulsars being NSs (neutron stars). Such a study may be valuable for differentiating NSs and BSSs observationally.

High-energy electron-positron collision in the laboratory, where the energy in the center of mass ($`E_{\mathrm{cm}}`$) is about a GeV, has been successfully used to study the structure of elementary particles and their interactions. For anti-pulsars, when a backflowing $`e^{+}`$ hits an electron in the polar cap, $`E_{\mathrm{cm}}`$ is also about a GeV: $`E_{\mathrm{cm}}=0.511\,\mathrm{MeV}\,\sqrt{2\gamma}\sim 1`$ GeV for $`\gamma\sim 10^6`$. Therefore, the physics in the polar cap accelerators does not go beyond that in the laboratory. For rotation-powered pulsars, the rotation energy is converted into kinetic and radiative energy via an induced electric field around the magnetized rotator; hence, copious $`e^{\pm}`$ in the open field lines will be accelerated in different directions. The outflow produces a power law radiation, while the backflow heats the polar cap. Therefore, nearly half the loss of rotation energy, $`\dot{E}=9.6\times 10^{30}P_1^{-4}R_6^6B_{12}^2`$ ergs/s, is in outflow and backflow. In fact, relativistic backflow particles radiate away part of their energy before reaching the cap. Considering the polar gap and outer gap accelerations, one can find the residual energy of a charged particle striking the cap [1] to be $`\simeq 5.9`$ ergs (thus $`\gamma\sim 10^6`$). With a Goldreich-Julian current bombardment, the polar cap will receive energy at a rate [1] $`E_{\mathrm{rate}}\simeq 8.1\times 10^{30}P_1^{-5/3}B_{12}R_6^3`$ ergs/s, which is of the order of $`\dot{E}`$. The cap has a radius of $`r_\mathrm{p}=1.45\times 10^4R_6^{3/2}P_1^{-1/2}`$ cm, thus the energy flux received is $`F_\mathrm{p}=1.8\times 10^{22}B_{12}P_1^{-2/3}\,\mathrm{ergs}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$. It was believed that NSs and SSs (strange stars) could be differentiated observationally by studying their global cooling behaviors. However, it is found that SSs cool significantly more rapidly than NSs within the first $`\sim`$30 yr after birth [2], which makes NSs and SSs almost indistinguishable based only on the global thermal radiation. Now that pulsars may be BSSs (see, e.g., Xu, Qiao, & Zhang in these proceedings), and the materials in the polar caps of NSs and BSSs are very different, we suggest studying the local thermal behavior in the polar caps in order to distinguish NSs and SSs. Such a suggestion avoids involving us in the detailed microphysical processes in the interiors of NSs and SSs.

BSSs: have cooler polar caps? As a polar cap is hotter than the other parts of a pulsar’s surface, heat flows from the cap to the equator. If pulsars are NSs, such heat flow is negligible, and the electromagnetic shower produced by incident particles can almost entirely be converted into thermal radiation and re-radiated. If there is no thermal conductivity, the temperature in the cap is $`T_0=(\frac{F_\mathrm{p}}{\sigma})^{1/4}\simeq 4.22\times 10^6`$ K.
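As a numerical sanity check of this estimate (a short sketch in Python, using the fiducial values $`P_1=B_{12}=R_6=1`$ quoted above; illustrative only):

```python
sigma_SB = 5.67e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
F_p = 1.8e22          # erg s^-1 cm^-2, flux received by the cap (P_1 = B_12 = R_6 = 1)

# Blackbody re-radiation temperature of the polar cap, T0 = (F_p / sigma)^(1/4)
T0 = (F_p / sigma_SB) ** 0.25
print(f"T0 ~ {T0:.2e} K")   # ~4.22e6 K, matching the value quoted above
```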
The coefficient of thermal conductivity, mainly due to the transport of heat by electrons in the NS surface, is given by [3] $`\kappa^{\mathrm{NS}}=3.8\times 10^{14}\rho_5^{4/3}\,\mathrm{ergs}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-1}\,\mathrm{K}^{-1}`$ ($`\rho_5`$ is the density in units of $`10^5`$ g cm$`^{-3}`$), which is nearly independent of the details of the lattice. The temperature gradient in the crust of an NS is $`\nabla T^{\mathrm{NS}}\sim\frac{T_0}{r_\mathrm{p}}`$, and the heat flow is then $`H^{\mathrm{NS}}\sim\kappa^{\mathrm{NS}}\nabla T^{\mathrm{NS}}r_\mathrm{p}^n\sim\kappa^{\mathrm{NS}}T_0\sqrt{r_\mathrm{p}}\sim 1.9\times 10^{23}`$ ergs/s $`\ll E_{\mathrm{rate}}`$ ($`n=1`$–$`2`$ when the thermal conduction geometry near the polar cap is considered; we let $`n=1.5`$ here). However, if pulsars are BSSs, the electron number density in the surface of a BSS, $`n^{\mathrm{BSS}}=1.5\times 10^{34}\,\mathrm{cm}^{-3}`$, is much larger than that of an NS, $`n^{\mathrm{NS}}=2.8\times 10^{28}\rho_5\,\mathrm{cm}^{-3}`$. Hence the coefficient of thermal conductivity in the BSS surface is $`\kappa^{\mathrm{SS}}\sim(\frac{n^{\mathrm{SS}}}{n^{\mathrm{NS}}})^{4/3}\kappa^{\mathrm{NS}}\simeq 4.35\times 10^7\kappa^{\mathrm{NS}}`$, although there is no detailed calculation of $`\kappa^{\mathrm{SS}}`$ in the literature. Therefore the corresponding heat flow in a BSS, $`H^{\mathrm{SS}}\sim 8.3\times 10^{30}`$ ergs/s, is comparable with $`E_{\mathrm{rate}}`$ and cannot be neglected. Such a heat flow will result in a much lower polar cap temperature in bare strange stars.

Anti-pulsars with $`\mathbf{\Omega}\cdot\mathbf{B}>0`$. Electrons are pulled out, while positrons bombard the cap. Such a process is very similar to that in the laboratory collision of $`e^{\pm}`$: $`\mathrm{e}^{+}\mathrm{e}^{-}\to\mathrm{f}\bar{\mathrm{f}}\ (\mathrm{e}^{+}\mathrm{e}^{-}),\ \gamma\gamma`$ ($`\mathrm{f}\bar{\mathrm{f}}`$ is a fermion pair). A Gaussian-like peak in the spectrum at $`\sim`$0.511 MeV could be observed. However, if pulsars are NSs, additional bremsstrahlung may produce a lower-energy continuum spectrum superposed on the peak. Therefore, NSs and BSSs may be distinguished by spectral observations.

Pulsars with $`\mathbf{\Omega}\cdot\mathbf{B}<0`$. Positively charged particles are pulled out, while an electron shower pours onto the cap. If pulsars are NSs, the pulled ions, being Fe and/or He nuclei (the composition of the cap surface is uncertain), might result in the formation of a line spectrum in 0.04 keV – 10 keV, according to the atomic energy levels $`E_n=-13.6\,\mathrm{eV}\,Z^2\frac{1}{n^2}`$. No such lines can be observed if pulsars are BSSs. Observationally, there is no convincing signature of ion lines in rotation-powered pulsars. Hence, such pulsars may tend to be BSSs. Perhaps the future Astro-E mission (with a fine-resolution spectroscope covering 0.4–10 keV) will make it clear whether heavy-ion (such as iron) lines can or cannot be seen from rotation-powered pulsars.

Conclusion. Two conclusions: 1. The polar cap of a bare strange star may be cooler than that of a neutron star; 2. Spectral observations might tell whether a pulsar is a bare strange star or a neutron star.

We thank Dr. J.L. Han, Mr. B.H. Hong, and other members of our pulsar group. This work is supported by NSFC (No. 19803001, No. 19910211260-570-A03), by the Climbing project of China, by the Doctoral Program Foundation of Institution of Higher Education in China and by the Youth Foundation of PKU.

[1] Wang, F., et al. 1998, ApJ, 498, 373
[2] Schaab, C., et al. 1997, ApJ, 480, L111
[3] Jones, P. B. 1978, MNRAS, 184, 807
# Introduction to causal sets: an alternate view of spacetime structure

## I Introduction

When studying general relativity, students often find that two of the most compelling topics, cosmology and black holes, lead directly to the need for a theory of quantum gravity. However, not much is said about quantum gravity at this level. Those who search for more information will find that most discussions center around the two best known approaches: canonical quantization and superstring theories. This paper seeks to introduce the problem of quantum gravity in the context of a third view, causal sets, which has emerged as an important concept in the pursuit of quantum gravity. The causal set idea is an hypothesis for the structure of spacetime. This structure is expected to become apparent at extremely tiny lengths and extremely short times. This hypothesis, in its current form, has grown out of an attempt to find an appropriate structure for a physical theory of quantum gravity. There is a long tradition of the importance of causality in relativity. Many of the issues faced when confronting the problem of quantum gravity bring considerations of time and causality to the forefront. There are many approaches to quantum gravity. Usually, these approaches go through cycles of rapid progress, during which an approach will appear very promising, followed by (sometimes long) periods of slow, or no, progress. The causal set approach has gone through these cycles as well, although to a lesser extent than some, with early work by Finkelstein, Myrheim, ’t Hooft, and Sorkin. The recent upswing of interest in causal sets was ignited by a paper written in the late 1980s. Since the hope is that causal sets will lead to a working model for quantum gravity, it seems appropriate to begin by describing the problem of quantum gravity in general. The basic ideas behind the causal set approach and some of the progress that has been forged in recent years will be discussed in sections III - VI.

## II The problem of quantum gravity

### A What is quantum gravity?

The question of what one means by “quantum gravity” is not a simple question to answer, for the obvious reason that we do not yet have a complete understanding of quantum gravity. Hence, the answers to this question are both short and long, and perhaps as numerous as the number of approaches attempting to solve the problem. Most physicists agree that by “gravity” we mean Einstein’s theory of general relativity (and possibly a few modified versions of it). General relativity most popularly interprets gravitation as a result of the geometrical structure of spacetime. The geometrical interpretation fits because the theory is formally cast in terms of a metrical structure $`g_{\mu \nu }`$ on a manifold $`M`$. There is somewhat less agreement on the meaning of “quantum.” At first glance, it seems odd that there would be less agreement on the aspect with which we have much more experience. On the other hand, however, the fact that we have only been able to perform weak-field experimental tests of general relativity leaves us with much less information to debate. Our experience with quantum mechanics tells us that the deviations from classical physics it describes are important when dealing with size scales on the order of magnitude of an atom and smaller. Is there a natural size scale at which we expect the predictions of general relativity to be inaccurate, requiring a new, more fundamental theory?
The scale at which theories become important is set by the values of the fundamental parameters related to the processes being described. For example, the speed of light $`c`$ is the fundamental constant that determines the velocity scale for which relativistic effects (special relativity) are appreciable. Likewise, Planck’s constant $`\hbar`$, among others, sets the scale for systems that must be described by quantum mechanics. The fundamental constants that are relevant to a theory of quantum gravity are the speed of light, Planck’s constant, and the universal gravitation constant $`G`$. These three quantities combine to form the length and time scales at which classical general relativity breaks down:

$$\begin{array}{c}\ell_P=\left(G\hbar/c^3\right)^{1/2}\approx 10^{-35}\text{ m}\\ t_P=\ell_P/c\approx 10^{-44}\text{ s,}\end{array}$$ (1)

where $`\ell_P`$ is called the Planck length and $`t_P`$ is called the Planck time.
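For concreteness, the Planck scales of Eq. (1) follow from a one-line computation (Python, SI units; an illustrative check added here):

```python
import math

G    = 6.674e-11    # m^3 kg^-1 s^-2
hbar = 1.055e-34    # J s
c    = 2.998e8      # m/s

l_P = math.sqrt(G * hbar / c**3)   # Planck length
t_P = l_P / c                      # Planck time

print(f"l_P ~ {l_P:.2e} m")   # ~1.6e-35 m
print(f"t_P ~ {t_P:.2e} s")   # ~5.4e-44 s
```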
Size, however, is only one part of what makes a theory “quantum.” Consider, once again, the atom. If we dig deeper than just size and ask why quantum effects are important for atoms, the answer is that a relatively small number of states are occupied (or excited). This fact is more commonly stated in reverse, as a correspondence principle requiring that quantum mechanics merge with classical mechanics in the limit of large quantum numbers, that is, a large number of occupied states. It is this latter point that truly characterizes quantum behavior. A quantum theory must therefore enumerate and describe the states of a system in such a way that the known classical behavior emerges for large numbers of states. What, then, do we mean by “quantum gravity?” In this paper, my working definition is that

> quantum gravity is a theory that describes the structure of spacetime and the effects of spacetime structure down to sub-Planckian scales for systems containing any number of occupied states.

In the above definition, the “effects of spacetime structure” include not only the phenomenon of gravitational attraction, but also any implications that the spacetime dynamics will have for other interactions that take place within this structure.

### B Why do we need quantum gravity?

#### 1 The Einstein field equations

The content of the Einstein field equations of general relativity,

$$G_{\mu \nu }=\kappa T_{\mu \nu },$$ (2)

suggests the need for a quantum mechanical interpretation of gravity. Here $`G_{\mu \nu }`$ is the Einstein curvature tensor representing the curvature of spacetime, $`T_{\mu \nu }`$ is the energy-momentum tensor representing the source of gravitation, while $`\kappa `$ is just a coupling constant between the two. The energy-momentum content of spacetime is already known to be a quantum operator from other fundamental theories such as quantum electrodynamics (QED). We have confidence in the reliability of this interpretation because, despite the fact that QED may have flaws (discussed below), it has led to extremely accurate agreement between theory and experiment. Since energy-momentum is a quantum operator whose macroscopic version is intimately related to macroscopic spacetime structure, it seems a good working hypothesis that its quantum mechanical version should correspond to a structure of space and time appropriate in the quantum mechanical regime.

#### 2 Black holes

A black hole is the final stage in the evolution of massive stars. Black holes are formed when the nuclear energy source at the core of a star is exhausted. Once the nuclear fuel has run out, the star collapses. If the remaining mass of the star is sufficiently high, no known force can halt the collapse. General relativity predicts that the stellar mass will collapse to a state of zero extent and infinite density – a singularity. In this singular state there is no spatial extent, time has no meaning, and the ability to extract any physical information is lost. This prediction may be a message which tells us that a quantum theory of gravity is needed if we are to truly understand the inner workings of black holes. While it may be obvious that processes deep within black holes must be treated in the framework of quantum gravity, it is less obvious that processes well away from the singularity not only require quantum gravity, but may also provide important clues to the form a theory of quantum gravity should take. In 1975 Hawking showed that black holes radiate thermally with a blackbody spectrum. This finding, together with a previous conjecture that the area of a black hole’s event horizon can be interpreted as its entropy, has shown that the laws of black hole mechanics are identical to the laws of thermodynamics. This equivalence only comes about if we accept the identification of the area of the black hole (actually $`1/4`$ of it) as its entropy. In traditional thermodynamics the concept of entropy is best understood in terms of discrete quantum states; not surprisingly, attempts to better understand the reasons for this area identification using classical gravity fail. It is widely expected that only a quantum mechanical approach will produce a satisfactory explanation. For this reason, black hole entropy is an important topic for most approaches to quantum gravity.
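These thermodynamic identifications can be made concrete with the standard formulas $`T_H=\hbar c^3/(8\pi GMk_B)`$ and $`S=k_B A/(4\ell_P^2)`$; the sketch below is a textbook evaluation in Python (SI units), added here for illustration rather than taken from the original article:

```python
import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23
M_sun = 1.989e30    # kg

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole (K)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bh_entropy(M):
    """Bekenstein-Hawking entropy S = k_B * A / (4 l_P^2), returned as S / k_B."""
    r_s = 2 * G * M / c**2            # Schwarzschild radius
    A = 4 * math.pi * r_s**2          # horizon area
    return A * c**3 / (4 * G * hbar)  # S / k_B (dimensionless)

print(f"T_H(M_sun)   ~ {hawking_temperature(M_sun):.1e} K")   # ~6e-8 K
print(f"S/k_B(M_sun) ~ {bh_entropy(M_sun):.1e}")              # ~1e77
```

The enormous entropy, about $`10^{77}`$ elementary units for a solar-mass hole, is precisely the count of microscopic states that a quantum theory of gravity is expected to explain.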
With the above successful unifications behind us, we are left with the present situation of having several fundamental forces known as the strong, weak, and electromagnetic interactions, as well as gravitation. Given the benefits that we have reaped from past unifications, it seems natural that the search for deeper insight through unification should continue. The recent success of the electroweak theory has confirmed the value of this search. There are now some seemingly consistent models for the unification of the strong and electroweak theories. The very fact that these interactions can be mathematically unified in a manner consistent with macroscopic observations suggests that a truly physical unification exists. Gravity is the only fundamental force yet to have a consistent quantum mechanical theory. It is widely believed that until such a quantum mechanical description of gravity is attained, placing our understanding of gravity on the same level as that of the other interactions, true unification of gravity with the other forces will not be possible.

### C The incompatibility between general relativity and quantum mechanics

For all of the interactions except gravity, our present theoretical understanding of physics is such that systems interact and evolve within a background spacetime structure. This background structure serves to tell us how to measure distances and times. In general relativity it is the spacetime structure itself that we must determine. This spacetime structure, then, acts both as the background structure for gravitational interactions and as the dynamical phenomenon giving rise to this interaction. In general relativity the structure of spacetime is determined by the Einstein field equations (2). These field equations are, however, purely classical in that they do not meet the requirements of a quantum theory as discussed in section II.A above. The breakdown of general relativity near the singularity of a black hole, or more accurately, the prediction of a singularity inside of a black hole, is just one of many examples. Given that general relativity was formulated prior to quantum mechanics, the fact that it does not meet quantum mechanical requirements is not surprising. The dual role of the metric tensor makes formulating a theory of quantum gravity very different from the formulations of the other interactions. In quantum gravity we must determine the spacetime structure that acts as background to the classical structure of space and time that we have used to understand all other phenomena. Furthermore, this ultimate background to classical spacetime structure must also be dynamic because it is this dynamics that will describe quantum gravity just as the dynamics of classical spacetime describes general relativity. This latter point is the key reason for the incompatibility between general relativity and quantum mechanics. All of our successful experience is with quantum dynamics on a spacetime structure, but we have had very little success handling the quantum dynamics of spacetime structure. This incompatibility challenges some of the most fundamental concepts in physics. In field theory, we take as the source of the field some distribution $`T_{\mu \nu }`$ (of charge, matter, energy, etc.). However, the concept of a spatial distribution of charge, for example, has no meaning apart from the knowledge of the spatial structure. Without the rules for how to measure relative positions we cannot define the spatial distribution of anything.
In quantum mechanics, Schrödinger’s equation describes the time evolution of the wave function $`\mathrm{\Psi }(𝐫,t)`$. The concept of time evolution, however, requires an existing knowledge of temporal flow which comes from the spacetime structure. As a final example, consider the concept of interaction. We generally think of interactions in a manner intimately connected with causality: an interaction precedes and causes an effect. Causality, however, is a concept that can only be defined once the structure of spacetime is known.

## III The causal set hypothesis

The above discussion implies the need for a spacetime structure that will underpin the classical spacetime structure of general relativity. The causal set hypothesis proposes such a structure. Causal sets are based on two primary concepts: the discreteness of spacetime and the importance of the causal structure. Below I discuss these two founding concepts in more detail.

### A Spacetime is discrete

The causal set hypothesis assumes that the structure of spacetime is discrete rather than the continuous structure that physics currently employs. Discrete means that lengths in three-dimensional space are built up out of a finite number of elementary lengths $`\ell _e`$, which represents the smallest allowable length in nature, and the flow of time occurs in a series of individual “ticks” of duration $`t_e`$, which represents the shortest allowable time interval. The idea that something which appears continuous is actually discrete is very common in physics and everyday life. Any bulk piece of matter is made up of individual atoms so tightly packed that the object appears continuous to the naked eye. Likewise, any motion picture is constructed of a series of snapshots so rapidly paced that the movie appears to flow continuously. Why hold a similar view of spacetime? There are many arguments for a discrete structure. The most familiar ones are related to electrodynamics. Here, I will first try to motivate the idea of discreteness by considering the electromagnetic spectrum. The electromagnetic spectrum gives us the range for the frequencies and wavelengths of electromagnetic radiation, or, photons. Of the many interesting aspects of this spectrum, here let’s focus on the fact that it is a continuous spectrum of infinite extent. Current theory predicts that the allowed frequencies of photons extend continuously from zero to infinity. The relation $`E=h\nu `$ implies that a photon of arbitrarily large frequency has arbitrarily large energy. The local conservation of energy suggests that such infinite energy photons should not exist. If one adopts the (somewhat controversial) view that what cannot physically exist should not be predicted by theory, there ought to be a natural cutoff of the electromagnetic spectrum corresponding to a maximum allowed frequency. Since the frequency of a photon is the inverse of its period, a discrete temporal structure provides a natural cutoff in that the minimum time interval $`t_e`$ implies a maximum frequency $`\nu _{\mathrm{max}}=1/t_e`$. Because of relativity any argument for discrete time is also an argument for discrete space. Nevertheless, a similar argument for discrete space can be given in terms of wavelength. The electromagnetic spectrum, being continuous, allows arbitrarily small wavelengths. The de Broglie relation $`p=h/\lambda `$ implies that a wavelength arbitrarily close to zero corresponds to a photon of arbitrarily large momentum.
The local conservation of momentum suggests that photons of infinite momentum should not exist. The minimum length implied by a discrete spacetime structure provides a natural cutoff for the wavelength $`\lambda _{\mathrm{min}}=\ell _e`$. In terms of QED, this problem can be seen in the fact that the infinite perturbation series requires the existence of all the photons in the electromagnetic spectrum. In this sense QED predicts the existence of these photons of infinite energy-momentum. However, it is widely believed that the perturbation series diverges. This divergence is generally overlooked because QED is only a partial theory and not a complete theory of elementary interactions (see Ch. 1 of Ref. 1). Therefore, we use QED under the assumption that it is accurate for the phenomenon it was created to describe and that some aspect of a more fundamental theory will eventually solve its divergence problem. The causal set idea proposes that the aspect of more fundamental theory that will naturally solve this divergence problem in QED is a discrete structure of spacetime.

### B Causal structure contains geometric information

Spacetime consists of events $`x^\mu =(x^0,x^1,x^2,x^3)=(ct,x,y,z)`$, that is, points in space at various times. At some events physical processes take place. Processes that occur at one event can only be influenced by those occurring at another event if it is possible for a photon to reach the latter event from the earlier one. To capture the essence of what one means by “causal structure,” consider the example of the flat Minkowski space of special relativity. In flat spacetime, two events are said to be causally connected and their spacetime separation $`ds^2=g_{\mu \nu }dx^\mu dx^\nu `$ is called timelike if it is positive (using a $`+---`$ signature) and null if it is zero. Two events are not causally connected if it is not possible for a photon from one event to arrive at the other; these events cannot influence each other and in such cases $`ds^2`$ is negative and referred to as spacelike. When we speak of the causal structure of a spacetime, we mean the knowledge of which events are causally connected to which other events. For a more general discussion of causal structure see reference. It is now well established that the causal structure of spacetime alone determines almost all of the information needed to specify the metric and therefore the gravitational field tensor. The causal structure determines the metric up to an overall multiplicative function called a conformal factor. We say that two metrics $`g_{\mu \nu }`$ and $`\stackrel{~}{g}_{\mu \nu }`$ are conformally equivalent if $`\stackrel{~}{g}_{\mu \nu }=\mathrm{\Omega }^2g_{\mu \nu }`$, where $`\mathrm{\Omega }`$ is a smooth positive function. Since all conformally equivalent spacetimes have the same causal structure, the causal structure itself nearly specifies the metric.

### C Causal sets

Lacking the conformal factor from the causal structure essentially means that we lack the sense of scale which allows for quantitative measures of lengths and volumes in spacetime. However, if spacetime is discrete, the volume of a region can be determined by a procedure almost as simple as counting the number of events within that region. Therefore, if nature endows us with discrete spacetime and an arrow for time (the causal structure), we have, in principle, enough information to build complete spacetime metric tensors for general relativity.
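To make the notion of causal connection concrete, here is a minimal sketch (our own illustration, not part of the original discussion) that classifies the separation of two events in flat Minkowski space using the $`+---`$ signature adopted above; the event coordinates and function names are ours.

```python
# A minimal sketch: classifying the separation of two events in flat
# Minkowski space with the (+,-,-,-) signature used above.
def interval(e1, e2, c=1.0):
    """Return ds^2 between events e = (t, x, y, z)."""
    dt = e2[0] - e1[0]
    dx, dy, dz = (e2[i] - e1[i] for i in (1, 2, 3))
    return (c * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

def causally_connected(e1, e2):
    """Timelike (ds^2 > 0) or null (ds^2 = 0) separation means a photon
    can link the events; spacelike (ds^2 < 0) means it cannot."""
    return interval(e1, e2) >= 0.0

print(causally_connected((0, 0, 0, 0), (2, 1, 0, 0)))  # True  (timelike)
print(causally_connected((0, 0, 0, 0), (1, 2, 0, 0)))  # False (spacelike)
```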
This combination of discreteness and causal structure leads directly to the idea of a causal set as the fundamental structure of spacetime. A causal set may be defined as a set of events for which there is an order relation $`\preceq `$ obeying four properties:

1. transitivity: if $`x\preceq y`$ and $`y\preceq z`$ then $`x\preceq z`$;
2. non-circularity: if $`x\preceq y`$ and $`y\preceq x`$ then $`x=y`$;
3. finitarity: the number of events lying between any two fixed events is finite;
4. reflexivity: $`x\preceq x`$ for any event in the causal set.

The first two properties say that this ordered set is really a partially ordered set, or poset for short. Specifically, non-circularity amounts to the exclusion of closed timelike curves, more commonly known as time machines. The finitarity of the set ensures that the set is discrete. The reflexivity requirement is present as a convenience to eliminate the ambiguity of how an event relates to itself. In the present context of using a poset to represent spacetime, reflexivity seems reasonable in that the spacetime separation between an event and itself cannot be negative, requiring an event to be causally connected to itself. We can combine these statements to give the following definition:

> A causal set is a locally finite, partially ordered set.

## IV The development of causal set theory

With the conceptual foundation of causal sets clearly laid, let us now turn to the issue of developing a physical and mathematical formalism which I loosely refer to as causal set theory. The development of causal set theory is still far from complete. In fact, it is even less developed than some of the other approaches to quantum gravity such as superstring theory and canonical quantization mentioned previously. For this reason, quantum gravity by any approach is an excellent theatre for fine tuning ideas about how theory construction should proceed from founding observations, hypotheses, and assumptions. Hence, to better understand current thought on how the development of causal set theory should proceed, I will first discuss some of the ideas on theory development in general that have influenced causal set research.

### A Taketani stages

In 1971 Mituo Taketani used Newtonian mechanics as a prototype to illustrate his ideas on the development of physical theories. According to Taketani, physical theories are developed in three stages that he referred to as the phenomenological, substantialistic, and essentialistic stages. The phenomenological stage is where the initial observations occur that place the existence and knowledge of the new phenomenon (or “substance”) on firm standing. In Ref. this stage in the development of Newtonian mechanics is associated with the work of Tycho Brahe who observed the motions of the planets with unprecedented accuracy. Within the substantialistic stage, rules that describe the new substance are discovered; that is, we come to recognize “substantial structure” in the new phenomenon. These rules would then play an important role in helping to shape the final understanding. For Newtonian mechanics, Taketani associated this stage with the work of Johannes Kepler who provided the well known three laws of planetary motion. In the essentialistic stage, “the knowledge penetrates into the essence” of the new phenomenon. This is the final stage when the full theory of this new substance is known within appropriate limits of validity. Of course, the work of Isaac Newton himself represents this stage.
Even though Taketani used Newtonian mechanics, there are many examples to which his ideas apply. Sakata used Taketani’s philosophy to discuss the development of quantum mechanics. Similarly, the development of electromagnetic theory falls neatly into Taketani’s framework. The phenomenological stage of electromagnetism could be associated with the work of Benjamin Franklin and William Gilbert. The substantialistic stage is nicely represented by the work of Michael Faraday and Hans Oersted. The essentialistic stage is then represented by James Maxwell’s completion of his famous equations. Taketani realized that his three stages will not always apply identically to the development of all physical ideas. Since we currently know of no observed phenomena whose explanation clearly requires a complete theory of quantum gravity, it is clear that the problem of quantum gravity is not based on experimental observations. As a result of this fact, the development of causal set theory largely skips the phenomenological stage. Therefore, think of the development of causal sets as a two-step process corresponding roughly to Taketani’s substantialistic and essentialistic stages. As a matter of terminology, note that the substantialistic stage, in which phenomenology is described, plays the role of kinematics in Newtonian mechanics, while in the essentialistic stage the full dynamics is developed. Consequently, I will refer to the two processes in the development of causal set theory as “kinematics” and “dynamics.”

### B Causal set kinematics

The kinematic stage concerns gaining familiarity with and further developing the mathematics needed to describe causal sets. This mathematics primarily falls under the combinatorial mathematics of partial orders. These techniques are not part of traditional physics training and have, therefore, not been widely used to analyze physical problems. Moreover, research in this branch of mathematics has been performed largely by pure mathematicians; the problems they have chosen to tackle are generally not those that are of most interest to physicists. What we need from the kinematic stage are the mathematical techniques for how to extract the geometrical information from the causal order (i.e., working out the correspondence between order and geometry) and how to do the counting of causal set elements that will allow us to determine spacetime volumes. For an important, specific example of where causal set kinematics is needed, consider the correspondence principle between spacetime as a causal set and macroscopic spacetime. General relativity tells us that spacetime is a four-dimensional Lorentzian manifold. If causal sets comprise the true structure of spacetime they must produce a four-dimensional Lorentzian manifold in macroscopic limits (such as a large number of causal set elements). The mathematics of how we can see the manifold within the causal set is a kinematic issue that must be addressed. On the natural length scale of the causal set one does not expect to see anything like a manifold. Trying to see a manifold on this scale is like trying to read this article under a magnification that resolves the individual dots of ink making up the letters. To discern the structure of these dots we look upon them at a significantly different scale than the size scale of the dots. Similarly, we need a mathematical change-of-scale in order to extract the manifold structure from the causal set. This change-of-scale is called coarse-graining.
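As a toy illustration of these kinematic ideas, the following sketch builds a causal set by randomly placing points into a block of two-dimensional Minkowski space, reads off the order from the light cones, and then coarse-grains by keeping a random subset. It anticipates the sprinkling construction described in the next paragraph; all function names and parameter choices here are our own assumptions, not standard causal set tooling.

```python
import random

def sprinkle(n, tmax=10.0, xmax=10.0):
    """Place n events uniformly into a block of 2D Minkowski space.
    (In a true Poisson sprinkling n itself would be Poisson-distributed.)"""
    return [(random.uniform(0, tmax), random.uniform(0, xmax)) for _ in range(n)]

def precedes(a, b, c=1.0):
    """Causal order from the light cones: a precedes b if b lies in or on
    the future light cone of a (timelike or null separation, later time)."""
    dt, dx = b[0] - a[0], b[1] - a[1]
    return dt >= 0 and (c * dt) ** 2 - dx ** 2 >= 0

def coarse_grain(events, keep=0.5):
    """Keep each element with probability `keep`; the order induced on the
    surviving subset is the coarse-grained causal set."""
    return [e for e in events if random.random() < keep]

events = sprinkle(200)
relations = sum(precedes(a, b) for a in events for b in events if a != b)
print(f"{len(events)} elements, {relations} related pairs")
print(f"coarse-grained to {len(coarse_grain(events))} elements")
```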
Some insight into this issue can be gained by looking at the reverse problem of forming a causal set from a given metric manifold $`(M,g_{\mu \nu })`$. This is achieved by randomly sprinkling points into $`M`$. The order relation of this set of points is then determined by the light-cone structure of $`g_{\mu \nu }`$. Since we need to ensure that every region of the spacetime is appropriately sampled, that is, that highly curved regions are represented as well as nearly flat regions, the sprinkling is carried out via a Poisson process such that the number of points $`N`$ sprinkled into any region of volume $`V`$ is directly proportional to $`V`$. Using a two-dimensional Minkowski spacetime, Fig. 1 provides a picture of such a sprinkling at unit density $`\rho =N/V=1`$. Since the causal sets generated by random sprinklings are only expected to be coarse-grained versions of the fundamental causal set, their length and time scales are not expected to be the fundamental length and time of nature. Nevertheless, these studies of random sprinklings are important because the founding ideas behind causal sets in Sec. III suggest that a manifold $`(M,g_{\mu \nu })`$ emerges from the causal set $`C`$ if and only if an appropriately coarse-grained version of $`C`$ can be produced by a unit density sprinkling of points into $`M`$. This shows us that an important problem in the development of causal set kinematics is to determine how to appropriately form a coarse-graining of a causal set.

### C Causal set dynamics

The final stage in the development of causal set theory is the stage in which we come to understand the full dynamics of causal sets. In this stage we devise a formalism for how to obtain physical information from the behavior of the causal set and how this behavior governs our sense of space and time. Here we require something that might be considered a quantum mechanical analog to the Einstein field equations (2). Since our present framework for physical theories is based on a spacetime continuum, our experience is of limited use to us in this effort. Despite this limitation, one commonly used approach stands out as the best candidate for a dynamical framework for causal sets. This method is most commonly known as the path-integral formulation of quantum mechanics. This path-integral technique seems best suited to causal sets because at its core conception (a) it is a spacetime approach in that it deals directly (rather than indirectly) with events; that is, we propagate a system from one event configuration to another; and (b) it works on a discrete spacetime structure. As currently practiced, the path-integral approach determines the propagator $`U(a^\mu ,b^\nu )`$ by taking all paths between the events $`a^\mu `$ and $`b^\nu `$ in a discretized time and summing over these paths using an amplitude function $`\mathrm{exp}(iS/\hbar )`$:

$$U(a^\mu ,b^\nu )\propto \sum _{\mathrm{all}\text{ }\mathrm{paths}}\mathrm{exp}(iS/\hbar ),$$ (3)

where $`S`$ is the action for a given path. Continuous spacetime enters in at two places. In a continuum there are an infinite number of paths between two events; “all paths” are generated by integrating over all intermediate points between the two events, and this is the “integral” part of path-integration. Since each of these paths was discretized into a finite number of points $`N`$, the second place where continuous spacetime is recovered is to take the limit $`N\rightarrow \infty `$.
In a discrete setting the number of paths and the number of points along the paths are truly finite. Hence, the final limit as well as the integration to generate all paths are not performed. Since causal sets would not require integration, calling this the “path-integral formulation” seems inappropriate. This method essentially says that the properties of a system in a given event configuration depend on a sum over all the possible paths throughout the history of this system. Therefore, the alternative name for this technique, the sum-over-histories approach, is better suited for causal sets. The word “histories” is particularly appropriate because, as mentioned earlier, we take the arrow of time to be fundamental. There are several key issues that must be resolved before a sum-over-histories formulation of causal sets can be completed. One such issue involves the need to identify an amplitude function for causal sets analogous to the role played by $`\mathrm{exp}(iS/\hbar )`$. Secondly, the required formulation must do more than just propagate the system because the entire dynamics must come from this formalism. The procedure outlined above is presently inadequate for these purposes; a modified, or better, generalized sum-over-histories method must be developed. Perhaps the most significant advance along the dynamical front is the recent development by Rideout and Sorkin of a general classical dynamics for causal sets. In this model, causal sets are grown sequentially, one element at a time, under the governance of reasonable physical requirements for causality and discrete general covariance. When a new element is introduced, in going from an $`n`$-element causal set to an $`(n+1)`$-element causal set, it is associated with a classical probability $`q_n`$ of being unrelated to any existing element according to $`\frac{1}{q_n}=\sum _{k=0}^{n}\binom{n}{k}t_k`$. The primary restriction is that the $`t_k\ge 0`$; hence, there is a lot of freedom with which different models can be explored. This framework has the potential to teach us much about the needed mathematical formalism for causal sets, the effects of certain physical conditions, and the classical limit of the eventual quantum dynamics for causal sets.

## V Illustrative examples

To illustrate some of the points discussed above I present the 72-element causal set shown in Fig. 2. The black dots represent the elements of the causal set. The graphical form in which this causal set is shown is known as a spacetime-Hasse diagram. The term “Hasse diagram” is borrowed from the mathematical literature on posets. Figure 2 is also a spacetime diagram in the usual sense. The solid lines in the figure are causal links, i.e., lines are only drawn between events that are causally related; however, for clarity only those relations that are not implied by transitivity (the links) are explicitly shown. The causal set shown has 15 time steps as enumerated along the right side of the figure. Hence, the first time step at the bottom shows 7 “simultaneous” events.

### A Volume

As discussed in Sec. III, the causal set hypothesis is partially founded on the fact that the causal structure of spacetime contains all of the geometric information needed to specify the metric tensor up to a conformal factor which prevents us from determining volumes. One example which shows how, in principle, volumes might be extracted from the causal set has been discussed by Bombelli.
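As a small numerical sketch of this growth law (our own illustration, not part of the Rideout-Sorkin paper), the code below evaluates $`q_n`$ for the special choice $`t_k=t^k`$, sometimes associated with transitive percolation, assuming the normalization $`t_0=1`$; for this choice the binomial theorem gives the closed form $`q_n=(1+t)^{-n}`$, which the code confirms.

```python
from math import comb

def q(n, t_seq):
    """Probability that the (n+1)-th element is unrelated to all n existing
    elements, from 1/q_n = sum_k C(n,k) t_k (with t_0 = 1 here)."""
    return 1.0 / sum(comb(n, k) * t_seq[k] for k in range(n + 1))

# For t_k = t**k the sum is (1+t)^n by the binomial theorem,
# so q_n should equal (1+t)**(-n).
t = 0.3
for n in range(1, 6):
    t_seq = [t ** k for k in range(n + 1)]
    assert abs(q(n, t_seq) - (1 + t) ** (-n)) < 1e-12
    print(n, q(n, t_seq))
```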
Gerard ’t Hooft has shown that, in Minkowski space, the volume $`V`$ of spacetime bounded by two causally connected events $`a`$ and $`b`$ is given by

$$V=\frac{\pi \tau _{ab}^4}{24},$$ (4)

where $`\tau _{ab}`$ is the proper time between events $`a`$ and $`b`$. We can apply this expression to causal sets by relating $`\tau _{ab}`$ to the number of links in the longest path between $`a`$ and $`b`$. The volume $`V`$ is then identified with the number of elements in this region of spacetime. Spacetime is dynamic, however. The above procedure of counting the number of links between two events is subject to (perhaps large) statistical fluctuations. Therefore, while it is believed that the expected proper time $`\langle \tau \rangle `$ should be proportional to the number of links, the precise relationship between them is yet unknown. Attempts to numerically determine this relationship via computer simulations remain inconclusive.

### B Coarse-graining

While the labeling of Fig. 2 clearly suggests that it is a two-dimensional example of a causal set, note that our physical sense of dimensionality (given us by relativity) is intimately related to the manifold concept. Although the causal set in Fig. 2 looks suspiciously regular it may not be immediately obvious whether or not this set can be embedded into any physically viable two-dimensional spacetime. As an example of one form of coarse-graining, we can look at this causal set on a time scale twice as long as its natural scale. Figure 3a is a coarse-grained version of the causal set in Fig. 2 for which only even time steps are shown and Fig. 3b is a subset of Fig. 2 showing events that only occur at odd time steps. In both cases we find causal sets that clearly can be embedded into two-dimensional Minkowski space. In a realistic situation this fact would suggest that the fundamental causal set just might represent a physically discrete spacetime.

### C Dynamics

As stated above, a sum-over-histories dynamical law for causal sets requires the identification of an amplitude function. As an example, one could start by considering an amplitude modeled after the familiar amplitude of the continuum path-integral formulation in Eq. (3), i.e., $`\mathrm{exp}(i\beta R)`$. Here, $`\beta `$ plays the role of $`1/\hbar `$ and $`R`$ plays the role of the action $`S`$. In quantum field theory the oscillatory nature of this amplitude causes problems that are sometimes bypassed by performing a continuation from real to imaginary time (often referred to as a Wick rotation). Similarly, it is convenient here to consider the case $`\beta \rightarrow i\beta `$, giving an amplitude

$$A=\mathrm{exp}(-\beta R),$$ (5)

where we can take $`R`$, for example, to be the total number of links in the causal set. This model is interesting because the amplitude $`A`$, which acts as a weight in the sum over causal sets, has the form of a Boltzmann factor $`\mathrm{exp}(-E/kT)`$. The mathematical structure of the causal set dynamics then becomes very similar to that of statistical mechanics. Studies of the statistical mechanics of certain partially ordered sets have been performed. In the thermodynamic limit, these studies exhibit phase transitions corresponding to successively increasing numbers of layers of the lattice causing the poset to appear more and more continuous. In this analogy, the thermodynamic limit corresponds to one macroscopic limit of causal set theory in which the number of causal set elements goes to infinity.
Such results, therefore, are somewhat suggestive that an appropriate choice of amplitude might indeed lead to the expected kind of continuum limit. Another, more detailed, example of a quantum dynamics for causal sets that exhibits the kind of interference effects that are absent from the classical dynamics mentioned previously can be found in Ref. .

## VI Concluding Remarks

In this paper we have tried to communicate the primary motivation and key ideas behind the causal set hypothesis. The causal set hypothesis has emerged as an important approach to quantum gravity, having been found to impact other approaches such as the spin network formalism. Adding to the importance of the causal set approach is the fact that it led Sorkin to predict a non-zero cosmological constant nearly a decade ago. In light of recent findings in the astrophysics community, this result perhaps marks the only prediction to come out of quantum gravity research that might be testable in the foreseeable future. Before a final causal set theory can be constructed, much work remains. Studies of random sprinklings in both flat and curved spacetimes, the mathematics of partial orders, and the behavior of fields that sit on a discrete substructure are just a few areas of needed investigation. Enough progress on causal sets has been made, however, to establish the causal set hypothesis as a very promising branch of quantum gravity research. Those with further interest can find more detailed discussion of causal sets in Refs. .

## VII Acknowledgments

The author would like to thank Drs. R. D. Sorkin, J. P. Krisch, N. L. Sharma, and E. Behringer for helpful suggestions. Support of the Michigan Space Grant Consortium is gratefully acknowledged.

## VIII Figure Captions

Figure 1. A causal set formed from a unit density sprinkling of points in two-dimensional Minkowski space.

Figure 2. A spacetime-Hasse diagram of a two-dimensional causal set. The dots represent the 72 events in this set and the lines are causal links between events. This causal set has 15 time steps as enumerated along the right-hand-side of the figure.

Figure 3. Coarse-grainings of the causal set in Fig. 2 formed by doubling the time scale. (a) The subset formed by the odd time steps only. (b) The subset formed by the even time steps only. Both coarse-grainings are more clearly embeddable in two-dimensional Minkowski space than the full poset in Fig. 2.
# The coplanar $`n`$-body gravitational lens: a general theorem

## 1 Introduction

Since the earliest studies of Einstein (1915, 1916) on the deflection of light by gravitational fields in general relativity, a vast research and associated publication record in gravitational lensing, both observational and theoretical, has blossomed. The simple single mass point lens model is already the main paradigm for the understanding of the many ($`\sim 10^2`$) lensing events detected by the various monitoring and follow-up programmes under way. Binary lenses are also the focus of much effort, not only for their own intrinsic interest, but also for their potential as a tool, particularly in the detection of planetary systems. More sophisticated lens models, including many-body systems, are of interest too because of their possible galactic and cosmological applications.

## 2 The lens equation(s)

In an orthogonal rectangular co-ordinate system (x,y,z), with the origin at the observer, and the z-axis as optical axis, let a source be located in a (source) plane S at (angular size) distance $`D_\mathrm{S}`$, and let there be $`n`$ masses located in a (lens) plane L at distance $`D_\mathrm{L}`$. Rays from a source point at angular co-ordinates ($`x_s,y_s`$), on crossing the lens plane are so deflected as to produce an image point (of which there may be more than one) at angular co-ordinates ($`x_i,y_i`$). The lensing conditions for small angles, see for example Paczyński (1996), are given by:

$$x_i-x_s=\alpha _x(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S};y_i-y_s=\alpha _y(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S}$$

where $`\alpha _x`$ and $`\alpha _y`$ are the angular deflections in the directions of the x and y axes respectively. In a two-dimensional vector notation, with $`\vec{r}=(x,y)`$ and $`\vec{\alpha }=(\alpha _x,\alpha _y)`$ the above conditions may be written as the single equation:

$$\vec{r}_i-\vec{r}_s=[(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S}]\vec{\alpha }$$

For a spherically symmetric mass $`M`$ the Einstein (1916) gravitational deflection is $`\alpha =4GM/c^2R_0`$, where $`R_0`$ is the radial distance of closest approach or impact parameter. In the (rectilinear) ray optics approximation, the image position is deemed to be where the ray crosses the lens plane. Thus for each lensing mass $`M_l`$ located at $`\vec{r}_l`$ ($`l=1,n`$), setting $`\vec{r}_{il}=\vec{r}_i-\vec{r}_l`$, $`r_{il}=|\vec{r}_{il}|`$, and $`R_0=D_\mathrm{L}r_{il}`$, the contribution to the deflection is $`\vec{\alpha }_l=\alpha _0(\vec{r}_i-\vec{r}_l)/r_{il}^2`$ where $`\alpha _0=4GM_l/c^2D_\mathrm{L}`$. The lens equation for the $`n`$-body system then takes the form:

$$\vec{r}_i-\vec{r}_s=[(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S}]\sum _{l=1}^{n}\vec{\alpha }_l=\sum _{l=1}^{n}m_l(\vec{r}_i-\vec{r}_l)/r_{il}^2$$

with $`m_l=[(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S}D_\mathrm{L}]4GM_l/c^2=(M_l/M)r_\mathrm{E}^2`$, where $`M=\sum M_l`$ while $`r_\mathrm{E}=[4GMc^{-2}(D_\mathrm{S}-D_\mathrm{L})/D_\mathrm{S}D_\mathrm{L}]^{1/2}`$ is the Einstein Ring angular radius for mass $`M`$. Without any loss of generality one may set $`m=\sum m_l=1`$, thus expressing all the angular measures in units of $`r_\mathrm{E}`$.
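As a minimal numerical sketch of this mapping (our own illustration, in units of $`r_\mathrm{E}`$, with names of our choosing), the code below computes the source position corresponding to a given image position for $`n`$ point masses, and checks the familiar single-lens limit in which an image on the Einstein ring maps back to a source directly behind the lens.

```python
import numpy as np

# Given an image position r_i and masses m_l at positions r_l (angles in
# units of the Einstein radius, sum(m_l) = 1), return the source position.
def source_position(r_i, masses, lens_pos):
    r_i = np.asarray(r_i, dtype=float)
    r_s = r_i.copy()
    for m, r_l in zip(masses, lens_pos):
        d = r_i - np.asarray(r_l, dtype=float)
        r_s -= m * d / np.dot(d, d)
    return r_s

# Single unit mass at the origin: an image on the Einstein ring (|r_i| = 1)
# maps back to a source exactly behind the lens, r_s = 0.
print(source_position([1.0, 0.0], [1.0], [[0.0, 0.0]]))  # ~ [0, 0]
```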
The lens equation may alternatively be written (Witt, 1990) in complex variable notation with $`z=x+iy`$ and $`\overline{z}=x-iy`$ as:

$$z_i-z_s=\sum _{l=1}^{n}m_l(z_i-z_l)/|z_i-z_l|^2=\sum _{l=1}^{n}m_l/(\overline{z}_i-\overline{z}_l)$$

## 3 Image amplification

Since the intensity of radiation is invariant along a ray the brightness of the lensed source relative to that of the unlensed source (the magnification or amplification) is the ratio of the areas or solid angles subtended at the observer, given by $`A=J^{-1}`$, where $`J`$ is the Jacobian (determinant) of the transformation mapping $`\vec{r}_i`$ into $`\vec{r}_s`$:

$$J=|\partial \vec{r}_s/\partial \vec{r}_i|=|\partial (x_s,y_s)/\partial (x_i,y_i)|=|\delta _{xy}-a_{xy}|$$

with $`a_{xx}=-a_{yy}`$, $`a_{xy}=a_{yx}`$ and:

$$a_{xx}=\sum _{l}m_l\rho _l^{-4}(\eta _l^2-\xi _l^2);a_{xy}=-2\sum _{l}m_l\rho _l^{-4}\xi _l\eta _l$$

where $`\xi _l=x_i-x_l`$, $`\eta _l=y_i-y_l`$, and $`\rho _l=r_{il}`$. Then:

$$J=(1-a_{xx})(1+a_{xx})-a_{xy}^2=1-a_{xx}^2-a_{xy}^2=1-K$$

$$K=[\sum _{l}m_l\rho _l^{-4}(\eta _l^2-\xi _l^2)]^2+[2\sum _{l}m_l\rho _l^{-4}\xi _l\eta _l]^2$$

Writing $`K=K_1+K_2`$, expanding each term, and separating the ’diagonal’ and ’non-diagonal’ parts of each, gives $`K_1=K_{11}+K_{12}`$ and $`K_2=K_{21}+K_{22}`$ where:

$$K_{11}=\sum _{l}[m_l\rho _l^{-4}(\eta _l^2-\xi _l^2)]^2;K_{21}=\sum _{l}[2m_l\rho _l^{-4}\xi _l\eta _l]^2$$

$$K_{12}=2\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}(\eta _j^2-\xi _j^2)(\eta _k^2-\xi _k^2)$$

$$K_{22}=8\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}\xi _j\eta _j\xi _k\eta _k$$

Adding the ’diagonal’ terms $`K_{11}`$ and $`K_{21}`$ gives:

$$K_3=[\sum _{l}m_l\rho _l^{-2}]^2-2\sum _{j<k}m_jm_k\rho _j^{-2}\rho _k^{-2}=K_{31}+K_{32}$$

where $`K_{32}`$ may also be written:

$$K_{32}=-2\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}(\xi _j^2+\eta _j^2)(\xi _k^2+\eta _k^2)$$

It then follows that:

$$K_{12}+K_{32}=-4\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\xi _j^2\eta _k^2+\xi _k^2\eta _j^2]$$

whereupon, adding the remaining ’non-diagonal’ term $`K_{22}`$ gives $`K_4=K_{12}+K_{22}+K_{32}`$ with:

$$K_4=-4\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\xi _j\eta _k-\xi _k\eta _j]^2$$

so that finally $`K=K_{31}+K_4`$ or:

$$K=[\sum _{l}m_l\rho _l^{-2}]^2-4\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\xi _j\eta _k-\xi _k\eta _j]^2$$

The same result may naturally be obtained, perhaps more easily, from the complex variable formulation of the lens equation. There it may be shown (Witt, 1990) that:

$$J=(\partial z_s/\partial z_i)^2-|\partial z_s/\partial \overline{z}_i|^2=1-|\sum _{l}m_l\overline{\zeta }_l^{-2}|^2$$

where $`\zeta _l=z_i-z_l`$ and $`\rho _l=|\zeta _l|`$.
Thus, with $`K=1-J`$:

$$K=[\sum _{j}m_j\zeta _j^{-2}][\sum _{k}m_k\overline{\zeta }_k^{-2}]=\sum _{j}\sum _{k}m_jm_k[\zeta _j\overline{\zeta }_k]^{-2}$$

Expanding and separating the ’diagonal’ and ’non-diagonal’ parts gives $`K=K_5+K_6`$ with:

$$K_5=\sum _{l}m_l^2\rho _l^{-4}=[\sum _{l}m_l\rho _l^{-2}]^2-2\sum _{j<k}m_jm_k\rho _j^{-2}\rho _k^{-2}$$

$$K_6=\sum _{j<k}m_jm_k[\zeta _j^{-2}\overline{\zeta }_k^{-2}+\zeta _k^{-2}\overline{\zeta }_j^{-2}]$$

Putting $`K_5=K_{51}+K_{52}`$, where $`K_{51}=K_{31}`$, the second term $`K_{52}`$ may be rewritten:

$$K_{52}=-2\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\zeta _j\overline{\zeta }_j\zeta _k\overline{\zeta }_k]=K_{32}$$

while $`K_6`$ may be rewritten as:

$$K_6=\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\overline{\zeta }_j^2\zeta _k^2+\zeta _j^2\overline{\zeta }_k^2]$$

Now adding the ’non-diagonal’ terms $`K_{52}`$ and $`K_6`$ gives:

$$K_7=\sum _{j<k}m_jm_k\rho _j^{-4}\rho _k^{-4}[\overline{\zeta }_j\zeta _k-\zeta _j\overline{\zeta }_k]^2=K_4$$

since $`[\overline{\zeta }_j\zeta _k-\zeta _j\overline{\zeta }_k]=-2i[\eta _j\xi _k-\xi _j\eta _k]`$, thus giving the result $`K=K_{51}+K_7=K_{31}+K_4`$ as before. It may be remarked here that $`[\xi _j\eta _k-\xi _k\eta _j]=[\vec{r}_{ij}\times \vec{r}_{ik}]`$ so that:

$$K_4=-4\sum _{j<k}m_jm_k[\nabla _i(r_{ij}^{-1})\times \nabla _i(r_{ik}^{-1})]^2$$

where the two-dimensional gradient $`\nabla _i=\partial /\partial \vec{r}_i`$.

## 4 The general binary lens

In the case $`n=2`$ the double summation reduces to a single term with $`j=1,k=2`$ giving:

$$J=1-[m_1\rho _1^{-2}+m_2\rho _2^{-2}]^2+4m_1m_2\rho _1^{-4}\rho _2^{-4}[C_{12}]^2$$

where $`C_{12}=[\xi _1\eta _2-\xi _2\eta _1]`$ now reduces to:

$$C_{12}=[(x_1y_2-x_2y_1)+x_i(y_1-y_2)-y_i(x_1-x_2)]$$

which may be simplified by an appropriate choice of co-ordinates. Taking the origin anywhere on the line joining the two masses, then $`y_1/x_1=y_2/x_2`$ and $`x_1y_2-x_2y_1=0`$ gives $`C_{12}=[(x_2-x_1)y_i-(y_2-y_1)x_i]`$ which is the same function of $`x_i`$ and $`y_i`$ for given values of $`x_2-x_1=2x_0`$ and $`y_2-y_1=2y_0`$, namely $`C_{12}=2[x_0y_i-y_0x_i]`$. The further simplification of setting $`y_0=0`$, so that the x-axis lies along the line joining the two masses, with $`y_1=y_2=0`$, gives $`C_{12}=2x_0y_i`$. There still remains the choice of the value of $`x_1`$ or $`x_2`$, for which there are three ’natural’ possibilities: Case (a) - Origin mid-way between the masses with $`x_1=-x_0,x_2=+x_0`$; Case (b) - Origin at mass $`M_1`$ with $`x_1=0,x_2=+2x_0`$; Case (c) - Origin at mass $`M_2`$ with $`x_1=-2x_0,x_2=0`$.

### 4.1 The critical curves

The critical curves, or loci of infinite amplification, are given by the condition $`J=0`$, which on rationalising takes the form $`D_{12}=0`$ with:

$$D_{12}=\rho _1^4\rho _2^4J=\rho _1^4\rho _2^4-[m_1\rho _2^2+m_2\rho _1^2]^2+16m_1m_2x_0^2y_i^2$$

in agreement with Schneider & Weiss (1986).
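The closed binary form of $`J`$ can be cross-checked against the complex-variable expression $`J=1-|\sum _lm_l\overline{\zeta }_l^{-2}|^2`$; the sketch below (our own numerical check, with arbitrary masses and separation) does this at random image positions for case (a), where $`C_{12}=2x_0y_i`$.

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, x0 = 0.3, 0.7, 0.8            # masses at (-x0, 0) and (+x0, 0)

def J_complex(z):
    s = m1 / (np.conj(z) + x0) ** 2 + m2 / (np.conj(z) - x0) ** 2
    return 1.0 - abs(s) ** 2

def J_closed(z):
    x, y = z.real, z.imag
    r1sq, r2sq = (x + x0) ** 2 + y ** 2, (x - x0) ** 2 + y ** 2
    C12 = 2.0 * x0 * y                 # C_12 for masses on the x-axis
    return (1.0 - (m1 / r1sq + m2 / r2sq) ** 2
            + 4.0 * m1 * m2 * C12 ** 2 / (r1sq ** 2 * r2sq ** 2))

for _ in range(5):
    z = complex(*rng.uniform(-2, 2, 2))
    assert np.isclose(J_complex(z), J_closed(z))
print("closed form agrees with the complex-variable Jacobian")
```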
Adopting plane polar co-ordinates $`\vec{r}_i=(r_i,\theta _i)`$, then $`D_{12}=\sum _ma_m\mathrm{cos}^m(\theta _i)`$ where the coefficients $`a_m`$ depend on the choice of origin:

Case (a) - $`\vec{r}_i=(r_0,\theta _0)`$:

$$a_0=(r_0^2+x_0^2)^4-(r_0^2+x_0^2)^2(m_1+m_2)^2+16m_1m_2r_0^2x_0^2$$

$$a_1=4r_0x_0(r_0^2+x_0^2)(m_1^2-m_2^2)$$

$$a_2=-4r_0^2x_0^2[2(r_0^2+x_0^2)^2+(m_1+m_2)^2]$$

$$a_3=0;a_4=16r_0^4x_0^4$$

In the symmetric case where $`m_1=m_2=1/2`$, $`a_1=0`$, the term in $`\mathrm{cos}\theta _i`$ drops out, and the equation reduces to a quadratic equation in $`\mathrm{cos}^2\theta _i`$, as given by Equation 9a of Schneider & Weiss (1986), with the solution:

$$\mathrm{cos}^2\theta _0=[1+2(r_0^2+x_0^2)^2\pm \{1+8(r_0^4+x_0^4)\}^{1/2}]/(8r_0^2x_0^2)$$

Case (b) - $`\vec{r}_i=(r_1,\theta _1)`$, with $`r_1=\rho _1`$:

$$a_0=(r_1^2+4x_0^2)^2(r_1^4-m_1^2)-2r_1^2(r_1^2+4x_0^2)m_1m_2+16r_1^2x_0^2m_1m_2-r_1^4m_2^2$$

$$a_1=8r_1x_0[r_1^2m_1m_2-(r_1^2+4x_0^2)(r_1^4-m_1^2)]$$

$$a_2=16r_1^2x_0^2[r_1^4-m_1^2-m_1m_2]$$

Case (c) - $`\vec{r}_i=(r_2,\theta _2)`$, with $`r_2=\rho _2`$:

$$a_0=(r_2^2+4x_0^2)^2(r_2^4-m_2^2)-2r_2^2(r_2^2+4x_0^2)m_1m_2+16r_2^2x_0^2m_1m_2-r_2^4m_1^2$$

$$a_1=8r_2x_0[(r_2^2+4x_0^2)(r_2^4-m_2^2)-r_2^2m_1m_2]$$

$$a_2=16r_2^2x_0^2[r_2^4-m_2^2-m_1m_2]$$

from which it will be evident that Case (b) and Case (c) are equivalent with respect to the interchange of the labels 1 and 2 together with the change of sign of $`x_0`$. Both therefore correct several errors in Equation 9b of Schneider and Weiss (1986), after allowing for their relative disposition of the binary masses. In the limit $`x_0=0`$ the equations for all three cases degenerate to $`r_i^4-(m_1+m_2)^2=0`$, giving $`r_i=(m_1+m_2)^{1/2}`$, so that the critical curve is the Einstein Ring for the combined masses of the binary. On the other hand in the limit $`x_0=\infty `$, where either $`\rho _1=\infty `$ or $`\rho _2=\infty `$, the condition $`J=0`$ reduces to $`\rho _l^4-m_l^2=0,(l=1,2)`$, with the solution $`\rho _l=m_l^{1/2}`$ representing the now detached Einstein Rings around each mass.
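The case (a) coefficients can be verified numerically by comparing the polynomial $`\sum _ma_m\mathrm{cos}^m\theta _i`$ with $`D_{12}`$ evaluated directly from the geometry; the sketch below (our own consistency check, with random parameters) does exactly this.

```python
import numpy as np

rng = np.random.default_rng(2)

def D12_direct(m1, m2, x0, r0, th):
    x, y = r0 * np.cos(th), r0 * np.sin(th)
    r1sq = (x + x0) ** 2 + y ** 2      # rho_1^2, mass 1 at (-x0, 0)
    r2sq = (x - x0) ** 2 + y ** 2      # rho_2^2, mass 2 at (+x0, 0)
    return (r1sq ** 2 * r2sq ** 2 - (m1 * r2sq + m2 * r1sq) ** 2
            + 16.0 * m1 * m2 * x0 ** 2 * y ** 2)

def D12_poly(m1, m2, x0, r0, th):
    u, c = r0 ** 2 + x0 ** 2, np.cos(th)
    a0 = u ** 4 - u ** 2 * (m1 + m2) ** 2 + 16 * m1 * m2 * r0 ** 2 * x0 ** 2
    a1 = 4 * r0 * x0 * u * (m1 ** 2 - m2 ** 2)
    a2 = -4 * r0 ** 2 * x0 ** 2 * (2 * u ** 2 + (m1 + m2) ** 2)
    a4 = 16 * r0 ** 4 * x0 ** 4
    return a0 + a1 * c + a2 * c ** 2 + a4 * c ** 4

for _ in range(5):
    m1, x0, r0, th = rng.uniform(0.1, 0.9, 4)
    assert np.isclose(D12_direct(m1, 1 - m1, x0, r0, th),
                      D12_poly(m1, 1 - m1, x0, r0, th))
print("case (a) coefficients reproduce D_12")
```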
# NLTE Model Atmospheres for Central Stars of Planetary Nebulae

## 1. Introduction

During their evolution, the more massive post-AGB stars can reach extremely high effective temperatures: up to about 700 kK are predicted by Paczynski (1970) for a star with a remnant mass of 1.2 $`M_{\odot }`$. Realistic modeling of the emergent fluxes of these stars requires the consideration of all elements from hydrogen up to the iron group under NLTE conditions.

## 2. NLTE Model Atmospheres

The model atmospheres (plane-parallel, in hydrostatic and radiative equilibrium) are calculated using the code PRO2 (Werner 1986). All elements from hydrogen to the iron group are considered (Rauch 1997, Dreizler & Werner 1993, Deetjen et al. 1999). A grid of model atmosphere fluxes ($`T_{\mathrm{eff}}`$ = 50 – 1000 kK, $`\mathrm{log}g`$ = 5 – 9 (cgs), H – Ca, solar and halo abundances) is available on the WWW (http://astro.uni-tuebingen.de/rauch/flux.html).

## 3. Impact of light metals (F – Ca)

The high-energy model atmosphere fluxes strongly depend on the metal-line blanketing. The impact of H – Ca is shown in Fig. 1 (cf. Rauch 1997).

Figure 1. Comparison of NLTE model atmosphere fluxes with different elemental composition at solar abundances ($`T_{\mathrm{eff}}=155`$ kK, $`\mathrm{log}g=6.5`$). Note the drastic decrease of the flux level at wavelengths shorter than 100 Å if the metal-line blanketing of the light metals H-Ca is considered.

## 4. Impact of iron-group elements (Sc – Ni)

A detailed consideration of all line transitions of the iron-group elements, as tabulated in Kurucz (1996), is impossible. Thus, we employed an opacity sampling method in order to account for their absorption cross-sections. Their cross-sections are calculated with the newly developed Cross-Section Creation Package CSC (http://astro.uni-tuebingen.de/deetjen/csc.html). Radiative and collisional bound-bound line cross-sections are calculated from Kurucz’s line lists (1996; Fig. 2). Radiative and collisional bound-free photoionization cross-sections of Fe are calculated from Opacity Project data (Seaton et al. 1994).

Figure 2. Part of the radiative bound-bound cross-section $`\sigma _{4,6}`$ (band 6 to band 4) of Fe v at $`n_\mathrm{e}=10^{16}\mathrm{cm}^{-3}`$, considered in detail (520 360 frequency points within $`2.0\times 10^{15}\text{ Hz}\le \nu \le 6.8\times 10^{15}\text{ Hz}`$) and with our opacity sampling grid (973 frequency points)

All iron-group elements can be combined in one generic model atom. Its term scheme is typically divided into seven energy bands (Fig. 3, Dreizler & Werner 1993). The statistics of a typical generic model atom are summarized in Tab. 1.

Figure 3. Energy band structure of an iron-group model atom

In order to investigate the impact of the iron-group elements on the emergent flux, we used a H-Ca trunk model atom (Rauch 1997) and added Sc – Ni in form of a generic model atom (Tab. 1), including all available (experimental + theoretical) levels and lines from Kurucz’s list (1996). In Fig. 4 we show the additional impact of the iron-group elements.

Figure 4. Comparison of two NLTE model atmosphere fluxes without and with consideration of iron-group elements at solar abundances ($`T_{\mathrm{eff}}=155`$ kK, $`\mathrm{log}g=6.5`$). The additional iron-group line blanketing has a strong influence on the flux level at high energies close to the maximum flux level

## 5. Conclusions and future work

Emergent fluxes calculated from NLTE model atmospheres which include iron-group line blanketing show a drastic decrease of the flux level at high energies.
For a reliable analysis of UV/EUV and X-ray spectra of central stars of planetary nebulae, or the calculation of ionizing spectra from these (e.g. used as input for photoionization models), the consideration of all elements from hydrogen up to the iron group is mandatory. A detailed consideration of the metal-line blanketing with all available lines has an important influence on the spectrum. PRO2 is permanently updated in order to calculate state-of-the-art models for the analysis of the available spectra. In the near future this will include spherical geometry, element diffusion, and polarized radiation transfer. A new grid of model fluxes which includes a detailed line blanketing by Ca and by the iron group will soon be available on the WWW.

## Acknowledgment

This research is supported by the DLR under 50 OR 9705 5.

## References

Deetjen, J. L., Dreizler, S., Rauch, T., & Werner, K. 1999, A&A 348, 940

Dreizler, S., & Werner, K. 1993, A&A 278, 199

Kurucz, R. L. 1996, IAU Symp. 176, Kluwer, Dordrecht, p. 52

Paczynski, B. 1970, Acta Astr. 20, 47

Rauch, T. 1997, A&A 320, 237

Seaton, M. J., Yan, Y., Mihalas, D., & Pradhan, A. K. 1994, MNRAS 266, 805

Werner, K. 1986, A&A 161, 177
# Comments on ‘Existence of axially symmetric solutions in SU(2)-Yang Mills and related theories \[hep-th/9907222\]’

## Abstract

In \[hep-th/9907222\] Hannibal claims to exclude the existence of particle-like static axially symmetric non-abelian solutions in $`SU(2)`$ Einstein-Yang-Mills-dilaton theory. His argument is based on the asymptotic behaviour of such solutions. Here we disprove his claim by giving explicitly the asymptotic form of non-abelian solutions with winding number $`n=2`$.

Particle-like static axially symmetric solutions of Yang-Mills-dilaton (YMD) and Einstein-Yang-Mills-dilaton (EYMD) theory have been investigated numerically and analytically in recent years. The gauge potential, given in a singular gauge in , can locally be gauge transformed into a regular form . On intersecting neighbourhoods the regular gauge potentials can be gauge transformed into each other by regular gauge transformations. Hence these solutions are globally regular. Recently, Hannibal claimed to have shown that the static axially symmetric solutions discussed in are singular in the gauge field part. However, in he only observed that the gauge potential of refs. does not obey a set of sufficient regularity conditions. Repeating his claim in , and noting that the “singular” solutions can be locally gauge transformed into regular solutions as discussed in , Hannibal apparently uses the equality “solution = gauge potential”, despite the fact that the gauge potential is not uniquely defined and the same solution can be given in many different gauges. In Hannibal then turns to the question of existence of static axially symmetric solutions of YMD and EYMD theory. He presents as his results that i) “there exist only embedded abelian particle-like solutions” and ii) “the solutions constructed by Kleihaus and Kunz are shown to be gauge equivalent to these”. Particle-like solutions of EYMD and related theories – which are not embedded abelian solutions – have been investigated in the past decade by several authors (see e.g. ) and it is generally accepted that these solutions are genuine non-abelian solutions. For rigorous proofs of the existence of these solutions see e.g. . Concerning the static axially symmetric solutions, the argument of Hannibal relies on the fact that he was not able to find asymptotic solutions which possess the correct symmetries with respect to the discrete transformation

$$\mathrm{S}_{(z\to -z)}:\theta \to \pi -\theta .$$ (1)

However, he chooses a gauge in which this symmetry is lost for all non-trivial gauge field functions parameterizing the gauge potential. Apparently he takes this into account for one of the functions, but does not take it into consideration for the other two functions. Without taking into account the correct symmetry behaviour of the gauge field functions the results are not reliable. By giving a counterexample, we here disprove Hannibal’s claim that “there exist only embedded abelian particle-like solutions …” because asymptotic solutions for genuine non-abelian gauge potentials do not exist. This counterexample represents the correct asymptotic behaviour for a non-abelian static axially symmetric solution of EYMD theory.
Using polar coordinates ($`r`$, $`\theta `$, $`\phi `$), we parameterize the static axially symmetric $`su(2)`$ gauge potential as

$$A_\mu dx^\mu =\frac{1}{2g}\left\{\left[\frac{H_1}{r}dr+(1-H_2)d\theta \right]\tau _\phi ^n-n\mathrm{sin}\theta \left[H_3\tau _r^n+(1-H_4)\tau _\theta ^n\right]d\phi \right\},$$ (2)

where $`H_i`$ are functions of $`r`$ and $`\theta `$ and $`n`$ denotes the winding number. For convenience we set the gauge coupling constant $`g`$ equal to one in the following. The $`su(2)`$ matrices $`\tau _\phi ^n,\tau _r^n,\tau _\theta ^n`$ are defined in terms of Pauli matrices $`\tau _1,\tau _2,\tau _3`$ by

$$\tau _\phi ^n=-\mathrm{sin}(n\phi )\tau _1+\mathrm{cos}(n\phi )\tau _2,\tau _r^n=\mathrm{sin}\theta \tau _\rho ^n+\mathrm{cos}\theta \tau _3,\tau _\theta ^n=\mathrm{cos}\theta \tau _\rho ^n-\mathrm{sin}\theta \tau _3,$$ (3)

with $`\tau _\rho ^n=\mathrm{cos}(n\phi )\tau _1+\mathrm{sin}(n\phi )\tau _2`$. In order to compare with ref. we parameterize the functions $`H_i`$ as

$`H_1=\left[\stackrel{~}{F}_1\mathrm{sin}^2\theta +\left(nf(r)+\stackrel{~}{F}_2\right)\right]\mathrm{cos}\theta \mathrm{sin}^{|n|}\theta ,`$ (4)

$`(1-H_2)=\left[nf(r)+\stackrel{~}{F}_1\mathrm{sin}^2\theta \mathrm{cos}^2\theta -\left(nf(r)+\stackrel{~}{F}_2\right)\mathrm{sin}^2\theta \right]\mathrm{sin}^{|n|-1}\theta ,`$ (5)

$`H_3=\left[\left(f(r)+\stackrel{~}{F}_3\mathrm{sin}^2\theta \right)\mathrm{cos}\theta \mathrm{sin}^{|n|-1}\theta \right]\mathrm{sin}\theta +\stackrel{~}{F}_4\mathrm{sin}\theta \mathrm{cos}\theta `$ (6)

$`=F_3\mathrm{sin}\theta +F_4\mathrm{cos}\theta ,`$ (7)

$`(1-H_4)=\left[\left(f(r)+\stackrel{~}{F}_3\mathrm{sin}^2\theta \right)\mathrm{cos}\theta \mathrm{sin}^{|n|-1}\theta \right]\mathrm{cos}\theta -\stackrel{~}{F}_4\mathrm{sin}^2\theta `$ (8)

$`=F_3\mathrm{cos}\theta -F_4\mathrm{sin}\theta ,`$ (9)

where $`\stackrel{~}{F}_i`$ are continuous functions of $`r`$ and $`\theta `$ and $`F_3=(f(r)+\stackrel{~}{F}_3\mathrm{sin}^2\theta )\mathrm{cos}\theta \mathrm{sin}^{|n|-1}\theta `$, $`F_4=\stackrel{~}{F}_4\mathrm{sin}\theta `$ have been introduced for later convenience. Assuming that $`\stackrel{~}{F}_i`$ are regular functions of $`r^2`$ and $`\mathrm{sin}^2\theta `$, this parameterization of the ansatz guarantees that the gauge potential is regular on the $`z`$-axis ($`|z|>0`$) and that the functions $`H_1`$ and $`H_3`$ are odd under the transformation $`\mathrm{S}_{(z\to -z)}:\theta \to \pi -\theta `$, whereas the functions $`H_2`$ and $`H_4`$ are even. In order to guarantee regularity at the origin, additional conditions have to be imposed on the functions $`\stackrel{~}{F}_i`$. The gauge potential (2) is form invariant under abelian gauge transformations of the form

$$U=\mathrm{exp}\left\{i\mathrm{\Gamma }\tau _\phi ^n/2\right\},$$ (10)

where $`\mathrm{\Gamma }`$ is a function of $`r`$ and $`\theta `$. The functions $`H_i`$ transform like

$`H_1\to \widehat{H}_1=H_1-r\partial _r\mathrm{\Gamma },`$ (11)

$`H_2\to \widehat{H}_2=H_2+\partial _\theta \mathrm{\Gamma },`$ (12)

$`H_3\to \widehat{H}_3=\mathrm{cos}\mathrm{\Gamma }(H_3+\mathrm{cot}\theta )-\mathrm{sin}\mathrm{\Gamma }H_4-\mathrm{cot}\theta ,`$ (13)

$`H_4\to \widehat{H}_4=\mathrm{sin}\mathrm{\Gamma }(H_3+\mathrm{cot}\theta )+\mathrm{cos}\mathrm{\Gamma }H_4.`$ (14)

Following , we now fix the gauge such that $`\widehat{H}_2=1`$ and assume that $`H_2`$ is given in the form (5).
The gauge transformation function is given by

$$\mathrm{\Gamma }(r,\theta )=\int _{\theta _0}^\theta \left\{1-H_2(r,\theta ^{})\right\}d\theta ^{}.$$ (16)

Again following , we choose $`\theta _0=0`$, in order to maintain regularity on the positive $`z`$-axis ($`z>0`$). Then the gauge potential may diverge on the negative $`z`$-axis. Along the (positive) $`z`$-axis the function $`\widehat{H}_1`$ is still of the order $`O(\mathrm{sin}^{|n|}\theta )`$ and may be written locally as $`\widehat{H}_1=\overline{F}_1\mathrm{cos}\theta \mathrm{sin}^{|n|}\theta `$, with some function $`\overline{F}_1(r,\theta )`$. The gauge transformed function $`\widehat{F}_3`$ is now of order $`O(\mathrm{sin}^{|n|+1}\theta )`$ and may be written as $`\widehat{F}_3=\overline{F}_3\mathrm{cos}\theta \mathrm{sin}^{|n|+1}\theta `$ near the (positive) $`z`$-axis, with some function $`\overline{F}_3(r,\theta )`$. Hence, we find near the (positive) $`z`$-axis

$`\widehat{H}_1=\overline{F}_1\mathrm{cos}\theta \mathrm{sin}^{|n|}\theta ,`$ (17)

$`(1-\widehat{H}_2)=0,`$ (18)

$`\widehat{H}_3=\left[\overline{F}_3\mathrm{sin}^{|n|+1}\theta +\overline{F}_4\right]\mathrm{sin}\theta \mathrm{cos}\theta ,`$ (19)

$`(1-\widehat{H}_4)=\overline{F}_3\mathrm{cos}^2\theta \mathrm{sin}^{|n|+1}\theta -\overline{F}_4\mathrm{sin}^2\theta .`$ (20)

This form of the gauge field functions $`\widehat{H}_i`$ could have been obtained from Eqs. (4)-(8) by setting $`f(r)\equiv 0`$ and $`\stackrel{~}{F}_2=\mathrm{cos}^2\theta \stackrel{~}{F}_1`$. However, under the assumption that the functions $`\stackrel{~}{F}_i`$ are continuous everywhere, this would not have been correct. This can be seen as follows. The function $`\mathrm{\Gamma }`$ contains an even part and an odd part with respect to the transformation $`\mathrm{S}_{(z\to -z)}`$. As a consequence the function $`\widehat{H}_1`$ is no longer odd with respect to $`\mathrm{S}_{(z\to -z)}`$ and cannot be written in the form (17) with continuous functions $`\overline{F}_i`$. It is evident from Eqs. (13)-(14) that the functions $`\widehat{H}_3`$ and $`\widehat{H}_4`$ also have lost their antisymmetry, respectively symmetry, with respect to $`\mathrm{S}_{(z\to -z)}`$. Thus all the functions $`\widehat{H}_i`$ may take finite values on the $`\rho `$-axis. If we then insist to use the parameterization Eqs. (17-20) for the gauge transformed functions $`\widehat{H}_i`$, not only near the (positive) $`z`$-axis but everywhere, we must allow the functions $`\overline{F}_1`$ and $`\overline{F}_3`$ to contain the factor $`1/\mathrm{cos}\theta `$. Parameterizing the metric by

$$ds^2=-\mathrm{g}(r,\theta )dt^2+\frac{\mathrm{m}(r,\theta )}{\mathrm{g}(r,\theta )}(dr^2+r^2d\theta ^2)+\frac{\mathrm{l}(r,\theta )}{\mathrm{g}(r,\theta )}r^2\mathrm{sin}^2\theta d\phi ^2,$$ (21)

we substitute the gauge transformed ansatz Eq.
(2) together with (17)-(20) and (21) into the coupled Einstein, dilaton and Yang-Mills equations and find for the asymptotic EYMD solution with winding number $`n=2`$,

$`\overline{F}_1={\displaystyle \frac{\mathrm{C}_2}{r^2}}{\displaystyle \frac{1-\mathrm{cos}\theta }{\mathrm{cos}\theta \mathrm{sin}^2\theta }}+(\mathrm{\cdots }),`$ (22)

$`\overline{F}_3={\displaystyle \frac{\mathrm{C}_2}{r^2}}{\displaystyle \frac{2(1-\mathrm{cos}\theta )-\mathrm{cos}\theta \mathrm{sin}^2\theta }{4\mathrm{cos}\theta \mathrm{sin}^4\theta }}+(\mathrm{\cdots }),`$ (23)

$`\overline{F}_4={\displaystyle \frac{\mathrm{C}_1}{r}}+{\displaystyle \frac{\mathrm{C}_1}{4r^2}}(\overline{\mathrm{g}}_1-2\varphi _1)+(\mathrm{\cdots }),`$ (24)

for the gauge field functions and

$$\mathrm{\Phi }=\frac{\varphi _1}{r}+(\mathrm{\cdots }),\mathrm{g}=1+\frac{\overline{\mathrm{g}}_1}{r}+\frac{\overline{\mathrm{g}}_1^2}{2r^2}+(\mathrm{\cdots }),\mathrm{l}=1+\frac{\overline{\mathrm{l}}_2}{r^2}+(\mathrm{\cdots }),\mathrm{m}=1+\frac{\overline{\mathrm{l}}_2+\overline{\mathrm{m}}_2\mathrm{sin}^2\theta }{r^2}+(\mathrm{\cdots }),$$ (25)

for the dilaton function and the metric functions, where $`\mathrm{C}_1`$, $`\mathrm{C}_2`$, $`\varphi _1`$, $`\overline{\mathrm{g}}_1`$, $`\overline{\mathrm{l}}_2`$, $`\overline{\mathrm{m}}_2`$ are constants. The $`(\mathrm{\cdots })`$ indicate terms vanishing faster than $`1/r^2`$, which can not necessarily be expanded in powers of $`1/r`$. It is easy to see that $`\overline{F}_1`$ and $`\overline{F}_3`$ are finite on the positive $`z`$-axis and can be expressed as polynomials in $`\mathrm{sin}^2\theta `$ in the vicinity of the $`z`$-axis. Hence, the asymptotic solution is regular on the positive $`z`$-axis. On the $`\rho `$-axis the gauge field functions $`\widehat{H}_i`$ are finite, as expected. Furthermore, $`\widehat{H}_1`$ can be decomposed into an odd part and an even part with respect to $`\mathrm{S}_{(z\to -z)}`$, where the latter is a function of $`r`$ only. As a consequence all derivatives $`\frac{\partial ^{2k}\widehat{H}_1}{\partial \theta ^{2k}}`$ vanish on the $`\rho `$-axis. On the $`z`$-axis the functions $`f_{10}(r):=\overline{F}_1(r,\theta =0)`$ and $`f_{40}(r):=\overline{F}_4(r,\theta =0)`$ are not necessarily zero. Hence, this solution is not an embedded abelian solution and has to be classified as type III according to ref. . For winding number $`n=3,4`$ we have constructed asymptotic solutions, too, which possess all the correct symmetry and regularity properties. These simple solutions disprove the claim of Hannibal that only for embedded abelian gauge potentials asymptotic solutions with the correct symmetries can exist in EYMD and related theories. We have also analysed the solution with winding number $`n=2`$ in the Coulomb gauge $`rH_{1,r}-H_{2,\theta }=0`$, which was employed in the numerical construction of the solutions. In this gauge the gauge potential is not well defined on the $`z`$-axis. However, transforming the asymptotic solution obtained in the Coulomb gauge into the gauge $`H_2=1`$, we find the same form Eqs. (22-24). This shows that at infinity the gauge potential in the Coulomb gauge can be locally gauge transformed into a regular gauge potential and that the asymptotic gauge potentials in the different gauges correspond to the same asymptotic solution. Note that in the Coulomb gauge the functions $`H_1`$ and $`H_3`$ are antisymmetric, whereas the functions $`H_2`$ and $`H_4`$ are symmetric with respect to $`\mathrm{S}_{(z\to -z)}`$. In Ref.
Hannibal failed to obtain asymptotic non-abelian solutions such as the solution Eqs. (22)-(25). Clearly, he rejected solutions because the functions $`\widehat{H}_i`$ did not possess the symmetry property he erroneously expected. Furthermore, he tried to find the solutions in the form of power series in $`\mathrm{sin}\theta `$. However, the solution Eqs. (22)-(23) contains the function $`1/\mathrm{cos}\theta `$, which, considered as a power series in $`\mathrm{sin}\theta `$, has radius of convergence $`|\mathrm{sin}\theta |<1`$. Thus, the results at $`\theta =\pi /2`$ may not be reliable. Unfortunately, the presentation in is not consistent in various places, and it remains unclear exactly which calculations were carried out and whether additional sources of error appeared in the analysis. We close with some additional comments on ref. : It is assumed in that the asymptotic solutions can be obtained as a power series in $`1/r`$. However, if one wants to exclude the existence of all solutions (except embedded abelian solutions), then a proof that all solutions are analytic in the variable $`1/r`$ at $`1/r=0`$ would be necessary. Such a proof is not given in ref. . Indeed, we find that in the Coulomb gauge terms arise in higher order which cannot be expanded in a power series in $`1/r`$ . It is claimed in that there exists a regular, globally defined gauge potential for the solutions of ref. . However, the “proof” of existence of such a gauge potential is not complete, because no attention is paid to the regularity of the gauge potential at the origin . It can easily be seen that the gauge potential is not twice differentiable at the origin, if high powers of $`\mathrm{sin}\theta `$ arise which are not multiplied by sufficiently high powers of $`r`$. In the static axially symmetric solutions of ref. are classified as gauge equivalent to type II solutions. This is not correct. Defining type II solutions by $`f_{10}(r)=\widehat{H}_{1,\theta \theta }=0`$ on the $`z`$-axis (for $`n=2`$) , Hannibal then argues that $`f_{10}(r)`$ specifies a boundary condition, which can be chosen freely, and that therefore a non-zero $`\widehat{H}_{1,\theta \theta }`$ on the $`z`$-axis cannot be generated. However, only for a local solution can the function $`f_{10}(r)`$ be chosen freely, i.e. for any function $`f_{10}(r)`$ there exists a solution of the differential equations which is valid only in a region near the $`z`$-axis. For a global solution the function $`f_{10}(r)`$ has to be chosen such that the solution fulfills the correct boundary condition at the $`\rho `$-axis. If the global solution is unique, the function $`f_{10}(r)`$ is fixed (provided the gauge is fixed).
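A quick symbolic check of the finiteness claims above may be helpful. The following sketch (Python with sympy; not part of the original analysis) expands the angular factors of $`\overline{F}_1`$ and $`\overline{F}_3`$ from Eqs. (22)-(23) near $`\theta =0`$ and confirms that they are finite on the positive $`z`$-axis, with expansions containing only even powers of $`\theta `$:

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)

# Angular factors of F1_bar (Eq. 22) and F3_bar (Eq. 23)
f1 = (1 - sp.cos(theta)) / (sp.cos(theta) * sp.sin(theta)**2)
f3 = (2*(1 - sp.cos(theta)) - sp.cos(theta)*sp.sin(theta)**2) / \
     (4 * sp.cos(theta) * sp.sin(theta)**4)

print(sp.limit(f1, theta, 0))      # 1/2  -> finite on the positive z-axis
print(sp.limit(f3, theta, 0))      # 3/16 -> finite on the positive z-axis
print(sp.series(f1, theta, 0, 5))  # 1/2 + 3*theta**2/8 + ... (even powers only)
```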
# High Altitude test of RPCs for the Argo YBJ experiment ## 1 Introduction The aim of the ARGO-YBJ experiment is the study of cosmic rays, mainly $`\gamma `$-radiation, in an energy range down to about 100 GeV, by detecting small size air showers with a ground detector. This very low energy threshold, which lies below the upper limit of the next generation of satellite experiments, is achieved in two ways: 1) by operating the experiment at very high altitude, to better approach the level where low energy air showers reach their maximum development — the choice of the YangBaJing (YBJ) Cosmic Ray Laboratory (Tibet, China, $`30.11^{\circ }`$ N, $`90.53^{\circ }`$ E), 4300 m a.s.l., proved very appropriate; 2) by utilizing a full coverage detector, to maximize the number of detected particles for a small size shower. The choice of the detector is subject to the following requirements. The search for point sources requires the accurate reconstruction of the shower parameters, mainly the direction of the primary particle, in order to suppress the isotropic background. This can be obtained by a dense sampling of the arrival times of the shower front particles with nanosecond accuracy. Moreover the full coverage concept requires an extremely large active detector area, which is only achievable with a very reliable and low cost detector. Robustness is a further important requirement for a detector to be operated far away from the facilities available in ordinary laboratories. The use of Resistive Plate Chambers (RPCs) has been envisaged to meet these requirements. Indeed, RPCs offer noticeable advantages owing to low cost, large active area, excellent time resolution and the possibility of easy integration in large systems. The ARGO-YBJ detector consists of a single RPC layer of $`5000`$ $`m^2`$ with about $`95\%`$ coverage, surrounded by a ring of sampling stations which recover edge effects and increase the sampling area for showers initiated by $`>5`$ TeV primaries. The trigger and the DAQ systems are built following a two level architecture. The signals of a set of 12 contiguous RPCs, referred to as CLUSTER in the following, are managed by a Local Station. The information from each Local Station is collected and elaborated in the Central Station. According to this logic a CLUSTER represents the basic detection unit. A CLUSTER prototype of 15 RPCs, shown in fig. 1, has been put in operation in the YBJ Laboratory in order to check both the performance of RPCs operated in a high altitude laboratory and their capability of imaging a small portion of the shower front. In this paper the results concerning the performance of 2 mm gap, bakelite RPC detectors operated in streamer mode at an atmospheric depth of 606 $`g/cm^2`$ are described. Data collected by the carpet and results from their analysis will be presented elsewhere. ## 2 The experimental set up The detector, consisting of a single gap RPC layer, is installed inside a dedicated building at the YBJ laboratory. The set up, shown in fig. 1, is an array of $`3\times 5`$ chambers of area $`280\times 112cm^2`$ each, lying on the building floor and covering a total area of $`8.5\times 6.0m^2`$. The active area of $`46.2m^2`$, accounting for a dead area due to a $`7mm`$ frame closing the chamber edge, corresponds to a $`90.6\%`$ coverage. The RPCs, of 2 mm gas gap, are built with bakelite electrode plates of volume resistivity in the range ($`0.5÷1`$) $`\times 10^{12}\mathrm{\Omega }cm`$, according to the standard scheme reported in .
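As a quick cross-check of the quoted geometry, the numbers above can be reproduced with a few lines (a sketch only; it assumes the 7 mm dead frame runs around the full perimeter of each chamber):

```python
# Cross-check of the quoted 46.2 m^2 active area and 90.6% coverage.
n_chambers = 15                      # 3 x 5 array
chamber_w, chamber_h = 2.80, 1.12    # chamber dimensions in m
frame = 0.007                        # m, dead frame on each edge

active_per_chamber = (chamber_w - 2 * frame) * (chamber_h - 2 * frame)
active_area = n_chambers * active_per_chamber   # ~46.2 m^2
total_area = 8.5 * 6.0                          # m^2
coverage = active_area / total_area             # ~0.906

print(f"active area = {active_area:.1f} m^2, coverage = {coverage:.1%}")
```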
The RPC signals are picked up by means of aluminum strips 3.3 cm wide and 56 cm long, which are glued on a 0.2 mm thick film of plastic material (PET, Poly-Ethylene-Terephthalate) used as a robust support, which allows the strips to be produced by milling a full aluminum layer. The strips are embodied in a panel, consisting of a 4 mm thick polystyrene foam sheet sandwiched between the PET film and an aluminum foil used as a ground reference electrode. The detector cross section is given in fig. 2. A rigid polystyrene foam plate is used to avoid the direct contact of the RPCs with the concrete floor. The strip panel lies on top of the detector with the strips oriented in the direction of the detector short side, as shown in fig. 1. At the edge of the detector the strips are connected to the front end electronics and terminated with 50 $`\mathrm{\Omega }`$ resistors. The opposite end of the strips, at the center of the detector, is not terminated. The RPC bottom electrode plate is connected to a negative high voltage so that the strips, facing the grounded plate, pick up a negative signal. A grounded aluminum foil (see fig. 2) is used to shield the bottom face of the RPC and an extra PET foil, on top of the aluminum, is used as a further high voltage insulator. The front end electronics used in the present test is not the one envisaged for the final experiment, which will be described elsewhere, but an already existing 16 channel circuit developed for RPCs working in streamer mode. The circuit contains 16 discriminators with about 50 mV voltage threshold and gives the following output signals: * The Fast OR of the 16 discriminators, with the same input-to-output delay (10 ns) for all the channels, which is used for time measurements and trigger purposes in the present test. * The serial read out of each channel, which could be used for a strip by strip read out. This possibility however is beyond the purposes of the present test. The circuit is mounted on a $`50\times 15cm^2`$ G10 board which is fixed on top of the strip panel near the edge of the detector, as shown in fig. 1. The length of the board is approximately matched to the width of 16 strips, so that very short wires (a few cm) can be used for connecting each strip to the corresponding input electrode on the board. The 16 strips connected to the same front end board are logically organized in a PAD of $`56\times 56cm^2`$ area. Each RPC is therefore subdivided into 10 PADs which work as independent functional units. The PADs are the basic elements which define the space-time pattern of the shower; indeed, they give the position and the time of each detected hit. The fast OR signals of all 150 pads are sent through coaxial cables of the same length to the carpet central trigger and read out electronics. The trigger logic allows the selection of events with a pad multiplicity in excess of a given threshold. At each trigger occurrence the times of all the pads are read out by means of multihit TDCs of 1 ns time bin, operated in common STOP mode. Each TDC has 32 input channels and can store up to 16 events per channel. The multiple hit operation is particularly important in detecting the core of high energy showers, where several particles can fall on the same pad in a time interval of hundreds of nanoseconds. The trigger signal is used as the common STOP signal. For each event the trigger multiplicity, the set of all pads which produced the trigger and the times of all pads of the carpet are recorded.
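The PAD bookkeeping just described can be summarized in a short illustrative sketch (Python; the index and sign conventions below are assumptions for illustration, not the actual DAQ code):

```python
# 15 RPCs x 10 PADs = 150 pads; each PAD collects 16 strips.
N_RPC, PADS_PER_RPC, STRIPS_PER_PAD = 15, 10, 16

def pad_index(rpc: int, strip: int) -> int:
    """Global PAD number (0-149) for a strip (0-159) on a given RPC (0-14)."""
    return rpc * PADS_PER_RPC + strip // STRIPS_PER_PAD

def pad_time_ns(tdc_count: int, bin_ns: float = 1.0) -> float:
    """Common-STOP convention: larger TDC counts correspond to earlier hits,
    so the hit time relative to the STOP is minus the counted interval."""
    return -tdc_count * bin_ns

# An event record as described in the text: trigger multiplicity,
# the set of pads which produced the trigger, and the pad times.
event = {
    "multiplicity": 3,
    "trigger_pads": {12, 13, 27},
    "pad_times_ns": {12: -40.0, 13: -41.5, 27: -38.0},
}
```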
As the carpet consists of just a single layer detector, a direct measurement of the detection efficiency and time resolution requires the use of an auxiliary “telescope” which can clearly define a cosmic ray impinging on it. The set up was therefore completed with a small telescope consisting of 3 RPCs of $`50\times 50cm^2`$ area with 16 pick up strips 3 cm wide, connected to front end electronics boards similar to the ones used in the carpet. The 3 RPCs were stacked one on top of the other and the triple coincidence of their fast OR signals was used to define a cosmic ray crossing the telescope. The gas system consisted of a central mixing station using three mass flowmeters, which measured the gas composition with the required accuracy (better than 1$`\%`$ for all the components), and 5 parallel gas lines, each feeding 3 RPCs in series. The gas sharing among the 5 input lines was equalized using identical high impedance capillary pipes in series with each line and the regular gas flow was monitored by bubblers placed at the exit of each line. An open gas circuit was used, as only a modest amount of gas, about 15 l/h corresponding to 4 volume changes per day, was needed during about 2 months of carpet operation. Three gas components were used: Argon, iso-Butane $`C_4H_{10}`$ and TetraFluoroEthane $`C_2H_2F_4`$, which will be indicated in the following as Ar, i-But and TFE respectively. The High Voltage system consisted of five 10 kV supplies, each one feeding 3 RPCs in parallel. The operating voltage could be set to the desired value within 10 V accuracy and the operating current was monitored with a 1 $`\mu `$A sensitivity instrument. A further two channel HV supply with a 10 nA sensitivity current monitor was used to feed the auxiliary telescope. ## 3 Data taking and experimental results The peculiar working conditions of the mountain YBJ laboratory include not only a very low average pressure of about 600 mbar, corresponding to an atmospheric vertical depth of 606 $`g\,cm^{-2}`$, but also a temperature that can be particularly low in winter, even inside the laboratory. The measurements described in this paper were performed in the second half of February 1998, with an external temperature ranging between $`-20`$ and $`-5^{\circ }C`$, and in the first half of May, when the temperature was in the range $`-5`$ to $`+15^{\circ }C`$. The internal temperature was kept, by using some heaters, between $`+4`$ and $`+8^{\circ }C`$ in the first run and around $`16`$-$`18^{\circ }C`$ in the second. The laboratory temperature and pressure were monitored during all data taking. The RPCs of the test carpet were operated in streamer mode, as foreseen for the final experiment. This mode delivers large amplitude saturated signals, and is less sensitive than the avalanche or proportional mode to electromagnetic noise, to changes in the environmental conditions and to mechanical deformations of the detector. On the other hand the larger rate capability achievable in avalanche mode is not needed in a cosmic ray experiment. The first task to be carried out was the optimization of the gas mixture and the search for the detector operating point in the YBJ laboratory conditions. This was accomplished by means of the auxiliary telescope, before the start up of the carpet test. The efficiency of the RPC in the central position of the telescope (RPC2 in the following) was measured as the ratio of the number of triple coincidence events to the number of double coincidences of the other two RPCs.
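With this definition, the efficiency and its statistical error follow from simple counting; the sketch below (with hypothetical counts, not measured values) uses a binomial error estimate:

```python
import math

# Telescope efficiency: triple coincidences over the double coincidences
# of the two outer RPCs, with a simple binomial error.
def efficiency(n_triple: int, n_double: int):
    eff = n_triple / n_double
    err = math.sqrt(eff * (1.0 - eff) / n_double)
    return eff, err

eff, err = efficiency(n_triple=9000, n_double=10000)  # hypothetical counts
print(f"eff = {eff:.3f} +- {err:.3f}")                # 0.900 +- 0.003
```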
Three gas mixtures were tested, which used the same components, Ar, i-But and TFE, in different proportions: TFE/Ar/i-But = 45/45/10, 60/27/13 and 75/15/10. In the three cases the ratio Ar/TFE was changed to a large extent, leaving the i-But concentration relatively stable. TFE is a heavy gas with good quenching properties . An increase of the TFE concentration at the expense of the Ar concentration should therefore increase the primary ionization, thus compensating for the 40$`\%`$ reduction caused by the lower gas target pressure (600 mbar), and reduce the afterpulse probability. For each of the three gases a voltage scan was made for RPC2, leaving the other two RPCs at a fixed operating voltage, and the following measurements were made: RPC2 counting rate and current, double and triple coincidence rate. The detection efficiency $`vs`$ the operating voltage for the three gases is shown in fig. 3. The reduction of the Argon concentration in favor of TFE results in a clear increase of the operating voltage, as expected from the strong quenching action of TFE. The data shown in fig. 3 are consistent with an increase of 30-40 V in operating voltage for a 1$`\%`$ reduction of the Argon concentration in the mixture. In spite of the different operating voltages all three gases approach the same efficiency of about 90$`\%`$, which includes the inefficiency due to geometrical effects. A more systematic study of the plateau efficiency is presented below, in connection with the carpet test. Fig. 4 shows the RPC2 current and counting rate vs the operating voltage for the three gases. A small current, linearly increasing with the voltage, is measurable well below the point where the RPC starts to show a significant counting rate. We interpret this as a leak current not flowing through the RPC gas and not taking part in the detector working mechanism. The ratio of the operating current to the counting rate gives the charge per count delivered in the RPC gas, which is shown in fig. 5 as a function of the operating voltage for the three gases. Here the small term corresponding to the current leaks mentioned above is subtracted from the total current. The data presented in fig. 5 show that the higher the TFE fraction, the lower the charge delivered in the gas by a single streamer. Concerning the optimization of this parameter the following points should be noted. * The signal charge, in streamer mode operation, is in any case well above the achievable threshold of the front end electronics. This is particularly true for the final front end electronics that will be used for the experiment. Therefore a larger detector signal is not an advantage in this respect. * A lower operating current, on the contrary, is an advantage, even if in a cosmic ray experiment the currents are expected to be modest. * In a cosmic ray experiment, on the other hand, the analog measurement of the hit density, which is achievable either from amplitude measurements of the strip signals or by sampling the operating current in appropriate time intervals, is an interesting possibility to be exploited for studying the shower core at energies as high as about 100 TeV. Indeed, according to a Monte Carlo simulation of the final experiment, the digital read out of pads near the shower core is expected to saturate at about 15-20 TeV. In this respect a lower delivered charge extends the dynamic range achievable for the analog measurement. We decided therefore to operate the test carpet with the gas mixture corresponding to the highest fraction of TFE.
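The charge-per-count determination used for fig. 5 amounts to the following (a sketch with illustrative numbers; the leak current is taken from the linear behaviour measured below the efficiency rise):

```python
# Charge delivered per streamer: subtract the leak current from the
# total operating current, then divide by the counting rate.
def charge_per_count_pC(i_total_uA: float, i_leak_uA: float, rate_Hz: float) -> float:
    return (i_total_uA - i_leak_uA) * 1e-6 / rate_Hz * 1e12  # pC per count

# Illustrative numbers only (not measured values from this test):
print(charge_per_count_pC(i_total_uA=1.2, i_leak_uA=0.2, rate_Hz=10000.0))  # 100 pC
```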
The tests performed on the carpet were essentially the same as for the auxiliary telescope. Fig. 6 shows the operating efficiency for the ORed pads 2-3-7-8 of one RPC of the carpet. The efficiency was measured using cosmic ray signals defined by the triple coincidence of the RPCs of the auxiliary telescope, which was placed on top of the carpet and centered on the corner shared by four pads. The counting rate of the OR signal of the same pads, together with the RPC current, is reported in fig. 7 vs the operating voltage. The results for the gas with TFE=45$`\%`$ are also reported for comparison. A rather flat counting rate plateau is observed, corresponding to a rate of about 400 Hz for a single pad of area 56$`\times `$56 $`cm^2`$. The residual slope of the plateau is mostly due to afterpulses occurring after the end of the 250 ns shaped discriminated signals, which produce a double counting of the same cosmic ray track. The rate and efficiency curves rise in the same voltage interval, as expected. The time jitter distribution of the pad signals was obtained by measuring the delay of the fast OR signal with respect to RPC2 in the trigger telescope. This distribution is shown in fig. 8 for the four pads. The average of the standard deviations is 1.42 ns, corresponding to a resolution of $`1.42/\sqrt{2}\simeq 1.0`$ ns for the single RPC once we account for the fact that the distributions in fig. 8 show the combined jitter of two detectors of comparable resolution. In the detection of extensive air showers, however, the primary cosmic ray direction is measured by comparing the times of hits due to different particles of the shower. The space-time distribution of the shower hits allows a fit of the shower front, which can be assumed, to a good approximation, to be a plane. The time residual distribution of the individual shower particles with respect to the front is reported in fig. 9. The long tail of delayed hits is due to particles arriving well after the shower front. ## 4 Discussion of the results The use of RPCs for the detection of Extensive Air Showers in high altitude laboratories poses some basic questions that the present test contributes to answer: * how do the operating voltage and plateau efficiency scale with the pressure for streamer mode operation; * how does the detector time resolution compare with the intrinsic jitter of the shower front. To answer the first question a 2 mm gap RPC was operated at sea level with the same gas, TFE/Ar/i-But=75/15/10, used for the YBJ carpet. The detection efficiency vs operating voltage in fig. 6, compared to the operation at 600 mbar pressure in YBJ, shows an increase of about 2.5 kV in operating voltage. The effect of small changes of temperature T and pressure P on the operating voltage can be accounted for by rescaling the applied voltage $`V_a`$ according to the relationship $$V=V_a\frac{P_0}{P}\frac{T}{T_0}$$ where $`P_0`$ and $`T_0`$ are arbitrary standard values, e.g. 1010 mbar and 293 K respectively for a sea level laboratory. However, starting from the YBJ data, the above formula predicts an operating voltage at sea level which is considerably larger than the experimental one. A large change of pressure produces a proportional change in the gaseous target mass per unit surface, like a change of the gas gap size. The operating voltage as a function of the gap is studied in for 1.5, 2, and 3 mm gap RPCs, in the case of the binary gas mixture TFE/i-But=97/3. The result is shown in fig.
10, where the operating voltage in streamer mode is defined as that giving 50$`\%`$ streamer probability with respect to the plateau. The data, which refer to the same pressure of 1010 mbar and temperature of 293 K, show that the voltage does not scale proportionally to the gap, the electric field (voltage/gap) being larger for thinner gaps. Indeed the avalanche to streamer transition occurs when the gas amplification, $`e^{\alpha g}/\alpha g`$, exceeds a given threshold. The larger the gap, the smaller the $`\alpha `$ value, and therefore the smaller the electric field needed to reach the streamer threshold. A parabolic fit of the three experimental points, constrained to pass through zero, is also reported in fig. 10. The fitted curve, which refers to the binary gas, can be scaled to the YBJ gas, TFE/Ar/i-But=75/15/10 at $`20^{\circ }C`$ temperature, using the point at sea level (8.6 kV at 1010 mbar and $`32^{\circ }C`$, rescaled to $`20^{\circ }C`$ according to the above formula) and assuming that the ratio of the operating voltages for the two gases is the same for all gap sizes. The result is the lower curve in fig. 10, which represents the operating voltage vs gap for the YBJ gas and fits well the YBJ operating point, 6.12 kV, if we assume that a 2 mm gap at the YBJ pressure of 603 mbar is equivalent to a 1.2 mm gap at 1010 mbar. The above assumption is based on the fact that, in the ideal gas approximation, the mass per unit surface of the gaseous target, which fixes the operating voltage for each gas, is given by the parameter $`gap\times pressure/temperature`$. Fig. 6 also shows that the plateau efficiency measured at YBJ is 3-4$`\%`$ lower than at sea level. Although a lower efficiency is expected from the smaller number of primary clusters at the YBJ pressure, we attribute most of the difference to an underestimation of the YBJ efficiency. Indeed, at the YBJ altitude the ratio of the electromagnetic to the muon component of the cosmic radiation is about 4 times larger than at sea level. A spatial tracking with redefinition of the track downstream of the carpet would eliminate the contamination from soft particles, giving a more accurate and higher efficiency. On the other hand the lower efficiency could hardly be explained by the lower gas density. The number of primary clusters in the YBJ test, estimated at around 9, is the same as for gases, e.g. Ar/i-But/CF3Br=60/37/3, frequently used at sea level with efficiencies of 97-98$`\%`$ . The time residual distribution in fig. 9 shows a long tail due to delayed particles traveling well behind the shower front. The gaussian fit, disregarding this tail, gives a standard deviation of 1.6 ns, to be compared with the RPC intrinsic time resolution of 1.0 ns. Taking into account the additional uncertainties due to the propagation time of the signal traveling along a strip of 56 cm and to the impact point of the shower particle, which can be anywhere inside the PAD, we get a total RPC jitter of 1.3 ns. The residual jitter of the shower front, obtained by quadrature subtraction, can then be estimated as $`\sigma _{shower}=\sqrt{1.6^2-1.3^2}\simeq 0.9`$ ns. This is valid for high energy showers selected by the multiplicity trigger, as in the case reported in fig. 9. At lower energies the shower jitter increases gradually. The authors are indebted to G. Aielli (Università di Roma “Tor Vergata”) for editing the present paper.
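The two scaling relations used in this section, the pressure-temperature rescaling of the operating voltage and the quadrature decomposition of the time jitter, can be summarized in a short numerical sketch (illustrative only, not code from the experiment):

```python
import math

# Operating-voltage rescaling for small P, T changes (formula above).
def rescale_voltage(v_applied: float, p_mbar: float, t_K: float,
                    p0_mbar: float = 1010.0, t0_K: float = 293.0) -> float:
    return v_applied * (p0_mbar / p_mbar) * (t_K / t0_K)

# Quadrature decomposition of the time jitters quoted in the text:
single_rpc = 1.42 / math.sqrt(2)                     # ~1.0 ns intrinsic resolution
total_rpc = 1.3                                      # ns, including strip propagation
residual_shower = math.sqrt(1.6**2 - total_rpc**2)   # ~0.9 ns shower-front jitter
print(single_rpc, residual_shower)
```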
# The ROSAT All-Sky Survey Bright Source Catalogue ## 1 Introduction Sky surveys play a major role in observational astronomy, in particular in the era of multi-wavelength observations. Before the launch of the ROSAT satellite several all-sky X-ray catalogues existed, based on collimated counter surveys (see Table 1). One of the main scientific objectives of ROSAT was to conduct the first all-sky survey in X-rays with an imaging telescope, leading to a major increase in sensitivity and source location accuracy (Trümper 1983). Indeed, the ROSAT mirror system (Aschenbach 1988) and the Position Sensitive Proportional Counter (PSPC) (Pfeffermann et al. 1988) used for the survey were primarily optimized for detecting point sources in the all-sky survey. However, the wide angle and fast optics as well as the low detector background of the PSPC provided excellent conditions for studying extended sources like supernova remnants, clusters of galaxies, and the diffuse X-ray background. In this context the “unlimited field of view” of the all-sky survey was of great advantage. The ROSAT All-Sky Survey (RASS) was conducted in 1990/91, just after the two month switch-on and performance verification phase. The first processing of the ROSAT All-Sky Survey took place in 1991–1993, resulting in about 50,000 sources. This source list was not published, because during the analysis a large number of minor deficiencies and possibilities for improvement were discovered. Nevertheless, the data have been extensively used by the scientific groups at MPE and their collaborators for many scientific projects. Based on the experience with the first RASS processing a second analysis was performed in 1994–1995, resulting in 145,060 sources (detection likelihood ≥ 7). The present publication comprises only the brightest 18,811 of this sample. These data represent by far the most complete and sensitive X-ray sky survey ever published. It is a factor of 20 more sensitive than any previous all-sky survey in X-rays and contains about a factor of 4 more sources than all other X-ray catalogues, which sample only a few percent of the sky. In this paper we present the outcome of the RASS in terms of “point sources”. For 98.2% of the 18,811 sources the source extent radius is less than 5 arcmin, and for 99.6% the extent is smaller than 10 arcmin. 0.4% of the sources exhibit a larger source extent and show complex emission patterns (see Section 3.2) rather than a point-like spatial distribution. These sources have been included for completeness as they fulfill the selection criteria of the RASS-BSC (hereafter RBSC). Diffuse sky maps with angular resolution of 40 arcmin and diffuse sky maps in six colours of 12 arcmin resolution have been published elsewhere (Snowden et al. 1995, 1997). In Sect. 2 we summarize the basic properties of the ROSAT All-Sky Survey and of the Standard Analysis Software System (hereafter SASS). The selection strategy for including sources into the RBSC, the screening process, and the content of the RBSC are presented in Sect. 3. The results from the correlation of RBSC sources with various databases are given in Sect. 4. The electronic access to the RBSC is described in Sect. 5. ## 2 The ROSAT All-Sky Survey ### 2.1 Observation strategy and exposure map ROSAT has conducted the first All Sky Surveys in soft X-rays (0.1–2.4 keV; 100–5 Å) and the extreme ultraviolet (0.025–0.2 keV; 500–60 Å) bands using imaging telescopes (Trümper 1983, Aschenbach 1988, Wells et al. 1990, Kent et al.
1990). The satellite was launched on June 1, 1990 and saw first light on June 16, 1990 (Trümper et al. 1991). The following 6 week calibration and verification phase already included a small fraction of the sky survey (see Table 2). The main part of the survey began on July 30, 1990 and lasted until January 25, 1991. A strip of the sky which remained uncovered because of operational problems in January was filled in during February and August 1991 (see Table 2). The data obtained until 1991 form the basis of the present analysis. The total survey exposure time amounts to $`1.031\times 10^7`$ s or 119.36 days. The basic survey strategy of ROSAT was to scan the sky in great circles whose planes were oriented roughly perpendicular to the solar direction. This resulted in an exposure time varying between about 400 s and 40,000 s at the ecliptic equator and poles, respectively. During the passages through the auroral zones and the South Atlantic Anomaly the PSPC had to be switched off, leading to a decrease of exposure over parts of the sky (see Fig. 1). The sky coverage as a function of the exposure time is displayed in Fig. 2. For exposure times larger than 50 seconds the sky coverage is 99.7% for the observations until 1991. ### 2.2 SASS processing The first analysis of the all-sky survey data was performed for strips of $`2\mathrm{°}\times 360\mathrm{°}`$ containing the data taken during two days. These strips were analysed using various source detection algorithms, comprising two sliding window techniques (differing in how the background was determined) and a maximum-likelihood method. A list of X-ray sources (RASS-I) was produced which included information about sky position and source properties, such as count-rate, hardness ratios, extent, and source detection likelihoods. The main aim of this analysis was to supply almost immediate information about the X-ray sources and to allow a fast quality check of the survey performance. The RBSC presented in this paper is based on the so-called RASS-II processing, which is described below in Sect. 2.2.1–2.2.3. #### 2.2.1 Advantages of the RASS-II processing The main differences between the RASS-II data processing and the RASS-I processing are as follows: (i) the photons were not collected in strips but were merged in 1,378 sky-fields of size $`6.4\mathrm{°}\times 6.4\mathrm{°}`$, so that full advantage was taken of the increasing exposure towards the ecliptic poles; (ii) neighbouring fields overlapped by at least 0.23 degrees, to ensure detection of sources at the field boundaries, which posed a problem in the first processing; (iii) a new aspect solution reduced the number of sources with erroneous position and morphology; (iv) the calculation of the spline-fitted background map was improved, resulting in better determined count-rates; (v) the candidate list for the maximum-likelihood analysis (see Sect. 2.2.2) was enlarged by lowering the threshold values for the two preceding sliding window source detection algorithms, and by changing the acceptance criteria to allow very soft and very hard sources to be included; and (vi) photons obtained with poor aspect solutions were no longer accepted. #### 2.2.2 Source detection algorithm The source detection algorithms of the SASS processing can be divided into 6 different steps: 1. The local-detect method The local-detect algorithm is based on a sliding window technique. It was already successfully used for the analysis of EINSTEIN data and has been modified for ROSAT.
A window of $`3\times 3`$ pixels is moved across the binned photon images. These images are produced by binning the data into 512 by 512 pixel images with a pixel size of 45 arcsec for three energy bands (broad: Pulse Height Amplitude (PHA) channels 11–235 (0.1–2.4 keV), soft: channels 11–41 (0.1–0.4 keV), hard: channels 52–201 (0.5–2.0 keV)). The contents of the pixels inside the detection cell are added and compared with the local background taken from the 16 pixels surrounding the $`3\times 3`$ pixel detection window. In order to detect extended sources as well, the size of the detection cell is increased systematically, keeping the ratio of the areas of the background and source windows fixed at 16/9. 2. The background map Using the source list produced in three energy bands by the local-detect method, circular regions are cut out around each source position. The radius of the circle depends on the detection cell size. The resulting “swiss-cheese” images are fitted by a two-dimensional spline function to fill the holes and to generate three energy dependent background maps. In Fig. 3 we show the background maps in the soft and hard energy bands in galactic coordinates. 3. The map-detect algorithm The map-detect algorithm produces a second source list by repeating the sliding window search, using a $`3\times 3`$ pixel window and the spline fit to the background. Again sources are searched for in three energy bands and with varying cell size. 4. Merging of source lists The source lists from the local- and map-detection algorithms are merged and are further used as input lists to steps 5 and 6. 5. Determination of source extraction radius A preliminary extent of the source counts is derived from the radial distribution of counts in annuli centered on the source position. This extent, with a minimum fixed at 300 arcsec, is taken as the extraction radius for the selection of the photons used in the subsequent maximum-likelihood detection algorithm. 6. The maximum-likelihood method The merged source lists are used as input to the maximum-likelihood method (Cruddace et al. 1987). In contrast to the previously described detection algorithms this method takes into account the position of each individual photon. This allows a proper weighting of each photon with the instrument point spread function, which is a strong function of off-axis angle. The high-resolution photons in the center of the PSPC are weighted higher than the off-axis photons. The maximum-likelihood method provides a source position and existence likelihood in the broad band. With this position fixed, the detection likelihood in each of the 4 energy bands A (PHA channels 11–41), B (52–201), C (52–90), D (91–201) is calculated. Vignetting is taken into account for each photon using an analytic fit, leading to a mean vignetting factor for each source. Source extent and its likelihood are derived using just the broad band data, assuming that the point response function and the surface brightness are 2-D Gaussian functions, that they are independent of photon energy, and that the background is uniform in annular rings concentric around the optical axis. For strong sources various techniques are applied in the SASS to quantify the likelihood of time variability and to perform spectral fits. This information is not included in the present version of the RBSC. #### 2.2.3 Parameters derived from the RASS-II processing In the following section a few basic source parameters from the RASS-II processing are described.
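Before turning to those parameters, the core of the sliding-window test in steps 1 and 3 above can be sketched as follows (a simplified illustration, not the SASS implementation, which also varies the cell size and uses proper Poisson statistics):

```python
import numpy as np

def sliding_window_snr(image: np.ndarray) -> np.ndarray:
    """Sum the counts in a 3x3 detection cell and compare with the
    background estimated from the 16 surrounding pixels, scaled by
    the cell-to-ring area ratio 9/16."""
    ny, nx = image.shape
    snr = np.zeros_like(image, dtype=float)
    for y in range(2, ny - 2):
        for x in range(2, nx - 2):
            cell = image[y-1:y+2, x-1:x+2].sum()
            ring = image[y-2:y+3, x-2:x+3].sum() - cell
            bkg = ring * 9.0 / 16.0        # background expected in the cell
            snr[y, x] = (cell - bkg) / np.sqrt(max(bkg, 1.0))
    return snr
```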
A complete description of the derived source parameters is given at http://wave.xray.mpe.mpg.de/rosat/documentation/productguide and a description of the catalogue entries of the RBSC is available at http://wave.xray.mpe.mpg.de/rosat/catalogues/rass-bsc. A source count-rate corrected for vignetting is given in the broad band. Two hardness ratios HR1 and HR2 are calculated, which represent X-ray colours. From the source counts in bands A and B, HR1 is given by: HR1=(B–A)/(B+A). HR2 is determined from the source counts in bands C and D by: HR2=(D–C)/(D+C). Since background subtraction is involved, the source counts in some bands may be negative. These negative counts have been set to zero, so that HR1 or HR2 becomes –1 or +1. Note that HR2 is a hardness ratio constructed in the hard region, since bands C and D together contain the same channels as the hard band B. Thus HR1 near –1 and HR2 near +1 is no contradiction. Hardness ratio errors greater than 9.99 have been set to 9.99. Each source has been assigned a 'priority' parameter, the leftmost 6 characters of which denote the detection history before the maximum-likelihood algorithm was applied. These are: 1=M–broad, 2=L–broad, 3=M–hard, 4=L–hard, 5=M–soft, 6=L–soft. Here M and L stand for the map-detect algorithm and the local-detect method, respectively. Broad, hard and soft refer to the energy bands defined above. A flag is set to 0 for no detection or 1 for detection. The source extent (ext in the RBSC) is defined as the excess above the width of the point spread function, given in arcsec. In addition, the likelihood (extl) of the source extent and the extraction radius in arcsec for a source (extr), as used in the maximum-likelihood method, are provided. ## 3 The ROSAT Bright Source Catalogue ### 3.1 Selection criteria The total number of sources found in the RASS-II is 145,060 (detection likelihood ≥ 7). From this database the RBSC was selected according to the following criteria: (i) the detection likelihood is ≥ 15; (ii) the number of source photons is ≥ 15; and (iii) the source count-rate in the (0.1–2.4 keV) energy band is ≥ 0.05 $`\mathrm{counts}\,\mathrm{s}^{-1}`$, resulting in 23,394 sources. These sources underwent an intensive screening process, which is described in the following section. ### 3.2 Screening process In order to ensure a high quality of the catalogue, the images of all sources which fulfill the above mentioned criteria were individually inspected. An automatic as well as a visual screening procedure was applied to all 1,378 sky fields. The automatic procedure searched for sources with overlapping extraction radii, as the count-rate determination can be affected in such cases. These sources were colour-coded in the subsequent visual inspection process. The visual inspection process is based on ROSAT All-Sky Survey images in the broad, soft and hard energy bands. The SASS position for each RBSC source as well as the extraction radius were marked in the various images. This enables the identification of regions on the sky where the detection algorithm had split sources into multiple detections, and verifies that the source extraction radius and the source position are correct. In addition, sources which were missed by the detection algorithm can be found. During the visual screening process source parameters from the SASS could be checked interactively, using software tools from the Extended Scientific Analysis System (EXSAS, see Zimmermann et al. 1994).
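The automatic search for overlapping extraction radii can be sketched as follows (illustrative Python; a plain pairwise loop stands in for whatever spatial indexing the actual screening software used):

```python
import numpy as np

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec; inputs in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_d = (np.sin(dec1) * np.sin(dec2)
             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0))) * 3600.0

def overlapping_pairs(sources):
    """sources: list of (ra_deg, dec_deg, extr_arcsec).
    Returns index pairs whose separation is below the sum of the
    extraction radii, i.e. candidates for the 'nearby' flag."""
    pairs = []
    for i in range(len(sources)):
        for j in range(i + 1, len(sources)):
            ra1, d1, r1 = sources[i]
            ra2, d2, r2 = sources[j]
            if ang_sep_arcsec(ra1, d1, ra2, d2) < r1 + r2:
                pairs.append((i, j))
    return pairs
```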
The reliability of the screening process was verified in different ways. 100 of the 1,378 sky fields were used as training sets for the screening process and were analysed by more than one person, to minimize the deviations in the flag setting. The results of the flag setting were again visually inspected. About 16% of the 23,394 sources were found to be spurious detections (mainly in large extended emission regions like the Vela supernova remnant and the Cygnus Loop), which have been removed from the final source list. The remaining 18,811 sources are included in the present version of the RBSC. For 94% of these sources the SASS parameters were confirmed, and the remaining 6% of the sources have been flagged. The source flags applied to RBSC sources are defined in the following paragraphs (see Fig. 4 for examples of flagged RBSC sources). (i) nearby flag This flag is set when the distance between two sources is less than the sum of the individual extraction radii. The count-rate might be wrong in such cases. The nearby flag is given to both sources when their count-rate ratio is less than a factor of 5. If one source exhibits a count-rate which is at least 5 times higher than that of the weaker source, the nearby flag is given only to the weaker source. In such cases, the count-rate of the brighter source is not affected significantly. The nearby flag was given to 588 sources. (ii) position error Whenever the source position is obviously not centered on the source extraction cell, the position flag is applied. We did not correct the positions, as there may be several different reasons for this problem (e.g. asymmetric elongated emission patterns, multiple emission maxima in the detection cell). The position error flag was applied to 317 sources and a broad band image is provided for each one. (iii) source extent larger than the source cell size extraction radius The standard analysis software fails in some cases to find the correct source extent, with the result that the count-rates are usually underestimated. All of these marked sources underwent a post-processing step to quantify whether the source extent is indeed larger than the value found by the standard analysis software. The number of sources marked with the source extent flag was 225. A broad band image is available for inspection and the new extraction radius is indicated by a white circle. (iv) complex emission patterns The SASS count-rate as well as the source position may be uncertain for sources which show complex emission patterns. 177 sources were flagged in the visual process. The flag serves as a warning that the SASS count-rate and position may be uncertain. A broad band image is available for inspection. (v) sources missed by the detection algorithm In the visual inspection process sources were found which had been missed by the standard analysis software system. The number of such sources included in the RBSC is 49. Their main parameters, count-rate, exposure time, and position were determined in an interactive process using EXSAS tools. ### 3.3 Statistical properties #### 3.3.1 Sky- and count-rate distributions In Fig. 5 we present the sky distribution of all RBSC sources in galactic coordinates. The size of the symbols scales with the logarithm of the count-rate and the colours represent 5 intervals of the hardness ratio HR1. The distribution of RBSC sources shows the clustering of the bright ($`>1.3`$ $`\mathrm{counts}\,\mathrm{s}^{-1}`$) hard X-ray sources in the galactic plane, well known from the UHURU and HEAO-1 sky surveys.
At fainter count-rates ($`\le 1.3`$ $`\mathrm{counts}\,\mathrm{s}^{-1}`$) the source distribution is more uniform. This is illustrated in Fig. 6, where we compare the cumulative number count distributions for sources in the galactic plane ($`|\mathrm{b}|<20\mathrm{°}`$) and outside the galactic plane ($`|\mathrm{b}|\ge 20\mathrm{°}`$). A linear fit to the distribution of all sources outside the galactic plane (thick line in Fig. 6) results in a slope of –1.30 $`\pm `$ 0.03. The faint line in Fig. 6 represents the count-rate distribution for the galactic plane population of RBSC sources. The histogram shows a break at a count-rate of about 1.3 $`\mathrm{counts}\,\mathrm{s}^{-1}`$. A linear fit to the distribution above this break-point gives a slope of –1.20 $`\pm `$ 0.04; the slope below the break-point is –0.72 $`\pm `$ 0.06. The flattening of the log N–log (count-rate) distribution of the galactic plane population at the bright end is due to the disk population of bright X-ray sources (see Fig. 5). The effect of interstellar photoelectric absorption is demonstrated in Fig. 5 by the fact that sources near the galactic plane exhibit higher values of the hardness ratio HR1 (blue symbols) than sources outside the galactic plane. Although both galactic and extragalactic objects show a large spread in their spectral energy distribution in the ROSAT band (e.g. when simple power-law models were fit to the ROSAT spectra of broad and narrow line Seyfert 1 galaxies, the photon indices ranged between about 2 and 5), the dominant effect here is probably the larger amount of absorption within the galactic plane. #### 3.3.2 Exposure time, hardness ratios, detection likelihood and source extent distributions In Fig. 7 we present the distributions of some source parameters of the RBSC. For most of the sources (see the upper left diagram) the exposure time is of the order of a few hundred seconds. The HR1 and HR2 distributions of the RBSC sources, binned in intervals of 0.1, are shown in the upper middle and right diagrams, respectively. This gives a more detailed representation of the HR1 distribution compared to Fig. 5, where only 5 bins were used. The distribution of the source detection likelihood is shown in the lower left diagram. The lower value of 15 is set by the source selection criterion for RBSC sources. The last two diagrams show the distribution of the source extent and the extent-likelihood as a function of extent. A more detailed discussion of the RBSC source parameter distributions for different object classes can be found in Sect. 4. #### 3.3.3 Positional accuracy The RBSC sources were correlated with the TYCHO catalogue (Høg et al. 1998) to assess the positional accuracy. As TYCHO contains only stars, this correlation gives the positional accuracy of point-like sources. Figure 8 shows the result from the correlation of the RBSC with the TYCHO catalogue entries for a search radius of up to 120 arcsec. The comparison shows that 68% (90%) of the RBSC sources are found within 13 arcsec (25 arcsec) of the optical position. #### 3.3.4 Temporal variability of RBSC sources We have investigated the temporal variability of RBSC sources on time scales of months to years by comparing the ROSAT All-Sky Survey observations with public pointed ROSAT PSPC observations (see also Voges & Boller 1998). The comparison was done with a search radius of 60 arcsec around the RBSC source position. The resulting number of RBSC sources which have counterparts in ROSAT public pointed observations is 2,611.
Figure 9 shows the ROSAT survey count-rate versus the pointing count-rate for these sources. Sources showing a factor of variability above 5 (109 sources) were visually inspected, similarly to the screening process performed for the RBSC sources described in Section 3.2, to ensure the reliability of the source existence and of the source count-rate. 20 RBSC sources exhibit a factor of variability between 10 and 100; 5 RBSC sources show a factor of variability above 100. There is an excess of sources with factors of variability above 10 which are brighter during the RASS observations. This is most probably due to the fact that during the RASS observations the count-rate threshold for source detection is on average higher than in ROSAT pointed observations. As a result, the RBSC sources shown in Fig. 9 are biased towards sources with extreme variability. A comprehensive variability analysis of ROSAT sources will be presented elsewhere. #### 3.3.5 Flux determination and log N–log S distributions In order to facilitate the use of the catalogue for statistical studies it may be useful to quote not only count-rates but photon fluxes. To convert count-rates into fluxes in the 0.1–2.4 keV energy range we have used two different models: Model 1 assumes a power law $`\mathrm{E}^{-\mathrm{\Gamma }+1}\mathrm{dE}`$ and may be useful for AGN and clusters of galaxies. We use a fixed photon index of $`\mathrm{\Gamma }=2.3`$, which is the typical value derived from ROSAT observations of extragalactic objects (see Hasinger et al. 1991, Walter & Fink 1993), and an absorbing column density fixed at the galactic value along the line of sight (Stark et al. 1992). These fluxes, corrected for galactic absorption, are called flux1. Model 2 is based on an empirical conversion between count-rates and fluxes following Schmitt et al. (1995), originally developed to obtain flux values for stars: $$\mathrm{flux2}=(5.3\,\mathrm{HR1}+8.31)\times 10^{-12}\times \mathrm{count}\text{-}\mathrm{rate}/(\mathrm{counts}\,\mathrm{s}^{-1})\;[\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$$ Both flux values are listed in the electronic form of our catalogue. For a statistical study we use both fluxes for three types of objects: stars from the TYCHO catalogue, clusters of galaxies (ACO) from the compilation of Abell et al. (1989), and AGN listed in the Veron catalogue. We have further subdivided the samples into two categories; A: point sources having an extent-likelihood value of zero; B: sources with an extent-likelihood $`>`$ 0. In Fig. 10 we compare the two different flux determinations for the three object classes mentioned above which fall into category A. For TYCHO stars we obtain the largest dispersion between the two methods (left plot in the upper panel). The differences in fluxes are most likely due to the correction for galactic absorption. The first method tends to overestimate the flux, as the galactic hydrogen column density is taken along the full line of sight. The true value is lower than the value taken in the computation of flux1. For stars the flux determination with method 2 is therefore more reliable than method 1. A much smaller dispersion is found when comparing flux1 and flux2 for ACO and AGN. This is probably explained by the fact that both source populations are detected at high galactic latitudes, where the amount of galactic absorption is considerably smaller than in the galactic plane. In Fig. 11 we compare the two different flux methods for the three object classes which fall into category B.
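For concreteness, the model-2 conversion and the hardness-ratio conventions of Sect. 2.2.3 can be sketched as follows (illustrative code, not the SASS implementation):

```python
def hardness_ratio(soft: float, hard: float) -> float:
    """HR = (hard - soft)/(hard + soft); negative net counts are set to
    zero, so the ratio saturates at -1 or +1 (Sect. 2.2.3)."""
    soft, hard = max(soft, 0.0), max(hard, 0.0)
    if soft + hard == 0.0:
        return 0.0  # undefined; not expected for detected sources
    return (hard - soft) / (hard + soft)

def flux2_cgs(count_rate: float, hr1: float) -> float:
    """Empirical model-2 conversion (Schmitt et al. 1995):
    flux in erg cm^-2 s^-1 for a count-rate in counts s^-1."""
    return (5.3 * hr1 + 8.31) * 1e-12 * count_rate

# Example: a 0.1 counts/s source with HR1 = 0
print(flux2_cgs(0.1, 0.0))   # ~8.3e-13 erg cm^-2 s^-1
```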
Again, for TYCHO stars we obtain the largest dispersion between the two methods (left plot in the upper panel). The differences in fluxes are again most likely due to the correction for galactic absorption. As in Fig. 10, a smaller dispersion is found for ACO and AGN objects. The log N–log S distributions for the three object classes of category A are shown in the middle panel (using flux1 values) and in the lower panel (using flux2 values) of Fig. 10. We want to stress that we did not apply any corrections for the varying sensitivity of the survey to the log N–log S distributions. To determine the slope, a linear fit was made to the distribution, starting at the turn-over point of the distribution and ending where the cumulative number count dropped below 15. For ACO and AGN we derive slopes of the log N–log S distribution close to –1.5, the expected slope for a Euclidean source distribution, for both flux determination methods. For stars the log N–log S distribution is unreliable for flux1, as discussed before. The flux2 distribution for stars (lower panel) is very steep for category A. We have no good explanation for this steep slope; this requires further investigation. The log N–log S distributions for the three object classes of category B are shown in the middle (using flux1 values) and lower panels (using flux2 values) of Fig. 11. The log N–log S distributions and their slopes for ACO and AGN objects are comparable with each other for both object types and both flux determination methods. However, there is a general difference in the slopes for category A and category B sources. For example, the slope for ACO is –1.51 in category A (extl = 0) and –1.14 in category B (extl $`>`$ 0). There are various possible explanations for the flatter log N–log S distribution, such as: a) the count-rate determination for extended sources is underestimated by the SASS; this effect is largest for the fainter X-ray sources; b) the source detection probability for faint extended sources decreases more rapidly than for point sources; c) the probability of assigning an extent likelihood value to a detected source decreases with decreasing source count-rate; this effect is stronger for faint extended sources than for point-like sources. Cases a), b) and c) are well-known software deficiencies in SASS. Various methods have been developed by Böhringer et al. (1998) (growth curve method), DeGrandi et al. (1997) (steepness ratio technique), Ebeling et al. (1993) (Voronoi tesselation and percolation method), and Wiedenmann et al. (1997) (scaling index method) to attack this problem, in particular for the study of clusters of galaxies. ## 4 Correlation with existing data bases We have performed a cross-correlation of the RBSC with various catalogues. These catalogues include public data bases like the NED or SIMBAD and the latest versions of published catalogues or catalogues in print, as well as lists which were made available to us by private communications from the following alphabetically listed authors: Alcala et al. (1995, 1996, 1998) (T-Tauri stars), Appenzeller et al. (1998) (optical identifications of northern RASS sources), Bade et al. (1998) (The Hamburg/RASS Catalogue of optical identifications), Berghöfer et al. (1996) (OB stars), Beuermann et al. (1999) (identification of soft high galactic latitude RASS X-ray sources), Boller et al. (1998) (IRAS galaxies), Buckley et al. (1995), Burwitz et al.
(1996a, 1996b) (individual stellar and cataclysmic variable candidates), Covino et al. (1997), Danner (1998) (stellar candidates in star forming regions), Fleming (1998), Fleming et al. (1995, 1996) (EINSTEIN extended medium sensitivity survey as well as white dwarf and M dwarf detections), Gregory & Condon (1991), Gregory et al. (1996) (radio sources), Haberl et al. (1994), Haberl & Motch (1995) (intermediate polars), Hakala et al. (1993) (cataclysmic variable sources), Harris et al. (1994) (EINSTEIN catalogue of IPC sources), Schwope et al. (1999) (identifications of RBSC sources), Hoffleit & Warren (1991) (WFC Bright source catalogue), Hünsch et al. (1998a, 1998b, 1999) (bright and nearby giant, subgiant and main-sequence stars), Kock (1998), Kock et al. (1996), Krautter et al. (1997), Kunkel (1997) (T-Tauri stars), Laurent-Muehleisen et al. (1998) (spectroscopic identification of AGN in the RASS-Green Bank catalogue), Law-Green et al. (1995), Magazzu et al. (1997) (T-Tauri stars), Mason et al. (1992) (white dwarfs), Metanomski et al. (1998) (photometry of F, G and K stars in the RASS), Motch et al. (1996, 1997a, 1997b, 1998) (Galactic plane survey), Nass et al. (1996) (Hamburg catalogue of bright source identifications), Neuhäuser et al. (1995, 1997) (T-Tauri stars), Perlman et al. (1996) (BL Lacertae objects), Romer et al. (1994) (clusters of galaxies), Schmitt (1997) (A, F and G stars in the solar vicinity with distances less than 13 pc), Schmitt et al. (1995) (K and M stars in the solar vicinity with distances less than 7 pc), Staubert et al. (1994) (cataclysmic variable sources), Stern et al. (1995) (RASS identifications in the Hyades cluster), Thomas et al. (1996, 1998) (optical identification of RASS sources), Wagner et al. (in preparation) (BL Lacertae objects), Wei et al. (in preparation) (quasars), Wichmann et al. (1996, 1997) (T-Tauri stars), and Xie et al. (1997) (active galactic nuclei). These catalogues result in significant measure from follow-up optical observations of ROSAT sources. In these many campaigns extensive use was made of the COSMOS (MacGillivray and Stobie (1985); Yentis et al. (1992)) and APM (McMahon & Irwin (1992)) digitised sky surveys. Table 3 is a compilation of all catalogues used. In the identification process discussed in this section we use a search radius of 90 arcsec around the RBSC position. For 90% of all RBSC sources at least one counterpart has been found in at least one of the catalogues (not taking into account the ROSATP3, ROSAT-WGA, and ROSHRI pointing catalogues), whereas 10% have no catalogue entries. We have divided the RBSC sources with catalogue counterparts into two subclasses: (i) unique entries and (ii) multiple entries (see Fig. 12). Unique entries refer to RBSC sources which have only one catalogue entry in the various catalogues, or which have a unique identification in the private catalogues. The number of RBSC sources which have unique entries is 7,117. Only these sources are used in the following to derive observational properties, like the ratio of optical to X-ray flux. For 9,900 RBSC sources more than one counterpart is detected within the 90 arcsec search radius. RBSC sources with multiple entries are not included in the discussion below. From the 7,117 unique entries 42% are classified as extragalactic objects and 58% are of galactic origin. The extragalactic objects are subdivided into galaxies, including active and non-active galaxies, and clusters of galaxies.
2,553 RBSC sources are identified as galaxies and 413 as galaxy clusters. The ratio of X-ray to optical flux is a useful discriminant of the nature of the X-ray source, in particular for discriminating between stars and extragalactic objects. Figure 13 shows a correlation of the flux ratio with the optical magnitude for a sample of RBSC sources already identified as stars, AGN, or clusters of galaxies. The gap between stars and AGN/ACO is partly an artifact, as star catalogues become incomplete at $`m_v>12`$. In Fig. 14 we compare results of the analysis of RBSC sources for TYCHO stars, clusters of galaxies and active galactic nuclei. The distribution of ACO objects shows a strong increase towards larger HR1 values, i.e. the majority of ACO objects are much harder than stars and AGN. The AGN show a flat distribution in the range –0.5 to +1.0. Below HR1 = –0.5 the distribution drops quite rapidly, due to galactic absorption and to an intrinsic dispersion in the slopes of the X-ray continua. The HR1 distribution for stars peaks at HR1 = 0. Most of the stars are found in the range –0.5 to +0.5. The histograms of the hardness ratio HR2 confirm the nature of ACO clusters as “hard” sources. The distributions for stars and AGN are softer and similar to each other. In the distribution of HR2 versus HR1, ACO clusters dominate the upper right region; AGN are found mostly in the central part, with HR1 $`>`$ –0.5 and –0.5 $`<`$ HR2 $`<`$ +0.5. Stars occupy a slightly more extended region than AGN. The scatter plot of $`\mathrm{f}_\mathrm{X}/\mathrm{f}_{\mathrm{opt}}`$ versus HR1 shows that the locus of stars is well separated from AGN and ACO clusters. Stars are primarily found in the lower portion of the plot. ACO clusters occupy only the upper right part of the figure. The first histogram in the lower panel of Fig. 14 displays the angular separation of RBSC positions from the optical positions. For stars and AGN most X-ray sources are found within 30 arcsec; the distribution for clusters of galaxies is much broader, as one would expect from the X-ray source extent alone, which is shown in the next histogram. Abell et al. (1989) determined the cluster centers visually, quoting typical standard deviations of $`\pm `$2'–3' for the coordinates, with the positional uncertainty depending on the compactness of the cluster. The optically determined centers do not necessarily follow the gas distribution and thus the gravitational potential well, which is the origin of the X-ray emission. Ebeling et al. (1993) found that the optically determined positions had a mean deviation from the RASS position of 3' for rich clusters and 7' for poor clusters. In some exceptional cases angular separations of up to 15' were found. The last two scatter plots of Fig. 14 exhibit the extent likelihood and the count-rate versus extent. The populations of stars and AGN are well separated from those of ACO clusters. Figure 15 gives the celestial distribution of RBSC sources which have no counterparts in the catalogues listed in Table 3, when a search radius of 90 arcsec is used. It is obvious that certain regions of the sky have been intensively studied at other wavelengths (empty regions in Fig. 15). In addition, by comparing Fig. 15 with Fig. 5, it is evident that almost all of the brightest RBSC sources have already been identified.
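The counterpart bookkeeping described in this section reduces to counting matches within the 90 arcsec radius; a minimal sketch (reusing the ang_sep_arcsec helper from the screening sketch in Sect. 3.2, with positions in degrees) is:

```python
def match_counts(rbsc, catalogue, radius_arcsec=90.0):
    """Classify RBSC positions as unique (one counterpart), multiple,
    or empty (no counterpart) against a list of catalogue positions."""
    unique, multiple, empty = 0, 0, 0
    for ra, dec in rbsc:
        n = sum(1 for cra, cdec in catalogue
                if ang_sep_arcsec(ra, dec, cra, cdec) < radius_arcsec)
        if n == 0:
            empty += 1
        elif n == 1:
            unique += 1
        else:
            multiple += 1
    return unique, multiple, empty
```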
The list of possible identification candidates generated from cross-correlating the RBSC sources with various catalogues is intended to assist in determining the nature of the X-ray sources. No attempt was made to remove duplicate entries from different source catalogues, and in some cases contradictory identification candidates may be given. With the help of X-ray parameters, such as the hardness ratios, the source extent and the $`\mathrm{f}_\mathrm{X}/\mathrm{f}_{\mathrm{opt}}`$ ratio, a decision can be made as to which entry is the plausible identification candidate (see also Motch et al. 1998 and Beuermann et al. 1999).

## 5 Electronic archive

All relevant information about access to the RBSC is available at http://wave.xray.mpe.mpg.de/rosat/catalogues/rass-bsc. The full catalogues (RBSC and identification catalogue) and the descriptions of the catalogue contents can be retrieved as ASCII files via the WWW or via anonymous ftp. The RBSC presents the X-ray data. Subsets of the catalogue sources may be retrieved via the ROSAT Source Browser, also available at the above mentioned address. An example of a search inquiry and its output are explained in Figs. 16 – 17. The identification catalogue (Fig. 18 and Tab. 4) consists of the results from cross-correlating the RBSC with the catalogues listed in Tab. 3. The main X-ray properties (such as name, position in equatorial and galactic coordinates, and fluxes) are given, as well as the number of counterpart candidates found within a search radius of 300 arcsec. For each of the candidates an individual record is appended, containing the source position, the angular separation from the X-ray position, different magnitudes (or fluxes in radio or IR wavelength bands), redshift, spectral type and classification (if available). The catalogues are meant to be living databases, so that whenever improved X-ray or new ID data are available they will be included. Each data record contains a date stamp giving the year, month and day on which the record was changed or newly included.

###### Acknowledgements.

We would like to thank the ROSAT team at MPE for their support and for stimulating discussions. We thank Damir Šimić for help in compiling the identification data and Ray Cruddace for the critical reading of the manuscript. The ROSAT Project is supported by the Bundesministerium für Bildung und Forschung (BMBF/DLR) and the Max-Planck-Gesellschaft (MPG). This work has made use of the SIMBAD database operated at CDS, Strasbourg, France. In addition this research also made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA, and the COSMOS digitised optical survey of the southern sky, operated by the Royal Observatory Edinburgh and the Naval Research Laboratory, with support from NASA. We thank Richard McMahon for making the APM catalogue available to MPE.
# QCD hadron spectrum with domain wall fermions

## 1 INTRODUCTION

The domain wall fermion (DWF) formalism uses an extra space-time dimension to separate the chiral limit from the continuum limit. In this paper, we report a study of the hadron spectrum obtained from both quenched and dynamical DWF. Our conventions are: $`m_f`$ is the 4-d bare quark mass that explicitly mixes the two chiralities on the domain walls, $`L_s`$ is the extent of the lattice in the fifth dimension, and $`m_0`$ is the 5-d bare quark mass, often called the domain wall height. The masses in physical units in this paper are obtained by using the rho mass to set the scale.

## 2 QUENCHED QCD SPECTRUM

Last year, we reported that with domain wall fermions at $`\beta =5.7`$ on an $`8^3\times 32`$ lattice with $`m_0=1.65`$ and $`L_s=48`$, $`m_N/m_\rho =1.42(10)`$ as $`m_f\to 0`$. Considering the moderate size of the lattice, this value compares favorably with Wilson and staggered results. However, an $`L_s`$ study for $`8^3\times 32`$ lattices at this $`\beta `$ shows that $`m_\pi ^2(m_f\to 0)`$ fits well to the form $`A\mathrm{exp}(-\alpha L_s)+B`$ with a non-zero value of $`B\approx 0.048`$, which gives $`m_\pi /a(m_f\to 0)\approx 213\mathrm{MeV}`$ at infinite $`L_s`$. In Figure 1 we show the pion mass data on which this conclusion was based. In order to investigate how much of this $`213\mathrm{MeV}`$ pion mass comes from the effects of finite volume, we have studied a $`16^3\times 32`$ lattice at $`\beta =5.7`$, $`L_s=24`$. We find $`m_\pi ^2(m_f\to 0)=0.077(2)`$, only $`0.021`$ smaller than the $`0.098(7)`$ value we obtained for $`8^3\times 32`$ at $`L_s=24`$. This finite volume shift is less than half of the $`0.048`$ infinite-$`L_s`$ limit for $`8^3\times 32`$, suggesting that this non-zero limit of $`m_\pi ^2`$ is not a finite volume effect. To examine this question more carefully, we assume the effects of $`L_s`$ can be represented by a residual quark mass, $`m_{\mathrm{res}}(L_s)`$, and express $`m_\pi ^2`$ to first order in chiral symmetry breaking as: $`m_\pi ^2(V,m_f,L_s)=c_0(V)+c_1(V)(m_f+m_{\mathrm{res}}(L_s))`$. We find $`c_1(8^3)=4.54(9)`$, $`c_1(16^3)=4.75(3)`$. If we require that $`c_0`$ vanishes as $`V\to \infty `$ and assume that $`16^3`$ is sufficiently large that $`c_0(16^3)\approx 0`$, then either $`m_{\mathrm{res}}`$ does not vanish as $`L_s\to \infty `$ or $`m_{\mathrm{res}}(L_s)`$ has a strong, unphysical dependence on the spatial volume, with $`m_{\mathrm{res}}(24)`$ increasing from $`0.011(2)`$ to $`0.0162(5)`$ as the volume is increased from $`8^3`$ to $`16^3`$. Thus, our results appear to require one of three unexpected possibilities: i) the quenched $`m_\pi `$ does not vanish in the chiral limit; ii) $`m_{\mathrm{res}}`$ does not vanish as $`L_s\to \infty `$; or iii) $`m_{\mathrm{res}}`$ has a strong volume dependence for $`L_s=24`$. To support our weak matrix element study, we have also measured hadron masses using DWF and the Wilson gauge action at a weaker coupling, $`\beta =6.0`$. On a $`16^3\times 32`$ lattice with $`m_0=1.80`$, $`L_s=16`$ and $`m_f`$ ranging from $`0.01`$ to $`0.04`$, we have (Table 1) $`m_N/m_\rho =1.37(5)`$ and $`m_\pi ^2=0.014(2)`$ as $`m_f\to 0`$, which gives $`m_\pi /a(m_f\to 0)=230(15)\mathrm{MeV}`$. From the above discussion, we can see that to decrease the pion mass while using DWF and the Wilson gauge action, large $`L_s`$ is needed. However, the computational cost is proportional to $`L_s`$.
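A minimal sketch of the $`L_s`$ extrapolation described above is given below: it fits $`m_\pi ^2(L_s)=A\mathrm{exp}(-\alpha L_s)+B`$ and reads off the residual $`B`$. The data points are synthetic, generated only to exercise the fit; they are not the measured $`8^3\times 32`$ values.

```python
# Sketch: extract the non-zero L_s -> infinity limit B from an exponential fit.
# The "measurements" are synthetic, built around B = 0.048 for illustration.
import numpy as np
from scipy.optimize import curve_fit

def mpi2(Ls, A, alpha, B):
    return A * np.exp(-alpha * Ls) + B

Ls = np.array([8, 16, 24, 32, 48], dtype=float)
data = mpi2(Ls, A=0.30, alpha=0.10, B=0.048)
data += np.random.default_rng(0).normal(0.0, 0.002, Ls.size)  # toy noise

popt, pcov = curve_fit(mpi2, Ls, data, p0=(0.3, 0.1, 0.05))
A_fit, alpha_fit, B_fit = popt
print(f"B = {B_fit:.3f} +- {np.sqrt(pcov[2, 2]):.3f}")  # non-zero residual B
```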
It might be helpful to use the renormalization group improved gauge action of Iwasaki, which smooths out the gauge fields on the lattice. We have simulated the Iwasaki gauge action at $`\beta =2.2827`$ and $`c_1=-0.331`$, which is equivalent to quenched $`\beta =5.7`$ with the Wilson action. The resulting pion masses are promising. Compared with the large $`m_\pi ^2(m_f\to 0)`$ values obtained using the Wilson action, much smaller values are obtained on an $`8^3\times 32`$ lattice, as shown in Figure 1. Studies with $`L_s`$ ranging from $`16`$ to $`32`$ and $`m_0`$ ranging from $`1.40`$ to $`1.90`$ show that $`m_\pi `$ has little dependence on $`L_s`$ and $`m_0`$. A simulation at larger volume ($`16^3\times 32`$) also shows that the finite volume effect is small (Figure 1). These results suggest that using the Iwasaki gauge action may enable us to study quenched DWF at smaller $`L_s`$, but we may still have a non-zero pion mass at infinite $`L_s`$.

## 3 DYNAMICAL QCD SPECTRUM

To support our thermodynamics studies, we have measured the dynamical QCD hadron masses at zero temperature to set the scale. Unless indicated otherwise, all the simulations discussed in this section are done with lattice size $`8^3\times 32`$, $`m_0=1.90`$, $`L_s=24`$, $`m_f^{(dyn)}=0.02`$, and valence quark mass ranging from 0.02 to 0.22 with an increment of 0.04. Valence extrapolations for some of the simulations discussed below are shown in Table 1. Using DWF and the Wilson action, at $`N_t=4`$ the transition occurs at about $`\beta =5.325`$. At this $`\beta `$, we find $`m_\rho =1.18(3)`$, $`m_\pi =0.654(3)`$ at $`m_f^{(dyn)}=0.02`$, which gives $`T_c=163(4)\mathrm{MeV}`$ and $`m_\pi /a=427(11)\mathrm{MeV}`$. Using a larger $`16^3\times 16`$ volume only reduces $`m_\pi `$ to $`0.652(3)`$, which suggests that the finite volume effect is very small. A Ward identity evaluation shows that the residual mass caused by the mixing between the two walls plays an important role in this heavy pion mass. However, that study also shows that these residual mass effects do vanish in the limit of large $`L_s`$ for these full QCD simulations. We have also measured the masses for $`m_f^{(dyn)}=0.06`$ at $`\beta =5.325`$. Performing true dynamical extrapolations using the two dynamical points obtained from $`m_f=0.02,0.06`$, we get $`m_\pi ^2=0.320(6)+5.38(11)m_f^{(dyn)}`$. Compared with the valence extrapolation, the dynamical extrapolation slightly decreases the pion mass as $`m_f\to 0`$. As in the quenched studies, we have also investigated the renormalization group improved gauge action. We have simulated at $`\beta =1.90`$, $`c_1=-0.331`$, which is about the transition point for $`N_t=4`$ with DWF and the Iwasaki gauge action. We obtain $`m_\rho =1.16(2)`$, $`m_\pi =0.604(3)`$ at $`m_f^{(dyn)}=0.02`$, which gives $`T_c=166(3)\mathrm{MeV}`$ and $`m_\pi /a=400(7)\mathrm{MeV}`$. Surprisingly, this is about the same value as that obtained at $`\beta _c`$ using the Wilson action. Therefore, although the Iwasaki action helps to reduce the pion mass in our quenched study, this is not true for the dynamical case. We have also measured the hadron masses at $`\beta =2.0`$ for both $`m_f^{(dyn)}=0.02,0.06`$. As with the Wilson gauge action, we can draw the same conclusion: the dynamical extrapolation $`m_\pi ^2=0.088(10)+6.9(2)m_f^{(dyn)}`$ gives a slightly smaller pion mass as $`m_f\to 0`$. To study the finite $`L_s`$ effect, we are currently doing a simulation at $`\beta =2.0`$, $`c_1=-0.331`$, and $`L_s=48`$.
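The dynamical chiral extrapolations quoted above are simple linear fits, $`m_\pi ^2=c_0+c_1m_f^{(dyn)}`$. The sketch below reproduces the mechanics using two points reconstructed from the quoted $`\beta =2.0`$ fit; with only two dynamical masses the fit necessarily passes through both points, so this illustrates the bookkeeping rather than a test of linearity.

```python
# Sketch of the linear chiral extrapolation m_pi^2 = c0 + c1 * m_f.
# The two "data" points are reconstructed from the quoted fit
# m_pi^2 = 0.088 + 6.9 m_f, not the raw measurements.
import numpy as np

mf = np.array([0.02, 0.06])
mpi2 = 0.088 + 6.9 * mf                 # stand-in for the measured values

c1, c0 = np.polyfit(mf, mpi2, 1)        # slope, intercept
print(f"m_pi^2(m_f -> 0) = {c0:.3f}, slope = {c1:.2f}")
```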
Figure 2 shows the valence extrapolations of the pion, rho, and nucleon masses for $`L_s=24`$ and the pion mass for $`L_s=48`$. For $`L_s=24`$, we have $`m_\pi (m_f^{(dyn)}=0.02)=0.475(7)`$. For $`L_s=48`$, we have obtained $`m_\pi (m_f^{(dyn)}=0.02)=0.420(10)`$. This confirms that the mixing between the two walls is at least a major cause of the heavy pion mass at $`\beta _c`$, and that this effect can be reduced by increasing $`L_s`$.

## 4 CONCLUSIONS

Although for the quenched QCD spectrum the Iwasaki action lowers the pion mass compared with that obtained using the Wilson action, the non-zero value of $`m_\pi (m_f\to 0)`$ is still problematic. From our dynamical QCD spectrum study, we have obtained a heavy pion mass at $`\beta _c`$ for $`N_t=4`$ for both the Wilson and Iwasaki gauge actions at $`L_s=24`$. This can be improved by increasing $`L_s`$. These calculations were performed on the QCDSP machines at Columbia and RIKEN/BNL.
# INSTABILITIES IN A SELF-GRAVITATING MAGNETIZED GAS DISK

S. M. Lee and S. S. Hong

Department of Astronomy, Seoul National University, Seoul 151-742, KOREA

ABSTRACT

A linear stability analysis has been performed on a self-gravitating magnetized gas disk bounded by external pressure. The resulting dispersion relation is fully explained by three kinds of instability: a Parker-type instability driven by self-gravity, the usual Jeans gravitational instability, and convection. In the direction parallel to the magnetic fields, the magnetic tension completely suppresses the convection. If the adiabatic index $`\gamma `$ is less than a certain critical value, the perturbations trigger the Parker as well as the Jeans instability in the disk. Consequently, the growth rate curve has two maxima: one at small wavenumber due to a combination of the Parker and Jeans instabilities, and the other at somewhat larger wavenumber mostly due to the Parker instability. In the horizontal direction perpendicular to the fields, the convection makes the growth rate increase monotonically up to a limiting value as the perturbation wavenumber gets large. However, at small wavenumbers, the Jeans instability becomes effective and develops a peak in the growth rate curve. Depending on the system parameters, the maximum growth rate of the convection may or may not be higher than the peak due to the Jeans-Parker instability. Therefore, a cooperative action of the Jeans and Parker instabilities has a chance to over-ride the convection and may develop large scale structures of cylindrical shape in the non-linear stage. In thick disks the cylinder is expected to align its axis perpendicular to the field, while in thin ones parallel to it.

1. Introduction

The Parker instability is one of the most important processes through which the Galactic disk may have generated large scale structures. When one suggests the instability as a candidate mechanism for making a large scale structure in the Galaxy, one should be careful about the destructive role of convection (Kim & Hong 1998). Since the growth rate of the convective instability increases with decreasing perturbation wavelength, the interstellar medium (ISM) in the Galactic disk may get shredded into filamentary pieces by the convection before fully developing a structure (Asséo et al. 1978). In most of the previous studies on the Parker instability, an externally given gravity was taken as the sole source of its driving force. In the present study we instead take the self-gravity as the driving force and ignore the external gravity from stars. Nagai et al. (1998) also took the self-gravity into account, but they used uniform magnetic fields and considered only the isothermal case. In making large scale structures the self-gravity ought to have played a constructive role by triggering the Jeans instability in the medium. In this study we model the Galactic ISM as an infinite disk of magnetized gas under the influence of its own gravity, and carefully follow the competition among the Jeans, Parker, and convective instabilities. We first give adiabatic perturbations to the ISM in an isothermal equilibrium, and then perform a linear stability analysis on the perturbed disk to derive the dispersion relation. The $`z`$-axis is taken perpendicular to the disk plane, the $`y`$-axis in the plane along the direction of the unperturbed magnetic fields, and the $`x`$-axis perpendicular to the fields. All lengths are normalized to the scale height $`H`$ of the equilibrium disk.
The Galactic halo is supposed to bind the ISM disk between $`z=\pm z_\mathrm{a}`$, or equivalently $`\pm \zeta _\mathrm{a}\equiv \pm z_\mathrm{a}/H`$. The normalized wavenumber is denoted by $`\nu _x`$ and $`\nu _y`$ for the perturbations in the $`x`$- and $`y`$-directions, respectively. The growth rate $`|\omega |`$ is normalized by the free-fall time, and we denote the dimensionless rate by $`|\mathrm{\Omega }|`$. The system is fully described by the disk thickness, $`\zeta _\mathrm{a}`$, the ratio of magnetic to gas pressure, $`\alpha `$, the adiabatic index, $`\gamma `$, the boundary conditions, and finally the perturbation wavenumbers, $`\nu _x`$ and $`\nu _y`$.

2. Dispersion Relations for Thick and Thin Disks

Aiming at the gravitational instability, we assigned an odd symmetry to the perturbation at $`z`$=0. Perturbations with even symmetry do not trigger the gravitational instability. For the case of a thick disk with $`\zeta _\mathrm{a}`$=5.0, $`\alpha `$=0.1, $`\gamma `$=0.8, and the odd symmetry, we have shown, in Figure 1, how the growth rate varies with $`\nu _x`$ and $`\nu _y`$. This particular set of system parameters is chosen in such a way that we can see all the features of the Parker, Jeans and convective instabilities in the resulting dispersion relation. If $`\gamma <1+\alpha `$, convection arises in the system. In the $`x`$-direction, the growth rate of the convection increases with increasing wavenumber. This is the reason why the ridge height in Figure 1 slowly increases, as $`\nu _x\to \infty `$, up to the limiting value $`\mathrm{\Omega }_{\mathrm{max}}^2=(2/\alpha )\left[1+\alpha +\gamma -2\sqrt{\gamma (1+\alpha )}\right]\mathrm{tanh}^2\zeta _\mathrm{a}`$. However, in the $`y`$-direction, the convection gets completely suppressed by the magnetic tension. If $`1-\alpha <\gamma <1+\alpha `$, the magnetic Rayleigh-Taylor instability would not have a chance to develop. Since $`\gamma <1-\alpha `$ in our case, the magnetic Rayleigh-Taylor instability can be triggered and yields non-zero growth rates all along the $`\nu _x`$-axis. A rather sharp peak in the dispersion curve at ($`\nu _x\approx 0.50`$, $`\nu _y`$=0) is clearly due to the Jeans instability. One can see a similar peak at ($`\nu _x`$=0, $`\nu _y\approx 0.61`$). The latter is higher than the former, because it is due to a combined effect of the Jeans and Parker instabilities. The Parker instability driven by the self-gravity has brought about a third maximum at around ($`\nu _x`$=0, $`\nu _y\approx 1.4`$). Thick disks have enough space for the magnetic fields to bend over so that matter can easily slide down. Therefore, the gravity gets an extra boost from the fields. This is the reason why the $`\nu _y`$-axis peak is higher than the $`\nu _x`$-axis one in Figure 1. In thin disks, however, there is not enough leeway for the fields to buckle up. Consequently, the fields, tightly confined in a narrow layer, hinder rather than boost the development of the gravitational instability along the $`y`$-direction. This makes the $`\nu _y`$-axis peak lower than the $`\nu _x`$-axis one in Figure 2. Because of the $`\mathrm{tanh}^2\zeta _\mathrm{a}`$ factor, the gravity always over-rides the convection in thin disks.
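As a numerical check of the limiting convective growth rate quoted above, the sketch below evaluates $`\mathrm{\Omega }_{\mathrm{max}}^2`$ for the thick-disk parameters of Figure 1 and for a thinner disk; the thin-disk value of $`\zeta _\mathrm{a}`$ is chosen arbitrarily for illustration.

```python
# Evaluate Omega_max^2 = (2/alpha)[1 + alpha + gamma
#                        - 2 sqrt(gamma (1 + alpha))] tanh^2(zeta_a).
import math

def omega_max_sq(alpha, gamma, zeta_a):
    bracket = 1 + alpha + gamma - 2 * math.sqrt(gamma * (1 + alpha))
    return (2 / alpha) * bracket * math.tanh(zeta_a) ** 2

print(omega_max_sq(0.1, 0.8, 5.0))   # thick disk: ~ 0.48
print(omega_max_sq(0.1, 0.8, 0.5))   # thin disk:  ~ 0.10, cut by tanh^2(zeta_a)
# The bracket equals (sqrt(1 + alpha) - sqrt(gamma))^2, so it vanishes exactly
# at gamma = 1 + alpha, where the convective instability switches off.
```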
3. Competition between the Jeans-Parker Instability and the Convection

In order to see under what conditions the Jeans instability assisted by the Parker instability may win over the convection, we have compared their maximum growth rates with each other. As can be seen from the left panel of Figure 3, for a given $`\alpha `$, one may find a critical value of $`\gamma `$, above which the Jeans-Parker instability (solid line with open circles) dominates the system over the convection (dotted line). Three instability criteria are compared with each other in the right panel of the figure: the solid line is for the convection, the dashed one for the magnetic Rayleigh-Taylor instability, and the dotted line with open circles for the Jeans-Parker instability. In the domain below the dashed line both the magnetic Rayleigh-Taylor instability and the convection may develop, while in the domain bounded by the dashed and solid lines the magnetic Rayleigh-Taylor instability may not occur (cf. Newcomb 1961; Parker 1967). Above the dotted line with open circles the system forms a large scale structure via the Jeans-Parker instability.

4. Conclusion

The linear stability analysis has marked out, in the system parameter space, those domains where the magnetized gas disk under self-gravity would become unstable against perturbations of large wavelength. If the disk is thick, the Parker instability could assist the gravity to generate large scale structures of cylindrical shape. Because an undular perturbation should be applied to the magnetic fields to trigger the Parker instability, the structure formed through the Jeans-Parker instability tends to align its axis perpendicular to the magnetic fields. In thin disks the undular perturbation ought to be of short wavelength, and the resulting strong magnetic tension would not allow any structures to form perpendicular to the magnetic fields. Consequently, if the disk is thin, the self-gravity would drive the system to develop large scale structures along the field direction. This study was supported in part by a grant from the Korea Research Foundation made in the year 1997.

REFERENCES

Asséo, E., Cesarsky, C. J., Lachièze-Rey, M., & Pellat, R. 1978, ApJ, 225, L21

Kim, J., & Hong, S. S. 1998, ApJ, 507, 254

Nagai, T., Inutsuka, S., & Miyama, S. M. 1998, ApJ, 506, 306

Newcomb, W. A. 1961, Phys. Fluids, 4, 391

Parker, E. N. 1967, ApJ, 149, 535
# Spontaneous CP Violation in Large Extra Dimensions

## 1 Introduction

String theory indicates the existence of extra dimensions beyond our usual four dimensional spacetime. These extra dimensions must be compactified with small radii in order not to be observed. However, these radii do not have to be close to the Planck length; they have only to satisfy the constraint from gravity experiments, i.e., they should be shorter than the cm range. The relation between the fundamental scale $`M_{*}`$ and the observed Planck scale $`M_p`$ is given by $$M_p^2=M_{*}^{n+2}V_n,$$ where $`V_n`$ is the volume of the extra space, and $`n`$ is the number of the extra dimensions. If $`M_{*}`$ is set to the TeV scale, $`n`$ must be greater than one, otherwise gravity would deviate from the observed behaviour over the scale of the solar system. For simplicity, we assume that the shape of the extra space is a torus, in which case $`V_n=(2\pi )^nR_1R_2\mathrm{}R_n`$, where $`R_i`$ is the radius of the $`i`$-th extra dimension. Hence, $$M_p^2=(2\pi )^nM_{*}^{2+n}R_1R_2\mathrm{}R_n.$$ (1) We can see from Eq. (1) that $`M_{*}`$ can be lowered from $`M_p`$ to the TeV scale by taking the radii $`R_i`$ to be large compared to the Planck length. In this way, we can solve the hierarchy problem without supersymmetry (SUSY) or technicolor . We can take the standard model (SM) fields to be either bulk fields or boundary fields, which are confined to D-branes or domain walls. In the case that the SM fields feel some extra dimensions, the corresponding compactification scale must be larger than a few hundred GeV, because the corresponding Kaluza-Klein (K.K.) modes have not been observed . It is possible to realize the hierarchy among the fermion masses by using the ratio of the volume of the region in which the bulk fields spread out to that of the boundary fields . CP is a very good symmetry, but it is violated in the K-$`\overline{\text{K}}`$ system by a small amount ($`ϵ_K\approx 2\times 10^{-3}`$). There are two possibilities for the origin of the CP violation. One is explicit CP violation, and the other is spontaneous CP violation (SCPV). In string theory, CP is a gauge symmetry , so it must be spontaneously broken. If this CP breaking scale is low enough, one finds that CP is violated spontaneously in an effective field theory at low energy. We shall consider this case here. In this paper we shall assume that different fields feel different numbers of extra dimensions, as discussed in Ref., and show that in the context of large extra dimensions enough CP violation can be obtained from spontaneous breaking in a simple non-SUSY model, which is usually considered not to cause the SCPV. In Section 2, we shall explain our model and realize the hierarchy among the fermion masses. In Section 3, we shall estimate $`ϵ_K`$ in our model and show that it is consistent with the observed value. In Section 4, the neutrino masses and mixing angles are derived without the help of an intermediate scale, and in Section 5 we shall try to apply the axion scenario to our context. Finally, Section 6 contains some conclusions.

## 2 Model

### 2.1 Our model

Basically, we shall consider the minimal standard model with an additional gauge-singlet scalar field, but we shall assume that different fields feel different numbers of extra dimensions. The relevant interactions are as follows.
$`\mathcal{L}_{\text{int}}`$ $`=`$ $`h_{ij}^uQ_iH^{\dagger }\overline{U}_j+h_{ij}^dQ_iH\overline{D}_j+h_{ij}^eL_iH\overline{E}_j`$ (2) $`+\widehat{y}_{ij}^uSQ_iH^{\dagger }\overline{U}_j+\widehat{y}_{ij}^dSQ_iH\overline{D}_j+\widehat{y}_{ij}^eSL_iH\overline{E}_j+h.c.`$ $`+(\text{Higgs sector}),`$ where $`Q_i`$, $`\overline{U}_i`$ and $`\overline{D}_i`$ are the $`i`$-th generation of the left-handed quark doublet, right-handed up-type quark singlet and right-handed down-type quark singlet, respectively. $`L_i`$ and $`\overline{E}_i`$ are the $`i`$-th generation of the left-handed lepton doublet and right-handed charged lepton singlet. $`H`$ and $`S`$ are the doublet and singlet Higgs fields, respectively, $`h_{ij}^x`$ ($`x=u,d,e`$) are the Yukawa coupling constants and $`\widehat{y}_{ij}^x`$ are dimensionful coupling constants. Note that the non-renormalizable terms in the second line of Eq. (2) should be kept, because the fundamental scale $`M_{*}`$ is relatively low in our scenario. We assume CP invariance for the Lagrangian, so that all parameters in Eq. (2) are real. We will explore the possibility that the CP invariance is broken spontaneously at the weak scale due to the complex vacuum expectation values (VEVs) of the Higgs fields. The numbers of the extra dimensions that each field can feel are listed in Table 1. The radii of these extra dimensions are assumed to be of the same size, denoted by $`R_1`$.

### 2.2 Fermion mass hierarchy

Let us denote by $`\psi (x,y)`$ a five dimensional bulk field, where $`y`$ represents the coordinate of the fifth dimension, compactified with a radius $`R`$. If we Fourier expand $$\psi (x,y)=\sum _{m=0}^{\infty }\frac{1}{\sqrt{2\pi R}}\psi _m(x)e^{i(m/R)y},$$ then we can regard $`\psi _m(x)`$ as a four dimensional field corresponding to the $`m`$-th K.K. mode. On the other hand, the boundary fields are localized at the four dimensional wall, whose thickness is of order $`M_{*}^{-1}`$, so a coupling including at least one bulk field is suppressed by a factor $`ϵ^k`$, where $`ϵ\equiv 1/\sqrt{2\pi M_{*}R}`$ and $`k`$ is the number of bulk fields included in the coupling . The generalization to our six dimensional case is trivial. The existence of the infinite tower of K.K. modes changes the running of the gauge coupling constants above the compactification scale $`R_1^{-1}`$ into a power-law running . It thus seems natural that new physics, such as a GUT, appears within about one order of magnitude above the scale $`R_1^{-1}`$, judging from the runnings of the gauge couplings . We shall denote the scale at which the new physics appears as $`M_{NP}`$. Now let us assume that the coupling constants $`h_{ij}^x`$, $`\widehat{y}_{ij}^x`$ are generated at the scale $`M_{NP}`$; then it seems natural that they are of the form $`\stackrel{~}{h}_{ij}^x/M_{NP}`$ and $`\stackrel{~}{y}_{ij}^x/M_{NP}^3`$ in the six dimensional bulk spacetime, where $`\stackrel{~}{h}_{ij}^x`$ and $`\stackrel{~}{y}_{ij}^x`$ are dimensionless couplings. In this case the four dimensional couplings $`h_{ij}^x`$ and $`\widehat{y}_{ij}^x`$ are of the form $`h_{ij}^u\sim \left(\begin{array}{ccc}ϵ^4& ϵ^3& ϵ^2\\ ϵ^3& ϵ^2& ϵ\\ ϵ^2& ϵ& 1\end{array}\right),h_{ij}^d\sim ϵ\left(\begin{array}{ccc}ϵ^2& ϵ^2& ϵ^2\\ ϵ& ϵ& ϵ\\ 1& 1& 1\end{array}\right),h_{ij}^e\sim ϵ\left(\begin{array}{ccc}ϵ^3& ϵ^2& ϵ\\ ϵ^2& ϵ& 1\\ ϵ^2& ϵ& 1\end{array}\right)`$ (12) $`\widehat{y}_{ij}^x\sim {\displaystyle \frac{ϵM_{*}}{M_{NP}^2}}h_{ij}^x`$ . (13) Here we have assumed $`(M_{*}/M_{NP})\stackrel{~}{h}\sim 1`$ and $`\stackrel{~}{h}\sim \stackrel{~}{y}`$.
Thus, setting $`ϵ\equiv 1/\sqrt{2\pi M_{*}R_1}`$ to be $`1/15`$, the desired hierarchy among the quark and lepton masses and mixing angles is obtained.<sup>2</sup>Since the energy range of the power-law running is much smaller than the hierarchy between $`M_{*}`$ and $`(2\pi R_1)^{-1}`$, the power-law running of the Yukawa couplings does not destroy the structure represented by Eq. (13). For example, if we assume $`R_1^{-1}\approx 300`$ GeV, we should set $`M_{*}\approx 10`$ TeV. We shall take these values in the following, and we shall further assume $`M_{NP}\approx 3`$ TeV.

## 3 CP violation

CP invariance is broken at the weak scale due to the complex VEVs of the neutral Higgs fields. These VEVs are parametrized as $$H^0=v,S=we^{i\rho },$$ where $`v=174`$ GeV, and we have removed the phase of $`H^0`$ by using the $`U(1)_Y`$ gauge symmetry. Note that our scenario does not depend on the details of the Higgs potential, so we shall simply assume the potential to have a CP violating minimum. The quark mass matrices at low energy are then obtained as $`(M_u)_{ij}`$ $`=`$ $`(h_{ij}^u+\widehat{y}_{ij}^uwe^{i\rho })v,`$ $`(M_d)_{ij}`$ $`=`$ $`(h_{ij}^d+\widehat{y}_{ij}^dwe^{i\rho })v.`$ We can see from Eq. (13) that $`M_u`$ and $`M_d`$ have complex phases of order $`\phi \equiv ϵwM_{*}/M_{NP}^2\approx 0.03`$, so each element of the CKM matrix also has an $`O(\phi )`$ phase. Here we have assumed $`w\approx 200`$ GeV. Next we expand the neutral Higgs fields around their VEVs as follows: $`H^0`$ $`=`$ $`v+{\displaystyle \frac{1}{\sqrt{2}}}\varphi ,`$ $`S`$ $`=`$ $`we^{i\rho }+{\displaystyle \frac{1}{\sqrt{2}}}e^{i\rho }(X+iY),`$ (14) where we have chosen the unitary gauge. $`\varphi `$ and $`X`$ are CP-even real scalar fields, and $`Y`$ is a CP-odd one. In terms of these fields, the renormalizable Yukawa coupling terms below the weak scale are $`\mathcal{L}_{\text{yukawa}}`$ $`=`$ $`{\displaystyle \frac{m_i^u}{\sqrt{2}v}}q_i\overline{u}_i\varphi +{\displaystyle \frac{m_i^d}{\sqrt{2}v}}q_i\overline{d}_i\varphi `$ (15) $`+{\displaystyle \frac{1}{\sqrt{2}}}y_{ij}^ue^{i\rho }q_i\overline{u}_j(X+iY)+{\displaystyle \frac{1}{\sqrt{2}}}y_{ij}^de^{i\rho }q_i\overline{d}_j(X+iY)`$ $`+(\text{lepton sector}),`$ where $`q_i`$, $`u_i`$ and $`d_i`$ are the mass eigenstates of the quarks, and $`m_i^x`$ is the mass eigenvalue corresponding to the state $`x_i`$. Note that $`y_{ij}^u\equiv \widehat{y}_{ij}^{\prime u}v`$ and $`y_{ij}^d\equiv \widehat{y}_{ij}^{\prime d}v`$ have $`O(\phi )`$ phases, where $`\widehat{y}^{\prime x}`$ denotes the $`\widehat{y}^x`$ matrix in the basis of the quark mass eigenstates. Now we shall estimate $`ϵ_K`$ in the K-$`\overline{\text{K}}`$ system. The CP violation parameter $`ϵ_K`$ can be expressed as $$|ϵ_K|\approx \frac{1}{2\sqrt{2}}\frac{\mathrm{Im}M_{12}}{\mathrm{Re}M_{12}},$$ (16) where $`M_{ij}`$ is the neutral kaon mass matrix in the $`\text{K}^0\overline{\text{K}}^0`$ basis. The dominant contribution to $`\mathrm{Re}M_{12}`$ comes from the standard box diagram depicted in Fig. 1.
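To make the structure of Eqs. (12), (13) and of the resulting mass matrices concrete, the sketch below builds a down-type mass matrix from powers of $`ϵ=1/15`$ with random $`O(1)`$ coefficients and inspects its singular-value hierarchy and the size of the induced phases. The coefficients, the random seed and the phase $`\rho `$ are assumptions of this illustration; the text fixes only the powers of $`ϵ`$ and the scales.

```python
# Sketch of Eq. (13) and M_d = (h^d + yhat^d w e^{i rho}) v; O(1) coefficients
# and rho are invented, only the eps powers and scales follow the text.
import numpy as np

eps = 1.0 / 15.0
v, w, Mstar, Mnp = 174.0, 200.0, 1.0e4, 3.0e3   # GeV, values quoted above
rho = 1.0                                        # arbitrary CP-violating phase
rng = np.random.default_rng(2)

def O1():
    return rng.uniform(0.5, 1.5, size=(3, 3))    # stand-in O(1) coefficients

powers_d = np.array([[3, 3, 3], [2, 2, 2], [1, 1, 1]])   # h^d ~ eps*(eps^2,...)
hd = O1() * eps ** powers_d
yd = (eps * Mstar / Mnp**2) * O1() * eps ** powers_d     # yhat^d per Eq. (13)

Md = (hd + yd * w * np.exp(1j * rho)) * v
print(np.linalg.svd(Md, compute_uv=False))  # masses ~ v*eps*(1, eps, eps^2)
print(eps * w * Mstar / Mnp**2)             # phi: size of the complex phases
```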
This diagram is estimated as $`M_{12}^{\text{box}}`$ $`=`$ $`{\displaystyle \frac{G_F}{2\sqrt{2}}}{\displaystyle \frac{\alpha }{4\pi \mathrm{sin}^2\theta _W}}(\mathrm{cos}\theta _c\mathrm{sin}\theta _c)^2\left({\displaystyle \frac{m_c^2}{M_W^2}}\right){\displaystyle \frac{\langle K^0|\overline{d}_L\gamma _\mu s_L\overline{d}_L\gamma ^\mu s_L|\overline{K^0}\rangle }{M_K}}`$ (17) $`\approx `$ $`10^{-13}(\mathrm{GeV}^{-2}){\displaystyle \frac{\langle K^0|\overline{d}_L\gamma _\mu s_L\overline{d}_L\gamma ^\mu s_L|\overline{K^0}\rangle }{M_K}},`$ where $`m_c`$, $`M_W`$ and $`M_K`$ are the masses of the c quark, the W boson and the kaon, respectively, and $`G_F`$ and $`\alpha `$ are the Fermi constant and the fine structure constant. $`\theta _W`$ and $`\theta _c`$ are the Weinberg angle and the Cabibbo angle, respectively. On the other hand, the dominant contribution to $`\mathrm{Im}M_{12}`$ comes from the tree-level diagram shown in Fig. 2, because the box diagram mentioned above is real up to a phase of order $`\phi `$ and cannot give the leading contribution. This contribution is calculated as $`M_{12}^{\text{tree}}`$ $`=`$ $`{\displaystyle \frac{y_{12}^dy_{21}^d}{M_S^2}}{\displaystyle \frac{\langle K^0|\overline{d}_Ls_R\overline{d}_Rs_L|\overline{K^0}\rangle }{M_K}}`$ (18) $`\approx `$ $`\left({\displaystyle \frac{vM_{*}}{M_{NP}^2}}\right)^2{\displaystyle \frac{ϵ^7}{M_S^2}}{\displaystyle \frac{\langle K^0|\overline{d}_Ls_R\overline{d}_Rs_L|\overline{K^0}\rangle }{M_K}},`$ where $`M_S`$ is the mass of the singlet Higgs field. According to Ref., $$\frac{\langle K^0|\overline{d}_Ls_R\overline{d}_Rs_L|\overline{K^0}\rangle }{\langle K^0|\overline{d}_L\gamma _\mu s_L\overline{d}_L\gamma ^\mu s_L|\overline{K^0}\rangle }\approx 7.6,$$ (19) so that $$\left|\frac{M_{12}^{\text{tree}}}{M_{12}^{\text{box}}}\right|\approx 10^{13}(\mathrm{GeV}^2)\left(\frac{vM_{*}}{M_{NP}^2}\right)^2\frac{ϵ^7}{M_S^2}\times 7.6\approx 2\times 10^9\left(\frac{vM_{*}}{M_{NP}^2}\right)^2ϵ^7.$$ (20) Here we have assumed $`M_S\approx 200`$ GeV. Together with the fact that $`\mathrm{Arg}(M_{12}^{\text{tree}})=O(\phi )`$, we obtain $$|ϵ_K|\approx \frac{1}{2\sqrt{2}}\phi \left|\frac{M_{12}^{\text{tree}}}{M_{12}^{\text{box}}}\right|\approx \frac{1}{\sqrt{2}}\times 10^9ϵ^8\left(\frac{v}{M_{NP}}\right)^2\frac{w}{M_{NP}}\left(\frac{M_{*}}{M_{NP}}\right)^3\approx 10^{-3}.$$ (21)

## 4 Neutrino

In our scenario the fundamental scale $`M_{*}`$ is of order the TeV scale, so at first sight it does not seem that the see-saw mechanism, which requires an intermediate scale around $`10^{10}`$ GeV, can be applied. However, by using the volume factor suppression mentioned in Section 2.2 we can realize the small masses of the neutrinos. We shall introduce a new extra dimension and denote its radius by $`R_2`$. Let the right-handed neutrinos $`\nu _{Ri}`$ ($`i=1,2,3`$) feel this new extra dimension, so that the neutrino Yukawa couplings $`h_{ij}^\nu `$ are suppressed by a large volume factor $`1/\sqrt{2\pi M_{*}R_2}`$ and the Dirac masses of the neutrinos can become small enough . Denoting the Majorana mass scale of the right-handed neutrinos by $`M_N`$, the see-saw mechanism gives the left-handed neutrino mass matrix $`m_\nu `$ as $$m_\nu \sim \frac{v^2}{M_N}\frac{1}{2\pi M_{*}R_2}\left(\begin{array}{ccc}ϵ^2& ϵ& ϵ\\ ϵ& 1& 1\\ ϵ& 1& 1\end{array}\right),$$ (22) where $`v=174`$ GeV. This realizes simultaneously the large $`\nu _\mu `$-$`\nu _\tau `$ mixing expected from the atmospheric neutrino data and the small $`\nu _e`$-$`\nu _\mu `$ mixing expected from the small angle MSW solution . For example, if we assume $`M_N\approx M_{*}`$, we should set $`R_2^{-1}\approx 2`$ keV in order to obtain $`m_{\nu _\tau }\approx 10^{-1}`$ eV. We shall list some notable comments.
First, if we try to suppress $`m_\nu `$ only by the volume factor, without the see-saw mechanism, then $`R_2`$ becomes so large that the extra dimension would be observed in gravity experiments. Thus we must either use the see-saw mechanism or let $`\nu _{Ri}`$ feel more than one extra dimension in order to realize the experimentally acceptable small masses of the neutrinos without contradicting the gravity experiments. Second, the effect of the running of $`h_{ij}^\nu `$ between $`M_{NP}`$ and $`R_2^{-1}`$ is not expected to be very large, for the same reason mentioned in the footnote in Section 2.2. We have therefore neglected this running effect in the above discussion. Finally, it is worth noting that the infinite tower of Kaluza-Klein modes of $`\nu _{Ri}`$ can correspond to sterile neutrinos from the phenomenological point of view .

## 5 Strong CP problem

The axion scenario is the most convincing solution to the strong CP problem. Similarly to the previous section, however, the axion scenario also needs an intermediate scale, at which the Peccei-Quinn symmetry is broken. Here we shall avoid this difficulty by using the power-law running of coupling constants, which is characteristic of the context of large extra dimensions. Assume that the axion, which is confined to our four dimensional wall, interacts with spinor fields $`\psi `$ and $`\overline{\psi }`$, which feel two extra dimensions whose radii are both $`R_3`$: $$\mathcal{L}_{a\psi \overline{\psi }}=g_\psi a\psi \overline{\psi },$$ where $`a`$ represents the axion field. In this case the wavefunction renormalization factor of the axion, $`Z_a`$, will scale according to the power-law running above the scale $`\mu _3\equiv R_3^{-1}`$: $$Z_a=1-cg_\psi ^2\left(\frac{\mathrm{\Lambda }}{\mu _3}\right)^4+\cdots ,$$ (23) where $`a(\mathrm{\Lambda })=Z_a^{1/2}a(\mu _3)`$, $`\mathrm{\Lambda }`$ is a cut-off and $`c`$ is an $`O(1)`$ constant. Moreover, $$g_\psi (\mathrm{\Lambda })=Z_{a\psi \overline{\psi }}Z_a^{-1/2}Z_\psi ^{-1/2}Z_{\overline{\psi }}^{-1/2}g_\psi (\mu _3),$$ (24) where $`Z_{a\psi \overline{\psi }}`$ is the vertex renormalization factor, and $`Z_\psi `$ and $`Z_{\overline{\psi }}`$ are the wavefunction renormalization factors of $`\psi `$ and $`\overline{\psi }`$, respectively. If we assume that the power-law running of $`Z_a`$ is the strongest among the runnings of the $`Z`$-factors, then $$g_\psi ^2(\mathrm{\Lambda })\left\{1-cg_\psi ^2\left(\frac{\mathrm{\Lambda }}{\mu _3}\right)^4\right\}\approx g_\psi ^2(\mu _3).$$ (25) Next we make the further assumption that the value of $`g_\psi `$ at the Peccei-Quinn symmetry breaking scale (PQ scale) $`M_{PQ}`$ is much larger than $`g_\psi (\mu _3)`$. Then we obtain $$g_\psi (\mu _3)\approx \left(\frac{\mu _3}{M_{PQ}}\right)^2.$$ (26) Under this assumption the suppression mainly comes from the power-law running of $`Z_a`$, so that all couplings including the axion field $`a`$ are expected to receive the same suppression factor as that of $`g_\psi `$. For example, the coupling $`(1/64\pi ^2)(a/M_{PQ})ϵ_{\mu \nu \rho \sigma }F_\alpha ^{\mu \nu }F_\alpha ^{\rho \sigma }`$ receives the suppression factor $`(\mu _3/M_{PQ})^2`$ below the scale $`\mu _3`$, and thus the effective PQ scale $`f_{PQ}`$ becomes $$f_{PQ}\approx \left(\frac{M_{PQ}}{\mu _3}\right)^2M_{PQ}.$$ (27) For instance, if we set $`M_{PQ}\approx M_{*}`$ and $`\mu _3=10`$ GeV, then we obtain $`f_{PQ}\approx 10^{10}`$ GeV, and this value satisfies the cosmological constraint.<sup>3</sup>In Ref.
the axion field itself is supposed to be a bulk field, and the constraint $`10^9\text{ GeV}<f_{PQ}<10^{15}\text{ GeV}`$ is satisfied by the volume factor suppression.

## 6 Conclusions

We showed that, in the context of large extra dimensions, enough CP violation can be obtained from spontaneous breakdown in a simple non-SUSY model, which is usually considered not to cause spontaneous CP violation, and we estimated $`ϵ_K`$ to be of order $`10^{-3}`$, consistent with the experimental value. It is appealing that the same volume factors are used to generate both the adequate smallness of the CP violation and the hierarchy among the Yukawa couplings, which is used to suppress flavor changing neutral currents (FCNC). Our scenario does not depend on the details of the Higgs potential that has a CP violating minimum, and no extra symmetries are introduced, so we can easily generalize our model to models with a more complicated Higgs sector. For example, the two-Higgs-doublet standard model with the exact discrete symmetry of natural flavor conservation can also cause the SCPV in our scenario. The essence of our scenario is the existence of suppressed extra Yukawa matrices that have complex phases and become the main sources of the CP violation. Another work in this direction is, for example, Ref., in which extra Higgs doublets are introduced and a Peccei-Quinn-like approximate symmetry is used to suppress the dangerous FCNC and the CP violation to the observed level. We, on the other hand, have used the volume factor suppression in the context of large extra dimensions instead of approximate symmetries. In our scenario the naive see-saw mechanism or axion scenario, which need an intermediate scale around $`10^{10}`$ GeV, cannot be applied, since the fundamental scale $`M_{*}`$ is the TeV scale. However, these difficulties can be avoided by making use of the volume factor suppression or the power-law running of couplings, which are characteristic of the context of large extra dimensions. One can also consider the scenario in which the Yukawa couplings are generated at the scale $`M_{*}`$ and the hierarchy among the Yukawa couplings is realized as a hierarchy among quasi infrared fixed points (QFPs) , but in such a case the coupling constants $`\widehat{y}_{ij}^x`$ would become too small to realize a realistic value of $`ϵ_K`$. Also, in this case $`M_{NP}`$ would have to be lifted up to $`M_{*}`$, and we would need some mechanism that suppresses the power-law running of the gauge coupling constants, for example N=4 SUSY. So far we have not assumed supersymmetry, because SUSY models have so many sources of CP violation that the observed CP violation can be obtained without our scenario. However, our scenario can be applied in SUSY models if we adopt an appropriate SUSY breaking mechanism (for example the Scherk-Schwarz mechanism ) in which the super-particles are heavy enough, the squark masses are degenerate, and so on. In this case, we can say that our scenario extends the parameter space that gives the observed CP violation through the SCPV. We collect the example values of all the scales used here: $`R_2^{-1}\approx 2\text{ keV},R_3^{-1}\approx 10\text{ GeV},S\approx 200\text{ GeV},R_1^{-1}\approx 300\text{ GeV},`$ $`M_{NP}\approx 3\text{ TeV},M_N\approx M_{PQ}\approx M_{*}\approx 10\text{ TeV}.`$ Of course, there are various other possibilities for their values. Here we introduced five new extra dimensions. These must satisfy the relation Eq. (1). In the case of $`n=6`$, which is motivated by superstring theory, the remaining radius is about $`(4\text{ MeV})^{-1}`$ and satisfies the constraint from the gravity experiments.
If we suppose our scenario to be right, we can represent each scale as a function of $`R_1^{-1}`$: $`M_{*}\approx 36R_1^{-1},M_{NP}\approx 1.8\times 10^2(R_1^{-1})^{1/2},R_2^{-1}\approx 2.7\times 10^{-11}(R_1^{-1})^2,`$ $`R_3^{-1}\approx 2.1\times 10^{-3}(R_1^{-1})^{3/2},R_4^{-1}\approx 1.4\times 10^{-5}R_1^{-1},`$ (28) where the unit is GeV, and $`R_4`$ is the remaining radius in the case of $`n=6`$. Of course, the hierarchy among the radii of the extra dimensions, which is assumed here, must be explained for completeness.

Acknowledgements

The author would like to thank N.Sakai for useful advice and N.Maru and T.Matsuda for valuable discussion and information.
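As a sanity check, the sketch below feeds the example scales into Eq. (1) for $`n=6`$ (two dimensions of radius $`R_1`$, one of $`R_2`$, two of $`R_3`$, one remaining radius $`R_4`$) and evaluates the effective Peccei-Quinn scale of Eq. (27). The counting of dimensions per radius is an assumption consistent with the fields introduced above, and the modest spread against the quoted numbers reflects the rounding of the input scales.

```python
# Consistency check of the example scales against Eq. (1),
# M_p^2 = (2 pi)^n M_*^(n+2) R_1 ... R_n, assuming n = 6 splits as
# two R_1 dimensions, one R_2, two R_3 and one remaining R_4.
import math

GeV = 1.0
Mpl = 1.2e19 * GeV                 # Planck mass
Mstar = 1.0e4 * GeV                # M_* ~ 10 TeV
R1 = 1.0 / (300.0 * GeV)           # R_1^-1 ~ 300 GeV
R2 = 1.0 / (2.0e-6 * GeV)          # R_2^-1 ~ 2 keV
R3 = 1.0 / (10.0 * GeV)            # R_3^-1 ~ 10 GeV

n = 6
R4 = Mpl**2 / ((2 * math.pi) ** n * Mstar ** (n + 2) * R1**2 * R2 * R3**2)
print(f"R_4^-1 = {1.0 / R4 * 1e3:.1f} MeV")   # a few MeV, cf. (4 MeV)^-1 above

# The effective Peccei-Quinn scale of Eq. (27):
M_PQ, mu3 = Mstar, 10.0 * GeV
print(f"f_PQ = {(M_PQ / mu3) ** 2 * M_PQ:.1e} GeV")   # ~ 1e10 GeV
```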
BI-TP 99/31, September 1999

# Lattice QCD at Finite Temperature and Density

## 1 Introduction

### 1.1 Preface

There are many reasons to study a complicated interacting field theory like QCD under extreme conditions, e.g. at finite temperature and/or non-vanishing baryon number density. Of course, first of all we learn about the collective behaviour of strongly interacting matter, its critical behaviour, its equation of state and the thermal properties of hadrons as well as their fate at high temperature. However, in doing so we also look at the complicated non-perturbative structure of the QCD vacuum from a different perspective and can learn about the mechanisms behind confinement and chiral symmetry breaking. This allows us to check concepts that have been developed to explain these non-perturbative features of QCD. This review focuses on the former aspect of finite temperature QCD studies; the latter has recently been discussed in . In fact, an important step in the direction of understanding confinement in terms of the dual superconductor picture has been taken recently by the Pisa group . They analysed the scaling behaviour of an order parameter for monopole condensation and could show that it scales in SU(2) and SU(3) gauge theories with the critical exponents expected for the deconfinement transition (Fig. 1). This is a strong hint for the survival of the monopole condensation mechanism as a source for confinement in the continuum limit.

### 1.2 Outline

Significant progress in finite temperature lattice QCD is based on the use of improved actions. In the pure gauge sector we have seen that these actions strongly reduce finite cut-off effects and allow one to calculate thermal quantities on rather coarse lattices in good agreement with continuum extrapolated results. A similar approach is now followed in calculations with light quarks. The use of improved staggered and Wilson actions leads to new estimates of the critical temperature and the equation of state, which we will discuss in the following sections. Moreover, thermodynamic calculations with domain wall fermions (DWF), which can greatly improve the chiral properties of lattice fermion actions, have reached a stage where first quantitative results can be reported on $`T_c`$ and thermal masses. The latter will be discussed in Section 4. In Section 5 we will discuss new developments in the analysis of QCD at finite density. Section 6 contains an outlook. An important part of finite temperature ($`T`$) and density ($`n_B`$) (chemical potential ($`\mu `$)) calculations consists of trying to understand the QCD phase diagram, i.e. the order of the phase transition as a function of the number of flavours ($`n_f`$) and quark masses ($`m_q`$) and its dependence on $`T`$ and $`n_B`$ (or $`\mu `$). We will not discuss these issues here. The finite temperature phase diagram has been discussed extensively in recent reviews and the possibly rather complex phase structure of QCD at high density and low temperatures has been the topic of E. Shuryak’s contribution to this conference .

## 2 The Critical Temperature

One of the basic goals of lattice QCD calculations at finite $`T`$ is to provide quantitative results for the phase transition temperature. In the pure gauge sector this goal has been achieved. The value of the critical temperature for the deconfining phase transition in an $`SU(3)`$ gauge theory is known with small statistical errors. Remaining systematic errors are of the order of 3%. In Table 1 we summarise results for $`T_c/\sqrt{\sigma }`$ obtained with different gauge actions on lattices with varying temporal extent $`N_\tau \equiv 1/(aT)`$ and extrapolated to the continuum limit ($`N_\tau \to \infty `$).
They arise from the calculation of the string tension at $`T=0`$, which is used to set the scale for $`T_c`$. Using for the string tension the value<sup>1</sup> $`\sqrt{\sigma }\approx 425\mathrm{MeV}`$, we find $`T_c\approx 270\mathrm{MeV}`$. (<sup>1</sup>This may be deduced from quenched spectrum calculations ($`m_\rho /\sqrt{\sigma }=1.81(4)`$ ). Recent estimates from CP-PACS data seem to lead to a somewhat smaller value .) This large value can be understood in terms of the particle spectrum of the quenched theory; in the low temperature, confining phase there exist only rather heavy glueballs ($`m_G>1.7`$ GeV). A rather large temperature thus is needed to create a sufficiently dense glueball gas, which can trigger a deconfining transition. Investigations of the critical temperature in QCD with quarks of finite mass have indeed shown that the transition temperature drops rapidly with decreasing quark masses<sup>2</sup>. (<sup>2</sup>At intermediate values of $`m_q`$ the transition only reflects a rapid cross-over in thermodynamic observables rather than a true phase transition. The peak in the chiral susceptibility or the Polyakov-loop susceptibility defines in this case a pseudo-critical temperature. We will continue to call this temperature the transition temperature at finite values of $`m_q`$. For 2-flavour QCD it is expected to be a critical temperature, corresponding to a phase transition, only in the chiral limit. For 3-flavour QCD it is a critical temperature below a certain critical quark mass .) Unlike in the pure gauge theory, the transition temperature for QCD with finite quark masses does seem to be strongly affected by the discretization scheme used for the fermion action. The early calculations with the standard staggered and Wilson actions led to widely different estimates for $`T_c`$. Finite cut-off effects are thus expected to be large, and improvement of the fermion action should be expected to be important. Indeed, the new calculations performed with improved Wilson (Clover) and staggered actions as well as with DWF yield transition temperatures which are in much better agreement among each other. Unfortunately, this statement is, at present, only partially correct. In fact, for comparable values of $`m_q`$, i.e. fixed ratios of pseudo-scalar (“pion”) and vector meson (“rho”) masses, $`m_{PS}/m_V`$, it only holds when we set the scale for $`T_c`$ using a hadron mass, e.g. $`m_V`$. When we follow a similar approach and use the string tension, $`\sqrt{\sigma }`$, to set the scale, the agreement is less evident. In this case calculations with the Wilson fermion action typically lead to transition temperatures about 20% below the results obtained with staggered fermion actions. The current status of the determination of $`T_c`$ for 2-flavour QCD is summarised in Fig. 2. As can be seen, the simulations with Clover fermions performed by different groups are consistent with each other. Results on $`T_c/m_V`$ do not seem to depend significantly on the gauge action (one plaquette Wilson , Symanzik improved or RG-improved ), nor does it seem to be important whether tadpole or non-perturbative Clover coefficients are used. Also the DWF results seem to be insensitive to the gauge action chosen (plaquette or RG-improved) and are consistent with results obtained with the Clover action.
Results obtained with an improved staggered fermion action , the p4-action<sup>3</sup>, also agree with these data within 10%. (<sup>3</sup>The p4-action improves the rotational symmetry of the quark propagator in $`𝒪(p^4)`$. It strongly reduces the cut-off effects in bulk thermodynamic observables ; see Fig. 4.) Unfortunately, this consistent picture is, at present, not reproduced when calculating $`T_c`$ in units of $`\sqrt{\sigma }`$ (right frame of Fig. 2). A possible source for this discrepancy might be the calculation of the heavy quark potential, which in the case of Wilson fermions has so far only been performed on rather small spatial lattices, e.g. $`8^3\times 16`$. This may lead to an overestimate of the string tension. It is, however, also possible that calculations on $`N_\tau =4`$ lattices are still performed at too strong coupling and do not allow for a unique determination of the scale. A hint in this direction may be the large difference in $`T_c/\sqrt{\sigma }`$ observed in calculations on $`N_\tau =4`$ and 6 lattices with the non-perturbatively improved Clover action . Clearly more work is needed here to establish a unique result for $`T_c`$ using different fermion formulations. In general, we note that the transition temperature obtained with improved actions tends to be larger than what has previously been quoted on the basis of calculations performed with the standard staggered action. If the quark mass dependence does not change drastically closer to the chiral limit<sup>4</sup>, the current data suggest $$T_c\approx (170-190)\mathrm{MeV}$$ (1) for 2-flavour QCD in the chiral limit. (<sup>4</sup>Calculations within the framework of quark-meson models suggest a rapid drop of $`T_c`$ for $`m_{PS}<m_\pi `$ .) In fact, this estimate also holds for 3-flavour QCD. Calculations currently performed for $`n_f=2`$ and 3 using the same improved staggered fermion action suggest that the flavour dependence of $`T_c`$ is rather weak (see Fig. 3). It is remarkable that the transition temperature drops significantly already in a region where all hadron masses are quite large. This is apparent from Fig. 3, where we show $`T_c`$ in units of $`\sqrt{\sigma }`$ plotted vs. $`m_{PS}/\sqrt{\sigma }`$ for $`n_f=2`$ and 3. As can be seen, the transition temperature starts deviating from the quenched values for $`m_{PS}<(6-7)\sqrt{\sigma }\approx 2.5\mathrm{GeV}`$. We also note that the dependence of $`T_c`$ on $`m_{PS}/\sqrt{\sigma }`$ is almost linear in the entire mass interval. This might be expected for light quarks in the vicinity of a $`2^{nd}`$ order chiral transition, where the pseudo-critical temperature depends on the mass of the Goldstone particle like $$T_c(m_\pi )-T_c(0)\sim m_\pi ^{2/\beta \delta }.$$ (2) For 2-flavour QCD, where the critical indices are expected to belong to the universality class of 3-d, $`O(4)`$ symmetric spin models, one would indeed expect $`2/\beta \delta \approx 1.1`$. However, this clearly cannot be the origin of the quasi linear behaviour observed for large hadron masses independent<sup>5</sup> of $`n_f`$. (<sup>5</sup>A similar conclusion holds for $`n_f=2`$, when one analyses the unimproved standard staggered fermion data.) A resonance gas model would probably be more appropriate to describe the thermodynamics for these heavy quarks.

## 3 The Equation of State

In the pure gauge sector bulk thermodynamic quantities such as the pressure and energy density have been analysed in detail.
It has been verified that the most significant cut-off effects result from high momentum modes, which dominate the infinite temperature, ideal gas limit. Improved actions that lead to small cut-off effects in the ideal gas limit<sup>6</sup> still do so at finite temperature . (<sup>6</sup>In the ideal gas limit the cut-off dependence can be analysed analytically . For a recent analysis of staggered fermion actions see .) In a recent analysis the CP-PACS collaboration calculated the pressure and energy density using the RG-improved (Iwasaki) action . They confirm that after extrapolation to the continuum limit also this action yields results for the $`SU(3)`$ equation of state which are, within an error of (3-4)%, consistent with the continuum extrapolation obtained from the Wilson , improved as well as some fixed point actions . In fact, the Wilson and RG-improved actions represent extreme cases for such calculations. Their cut-off corrections have opposite sign, which on coarse ($`N_\tau =4`$) lattices leads to deviations of the pressure by about 25% from the continuum extrapolated result. In view of this, the agreement reached with different discretization schemes is rather reassuring. Also in the presence of light quarks the use of an improved action thus seems to be mandatory if one wants to calculate bulk thermodynamic quantities. The standard staggered and Wilson actions are known to lead to large deviations from the continuum ideal gas behaviour on coarse lattices (small $`N_\tau `$) . For staggered fermions, improved actions leading to smaller cut-off effects in thermodynamic observables can be constructed by adding suitably chosen three-link terms to the conventional one-link terms . In Fig. 4 (left frame) we show the cut-off dependence of the ideal gas pressure obtained from these minimally improved staggered fermion actions (Naik and p4 action)<sup>7</sup>. (<sup>7</sup>Staggered actions with even smaller cut-off effects have been constructed . However, they also require a significantly larger numerical effort.) As expected, the systematics seen for the cut-off dependence in the ideal gas limit carries over to finite $`T`$. In the right frame of Fig. 4 we show results for the pressure of 2-flavour QCD obtained with the standard staggered action (and Wilson gauge action) on lattices with temporal extent $`N_\tau =4`$ and 6 . These are compared with results obtained with the p4-action (and a Symanzik improved gauge action) . In the latter case also fat links have been introduced in the one-link terms to improve the flavour symmetry of the staggered action. While the pressure calculated with the standard staggered action rapidly overshoots the ideal gas limit (as expected from the analysis of the cut-off effects in the ideal gas limit), the results obtained with the p4-action stay below the ideal gas limit and show a temperature dependence very similar to what has been found in the pure gauge sector. We note that the calculations with the p4-action have been performed with rather large quark masses, $`m/T=0.4`$, corresponding to $`m_{PS}/m_V\approx 0.7`$, while the calculations with the standard staggered action are for $`m_{PS}/m_V\approx 0.3`$. In the high temperature phase this does not seem to constitute a major problem<sup>8</sup>. (<sup>8</sup>Calculations for 4-flavour QCD with quark masses $`m/T=0.2`$ and $`0.4`$ show no significant quark mass dependence . Moreover, also in the continuum the pressure of an ideal gas of fermions with mass $`m/T=0.4`$ deviates by less than 10% from that of a massless gas.)
Smaller quark masses are, however, definitely needed close to $`T_c`$ and below in order to become sensitive to the contributions from light pion modes. In the case of the SU(3) gauge theory the magnitude of the cut-off dependence in the temperature range up to a few times $`T_c`$ has been found to be about half of what has been calculated for the ideal gas, infinite temperature limit. If this continues to hold for the fermion sector, one should expect that the current result, obtained with the p4-action on $`N_\tau =4`$ lattices, underestimates the continuum result by about 15% for $`T>T_c`$. Based on this consideration, an estimate for the continuum extrapolated pressure is also shown in Fig. 4. We also note that a calculation performed with identical, improved staggered fermion actions shows that the pressure normalised to the continuum ideal gas value has the same temperature dependence in 2 and 3 flavour QCD . Unfortunately, Wilson actions with a similarly good high temperature behaviour have not been constructed so far. The Clover action does not improve the ideal gas behaviour, i.e. it has the same infinite temperature limit as the Wilson action. Nonetheless, a first attempt to calculate the equation of state, using the tadpole-improved Clover action combined with the RG-improved gauge action, has now been undertaken by the CP-PACS collaboration . A preliminary result for the pressure in 2-flavour QCD is shown in Fig. 5. Similar to simulations with the standard staggered fermion action, one observes an overshooting of the ideal gas limit, reflecting the cut-off effects in the unimproved fermion sector.

## 4 Thermal Masses

Understanding the temperature dependence of hadron properties, e.g. their masses and widths, is of central importance for the interpretation of heavy ion experiments. Thermal modifications of the heavy quark potential influence the spectrum of heavy quark bound states. Their experimentally observed suppression thus is expected to be closely linked to the deconfining properties of QCD above $`T_c`$ . Changes in the chiral condensate, on the other hand, influence the light hadron spectrum and may leave experimental signatures, for instance in the enhanced dilepton production observed in heavy ion experiments . In numerical calculations on Euclidean lattices one has access to thermal Green’s functions $`G_H(\tau ,\stackrel{}{r})`$ in fixed quantum number channels, $`H`$, to which, in particular at high temperature, many excited states contribute. As the temporal direction of the Euclidean lattice is rather short at finite temperature, one usually does not have enough information on the correlation function to reliably extract thermal effects on the (pole) masses. A way out may come from the use of anisotropic lattices, which has been explored extensively by the QCDTARO group . In their recent analysis with quenched Wilson fermions they show evidence for a rapid change in the thermal correlation functions across $`T_c`$, which indicates a rapid change in thermal masses above $`T_c`$. At the same time their analysis of pseudo-scalar wavefunctions does, however, show evidence that the correlation among quarks in this quantum number channel remains strong. To what extent this hints at the presence of light “pionic” bound states has to be analysed further. Anisotropic lattices in combination with NRQCD have also been used to analyse thermal effects on heavy quark bound states . Large mass shifts have been observed for the first excited states.
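The difficulty emphasized above, that a short temporal direction leaves too few points to isolate the ground state, can be illustrated with a toy correlator. In the sketch below a single-state cosh form is contaminated by one excited state; the masses, amplitudes and $`N_t`$ are invented for illustration, and the naive effective mass deliberately ignores the backward-propagating piece.

```python
# Toy illustration: on a lattice with N_t sites a single state gives
# G(tau) = A cosh(m (tau - N_t/2)); with excited-state contamination and
# only a few tau values the effective mass shows no clean plateau.
import numpy as np

Nt = 8                                   # short thermal direction
tau = np.arange(Nt)
m0, m1 = 0.5, 1.5                        # toy ground and excited state masses
G = (np.cosh(m0 * (tau - Nt / 2))
     + 0.5 * np.cosh(m1 * (tau - Nt / 2)))

meff = np.log(G[:-1] / G[1:])            # naive effective mass
print(meff[: Nt // 2])                   # ~ [1.44, 1.33, 1.05, ...]; far from m0
```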
The missing information on the long distance behaviour of thermal correlation functions may also be overcome by using refined techniques to analyse the numerical data on thermal correlation functions. It has recently been suggested that the maximum entropy method may help to extract more detailed information on thermal modifications of hadronic spectral functions . So far the most convincing evidence for modifications of hadron properties at high temperature comes from the analysis of the behaviour of correlation functions at large spatial separations, which yields screening masses , as well as thermal susceptibilities, $$\chi _H=\int _0^{1/T}\mathrm{d}\tau \int \mathrm{d}^3r\,G_H(\tau ,\vec{r}),$$ (3) with $`G_H`$ denoting the hadron correlation function in the quantum number channel $`H`$. If there is only a single stable particle of mass $`m_H`$ contributing to $`G_H`$ then $`\chi _H\propto m_H^{2}`$. In general, however, the susceptibilities define only effective masses, i.e. they average over contributions from the ground state and all excited states in a particular quantum number channel. In Fig. 6 we show $`1/\sqrt{\chi _H}`$ for 2-flavour QCD obtained with staggered fermions of mass $`m_q=0.02`$ on lattices of size $`8^3\times 4`$. This figure is based on data from . As can be seen, the $`f_0`$ and $`\pi `$ masses become (almost) degenerate at $`\beta _c`$ while the $`a_0`$ remains heavy. The former behaviour is expected: the $`f_0`$ and $`\pi `$ correlation functions are related through a rotation in $`SU(2)`$ flavour space, so the degeneracy reflects the restoration of the $`SU(2)`$ flavour symmetry. The difference of $`\chi _{a_0}`$ and $`\chi _\pi `$, on the other hand, reflects the persistence of the $`U_A(1)`$ symmetry breaking. A crucial question here is how the gap observed for non-zero $`m_q`$ changes in the chiral limit. At high temperature topologically non-trivial gauge field configurations are expected to be suppressed, which in turn would lead to a strong reduction in the strength of $`U_A(1)`$ symmetry breaking and thus in a strong reduction of the mass splitting between $`a_0`$ and $`\pi `$. For 2-flavour QCD it is expected that the quark mass dependence is quadratic, i.e. $`(m_{a_0}-m_\pi )`$ as well as $`(\chi _\pi -\chi _{a_0})`$ behave like $`A+Bm_q^2`$. Previous investigations of this were, however, not conclusive . If a quadratic ansatz is assumed, calculations with staggered fermions led to the conclusion that a non-zero mass splitting remains also above $`T_c`$, i.e. the $`U_A(1)`$ remains broken . The problem has now been addressed again by the Columbia group using DWF. Due to the improved chiral properties of this action one should find a quadratic dependence on the quark mass. For $`T\simeq 1.2T_c`$ the Columbia group indeed does observe such a quark mass dependence. In the chiral limit they find a non-zero mass splitting from the susceptibilities as well as from the analysis of screening masses , $$m_{a_0}-m_\pi =0.0606(67)+9.66(58)m_q^2.$$ (4) The $`U_A(1)`$ thus remains broken above $`T_c`$, although the mass splitting is strongly reduced (see Fig. 6). This picture is also supported by an analysis of the disconnected parts of flavour singlet correlation functions in quenched QCD using DWF . A vanishing of these would signal a degeneracy between the $`\sigma `$ and $`\eta `$ mesons. The disconnected parts get non-zero contributions only from topologically non-trivial configurations . Indeed, these are strongly suppressed above $`T_c`$.
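As a side note, the chiral extrapolation behind Eq. (4) is simply a linear fit in the variable $`m_q^2`$. The sketch below illustrates it with synthetic data generated around the quoted central values (the numbers mimic Eq. (4) for illustration only; they are not the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
m_q = np.array([0.02, 0.04, 0.06, 0.08])   # hypothetical bare quark masses
split = 0.0606 + 9.66 * m_q**2 + rng.normal(0, 0.002, m_q.size)  # toy m_a0 - m_pi

# fit split = A + B * m_q^2, i.e. a straight line in the variable m_q^2
A, B = np.polynomial.polynomial.polyfit(m_q**2, split, 1)
print(f"chiral-limit splitting A ~ {A:.4f}, slope B ~ {B:.2f}")
```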
At $`T=1.25T_c`$, however, about 10% of these configurations still carry a non-zero topological charge, indicating the persistence of $`U_A(1)`$ symmetry breaking above $`T_c`$. ## 5 Finite Density QCD Finite density calculations in QCD are affected by the well-known sign problem, i.e. the fermion determinant becomes complex for non-zero values of the chemical potential $`\mu `$, which prohibits the use of conventional numerical algorithms. The most detailed studies so far have been performed using the Glasgow algorithm , which is based on a fugacity expansion of the grand canonical partition function at non-zero $`\mu `$, $$Z_{GC}(\mu /T,T,V)=\sum _{B=-\alpha V}^{\alpha V}z^BZ_B(T,V),$$ (5) where $`z=\mathrm{exp}(\mu /T)`$ is the fugacity and $`Z_B`$ are the canonical partition functions for fixed quark number $`B`$; $`\alpha =3,6`$ for one species of staggered or Wilson fermions, respectively. However, this approach has so far not overcome the severe numerical difficulties. It thus seems necessary to approach the finite density problems from another point of view. A reformulation of the original ansatz may lead to a representation of the partition function which, in the ideal case, would require averaging only over configurations with strictly positive weights, or would at least lead to a strong reduction of configurations with negative weights. A big step away from the original formulation is to start from a Hamiltonian approach . Here it has been shown that problems with a fluctuating integrand can successfully be reformulated in terms of a model where the configurations generated do have strictly positive weights (meron cluster algorithm ). Whether such an approach can be applied to QCD remains to be seen. An alternative formulation of finite density QCD is given in terms of canonical rather than grand canonical partition functions , i.e. rather than introducing a non-zero chemical potential through which the number density is controlled, one directly introduces a non-zero baryon number (or quark number $`B`$), from which the baryon number density on lattices of size $`N_\sigma ^3\times N_\tau `$ is obtained as $`n_B/T^3=\frac{B}{3}(N_\tau /N_\sigma )^3`$. After introducing a purely imaginary chemical potential in $`Z_{GC}`$ (Eq. 5), the canonical partition functions can be obtained via a Fourier transform (the use of this ansatz for the calculation of canonical partition functions as expansion coefficients for $`Z_{GC}`$ has been discussed in ), $$Z_B(T,V)=\frac{1}{2\pi }\int _0^{2\pi }\mathrm{d}\varphi \,\mathrm{e}^{-i\varphi B}Z_{GC}(i\varphi ,T,V).$$ (6) This formulation too is by no means easy to use in general, i.e. for QCD with light quarks. In particular, it still suffers from a sign problem. It does, however, lead to a quite natural and useful formulation of the quenched limit of QCD at non-zero density . ### 5.1 Quenched limit of finite density QCD It had been noticed early on that the straightforward replacement of the fermion determinant by a constant does not lead to a meaningful static limit of QCD . In fact, this simple replacement corresponds to the static limit of another theory with an equal number of fermion flavours carrying baryon number $`B`$ and $`-B`$, respectively . This should not be too surprising: when one starts from the full fermion determinant and takes the limit of infinitely heavy quarks, something should be left over from the determinant that represents the objects that carry the baryon number. In the canonical formulation this becomes obvious.
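Before making this explicit, two short numerical asides (our own sketches, with invented inputs). First, the relation $`n_B/T^3=\frac{B}{3}(N_\tau /N_\sigma )^3`$ can be inverted to ask how large a quark number is needed to reach nuclear matter density; the inputs below (n_0 = 0.16 fm^-3, T = 150 MeV, a 16^3 x 4 lattice) are illustrative assumptions, not values from the text:

```python
# quark number B needed for n_B = n_0 on an N_sigma^3 x N_tau lattice,
# from n_B/T^3 = (B/3) * (N_tau/N_sigma)^3
n0_fm3, T_MeV, hbarc = 0.16, 150.0, 197.3   # assumed inputs
T_fm = T_MeV / hbarc                         # temperature in fm^-1
n_over_T3 = n0_fm3 / T_fm**3                 # dimensionless n_B/T^3, ~0.36

N_sigma, N_tau = 16, 4
B = 3 * n_over_T3 * (N_sigma / N_tau)**3
print(f"B ~ {B:.0f} quarks, i.e. ~{B / 3:.0f} baryons in the box")
```

Second, the canonical projection of Eq. (6) is easy to sketch with toy canonical weights; in a real calculation $`Z_{GC}(i\varphi )`$ would come from simulations at imaginary chemical potential:

```python
import numpy as np

B_max = 6
B_vals = np.arange(-B_max, B_max + 1)
Z_B_true = np.exp(-0.5 * B_vals**2)      # hypothetical canonical partition functions

phi = 2 * np.pi * np.arange(256) / 256   # imaginary chemical potential mu/T = i*phi
Z_GC = sum(ZB * np.exp(1j * phi * B) for B, ZB in zip(B_vals, Z_B_true))

# project out Z_B: (1/2pi) * int dphi exp(-i*phi*B) * Z_GC(i*phi)
for B in range(4):
    Z_B = np.mean(np.exp(-1j * phi * B) * Z_GC).real
    print(B, round(Z_B, 6), round(Z_B_true[B + B_max], 6))   # the two columns agree
```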
For $`m_q\to \infty `$ one ends up with a partition function which, for baryon number $`B/3`$, still includes the sum over products of $`B`$ Polyakov loops, i.e. the static quark propagators which carry the baryon number . This limit also has an analogue in the grand canonical formulation, where the coupled limit $`m_q,\mu \to \infty `$ with $`\mathrm{exp}(\mu )/2m_q`$ kept fixed has been performed (this is a well-known limit in statistical physics: when deriving the non-relativistic gas limit from a relativistic gas of particles with mass $`\overline{m}`$, the rest mass is split off from the chemical potential, $`\mu \to \mu _{nr}+\overline{m}`$, in order to cancel the corresponding rest mass term in the particle energies; on the lattice $`\overline{m}=\mathrm{ln}(2m_q)`$ for large bare quark masses). In the confined phase of QCD the baryon number is carried by the rather heavy nucleons. Approximating them by static objects may thus be quite reasonable and we may expect to get valuable insight into the thermodynamics of QCD at non-zero baryon number density already from quenched QCD. From a numerical point of view there is hope that we can get control over this limit using different approaches. At non-zero $`\mu `$ a variant of the Glasgow approach seems to become applicable for large quark masses, and the static limit of Blum et al. may also be explored further . In the canonical approach simulations at non-zero $`B`$ can be performed on relatively large lattices and the use of baryon number densities up to a few times nuclear matter density may be possible . The simulations performed so far show the basic features expected at non-zero density. As can be seen from the behaviour of the Polyakov loop expectation value shown in Fig. 7, the transition region gets shifted to smaller temperatures (smaller coupling $`\beta `$). The broadening of the transition region may suggest a smooth crossover behaviour at non-zero density. However, in a canonical simulation it may also indicate the presence of a region of coexisting phases and would thus signal the existence of a first order phase transition. This deserves further analysis. Even more interesting is the behaviour of the heavy quark potential in the low temperature phase. As shown in the right frame of Fig. 7, the potential does get screened at non-vanishing number density. This will have a direct influence on heavy quark bound states at high density. ## 6 Outlook We have focused in this review on the calculation of basic thermodynamic quantities which are of immediate interest to experimental searches for the Quark Gluon Plasma. In quenched QCD the critical temperature and the equation of state have been calculated on the lattice and extrapolated to the continuum with an accuracy of a few percent. These calculations set a benchmark for many analytical studies of QCD thermodynamics . The progress made in developing and testing improved fermion actions for thermodynamic calculations shows that a similar accuracy for QCD with light quarks is within reach. The current systematic studies with different improved fermion actions may soon lead to a determination of the transition temperature and the equation of state with similar accuracy. We have reached some understanding of thermal effects on hadron properties. In particular, modifications of the light meson spectrum due to flavour and approximate $`U_A(1)`$ symmetry restoration have been established.
However, lattice calculations have so far not produced detailed quantitative results on thermal masses that could be confronted with experimental data. There are a few promising ansätze which can lead to more detailed information on thermal modifications of hadronic spectral densities. Of course, there are many more important issues which have to be addressed in the future. Even at vanishing baryon number density we do not yet have a satisfactory understanding of the critical behaviour of 2-flavour QCD in the chiral limit, and the physically realized situation of QCD with two light, nearly massless quarks and a heavier strange quark has barely been analysed. Moreover, the entire phase diagram at non-zero baryon number density is largely unexplored. An interesting phase structure is predicted at high density and low temperatures which is currently not accessible to lattice calculations . This requires new algorithmic developments. There are thus many interesting questions waiting to be answered in the next millennium. Acknowledgements: I would like to thank the CCP at the University of Tsukuba for its kind hospitality during the time this talk has been prepared and written up. I also want to thank N. Christ, S. Ejiri, M. Okamoto, D.K. Sinclair, I.O. Stamatescu, D. Toussaint, and U.-J. Wiese for communication on their results and K. Kanaya and A. Ukawa for comments on the manuscript.
# Multiple object and integral field near-infrared spectroscopy using fibers ## 1 Introduction SMIRFS is an instrument to explore new techniques for multiobject spectroscopy (MOS) and integral field spectroscopy (IFS) at near-infrared wavelengths (1-2.5$`\mu `$m). It was developed at low cost to provide the UK Infrared Telescope (UKIRT) with a simple capability in these areas and to develop the techniques needed to build larger-scale instruments for 4-m and 8-m telescopes. As such, the instrument was designed to a modest specification with the aim of building it quickly and obtaining results of use to other projects. The MOS and IFS modes of SMIRFS are referred to as SMIRFS-MOS and SMIRFS-IFS respectively. In Section 2, we describe the basic SMIRFS system consisting of the optical relay from the fiber slit to the cryogenic slit inside CGS4 and the multi-fiber system of the MOS mode. In Section 3, we describe the performance of this mode and present an example dataset obtained during commissioning. The IFS mode is described in Section 4. Its performance is presented in Section 5, which includes a comparison with theoretical expectations. In Section 6, we give an example of a dataset obtained during commissioning which also serves to illustrate the operation of an integral field spectrograph. In Section 7, we state our conclusions and discuss the relevance of this work to other IFS systems under construction. We start by discussing the motivation for this work. ### 1.1 Multiobject spectroscopy in the infrared To date, the technique of multiple object spectroscopy (MOS) has been almost exclusively employed at visible wavelengths. Although multi-slit spectroscopy (e.g. LDSS-2, Allington-Smith et al. 1994, and GMOS, Murowinski et al. 1998) provides the best background subtraction, because contiguous regions of sky within the same slit are sampled, this comes at the expense of truncation of spectra near the edge of the field and some problems in addressing real target distributions. In contrast, multi-fiber systems (e.g. Autofib-2/WYFFOS, Parry et al. 1994, and 2dF, Taylor 1997) avoid these problems because the spectrum layout on the detector is independent of the field. However, the accuracy of background subtraction is generally worse, leading to limiting magnitudes which are 1-2 magnitudes brighter (see also Cuby & Mignoli 1994), and there are problems in addressing dense target distributions due to fiber collision restrictions. The argument for multiobject spectroscopy is just as strong in the infrared as in the visible, especially since the spectral energy distributions of field galaxies increasingly redshift to longer wavelengths, so that the most useful spectral features appear in the near infrared. As an example of the surface densities to be encountered, the field galaxy population contains $`10^4`$ objects/deg<sup>2</sup> at $`K=20`$, which gives a multiplex gain of 10 in fields as small as 2 arcmin. Extending multiobject techniques into the infrared is not straightforward. The difficulty of building a reconfigurable multiobject capability into a cryogenic instrument has so far ruled out signal/noise-optimized MOS at longer wavelengths ($`>1.8\mu `$m), where it is necessary to encapsulate the optical system in a cryostat, although various systems are under development (e.g. EMIR for Gran Telescopio Canarias and the GIRMOS concept for GEMINI).
However, at shorter wavelengths, where the instrumental thermal background is not a problem (1-1.8$`\mu `$m), uncooled multi-fiber and multi-slit methods may be used (Herbst et al. 1995). Another issue is that current infrared instruments employ smaller detectors than in the visible. As we have seen, this poses some difficulties for multislit systems since they do not make optimum use of the detector surface, whereas fibers can use this area more efficiently. This aspect makes it easier for multi-fiber systems to reach the high spectral resolutions required to reject the strong atmospheric OH emission lines which form most of the background in the J and H bands. Consequently we decided to explore the use of a fiber-based MOS system in the near-IR up to the K-band. Since this wavelength range extends into the region where some drop in the performance of fused silica (FS) fibers may be expected ($`>2\mu `$m), we decided to provide an alternative zirconium fluoride (ZF) fiber system. The use of ZF was also motivated by the possibility that fiber systems could be used in cryogenic instruments, in which case fluoride fibers would definitely be required at the longer wavelengths which would then be accessible (up to 5$`\mu `$m). For reasons of cost, it was decided to use this prototype system with an existing infrared spectrograph: CGS4 (Ramsay-Howatt 1994) on UKIRT. This is a cryogenic instrument, so a means had to be found to inject the reformatted light into it without rebuilding the cryostat. This was done using an optical relay between the uncooled fiber slit and the cold spectrograph slit which fits into the space normally occupied by CGS4’s calibration unit. The system was commissioned in June 1995 and December 1996. The first run was affected by very poor conditions. The second run had slightly better luck and incorporated an optional output mask to cut down thermal background from the fiber slit. Thereafter the system was adapted for integral field spectroscopy as described below. ### 1.2 Integral field spectroscopy in the infrared Integral field spectroscopy (IFS) produces a spectrum from each part of a two-dimensional field (e.g. Bacon et al. 1995). In contrast, long-slit spectroscopy is limited to a one-dimensional field whose width is determined by the need to obtain good spectral resolution. IFS avoids this restriction by decoupling the slit width from the field shape by reformatting a rectangular field into a linear pseudo-slit. A further advantage is that precise target acquisition is not required since the object does not need to be carefully placed on a narrow slit. If desired, the acquisition can be checked by reconstructing a white-light image of the object by summing the two-dimensional spectrogram over wavelength. Even when observing unresolved objects in poor seeing, the system acts as an image slicer to eliminate slit losses. The scientific motivation for integral field spectroscopy is summarized in Allington-Smith & Content (1998; hereafter AC), which also includes a discussion of sampling and background subtraction issues relevant to fiber-lenslet integral field spectrographs and describes the basic techniques. The fiber-lenslet technique used in SMIRFS ensures that the field is contiguous, with unit filling factor, whilst maximising throughput by optimal coupling with both the telescope and spectrograph.
Particular applications for IFS in the infrared include studies of the obscured nuclear regions of active galaxies, the optical-radio co-alignment of distant radio galaxies, and studies of shocks in star forming regions. An example given in this paper relates to active galaxies via imaging of diagnostic emission lines indicative of star formation and non-thermal emission. Many of the key diagnostic spectral features for star-forming regions appear in the infrared, and familiar ionic emission lines in distant galaxies are redshifted into the infrared. As discussed by AC, the fiber-lenslet technique provides significant benefits over lenslet-only systems (e.g. TIGER, Bacon et al. 1995) in terms of the efficiency with which the detector surface is addressed and the length of spectrum which can be obtained without overlaps between spatial elements. A lenslet-only system for UKIRT, where the available detector format was initially only $`256\times 72`$ pixels, would involve significant compromise in field and spectrum length. The fiber-lenslet approach also has advantages over fiber-only systems. Firstly, it provides much better coupling to the telescope and spectrograph. Without this, the very slow beam from the telescope (f/36) would result in very low throughput. Secondly, it provides unit filling factor, whereas a fiber-only system wastes the light which strikes the cladding (and buffer, if present) between fiber cores. Therefore, the fiber-lenslet approach is the best for implementing an IFS capability on UKIRT. This approach has also been adopted by us for the Thousand-element integral field unit (TEIFU; Haynes et al. 1998), for the IFS capabilities of the VLT VIMOS (Lefevre et al. 1998) and for the GEMINI Multiobject Spectrographs (GMOS, Allington-Smith et al. 1997). Other systems using this technique are SPIRAL and COHSI (Kenworthy et al. 1998). For these reasons, we decided to adapt the SMIRFS-MOS system to a fiber-lenslet integral field unit (IFU). This system, which re-uses the SMIRFS-MOS infrastructure, allows us to prove technology to be applied to TEIFU (now successfully commissioned) and the GMOS IFU. The results in this paper refer to two observing runs with the SMIRFS-IFU system. One, in June 1997 for initial technical commissioning, was accompanied by very poor weather; the other, in March 1998, was more successful and allowed spectral-line mapping of active galaxies (Turner et al. in preparation, Chapman et al. in preparation). The first run used CGS4 with its short camera, giving a sampling of one detector pixel per spatial element, while the second run used the long camera with a sampling of 2 pixels per spatial element. ## 2 The SMIRFS-MOS system The system is described in detail by Haynes (1995). Fig. 7 shows the layout of SMIRFS and its coupling to the telescope and CGS4. SMIRFS has been designed to mount onto the West Port of UKIRT, which is reserved for visiting instruments. There are essentially four parts to the system: the field plate unit, the guide fiber unit, the fiber bundle and the slit projection unit. Despite the name, the guide fiber unit is not usually used for guiding, but is mainly used for field acquisition and checking the field plate orientation. ### 2.1 Field plate unit The field plate unit’s function is to hold the field plate, and thereby the input of the fiber bundle, at the focal plane of the telescope. The focus of UKIRT is 196mm from the West face of the instrument support unit.
The field plate contains pre-drilled holes which correspond to the positions of the objects to be observed as well as extra holes to hold the acquisition fibers and dedicated sky fibers, if required. A different field plate is needed for each field. The fibers are fixed in small brass ferrules which are held in the field plate by a lock nut (Fig. 7). The field may be adjusted in rotation to correct for any misalignment between the field plate and the object field. The correction is performed once during the instrument set-up and does not normally require changing when a different field plate is installed. However, as a quick check, the guide fibers can be used to ensure the orientation is correct. The field plate unit also has a plate tensioner that pulls on the centre of the field plate to distort it to approximate a spherical surface with a radius of 11.5m, ensuring that all the fibers in the field are pointing correctly at the exit pupil of the telescope (the secondary mirror). Any error in the global fiber pointing can be corrected using the UKIRT dichroic mirror, which directs the infrared light to the required port while letting the visible light through to the cross-head acquisition and guide camera. The available field is 4 arcmin across and the minimum fiber to fiber spacing is approximately 18 arcsec. Fibers may not be deployed within the central 14 arcsec, where the tensioner is attached. ### 2.2 Guide fiber unit This unit consists of three fiber bundles. Each is a coherent bundle containing 7 fibers: one central fiber closely surrounded by a ring of 6. These are coupled to a camera via two magnifying lenses. This enables the operator to determine the centroid of up to three stars in the field and from that make corrections to the telescope pointing and field plate rotation. Originally a SCANCO intensified camera was used, but this was not sufficiently sensitive to the small amount of predominantly red light that was reflected by the dichroic. It was later replaced with a CCD, which considerably improved the sensitivity. However, because the guide fiber unit uses the visible light that is reflected by the dichroic, a correction for atmospheric refraction may be required at large zenith distance. The diameter of a single guide fiber bundle corresponds to approximately 2.7 arcsec on the sky. ### 2.3 Multi-object fiber bundles In MOS mode, SMIRFS has two fiber bundles, each containing 14 fibers: a zirconium fluoride (ZF) system for use in the K band and an ultra-low OH fused silica (FS) system for the J and H bands. The throughput of the different fiber types is shown in Fig 7. A small CaF<sub>2</sub> lenslet (Fig. 7) is used at the input and output of each fiber to couple the telescope beam (f/36) into the fiber at f/5 and then back to f/36 at the output for coupling to the spectrograph. This reduces losses due to focal ratio degradation (FRD) within the fibers (FRD is a non-conservation of étendue which results in the output beam being faster than the input beam; it is equivalent to an increase in entropy and so should be avoided). The input lenslet (diameter 3mm, focal length 7.6mm) re-images the telescope exit pupil onto the face of the fiber core (200$`\mu `$m diameter). The pupil size was chosen to almost completely fill the fiber core so as to reduce the amount of light contamination from the sky around the secondary mirror, which is unbaffled at UKIRT. This also reduces thermal contamination from any telescope structure around the top end.
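The quoted numbers can be sanity-checked with a little geometry. The sketch below is our own illustration; the UKIRT primary diameter of 3.8 m is an assumption, not a value given in the text:

```python
# pupil image size and focal-plane scale for an f/36 telescope feed
f_ratio = 36.0        # telescope focal ratio at the fiber input (from the text)
f_lens_mm = 7.6       # input lenslet focal length (from the text)
D_tel_mm = 3800.0     # UKIRT primary diameter (assumed)

pupil_um = f_lens_mm / f_ratio * 1000.0   # image of the exit pupil on the fiber face
scale = 206265.0 / (f_ratio * D_tel_mm)   # arcsec per mm in the focal plane
print(f"pupil image ~ {pupil_um:.0f} um (fiber core is 200 um)")
print(f"plate scale ~ {scale:.2f} arcsec/mm, so 4 mm ~ {4 * scale:.1f} arcsec")
```

The ~211 µm pupil image is comparable to the 200 µm core, consistent with the pupil almost filling the fiber, and 4 mm at this plate scale corresponds to roughly 6 arcsec, close to the fiber slit spacing quoted in the next subsection.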
At the bundle output, the fibers are re-formatted into a long slit. Each fiber is coupled to the CGS4 spectrograph via another lenslet which is identical to the lenslet at the fiber input. The centre to centre spacing along the fiber slit is 4mm, which corresponds to 6.2 arcsec at the detector. This corresponds to 5 pixels with the short camera (focal length 150mm) and 10 with the long camera (focal length 300mm). The fiber output spacing was constrained primarily by the number of detector pixels that are available along the CGS4 slit and the desire to maximize the multiplex gain. SMIRFS was originally designed for use with the 256$`\times `$256 array and the short camera, which results in only 72 pixels along the slit. The lenslets limit the aperture viewed by the fiber to 4.5 arcsec. However, this can be reduced to 2 arcsec in good seeing conditions by the use of a field stop mounted on the front of the fiber ferrule. This is not advisable for K band spectroscopy, since the stop is not cooled and will therefore contribute significantly to the thermal background. The thermal background is kept to a minimum by using the CGS4 spectrograph’s cooled Lyot stop to mask out any contamination from beam angles faster than f/36 and by imaging the fiber slit onto the cooled CGS4 long slit. After the first telescope run it was found that there was a significant amount of thermal background from the slit material between the fibers. The slit was then modified by placing a reflective mask just in front of the fiber slit. This contained 14 holes to allow the light from the fibers to pass through and was positioned at such an angle that only light which had originated in CGS4 was reflected back into it; since CGS4 is cooled, this produces very little thermal background. Each hole acts as a field stop limiting the effective aperture to ~2.5 arcsec on the sky. However, the aperture is slightly blurred by FRD effects, so some light will come from outside a diameter of 2.5 arcsec. The slit mask considerably reduced the thermal contamination from the slit material, and is discussed in the next section. ### 2.4 Slit projection unit As the spectrograph is a cryogenic instrument, it was impractical to replace the CGS4 long slit with the SMIRFS fiber slit. It was therefore necessary to project the image of the fiber slit through the cryostat window into CGS4. This is achieved using a spherical re-imaging mirror and a plane fold mirror. In order to imitate the flat slit of CGS4 it was necessary to curve the fiber slit to match the field curvature of the re-imaging mirror, thus producing an image magnification of unity. Both mirrors are mounted on a tip/tilt and translation stage to facilitate alignment of the fiber slit with the CGS4 optical axis. ## 3 SMIRFS-MOS performance ### 3.1 Efficiency The first commissioning run took place in June 1995 (Haynes 1995, Haynes et al. 1995) using the short CGS4 camera and a 75 lines/mm grating. The run was badly affected by poor weather, but the provisional results obtained led to a number of modifications, the most significant of which was the reflective mask for the ZF fiber bundle to reduce the thermal background in the K band. The second run in December 1996 used the same spectrograph configuration as June 95. The weather was partially clear but unsuitable for photometric studies much of the time. During the remainder of the time, photometric variation was around 10%, which limits the accuracy of the results presented.
The purpose of the run was primarily to observe a number of K and M giant stars in both open and globular clusters, to establish a new method of metal abundance determination for cool stellar populations and to demonstrate the potential of SMIRFS and future infrared fiber systems. A summary of the throughput performance of SMIRFS (CGS4 short camera and grating) is given in Table 1. This gives the efficiency of SMIRFS alone, obtained from a comparison of the count rates with the SMIRFS-MOS installed and with it removed from CGS4 so that light enters the spectrograph directly through the slit. The predicted values are estimates that take into account the average fiber transmission, reflection losses and optical alignment errors (see Haynes 1995 for further details). The FS fiber throughput agrees reasonably well with the predictions, especially in the H and K bands. The results for the ZF bundle without the output mask agree well with the predictions, but the results with the output mask installed are significantly lower. This may be due to vignetting by the mask, resulting from mis-alignment of the star on the fiber input, or vignetting by the CGS4 slit due to flexure (Kerr 1997). The problem of flexure within CGS4 became more apparent during the SMIRFS IFU run discussed later. The fiber to fiber throughput variations are ~10% RMS (Fig. 7) and are dominated by errors in the alignment between the fiber inputs and the telescope pupil. ### 3.2 Image quality The PSF of the system can be demonstrated from a comparison of an observation of a standard star with SMIRFS and with CGS4 alone using just a long slit (Fig. 7). Gaussian fits to the cores of the two profiles indicate $`\sigma =0.56`$ and $`0.64`$ pixels with and without SMIRFS respectively. Although this comparison does not take the seeing into account, it suggests that any degradation in spatial resolution by SMIRFS is small and that the wings of the distribution arise from within CGS4. This is not surprising, since the radial distribution of light in the object should be preserved at the slit (because the fibers preserve the radial angular distribution of light), except for the effect of FRD. This suggests that FRD is not a significant problem. No obvious sign of truncation of the PSF due to the effect of the output mask is seen: the results are similar for the FS (unmasked) and ZF fibers (masked and unmasked). ### 3.3 Background removal Although background subtraction, by e.g. beam-switching, will remove the background whether it arises from the instrument or the sky, the signal/noise is degraded by the photon noise from the background. Therefore it is important to minimize the sources of background before they are recorded by the detector. Although the poor conditions during the June 1995 run mean that it is not possible to make a quantitative comparison of the thermal background of SMIRFS with and without the output slit mask, a large reduction in the inter-fiber background for the K band was noted after the output mask was installed. When observing sky, the signal from the inter-fiber area was less than the signal from the fiber across the whole of the K band, whereas previously it was often higher. This is shown in Fig. 7, where the background count rate using the mask is plotted separately as the component from the fiber alone and from the inter-fiber gap. This excludes thermal background from the sky and telescope but includes any contribution from CGS4 since this could not be determined independently.
For this reason the count rates are upper limits. Before the output mask was installed, a comparison of data taken with and without SMIRFS showed that the thermal background between 2.2 and 2.5$`\mu `$m was only ~3 times larger for SMIRFS than for CGS4 alone (Haynes 1995). Although we were unable to make a direct comparison after the mask was installed, we believe that the thermal background due to SMIRFS is significantly less than implied by the figures given above. ### 3.4 Examples of data An example of the K band spectra from giant stars in a globular cluster field (NGC1904) is shown in Figs 7 and 7. Beam switching was used for sky subtraction, by means of a telescope nod, typically a few arcmin. The observing sequence was 12 exposures of 5 seconds with $`2\times 2`$ sampling (4 minutes) in a series of on-off-off-on target positions. This was repeated 8 times to give a total on-target time of 64 minutes. The data have been wavelength calibrated with an argon arc spectrum and then co-added. The typical NaI (2.204$`\mu `$m) and weak CaI (2.258$`\mu `$m) absorption, plus the CO bands at redder wavelengths (Terndrup et al. 1991), are visible in the brightest of the objects. It is also encouraging that the fibers dedicated to sky indicate negligible sky subtraction errors. Table 2 contains a list of the objects, their magnitudes and broad-band colors in the top-bottom order in which they appear in Fig. 7. ## 4 The SMIRFS-IFS system ### 4.1 The principle of fiber-lenslet IFS systems As described in Section 1.2, fiber-lenslet systems have advantages over both lenslet-only systems, in terms of the efficiency with which the detector surface is used (leading to an increase in field of view for fixed sampling increment and detector format), and fiber-only systems, in terms of efficiency and filling factor. The basic principle of the SMIRFS integral field unit (IFU) is shown in Fig. 7. An image of the sky is formed on the input lenslet array. This forms images of the telescope pupil on the cores of a matching array of fibers bonded to the unfigured side of the lenslet array. This performs two functions: firstly, the beam is made faster to reduce the effect of FRD (Section 2.3) and, secondly, all the light falling on the lenslet aperture is captured by the fiber, leading to almost unit filling factor. It is very important that as much light as possible is directed into the fibers, which requires careful control of positional errors between the lenslets and fibers. In a visible-light system the fiber cores could be oversized, but this is not desirable in the infrared because it would inject background light into the fibers. For this reason, the core size is well-matched to the telescope pupil. The fibers are reformatted into a pseudoslit which consists of a line of fibers bonded to a linear lenslet array. The output of each fiber is a scrambled image of the telescope pupil, from which each lenslet forms a scrambled sky image while ensuring that the beam is well-matched to CGS4. The resulting line of images, which forms the actual pseudoslit, is then re-imaged onto the cold slit of CGS4 inside its cryostat in the same way as for the SMIRFS-MOS system. Overlaps in the distribution of light between elements at the slit are permissible since this only results in a small degradation in spatial resolution in the slit direction (AC). But this requires that the mapping between the IFU input and output is such that objects which are adjacent on the sky are also adjacent at the slit.
This condition is satisfied by the ‘snakewise’ mapping shown in Fig 7 (see also the short sketch below). To completely avoid overlaps would require a separation between the IFU outputs at the slit which would drastically reduce the number of spatial elements. ### 4.2 Description of the SMIRFS IFU Full details can be found in Lee (1998). Table 3 provides a summary. The SMIRFS-IFU reformats a rectangular field of 6$`\times `$4 arcsec onto the CGS4 slit. The field is sampled by 72 hexagonal elements of size 0.6 arcsec. A spectrum is produced from each element when dispersed by the spectrograph. The number of elements is determined by the slit length of CGS4. With the short camera the sampling is 1 pixel/element, but 2 pixels/element if the long camera is used. The sampling scale was a balance between the desire to exploit the good images available from the UKIRT tip-tilt secondary mirror and the need to cover a useful field of view, given the limited number of elements. The system consists of a field plate unit which supports the input of the IFU at the telescope focal plane, the lensed fiber bundle and a slit projection unit that projects the IFU slit onto the CGS4 slit. The slit projection unit and field plate unit are part of the SMIRFS-MOS system. The system is uncooled since it is not possible to package the system within the CGS4 cryostat. For this reason, the system is optimized for the J and H bands, although some useful performance may be expected in the K band, albeit with some instrumental thermal background. The field plate unit is mounted on a port of the Instrument Selection Unit at 90° azimuth to CGS4. When the dichroic is correctly positioned, infrared light from the sky is focussed onto the IFU input. Visible light continues through the dichroic and forms an image on the cross-head camera, which is used for acquisition and guidance. The IFU input (Fig 7) consists of a microlens array which forms images of the telescope pupil on the fiber cores. The fibers (Table 4) are individually located in an array of microtubes (Fig. 7) to an accuracy of 7$`\mu `$m RMS. The overall RMS positional accuracy of the fiber cores with respect to the lenslets is ~10$`\mu `$m. This includes the non-concentricity of the fibers with respect to their outer diameter, the non-concentricity of the fibers with the microtubes and positional errors within the microlens array. A small chimney baffle reduces background contamination by preventing light from entering the fibers at large angles. The fibers are grouped together in a conduit and led into the slit plate unit, which is installed in place of the CGS4 calibration unit. The fibers are terminated at 12 slit blocks where the fibers are interfaced to linear microlens arrays to form the slit (Figs 7 and 7). The fibers were found to be positioned with an accuracy of 7.5$`\mu `$m RMS, with the largest error being 23$`\mu `$m. The fibers are attached to the flat end of the output microlens arrays (Fig. 7), which focus parallel rays emerging from the fibers to form a scrambled image of the sky within each of the subapertures at the location of the IFU slit. Each block is angled to ensure that the principal rays arrive at CGS4 with the same angle as when beam-fed directly from the telescope. The lenslet arrays were made by Adaptive Optics Associates and consist of convex lens surfaces replicated in epoxy on a fused silica substrate. Tests of the lenslets show good position accuracy within the array.
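As an aside, the ‘snakewise’ (boustrophedon) ordering mentioned at the start of this subsection is simple to write down. The sketch below is illustrative only; the grid dimensions are placeholders, and the real hexagonally-packed field differs in detail:

```python
def snakewise(rows, cols):
    """Order a rows x cols grid of field elements along the slit so that
    elements adjacent on the sky remain adjacent in the slit direction."""
    order = []
    for r in range(rows):
        line = list(range(r * cols, (r + 1) * cols))
        order.extend(line if r % 2 == 0 else line[::-1])
    return order

# e.g. 72 elements arranged 6 x 12 -> one continuous 72-element pseudoslit
print(snakewise(6, 12)[:14])   # [0..11, 23, 22]: the second row runs backwards
```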
Measurements of the PSF for both input and output arrays show a well-concentrated diffraction-limited core with extended wings (see Section 6.4.3 of Lee 1998 and Lee et al. 1998). The wings are believed to arise from small defects in the lens surfaces. Table 5 shows detailed predictions from a numerical model of the IFU efficiency, using realistic models of the loss mechanisms based on measurements of the lenslet PSF and FRD, taking diffraction into account and assuming that the scattering losses are proportional to the inverse square of wavelength (Nussbaum et al. 1997). This suggests that the dominant loss mechanism (~20%) is the mismatch between the fiber core and the imperfect pupil image produced at the fiber input. Recall that the pupil image was not undersized with respect to the fiber core because of the need to baffle light from outside the telescope exit pupil. The throughputs of the epoxy used in the lenslet fabrication and the adhesive used for bonding to the fiber array are shown in Fig 7. Note that the lenslet arrays are not antireflection coated, as the epoxy surface is not suitable for this process. The fiber slit is reimaged onto the cold CGS4 slit using a spherical mirror and a flat mirror mounted in the slit plate unit. Adjusting these mirrors sets the magnification, offset and focus of the image of the IFU slit. CGS4 is used with a 2-pixel wide slit to cut down scattered light while not vignetting the IFU slit, whose elements each project onto a circle of diameter ~1 pixel. In the construction, much care was devoted to the following critical areas: 1. Random lenslet-fiber misalignments. As discussed above, these are such that the misalignment at input and output is no greater than 10$`\mu `$m RMS. 2. The fibers were polished as complete units within their respective holders to ensure a flat surface for mating to the lenslet arrays. An additional advantage of using lenslets is that any residual roughness in the fiber ends is filled in by the optical adhesive. 3. FRD. This was measured for the completed fiber bundle before the lenslet arrays were bonded and found to be such that a parallel beam at the input produced a cone of light corresponding to ~f/10 at the output. This was achieved by careful adjustment of the fiber bundle conduit to relieve stress on the fibers. 4. Alignment of the lenslets with the fibers. This is a complex procedure which eliminates not only global shifts and rotation of the lenslet arrays with respect to the fibers, but also the non-telecentricity of the telescope feed to the spectrograph, to ensure correct pupil alignment. It was estimated that the angular misalignment about the optical axis was $`<0.3^{\circ }`$ and the positional accuracy was $`<20\mu `$m. Any global shift is compensated for during alignment at the telescope. ## 5 On-telescope performance ### 5.1 Alignment and flexure At the telescope, the following alignments were carried out. 1. The IFU input was aligned so that the telescope exit pupil (the secondary mirror) was aligned with the fiber cores. This was done by back-projecting light from the fiber slit onto the telescope secondary mirror and adjusting the angle of the dichroic which diverts infrared light to the IFU. 2. The relay-optics were adjusted to give unit magnification and to eliminate angular misalignment between the fiber slit and cold slit. This was achieved by masking the input lenslet array to produce a diagnostic pattern at the slit which was recorded by the science detector. 3.
Flexure tests were done to examine the structural stability of the IFU plus spectrograph. Some flexure was found, amounting to a maximum of 1.1 pixels (Lee 1998). However, this is believed to be caused by flexure within CGS4, since it is similar to recent measurements made on CGS4 alone (Kerr 1997). ### 5.2 Efficiency The efficiency of the IFU was measured by comparing observations of standard stars taken with CGS4 alone and with the IFU plus CGS4 with a wide slit. The results are summarized in Table 5. This excludes the efficiency of CGS4, its detector, the telescope and atmosphere. Considering that this instrument is a retrofit to a spectrograph not designed for IFS, the relative throughput of ~50% is very good. One uncertainty in this estimate is whether all the light from the star passed through the CGS4 slit when used without the IFU. This can be estimated from the spatial profile along the slit on the assumption that the profile in the perpendicular direction is the same. Unfortunately, it is not known if the extended wings of the spatial profile (see Fig. 7 for an example of an observation with CGS4 alone) arise from within the spectrograph (as suggested in Section 3.2) or before the slit. If the latter is true, the measured throughput values should be multiplied by 0.92. The theoretical efficiency prediction in the table includes all known sources of error, including misalignment errors, FRD and lenslet scattering, based on laboratory measurements. ### 5.3 Uniformity and background subtraction An important aspect of fiber-lenslet IFUs is the uniformity of response over the extent of the field. Large variations in response have the potential to complicate data reduction and, even if they can be removed by flatfield calibration, will compromise the final signal/noise since results will be degraded by regions of low response. The uniformity of response is indicated in Fig. 7. There is significant variation, with an RMS of 16% of the mean. This includes one fiber which was broken during manufacture and others where it was noted during manufacture that the fiber was significantly displaced from its correct position. Nevertheless, these variations can be removed using standard flatfield calibration to within the limits imposed by photon statistics (Lee 1998). Since there is no field dedicated to background subtraction, because of the limited slit length of CGS4, the technique of beam-switching is used. This is the same technique routinely used for CGS4 longslit observations. The telescope is nodded between two positions on (A) and off (B) the target in the sequence ABBA, which is repeated as often as required. If the exposure time per pointing is short enough, the background will be accurately subtracted. The temporal power spectrum of background variations is a matter of some controversy and varies from night to night and site to site. Our data show background residuals consistent with the results expected for CGS4 alone. There is no reason to believe that the IFU is more susceptible to these problems than any other instrument using this technique. However, ideally the IFU should be equipped with a dedicated field for background subtraction so that background estimates can be obtained contemporaneously (AC). ### 5.4 Point spread function Here we assess the point-spread function (PSF) due to single elements of the IFU as a test of the quality of the system. The actual PSF observed will depend on the seeing and the method of reducing the IFS data.
It is important to distinguish the PSF measured in the slit direction and dispersion direction in the raw data from the spatial PSF measured in orthogonal directions in the reconstructed image of the field. The PSF in the spectral direction was measured from observations of wavelength calibration sources. CGS4 has the option to dither the detector (moving the detector by sub-pixel amounts before re-combining into a single frame in software), which allows the PSF to be properly sampled. In the J-band, the recorded line FWHM was equivalent to using a 1.1-pixel wide slit. In the H-band the effective IFU slit width is 1.2 pixels. With the IFU, CGS4 is normally used with a 2-pixel wide slit to ensure that a large fraction of the light from the fiber slit passes into the spectrograph. In fact, observations with CGS4 with a 1-pixel wide slit give FWHM$`=1.15\pm 0.05`$ pixels, which implies that use of the IFU does not degrade the spectral resolution. However, the spectral PSF does vary slightly from fiber to fiber, in the sense that fibers with lower efficiency also produce broader profiles (by up to ~20%). This may be because these fibers are affected by FRD, in which light exits the fiber core at larger angles than usual, leading to a broadened PSF at the slit. This would produce both a broadened image on the detector and lower efficiency, since light emerging from the fibers at large angles would be vignetted by the output microlenses. Checks were also done to see if the images at the slit were displaced in the dispersion direction. This would complicate the interpretation of data because of overlaps between images from adjacent fibers. This test indicates that, while there was a small non-linear distortion term amounting to ~0.1 pixel from a linear fit, no fiber output was displaced by more than 80$`\mu `$m from the average position. Finally, it should be noted that the reconstructed image profiles are slightly broadened in the direction which corresponds to the slit direction because of the overlap of images at the slit. As discussed by AC, this results in a small degradation in spatial resolution in this direction but has little effect on the spatial resolution in the orthogonal direction. For the image shown in Fig. 7, which is mostly unresolved, the cores of the profiles are well-fit by gaussian functions with FWHM of 1.32 and 1.07 arcsec in the directions parallel and perpendicular to the slit respectively. This is consistent with a seeing FWHM of 1.0 arcsec, which indicates that the object is undersampled. AC provided a methodology for estimating this broadening effect, which is due to the overlaps between images at the slit and depends on the amount of FRD present. However, in practice, the broadening is likely to be dominated by uncompensated flexure (Section 5.1), which will also produce a preferential broadening in the slit direction, and which may be present at the level of a few tenths of an arcsec. ## 6 Observing with the IFU Here we outline the procedures required for data reduction. We present the example of an observation of NGC4151 to illustrate the operational techniques required. ### 6.1 Data reduction Dispersed images from the CGS4 spectrograph are captured by a $`256\times 256`$ InSb detector. With the SMIRFS-IFU and long camera, a $`256\times 178`$ subsection is read out non-destructively, effectively providing bias-subtracted images by comparing consecutive exposures.
A single integration, limited in duration mainly by background variability, may consist of a number of exposures which are individually short enough to avoid saturation. These are automatically combined by control software to form the raw data available to the user. The raw integrations are typically obscured by high levels of dark current in ~4% of pixels, but this is removed effectively by sky subtraction. For extragalactic work, noise is sky dominated (or dominated by dark current for some pixels). To achieve full spectral resolution, critical sampling of the slit width requires a shift of the detector by half a pixel between integrations and interlacing pairs of images. In practice extra steps are used, covering two pixel widths, to reduce the impact of dead pixels. The process of reconstructing three-dimensional ($`x,y,\lambda `$) maps of target objects from the raw CGS4 integrations can be broken into the following main steps (a schematic code sketch follows the list): 1. For each object integration (Fig. 7), the corresponding sky integration is subtracted, removing background and dark current. 2. The integration is divided by a detector flat image, taken before installing the IFU. 3. Integrations taken at different detector positions are combined into a single observation (Fig. 7), using an algorithm which also excludes pixels with anomalous values by comparison with a running median. 4. The spectra are extracted from each observation. Since the output of each element is not individually resolved along the slit, the positions must be calculated from known offsets. 5. Extracted spectra are divided by the extracted fiber flatfield exposure, to correct for variations in element transmission. 6. The spectra are reformatted into an ($`x,y,\lambda `$) datacube (Fig. 7), using the known mapping between position in the field and position in the slit, and by fitting a two-dimensional surface to the spatial points at each wavelength increment. 7. If spatial mosaicing has been used to increase the field of view, the observations of each mosaic cycle are combined using information derived from the telescope offsets or by centroiding on known features. The individual datacubes are interpolated onto an output datacube with sub-pixel accuracy (Fig. 7). 8. Finally, the datacube is analysed to provide the required astrophysical diagnostics, such as the distribution of continuum light in specified bands, the radial velocity field derived from the shift in any of the spectral features present, and emission line ratios. This procedure has been coded into a series of IRAF tasks specially written for the purpose.
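The core of steps 1-6 can be summarised in a short schematic (our own illustration; the element-to-slit and element-to-field mappings are placeholders, not the actual SMIRFS tables):

```python
import numpy as np

def reconstruct_cube(obj, sky, flat, slit_rows, field_xy, nx, ny):
    """Schematic steps 1-6: sky-subtract and flatfield the 2-D spectrogram,
    extract one spectrum per IFU element from its detector row, then re-map
    each spectrum to the element's (x, y) position in the field."""
    frame = (obj - sky) / flat
    cube = np.full((nx, ny, frame.shape[1]), np.nan)
    for row, (x, y) in zip(slit_rows, field_xy):
        cube[x, y, :] = frame[row, :]
    return cube

# a white-light acquisition image (Section 1.2) is then the wavelength sum:
# image = np.nansum(cube, axis=2)
```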
### 6.2 Observations of NGC4151 At infrared wavelengths it is possible to view the centres of active galaxies from which visible light is obscured by dust in the torus. A fiber-lenslet IFU gives two-dimensional spectroscopy with the full wavelength resolution and coverage of a conventional long slit. Thus accurate spectral line strengths and corresponding velocities can be mapped across an object from a single observation, revealing any association with structures seen in broad-band images or radio observations. Such an ability to form connections between spectral and physical features has proved crucial in identifying line excitation mechanisms. Understanding excitation is important for studying kinematics as well as energy transfer. Previous work (Hutchings et al. 1998, Gallimore et al. 1997) suggests that the optical narrow line regions (NLR) of Seyfert galaxies are driven by photoionization via a cone of radiation from the active nucleus. This scenario is compatible with gravitationally dominated dynamics in those regions, indicated also by measurements of velocity dispersion (Nelson & Whittle 1996). However, emission may also occur through shock excitation (Genzel et al. 1995), in which case the corresponding velocity measurements cannot be used to constrain the overall galactic gas kinematics. The strongest near-infrared emission lines in active galaxies are two forbidden \[Fe II\] transitions ($`1.26`$ and $`1.64\mu `$m), produced in regions where hydrogen is partially ionized. Usually the transition between ionized and neutral hydrogen is very sharply defined, so there has been recent speculation as to how the partially ionized regions may arise (Veilleux et al. 1997). Proposed mechanisms involve X-ray photoionization of optically thick narrow line clouds, or shocks from either supernova remnants in starbursts or the interaction of a radio jet with the interstellar medium. The $`1.257\mu `$m \[Fe II\] line in the J-band falls close to Pa$`\beta `$, at $`1.28\mu `$m; the ratio \[Fe II\]/Pa$`\beta `$ has been found to be greater where emission is due to shock excitation than where photoionization dominates (Simpson et al. 1996), so it helps to separate X-ray illumination from radio-jet-induced shocks. The two emission lines can be observed simultaneously with the 150 lines/mm grating in CGS4 (albeit without much bare continuum for reference), and, being so close together, provide a combined indicator which is relatively insensitive to reddening. As an example, we show preliminary results for NGC4151, one of a number of nearby Seyfert galaxies observed in March 1998. To search for extended line emission over a field much larger than the nucleus, we constructed a mosaic using offsets from the nucleus. This also resulted in better sampling, depending on location within the field. Fig. 7 shows data from 51 IFU observations in 7 overlapping fields of view. The observations were combined into a single $`78\times 106\times 257`$ element datacube as described in Section 6.1. In this galaxy, the axes of the radio jet and optical extended narrow line region are separated in the sky by ~25 degrees. The intention was to trace the infrared NLR emission far enough away from the nucleus to see whether it follows the jet or the optical NLR, thereby constraining the mechanism of excitation. Fig. 7 shows maps in the central region of \[FeII\] and Pa$`\beta `$ narrow-line intensity. There is clear evidence that the \[FeII\] emission is extended along the optical ENLR axis, and marginal evidence for an extension in the narrow component of Pa$`\beta `$. Further details will be presented by Turner et al. (in preparation). ## 7 Conclusions We have demonstrated the potential of lenslet-coupled fibers in the non-thermal regime for integral field and multiple-object spectroscopy in the near infrared. The integral field system gives significant advantages over fiber-only and lenslet-only systems. We have demonstrated good performance and utility for astronomy even with a low-cost prototype system retrofitted to an existing spectrograph. The multiobject mode of SMIRFS provides a significant multiplex advantage in observing efficiency over the longslit spectrographs currently used at these wavelengths and points the way forward for infrared multi-fiber systems.
## 7 Conclusions

We have demonstrated the potential of lenslet-coupled fibers in the non-thermal regime for integral field and multiple-object spectroscopy in the near infrared. The integral field system gives significant advantages over fiber-only and lenslet-only systems. We have demonstrated good performance and utility for astronomy even with a low-cost prototype system retrofitted to an existing spectrograph. The multiobject mode of SMIRFS provides a significant multiplex advantage in observing efficiency over the longslit spectrographs currently used at these wavelengths and points the way forward for infrared multi-fiber systems. Fiber systems offer efficient matching of the detector to a large field of view and make it easier to reach the high spectral resolution necessary to reject atmospheric OH emission lines. At present, the number of fibers is limited only by the length of the CGS4 long slit and the small field of UKIRT (4 arcmin). With a dedicated spectrograph the multiplex advantage could be much increased. With more work the thermal emission from the fiber slit could be reduced further without impacting the throughput, leading to an increase in signal-to-noise at longer wavelengths. With the integral field mode of SMIRFS, we have demonstrated how to use fiber-lenslet systems for efficient integral field spectroscopy in the near infrared. This work has also given us invaluable information for the construction of the recently-commissioned Thousand Element Integral Field Unit (TEIFU) and the fiber-lenslet system for the GEMINI Multiobject Spectrographs (GMOS). Both will include dedicated fields for background subtraction. Although these systems will initially operate at visible wavelengths, they can be extended with minor modification to work in the near-infrared. With fused silica fibers, we can operate efficiently within the wavelength range where instrumental thermal background is not a problem ($`<1.8\mu `$m). However, work at longer wavelengths will require a cooled system. Although fibers are not the preferred technology for fully cryogenic operation, where image slicers (such as the Advanced Image Slicer; Content 1997) are likely to give much better performance, fibers may still be usable in cooled systems where full cryogenic temperatures are not required. It may even be possible to use fibers in fully cryogenic environments, since preliminary results from tests on cold fibers suggest that both the FRD and, more unexpectedly, the flexibility are satisfactory for some types of unmounted fiber under such conditions (Haynes & Lee, private communication). The retention of flexibility is of considerable importance to multiple integral field spectroscopy since it suggests that fiber-lenslet systems could be deployable under cold conditions (see also Thatte et al. 1998).

We thank Simon Morris for his help with the scientific programme. We are indebted to the staff of the Joint Astronomy Centre in Hawaii for help with the integration of SMIRFS with UKIRT and CGS4, particularly Tom Kerr and Tom Geballe. We also thank Adaptive Optics Associates (Carol Dwyer and Brian McNeil) for their work on the microlens arrays. This work was largely supported by a grant from the UK Particle Physics and Astronomy Research Council.
# DESY 99-138 Numerical simulations of dynamical gluinos in $`SU(3)`$ Yang-Mills theory: first results

## 1 INTRODUCTION

In recent years there has been great progress in the understanding of the non-perturbative properties of supersymmetric gauge theories. Because of their highly symmetric nature, supersymmetric quantum field theories are best suited for analytical studies, which sometimes lead to exact solutions . The basic assumption about the non-perturbative dynamics of supersymmetric Yang-Mills (SYM) theory is that there is confinement and spontaneous chiral symmetry breaking .

### 1.1 Supersymmetric Yang-Mills theory

Since local gauge symmetries play a very important role in nature, there is particular interest in supersymmetric gauge theories. The simplest examples are SYM theories, which are supersymmetric extensions of pure gauge theories. We focus on the SYM action with $`N=1`$, where $`N`$ is the number of pairs of supersymmetry generators $`Q_{i\alpha }`$, $`\overline{Q}_{i\dot{\alpha }}`$ $`(i=1,2,\ldots ,N)`$. This theory is a Yang-Mills theory with a Majorana fermion in the adjoint representation. The Lagrangian for such an $`N=1`$ SYM theory with an $`SU(N_c)`$ gauge group is given by

$$\mathcal{L}=\frac{1}{4}F_{\mu \nu }^a(x)F_{\mu \nu }^a(x)+\frac{1}{2}\overline{\mathrm{\Psi }}^a(x)\gamma _\mu \mathcal{D}_\mu \mathrm{\Psi }^a(x),$$ (1)

where $`\mathrm{\Psi }^a(x)`$ is the spinor field and $`F_{\mu \nu }^a(x)`$ the field strength tensor, $`a\in \{1,\ldots ,N_c^2-1\}`$. Introducing a non-zero gluino mass $`m_{\tilde{g}}`$ breaks supersymmetry "softly". Such a mass term is

$$m_{\tilde{g}}(\lambda ^\alpha \lambda _\alpha +\overline{\lambda }^{\dot{\alpha }}\overline{\lambda }_{\dot{\alpha }})=m_{\tilde{g}}\overline{\mathrm{\Psi }}\mathrm{\Psi }.$$ (2)

Here the first form uses the Majorana-Weyl components $`\lambda ,\overline{\lambda }`$; the second uses the Dirac-Majorana field $`\mathrm{\Psi }`$. The Yang-Mills theory of a Majorana fermion in the adjoint representation is similar to QCD: besides the special Majorana feature, the only difference is that the fermion is in the adjoint representation and not in the fundamental one. As there is only a single Majorana adjoint "flavour", the global chiral symmetry of $`N=1`$ SYM is $`U(1)_\lambda `$. The $`U(1)_\lambda `$ symmetry is anomalous: for the corresponding axial current $`J_\mu ^5=\overline{\mathrm{\Psi }}\gamma _\mu \gamma _5\mathrm{\Psi }`$, with gauge group $`SU(N_c)`$, we have

$$\partial ^\mu J_\mu ^5=\frac{N_cg^2}{32\pi ^2}\epsilon ^{\mu \nu \rho \tau }F_{\mu \nu }^aF_{\rho \tau }^a.$$ (3)

However, the anomaly leaves a $`Z_{2N_c}`$ unbroken: this can be seen by noting that the transformations

$$\mathrm{\Psi }\to e^{i\phi \gamma _5}\mathrm{\Psi },\qquad \overline{\mathrm{\Psi }}\to \overline{\mathrm{\Psi }}e^{i\phi \gamma _5}$$ (4)

are equivalent to

$$m_{\tilde{g}}\to m_{\tilde{g}}e^{2i\phi \gamma _5}$$ (5)

and

$$\mathrm{\Theta }_{\mathrm{SYM}}\to \mathrm{\Theta }_{\mathrm{SYM}}-2N_c\phi ,$$ (6)

where $`\mathrm{\Theta }_{\mathrm{SYM}}`$ is the $`\theta `$-parameter of the gauge dynamics. Since $`\mathrm{\Theta }_{\mathrm{SYM}}`$ is periodic with period $`2\pi `$, for $`m_{\tilde{g}}=0`$ the $`U(1)_\lambda `$ symmetry is unbroken if

$$\phi =\phi _k\equiv \frac{k\pi }{N_c},\qquad (k=0,1,\ldots ,2N_c-1).$$ (7)

The discrete global chiral symmetry $`Z_{2N_c}`$ is expected to be spontaneously broken to $`Z_2`$ by the non-zero gluino condensate $`<\overline{\mathrm{\Psi }}(x)\mathrm{\Psi }(x)>\ne 0`$.
The consequence of this spontaneous chiral symmetry breaking is the existence of a first order phase transition at zero gluino mass $`m_{\tilde{g}}=0`$. In the case of $`N_c=2`$ there exist two degenerate ground states with opposite signs of the gluino condensate. An interesting point is the dependence of the phase structure on the gauge group: instanton calculations at $`\mathrm{\Theta }_{\mathrm{SYM}}=0`$ give $`N_c`$ degenerate vacua $`(k=0,\ldots ,N_c-1)`$ with

$$<\overline{\lambda }\lambda >=c\mathrm{\Lambda }_{\mathrm{SYM}}^3e^{\frac{2\pi ik}{N_c}}.$$ (8)

The coexistence of $`N_c`$ vacua implies a first order phase transition at $`m_{\tilde{g}}=0`$. Recently Kovner and Shifman have suggested the existence of an additional massless phase with no chiral symmetry breaking . In the case of $`SU(3)`$ there are at least three degenerate vacua, and for $`m_{\tilde{g}}<0`$ we expect that $`\mathrm{\Theta }_{\mathrm{SYM}}=\pi `$.

## 2 LATTICE FORMULATION

No lattice gauge theory exists with an exact supersymmetry: lacking lattice generators of the Poincaré group, it is impossible to fulfill the (continuum) algebra of SUSY transformations. Another problem is the balance between bosonic and fermionic modes required by SUSY: the naive lattice fermion formulation produces too many fermions. Curci and Veneziano have proposed a simple solution: instead of trying to have an exact version of SUSY on the lattice, the requirement is that, like chiral symmetry, it should only be recovered in the continuum limit, tuning the bare parameters (gauge coupling $`g`$, gluino mass $`m_{\tilde{g}}`$) to the supersymmetric point.

### 2.1 Actions

The Curci-Veneziano action of $`N=1`$ SYM is based on Wilson fermions. The effective action obtained after integrating out the gluino field is given by

$$S_{CV}=\beta \sum _{pl}\left(1-\frac{1}{2}\mathrm{Tr}\,U_{pl}\right)-\frac{1}{2}\mathrm{log}\,\mathrm{det}\,Q[U].$$ (9)

The fermion matrix for the gluino $`Q`$ is

$$Q_{yv,xu}=\delta _{yx}\delta _{vu}-K\sum _{\mu =\pm }\delta _{y,x+\widehat{\mu }}(1+\gamma _\mu )V_{vu,x\mu }$$ (10)

with the gauge link in the adjoint representation $`V_{vu,x\mu }[U]=\mathrm{Tr}(U_{x\mu }^{\dagger }\tau _vU_{x\mu }\tau _u)`$.

### 2.2 Monte Carlo simulation

The renormalized gluino mass is obtained from the hopping parameter $`K`$ as

$$m_{R\tilde{g}}=\frac{Z_m(a\mu )}{2a}\left[\frac{1}{K}-\frac{1}{K_0}\right]\equiv Z_m(a\mu )\,m_{0\tilde{g}}.$$ (11)

Here $`K_0=K_0(\beta )`$ gives the $`\beta `$-dependent position of the phase transition and $`\mu `$ is the renormalization scale. The renormalized gluino condensate is obtained by additive and multiplicative renormalizations:

$$<\overline{\mathrm{\Psi }}(x)\mathrm{\Psi }(x)>_{R(\mu )}=Z(a\mu )\left[<\overline{\mathrm{\Psi }}(x)\mathrm{\Psi }(x)>-b_0(a\mu )\right].$$

A first order phase transition should show up as a jump in the expectation value of the gluino condensate at $`K=K_0`$. By tuning the hopping parameter $`K`$ to $`K_0`$ for a fixed gauge coupling $`\beta `$ one expects to see a two-peak structure in the distribution of the gluino condensate. By increasing the volume, the tunneling between the two ground states becomes less and less probable and at some point practically impossible.
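Quantifying such a two-peak structure amounts to fitting the histogram of per-configuration condensate values with one or two Gaussians, as done below. A minimal sketch (assuming an array `rho_samples` of measured condensate values; the bin count and the initial guess are arbitrary choices) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, m, s):
    return a * np.exp(-0.5 * ((x - m) / s) ** 2)

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

hist, edges = np.histogram(rho_samples, bins=60, density=True)
x = 0.5 * (edges[:-1] + edges[1:])

# crude initial guess: two peaks placed symmetrically about the mean
m0, s0 = rho_samples.mean(), rho_samples.std()
p0 = [hist.max(), m0 - s0, s0 / 2, hist.max(), m0 + s0, s0 / 2]
popt, pcov = curve_fit(two_gauss, x, hist, p0=p0)
```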
It is possible to see this phase diagram in our simulations by measuring the chiral and pseudo-chiral gluino condensates: the order parameter of the supersymmetry phase transition at zero gluino mass is the value of the gluino condensate

$$\rho \equiv \frac{1}{\mathrm{\Omega }}\sum _x(\overline{\mathrm{\Psi }}(x)\mathrm{\Psi }(x)).$$ (12)

Additionally, for $`K>K_0`$ $`(m_{\tilde{g}}<0)`$ a spontaneous CP violation, indicated by a nonvanishing pseudo condensate $`<\overline{\mathrm{\Psi }}(x)\gamma _5\mathrm{\Psi }(x)>\ne 0`$, is expected. We determine the value of $`\rho `$ on a gauge configuration by stochastic estimators:

$$\frac{1}{N_\eta }\sum _{i=1}^{N_\eta }\sum _{xy}(\overline{\eta }_{y,i}Q_{yx}^{-1}\eta _{x,i}).$$ (13)

Outside the phase transition region the observed distribution of $`\rho `$ can be fitted well by a single Gaussian, but in the transition region a good fit can only be obtained with two Gaussians. For $`SU(2)`$ results are shown in . The hopping parameter $`K_0`$, corresponding to zero gluino mass, is indicated by a first order phase transition which is due to the spontaneous discrete chiral symmetry breaking $`Z_6\to Z_2`$. We have investigated the dependence of the distribution of the gluino condensate and the pseudo condensate as a function of the hopping parameter, starting from a lattice volume $`L^3\cdot T=4^3\cdot 8`$. This lattice is, however, still not very large in physical units. Therefore the expected two-peak structure is not yet very well developed; nevertheless we have high statistics. For $`K=0.195`$ fig. 2 shows the distribution of the gluino condensate. The distribution indicates that we are near the phase transition. Outside this region, we can fit the distribution with a single Gaussian. Presently, we are calculating on a bigger lattice volume $`(L^3\cdot T=6^3\cdot 12)`$ in order to separate the two-peak structure. On the other hand, our present results on the smaller lattice do not show any signal for a pseudo condensate.

Acknowledgements: The numerical simulations presented here have been performed on the CRAY-T3E computer at the John von Neumann Institute for Computing (NIC) Jülich. We thank NIC and the staff at ZAM for their kind support.
# $`\lambda _0`$ and $`\mathrm{\Omega }_0`$ from Lensing Statistics and Other Methods: Is There a Conflict?

## Abstract

We discuss the consistency of constraints in the $`\lambda _0`$-$`\mathrm{\Omega }_0`$ plane from gravitational lensing statistics, the $`m`$-$`z`$ relation for type Ia supernovae and CMB anisotropies, based on our own (published or unpublished) work and results from the literature.

Recently, several authors (e.g. Perlmutter et al. 1999, Riess et al. 1998, Lineweaver 1998) have made the claim that current observational data provide evidence for a positive cosmological constant $`\lambda _0`$. This is based mainly on the $`m`$-$`z`$ relation for type Ia supernovae or on CMB anisotropies; although joint constraints from more than one cosmological test also point in this direction (e.g. Ostriker & Steinhardt 1995, Turner 1996, Bagla et al. 1996, Krauss 1998), taken at face value, either the supernovae or the CMB data *alone* suggest the presence of a positive cosmological constant. On the other hand, gravitational lensing statistics (e.g. Kochanek 1996, Falco et al. 1998) has often been seen as setting tight upper limits on the cosmological constant, perhaps even to the point of being in conflict with the 'cosmic concordance' of, for example, a flat universe with $`\lambda _0\approx 0.7`$ and $`\mathrm{\Omega }_0\approx 0.3`$.

Based on the present observational data, is there a conflict? We have compared contours in the two-dimensional space of the $`\lambda _0`$-$`\mathrm{\Omega }_0`$ plane, with no priors on $`\lambda _0`$ or $`\mathrm{\Omega }_0`$, from results based on our own work involving gravitational lensing statistics and the CMB (Quast & Helbig 1999, Helbig et al. 1999, Helbig 1999, Macias-Perez et al. 1999, hereafter Papers I-IV, respectively) and from results from the $`m`$-$`z`$ relation for type Ia supernovae, kindly made available by the Supernova Cosmology Project and the High-Z Supernova Search Team (Perlmutter et al. 1999, Riess et al. 1998).

Are the results from lensing statistics and the $`m`$-$`z`$ relation for type Ia supernovae consistent? Since the 90% confidence contours from all supernovae data sets overlap with that of the lensing statistics, and even the 68% confidence contours from two of the three supernovae data sets (one from Perlmutter et al. (1999) and one, with two different methods of analysis, from Riess et al. (1998)) overlap with that of the lensing statistics, the results from the two cosmological tests are consistent, and one is justified in calculating joint constraints by multiplying the probability distributions of the individual tests. Interestingly, they are most consistent at small, but not too small, values of $`\mathrm{\Omega }_0`$, which is favoured on completely different grounds. Joint constraints from lensing statistics and the $`m`$-$`z`$ relation for type Ia supernovae are discussed in detail in Paper III.

We have used the most recent CMB data available to do an analysis similar to that of Lineweaver (1998) and to compare the constraints from the CMB to those of lensing statistics. For more details, see Paper IV. The constraints from the CMB are much tighter than those from lensing statistics or supernovae, and a given confidence contour from the CMB is always (almost) contained within the corresponding contours from the other tests. Thus, there is no inconsistency.

The full poster can be obtained from http://multivac.jb.man.ac.uk:8000/ceres/papers/papers.html where one can also find related publications.
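For illustration only: with the individual constraints tabulated as relative probability densities `P_lens` and `P_sn` on a common $`\lambda _0`$-$`\mathrm{\Omega }_0`$ grid (an assumption about the data format), the joint constraint and minimal-area confidence regions follow in a few lines.

```python
import numpy as np

# multiply the individual tests and renormalize on the common grid
joint = P_lens * P_sn
joint /= joint.sum()

# smallest regions enclosing 68% / 90% of the joint probability
flat = np.sort(joint.ravel())[::-1]
cum = np.cumsum(flat)
level68 = flat[np.searchsorted(cum, 0.68)]
level90 = flat[np.searchsorted(cum, 0.90)]
inside68 = joint >= level68   # boolean mask of the 68% region
```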
In the poster and the related papers, we show plots of constraints from various cosmological tests, both from our own results and from the literature, over the same area of parameter space and with the same scale, plotting scheme etc., which makes comparison easy.

###### Acknowledgements.

It is a pleasure to thank Saul Perlmutter, Brian Schmidt and Saurabh Jha for helpful discussions and the Supernova Cosmology Project and the High-Z Supernova Search Team for making their numerical results available. We thank D. Barbosa, G. Hinshaw and C. Lineweaver for helpful discussions and M. Zaldarriaga and U. Seljak for making their CMBFAST code publicly available. JFMP acknowledges the support of a PPARC studentship. Much of this poster is based directly or indirectly on the efforts of the CLASS collaboration and those of the CERES EU-TMR Network, coordinated by Ian Browne at Jodrell Bank, whose main purpose is to make use of CLASS—among other things for studies of the cosmological aspects of gravitational lensing. This research was supported in part by the European Commission, TMR Programme, Research Network Contract ERBFMRXCT96-0034 'CERES'.
# Power corrections and renormalon resummation for the average thrust

Talk given by E. Gardi at the QCD'99 conference, Montpellier, July 1999.

## 1 Introduction

Power corrections to event shape observables in $`e^+e^{-}`$ annihilation have been an active field of research in recent years. Event shapes, as opposed to other inclusive observables, do not have an operator product expansion, so there is no established field-theoretic framework to analyse them beyond the perturbative level. On the other hand, the experimental data now available cover a wide range of scales and thus could provide an opportunity to test QCD and extract a precise value of $`\alpha _s`$. The state of the art in perturbative calculations of average event shape variables is $`\mathcal{O}(\alpha _s^2)`$, i.e. NLO. It turns out that the experimental data are not well described by these perturbative expressions unless explicit power corrections, which may be associated with hadronization, are introduced. Renormalon phenomenology allows one to predict the form of the power terms, while their magnitude is determined by experimental fits. In the work reported here we assume the existence of an Abelian-like "dressed skeleton expansion" in QCD and calculate the single dressed gluon contribution using the dispersive approach. In this way we perform at once an all-order resummation of the perturbative terms which are related to the running coupling (renormalons) and a parametrization of the power corrections. We discuss the ambiguity between the perturbative sum and the power corrections and show that the resummation is essential in order to extract the correct value of $`\alpha _s`$ from experimental data.

## 2 Average thrust in perturbation theory

To demonstrate the proposed method we concentrate on one specific observable, the average thrust, defined by

$$T=\frac{\sum _i\left|\vec{p}_i\cdot \vec{n}_T\right|}{\sum _i\left|\vec{p}_i\right|},$$ (1)

where $`i`$ runs over all the particles in the final state, $`\vec{p}_i`$ are the 3-momenta of the particles and $`\vec{n}_T`$ is the thrust axis, which is chosen such that $`T`$ is maximized. It is useful to define $`t\equiv 1-T`$, which vanishes in the 2-jet limit. Being collinear and infrared safe, the average thrust can be calculated in perturbative QCD to yield

$$t_{\text{NLO}}(Q^2)=\frac{C_F}{2}\left[t_0a_{\overline{\text{MS}}}(Q^2)+t_1a_{\overline{\text{MS}}}^2(Q^2)\right]$$ (2)

where $`a=\alpha _s/\pi `$, $`C_F=\frac{4}{3}`$ and the coefficients are $`t_0=1.58`$ and $`t_1=23.7-1.69N_f`$ (see refs. in ). Using the world average value of $`\alpha _s`$, $`\alpha _s^{\overline{\text{MS}}}(\mathrm{M}_\mathrm{Z})=0.117`$, the NLO perturbative result (2) turns out to be quite far from the experimental data. This is shown in fig. 1. From the figure it is clear that the NLO correction is quite significant, so the perturbative series truncated at this order is not very reliable. Furthermore, the renormalization scale dependence is significant. It follows that higher order corrections related to the running coupling cannot be ignored.
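For orientation, eq. (2) is straightforward to evaluate numerically with a standard two-loop running coupling; in the sketch below the value of $`\mathrm{\Lambda }`$ and the choice $`N_f=5`$ are illustrative assumptions, not fitted values.

```python
import numpy as np

NF = 5
BETA0 = (33 - 2 * NF) / (12 * np.pi)
BETA1 = (153 - 19 * NF) / (24 * np.pi ** 2)

def alpha_s(Q2, Lam2=0.21 ** 2):
    """Two-loop asymptotic MS-bar coupling (Lambda^2 in GeV^2, assumed)."""
    L = np.log(Q2 / Lam2)
    return (1.0 - BETA1 * np.log(L) / (BETA0 ** 2 * L)) / (BETA0 * L)

def t_nlo(Q2):
    """Average thrust at NLO, eq. (2), with a = alpha_s/pi and C_F = 4/3."""
    a = alpha_s(Q2) / np.pi
    t0, t1 = 1.58, 23.7 - 1.69 * NF
    return 0.5 * (4.0 / 3.0) * (t0 * a + t1 * a ** 2)
```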
## 3 Renormalons and power corrections

In order to take running coupling effects into account it is useful to assume, in analogy with the Abelian theory, that there exists a "dressed skeleton expansion". Then the most important corrections, which correspond to a single dressed gluon, can be written in the form of a renormalon integral

$$D_0=\int _0^{\infty }\frac{dk^2}{k^2}\varphi (k^2/Q^2)\overline{a}(k^2)$$ (3)

where $`\overline{a}`$ represents a specific "skeleton effective charge", not yet determined in QCD (in several schemes were used). As opposed to the standard perturbative approach (2), by performing the integral (3) over all scales one completely avoids renormalization scale dependence. The integral (3) represents a non-Borel-summable power series. Indeed, it involves an integration over the coupling constant in the infrared, which is ill-defined in perturbation theory due to Landau singularities. The ambiguous integral (3) can be defined mathematically, e.g. as a principal value of the Borel sum: $`D_0|_{\text{PV}}`$. This, however, does not solve the physical problem of infrared renormalons: information on large distances is required to fix the ambiguity. The perturbative calculation contains some information about the ambiguity: if the leading term in the small momentum expansion of $`\varphi `$ is $`\varphi (k^2/Q^2)=C_n(k^2/Q^2)^n`$, the leading infrared renormalon is located at $`n/\beta _0`$ in the Borel plane, and a power correction of the form $`1/Q^{2n}`$ is expected. Having no way to handle the problem on the non-perturbative level, it is natural to attempt a fit of the form $`D_0|_{\text{PV}}+\lambda /Q^{2n}`$. A stronger assumption is that the "skeleton coupling" can be defined on the non-perturbative level down to the infrared. Then the integral (3) should give at once the perturbative result plus the correct power term. Since the infrared coupling is not known, it is considered as a non-perturbative parameter. Using the cutoff regularization of (3), one fits the data with $`D_0|_{\mu _I}+\lambda _{\mu _I}/Q^{2n}`$, where

$$D_0|_{\mu _I}=\int _{\mu _I^2}^{\infty }\frac{dk^2}{k^2}\varphi (k^2/Q^2)\overline{a}(k^2)$$ (4)

is fully under control in perturbation theory, and the normalization of the power term

$$\lambda _{\mu _I}=C_n\int _0^{\mu _I^2}\frac{dk^2}{k^2}k^{2n}\overline{a}(k^2)$$ (5)

is a perturbatively calculable coefficient times a moment of the coupling in the infrared. Since the coupling is assumed to be universal, the magnitude of the power corrections can be compared between different observables . The generalization of this approach to Minkowskian observables such as the thrust was discussed in . At the level of a single gluon emission it is based on a "gluon mass" renormalon integral

$$R_0\equiv \int _0^{\infty }\frac{d\mu ^2}{\mu ^2}\overline{a}_{\text{eff}}(\mu ^2)\dot{\mathcal{F}}(\mu ^2/Q^2)$$ (6)

where the characteristic function $`\mathcal{F}`$ is calculated from the matrix element for the emission of one massive gluon and $`\overline{a}_{\text{eff}}`$ is related to the time-like discontinuity of the coupling. Regularizations of the perturbative sum (6) with $`\overline{a}`$ at one or two loops were discussed in ; we skip the details here for brevity.
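Before turning to the fits, the principal value prescription can be made concrete numerically. The toy sketch below evaluates a PV-regularized integral of the form (3) with a one-loop coupling; the characteristic function `phi` is an invented example with the small-momentum behaviour $`\varphi \sim (k^2/Q^2)^{1/2}`$ that generates a $`1/Q`$ ambiguity. Only the handling of the Landau pole is the point here.

```python
import numpy as np
from scipy.integrate import quad

BETA0 = (33 - 2 * 5) / (12 * np.pi)
LAM2 = 0.21 ** 2          # Lambda^2 in GeV^2 (assumed value)

def phi(eps):
    """Toy characteristic function, phi ~ sqrt(eps) at small eps
    (assumption chosen to mimic the thrust's n = 1/2 behaviour)."""
    return np.sqrt(eps) / (1.0 + eps) ** 3

def D0_pv(Q2):
    """PV of eq. (3) with a(k^2) = alpha_s/pi = 1/(pi*BETA0*t),
    t = ln(k^2/Lambda^2); the Landau pole sits at t = 0 and scipy's
    Cauchy weight takes the principal value across it."""
    g = lambda t: phi(np.exp(t) * LAM2 / Q2) / (np.pi * BETA0)
    return quad(g, -40.0, 40.0, weight='cauchy', wvar=0.0)[0]
```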
## 4 Fitting experimental data

In order to perform renormalon resummation at the level of a single gluon emission, we calculated the characteristic function $`\mathcal{F}`$ for the thrust . Then $`R_0`$ is evaluated taking either the one- or the two-loop coupling. Since $`R_0`$ does not exhaust the full NLO correction of eq. (2), we add an explicit NLO correction, $`\delta _{\text{NLO}}=\left(t_1-t_1^0\right)a_{\overline{\text{MS}}}^2`$, where $`t_1`$ corresponds to (2) and $`t_1^0`$ is the piece included in $`R_0`$. It was shown in that the Abelian part of $`t_1^0`$ almost coincides with that of $`t_1`$ in spite of the non-inclusive nature of the thrust. This is crucial for the applicability of the current resummation approach. Finally, we add an explicit power correction $`\lambda /Q^{2n}=\lambda /Q`$, where the power $`n=\frac{1}{2}`$ is determined from the small-$`\mu ^2`$ expansion of $`\mathcal{F}`$. Thus, using the PV regularization of $`R_0`$, we fit the data with

$$t=\frac{C_F}{2}\left[R_0|_{\text{PV}}+\delta _{\text{NLO}}\right]+\frac{\lambda }{Q}.$$ (7)

The resummation as well as the fit results are presented in fig. 1 for the world average value of $`\alpha _s`$. The resummation by itself is quite close to the data. Note also the stability of the result as $`\overline{a}`$ is promoted from one to two loops. Next, we also fit the value of $`\alpha _s`$. The best fit, $`\chi ^2/\mathrm{point}=1.35`$, obtained with $`\alpha _s^{\overline{\text{MS}}}(M_Z)=0.110\pm 0.006`$ and $`\lambda =0.62\pm 0.11`$, is shown in fig. 3.

## 5 Truncated expansions in $`\overline{\mathrm{MS}}`$

It is interesting to compare the resummation to a truncated expansion in $`a_{\overline{\text{MS}}}`$. Expanding $`R_0`$ with $`\overline{a}`$ at one loop we obtain a series of the form $`\sum _{i=0}^{\infty }t_i^0a_{\overline{\text{MS}}}^{i+1}`$. The coefficients $`t_i^0`$ can be considered as predictions for the perturbative coefficients $`t_i`$, provided the coupling $`\overline{a}`$ is close to the correct "skeleton coupling", and provided that the "leading skeleton" $`R_0`$ is indeed dominant already in the sub-asymptotic regime. The significance of the NNLO and further sub-leading corrections which correspond to the dissociation of the emitted gluon is demonstrated in fig. 2. The figure shows

$$t_{\mathrm{pert}}\equiv t_{\text{NLO}}+\frac{C_F}{2}\sum _{i=3}^kt_{i-1}^0a_{\overline{\text{MS}}}^i(Q^2)$$ (8)

where $`k`$ is the order of truncation, for $`k=2`$ through $`6`$. At these orders the series is still convergent. We conclude that a major part of the discrepancy between the NLO result and experiment is due to neglecting this particular class of higher order corrections. Next, consider a fit to experimental data based on the truncated expansion of $`R_0`$ (8): $`t_{\mathrm{pert}}+\lambda _{\mathrm{pert}}/Q`$. The fit results are listed in table 1. The quality of the fit is roughly the same in all cases. We see that the resummation is absolutely necessary in order to extract a reliable value of $`\alpha _s`$.

## 6 Infrared cutoff regularization

Putting an infrared cutoff $`\mu _I`$ on the space-like momentum (4), we separate at once the perturbative and non-perturbative regimes as well as large and small distances. The cutoff regularized sum $`R_0|_{\mu _I}`$ for $`\mu _I=2\,\mathrm{GeV}`$ is shown in fig. 3 together with the PV regularization. For the lower $`Q`$ values the difference between the two regularizations is large. This means that a large contribution to $`R_0|_{\text{PV}}`$ comes from momentum scales below $`2\,\mathrm{GeV}`$, where the coupling is not controlled by perturbation theory. Finally, fitting the data with

$$t=\frac{C_F}{2}\left[R_0|_{\mu _I}+\delta _{\text{NLO}}\right]+\frac{\lambda _{\mu _I}}{Q}$$ (9)

we get the same result as with the PV regularization (7). The only difference is in the required power correction, $`\lambda _{\mu _I}=1.49`$.
This is general: the two regularizations differ just by (calculable) infrared power corrections – in our case $`1/Q`$ – and are therefore equivalent once the appropriate power terms are included.

## 7 Conclusions

The assumption of a "skeleton expansion" implies that resummation of perturbation theory and parametrization of power corrections must be performed together. For the thrust, renormalon resummation is significant and closes most of the gap between the standard perturbative result (NLO) and experiment. The resummation is crucial to extract the correct value of $`\alpha _s`$. The infrared sensitivity of the thrust leads to an ambiguity in the resummation, which is settled by fitting a $`1/Q`$ term. In the infrared cutoff regularization this power term is substantial. The resummation leads to a renormalization scale invariant result. In the BLM approach it corresponds to a low renormalization point in $`\overline{\mathrm{MS}}`$, $`\mu _{\text{BLM}}^{\overline{\text{MS}}}\simeq 0.0447Q`$.

E. De Rafael: What is the physical meaning, in QCD, of the scale of the $`1/Q`$ correction? What other processes are sensitive to these corrections?

The first question is basically an open one. Deeper understanding could hopefully be gained once renormalon phenomenology is supported by more rigorous field theoretic methods. In the infrared finite coupling approach the $`1/Q`$ correction is understood as a moment (5) of a universal infrared finite coupling. This allows comparison between observables – for example, the C parameter is sensitive to similar $`1/Q`$ corrections. In the framework of shape functions , a relation with the energy-momentum tensor was suggested.
## 1 Introduction

Clusters of galaxies are the largest known virialized structures and represent the high-density peaks of the large scale structure of the Universe, which is effectively traced up to 150 h<sup>-1</sup> Mpc or more (Bahcall & Soneira 1984; Tully 1987). The distribution and the evolution of the intrinsic properties of galaxy clusters provide important information for studies of galaxy and cluster evolution and of the dependence of galaxy properties on the environment. In the past, a number of cluster catalogues (cf. Abell 1958; Abell et al. 1989; Zwicky et al. 1961-68) have been extracted from photographic all-sky surveys. These catalogues, however, were compiled by visual inspection of the plates and lack the homogeneity and completeness needed for statistical studies (Postman et al. 1986; Sutherland 1988). Machine-extracted catalogues (Dodd & MacGillivray 1986; Dalton et al. 1992; Lumsden et al. 1992), selected with more objective criteria, reach in some cases fainter limiting magnitudes but do not cover equally wide areas of the sky. More recently, CCD surveys have been carried out (Couch et al. 1991; Postman et al. 1996; Olsen et al. 1999), but they cover small regions of the sky and target deeper objects.

## 2 The CRoNaRio data

The CRoNaRio project (Djorgovski et al. 1998) is a joint enterprise between Caltech and the astronomical observatories of Monte Porzio, Napoli and Rio de Janeiro, aimed at the production of the Palomar Norris Sky Catalogue (Djorgovski et al. 1999), which will eventually include all objects visible on the Second Palomar Sky Survey and will therefore provide a large database from which to extract a statistically well defined catalogue of putative galaxy clusters (see Gal et al. 1999). For each POSS-II field the CRoNaRio project provides astrometric, geometric and photometric information (J, F and N bands, calibrated through CCD frames in the g, r, i bands of the Gunn-Thuan system). The procedure for the search for cluster candidates discussed in this paper has been developed at the Astronomical Observatory of Capodimonte (Naples) and differs in many points from the standard way of preparing and generating the DPOSS distributable catalogues. The first step of our procedure is the cleaning of the catalogues from spurious objects and artifacts (such as multiple detections coming from extended patchy objects, halos of bright stars, satellite tracks, etc.), which are present in the single-filter catalogues. We mask and keep track of the plate regions occupied by bright, extended and saturated objects (which locally make the detections extremely unreliable), taking these troubled areas into account in the subsequent steps. Subsequently, the procedure performs the matching of the catalogues of the same sky region obtained in the three bands. This is done through the plate astrometric solution, matching each object in one filter with the nearest objects in the two other filters, assuming a tolerance box of 7 arcsec. The quality of the matching does not depend on the position of the objects on the plates: the fraction of matched objects with respect to the original single-filter catalogue is quite constant all over the plate (Fig. 1, left and central panels). The quality of the matching depends, as expected, on the instrumental magnitude: at faint magnitudes a significant fraction of objects (too faint to be detected in some of the three filters) is lost in the matching process (Fig. 1, right panel).
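A sketch of such a positional cross-match (using a matching radius rather than the tolerance box actually adopted, and a flat-sky approximation valid on a single plate) could be:

```python
import numpy as np
from scipy.spatial import cKDTree

def match(ra1, dec1, ra2, dec2, tol_arcsec=7.0):
    """Match each object of catalogue 1 to its nearest neighbour in
    catalogue 2; RA is compressed by cos(dec) so that Euclidean
    distance approximates the angular separation (all in degrees)."""
    c = np.cos(np.radians(np.mean(dec1)))
    tree = cKDTree(np.column_stack([ra2 * c, dec2]))
    d, j = tree.query(np.column_stack([ra1 * c, dec1]), k=1)
    ok = d * 3600.0 < tol_arcsec          # degrees -> arcsec
    return np.flatnonzero(ok), j[ok]     # indices of matched pairs
```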
Due to the different S/N ratios in the three bands, many objects have discordant star-galaxy classifications in the catalogues from different filters. The number of such objects obviously increases at faint magnitudes. (It needs to be stressed, however, that this problem is greatly reduced when a new training set for the classification is adopted; see Gal et al. 1999 for details.)

## 3 The identification of cluster candidates

In what follows we shall refer to POSS-II field n. 610 ($`5^{\circ }\times 5^{\circ }`$, centered at RA = 1h and $`\delta =+15^{\circ }`$). After the matching, and taking into account the above-mentioned problem of misclassified objects, we first produced a catalogue of galaxies which is almost complete down to F $`\simeq `$ 19.75 and J $`\simeq `$ 20.21 mag. The spatial galaxy distribution in this catalogue was then binned into equal-area square bins of 36 arcsec on the sky, and the resulting density map was smoothed with a 2-D Gaussian filter having a variable width, chosen as a function of the estimated cluster redshift in order to achieve a $`\sim `$250 kpc resolution (i.e., the dimension of the core radius of a typical cluster). Then, SExtractor (Bertin & Arnouts 1996) was run on the smoothed map in order to identify and extract all overdensities having a number density $`3\sigma `$ above the mean background and covering at least 16 pixels (9.6 arcmin). In this way, as in the Shectman (1985) approach, we are not assuming any a priori cluster model. All the previously known Abell, Zwicky or X-ray clusters were recovered and many new candidates were detected (see Fig. 2).

## 4 Conclusions

We implemented a simple, but well understood and model-independent, procedure for cleaning CRoNaRio catalogues from spurious sources. The procedure has been used to select the galaxy catalogues used for the determination of the LF of galaxy clusters and for the search for poor galaxy groups (Paolillo et al.; De Filippis et al., these Proceedings). Our next goal is to validate our detections by cross-identification with X-ray catalogues of galaxy clusters or by direct optical observation.
# On Isgur's "Critique of a Pion Exchange Model for Interquark Forces"

## 1 Introduction

The recent "critique" of the Goldstone boson exchange (GBE) model for the baryon spectra contains a number of strong and at the same time unsubstantiated statements. Given the author's "silence is consent"<sup>1</sup> (<sup>1</sup>See the first version of the paper, which has become widely known and which can be extracted from the LANL e-print server as version 1.) and increasing pressure from the community, a rebuttal has become unavoidable. In this rebuttal the structure of Isgur's paper will be adhered to and his "catalogue of criticisms" will be examined. An updated version of the GBE model for the baryon spectrum is available in ref. . We discuss conceptual issues related to a question of paramount importance: which physics, inherent in QCD, is responsible for the nucleon (baryon) mass and its low-energy properties, and how this physics is connected with the observed baryon spectra.

In the introduction to the first variant of his paper Isgur questions the superiority of the GBE model for solving the problem of the spectral ordering in light and strange baryons, and argues that the Coulomb component of the one gluon exchange (OGE) interaction naturally leads to the positive parity state $`N(1440)`$ being the lowest one of the positive parity $`N=2`$ band. This issue is dropped from the final variant, but as the problem of the relative ordering of the lowest positive- and negative-parity states is the key question for deciding which physical picture is responsible for baryon (nucleon) masses, we briefly address it here. In a model with a monotonic effective confining interaction between quarks in light and strange baryons, which is flavor- and spin-independent, and assuming that there are no residual interactions, the spectrum of the lowest-lying baryons should be arranged into successive bands of positive and negative parity (Fig. 1). Empirically, however, the lowest excited levels in the spectra of the nucleon, the $`\mathrm{\Delta }`$ resonance and the $`\mathrm{\Lambda }`$ hyperon, which are shown in Fig. 2, look quite different. It follows that a picture in which all other possible interactions are treated as residual, weak perturbations cannot be correct. In the other extreme case, with a very strong Coulomb interaction between quarks and without any confining force at all, the lowest excited positive and negative parity states should be degenerate in all flavor parts of the spectrum, as in the hydrogen atom. Experimentally, however, the positive parity state N(1440) lies $`\sim 100`$ MeV below the negative parity multiplet N(1535)-N(1520), on the one hand, while on the other hand the lowest positive parity state in the $`\mathrm{\Lambda }`$ spectrum lies 100 - 200 MeV above the lowest negative parity doublet (Fig. 2). This rules out the hypothesis of a dominant Coulomb interaction. In addition, a model with no confining interaction, relying exclusively on the Coulomb part of OGE, fails for the spectra of all other low-lying baryons: such a model cannot provide the required 500 MeV gap between the ground state baryons and the first negative parity excitation band.
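For completeness, the degeneracy argument invoked above can be stated in one line. A purely Coulombic binding is hydrogen-like, with a standard textbook spectrum

$$E_n=-\frac{\mu \alpha ^2}{2n^2},\qquad n=n_r+L+1,$$

which depends on the principal quantum number $`n`$ only, so the first radial excitation ($`n_r=1`$, $`L=0`$, positive parity) and the first orbital excitation ($`n_r=0`$, $`L=1`$, negative parity) coincide exactly, just as the 2S and 2P levels do in hydrogen.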
As soon as a confining interaction is added, irrespective of whether it is harmonic, linear or of some other monotonic functional form, the Roper resonance (and its counterparts in the other flavor parts of the spectrum) falls 100-300 MeV above the negative parity multiplet, a result which is well known from many exact 3-body calculations, see e.g. , including those of Isgur.<sup>2</sup> (<sup>2</sup>In Isgur's papers with Karl , the positive and negative parity states are treated separately in different papers, a very strong color coupling constant - larger than 1 - is needed to fit the $`N`$-$`\mathrm{\Delta }`$ mass splitting, which is incompatible with a perturbative treatment of QCD, and a huge anharmonic "correction" is introduced by hand in order to cure the positive parity states.) This is the salient point, as it is assumed in the Isgur-Karl model that the anharmonicity, i.e. the difference between the harmonic interaction and the linear + Coulomb interaction, could shift the Roper strongly down. This is excluded by exact three-body calculations and also at the theorem level - see (and references cited therein). It then follows that a combined model, relying on both the confinement potential and the color-Coulomb component of one gluon exchange, cannot explain the experimentally observed pattern.

The next important issue is how these spectra change when perturbed by the color-magnetic component of OGE. To leading order, when one ignores the spatial dependence of the color-magnetic interaction and assumes the $`SU(3)_F`$ limit, its contribution is determined exclusively by the spin structure of the zero-order baryon wave function, which is prescribed by the corresponding Young diagram. This spin structure is unambiguously determined by the total spin of the three quarks. This spin is the same, $`S=1/2`$, for all baryons in the $`N`$ and $`\mathrm{\Lambda }`$ spectra depicted in Fig. 2. This then implies that such a spin-spin force, which is not sensitive to the flavor of the quarks, cannot modify the ordering of the states suggested by the confinement + Coulomb interaction. If one takes into account the spatial dependence of the color-magnetic interaction as well as $`SU(3)_F`$ breaking, its contribution to the positive and negative parity states will be slightly different, because of the different radial structure of these baryons. Nevertheless, first order perturbation calculations or nonperturbative calculations reveal<sup>3</sup> (<sup>3</sup>Note that the theoretical predictions for all positive and all negative parity states are shifted down and up, respectively, in the figures of ref. , so after reconstructing the actual picture as it follows from the Tables of that paper it is very difficult to conclude that a reasonable description of the spectra has been obtained!) that the departures from the pattern of Fig. 1 are small. But even more severe is the constraint from the $`\mathrm{\Delta }`$ spectrum. In this case the color-magnetic interaction shifts the $`N=2`$ state $`\mathrm{\Delta }(1600)`$ ($`S=3/2`$) up, but not down, with respect to the negative parity $`N=1`$ pair $`\mathrm{\Delta }(1620)`$-$`\mathrm{\Delta }(1700)`$ ($`S=1/2`$)! All these facts rule out the perturbative gluon exchange plus confinement picture as the physical mechanism for the generation of the light baryon mass. These spectra obviously point to an explicit flavor dependence in the underlying dynamics.
The GBE force, which is explicitly flavor dependent, very naturally explains this as well as several other apparent puzzles.<sup>4</sup> (<sup>4</sup>In fact, one needs only two parameters to explain the pattern of Fig. 2 in the most simple form of the GBE model : the strength of the GBE-like interaction and the strength of the effective confining interaction . This is in contrast to the quite misleading counting of the number of free parameters done in a recent review .) But in a sense more important is the conceptual inadequacy of the simplistic OGE model. This model invokes constituent quarks as particles with constant mass, without any attempt to understand the essence of these objects, which is very different from that of the light current quarks of QCD. The constituent quark can be introduced as a quasiparticle in the Bogoliubov or Landau sense, stemming from the dynamical chiral symmetry breaking in QCD and related to the quark condensate in the QCD vacuum (in fact one cannot obtain a nonzero condensate without a dynamical mass of quarks). Such a dynamical mass is indeed observed on the lattice at small momenta, where the nonperturbative phenomena become crucial . This physics is well known and is a common theme in all strongly interacting many-fermion systems, to which QCD belongs. Chiral symmetry breaking and dynamical mass generation are inherently nonperturbative phenomena and cannot be addressed within perturbation theory. In perturbation theory the perturbative vacuum persists at any order, and the quark condensate (as well as the dynamical mass) is identically zero. It is therefore inconsistent to invoke constituent quarks along with perturbative one gluon exchange. If one invokes constituent quarks, then one necessarily assumes the spontaneously broken mode of chiral symmetry, where the Goldstone boson field is required by the Goldstone theorem, and the flavor-octet axial current conservation in the chiral limit implies the coupling of Goldstone bosons and quasiparticles.<sup>5</sup> (<sup>5</sup>To avoid any confusion, one should not mix up the one gluon exchange interaction between constituent quarks, with its nonrelativistic spin-spin force, and the nonperturbative resummation of gluonic exchanges between current quarks by solving the Dyson-Schwinger and Bethe-Salpeter equations, which could provide chiral symmetry breaking (this is one of the possibilities presently discussed ) and which will automatically lead to GBE between the quasiparticles in baryons upon t-channel iterations . In this approach, however, the $`U(1)_A`$ problem persists and the origin of confinement is unclear. In this case the underlying mechanism for the $`\pi `$-$`\rho `$ splitting is the same as in the Nambu and Jona-Lasinio model and has nothing to do with the nonrelativistic spin-spin force between quarks.) The fact that the typical momentum of the valence current quarks in the nucleon is small, 100-200 MeV, i.e. well below the chiral symmetry breaking scale $`\mathrm{\Lambda }_\chi \sim 1`$ GeV, implies that the low-energy characteristics of baryons, such as their masses, should be formed by the nonperturbative QCD dynamics that is responsible for chiral symmetry breaking and confinement, and not by the perturbative QCD degrees of freedom, which become active at much higher momentum scales.
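To make the flavor dependence at issue explicit: the short-range part of the GBE interaction is written in the GBE literature in the schematic flavor-spin form

$$H_\chi \sim -\sum _{i<j}V(r_{ij})\,\vec{\lambda }_i^F\cdot \vec{\lambda }_j^F\,\vec{\sigma }_i\cdot \vec{\sigma }_j,$$

where $`\vec{\lambda }^F`$ are the flavor Gell-Mann matrices and the radial function $`V(r_{ij})`$ is model dependent (the overall sign convention here is only schematic). It is the flavor factor $`\vec{\lambda }_i^F\cdot \vec{\lambda }_j^F`$, absent in the color-magnetic structure $`\vec{\lambda }_i^C\cdot \vec{\lambda }_j^C\,\vec{\sigma }_i\cdot \vec{\sigma }_j`$ of OGE, that distinguishes the two pictures.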
The effective meson exchange interaction between valence quarks in baryons arises from the nonperturbative t-channel iterations of the QCD gluodynamics which triggers the breaking of chiral symmetry and which is responsible for the low-lying meson structure . This is a simple consequence of crossing symmetry: if one obtains the pion as a solution of the Bethe-Salpeter equation in the quark-antiquark s-channel, then one inevitably obtains pion exchange in quark-quark systems as a result of iterations in the t-channel. What is important is that these t-channel iterations enormously reinforce the bare (gluonic) vertex in the GBE channel at small momenta (which is due to antiscreening). This antiscreening results in the pole that occurs at $`q^2=0`$. This pole "explosion" explains the role of the GBE interaction at low momenta, where it dominates low-energy baryon physics. The generation of the dynamical mass and the exchange of Goldstone bosons between quasiparticles in baryons are synchronous phenomena based on chiral symmetry breaking and cannot be separated from each other.

There are many independent indications from spectroscopy showing that the physics in the heavy quark sector (where chiral symmetry is absent and the confinement plus OGE picture is the relevant one) is very different from that in the light quark sector. For instance, the hyperfine (spin-spin) splittings in charmonium are of the order of 3% of the hadron mass, i.e. they indeed represent a small perturbation. In contrast, the spin-spin force in light baryons must be very strong, as it provides a splitting at the level of 30% of the hadron mass (the $`N`$-$`\mathrm{\Delta }`$ splitting). The color-magnetic spin-spin interaction in heavy quark systems has a clear origin as a small nonrelativistic $`v^2/c^2`$ correction to the leading Coulomb force of the OGE interaction and, as is well known from positronium physics (which is similar), provides a small $`\alpha ^4`$ spin-spin splitting; in the present case the values of $`v^2/c^2`$, $`\alpha ^4`$ and the experimental splitting are all consistent with each other. In light quark systems, the light current quarks with their tiny mass are ultrarelativistic. In this case the perturbative gluon-quark vertex to a good approximation conserves helicity (to be contrasted with the heavy quark-gluon vertex), which implies that the spin dependence of the OGE interaction vanishes in the present case.<sup>6</sup> (<sup>6</sup>I repeat: once one considers the standard one gluon exchange perturbative force, one assumes that we are in the perturbative regime of QCD, i.e. on top of the perturbative vacuum. Hence one can use only the original (current) quarks of QCD in the present case.) This is in obvious conflict with the large empirical hyperfine $`N`$-$`\mathrm{\Delta }`$ splitting and implies that the perturbative gluon exchange force cannot be its origin. Lattice calculations also indicate that the physics in the heavy quark sector is very different from that in the light quark sector. Among these is a recent analysis by Liu et al , showing that the origin of the $`N`$-$`\mathrm{\Delta }`$ splitting is not the color-magnetic interaction, but is inherently related to dynamical chiral symmetry breaking and a meson-like exchange force.
The most recent work of the RIKEN BNL - Columbia - KEK collaboration , which for the first time accurately measured the low-lying negative parity state<sup>7</sup> (<sup>7</sup>This can be considered as a proof that N(1535) is a genuine three-quark resonance, and not a cusp due to the nearby $`N\eta `$ threshold nor a quasibound state in the meson-baryon system.) and also obtained a reliable signal for the Roper state, indicates that the dynamics of baryons made of heavy quarks (where chiral symmetry and the GBE-like force are absent), in which case the spectrum indeed looks like Fig. 1, is very different from the real pattern in nature, shown in Fig. 2, which is close to the chiral limit.

## 2 "The Spin-Orbit Problem is not Solved"

Here the argument made by Isgur is as follows. The empirical spectra of L=1 light baryons and mesons show no significant spin-orbit splittings. A scalar confining interaction implies a spin-orbit force due to Thomas precession, which should be cancelled by another spin-orbit force in both baryons and mesons. Such an additional spin-orbit force is supplied by a strong one-gluon exchange interaction, while within the GBE model for baryons there is no source to counterbalance the Thomas term. This argument is based on a naive extrapolation of heavy quark physics into the light quark sector. In heavy quark systems, like charmonium or bottomonium, the most important dynamics is indeed due to the string-like confining force at large distances and a small perturbative gluon exchange correction at short ones. In this case the heavy quark practically constantly "sits" on the end of the string, because quantum-mechanical fluctuations of this quark into another one plus a quark-antiquark pair (meson) are suppressed by the factor $`1/M_Q`$ (see footnote 14) and vanish in the heavy quark limit. This suppression factor comes from the meson propagator. A relativistic rotation of the string implies Thomas precession, which is a purely kinematical effect related to successive Lorentz transformations. This Thomas precession gives rise to a spin-orbit interaction. Note that for this effect to be operative it is necessary to have the same particle on the end of the string at the successive moments $`t_1,t_2`$ and $`t_3`$. For a heavy quark this condition is indeed approximately fulfilled. In light quark systems this condition is not satisfied, however. This is because quantum mechanical fluctuations of the light valence quark into another quark and a light meson are not suppressed and become the most important effect. Within quantum field theory such a fluctuation corresponds to the materialization from the vacuum of a quark, which becomes the valence quark instead of the initial one, see Fig. 3. Because in the present case there is no big gap between the negative energy levels of the Dirac sea and the positive energy of the valence quarks, this process is intensive. This implies that at the successive moments $`t_1,t_2`$ and $`t_3`$ one has predominantly different quarks on the end of the string, though with exactly the same color. If the quarks are different, Thomas precession cannot be applied. In addition, the spin of the quark at the moment $`t_2`$ is predominantly polarized in just the opposite direction compared to the moments $`t_1`$ and $`t_3`$, as the pion-quark vertex is of spin-flip nature .
Thus, at $`t_2`$ the spin-orbit Thomas term is of opposite sign compared to that at $`t_1`$ and $`t_3`$.<sup>8</sup> (<sup>8</sup>One may speculate whether the loop fluctuation also affects the spin-spin force from the GBE between different quarks. The pion-quark vertex is of spin- and isospin-flip nature, which means that the loop contributions to the one pion exchange, where the exchanged pion is attached to a quark within a loop, produce the same operator $`\vec{\tau }_i\cdot \vec{\tau }_j\,\vec{\sigma }_i\cdot \vec{\sigma }_j`$ as one pion exchange without the loop.) This qualitative discussion suggests that the Thomas spin-orbit force should be strongly suppressed in light quark systems, both in mesons and in baryons.<sup>9</sup> (<sup>9</sup>A detailed formal extension of this qualitative discussion will be published elsewhere.) If so, the spin-orbit force from the OGE, which should be very strong as it is fixed by the large $`N`$-$`\mathrm{\Delta }`$ and $`\pi `$-$`\rho `$ splittings within the naive OGE model (combined with constituent quarks), completely destroys both the baryon and the meson spectra, as it supplies splittings of hundreds of MeV.

Based on the view of a near perfect cancellation of the very large, but opposite in sign, LS forces from Thomas precession and the OGE interaction in P-wave light mesons (which, according to Isgur, should be of the same origin as in light P-wave baryons) , Isgur interpolates their matrix elements between the light and the heavy quarkonia to the heavy-light mesons and predicts a dramatic and large inversion of the spin-orbit splittings in the heavy-light P-wave mesons, where data were absent (for details see ref. ). This prediction has recently been checked by two independent lattice groups and has been ruled out. Not only does the prediction deviate from the data by a few hundred MeV, but its sign is opposite! In fact, spin-orbit forces do appear in the GBE model from the second iteration of the interaction , corresponding to the spin-orbit forces from vector- and scalar-meson exchanges. Different meson exchanges provide spin-orbit forces of opposite signs in baryons , which suggests that the net spin-orbit force should not be large; this is compatible with the small 10-50 MeV LS splittings observed in L=1 light and strange baryons. To conclude this section, we stress that it is incorrect to identify the linear confining interaction between two heavy static sources, which is indeed established, with the effective confining interaction between the quasiparticles in light quark systems.

## 3 "Baryon Internal Wave Functions are Wrong"

Here Isgur's argument is that while the OGE model yields a mixing of the spin $`S=1/2`$ and $`S=3/2`$ states in the $`N(1535)`$ and $`N(1650)`$ baryon wave functions, which is compatible with the big observed $`N(1535)\to N\eta `$ branching ratio and the small $`N(1650)\to N\eta `$ one, the GBE model should fail to do so. The mixing above is provided by the tensor force component of the quark-quark force and crucially depends on its sign, while the masses of the baryons are not strongly sensitive to this tensor force. Within the GBE picture there are two sources of tensor force: pion-like and rho-like exchange mechanisms. Both of these exchanges supply a spin-spin force with the same sign, while their tensor force components have opposite signs . This implies that the net tensor force should be rather weak compared to the strong spin-spin force, in agreement with phenomenology.
In ref. only a $`\pi `$-exchange tensor force was used for an estimate.<sup>10</sup> (<sup>10</sup>It has nevertheless been stressed there: "Any vector-octet-like exchange interaction component between the constituent quarks would also reduce the net tensor interaction at short range, as the contributions to the tensor interaction from pseudoscalar and vector exchange mechanisms tend to cancel, whereas they add in the case of the spin-spin component. These modifications of the tensor interaction at short range may even lead to a sign change of the matrix element.") Its strength was not correlated with the strength of the spin-spin force, which is fixed by the hyperfine splittings. As soon as the corresponding $`\rho `$-exchange-like tensor force is added, the mixing becomes qualitatively different. The flavor-dependent tensor force component of the two-pion exchange interaction (which is $`\rho `$-like) is, in the range relevant for the baryon wave functions, stronger than that of the one-pion exchange interaction , and therefore the net tensor force, while weak, has the sign opposite to that of pion exchange. This sign is the one favored by the empirical mixing of the negative parity multiplets. Note that in modern fits of the baryon spectra both $`\pi `$-like and $`\rho `$-like exchanges are taken into account .

There are several indications that the $`\rho `$-like tensor force should dominate over the $`\pi `$-like one in P-wave baryons. The analysis of the L=1 spectra and of the mixing angles for the flavor-dependent interaction reveals that the tensor force that mixes the $`S=1/2`$ and $`S=3/2`$ components should have the sign of the $`\rho `$-exchange tensor interaction.<sup>11</sup> (<sup>11</sup>While this fact is not explicitly discussed in that paper, it follows from the mixing angles presented in Table 3 therein.) With the flavor-dependent spin-spin and tensor forces, with the matrix element adjusted to provide the best $`\chi ^2`$ fit to the baryon masses, a parameter-free prediction for the mixing angle was obtained, which ideally fits the observed $`\pi `$ and $`\eta `$ decay branches for the $`J=1/2`$ and $`J=3/2`$, $`L=1`$ $`N^{*}`$ baryons discussed in Isgur's paper . That work definitely shows that a fit of the observed L=1 spectra prefers a flavor-dependent interaction between quarks. This is perfectly consistent with the recent systematic $`1/N_c`$ analysis of both the masses and the mixing angles of L=1 nonstrange baryons . The result of this paper may be summarized as follows: both the masses and the mixing angles extracted from the strong and electromagnetic decays are compatible with the idea that the effective quark-quark interaction is of meson exchange form, while they are not compatible with a flavor-independent gluon exchange hyperfine interaction. In particular, the data require a significant contribution of the operator that contains a flavor-dependent tensor force ($`O_3`$), while the contribution of the operator which represents a flavor-independent tensor force ($`O_8`$) is compatible with 0. Note that in this analysis the contribution of the different operators is systematically weighted with the $`N_c`$-dependent factor, which is absent in other, more phenomenological analyses.
The study of the $\pi N$ phase shifts also reveals that the spin-spin force between quarks should be of pion-exchange type, while the tensor force component should be of just the opposite sign (footnote 12: Those authors actually conclude that the spin-spin force should be of pion-exchange type while the tensor force should be of gluon-exchange type, which would be rather strange. But it is easy to see from their expressions that the same result is obtained if one changes the sign of the single $\pi$-exchange tensor force to the opposite one.). This should not be construed as a claim that the simple $QQQ$ main component of the baryon wave function alone will be able to explain the variety of strong and electromagnetic decay data. The baryon wave function contains in addition further Fock components, $QQQ+\mathrm{meson},\dots$. The coupling of these higher Fock components will be very important for strong decays when the energy of the resonance is close to the corresponding threshold. In this case the energy denominator, which determines the role of the higher Fock component in the given reaction, e.g. in $\gamma N \text{ or } \pi N \to N(1535) \to N\eta$, becomes very small, and the otherwise insignificant $QQQ\eta$ component of the $N(1535)$ wave function becomes important. This should be a significant reason why the $\eta$-decay branch is anomalously big in the case of $N(1535)$. Note that within the chiral constituent quark model this mechanism is very natural, while there are no meson components in the baryon wave function within the OGE model. Similar arguments can be applied to explain the anomalously large $\Lambda(1405)-\Lambda(1520)$ spin-orbit splitting, because the $\Lambda(1405)$ is below the $\overline{K}N$ threshold and can be viewed as a $\overline{K}N$ bound state. If correct, this would simply mean that both coupled $QQQ$ and $QQQ\overline{K}$ components are significant in the present case, and there is no contradiction with the flavor-singlet $QQQ$ nature of these baryons, which in any case are LS partners with respect to their main $QQQ$ component. The alternative explanation of the latter extraordinarily large LS splitting would be that there is some rather large spin-orbit force specific to the flavor-singlet state only, which is also not ruled out, while it is clear that OGE cannot supply such a flavor-dependent LS force. The mixing pattern of singlet and octet components obtained with the flavor-dependent interactions in ref. describes the strong decays of $\Lambda(1405)$ better than the one obtained with the flavor-independent interaction. To conclude this section one should stress that in the present state of the art it is premature to judge the effective $QQ$ interactions from the strong decays. This is, in particular, because the excited states are treated within the quark model as bound states, rather than as resonances, and an incorporation of the $QQQ\pi ,\dots$ continuum components coupled to the principal $QQQ$ one is vital for strong decays. A real test of any constituent quark model beyond spectroscopy (i.e. also of its wave functions) can nowadays reliably be performed only for ground state observables. Such a task has just been completed for the chiral constituent quark model. Starting out from the wave functions obtained in ref.
, which represent the eigenstates of the mass operator in the manifestly covariant point form of relativistic quantum mechanics, one has calculated the nucleon electromagnetic form factors by performing the relativistic boost transformations. The parameter-free predictions for the proton and neutron electric and magnetic form factors, as well as for the charge radii and magnetic moments, turned out to be very satisfactory and practically explain the existing data (e.g. within the experimental error bars for the proton and neutron charge form factors). It is also demonstrated that, using the same wave functions but a nonrelativistic framework for calculating these observables, the form factors and charge radii come out completely different, deviating by 1-2 orders of magnitude. From this comparison one can conclude, in particular, that the proper inclusion of Lorentz boosts is crucially important. In view of all that, nonrelativistic calculations within constituent quark models (or similarly within bag models) appear very questionable. This is especially true with regard to strong decays. ## 4 ”Mesons are Disaster” There are several arguments suggested by Isgur in this section. The first one is that while the GBE (or, generally, meson-exchange-like interactions) may be possible in baryons, such exchanges are impossible between the valence quark and antiquark in mesons (e.g. in a $u\overline{d}$ pair) within the quenched approximation to QCD, thus suggesting that meson and baryon spin-dependent interactions must have totally different physical origins, which is very difficult to arrange. This question has been addressed in detail recently. I will briefly summarize here the main conclusions. One needs a nonperturbative gluonic interaction between quarks in QCD to provide chiral symmetry breaking. A good candidate is the instanton-induced ’t Hooft interaction. When this nonperturbative gluonic interaction breaks chiral symmetry, i.e. generates at low momenta the constituent mass $m$ of quarks, it also automatically supplies a strong attractive interaction in the pseudoscalar-isovector quark-antiquark system - pions - which makes them anomalously light, with zero mass in the chiral limit. This is how the pions appear as the Nambu-Goldstone bosons of the spontaneously broken chiral symmetry. This mechanism is well illustrated by the Nambu and Jona-Lasinio model. While there is a strong attractive interaction in the pseudoscalar-isovector quark-antiquark system, the interaction is absent to leading order in vector mesons, which means that the masses of vector mesons should be approximately $2m$; this is well satisfied empirically, $\mu_\rho \simeq \mu_\omega \simeq 2m$. The implication is that the $\pi-\rho$ mass splitting is not due to the perturbative color-magnetic interaction between the spins of constituent quarks in $\pi$ and $\rho$, but entirely due to the fact that the QCD Lagrangian possesses a chiral symmetry which is dynamically broken in the QCD vacuum. Note that the ’t Hooft interaction also naturally solves the $U(1)_A$ problem, thus explaining why $\eta^{\prime}$ is heavy, contrary to $\pi$. This problem cannot be solved by the OGE interaction as a matter of principle. The Nambu and Jona-Lasinio mechanism of chiral symmetry breaking (and hence of the $\pi-\rho$ splitting) is the most general one. It only exploits the fact that the quark-gluon interaction in QCD respects chiral symmetry. In fact one does not need to assume that it is the instanton-induced interaction which provides chiral symmetry breaking.
When that nonperturbative gluonic interaction between quarks, which is responsible for chiral symmetry breaking in QCD, is iterated in the qq t-channel in baryons, it inevitably leads to poles which correspond to a GBE interaction in quark-quark pairs, see Fig. 4. This is a typical antiscreening behavior: the interaction of two quarks in baryons is represented by a bare gluonic vertex at large momentum transfer (i.e. at very small distances), but it blows up at small momenta in the channel with GBE quantum numbers, thus explaining the distinguished role of the latter interaction in the low-energy regime. Thus the GBE interaction in baryons is in fact an effective representation of the t-channel ladders, which strongly reinforce a bare gluonic vertex at low momentum transfer in the GBE channel. Since the typical momentum of valence current quarks in baryons is well below the chiral symmetry breaking scale, these interactions dominate (see Introduction). This suggests that the origin of the hyperfine splittings in both the low-lying mesons and baryons is intrinsically the same - it is the nonperturbative gluonic interaction between quarks which is responsible for chiral symmetry breaking in QCD - which, however, reveals itself differently in mesons and baryons. In Fig. 2 of his paper Isgur shows an evolution of the hyperfine splittings in mesons, starting from the heavy quarkonium down to the $\pi-\rho$ mass splitting, arguing that it supports ”a smooth evolution of the wave function … convoluted with the predicted $1/m_Q^2$ strength of the OGE hyperfine interaction”. This figure is misleading. Even if one takes the naive view that the $\pi-\rho$ splitting is due to the OGE spin-spin force between the constituent quarks, one cannot explain why the pion is very light but $\eta ,\eta^{\prime}$ are heavy, since this spin-spin force must provide the same strong attraction also in $\eta ,\eta^{\prime}$; in other words, one cannot explain in this approach why the $\pi-\rho$ mass splitting is big but the $\eta^{\prime}-\frac{\sqrt{2}\omega +\varphi}{\sqrt{3}}$ mass splitting is even opposite in sign. This fact alone rules out this naive mechanism of the light pseudoscalar-vector meson splittings. Fig. 3a of Isgur’s paper claims to support the same idea using the heavy-light mesons. According to Isgur this figure illustrates the $1/m_q$ behaviour of the OGE spin-spin force splittings, where $m_q$ is the light quark mass. Again, this figure is as misleading as the previous one, as it does not show the really clean (footnote 13: The transition from the $B-B^{*}$ and $D-D^{*}$ systems to $K-K^{*}$ and $\pi-\rho$ is a dubious one, as in the former case the system is indeed heavy-light, while in the latter it is light-light.) examples which rule out this behavior. To these belong the hyperfine $D-D^{*}$ splitting of 141.4 MeV and the $D_S-D_S^{*}$ one of 143.8 MeV. In the former case one has a $\overline{c}u$ or $\overline{c}d$ system, while in the latter the $u$ or $d$ quark is substituted by a strange one. One obviously observes an absence of the $1/m_q$ behaviour, as the constituent light and strange masses differ by 30 - 50 %. Exactly the same situation takes place in the B-meson system: compare the hyperfine $B-B^{*}$ splitting of 45.7 MeV with that of $B_S-B_S^{*}$, 47.0 MeV.
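The absence of the claimed $1/m_q$ scaling can be verified with one line of arithmetic. The following sketch compares the observed splitting ratios quoted above with the naive OGE expectation; the constituent masses $m_{u,d}\approx 340$ MeV and $m_s\approx 480$ MeV are assumed typical values, not numbers taken from the text:

```python
# Constituent masses below are assumed typical values (not from the text):
m_ud, m_s = 340.0, 480.0           # MeV

splittings = {                     # hyperfine splittings quoted in the text
    "D":  141.4,  "D_s": 143.8,    # D-D* and D_s-D_s*, in MeV
    "B":   45.7,  "B_s":  47.0,    # B-B* and B_s-B_s*, in MeV
}

naive = m_ud / m_s                 # ~0.71 expected from a 1/m_q spin-spin force
for light, strange in [("D", "D_s"), ("B", "B_s")]:
    ratio = splittings[strange] / splittings[light]
    print(f"{strange}/{light} splitting ratio: observed {ratio:.3f}, "
          f"naive 1/m_q prediction {naive:.3f}")
# Observed ratios are ~1.02-1.03, not ~0.7: the light-quark 1/m_q
# suppression expected in the OGE picture is simply not there.
```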
One can then conclude that while there is indeed a $1/m_Q$ scaling with respect to the heavy quark mass $m_Q$ in the heavy-light systems, which follows from the heavy quark limit (symmetry) in QCD, there is no similar behavior with respect to the light quark component of the heavy-light systems. Similar objections can be raised against Fig. 3b of Isgur’s paper. Needless to say, Isgur’s statement that the large $\pi-\rho$ mass splitting originates in the nonrelativistic spin-spin force component of the OGE perturbation comes into conflict with current algebra and all subsequent developments in QCD, which show unambiguously that the low mass of the pion, which is an approximate Goldstone boson (and which is of course a quark-antiquark system), is due to the dynamical breaking of chiral symmetry in the QCD vacuum. Even if one assumes that the dynamical chiral symmetry breaking comes from a nonperturbative resummation of gluonic exchanges by solving the Dyson-Schwinger equation for the quark Green function, and the low mass of the pion from a simultaneous solution of the Bethe-Salpeter equation for the quark-antiquark system with the gluon-exchange kernel, the low mass of the pion (and the $\pi-\rho$ splitting) in this case has nothing to do with the quark-antiquark nonrelativistic spin-spin force (see footnote 5). However, it is indeed the case that the small hyperfine splittings in the heavy quarkonia are due to the nonrelativistic color-magnetic spin-spin force stemming from the small OGE perturbation. This mechanism, while important at the bottom and charm quark mass scales, dies out in the region between the charm and strange quark scales (it vanishes in the chiral limit). On the other hand, near the chiral limit the splittings are due to the dynamical breaking of chiral symmetry, which, in turn, should decrease with increasing current quark mass. It then follows that the smooth evolution of the splittings shown in Figs. 2 and 3 of Isgur’s paper is due to a superposition of these two pictures. Next Isgur argues that the annihilation graphs, see his Fig. 4, which are possible only in the isoscalar channels in mesons, but not in the isovector ones, and which violate the OZI rule, should produce a strong splitting in the $\rho-\omega$ system as well as a strong mixing of the $\overline{u}u,\overline{d}d$ and $\overline{s}s$ components in $\omega$ and $\varphi$, if one assumes that the GBE graphs between quarks in baryons induce a $\Delta-N$ splitting. This problem has a very simple resolution if one assumes that the instanton-induced ’t Hooft interaction is the most important one. These annihilation graphs do contribute in the pseudoscalar mesons and provide the solution of the $\pi-\eta-\eta^{\prime}$ puzzle. However, there are no such graphs from the ’t Hooft interaction in vector mesons. It is this peculiarity which explains the completely different mixing of singlet and octet components in the pseudoscalar and vector mesons, which is unnatural in the former case and natural in the latter. As explained in the beginning of this section, the GBE interaction in quark-quark systems (to be contrasted with quark-antiquark ones) can be regarded as a result of the t-channel iterations of the same (as in mesons) bare ’t Hooft vertex. ## 5 ”The Connection to Heavy Quark Baryons is Lost” Here Isgur again uses his Figs. 2 and 3 for argumentation.
While it is correct that around the heavy quark limit the OGE mechanism is indeed important for the small hyperfine splittings, the light quark limit (chiral limit) is just the opposite one in QCD and implies completely different dynamics, inherent in QCD. There is no doubt that chiral dynamics, i.e. the dynamics of massless quarks in external gluonic fields, becomes the most important phenomenon in this case. As argued in the previous section, no conclusions can be drawn from these figures, which ignore well known empirical data. What dynamics is responsible for the heavy-light systems, and, in particular, heavy-light baryons (footnote 14: It is, unfortunately, an incorrect statement in ref. that the exchange of a heavy-light meson (e.g. $D,D^{*}$) between heavy-light quark pairs in baryons should produce $1/M_Q^2$ scaling, in contradiction with the heavy quark limit. Naively, from the covariant meson propagator one would indeed obtain a $1/M_Q^2$ scaling. This scaling comes from both the positive energy solution propagating forward in time and the negative energy solution propagating backward in time. However, the ”heavy meson exchange” viewed as in Fig. 3, with the Z-like part made of the light quark line only and with the heavy quark propagating only forward in time, scales as $1/M_Q$. The heavy quark Z-like line, which would correspond to the propagation of the heavy quark backward in time, is suppressed by the heavy quark mass.) remains an open question (it cannot be excluded that in the case of baryons both meson-like dynamics and perturbative QCD corrections are equally important). At least it is known that the prediction of the spin-orbit splittings in heavy-light mesons, based on the scalings of Figs. 2 and 3 and on OGE, turned out to be in dramatic disagreement with the very recent lattice results. Returning then to the question of the splitting of the $\Lambda(1405)-\Lambda(1520)$ multiplet and its charm analog $\Lambda_c(2954)-\Lambda_c(2627)$, there is no objection to their dynamical similarity, which suggests that the $\Lambda(1405)$ should have a large $QQQ$ component. As explained in Section 3, this does not contradict the idea that there is an appreciable higher Fock component $QQQ\overline{K}$, which provides the anomalously large $\Lambda(1405)-\Lambda(1520)$ splitting. The other possibility, that there exists some spin-orbit force which is specific to the flavor-singlet state only, is also not ruled out, while it is clear that OGE, which is flavor independent, cannot provide such a spin-orbit force. ## 6 Conclusions In ”Conclusions” Isgur raises a few conceptual objections. The first one is about a double-counting problem, since a theory which uses both constituent quarks and Goldstone bosons has ”both fundamental Goldstone bosons and quark-antiquark bound state Goldstone bosons”. This objection is obviously based on a misunderstanding of the low-energy effective theory. There is no fundamental Goldstone boson field in QCD. The pion as a Goldstone boson is of course a system of quarks and antiquarks and has an entirely dynamical origin. It arises naturally as a deeply bound state from the corresponding microscopic nonperturbative quark-gluon interaction in QCD, e.g. the instanton-induced one. When one applies the same Lagrangian (which does not contain any pion field!)
in baryons and iterates it in the qq t-channel, one arrives at the pole contribution which corresponds to GBE between quarks in baryons. This is a simple consequence of crossing symmetry: if one obtains the pion as a solution of the Bethe-Salpeter equation in the quark-antiquark s-channel, then one inevitably obtains a pion exchange in the quark-quark systems as a result of iterations in the qq t-channel. There is no fundamental pion exchange between quarks, just as there is no fundamental pion field in QCD. The pion exchange is no more than an effective representation of the t-channel ladders in the low-energy and low-momentum regime where these ladders become important. The second problem ”is that it is not legitimate to treat the quark-Goldstone boson vertex as pointlike”. In fact this was never suggested; instead it has been insisted, in all papers, that the finite size of both constituent quarks and pions provides a smearing of the otherwise contact short-range spin-spin quark-quark force. It is this smeared short-range part of the GBE interaction that is crucially important for the splittings in baryons. Indeed, the results crucially depend on the smearing parameter, which should originate from the intrinsic structure of the pion and also from the unknown nonlinear behavior of the effective chiral Lagrangian. The third objection, that ”there is no obvious rationale for truncating the tower of meson exchanges …”, was addressed in Section 3. Obviously all mesons should contribute. An important issue, however, is that the spin-spin force from $\pi$, $\rho$ or $a_1$ meson exchanges in the quark-quark system has exactly the same flavor-spin structure and sign at short range, which is crucial for baryon spectroscopy, so they only enhance the effect of each other, while the tensor and spin-orbit forces from different meson exchanges interfere destructively in baryons. This explains a significant spin-spin force together with rather weak net tensor and spin-orbit forces, as suggested by the empirical baryon spectra. Nevertheless, the importance of different meson exchanges differs and is determined by the position of the corresponding pole at unphysical time-like momenta in the quark-quark system (i.e. in the baryon). The closer a pole is to the space-like region, which determines the quark-quark interaction, the more important the given meson exchange is. The pion pole is located just at the origin of the space-like axis and thus strongly influences the quark-quark interaction in baryons in the regime where the momentum transfer is not large. In summary, the idea of the GBE model in baryons is not that there is no perturbative gluon exchange in QCD, and in particular in light baryons and mesons, but that such contributions cannot be significant for low-energy observables such as masses, where the dynamics is driven by nonperturbative phenomena, among which the crucially important ones are dynamical chiral symmetry breaking and confinement. The importance of the GBE flavor-dependent spin-spin force is not only conceptually substantiated, but it is also strongly supported by the fact that once one extracts the pion-quark coupling constant from the well known pion-nucleon one, regularizes the $\pi q$ vertex with a cutoff of the order $\Lambda_\chi \sim 1$ GeV and solves the (semi)relativistic 3-body equations exactly, the $N-\Delta$ splitting turns out to be of the order of 300 MeV (or larger!).
At the same time the Roper state is shifted down below the negative parity multiplet. The addition of any sizable phenomenological color-magnetic force between the constituent quarks explodes the baryon spectra. ## 7 Acknowledgement I am grateful to D.O. Riska for a careful reading of the manuscript.
# The Optical Gravitational Lensing Experiment. Investigating the Influence of Blending on the Cepheid Distance Scale with Cepheids in the Large Magellanic Cloud (footnote 1: Based on observations obtained with the 1.3 m Warsaw Telescope at the Las Campanas Observatory of the Carnegie Institution of Washington.) ## 1 Introduction As the number of extragalactic Cepheids discovered with HST continues to increase and the value of $H_0$ is sought from distances based on these variables (Gibson et al. 1999, Saha et al. 1999), it becomes even more important to understand the various possible systematic errors which could affect the extragalactic distance scale. Currently, the most important systematic is a bias in the distance to the Large Magellanic Cloud (LMC), which provides the zero-point calibration for the Cepheid distance scale. The LMC distance is very likely $\sim15\%$ shorter than usually assumed (e.g. Udalski et al. 1999a; Stanek et al. 1999), but it still might be considered uncertain at the $\sim10\%$ level (e.g. Jha et al. 1999). Another possible systematic, the metallicity dependence of the Cepheid Period-Luminosity (PL) relation, is also very much an open issue, with the empirical determinations ranging from 0 to $-0.4\;\mathrm{mag}\;\mathrm{dex}^{-1}$ (Freedman & Madore 1990; Sasselov et al. 1997; Kochanek 1997; Kennicutt et al. 1998, Udalski et al. 1999a). In this paper we investigate a much neglected systematic, that of the influence of blended stellar images on the derived Cepheid distances. Although Cepheids are very bright, $M_V\sim-4$ at a period of $10\;\mathrm{days}$, their images when viewed in distant galaxies are likely to be blended with other nearby, relatively bright stars. Recently Mochejska et al. (1999) showed that a significant fraction of the Cepheids discovered in M31 by the DIRECT project (e.g. Kaluzny et al. 1998; Stanek et al. 1998) were resolved into several blended stars when viewed on the HST images. The average FWHM on the DIRECT project ground-based images of M31 is about $1.5^{\prime\prime}$, or $\sim5\;\mathrm{pc}$, which corresponds to the HST-WFPC2 resolution of $0.1^{\prime\prime}$ for a galaxy at a distance of $\sim10\;\mathrm{Mpc}$. Any luminous star (or several of them) in a volume of that cross section through a galaxy could be indistinguishable from the Cepheid and would contribute to its measured flux. In this paper we investigate the effects of stellar blending on the Cepheid distance scale by studying Cepheids and their close neighbors observed in the LMC by the Optical Gravitational Lensing Experiment (OGLE: Udalski, Kubiak & Szymański 1997). The catalog of $\sim1300$ LMC Cepheids has been recently released by Udalski et al. (1999b). As the LMC is $100-500$ times closer to us than the galaxies observed by HST, a ground-based resolution of $\sim1.0^{\prime\prime}$ allows us to probe linear scales as small as $\sim0.25\;\mathrm{pc}$ in that galaxy. We describe the OGLE data used in this paper in Section 2. In Section 3 we apply these data to simulate the blending of Cepheids at various distances. In Section 4 we discuss the implications of our results for the Cepheid distance scale. In Section 5 we propose further possible studies to learn more about the effects of blending. ## 2 The Data The data used in this paper came from two catalogs produced by the OGLE project. The first one, with 1333 Cepheids detected in the 4.5 square degree area of the central parts of the LMC, has just been released (Udalski et al. 1999b) and is available from the OGLE Internet archive at http://www.astrouw.edu.pl/~ftp/ogle/.
It contains about $3.4\times10^5$ $BVI$ photometric measurements for the variables, along with their derived periods, $BVI$ photometry, astrometry and classification. The second catalog, that of $BVI$ photometry for many millions of LMC stars observed by the OGLE project, will be released soon (Udalski et al. 1999c, in preparation). Its construction will be analogous to the SMC $BVI$ maps (Udalski et al. 1998). A typical accuracy of the photometric zero points of the LMC photometric data is about $0.01-0.02$ mag in all of the $BVI$-bands and the catalog reaches $V\sim21$ mag. As our goal was to estimate the influence of blending on the Cepheid distance scale, we selected for further investigation only the 54 longest-period ($P>10\;\mathrm{days}$), fundamental-mode Cepheids. We further required their $I$-band total amplitude of variations to exceed $0.4$ mag, which corresponds to an amplitude of $\sim0.7$ mag in the $V$-band. This is to reflect the fact that in distant galaxies a typical photometric error of $\sim0.1$ mag prevents the discovery of low amplitude Cepheids. We used the $I$-band amplitude criterion because OGLE observes mostly in the $I$-band. The amplitude cutoff reduced the OGLE LMC sample to 47 Cepheids. Finally, we excluded four highly reddened Cepheids, which left us with 43 variables. It should be noted that, because of the CCD saturation limits in the OGLE data, the longest period Cepheid in our sample has $P=31\;\mathrm{days}$. This sample will be used to investigate the effects of blending on the Cepheid distance scale. ## 3 Cepheids and Their Neighbors The LMC is located at about $50\;\mathrm{kpc}$, or $\mu_{0,LMC}=18.5$, from us (for simplicity, in the rest of this paper we use the distance scale as adopted by the HST Key Project on the Extragalactic Distance Scale, hereafter: KP). We define two distances for which we simulate the blending using the LMC data: $12.5\;\mathrm{Mpc}$ ($\mu_0\approx30.5$), somewhat shorter than the median distance of $14.4\;\mathrm{Mpc}$ for the KP sample (see Table 1 in Ferrarese et al. 1999), and $25\;\mathrm{Mpc}$ ($\mu_0\approx32.0$), roughly corresponding to the most distant galaxies in which Cepheids were detected with HST (Gibson et al. 1999). The FWHM of HST-WFPC2 is $0.1^{\prime\prime}$, which at $12.5\;\mathrm{Mpc}$ and $25\;\mathrm{Mpc}$ corresponds physically to areas of radius $12.5^{\prime\prime}$ and $25^{\prime\prime}$ at the LMC distance. In Figure 1 we show two OGLE LMC Cepheids and their neighbors in the $I$-band (left panels) and the $V$-band (right panels). The images are $1^{\prime}$ in size. Also shown are two circles corresponding to the FWHM of the HST-WFPC2 camera at $12.5\;\mathrm{Mpc}$ and $25\;\mathrm{Mpc}$, i.e. $12.5^{\prime\prime}$ and $25^{\prime\prime}$ in radius. The two Cepheids were chosen to represent two different situations: the one shown in the top panels, LMC\_SC15 118594, has only one bright and nearby, red neighbor at $5.4^{\prime\prime}$ from the Cepheid, and several other, fainter neighbors further away. The second one, shown in the bottom panels, LMC\_SC11 250872, is located in a very dense stellar region and is probably a member of a small star cluster. Part of the cluster light would be included in HST-WFPC2 measurements of the Cepheid if the LMC were at $12.5\;\mathrm{Mpc}$, while at $25\;\mathrm{Mpc}$ almost the entire cluster light would be included. As discussed later in the paper, the large amount of blended light would most probably cause LMC\_SC11 250872 to elude detection when observed at large distances.
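As a quick cross-check of the angular bookkeeping above, the following sketch converts the HST resolution element into the equivalent blending radius at the LMC distance; the distances and the WFPC2 FWHM are the values quoted in the text:

```python
d_lmc = 0.05              # LMC distance in Mpc (mu_0 = 18.5)
fwhm_hst = 0.1            # HST-WFPC2 FWHM in arcsec

for d_gal in (12.5, 25.0):                    # simulated distances in Mpc
    # A radius of FWHM/2 at d_gal subtends, at the LMC distance, an angle
    # larger by the ratio of the two distances.
    r_lmc = 0.5 * fwhm_hst * d_gal / d_lmc    # arcsec at the LMC
    print(f'{d_gal:4.1f} Mpc -> blending radius {r_lmc:.1f}" at the LMC')
# Gives 12.5" and 25.0", the radii used in Figures 1 and 2.
```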
The OGLE photometric catalog of stars in the LMC extends about six magnitudes, or $\sim0.4\%$ in flux (see Figure 4 of Udalski et al. 1999b), below the bright sample of Cepheids selected in the previous Section. We want to define a criterion to separate stars which will contribute to the flux of a Cepheid when blended together from those which would contribute only to the background light in the host galaxy. We use a lower limit of 5% of the mean flux of the Cepheid for a star to be included as a blend, but will discuss different values later in the paper. We use this 5% cutoff in evaluating the sum $S_F$ (Mochejska et al. 1999) of all flux contributions in filter $F$ normalized to the flux of the Cepheid: $$S_F=\sum_{i=1}^{N_F}\frac{f_i}{f_C} \qquad (1)$$ where $f_i$ is the flux of the $i$-th companion, $f_C$ the mean intensity flux of the OGLE LMC Cepheid, and $N_F$ the total number of companions within a radius of either $12.5^{\prime\prime}$ or $25^{\prime\prime}$. In Figure 2 we show the cumulative probability distribution of the flux contribution from companions, $S_I$ (left panels) and $S_V$ (right panels), within a radius of $12.5^{\prime\prime}$ and $25^{\prime\prime}$ of the LMC Cepheids. For the smaller radius of $12.5^{\prime\prime}$, 20-25% of our sample is not blended and the contribution of blue blends is somewhat stronger than that of red blends. For the larger radius of $25^{\prime\prime}$ all 43 Cepheids are blended to some extent and the contribution of red blends is now more significant than that of blue blends. In the next Section we attempt to use our data to quantify the effects of blending on the Cepheid distance scale. ## 4 Blending and the Cepheid Distance Scale We adopt the KP procedure for deriving distances, as given by the prescription in Madore & Freedman (1991). The LMC is assumed to be at a distance modulus of $\mu_{0,LMC}=18.50$, with the LMC Cepheids reddened by $E(B-V)=0.10$ mag. When we apply this prescription to the $V,I$ data of our 43 OGLE Cepheids, we obtain values of $\mu_{0,LMC}=18.56$ and $E(B-V)=0.12$ mag, i.e. somewhat discrepant, but basically indicating fairly good agreement in photometry. The $rms$ scatter around the fitted P-L relations (with their slopes fixed to those of the KP prescription) is $0.17$ mag in the $V$-band and $0.13$ mag in the $I$-band. As the next step we add the contributions from the nearby stars to each Cepheid (applying the 5% cutoff discussed in the previous section) in the $V$ and $I$ bands separately and repeat the distance derivation procedure. To take into account the fact that heavily blended Cepheids would elude detection, we require that their blended $I$-band total amplitude of variations exceed $0.4$ mag. The results for the simulated distances of $12.5$ and $25\;\mathrm{Mpc}$ are shown in Figure 3. This Figure shows a number of interesting features and deserves detailed discussion. For the simulated distance of $12.5\;\mathrm{Mpc}$, the sample of Cepheids is reduced to 35 and the derived distance modulus is smaller than the “true” (unblended) value of $\mu_{0,LMC}=18.56$ by $0.07$ mag. The reddening estimate is $E(B-V)=0.10$ mag, smaller than for the unblended sample because of the slightly larger contribution of blue blends, as discussed in the previous Section. The $rms$ scatter around the fitted P-L relations increases to $0.22$ mag in the $V$-band and $0.17$ mag in the $I$-band. This is because, while there are now Cepheids in the sample with fairly substantial blending, it still contains Cepheids with no blending (see Figure 2).
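The selection effect just described is straightforward to simulate. The sketch below first evaluates the blending sum of equation (1) with the 5% cutoff and then shows how an added constant flux $S\cdot f_C$ compresses the magnitude amplitude, pushing a Cepheid below the 0.4 mag detection cut. All neighbor fluxes and the 0.6 mag unblended amplitude are hypothetical inputs, and the light curve is idealized as symmetric about the mean flux:

```python
import math

def blending_sum(f_cepheid, neighbors, radius, cutoff=0.05):
    """Equation (1): sum neighbor fluxes (cutoff applied) over the Cepheid flux."""
    return sum(f / f_cepheid for sep, f in neighbors
               if sep <= radius and f >= cutoff * f_cepheid)

def blended_amplitude(amp_mag, s_blend):
    """Magnitude amplitude after adding a constant flux s_blend * <f_C>."""
    f_max = 10 ** (0.2 * amp_mag)    # fluxes taken symmetric about unit mean
    f_min = 10 ** (-0.2 * amp_mag)
    return 2.5 * math.log10((f_max + s_blend) / (f_min + s_blend))

# Hypothetical neighbors of a unit-flux Cepheid: (separation ["], flux).
neighbors = [(5.4, 0.30), (10.0, 0.02), (20.0, 0.15)]
for radius in (12.5, 25.0):
    s = blending_sum(1.0, neighbors, radius)
    print(f'radius {radius:4.1f}": S = {s:.2f}, '
          f'blended amplitude {blended_amplitude(0.6, s):.2f} mag')
# The 0.02-flux star falls below the 5% cutoff. An unblended 0.6 mag
# amplitude drops below the 0.4 mag cut once S exceeds roughly 0.5.
```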
The situation becomes quite dramatic at the simulated distance of $25\;\mathrm{Mpc}$ (right panel of Figure 3). There are only 13 Cepheids left with an $I$-band amplitude larger than $0.4$ mag, with the shorter period (and therefore fainter) Cepheids preferentially removed. The derived distance modulus is smaller than the “true” value by $0.36$ mag, with a reddening estimate of $E(B-V)=0.11$ mag. What is very interesting is that the $rms$ scatter around the fitted P-L relations now decreases to $0.17$ mag in the $V$-band and $0.13$ mag in the $I$-band. This is because the sample of Cepheids, while much smaller, is now more homogeneous when it comes to blending. All Cepheids are blended to some extent (see Figure 2), with the strongly blended cases removed by the high amplitude requirement. Since there is such a dramatic difference in blending between the two distances simulated so far, we decided to investigate the blending for a larger number of distances. The results are shown in Figure 4, where the blending difference between the true and measured distance modulus is shown as a function of the simulated distance. Also shown is the histogram of distances for the 24 galaxies for which Cepheid distances with HST were measured or re-measured by the KP (Ferrarese et al. 1999). The 5% cutoff which we employed to define blended stars is somewhat arbitrary and in reality is most likely a function of the data reduction procedure, the signal-to-noise in the images, etc. We decided to investigate the dependence of the blending effects on the cutoff value by using two additional cutoffs: 2.5% and 10%. The results are shown with different symbols in Figure 4. Clearly, the exact value of the blending difference between the true and measured distance modulus depends on the applied cutoff, but the overall trend remains the same. Looking at the histogram of distances in Figure 4 we can see that half of the KP sample is likely to exhibit a blending bias greater than $0.1$ mag, and in some cases it can be as large as $0.3$ mag. Clearly the influence of blending on the Cepheid distance scale is potentially very large and cannot be neglected. ## 5 Further Studies of Blending and Conclusions The study of Mochejska et al. (1999) showed that a significant fraction of the Cepheids discovered in M31 by the DIRECT project were resolved into several blended stars when viewed on the HST images. As we have shown in this paper, modelling of the blending effects using Cepheids in the LMC indicates a possibly large, $0.1-0.3$ mag bias when deriving Cepheid distances to galaxies observed with HST. In addition, blending is a factor which always contributes in only one direction, and therefore it will not average out when a large sample of galaxies is considered. The sign of the blending effect on $H_0$ is opposite to that caused by the lower LMC distance (e.g. Udalski et al. 1999a; Stanek et al. 1999). We would like to point out that the blending of Cepheids is likely not only to affect the studies of these stars in different galaxies, but might also affect differential studies, such as that of Kennicutt et al. (1998) in the spiral galaxy M101.
Their observed effect, that metal-rich Cepheids (and therefore those closer to the center of the galaxy) appear brighter and closer than metal-poor stars (therefore further away from the center), could be partially caused by the increased blending closer to the center of the galaxy, although at this point we make no attempt to estimate how much of this effect would indeed be due to blending. The bar of the LMC, where most of the data discussed in this paper were collected by OGLE, seems on average to have a higher surface brightness than a typical KP galaxy (Macri et al. 1999). It would be desirable to further establish the importance of blending for the Cepheid distance scale using a variety of methods and data in a number of different galaxies. Mochejska et al. (2000, in preparation) are now studying the HST archival images of a large sample of $\sim100$ Cepheids detected in M33 (Macri et al. 2000, in preparation) by the DIRECT project. An approach analogous to that used in this paper will be employed by Stanek & Udalski (1999, in preparation) for a sample of OGLE Cepheids in the Small Magellanic Cloud, which will have the advantage of including Cepheids with periods of up to $P\sim50\;\mathrm{days}$ in a lower surface brightness system (de Vaucouleurs 1957). Another approach, more closely reproducing the procedure employed by the KP when using Cepheids to determine distances, would be to use HST images of relatively nearby galaxies, such as NGC3031 or NGC5253 (Ferrarese et al. 1999), degrading them in resolution and signal-to-noise so as to represent much more distant galaxies. Unfortunately, much of the data for the several closest galaxies were taken before the refurbishment of HST and are therefore of inferior spatial resolution compared to the later WFPC2 data. All these studies can provide only an approximate answer to the blending problem, as each individual galaxy can in principle be different in its blending properties. One would like to find a way to constrain or eliminate blending in each individual case. As pointed out by Mochejska et al. (1999), one obvious solution to the problem of blending would be to obtain data with better spatial resolution, such as planned for the Next Generation Space Telescope (NGST). While there will be a desire to use NGST to observe much more distant galaxies than with HST, it would be of great value to study some of the not-so-distant ones as well. Another possible approach would be to try to circumvent the blending problem by developing and applying a Period-Amplitude-Luminosity (PAL) relation for Cepheids (Paczyński 1999, private communication), together with image subtraction techniques such as that developed by Alard & Lupton (1998). We appreciate helpful discussions and comments on this work by Peter Garnavich, John Huchra, Lucas Macri, Barbara Mochejska, Bohdan Paczyński and Dimitar Sasselov. Support for KZS was provided by NASA through Hubble Fellowship grant HF-01124.01-99A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. AU acknowledges support by the Polish KBN grant 2P03D00814. Partial support for the OGLE project was provided with the NSF grants AST-9530478 and AST-9820314 to B. Paczyński.
## 1 Model ### 1.1 The Effective Lagrangian The following is a condensation of material more fully presented in refs. and aims to bring together the key points necessary for the subsequent discussion of phenomenological consequences. In those references, as here, the Kähler $U(1)$ superspace formalism of ref. is used throughout. Supersymmetry breaking is implemented via condensation of gauginos charged under the hidden sector gauge group $\mathcal{G}=\prod_a\mathcal{G}_a$, which is taken to be a subgroup of $E_8$. For each gaugino condensate a vector superfield $V_a$ is introduced, and the gaugino condensate superfields $U_a\sim\mathrm{Tr}\left(\mathcal{W}^\alpha\mathcal{W}_\alpha\right)_a$ are then identified as the (anti-)chiral projections of the vector superfields: $$U_a=\left(\mathcal{D}_{\dot{\alpha}}\mathcal{D}^{\dot{\alpha}}-8R\right)V_a,\qquad \overline{U}_a=\left(\mathcal{D}^{\alpha}\mathcal{D}_{\alpha}-8\overline{R}\right)V_a. \qquad (1)$$ The dilaton field (in the linear multiplet formalism used here) is the lowest component of the vector superfield $V=\sum_aV_a$: $\ell=V|_{\theta=\overline{\theta}=0}$. Note that none of the individual lowest components $V_a|_{\theta=\overline{\theta}=0}$ will appear in the effective theory component Lagrangian. In the class of orbifold compactifications we will be considering there are three untwisted moduli chiral superfields $T^I$ and matter chiral superfields $\Phi^A$ with Kähler potential $$K=k(V)+\sum_Ig^I+\sum_Ae^{\sum_Iq_I^Ag^I}\left|\Phi^A\right|^2+\mathcal{O}\left(\Phi^4\right),\qquad g^I=-\ln\left(T^I+\overline{T}^I\right), \qquad (2)$$ where the $q_I^A$ are the modular weights of the fields $\Phi^A$. The relevant part of the complete effective Lagrangian is then $$\mathcal{L}_{\mathrm{eff}}=\mathcal{L}_{\mathrm{KE}}+\mathcal{L}_{\mathrm{VY}}+\mathcal{L}_{\mathrm{pot}}+\sum_a\mathcal{L}_a+\mathcal{L}_{\mathrm{GS}}, \qquad (3)$$ where $$\mathcal{L}_{\mathrm{KE}}=\int\mathrm{d}^4\theta\,E\left[-2+f(V)\right],\qquad k(V)=\ln V+g(V), \qquad (4)$$ is the Lagrangian density for the gravitational sector coupled to the vector multiplet and gives the kinetic energy terms for the dilaton, chiral multiplets, gravity superfields and tree-level Yang-Mills terms. Here the functions $f(V)$ and $g(V)$ represent nonperturbative corrections to the Kähler potential arising from string effects. The two functions $f$ and $g$ are related by the requirement that the Einstein term in (4) have canonical normalization: $$V\frac{\mathrm{d}g(V)}{\mathrm{d}V}=-V\frac{\mathrm{d}f(V)}{\mathrm{d}V}+f(V), \qquad (5)$$ and obey the weak-coupling boundary conditions $f(0)=g(0)=0$. In the presence of these nonperturbative effects the relationship between the dilaton and the effective field theory gauge coupling becomes $g^2/2=\ell/\left(1+f(\ell)\right)$. The second term in (3) is a generalization of the original Veneziano-Yankielowicz superpotential term, $$\mathcal{L}_{\mathrm{VY}}=\frac{1}{8}\sum_a\int\mathrm{d}^4\theta\,\frac{E}{R}\,U_a\left[b_a^{\prime}\ln\left(e^{-K/2}U_a\right)+\sum_\alpha b_a^\alpha\ln\left[\left(\Pi^\alpha\right)^{p_\alpha}\right]\right]+\mathrm{h.c.}, \qquad (6)$$ which involves the gauge condensates $U_a$ as well as possible gauge-invariant matter condensates described by chiral superfields $\Pi^\alpha\sim\prod_A\left(\Phi^A\right)^{n_\alpha^A}$. Neither the gaugino nor the matter condensate superfields are taken to be propagating.
The coefficients $b_a^{\prime}$, $b_a^\alpha$ and $p_\alpha$ are determined by demanding the correct transformation properties of the expression in (6) under chiral and conformal transformations, and yield the following relations: $$b_a^{\prime}=\frac{1}{8\pi^2}\left(C_a-\sum_AC_a^A\right),\qquad \sum_{\alpha,A}b_a^\alpha n_\alpha^Ap_\alpha=\sum_A\frac{C_a^A}{4\pi^2},\qquad p_\alpha\sum_An_\alpha^A=3\quad\forall\,\alpha. \qquad (7)$$ The final condition amounts to choosing the value of $p_\alpha$ so that the effective operator $\left(\Pi^\alpha\right)^{p_\alpha}$ has mass dimension three. In (7) the quantities $C_a$ and $C_a^A$ are the quadratic Casimir operators for the adjoint and matter representations, respectively. Given the above relations it is also convenient to define the combination $$b_a\equiv b_a^{\prime}+\sum_\alpha b_a^\alpha=\frac{1}{8\pi^2}\left(C_a-\frac{1}{3}\sum_AC_a^A\right) \qquad (8)$$ which is proportional to the one-loop beta-function coefficient for the condensing gauge group $\mathcal{G}_a$. The third term in (3) is a superpotential term for the matter condensates consistent with the symmetries of the underlying theory: $$\mathcal{L}_{\mathrm{pot}}=\frac{1}{2}\int\mathrm{d}^4\theta\,\frac{E}{R}\,e^{K/2}\,W[\left(\Pi^\alpha\right)^{p_\alpha},T^I]+\mathrm{h.c.} \qquad (9)$$ We will adopt the same set of simplifying assumptions taken up in ref., namely that for fixed $\alpha$, $b_a^\alpha\neq0$ for only one value of $a$. Then $u_a=0$ unless $W_\alpha\neq0$ for every value of $\alpha$ for which $b_a^\alpha\neq0$. We next assume that there are no unconfined matter fields charged under the hidden sector gauge group and ignore possible dimension-two matter condensates involving vector-like pairs of matter fields. This allows a simple factorization of the superpotential of the form $$W[\left(\Pi^{p_\alpha}\right),T]=\sum_\alpha W_\alpha(T)\left(\Pi^\alpha\right)^{p_\alpha}, \qquad (10)$$ where the functions $W_\alpha$ are given by $$W_\alpha(T)=c_\alpha\prod_I\left[\eta\left(T^I\right)\right]^{2\left(p_\alpha q_I^\alpha-1\right)}. \qquad (11)$$ Here $q_I^\alpha=\sum_An_\alpha^Aq_I^A$ and the Yukawa coefficients $c_\alpha$, while a priori unknown variables, are taken to be of $\mathcal{O}(1)$. The function $\eta(T^I)$ is the Dedekind function and its presence in (11) ensures the modular invariance of this term in the Lagrangian. The remaining terms in (3) include the quantum corrections from light field loops to the unconfined Yang-Mills couplings and the Green-Schwarz (GS) counterterm introduced to ensure modular invariance (footnote 1: Not included in this paper are string loop corrections $(\mathcal{L}_{\mathrm{th}})$, which vanish for orbifold compactifications with no $N=2$ supersymmetry sector.). The latter is given by the expression $$\mathcal{L}_{\mathrm{GS}}=\int\mathrm{d}^4\theta\,E\,V\,V_{\mathrm{GS}}, \qquad (12)$$ $$V_{\mathrm{GS}}=b\sum_Ig^I+\sum_Ap_Ae^{\sum_Iq_I^Ag^I}\left|\Phi^A\right|^2+\mathcal{O}\left(\left|\Phi^A\right|^4\right), \qquad (13)$$ where $b\equiv C_{E_8}/8\pi^2\simeq0.38$ is proportional to the beta-function coefficient for the group $E_8$ and the coefficients $p_A$ are as yet undetermined.
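For orientation, relations (7) and (8) are easy to evaluate for simple hidden-sector configurations. The sketch below assumes an $SU(N)$ condensing group with $N_f$ matter fields in the fundamental representation, taking $C_a=N$ and $C_a^A=1/2$ per field; this index normalization is an assumption on our part rather than a statement from the text:

```python
from math import pi

def beta_coefficients(N, N_f):
    """b_a' of eq. (7) and b_a of eq. (8) for SU(N) with N_f fundamentals."""
    C_a = N                  # adjoint quadratic Casimir of SU(N)
    sum_CA = 0.5 * N_f       # 1/2 per fundamental field (assumed normalization)
    b_prime = (C_a - sum_CA) / (8 * pi**2)
    b_a = (C_a - sum_CA / 3.0) / (8 * pi**2)
    return b_prime, b_a

for N, N_f in [(5, 0), (6, 4), (7, 0)]:
    bp, ba = beta_coefficients(N, N_f)
    print(f"SU({N}), N_f = {N_f}:  b_a' = {bp:.4f},  b_a = {ba:.4f}")
# Pure SU(N) Yang-Mills gives b_a' = b_a = N/(8 pi^2), e.g. 0.063 for SU(5);
# the E_8 value b = 30/(8 pi^2) ~ 0.38 quoted above follows the same rule.
```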
As for the operators $\mathcal{L}_a$ in (3), their rather involved form in curved superspace was worked out in ref. and will not be repeated here. Their importance for this work lies in their contributions to the supersymmetry-breaking gaugino masses at the condensation scale arising from the superconformal anomaly - a contribution that was recently emphasized by a number of authors. We will return to these in Section 3.1. ### 1.2 Condensation and Dilaton Stabilization The Lagrangian in (3) can be expanded into component form using the standard techniques of the Kähler superspace formalism of supergravity. In ref. the bosonic part of the Lagrangian relevant to dilaton stabilization and gaugino condensation was presented and the equations of motion for the nonpropagating fields were solved. In particular, the equations of motion for the auxiliary fields of the condensates $U^a$ give $$\rho_a^2=e^{-2b_a^{\prime}/b_a}\,e^{K}\,e^{-(1+f)/b_a\ell}\,e^{-(b/b_a)\sum_Ig^I}\,\prod_I\left|\eta(t_I)\right|^{4(b-b_a)/b_a}\,\prod_\alpha\left|b_a^\alpha/4c_\alpha\right|^{2b_a^\alpha/b_a}, \qquad (14)$$ where $t_I\equiv T_I|_{\theta=\overline{\theta}=0}$ and $u_a=U_a|_{\theta=\overline{\theta}=0}\equiv\rho_ae^{i\omega_a}$. Upon substituting for the gauge coupling via the relation $g^2/2=\ell/\left(1+f(\ell)\right)$ we recognize the expected one-instanton form for gaugino condensation. Expression (14) encodes more information, however, than simply the one-loop running of the gauge coupling. In ref. the loop corrections to the gauge coupling constants were computed using a manifestly supersymmetric Pauli-Villars regularization. The (moduli independent) corrections were identified with the renormalization group invariant $$\delta_a=\frac{1}{g_a^2(\mu)}-\frac{3b_a}{2}\ln\mu^2+\frac{2C_a}{16\pi^2}\ln g_a^2(\mu)+\frac{2}{16\pi^2}\sum_AC_a^A\ln Z_a^A(\mu). \qquad (15)$$ Using the above expression it is possible to solve for the scale at which the $1/g^2(\mu)$ term becomes negligible relative to the $\ln g^2(\mu)$ term - effectively looking for the “all loop” Landau pole of the coupling constant. This scale is related to the string scale by $$\mu_L^2\simeq\mu_{\mathrm{str}}^2\,e^{-2/3b_ag_a^2(\mu_{\mathrm{str}})}\,\prod_A\left[Z_a^A(\mu_{\mathrm{str}})/Z_a^A(\mu_L)\right]^{C_a^A/12\pi^2b_a}. \qquad (16)$$ Now comparing the effective Lagrangian given in Section 1.1 with the field theory loop calculation given in ref. shows that the two agree provided we identify the wave function renormalization coefficients $Z_a^A$ with the quantity $|4W_\alpha/b_a^\alpha|^2$. This is precisely what is needed to produce the final product in the condensate expression given in (14), indicating that the condensation scale represents the scale at which the coupling becomes strong as computed using the so-called “exact” beta-function. Note that this final factor introduces the unknown Yukawa coefficients $c_\alpha$ into the scale of supersymmetry breaking. Such dependence of the gaugino condensate on the parameters of the superpotential is not unexpected, and has in fact been demonstrated in the case of supersymmetric QCD as well as in certain models of supersymmetric Yang-Mills theories coupled to chiral matter.
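To get a feeling for how steeply the condensation scale depends on $b_a$, one can evaluate the leading exponential of eq. (16). This is only a rough sketch: the string scale and coupling below are assumed round numbers, and none of the wave-function, threshold or Yukawa factors of eqs. (14) and (16) are included.

```python
from math import exp, pi

mu_str = 5e17             # GeV, an assumed string scale
g2 = 0.5                  # g^2 at the string scale (alpha_str ~ 0.04)

for name, b_a in [("SU(5), pure Yang-Mills", 5.0 / (8 * pi**2)),
                  ("SU(4), pure Yang-Mills", 4.0 / (8 * pi**2))]:
    mu_L = mu_str * exp(-1.0 / (3.0 * b_a * g2))   # leading factor of eq. (16)
    print(f"{name}: b_a = {b_a:.4f}, mu_L ~ {mu_L:.2e} GeV")
# The scale falls exponentially as b_a shrinks, which is why the condensate
# with the largest beta-function coefficient dominates the breaking scale.
```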
This last Yukawa-related factor has the virtue of allowing two different hidden sector configurations which result in the same beta-function to condense at widely different scales. In order to go further and make quantitative statements about the scale of gaugino condensation (and hence supersymmetry breaking), it is necessary to specify some form for the nonperturbative effects represented by the functions $f$ and $g$. The parameterization adopted in ref. was originally motivated by Shenker and was of the form $\exp(-1/g_{\mathrm{str}})$, where $g_{\mathrm{str}}$ is the string coupling constant. A consensus seems to be forming around this characterization of string nonperturbative effects, and the function $f(V)$ in (4) will be taken to be of the form $$f(V)=\left[A_0+A_1/\sqrt{V}\right]e^{-B/\sqrt{V}}, \qquad (17)$$ which was shown to allow dilaton stabilization at weak to moderate string coupling with parameters that are all of $\mathcal{O}(1)$. The benefits of invoking string-inspired nonperturbative effects of the form of (17) have recently been explored by others in the literature. The scalar potential for the moduli $t_I$ is minimized at the self-dual points $t_I=1$ or $t_I=\exp(i\pi/6)$, where the corresponding F-components $F_I$ of the chiral superfields $T^I$ vanish. At these points the dilaton potential is given by $$V(\ell)=\frac{1}{16\ell^2}\left(1+\ell\frac{\mathrm{d}g}{\mathrm{d}\ell}\right)\left|\sum_a(1+b_a\ell)u_a\right|^2-\frac{3}{16}\left|\sum_ab_au_a\right|^2. \qquad (18)$$ As an example, the potential (18) can be minimized with vanishing cosmological constant and $\alpha_{\mathrm{str}}=0.04$ for $A_0=3.25$, $A_1=1.70$ and $B=0.4$ in expression (17). ## 2 Phenomenological Implications ### 2.1 Scale of Supersymmetry Breaking With the adoption of (17), the scale of gaugino condensation can be obtained once the following are specified: (1) the condensing subgroup(s) of the original hidden sector gauge group $E_8$, (2) the representations of the matter fields charged under the condensing subgroup(s), (3) the Yukawa coefficients in the superpotential for the hidden sector matter fields, and (4) the value of the string coupling constant at the compactification scale, which in turn determines the coefficients in (17) necessary to minimize the scalar potential (18). A great deal of simplification in the above parameter space can be obtained by making the ansatz that all of the matter in the hidden sector which transforms under a given subgroup $\mathcal{G}_a$ is of the same representation, such as the fundamental representation. Then the sum of the coefficients $b_a^\alpha$ over the number of condensate fields labeled by $\alpha$ $(\alpha=1,\dots,N_c)$ can be replaced by one effective variable: $$\sum_\alpha b_a^\alpha\to\left(b_a^\alpha\right)_{\mathrm{eff}},\qquad \left(b_a^\alpha\right)_{\mathrm{eff}}=N_cb_a^{\mathrm{rep}}. \qquad (19)$$ In the above equation $b_a^{\mathrm{rep}}$ is proportional to the quadratic Casimir operator for the matter fields in the common representation, and the number of condensates, $N_c$, can range from zero to a maximum value determined by the condition that the gauge group presumed to be condensing must remain asymptotically free.
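The parameterization (17) and the corrected coupling relation are simple enough to tabulate directly. The sketch below uses the parameter set quoted above; whether the minimum of the full potential (18) actually sits at the dilaton value needed for $\alpha_{\mathrm{str}}=0.04$ is not checked here.

```python
from math import exp, sqrt

def f(l, A0=3.25, A1=1.70, B=0.4):
    """Nonperturbative correction of eq. (17), with the quoted parameters."""
    return (A0 + A1 / sqrt(l)) * exp(-B / sqrt(l))

for l in (0.05, 0.1, 0.25, 0.5):
    g2 = 2.0 * l / (1.0 + f(l))      # g^2/2 = l / (1 + f(l))
    print(f"l = {l:4.2f}:  f(l) = {f(l):5.3f},  g^2 = {g2:5.3f}")
# f(l) -> 0 as l -> 0, respecting the weak-coupling boundary condition,
# so the tree-level relation g^2 = 2*l is recovered at very weak coupling.
```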
The redefinition in (19) essentially takes the coefficients $b_a^\alpha$, which we are free to choose in our effective Lagrangian up to the conditions given in (7), and assigns the same value to each condensate. The variable $b_a^\alpha$ can then be eliminated in (14) in favor of $(b_a^\alpha)_{\mathrm{eff}}$, provided the simultaneous redefinition $c_\alpha\to(c_\alpha)_{\mathrm{eff}}$ is made so as to keep the product in (14) invariant: $$\rho_+^2\propto\left|\frac{\left(b_a^\alpha\right)_{\mathrm{eff}}}{4\left(c_\alpha\right)_{\mathrm{eff}}}\right|^{2\left(b_a^\alpha\right)_{\mathrm{eff}}/b_a}. \qquad (20)$$ With the assumption of universal representations for the matter fields, this implies $$\left(c_\alpha\right)_{\mathrm{eff}}\equiv N_c\left(\prod_{\alpha=1}^{N_c}c_\alpha\right)^{1/N_c}, \qquad (21)$$ which we assume to be an $\mathcal{O}(1)$ number, if not slightly smaller. From a determination of the condensate value $\rho$ using (14), the supersymmetry-breaking scale can be found by solving for the gravitino mass, given by $$M_{3/2}=\frac{1}{3}\left|M\right|=\frac{1}{4}\left|\sum_ab_au_a\right|. \qquad (22)$$ In ref. it was shown that in the case of multiple gaugino condensates the scale of supersymmetry breaking is governed by the condensate with the largest one-loop beta-function coefficient. Hence in the following it is sufficient to consider the case of just one condensate, with beta-function coefficient denoted $b_+$: $$M_{3/2}=\frac{1}{4}b_+\left|u_+\right|. \qquad (23)$$ As an illustration of this point, the gravitino mass for the case of pure supersymmetric Yang-Mills $SU(5)$ condensation (no hidden sector matter fields) would be $4000$ GeV. The addition of a further condensation of pure supersymmetric Yang-Mills $SU(4)$ gauginos would only add an additional $0.004$ GeV to the mass. Now for given values of $(c_\alpha)_{\mathrm{eff}}$ and $g_{\mathrm{str}}$, the condensation scale $$\Lambda_{\mathrm{cond}}=M_{\mathrm{Planck}}\left\langle\rho_+^2\right\rangle^{1/6} \qquad (24)$$ and the gravitino mass (23) can be plotted in the $\{b_+,(b_+^\alpha)_{\mathrm{eff}}\}$ plane. The sharp variation of the condensate value with the parameters of the theory, as anticipated by the functional form in (14), is apparent in the contour plot of Figure 1. The dependence of the gravitino mass on the group theory parameters is even more profound. Figure 1 gives contours for the gravitino mass between $100$ GeV and $100$ TeV. Clearly, the region of parameter space for which a phenomenologically preferred value of the supersymmetry-breaking scale occurs is a rather limited slice of the entire space available. The variation of the gravitino mass as a function of the Yukawa parameters $c_\alpha$ is shown in Figure 2. On the horizontal axis there are no matter condensates ($b_a^\alpha=0\;\forall\,\alpha$), so there is no dependence on the variable $(c_\alpha)_{\mathrm{eff}}$. For values of $(c_\alpha)_{\mathrm{eff}}<0.1$ the contours of gravitino mass in the TeV region lie beyond the limiting value of $b_+\simeq0.09$ and are thus in a region of parameter space which is inaccessible to a model in which the unified coupling at the string scale is $\alpha_{\mathrm{str}}=0.04$ or larger. For very large values of the effective Yukawa parameter the gravitino mass contours approach an asymptotic value very close to the case shown here for $(c_\alpha)_{\mathrm{eff}}=50$.
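A back-of-the-envelope version of eqs. (23)-(24) shows why the viable region is so narrow. In the sketch below the condensation scale is an assumed illustrative input (it is not taken from the fits behind Figure 1), the reduced Planck mass is set to $2.4\times10^{18}$ GeV, and $b_+$ is the pure $SU(5)$ value from eq. (8):

```python
from math import pi

M_P = 2.4e18                           # GeV, assumed reduced Planck mass
b_plus = 5.0 / (8 * pi**2)             # pure SU(5) Yang-Mills, eq. (8)

Lambda_cond = 1.0e14                   # GeV, an assumed condensation scale
rho_plus = (Lambda_cond / M_P) ** 3    # invert eq. (24), in Planck units
M_32 = 0.25 * b_plus * rho_plus * M_P  # eq. (23)
print(f"Lambda_cond = {Lambda_cond:.1e} GeV  ->  M_3/2 ~ {M_32:.1e} GeV")
# A TeV-range gravitino requires Lambda_cond ~ 1e14 GeV; since rho_+ enters
# the scale cubed, small changes in b_+ shift M_3/2 by orders of magnitude.
```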
We might therefore consider the shaded region between the two sets of contours as roughly the maximal region of viable parameter space for a given value of the unified coupling at the string scale. ### 2.2 Implications for the Hidden Sector Having examined in Section 2.1 some of the universal constraints placed on any string-derived model proposing to describe low energy physics, it is natural to ask whether the region of phenomenological viability (roughly the shaded area in Figure 2) can be used to constrain the matter content of the hidden sector. Upon orbifold compactification the $E_8$ gauge group of the hidden sector is presumed to break to some subgroup(s) of $E_8$, and the set of all such possible breakings has been computed for $Z_N$ orbifolds. Under the working assumption that only the subgroup with the largest beta-function coefficient enters into the low-energy phenomenology, there is then a finite number of possible groups to consider: $$\{\begin{array}{c}E_7,\;E_6\\ SO(16),\;SO(14),\;SO(12),\;SO(10),\;SO(8)\\ SU(9),\;SU(8),\;SU(7),\;SU(6),\;SU(5),\;SU(4),\;SU(3)\end{array} \qquad (25)$$ For each of the above gauge groups equations (7) and (8) define a line in the $\{b_+,(b_+^\alpha)_{\mathrm{eff}}\}$ plane. These lines will all be parallel to one another, with horizontal intercepts at the beta-function coefficient for the corresponding pure Yang-Mills theory. The vertical intercept then indicates the amount of matter required to prevent the group from being asymptotically free, thereby eliminating it as a candidate source for the supersymmetry breaking described in Section 2.1. In Figure 3 we have overlaid these gauge lines on a plot similar to Figure 2. We restrict the Yukawa couplings of the hidden sector to the more reasonable range $1\leq(c_\alpha)_{\mathrm{eff}}\leq10$ and give three different values of the string coupling at the string scale. The choice of string coupling constant is made when specifying the boundary conditions for solving the dilaton scalar potential, as described in Section 1.2. Changing this boundary condition will affect the scale of gaugino condensation through equation (14), altering the supersymmetry-breaking scale for a fixed point in the $\{b_+,(b_+^\alpha)_{\mathrm{eff}}\}$ plane. Demanding larger values of $g_{\mathrm{str}}$ will result in the shifting of the contours of fixed gravitino mass towards the origin, as in Figure 3. Such large values of $\alpha_{\mathrm{str}}$ have recently been invoked as part of a mechanism for stabilizing the dilaton and/or as a consequence of reconciling the apparent scale of gauge unification in the minimal supersymmetric standard model (MSSM) with the scale predicted by string theory. We will return to such issues in Section 4. A typical matter configuration would be represented in Figure 3 by a point on one of the gauge group lines. As each field adds a discrete amount to $(b_a^\alpha)_{\mathrm{eff}}$ and the fields must come in gauge-invariant multiples, the set of all such possible hidden sector configurations is necessarily a finite one (footnote 2: For example, one cannot obtain values of $b_+$ arbitrarily close to zero in practical model building.).
The number of possible configurations consistent with a given choice of $`\{\alpha _{\mathrm{str}},\left(c_\alpha \right)_{\mathrm{eff}}\}`$ and supersymmetry-breaking scale $`M_{3/2}`$ is quite restricted. For example, Figure 3 immediately rules out hidden sector gauge groups smaller than SU(6) for weak coupling at the string scale $`\left(g_{\mathrm{str}}^2\simeq 0.5\right)`$. Furthermore, even moderately larger values of the string coupling at unification become increasingly difficult to obtain, as it is then necessary to postulate a hidden sector with a very small gauge group and particular combinations of matter to force the beta-function coefficient to small values.

## 3 Constraints from the Low-Energy Spectrum

### 3.1 Soft Supersymmetry-Breaking Terms

Simply requiring that the scale of supersymmetry breaking be in a reasonable range of energy values (i.e. within an order of magnitude of 1 TeV) can put significant constraints on the dynamics of the hidden sector. Requiring further that the pattern of supersymmetry breaking be consistent with observed electroweak symmetry breaking and direct experimental bounds on superpartner masses can restrict the parameter space even more. The pattern of supersymmetry breaking is determined by the appearance of soft scalar masses, gaugino masses and trilinear couplings at the condensation scale. The gaugino masses in the one-condensate approximation, including the contribution from the quantum effects of light fields arising at one loop from the superconformal anomaly, are given by

$$m_{\lambda _a}|_{\mu =\mathrm{\Lambda }_{\mathrm{cond}}}=\frac{g_a^2\left(\mu \right)}{2}\left[\frac{3b_+\left(1+b_a^{\prime }\ell \right)}{1+b_+\ell }-3b_a+\sum _A\frac{C_a^Ap_A\left(1+b_+\ell \right)}{4\pi ^2b_+\left(1+p_A\ell \right)}\right]M_{3/2}.$$ (26)

The incorporation of scalar masses and trilinear terms in the scalar potential for observable sector matter fields $`\mathrm{\Phi }^A`$ depends on the form of the Kähler potential and the nature of the couplings of observable sector matter fields to the Green-Schwarz counterterm. Adopting the Kähler potential assumed in (2) and the counterterm of (13), the scalar masses are given in the one-condensate approximation by

$$m_A^2=\frac{1}{16}\rho _+^2\frac{\left(p_A-b_+\right)^2}{\left(1+p_A\ell \right)^2},$$ (27)

and the trilinear “A-terms” in the scalar potential are given by

$$V_A\left(\varphi \right)=Ae^{K/2}W\left(\varphi \right)$$ (28)

with

$$A=\frac{3}{4}\overline{u}_+\left[\frac{p_A-b_+}{1+p_A\ell }+\frac{b_+}{1+b_+\ell }\right].$$ (29)

As noted in , the fact that (27) and (29) are independent of the modular weights $`q_I^A`$ of the individual observable sector fields is the result of the vanishing of the auxiliary fields $`F^I`$ in the vacuum. This is a manifestation of the so-called “dilaton dominated” scenario of supersymmetry breaking, for which flavor-changing neutral currents might be naturally suppressed. For this to indeed occur, however, it is also necessary to make the assumption that the couplings $`p_A`$ are the same for the first and second generations of matter. To analyze the low-energy particle spectrum it is necessary to choose a value of $`p_A`$ for each generation of matter fields. If the Green-Schwarz term (13) is independent of the $`\mathrm{\Phi }^A`$ so that $`p_A=0`$, then from (27) $`m_A=M_{3/2}`$.
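The boundary conditions (27) and (29) are simple enough to encode directly; the following sketch (our illustration, with hypothetical input values, and omitting the anomaly pieces of eq. (26)) also checks the statement just made, that $`p_A=0`$ reproduces $`m_A=M_{3/2}`$:

```python
def scalar_mass_sq(rho_plus, p_A, b_plus, ell):
    """Eq. (27): m_A^2 = rho_+^2 (p_A - b_+)^2 / [16 (1 + p_A ell)^2]."""
    return (rho_plus ** 2 / 16.0) * (p_A - b_plus) ** 2 / (1 + p_A * ell) ** 2

def a_term(u_plus, p_A, b_plus, ell):
    """Eq. (29), treating u_+ as real for illustration."""
    return 0.75 * u_plus * ((p_A - b_plus) / (1 + p_A * ell)
                            + b_plus / (1 + b_plus * ell))

# Consistency check: p_A = 0 gives m_A = M_3/2 = b_+ |u_+| / 4 (eq. 23)
rho, b_plus, ell = 1.0e-14, 0.09, 1.0        # hypothetical Planck-unit values
m32 = 0.25 * b_plus * rho
assert abs(scalar_mass_sq(rho, 0.0, b_plus, ell) ** 0.5 - m32) < 1e-20
```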
We will call such a generation “light.” On the other hand, it was postulated in that the Green-Schwarz term may well depend only on the combination $`T^I+\overline{T}^I-\sum _A\left|\mathrm{\Phi }_I^A\right|^2`$, where $`\mathrm{\Phi }_I^A`$ represents untwisted matter fields. Then for these multiplets $`p_A=b`$ and the scalar masses for these fields are in general an order of magnitude greater than the gravitino mass. We will call these generations “heavy.” The scalar masses (27) and A-terms (29) given above do not include the contributions proportional to the matter field wave-function renormalization coefficients arising from the superconformal anomaly (the analog of the gaugino mass terms studied in and included in (26)). A systematic treatment of these contributions to the soft-breaking terms is currently underway , but their general size is comparable to the gaugino masses. In the following it has been checked that varying the initial soft terms by arbitrary amounts of this size has a negligible impact on the conclusions we report here. Before giving the results of a numerical analysis using the renormalization group equations (RGEs) with the boundary conditions determined by equations (26), (27) and (29), it is worthwhile looking at what patterns of symmetry breaking are expected for various choices of the parameter $`p_A`$ in the context of the MSSM. For any generation with non-negligible Yukawa couplings a good indicator that the stable minimum of the scalar potential will yield correct electroweak symmetry breaking is the relation

$$\left|A_t\right|^2\le 3\left(m_Q^2+m_U^2+m_{H_u}^2\right).$$ (30)

When this bound is not satisfied it is typical to develop minima away from the electroweak symmetry breaking point, in a direction in which one of the scalar mass-squareds carrying electric or color charge becomes negative. For any heavy matter generation with a non-negligible coupling to a heavy Higgs field ($`p_A=b`$) equation (29) yields $`A\simeq 3m_A`$ and so (30) is already nearly saturated at the condensation scale. Another key factor in preventing dangerous color and charge-breaking minima is the ratio of scalar masses to gaugino masses and the degree of splitting between any light and heavy matter generations. In this model, both of the hierarchies, $`m_A^{\mathrm{light}}/m_\lambda `$ and $`m_A^{\mathrm{heavy}}/m_A^{\mathrm{light}}`$, will turn out to be $`𝒪\left(10\right)`$. This pattern of soft supersymmetry-breaking masses has been shown to lie on the boundary of the region in MSSM parameter space for which light squark masses tend to be driven negative by two-loop effects arising from the heavier squarks. All of the above considerations suggest that compactification scenarios in which the observable sector matter fields couple universally to the Green-Schwarz counterterm with $`p_A=b`$ may have trouble reproducing the correct pattern of low-energy symmetry breaking.

### 3.2 RGE Viability Analysis Within the MSSM

To determine what region of parameter space in the $`\{b_+,\left(b_+^\alpha \right)_{\mathrm{eff}}\}`$ plane is consistent with current experimental data it is necessary to run the soft supersymmetry-breaking parameters of equations (26), (27) and (29) from the condensation scale to the electroweak scale using the renormalization group equations. For this purpose we take the MSSM superpotential and matter content for the observable sector, keeping only the top, bottom and tau Yukawa couplings.
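Before turning to the numerical runs, the near-saturation of the stability bound (30) for heavy generations can be made concrete with a two-line check (an illustration, not part of the original analysis):

```python
def ccb_safe(A_t, mQ2, mU2, mHu2):
    """Eq. (30): |A_t|^2 <= 3 (m_Q^2 + m_U^2 + m_Hu^2), a rough guard
    against charge/color-breaking minima."""
    return abs(A_t) ** 2 <= 3.0 * (mQ2 + mU2 + mHu2)

# A heavy generation with A ~ 3 m_A and common scalar mass m sits exactly
# on the boundary: |3m|^2 = 9 m^2 versus 3 * (3 m^2) = 9 m^2.
m = 1.0
print(ccb_safe(3 * m, m ** 2, m ** 2, m ** 2))   # True, but saturated
```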
In order to capture the significant two-loop contributions to gaugino masses and scalar masses these parameters are run at two loops, while the other parameters are evolved using the one-loop RGEs. The equations used are in the $`\overline{DR}`$ scheme and can be found in . The RGE analysis was performed on four different scenarios:

* Scenario A: All three generations light.
* Scenario B: Third generation light, first and second generations heavy.
* Scenario C: All three generations heavy.
* Scenario D: All matter heavy except for the two Higgs doublets, which remain light ($`p_A=0`$).

To protect against unwanted flavor-changing neutral currents we have chosen the Green-Schwarz coefficients $`p_A`$ to be universal throughout each matter generation. While our scalars will turn out to be heavy enough that small deviations from universality (such as those arising from the superconformal anomaly discussed above) will not be problematic, the large hierarchies controlled by the values of the $`p_A`$ would be untenable. The Higgs fields will be taken to couple to the Green-Schwarz counterterm identically to the third generation of matter, as we keep only the third-generation Yukawa couplings in the MSSM superpotential. In Scenario D we relax this assumption. In the boundary values of (26), (27) and (29) the values of $`\left(c_\alpha \right)_{\mathrm{eff}}`$ and $`\left(b_a^\alpha \right)_{\mathrm{eff}}`$ appear only indirectly, through the determination of the value of the condensate $`\rho _+^2`$. It is thus convenient to cast all soft supersymmetry-breaking parameters in terms of the values of $`b_+`$ and $`M_{3/2}`$ using equation (23). While the gravitino mass itself is not strictly independent of $`b_+`$, it is clear from Figure 2 that we are guaranteed to find a reasonable set of values for $`\{\left(c_\alpha \right)_{\mathrm{eff}},\left(b_+^\alpha \right)_{\mathrm{eff}}\}`$ consistent with the choice of $`b_+`$ and $`M_{3/2}`$ provided we scan only over values $`b_+\lesssim 0.1`$ for weak string coupling. This transformation of variables allows the slice of parameter space represented by the contours of Figure 3 to be recast as a two-dimensional plane for a given value of $`\mathrm{tan}\beta `$ and $`\mathrm{sgn}(\mu )`$. The condensation scale (the scale at which the RG running begins) is also a function of the gravitino mass in this framework, found by inverting equation (23). Having chosen a set of input parameters $`\{b_+,M_{3/2},\mathrm{tan}\beta ,\mathrm{sgn}(\mu )\}`$ for a particular scenario, the model parameters are run from the condensation scale $`\mathrm{\Lambda }_{\mathrm{cond}}`$ given by (24) to the electroweak scale $`\mathrm{\Lambda }_{\mathrm{EW}}=M_Z`$, decoupling the scalar particles at a scale approximated by $`\mathrm{\Lambda }_{\mathrm{scalar}}=m_A`$. While treating all superpartners with a common scale sacrifices precision for expediency, the results presented below are meant to be a first survey of the phenomenology of this class of models.
At the electroweak scale the one-loop corrected effective potential $`V_{1\mathrm{loop}}=V_{\mathrm{tree}}+\mathrm{\Delta }V_{\mathrm{rad}}`$ is computed and the effective mu-term $`\overline{\mu }`$ is calculated:

$$\overline{\mu }^2=\frac{\left(m_{H_d}^2+\delta m_{H_d}^2\right)-\left(m_{H_u}^2+\delta m_{H_u}^2\right)\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}-\frac{1}{2}M_Z^2.$$ (31)

In equation (31) the quantities $`\delta m_{H_u}`$ and $`\delta m_{H_d}`$ are the second derivatives of the radiative corrections $`\mathrm{\Delta }V_{\mathrm{rad}}`$ with respect to the up-type and down-type Higgs scalar fields, respectively. These corrections include the effects of all third-generation particles. If the right-hand side of equation (31) is positive then there exists some initial value of $`\mu `$ at the condensation scale which results in correct electroweak symmetry breaking with $`M_Z=91.187`$ GeV . (Note that we do not try to specify the origin of this mu-term, nor its associated “B-term”, and merely leave them as free parameters in the theory – ultimately determined by the requirement that the Z-boson receive the correct mass.) A set of input parameters will then be considered viable if at the electroweak scale the one-loop corrected mu-term $`\overline{\mu }^2`$ is positive, the Higgs potential is bounded from below, all matter fields have positive scalar mass-squareds and the spectrum of physical masses for the superpartners and Higgs fields satisfies the selection criteria given in Table 1. (Though the inclusive branching ratio for $`b\to s\gamma `$ decays was not used as a criterion, an a posteriori check of the region of the parameter space where this class of models wants to live – namely relatively low $`\mathrm{tan}\beta `$ and gaugino masses with high scalar masses – indicates no reason to fear a conflict with the bounds from CLEO, except possibly in the case $`\mathrm{sgn}(\mu )=-1`$ for Scenario D .) The first condition to be imposed on the scenarios considered here is correct electroweak symmetry breaking, defined by (31), with no additional scalar masses negative. This criterion alone rules out Scenario C, with all three generations coupling universally to the GS counterterm and having large scalar masses. For the opposite case of no coupling to the GS counterterm (Scenario A) the allowed region is displayed in Figure 4. In this scenario electroweak symmetry breaking requires $`1.65<\mathrm{tan}\beta <4.5`$, the lower bound being the value for which the top quark Yukawa coupling develops a Landau pole below the condensation scale. This restricted region of the $`\mathrm{tan}\beta `$ parameter space is a result of the large hierarchy between gaugino masses and scalar masses in these models and has been observed in more general studies of the MSSM parameter space . Scenario B with its split generations can exist only for $`0.08\lesssim b_+\lesssim 0.09`$, where the hierarchy between the generations is small enough to prevent the two-loop effects of the heavy generations from driving the right-handed top squark to negative mass-squared values. Furthermore, proper electroweak symmetry breaking in this model requires the value of $`\mathrm{tan}\beta `$ to be in the uncomfortably narrow range $`1.65\lesssim \mathrm{tan}\beta \lesssim 1.75`$, making this pattern of Green-Schwarz couplings phenomenologically unattractive.
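Equation (31) is the standard one-loop corrected electroweak symmetry breaking condition and is straightforward to evaluate; the numbers in the sketch below are hypothetical, chosen only to show the positivity test:

```python
def mu_bar_sq(mHd2, mHu2, dHd2, dHu2, tan_beta, MZ=91.187):
    """Eq. (31): effective mu-term squared from the one-loop Higgs potential.
    All mass-squared inputs in GeV^2; returns GeV^2."""
    t2 = tan_beta ** 2
    return ((mHd2 + dHd2) - (mHu2 + dHu2) * t2) / (t2 - 1.0) - 0.5 * MZ ** 2

# Viability requires a positive right-hand side; e.g. a run that drives
# m_Hu^2 negative while m_Hd^2 stays large gives a healthy mu-term:
print(mu_bar_sq(mHd2=500.0**2, mHu2=-(200.0**2), dHd2=0.0, dHu2=0.0,
                tan_beta=3.0) > 0.0)   # True for these illustrative inputs
```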
As for Scenario D, the large third generation masses give an additional downward pressure on the Higgs mass-squareds in the running of the RGEs, allowing for a much wider allowed range of $`\mathrm{tan}\beta `$. In fact, electroweak symmetry is radiatively broken in the entire range of parameter space. However, as the value of $`b_+`$ is raised past the critical range $`b_+\simeq 0.08`$, the scalar mass boundary values at the condensation scale start to become light enough that the right-handed stop is again driven to negative mass-squared values. This is shown in Figure 4 where the region between the upper and lower curves is excluded. While this region expands rapidly as the beta-function coefficient is increased, the values of the beta-function coefficient consistent with $`\alpha _{\mathrm{str}}\simeq 0.04`$ are nearly saturated when this effect arises. The direct experimental constraints are most binding for the gaugino sector, as the gauginos are by far the lightest superpartners in this class of models. Typical bounds reported from collider experiments are derived in the context of universal gaugino masses with a relatively large mass difference between the lightest chargino and the lightest neutralino. For most choices of parameters in the models studied here this is a valid assumption, but when the condensing group beta-function coefficient $`b_+`$ becomes relatively small (i.e. similar in size to the MSSM hypercharge value of $`b_{U(1)}=0.028`$) the pieces of the gaugino mass arising from the superconformal anomaly (26) can become equal in magnitude to the universal term. Here there is a level crossing in the neutral gaugino sector. The lightest supersymmetric particle (LSP) becomes predominantly wino-like and the mass difference between the lightest chargino and lightest neutralino becomes negligible. This effect is displayed in Figure 5. The experimental constraints as normally quoted from LEP and the Tevatron cannot be applied in the region where the mass difference between the lightest neutralino and chargino falls below about 2 GeV. The phenomenology of such a gaugino sector has been studied recently in . Note that when any scalar fields couple to the GS-counterterm (as in Scenario D) there is a large additional, universal contribution to the gaugino masses at the condensation scale in (26). This eliminates any region with a non-standard gaugino sector in these cases. Figure 6 gives the binding constraints from Table 1 for Scenario A with $`\mathrm{tan}\beta =3`$ and positive $`\mu `$ (the most restrictive case). The most critical constraints are for the lightest chargino and gluino. (The gluino mass determination takes into account the difference between the running mass ($`M_3`$) and the physical gluino mass ; this difference is neglected for the other mass parameters.) The effect of varying $`\mathrm{tan}\beta `$ on these bounds is negligible over the range $`1.65<\mathrm{tan}\beta <4.5`$, as its effect is solely in the variation in the Yukawa couplings appearing at two loops in the gaugino mass evolution. The region for which the anomaly-induced contributions to the gaugino masses make the normal experimental constraints inoperative is represented by the shaded region in the upper left of the figure. In general, the light gaugino masses at the condensation scale require a large gravitino mass (and hence a large set of soft scalar masses, since $`m_A=M_{3/2}`$ in this scenario) in order to evade the observational bounds coming from LEP and the Tevatron.
While current theoretical prejudice would disfavor such large soft scalar masses, this pattern of soft parameters may not necessarily be a sign of excessive fine-tuning . Nevertheless, we refrain from making any statements about the “naturalness” of this class of models as we have not specified any mechanism for generating the mu-term. Figure 7 gives the binding constraints from Table 1 for Scenario D with $`\mathrm{tan}\beta =3`$ and positive $`\mu `$. Note the change of scale in both axes for these plots relative to those of Scenario A. As in Figure 6, varying $`\mathrm{tan}\beta `$ over the range $`1.65<\mathrm{tan}\beta <40`$ has a negligible effect on the gaugino constraint contours and only a very small effect on the contours of constant stop mass. Here the gaugino masses start at much larger values so a lower supersymmetry-breaking scale is sufficient to evade the bounds from LEP and the Tevatron. Though the gravitino mass can now be much smaller, recall that the scalars in this scenario have masses at the condensation scale roughly an order of magnitude larger than the gravitino. Thus the typical size of scalar masses at the electroweak scale continues to be about 1 TeV for the first two generations and a few hundred GeV for the third generation scalars. As opposed to the case where all the matter fields of the observable sector decouple from the GS-counterterm, here smaller values of the condensing group beta-function coefficient enhance the gaugino masses via the last term in (26). We end this section by giving mass contours for the lightest Higgs, chargino, neutralino and top-squark for $`\mathrm{tan}\beta =3`$ and positive $`\mu `$ for Scenarios A and D in Figures 8 and 9, respectively.

## 4 Gauge Coupling Unification

In Section 2.2 the possibility of larger values of the unified coupling constant $`g_{\mathrm{str}}^2`$ at the string scale was considered in a very general way. It is well known that the apparent unification of coupling constants at a scale $`\mathrm{\Lambda }_{\mathrm{MSSM}}\simeq 2\times 10^{16}`$ GeV, assuming only the MSSM field content, is at odds with the string prediction that unification must occur at a scale given by

$$M_{\mathrm{string}}^2=\lambda g_{\mathrm{string}}^2M_{\mathrm{Planck}}^2$$ (32)

where $`\lambda `$ represents the (scheme-dependent) one-loop correction from heavy string modes. In this factor was computed for the $`\overline{MS}`$ scheme and it is given by

$$\lambda =\frac{1}{2}\left(f+1\right)e^{g1}.$$ (33)

For the vacua considered in this work this parameter is typically $`\lambda \simeq 0.19`$. Even after taking into account one-loop string corrections there is still an order of magnitude discrepancy between the scale of unification predicted by string theory and the apparent scale of unification as extrapolated from low-energy measurements within the MSSM framework. One possible solution to the problem is the inclusion of additional matter fields in incomplete multiplets of SU(5) at some intermediate scale, which will alter the running of the coupling constants, causing them to converge at some value higher than $`\mathrm{\Lambda }_{\mathrm{MSSM}}`$ . These solutions tend to involve slightly larger values of the coupling constant at the string scale than that of the MSSM ($`\alpha _{\mathrm{MSSM}}^{-1}\simeq 24.7`$).
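The mismatch quantified by eq. (32) is easy to see numerically; the reduced Planck mass normalization and the sample coupling below are our assumptions:

```python
import math

M_PLANCK = 2.4e18  # reduced Planck mass in GeV (assumed normalization)

def string_scale(g_string, lam=0.19):
    """Eq. (32): M_string = sqrt(lambda) * g_string * M_Planck,
    with the one-loop correction lambda ~ 0.19 quoted in the text."""
    return math.sqrt(lam) * g_string * M_PLANCK

# For g_str ~ 0.7 this gives ~7e17 GeV, more than an order of magnitude
# above the apparent MSSM unification scale of ~2e16 GeV:
print(f"{string_scale(0.7):.2e} GeV")
```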
In the model in question here, the intermediate scale ($`\mathrm{\Lambda }_{\mathrm{cond}}`$) at which this additional matter might appear is not independent of the scale of the superpartner spectrum ($`\mathrm{\Lambda }_{\mathrm{SUSY}}\simeq M_{3/2}`$); the two are in fact related by equation (23). Thus if we assume this additional matter has a typical mass of the order of the condensation scale, each point in the $`\{b_+,M_{3/2}\}`$ plane can be tested for potential compatibility with string unification given a certain set of additional matter fields. We will not specify the origin of these fields (though such incomplete multiplets are not uncommon in string theory compactifications), but merely posit their existence with masses on the order of the condensation scale. Our procedure for carrying out this investigation is similar to that used in the literature by a number of authors . The standard model coupling constants $`\alpha _3`$, $`\alpha _2`$ and $`\alpha _1`$ are determined from $`\alpha _{\mathrm{EM}}\left(M_Z\right)=1/127.9`$, $`\alpha _3\left(M_Z\right)=0.119`$ and $`\mathrm{sin}^2\theta _{\mathrm{EW}}\left(M_Z\right)=0.23124`$, and these $`\overline{MS}`$ values are converted to the $`\overline{DR}`$ scheme. As we will not be concerned with performing a precision survey, these coupling constants are run at one loop from their values at the electroweak scale using only the standard model field content up to the scale $`\mathrm{\Lambda }=M_{3/2}`$. At this scale the entire supersymmetric spectrum is added to the equations until the scale $`\mathrm{\Lambda }=\mathrm{\Lambda }_{\mathrm{cond}}`$ is reached. Here incomplete multiplets of SU(5) are added and the couplings are run to the scale at which the SU(2) and U(1) fine structure constants coincide. This scale will be defined as the string scale. We now require $`\alpha _3=\alpha _2=\alpha _1`$ at this scale and invert equation (32) to find the implied Planck scale. Consistency requires that this value be the reduced Planck mass of $`2.4\times 10^{18}`$ GeV and that the QCD gauge coupling, when the renormalization group equations are solved in the reverse direction, give a value for $`\alpha _3`$ at $`\mathrm{\Lambda }=M_Z`$ within two standard deviations of the measured value. (It is worth remarking that even the celebrated supersymmetric SU(5) unification of couplings fails to predict the strong coupling at the electroweak scale at the level of two sigma and calls for a rather large value of $`\alpha _3\left(M_Z\right)`$. This is usually taken as an indication of the size of model-dependent threshold corrections. We therefore demand no more from the models considered here.) The results of the analysis for a typical choice of extra matter fields are shown in Figure 10, where a pair of vector-like $`(Q,\overline{Q})`$ and two pairs of vector-like $`(D,\overline{D})`$’s are introduced at the condensation scale with quantum numbers identical to their MSSM counterparts. The two sigma window about the current best-fit value of $`\alpha _3`$ can indeed accommodate a consistent Planck mass while allowing for perturbative unification of gauge couplings. From this base configuration additional 5s and 10s of SU(5) can be added at will to increase the value of the unified coupling at the string scale, but the contours of constant implied Planck mass shown in Figure 10 will not move significantly.
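A toy version of this procedure can be written in a few lines. The sketch below is an illustration only: thresholds, two-loop terms and the scheme conversion are ignored, the electroweak-scale inputs are rounded, and the scales $`M_{3/2}`$ and $`\mathrm{\Lambda }_{\mathrm{cond}}`$ are hypothetical. The extra-matter shifts $`(1,3,4)`$ are the standard one-loop contributions of one $`(Q,\overline{Q})`$ pair plus two $`(D,\overline{D})`$ pairs:

```python
import math

B_SM    = (41/10, -19/6, -7.0)        # one-loop (b1, b2, b3), GUT-normalized U(1)
B_MSSM  = (33/5, 1.0, -3.0)
B_EXTRA = tuple(b + d for b, d in zip(B_MSSM, (1.0, 3.0, 4.0)))

def run(inv_alpha, mu_lo, mu_hi, b):
    """One loop: alpha_i^-1(mu_hi) = alpha_i^-1(mu_lo) - b_i ln(mu_hi/mu_lo)/2pi."""
    t = math.log(mu_hi / mu_lo)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(inv_alpha, b)]

inv_a = [59.0, 29.6, 8.4]             # rough (alpha_1, alpha_2, alpha_3)^-1 at M_Z
M32, LCOND = 1.0e4, 1.0e14            # hypothetical M_3/2 and Lambda_cond in GeV
inv_a = run(inv_a, 91.2, M32, B_SM)   # SM content below M_3/2
inv_a = run(inv_a, M32, LCOND, B_MSSM)

# Above Lambda_cond, solve for the scale where alpha_1 = alpha_2 ("string scale")
t12 = 2 * math.pi * (inv_a[0] - inv_a[1]) / (B_EXTRA[0] - B_EXTRA[1])
M_string = LCOND * math.exp(t12)
g2 = 4 * math.pi / run(inv_a, LCOND, M_string, B_EXTRA)[1]   # g^2 = 4 pi alpha_2
M_planck_implied = M_string / math.sqrt(0.19 * g2)           # invert eq. (32)
print(f"M_string ~ {M_string:.2e} GeV, implied M_Planck ~ {M_planck_implied:.2e} GeV")
```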
While these combinations of matter fields have been known to allow for gauge coupling unification for some time , the relationships (23) and (32) between the various scales involved make this a nontrivial accomplishment for this class of models.

## Conclusion

The preceding pages should be cause for guarded optimism with regard to string phenomenology. The initial challenge of dilaton stabilization has been met without resorting to strong coupling in the effective field theory nor requiring delicate cancellations. Reasonable values of the supersymmetry-breaking scale can be achieved over a fairly large region of the parameter space, but a given combination of coupling strength at the string scale and hidden sector matter content will single out a tantalizingly small slice of this space. These successful combinations do not destroy the potential solutions to the coupling constant unification problem by the introduction of additional matter at the condensation scale. Tighter restrictions on the hidden sector will require more precise knowledge of the size of Yukawa couplings in the corresponding superpotential. Requiring a vacuum configuration which gives rise to successful electroweak symmetry breaking seems to demand that either the Green-Schwarz counterterm be independent of the matter fields or that all matter fields couple in a universal way but the Higgs fields are distinct. The pattern of soft supersymmetry-breaking parameters in the former case pushes the theory towards large gravitino masses and very low values of $`\mathrm{tan}\beta `$. The low gaugino masses relative to scalar masses favor larger beta-function coefficients for the condensing group of the hidden sector, while smaller values may result in phenomenology in the gaugino sector similar to that of the “anomaly dominated” scenarios. In the latter case a proper vacuum configuration and weak coupling at the string scale leave the value of $`\mathrm{tan}\beta `$ free to take its entire range of possible values. Larger beta-function coefficients for the condensing group allow a promising region with relatively light scalar partners of the third-generation matter fields and light gauginos. A more realistic model may alter these results to some degree, and uncertainty remains in the general size and nature of the Yukawa couplings of the hidden sector of these theories. Nevertheless this survey suggests that eventual measurement of the size and pattern of supersymmetry breaking in our observable world may well point towards a very limited choice of hidden sector configurations (and hence string theory compactifications) compatible with low-energy phenomena.

## Acknowledgements

We thank Pierre Binétruy, Hitoshi Murayama and Marjorie Shapiro for discussions. This work was supported in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098 and in part by the National Science Foundation under grants PHY-95-14797 and PHY-94-04057.
# VLT Spectroscopy of the z=4.11 Radio Galaxy TN J1338-1942

Based on observations at the ESO VLT Antu telescope.

## 1 Introduction

Within standard Cold Dark Matter scenarios the formation of galaxies is a hierarchical and biased process. Large galaxies are thought to be assembled through the merging of smaller systems, and the most massive objects will form in over–dense regions, which will eventually evolve into the clusters of galaxies (Kauffmann et al. (1999)). It is therefore important to find and study the progenitors of the most massive galaxies at the highest possible redshifts. Radio sources are convenient beacons for pinpointing massive elliptical galaxies, at least up to redshifts $`z\simeq 1`$ (Lilly & Longair (1984); Best, Longair & Röttgering (1998)). The near–infrared ‘Hubble’ $`K`$–$`z`$ relation for such galaxies appears to hold up to $`z=5.2`$, despite large K–correction effects and morphological changes (Lilly and Longair 1984; van Breugel et al. 1998, 1999). This suggests that radio sources may be used to find massive galaxies and their likely progenitors out to very high redshift. While optical ‘color–dropout’ techniques have been successfully used to find large numbers of ’normal’ young galaxies (without dominant AGN) at redshifts surpassing those of quasars and radio galaxies (Weymann et al. (1998)), the radio and near–infrared selection technique has the additional advantage that it is unbiased with respect to the amount of dust extinction. High redshift radio galaxies (HzRGs) are therefore also important laboratories for studying the large amounts of dust (Dunlop et al. (1994); Ivison et al. (1998)) and molecular gas (Papadopoulos et al. (1999)) which are observed to accompany the formation of the first forming massive galaxies. Using newly available, large radio surveys we have begun a systematic search for $`z>4`$ HzRGs, to be followed by more detailed studies of selected objects. In this Letter, we present deep intermediate resolution VLT/FORS1 spectroscopy of TN J1338-1942 which, at $`z=4.11`$, was the first $`z>4`$ radio galaxy discovered in the southern hemisphere (De Breuck et al. 1999a ), and is one of the brightest and most luminous Ly$`\alpha `$ objects of its class. In §2, we describe the discovery and previous observations of TN J1338-1942. In §3 we describe our VLT observations, and in §4 we discuss some of the implications of our results. Throughout this paper we will assume $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0=0.15`$, and $`\mathrm{\Lambda }=0`$. At $`z=4.11`$, this implies a linear size scale of 7.5 kpc/arcsec.

## 2 Source selection and previous observations

The method we are using to find distant radio galaxies is based on the empirical correlation between redshift and observed spectral index in samples of low-frequency selected radio sources (e.g., Carilli et al. (1999)). Selecting radio sources with ultra steep spectra (USS) dramatically increases the probability of pinpointing high-z radio galaxies, as compared to observing radio galaxies with more common radio spectra. This method, which can to a large extent be explained as a K-correction induced by a curvature of the radio spectra, has been shown to be extremely efficient (e.g., Chambers, Miley & van Breugel (1990); van Breugel et al. 1999a ). We constructed such a USS sample ($`\alpha _{365\mathrm{M}\mathrm{H}\mathrm{z}}^{1.4\mathrm{GHz}}<-1.30`$; $`S_\nu \propto \nu ^\alpha `$; De Breuck et al.
1999b ), consisting of 669 objects, using several radio catalogs which, in the southern hemisphere, include the Texas 365 MHz catalog (Douglas et al. (1996)) and the NVSS 1.4 GHz catalog (Condon et al. (1998)). As part of our search program we observed TN J1338-1942 ($`\alpha _{365\mathrm{M}\mathrm{H}\mathrm{z}}^{1.4\mathrm{GHz}}=-1.31\pm 0.07`$) with the ESO 3.6m telescope in 1997 March and April (De Breuck et al. 1999a ). The radio source was first identified by taking a 10 minute $`R`$-band image. Follow-up spectroscopy then showed the radio galaxy to be at a redshift of $`z=4.13\pm 0.02`$, based on a strong detection of Ly$`\alpha `$, and weak confirming C IV $`\lambda `$1549 and He II $`\lambda `$1640. At this redshift its derived rest–frame low frequency (178 MHz) radio luminosity is comparable to that of the most luminous 3CR sources. More detailed radio information was obtained with the VLA at 4.71 GHz and 8.46 GHz on 1998 March 24, as part of a survey to measure rotation measures in HzRGs (Pentericci et al. (1999)). We detect two radio components ($`S_{4.7GHz}^{NW}=21.9`$ mJy; $`S_{4.7GHz}^{SE}=1.1`$ mJy) separated by 5″.5 in the field of the radio galaxy (Fig. 1). The bright NW component has a very faint radio companion ($`S_{4.7GHz}^C=0.3`$ mJy) at 1″.4 to the SE. Our present observations show that all components have very steep radio spectra, with $`\alpha _{4.7\mathrm{GHz}}^{8.5\mathrm{GHz}}(NW)\simeq -1.6`$, $`\alpha _{4.7\mathrm{GHz}}^{8.5\mathrm{GHz}}(SE)\simeq -1.8`$, and $`\alpha _{4.7\mathrm{GHz}}^{8.5\mathrm{GHz}}(C)\simeq -1.0`$. The proximity and alignment of such rare USS components strongly suggest that they are related and part of one source. While further observations over a wider frequency range would be useful to confirm this, for now we conclude that TN J1338-1942 is a very asymmetric radio source, and identify component C at $`\alpha _{2000}=13^h38^m26\stackrel{s}{.}10`$ and $`\delta _{2000}`$ = −19°42′31″.1 with the radio core. Such asymmetric radio sources are not uncommon (e.g., McCarthy, van Breugel & Kapahi (1991)), and are usually thought to be due to strong interaction of one of the radio lobes with very dense gas or a neighboring galaxy (see for example Feinstein et al. (1999)). We also obtained a $`K`$-band image with the Near Infrared Camera (NIRC; Mathews & Soifer (1994)) at the Keck I telescope on UT 1998 April 18. The integration time was 64 minutes in photometric conditions with 0″.5 seeing. Observing procedures, calibration and data reduction techniques were similar to those described in van Breugel et al. (1998). Using a circular aperture of 3″, encompassing the entire object, we measure $`K=19.4\pm 0.2`$ (we do not expect a significant contribution from emission lines at the redshift of the galaxy). In a 64 kpc metric aperture, the magnitude is $`K_{64}=19.2\pm 0.3`$, which puts TN J1338-1942 at the bright end, but within the scatter, of the $`K`$–$`z`$ relationship (van Breugel et al. (1998)). We determined the astrometric positions in our 5′ × 5′ $`R`$-band image using the USNO PMM catalog (Monet et al. (1998)). We next used the positions of nine stars on the $`R`$-band image in common with the Keck $`K`$-band to solve the astrometry on the 1′ × 1′ $`K`$-band image.
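The two small calculations underlying this section, the 7.5 kpc/arcsec scale quoted in Sect. 1 and the two-point spectral indices quoted above, can be reproduced with the following sketch. The Mattig relation is the appropriate one for the adopted $`\mathrm{\Lambda }=0`$ cosmology; the 8.46 GHz flux density used in the example is not tabulated here and is purely illustrative:

```python
import math

C_KMS = 2.998e5  # speed of light, km/s

def kpc_per_arcsec(z, H0=65.0, q0=0.15):
    """Angular scale for a Lambda=0 Friedmann model via the Mattig relation."""
    dL = (C_KMS / (H0 * q0 ** 2)) * (
        q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
    dA = dL / (1.0 + z) ** 2                       # angular-diameter distance, Mpc
    return dA * 1.0e3 * math.pi / (180.0 * 3600.0) # kpc per arcsec

def spectral_index(s1, s2, nu1, nu2):
    """alpha in the S_nu ~ nu^alpha convention used above."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

print(f"{kpc_per_arcsec(4.11):.1f} kpc/arcsec")    # ~7.5, as quoted
# A 21.9 mJy lobe at 4.71 GHz falling to ~8.6 mJy at 8.46 GHz would
# correspond to alpha ~ -1.6:
print(f"{spectral_index(21.9, 8.6, 4.71, 8.46):.2f}")
```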
The error in the relative near–IR/radio astrometry is dominated by the absolute uncertainty of the optical reference frame, which is ∼0″.4 (90% confidence limit; Deutsch (1999)). In figure 1, we show the overlay of the radio and $`K`$-band (rest-frame $`B`$-band) images. The NW hotspot coincides within 0″.035 of the peak of the $`K`$-band emission, while some faint diffuse extensions can be seen towards the radio core and beyond the lobe. The positional difference between the peak of the $`K`$-band emission and the radio core is 1″.4 ($`4\sigma `$), which suggests that the AGN and the peaks of the $`K`$-band and Ly$`\alpha `$ emission may not be co–centered.

## 3 VLT observations

Because of the importance of TN J1338-1942 as a southern laboratory for studying HzRGs, we obtained a spectrum of this object with high signal–to–noise and intermediate spectral resolution with FORS1 on the ANTU unit of the VLT on UT 1999 April 20. The purpose of these observations was to study the Ly$`\alpha `$ emission and UV–continuum in detail. The radio galaxy was detected in the acquisition images ($`t_{int}=2\times 60`$ s; $`I=23.0\pm 0.5`$ in a 2″ aperture). We used the 600R grism with a 1″.3 wide slit, resulting in a spectral resolution of 5.5 Å (FWHM). The slit was centered on the peak of the $`K`$-band emission at a position angle of 210° North through East. To minimize the effects of fringing in the red part of the CCD, we split the observation into two 1400 s exposures, while offsetting the object by 10″ along the slit between the individual exposures. The seeing during the TN J1338-1942 observations was ∼0″.7 and conditions were photometric. Data reduction followed the standard procedures using the NOAO IRAF package. We extracted the one-dimensional spectrum using a 4″ wide aperture, chosen to include all of the Ly$`\alpha `$ emission. For the initial wavelength calibration, we used exposures of a HeArNe lamp. We then adjusted the final zero point of the wavelength scale using telluric emission lines. The flux calibration was based on observations of the spectrophotometric standard star LTT2415, and is believed to be accurate to $`15\%`$. We corrected the spectrum for foreground Galactic extinction using a reddening of $`E_{B-V}=0.096`$ determined from the dust maps of Schlegel, Finkbeiner & Davis (1998). In figure 2 we show the observed one-dimensional spectrum and in figure 3 the region of the two-dimensional spectrum surrounding the Ly$`\alpha `$ emission line. Most notable is the large asymmetry in the profile, consistent with a very wide (∼1400 km s<sup>-1</sup>) blue-ward depression. Following previous detections of Ly$`\alpha `$ absorption systems in HzRGs (Röttgering et al. (1995); van Ojik et al. (1997); Dey (1999)), we shall interpret the blue-ward asymmetry in the Ly$`\alpha `$ profile of TN J1338-1942 as being due to foreground absorption by neutral hydrogen. The rest–frame equivalent width of Ly$`\alpha `$ in TN J1338-1942, $`W_\lambda ^{rest}=210\pm 50`$ Å, is twice as high as in the well–studied radio galaxy 4C 41.17 ($`z=3.80`$; Dey et al. (1997)). The large Ly$`\alpha `$ luminosity ($`L_{\mathrm{Ly}\alpha }\simeq 4\times 10^{44}`$ erg s<sup>-1</sup> after correction for absorption) makes TN J1338-1942 the most luminous Ly$`\alpha `$ emitting radio galaxy known. Following Spinrad et al.
(1995), we measure the continuum discontinuity across the Ly$`\alpha `$ line, defined as \[$`F_\nu (1250–1350\mathrm{\AA })/F_\nu (1100–1200\mathrm{\AA })`$\] = $`1.56\pm 0.24`$. Similarly, for the Lyman limit at $`\lambda _{rest}`$ = 912 Å, we find \[$`F_\nu (940–1000\mathrm{\AA })/F_\nu (850–910\mathrm{\AA })`$\] = $`2.2\pm 0.5`$, though this value is uncertain because the flux calibration at the edge of the spectrum is poorly determined. The presence of these continuum discontinuities further confirms our measured redshift. However, the redshift of the system is difficult to determine accurately because our VLT spectrum does not cover C IV $`\lambda `$1549 or He II $`\lambda `$1640. Furthermore, since the Ly$`\alpha `$ emission is heavily absorbed, it is likely that the redshift of the peak of the Ly$`\alpha `$ emission (at $`6206\pm 4`$ Å, $`z=4.105\pm 0.005`$) does not exactly coincide with the redshift of the galaxy. We shall assume $`z=4.11`$.

## 4 Discussion

TN J1338-1942 shares several properties in common with other HzRGs, but some of its characteristics deserve special comment. Here we shall briefly discuss these.

### 4.1 Ly$`\alpha `$ emission

Assuming photoionization, case B recombination, and a temperature of $`T=10^4`$ K, we use the observed Ly$`\alpha `$ emission to derive a total mass ($`M(H\text{ii})`$) of the H ii gas (e.g., McCarthy et al. (1990)) using $`M(H\text{ii})=10^9(f_5L_{44}V_{70})^{1/2}`$ M<sub>⊙</sub>. Here $`f_5`$ is the filling factor in units of 10<sup>-5</sup>, $`L_{44}`$ is the Ly$`\alpha `$ luminosity in units of $`10^{44}`$ ergs s<sup>-1</sup>, and $`V_{70}`$ is the total volume in units of $`10^{70}`$ cm<sup>3</sup>. Assuming a filling factor of 10<sup>-5</sup> (McCarthy et al. (1990)), and a cubical volume with a side of 15 kpc, we find $`M(H\text{ii})\simeq 2.5\times 10^8`$ M<sub>⊙</sub>. This value is on the high side, but well within the range that has been found for HzRGs (e.g., van Ojik et al. (1997)). Previous authors have shown that gas clouds of such mass can cause radio jets to bend and decollimate (e.g., van Breugel, Filippenko, Heckman & Miley (1985); Lonsdale & Barthel (1986); Barthel & Miley (1988)). Likewise, the extreme asymmetry in the TN J1338-1942 radio source could well be the result of strong interaction between the radio–emitting plasma and the Ly$`\alpha `$ gas.

### 4.2 Ly$`\alpha `$ absorption

Our spectrum also shows evidence for deep blue-ward absorption of the Ly$`\alpha `$ emission line. We believe that this is probably due to resonant scattering by cold H i gas in a halo surrounding the radio galaxy, as seen in many other HzRGs (c.f., Röttgering et al. (1995), van Ojik et al. (1997), Dey (1999)). The spatial extent of the absorption edge as seen in the 2-dimensional spectrum (Fig. 3) implies that the extent of the absorbing gas is similar to or even larger than the 4″ (30 kpc) Ly$`\alpha `$ emitting region. To constrain the absorption parameters we constructed a simple model that describes the Ly$`\alpha `$ profile with a Gaussian emission function and a single Voigt absorption function. As a first step, we fitted the red wing of the emission line with a Gaussian emission profile. Because the absorption is very broad, and extends to the red side of the peak, the parameters of this Gaussian emission profile are not well constrained. We adopted the Gaussian that best fits the lower red wing as well as the faint secondary peak, 1400 km s<sup>-1</sup> blue-wards from the main peak.
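The H ii mass estimate above follows directly from the quoted scaling; a minimal sketch (our illustration, using the adopted filling factor and cube geometry) reproduces the quoted number:

```python
KPC_CM = 3.086e21  # cm per kpc

def mass_HII(L_lya_erg_s, side_kpc, filling_factor=1e-5):
    """M(HII) = 1e9 (f5 * L44 * V70)^0.5 Msun, as in the text
    (case B recombination, T = 1e4 K)."""
    f5 = filling_factor / 1e-5
    L44 = L_lya_erg_s / 1e44
    V70 = (side_kpc * KPC_CM) ** 3 / 1e70   # cubical emitting volume
    return 1e9 * (f5 * L44 * V70) ** 0.5

# L(Lya) ~ 4e44 erg/s in a 15 kpc cube gives ~2e8 Msun, consistent
# with the quoted ~2.5e8 Msun:
print(f"{mass_HII(4e44, 15.0):.1e} Msun")
```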
The second step consisted of adjusting the parameters of the Voigt absorption profile to best match the sharp rise towards the main peak. The resulting model (shown along with the parameters of both components in figure 4) adequately matches the main features in the profile. We varied the parameters of both components, and all acceptable models yield column densities in the range $`3.5\times 10^{19}`$–$`1.3\times 10^{20}`$ cm<sup>-2</sup>. The main difference between our simple model and the observations is the relatively flat, but non–zero, flux at the bottom of the broad depression. This flux is higher than the continuum surrounding the Ly$`\alpha `$ line, indicating that some photons can pass through (i.e., a filling factor less than unity) or around the absorbing cloud. If the angular sizes of absorber and emitter are similar, the size of the absorber is $`R_{abs}`$ ∼ 10 kpc. The total mass of neutral hydrogen then is 2–10 $`\times 10^7`$ M<sub>⊙</sub>, comparable to or somewhat less than the total mass of H ii.

### 4.3 Continuum

Following Dey et al. (1997), and assuming that the rest frame UV continuum is due to young stars, one can estimate the star–formation rate (SFR) in TN J1338-1942 from the observed rest–frame UV continuum near 1400 Å. From our spectrum we estimate that $`F_{1400}\simeq 2`$ µJy, resulting in a UV luminosity $`L_{1400\mathrm{\AA }}\simeq 1.3\times 10^{42}`$ erg s<sup>-1</sup> Å<sup>-1</sup> and implying a SFR between 90 and 720 $`h_{65}^{-2}`$ M<sub>⊙</sub> yr<sup>-1</sup> in a $`10\times 30`$ kpc<sup>2</sup> aperture. These values are similar to those found for 4C 41.17. In this case detailed HST images, when compared with high resolution radio maps, strongly suggested that this large SFR might have been induced at least in part by powerful jets interacting with massive, dense clouds (Dey et al. (1997); van Breugel et al. 1999b ; Bicknell et al. (1999)). The coincidence of the Ly$`\alpha `$ emission–line and rest–frame optical continuum peaks with the brightest radio hotspot in TN J1338-1942 suggests that a similarly strong interaction might occur in this very asymmetric radio source. The decrement of the continuum blue-wards of Ly$`\alpha `$ (Fig. 2), due to the intervening H i absorption along the cosmological line of sight, is described by the “flux deficit” parameter $`D_A=1-\frac{f_\nu (\lambda 1050–1170)_{obs}}{f_\nu (\lambda 1050–1170)_{pred}}`$ (Oke & Korycanski (1982)). For TN J1338-1942 we measure $`D_A=0.37\pm 0.1`$, comparable to the $`D_A=0.45\pm 0.1`$ that Spinrad et al. (1995) found for the $`z=4.25`$ radio galaxy 8C 1435+64 (uncorrected for Galactic reddening). This is only the second time the $`D_A`$ parameter has been measured in a radio galaxy. The decrement described by $`D_A`$ is considered to be extrinsic to the object toward which it has been measured, and should therefore give similar values for different classes of objects at the same redshift. Because they have bright continua, quasars have historically been the most popular objects in which to measure $`D_A`$. For $`z\simeq 4.1`$, quasars have measured values of $`D_A\simeq 0.55`$ (e.g., Schneider, Schmidt & Gunn 1991, 1997 ). Similar measurements for color-selected Lyman break galaxies do not yet exist. Other non-color selected objects, in addition to radio galaxies, which do have reported $`D_A`$ measurements are serendipitously discovered galaxies ($`z=5.34`$, $`D_A>0.70`$; Dey et al. (1998)) and narrow-band Ly$`\alpha `$-selected galaxies ($`z=5.74`$, $`D_A=0.79`$; Hu, McMahon & Cowie (1999)).
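The flux-deficit measurement is operationally simple; a minimal sketch (our illustration on a toy spectrum, not the actual data) is:

```python
import numpy as np

def flux_deficit_DA(wave_rest, f_obs, f_pred):
    """D_A = 1 - <f_obs>/<f_pred> over rest-frame 1050-1170 A
    (Oke & Korycanski 1982)."""
    band = (wave_rest >= 1050.0) & (wave_rest <= 1170.0)
    return 1.0 - f_obs[band].mean() / f_pred[band].mean()

# Toy example: a flat predicted continuum absorbed down to 63% of itself
# blueward of Lya reproduces the D_A = 0.37 measured for TN J1338-1942:
w = np.linspace(1000.0, 1250.0, 500)
f_pred = np.ones_like(w)
f_obs = np.where(w < 1215.67, 0.63, 1.0)
print(f"{flux_deficit_DA(w, f_obs, f_pred):.2f}")
```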
Because of their larger redshifts these galaxy values can not directly be compared with those of quasars ($`z_{max}=5.0`$, $`D_A=0.75`$; Songaila et al. (1999)). However, they seem to fall slightly ($`\mathrm{\Delta }D_A\simeq 0.1`$) below the theoretical extrapolation of Madau (1995) at their respective redshifts, which quasars do follow rather closely. This is also true for the two radio galaxies ($`\mathrm{\Delta }D_A\simeq 0.2`$) at their redshifts. Thus it appears that non-color selected galaxies, whether radio selected or otherwise, have $`D_A`$ values which fall below those of quasars. Although, with only two measurements, the statistical significance of the low radio galaxy $`D_A`$ values is marginal, the result is suggestive. It is worthwhile contemplating the implications that would follow if further observations of $`z>4`$ radio galaxies and other objects selected without an optical color bias confirmed this trend. Given that optical color selection methods (often used to find quasars and Lyman break galaxies) favour objects with large $`D_A`$ values, it is perhaps not surprising that non-color selected $`z>4`$ objects might have lower values of $`D_A`$. Consequently, quasars and galaxies with low $`D_A`$ values might be missed in color–based surveys. This could then lead to an underestimate of their space densities, and an overestimate of the average H i column density through the universe. Radio galaxies have an extra advantage over radio-selected quasars (e.g., Hook & McMahon (1998)), because they very rarely contain BAL systems (there is only one such example, 6C 1908+722 at $`z=3.537`$; Dey (1999)). Such BAL systems are known to lead to relatively large values of $`D_A`$, indicating that part of the absorption is not due to cosmological H i gas, but due to absorption within the BAL system (Oke & Korycanski (1982)). A statistically significant sample of $`z>4`$ radio galaxies would therefore determine the true space density of intervening H i absorbers.

## 5 Conclusions

Because of its enormous Ly$`\alpha `$ luminosity and strong continuum, its highly asymmetric and broad Ly$`\alpha `$ profile, and its very asymmetric radio/near–IR morphology, TN J1338-1942 is a unique laboratory for studying the nature of $`z>4`$ HzRGs. It is particularly important to investigate the statistical properties of similar objects by extending the work begun here to a significant sample of $`z>4`$ HzRGs. The VLT will be a crucial facility in such a study.

###### Acknowledgements.

We thank the referee, Hy Spinrad, for his comments, which have improved the paper. We also thank Remco Slijkhuis for his help in using the ESO archive, and Mỹ Hà Vuong for useful discussions. The W. M. Keck Observatory is a scientific partnership between the University of California and the California Institute of Technology, made possible by the generous gift of the W. M. Keck Foundation. The National Radio Astronomy Observatory is operated by Associated Universities Inc., under cooperative agreement with the National Science Foundation. The work by C.D.B., W.v.B., D.M. and S.A.S. at IGPP/LLNL was performed under the auspices of the US Department of Energy under contract W-7405-ENG-48. DM is also supported by Fondecyt grant No. 01990440 and DIPUC.
# Time-resolved HST and IUE UV spectroscopy of the Intermediate Polar FO Aqr

Based on observations collected with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, and with the International Ultraviolet Explorer, obtained from the IUE Final Archive at VILSPA.

## 1 Introduction

Intermediate Polars (IPs) are a subclass of magnetic Cataclysmic Variables which consist of an asynchronously rotating ($`\mathrm{P}_{\mathrm{spin}}<\mathrm{P}_{\mathrm{orb}}`$) magnetized white dwarf accreting from a late-type, main sequence, Roche-lobe filling secondary star (Patterson 1994; Warner 1995). Except for a few systems for which polarized optical/IR emission is detected, the white dwarf is believed to possess a weak (∼2 MG) magnetic field which dominates the accretion flow only at a few radii from its surface. Within the magnetospheric radius, material is channeled towards the magnetic polar regions in an arc-shaped accretion curtain (Rosen et al. 1988). At larger distances, different accretion patterns can be present: a truncated accretion disc (disc-fed systems), direct accretion from the stream onto the magnetosphere (disc-less systems), as well as a combination of the two, where the stream material overpasses the disc (disc-overflow) (Hellier 1995 and references therein). Due to the asynchronous rotation, IPs show a wide range of periodicities at the white dwarf spin ($`\omega `$), the orbital ($`\mathrm{\Omega }`$) and sideband frequencies (Warner 1986; Patterson 1994; Warner 1995), whose amplitudes can be different in different spectral ranges (de Martino 1993). FO Aqr (H2215-086) was known to show strong periodic X-ray, optical and IR pulsations at the spin frequency, $`\omega =1/\mathrm{P}_{\mathrm{spin}}`$ ($`\mathrm{P}_{\mathrm{spin}}=20.9`$ min), and lower amplitude variations at the orbital, $`\mathrm{\Omega }=1/\mathrm{P}_{\mathrm{orb}}`$ ($`\mathrm{P}_{\mathrm{orb}}=4.85`$ hr), and beat, $`\omega -\mathrm{\Omega }=1/\mathrm{P}_{\mathrm{beat}}`$ ($`\mathrm{P}_{\mathrm{beat}}=22.5`$ min), frequencies (de Martino et al. 1994, hereafter Paper 1, and references therein; Marsh & Duck 1996; Patterson et al. 1998). The dominance of the spin pulsation at optical and high X-ray energies ($`>`$5 keV) can be accounted for by disc-fed accretion, evidence for which was provided by a partial eclipse in the optical continuum and emission lines (Hellier et al. 1989; Mukai et al. 1994; Hellier 1995). FO Aqr was also found to possess a disc-overflow accretion mode (Hellier 1993), and the recent evidence of a long term variability in the amplitudes of the X-ray pulsations has been interpreted as changes in the accretion mode (Beardmore et al. 1998). This kind of variability, only recently recognized, is also observed in other IPs like TX Col (Buckley 1996; Norton et al. 1997) and BG CMi (de Martino et al. 1995). The identification of the actual accretion geometry and the determination of energy budgets of the primary X-ray and secondary reprocessed UV, optical and IR emissions rely on multi-wavelength observations. The main modulations in FO Aqr have been studied in the X-rays and at optical/IR wavelengths. The optical spin pulsations, occurring mostly in phase with the X-ray ones, were found to arise from the outer regions of the accretion curtain (Paper 1; Welsh & Martell 1996). The orbital modulation from UV to IR was instead found to be multicomponent and attributed to the X-ray heated azimuthal structure of the accretion disc (henceforth bulge) and to the inner illuminated face of the secondary star (Paper 1).
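The quoted beat period follows directly from the spin and orbital periods; a one-line check (our illustration) is:

```python
def beat_period_min(p_spin_min, p_orb_hr):
    """Orbital sideband (omega - Omega): 1/P_beat = 1/P_spin - 1/P_orb."""
    p_orb_min = p_orb_hr * 60.0
    return 1.0 / (1.0 / p_spin_min - 1.0 / p_orb_min)

print(f"{beat_period_min(20.9, 4.85):.1f} min")   # ~22.5 min, as observed
```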
However, the temperature of the UV emitting bulge could not be constrained due to the lack of a precise quantification of the spectral shape of the UV spin modulation. In this work we present high temporal resolution spectroscopy acquired with HST/FOS which provides the first detection of different UV periodicities in FO Aqr. For a comprehensive study, these data are complemented with low temporal resolution IUE spectra along the orbital period. Coordinated optical photometry extends the study to a wider spectral range, providing the link to investigate the long term behaviour of these variabilities.

## 2 Observations and data reduction

The UV and optical campaign on FO Aqr was carried out between September and October 1995 with HST, IUE and at the South African Astronomical Observatory. The journal of the observations is reported in Table 1.

### 2.1 The HST data

HST Faint Object Spectrograph observations of FO Aqr were performed on September 10, 1995. The observations were carried out in the rapid mode during 7 consecutive HST orbits. Due to target acquisition procedures the total effective on-source exposure time was 4.07 h, yielding six continuous exposure slots as detailed in Table 1. The orbital period was unevenly sampled since it is commensurable with the HST orbit. The G160L grating was operated with the blue digicon covering the range 1154–2508 Å at a resolution of 6.8 Å diode<sup>-1</sup> and with the 0.86″ upper square aperture, supposed to be free from the 1500–1560 Å photocathode blemish which is known to affect the circular apertures (HST Data Handbook, 1997). A total of 797 spectra were collected, each with an effective exposure time of ∼18 s. The standard ST ScI routine-processing pipeline applied to the data at the time of the observations revealed the presence of anomalous features when comparing the reduced spectra with the IUE data, in particular in the regions 1500–1590 Å and 1950–2010 Å. The data were then re-processed using the STSDAS/CALFOS routine within IRAF using the latest reference files for sensitivity correction and the appropriate aperture flatfield provided by the ST ScI Spectrograph Group in summer 1998. The calibrated spectra then appear free from the above features, although the IUE fluxes are on average ∼1.1 times larger than the FOS ones (see Fig. 1). A check against systematic effects using spectra of standard stars observed with both IUE and FOS only confirms the known lower flux (on average ∼7%) of IUE with respect to that of FOS (Gonzalez-Riestra 1998). The residual flux difference may be due to the fact that the IUE data were acquired a month later than the FOS spectra. The G160L grating provides zero-order light covering the full range between 1150 and 5500 Å, with an effective wavelength at 3400 Å, which provides useful simultaneous broad-band photometry. The flux calibration of the zero-order signal (Eracleous & Horne 1994), updated for errors and post-COSTAR sensitivity and aperture throughput (HST Data Handbook 1997), has been applied to the signal extracted in the zero-order feature of each of the 797 exposures.

### 2.2 The IUE data

On October 15, 1995 ten IUE SWP (1150–1980 Å) and ten LWP (1950–3200 Å) low resolution (∼6 Å) spectra were acquired over 16 consecutive hours with exposure times equal to or twice $`\mathrm{P}_{\mathrm{spin}}`$, to smear out effects of the rotational pulsation (Table 1). The SWP and LWP exposures sample the orbital period.
The spectra have been re-processed at VILSPA using the IUE NEWSIPS pipeline used for the IUE Final Archive, which applies the SWET extraction method as well as the latest flux calibrations and close-out camera sensitivity corrections (Garhart et al. 1997). Line-by-line images have been inspected for spurious features, which have been identified and removed.

### 2.3 The optical photometry

Optical photometry was conducted in the period October 18–23, 1995 at the SAAO 0.75 m telescope and UCT Photometer employing a Hamamatsu R93402 GaAs photomultiplier. BVRI (Cousins) photometry was carried out on the first three nights performing symmetric modules with integration times of 30 s or 20 s respectively for the B and I, or V and R filters. The typical time resolution for the sequence of all four filters was ∼120 s, with interruption every ∼10 min for sky measurements. The times and durations of individual runs are reported in Table 1. The orbital period has not been fully sampled. Additional fast photometry in white light was carried out on October 21 and 23, 1995 using the same photometer, but employing a second channel photomultiplier for monitoring of a nearby comparison star. Continuous 5 s integrations were obtained, with occasional interruptions (every 15–20 min) for sky measurements (lasting ∼20–30 s). During the two nights, the observations were carried out for 4.3 h and 2.5 h respectively. The photometric data have been reduced in a standard manner with sky subtraction, extinction correction and transformation to the standard Cousins system using observations of E-region standards obtained on the same night.

## 3 The UV spectrum of FO Aqr

The grand average UV spectra of FO Aqr as observed with FOS and IUE are shown in Fig. 1 (upper panel), where the above flux difference is apparent. The UV luminosity in the 1150–3200 Å IUE range is $`6\times 10^{32}`$ ergs s<sup>-1</sup> assuming a distance of 325 pc (Paper 1), a factor 1.3 larger than during previous UV observations in 1990. The optical photometry also indicates a brightening of 0.17 mag between the two epochs, indicating long term luminosity variations (see also sect. 8.5). The UV spectrum of FO Aqr is typical of magnetic CVs (Chiappetti et al., 1989; de Martino 1995), with strong emissions of N V $`\lambda 1240`$, Si IV $`\lambda 1397`$, C IV $`\lambda 1550`$, He II $`\lambda \lambda 1640,2733`$ and Mg II $`\lambda 2800`$. The weakness of Si IV emission relative to N V is characteristic of the IP class (de Martino 1995). Weaker emissions from lower ionization states of different species, such as C III $`\lambda 1176`$, the blend of the Si III $`\lambda 1298`$ multiplet, Si II $`\lambda 1304`$ and geocoronal O I $`\lambda 1305`$, Si III $`\lambda 1895`$, Si II $`\lambda 1808`$, N IV $`\lambda 1718`$, N III\] $`\lambda \lambda `$1747–1754, Al III $`\lambda 1855`$ and Al II $`\lambda 1670`$, as well as He II $`\lambda 2307`$, possibly blended with C III $`\lambda 2297`$, and He II $`\lambda 2386`$, are identified in the higher quality FOS spectrum. Also weak oxygen lines of O IV $`\lambda `$1343 and O V $`\lambda `$1371 are detected. Some of these lines are also observed in the HST/FOS spectra of AE Aqr (Eracleous & Horne 1994), DQ Her (Silber et al. 1996) and PQ Gem (Stavroyiannopoulos et al. 1997). The presence of high ionization species together with extremely weak emissions (E.W. $`<`$ 1 Å) of lower ionization species is characteristic of a higher ionization efficiency in IPs with respect to Polars (de Martino 1998).
The line ratios N V/Si IV and N V/He II, when compared with the photoionization models developed by Mauche et al. (1997), are close to the values predicted for an ionizing blackbody spectrum at 30 eV. In contrast to the IUE spectra, the FOS data allow us to finally detect the intrinsic $`\mathrm{Ly}\alpha `$ $`\lambda `$1216 line. This appears to be composed of a relatively deep (E.W.=5.4$`\pm `$0.1 Å) absorption and a weak emission (E.W.=1.4$`\pm `$0.1 Å). The centre wavelength of the absorption feature is however red-shifted by $``$4 Å with respect to the rest wavelength and to the other emission line positions, while the weak emission is blue-shifted at 1206 Å in the grand average spectrum. Discussion of the nature of this feature is left until sects. 5 and 7; however, the $`\mathrm{Ly}\alpha `$ absorption already provides an upper limit to the hydrogen column density along the line of sight to FO Aqr. A pure damping Lorentzian profile (Bohlin 1975), convolved with a 7 Å FWHM Gaussian, has been fitted to the $`\mathrm{Ly}\alpha `$ absorption line (Fig. 1, bottom panel). The resulting neutral hydrogen column density is $`N_\mathrm{H}=(5.0\pm 1.5)\times 10^{20}\mathrm{cm}^{2}`$. The residual from the fit shows an emission line with maximum flux at $`1215`$ Å, probably geocoronal or intrinsic, with an excess of flux in the blue wing possibly due to emission of Si III $`\lambda `$1206. The derived value of $`N_\mathrm{H}`$ is consistent with the total interstellar column density in the direction of FO Aqr as derived from Dickey & Lockman (1990) and with the upper limit estimated from X-rays (Mukai et al. 1994). Assuming an average gas-to-dust ratio (Shull & van Steenberg 1985), this upper limit corresponds to a reddening of $`\mathrm{E}_{\mathrm{B}\mathrm{V}}\lesssim 0.1`$. Although FO Aqr was already known to be negligibly reddened from IUE observations (Chiappetti et al. 1989), an upper limit $`\mathrm{E}_{\mathrm{B}\mathrm{V}}=0.013\pm `$ 0.005 is derived from the absence of the 2200 Å absorption in the FOS data. This indicates that, despite the coincidence, most of the neutral absorption is unrelated to the interstellar dust and hence is likely located within the binary system. ## 4 Time series analysis The presence of periodicities in FO Aqr has been investigated in the FOS continua, emission lines and zero-order light, as well as in the optical photometric data. ### 4.1 HST UV data Fluxes in five line-free continuum bands have been measured in each FOS spectrum, in the ranges $`\lambda `$1265–1275, $`\lambda `$1425–1450, $`\lambda `$1675–1710, $`\lambda `$2020–2100 and $`\lambda `$2410–2500. Line fluxes of He II $`\lambda `$1640, N V, Si IV, C IV and $`\mathrm{Ly}\alpha `$ have been computed adopting a method which uses for the continuum a power law distribution, as found from a fit in the above continuum bands. Furthermore, since the low spectral resolution of the FOS data prevents the study of UV line profiles, measures of the V/R ratios of the emission lines have been used to investigate possible motions in the lines. These are defined as the ratios between the integrated fluxes in the violet and red portions of the emission lines, assuming as centroid wavelength that measured in the average profile. Such analysis is restricted to the strong emissions of the He II, C IV and N V lines, whose FWZI are $`\pm `$3000 $`\mathrm{km}\,\mathrm{s}^{1}`$, $`\pm `$4000 $`\mathrm{km}\,\mathrm{s}^{1}`$ and $`\pm `$3000 $`\mathrm{km}\,\mathrm{s}^{1}`$ respectively. 
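Before moving to the time-series analysis, the Lyα column-density fit of Sect. 3 can be sketched in a few lines. This is a minimal illustration, not the pipeline actually used: it assumes the pure radiation-damping wing approximation τ(Δλ) ≈ 4.26×10<sup>-20</sup> N<sub>H</sub>/Δλ² (Δλ in Å, N<sub>H</sub> in cm<sup>-2</sup>), the 7 Å FWHM Gaussian LSF quoted above, and a hypothetical normalized flux array `flux_norm`.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

LYA = 1215.67  # Lya rest wavelength [A]

def lya_absorption(wave, n_h, fwhm_lsf=7.0):
    """Damped Lya profile (radiation-damping wings only),
    convolved with a Gaussian line-spread function.
    wave: uniform wavelength grid [A]; n_h: H I column [cm^-2]."""
    dl = np.clip(np.abs(wave - LYA), 0.05, None)   # avoid the singular core
    tau = 4.26e-20 * n_h / dl**2                   # damping-wing optical depth
    profile = np.exp(-tau)
    dw = wave[1] - wave[0]
    return gaussian_filter1d(profile, (fwhm_lsf / 2.355) / dw)

wave = np.arange(1170.0, 1260.0, 1.8)              # ~FOS G160L sampling
# popt, pcov = curve_fit(lya_absorption, wave, flux_norm, p0=[5e20])
print(lya_absorption(wave, 5e20).min())            # depth for N_H = 5e20 cm^-2
```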
In order to detect the active frequencies in the power spectrum and to optimize the S/N ratio, a Fourier analysis has been performed using the DFT algorithm of Deeming (1975) on the total UV continuum (sum of the five bands) and line (sum of N V, Si IV, C IV and He II) fluxes and on the zero-order light. From Fig. 2, the dominance of the $`\mathrm{\Omega }`$ variability is apparent, being about twice the $`\omega `$ signal, as well as the presence of substantial power at the sideband and orbital harmonic frequencies. To distinguish real signals from artifacts due to the sampling of the HST orbit, a least-squares technique which fits multiple sinusoids at fixed frequencies simultaneously was applied to each data set. A synthetic light curve with the same temporal sampling as the data was created and subtracted (residuals). The continuum and zero-order light reveal, besides the $`\omega `$ and $`\mathrm{\Omega }`$ frequencies, also the $`\omega 2\mathrm{\Omega }`$, $`\omega \mathrm{\Omega }`$ and $`\mathrm{\Omega }+\omega `$ sidebands, whereas in the emission lines the $`2\mathrm{\Omega }`$ and $`\omega \mathrm{\Omega }`$ frequencies are detected. In Fig. 2 the amplitude spectra relative to the multiple sinusoids and to the residuals are also shown for comparison. It should be noted that the peaks at $`54\,\mathrm{day}^{1}`$ and $`84\,\mathrm{day}^{1}`$ are identified as sideband frequencies of the spin and HST orbital frequencies. These are removed by the method, as shown by the residuals. A five (four) frequency composite sinusoidal function has then been used for each spectral band (emission line), and the derived amplitudes, reported in Table 2, have been compared with the average power in the DFTs of the residuals ($`\sigma `$) in the range of frequencies of interest (i.e. $`\nu \lesssim `$ 1.4 mHz) (column 8). While the signals at $`\omega 2\mathrm{\Omega }`$ (continuum) and at $`2\mathrm{\Omega }`$ (emission lines) on average fulfil a 4 $`\sigma `$ criterion, the other sidebands lie between 2.1 and 3.5 $`\sigma `$. A further check has been performed using the CLEAN algorithm (Roberts et al. 1987), which removes the windowing effects of the HST orbit. The CLEANed power spectra, adopting a gain of 0.1 and 500 iterations, indeed reveal the presence of the $`\omega 2\mathrm{\Omega }`$ and $`\omega \mathrm{\Omega }`$ sidebands in the continuum and of the $`2\mathrm{\Omega }`$ frequency in the emission lines. The lack of significant power at the other frequencies is consistent with the previous analysis. Hence, these weakly active frequencies will be considered with some caution in this analysis. A strong colour effect in the UV continuum is detected, with amplitudes decreasing at longer wavelengths. Different from the other lines is the behaviour of the $`\mathrm{Ly}\alpha `$ feature, whose absorption component is modulated at the $`\mathrm{\Omega }`$ frequency, while the variability at the spin frequency is at a 2$`\sigma `$ level. No significant variations are detected in the equivalent width of the absorption, nor in the emission component. In contrast to the flux behaviour, the V/R ratios are variable only at the $`\omega `$ frequency (Fig. 2, bottom panels). Noteworthy is the marginal spin variability of the C IV line. The amplitudes of the spin modulation, obtained from the least-squares fits, are reported in the last column of Table 2. ### 4.2 Optical data The analysis of the BVRI data acquired during three nights has been performed following the same procedure adopted for the HST data. 
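Since the frequencies are held fixed, the simultaneous multi-sinusoid fit described above reduces to a linear least-squares problem, each sinusoid contributing one cosine and one sine column. The sketch below is our own illustration, with FO Aqr's approximate spin (~20.9 min) and orbital (~4.85 h) periods inserted by hand (values not quoted in this section) and `t`, `flux` as placeholder data arrays.

```python
import numpy as np

def multi_sine_fit(t, flux, freqs):
    """Linear least-squares fit of flux(t) to a constant plus one sinusoid
    at each fixed frequency; returns amplitudes, phases and the residuals
    (data minus a synthetic light curve sampled exactly as the data)."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    amps = np.hypot(coef[1::2], coef[2::2])
    phases = np.arctan2(coef[2::2], coef[1::2])
    return amps, phases, flux - A @ coef

# approximate spin and orbital frequencies in cycles/day
w = 1.0 / (20.9 / (60 * 24))      # spin
Om = 1.0 / (4.85 / 24)            # orbit
freqs = [Om, w, w - 2 * Om, w - Om, w + Om]
# amps, phases, resid = multi_sine_fit(t, flux, freqs)
```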
Contrary to previous observations, the orbital variability also dominates in the optical, being $``$ 1.5 times the spin modulation. The presence of sideband modulations is more uncertain because of the lower quality of the data. Nevertheless, to ensure uniformity between the UV and optical results, a least-squares fit to the data has been applied using the same five frequency sinusoidal function, i.e. $`\mathrm{\Omega }`$, $`\omega `$, $`\omega 2\mathrm{\Omega }`$, $`\omega \mathrm{\Omega }`$, $`\omega +\mathrm{\Omega }`$. The resulting amplitudes, reported in Table 2, when compared with the noise in the residuals (column 8), can be considered as upper limits. As far as the fast photometry is concerned, the low quality of the data only allows the detection of the spin and orbital variabilities. In particular, the latter is detected on the first night with a pronounced dip which is not consistent with the refined orbital ephemeris, based on orbital minima, recently given by Patterson et al. (1998), which defines the inferior conjunction of the secondary star. Therefore these data will not be used for a multi-wavelength analysis of the pulsations. From this analysis new times of maxima for the orbital and rotational modulations are derived for the UV continuum and optical light: $`HJD_{\mathrm{orb}}^{\mathrm{max}}=\mathrm{2\hspace{0.17em}449\hspace{0.17em}971.2251}\pm 0.0006`$ in the UV; $`HJD_{\mathrm{spin}}^{\mathrm{max}}=\mathrm{2\hspace{0.17em}449\hspace{0.17em}971.11181}\pm 0.00010`$ in the UV; $`HJD_{\mathrm{orb}}^{\mathrm{max}}=\mathrm{2\hspace{0.17em}449\hspace{0.17em}971.2235}\pm 0.0034`$ in the optical; $`HJD_{\mathrm{spin}}^{\mathrm{max}}=\mathrm{2\hspace{0.17em}449\hspace{0.17em}971.11021}\pm 0.00035`$ in the optical. Both the UV and optical orbital maxima lead by $`\mathrm{\Delta }\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.145 those predicted by Patterson et al.’s ephemeris. Such a phase difference, discussed in sect. 7, is consistent with the previous UV results (Paper 1). On the other hand, the optical rotational maximum agrees within 8 per cent with that predicted by the new revised cubic ephemeris given by Patterson (1998, private communication): $`HJD=\mathrm{2\hspace{0.17em}444\hspace{0.17em}782.9168}(2)+0.014519035(2)E+7.002(7)\times 10^{13}E^21.556(2)\times 10^{18}E^3`$ (1) The UV rotational pulses lag the optical ones by $`\mathrm{\Delta }\mathrm{\Phi }_{\mathrm{rot}}=0.186`$. Such a colour effect will be discussed in more detail in sect. 5. Furthermore, the time of coherence between the spin and beat modulations is found to be $`HJD_{\mathrm{co}}`$ = 2 449 971.2335 $`\pm `$ 0.0070 in both the UV and the optical. Throughout this paper Patterson et al.’s orbital ephemeris will be used, but $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.0 will refer to the orbital maximum. Hence, phase coherence occurs at $`\mathrm{\Phi }_{\mathrm{orb}}=0.90\pm `$0.03, i.e. close to the orbital maximum, whilst in 1988 and 1990 it was found close to the orbital minimum (Osborne & Mukai 1989; Paper 1). Such phase changes are not uncommon for FO Aqr (Semeniuk & Kaluzny 1988; Hellier et al. 1990). ## 5 The rotational modulation The UV continuum, zero-order light and UV emission line fluxes, as well as their V/R ratios, have been folded in 56 phase bins along $`\mathrm{P}_{\mathrm{spin}}`$. The spin pulses, pre-whitened from all other frequency variations, are shown in Fig. 3 together with the optical B band pulses folded in 28 phase bins. 
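To make the use of the cubic ephemeris of eq. (1) concrete, the sketch below evaluates the spin phase of an arbitrary time and folds a light curve into phase bins, as done for Fig. 3. The cycle count E is obtained by iterating on the slowly varying quadratic and cubic terms; variable names and the example call are illustrative only.

```python
import numpy as np

T0, P = 2444782.9168, 0.014519035     # zero point [HJD] and period [d], eq. (1)
A2, A3 = 7.002e-13, -1.556e-18        # quadratic and cubic coefficients [d]

def spin_phase(hjd):
    """Spin phase from the cubic ephemeris, iterating on the cycle count E
    (the higher-order terms are tiny, so a few passes converge)."""
    hjd = np.asarray(hjd, dtype=float)
    E = (hjd - T0) / P
    for _ in range(3):
        E = (hjd - T0 - A2 * E**2 - A3 * E**3) / P
    return E % 1.0

def fold(hjd, flux, nbins=56):
    """Mean flux in spin-phase bins (empty bins yield NaN)."""
    flux = np.asarray(flux, dtype=float)
    idx = (spin_phase(hjd) * nbins).astype(int)
    return np.array([flux[idx == k].mean() for k in range(nbins)])

print(spin_phase(2449971.11181))      # the UV spin maximum quoted above
```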
A strong colour effect is observed in both the amplitudes and the phasing. The fractional amplitudes (amplitudes of the sinusoid $`\mathrm{Asin}(\omega \mathrm{t}+\varphi )`$ divided by the average value) decrease from 26$`\%`$ in the far-UV to $`16\%`$ in the near-UV and to $`10\%`$ in the optical. The far-UV pulse maximum is broader and lags the near-UV and optical maxima by $`\mathrm{\Delta }\varphi _{\mathrm{spin}}=0.091\pm 0.010`$ and 0.186$`\pm `$0.008 respectively. While the UV line fluxes follow the near-UV modulation, their V/R ratios are in phase with the far-UV, with a maximum blue shift when the far-UV pulse is at maximum. The pulsations in the line fluxes are 18$`\%`$, 12$`\%`$ and 10$`\%`$ in He II, C IV and Si IV respectively. The V/R ratios are generally $`<`$1, indicating the presence of a dominant blue component in the lines, which is also visible from the extended blue wings in the average spectrum. The velocity displacements indicate the presence of a spin S-wave in the line profiles similar to the optical one (Hellier et al. 1990). Both continuum and emission lines therefore strongly indicate the presence of two components which affect the rotational modulation in FO Aqr. The 797 FOS spectra have been spin-folded into 20 phase bins. A total of 780 light curves, each sampling a wavelength bin of $``$ 1.8 Å, were then produced and fitted with a sinusoid. The resulting amplitudes define the rotational pulsed spectrum as $`\mathrm{F}_\lambda =2\mathrm{A}_\lambda `$. This spectrum (shown in the enlargement of the lower panel of Fig. 3) has to be regarded as an upper limit to the modulated flux, since no pre-whitening could be performed given the low S/N of each of the 780 light curves. This spectrum gives evidence of modulation not only in the main emission lines and in the $`\mathrm{Ly}\alpha `$ absorption, but also in the weaker emission features identified in sect. 3. Broad band UV continuum and optical photometric spin pulsed fluxes, obtained from the multi-frequency fit and reported in Fig. 3, provide a correct description of the rotational pulsed energy distribution. A spectral fit to the broad band UV and optical spectrum, using a composite spectral function consisting of two blackbodies, gives 37 500$`\pm `$500 K and 12 000$`\pm `$400 K ($`\chi _{\mathrm{red}}^2=0.91`$). The projected fractional area of the hot component is $``$0.11 A<sub>wd</sub>, while the cool one covers $`10.7\mathrm{A}_{\mathrm{wd}}`$, for $`\mathrm{R}_{\mathrm{wd}}=8\times 10^8\mathrm{cm}`$ and d=325 pc (Paper 1). The FOS rotational pulsed spectrum shows the presence of a $`\mathrm{Ly}\alpha `$ absorption feature which gives a hydrogen column density of $`(8\pm 2)\times 10^{20}\mathrm{cm}^{2}`$. On the other hand, assuming $`N_\mathrm{H}=5\times 10^{20}\mathrm{cm}^{2}`$, as derived from the grand average spectrum, a composite function consisting of a white dwarf model atmosphere at 36 000 K and of the same 12 000 K blackbody gives an equally satisfactory fit to the whole FOS spectrum. For a distance of 325 pc the radius of the white dwarf is 4.9$`\times 10^8`$ cm, in agreement with that of the hot blackbody component. While the detection of the hot component is new, a comparison with previous spectral analyses of the optical and IR spin pulses observed in 1990 shows that the temperature of the cool component has not changed with time, but that it has instead suffered a decrease in area by a factor of $``$ 1.5. 
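A schematic version of the two-blackbody decomposition quoted above: each component is a blackbody scaled by its projected area in units of A<sub>wd</sub> = πR<sub>wd</sub>², placed at d = 325 pc. The band centres and the observed pulsed fluxes `f_obs`, `f_err` are placeholders; this sketches the method, not the authors' actual fit.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16     # cgs constants
D = 325 * 3.086e18                             # 325 pc in cm
A_WD = np.pi * (8e8) ** 2                      # projected WD area, R_wd = 8e8 cm

def b_lambda(lam_A, T):
    """Planck specific intensity B_lambda [erg s^-1 cm^-2 A^-1 sr^-1]."""
    lam = lam_A * 1e-8                         # A -> cm
    return (2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))) * 1e-8

def two_bb(lam_A, T_hot, f_hot, T_cool, f_cool):
    """Flux at Earth from two components of projected areas f*A_wd:
    F = B_lambda(T) * area / D**2."""
    return (f_hot * b_lambda(lam_A, T_hot)
            + f_cool * b_lambda(lam_A, T_cool)) * A_WD / D**2

# placeholder band centres [A]; demo evaluation at the quoted fit values
lam = np.array([1270., 1437., 1692., 2060., 2455., 3400., 4400.])
print(two_bb(lam, 37500., 0.11, 12000., 10.7))
# popt, pcov = curve_fit(two_bb, lam, f_obs, sigma=f_err,
#                        p0=[37500., 0.1, 12000., 10.])
```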
## 6 The sideband modulations The beat $`\omega \mathrm{\Omega }`$ modulation, although weak among the detected sidebands, shows an anti-phased behaviour between the line fluxes and the continuum (Fig. 4a). The UV line fluxes, with average fractional amplitudes of $`11\%`$, show a minimum when the UV continuum is at maximum. Their maximum shows a dip-like feature centered on the UV continuum minimum. Colour effects are also encountered, the far-UV pulses being stronger ($`11\%`$) than the near-UV ones ($`6\%`$) and lagging the near-UV by $``$ 0.2 in phase. The trend of decreasing amplitudes at increasing wavelengths is confirmed in the optical, where an upper limit of $``$ 2$`\%`$ can be set on the fractional amplitudes. A similar colour effect is also observed in the $`\omega +\mathrm{\Omega }`$ sideband pulsation, with fractional amplitudes ranging from $`20\%`$ in the far-UV to $`7\%`$ in the near-UV and with a broadening of the maximum towards longer wavelengths (Fig. 4b). The strong $`\omega 2\mathrm{\Omega }`$ pulsation only shows a wavelength dependence in the fractional amplitudes, which are $`26\%`$ in the far-UV, decreasing to $`12\%`$ in the near-UV (Fig. 4c) and to $`23\%`$ (upper limit) in the optical. The modulated spectra at the three frequencies have been derived following the same procedure as for the rotational pulsed spectrum. The anti-phased behaviour of the emission lines is seen in the beat pulsed spectrum, where they appear as absorption features, except for residuals in the C IV and He II lines (Fig. 4d, bottom panel). The modulation spectra at the $`\omega 2\mathrm{\Omega }`$ and $`\omega +\mathrm{\Omega }`$ frequencies show weak emissions at Si IV, C IV and He II, indicating a marginal variability at these frequencies. The UV continuum energy distributions of these variabilities are best represented by power laws $`\mathrm{F}_\lambda \propto \lambda ^{\alpha }`$ with spectral index $`\alpha `$ between 0.8 and 1.7, rather than by blackbodies (20 000 – 25 000 K), possibly suggesting that more than one component is acting. Given the low level of confidence of the sideband variabilities, it is not possible to derive further information. ## 7 The orbital variability The UV and optical orbital modulations have been investigated by folding the FOS continuum broad band and emission line fluxes in 28 orbital phase bins. The light curves have been prewhitened from the other active frequencies using the results of the multi-frequency fits. For the IUE data, continuum broad band and emission line flux measures have been performed on each SWP and LWP spectrum. Three broad bands have been selected in each spectral range, five of them coinciding with the FOS selected bands and a sixth one in the range $`\lambda `$ 2900–2985. The contribution of the spin pulsation, as derived from the multi-frequency fit, has been removed. The best fit blackbody spin pulsed spectrum has been used to allow prewhitening in the range $`\lambda `$2900–2985. The FOS and IUE broad band continuum fluxes in the far-UV, mid-UV and near-UV, as well as the zero order and B band light curves, are reported in the left panel of Fig. 5, while the emission line fluxes of Si IV, C IV and He II are shown in the right panel. The orbital gaps due to the HST sampling are apparent. A strong colour dependence is encountered in the modulation amplitudes as well as in the phasing. Fractional amplitudes range from 40$`\%`$ in the far-UV to 28$`\%`$ in the near-UV (IUE band) and 15$`\%`$ in the optical. 
The modulation amplitudes have then increased by a factor $``$ 2.5 in the UV and $``$ 1.5 in the optical with respect to 1990. However, the phasing of the UV maximum and minimum has not changed with time, these occurring at $`\mathrm{\Phi }_{\mathrm{orb}}=0.86`$ and $`\mathrm{\Phi }_{\mathrm{orb}}=0.34`$ respectively. The UV modulation is more sinusoidal, whilst the optical light curve is more structured, with a double humped maximum between $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.75 and $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.0. A comparison with the optical behaviour in 1990 indicates the absence of a broad maximum centered at $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.0 and a less defined minimum. The current observations are inadequate to resolve the orbital dip due to a grazing eclipse of the accretion disc in either the UV or the optical range. The orbital modulation in the UV emission line fluxes is strong, with fractional amplitudes of 40$`\%`$ in N V, 20$`\%`$ in Si IV, 29$`\%`$ in C IV and 23$`\%`$ in He II, and is almost in phase with the UV continuum. The spectrum of the UV orbital variability, derived with the same procedure as described before, is shown in the enlargement of Fig. 5 (bottom panel). From the inset the $`\mathrm{Ly}\alpha `$ absorption feature is apparent and is consistent with the neutral hydrogen column density of $`(8\pm 2)\times 10^{20}`$ cm<sup>-2</sup> inferred from the spin pulsed spectrum. Hence, while this absorption in the orbital pulsation spectrum is clearly circumstellar, the same nature for that in the spin pulsed spectrum cannot be excluded. The UV FOS spectrum requires a hot component with a blackbody temperature of 21 500$`\pm `$500 K (Fig. 5, bottom panel enlargement). The composite UV and optical broad band energy distribution confirms the previous results on the presence of two components, a hot one at 19 500 $`\pm `$ 500 K and a cool one at 5 700 $`\pm `$ 200 K ($`\chi _{\mathrm{red}}^2=1.1`$) (Fig. 5, bottom panel). With the current UV observations it is now possible to constrain the temperature of the hot emitting region. The temperature of the cool component is in agreement, within errors, with that inferred in Paper 1. A substantial increase, by a factor of $`2.6`$, in the area of the hot region is found when compared to the 1990 epoch, which is now $`12\mathrm{A}_{\mathrm{wd}}`$. The emitting area of the cool component is instead similar to that previously derived (Paper 1). ## 8 Discussion The HST/FOS and IUE spectroscopy has revealed new insights into the UV variability of FO Aqr. ### 8.1 The periodic variations The UV continuum and emission line fluxes are found to be strongly variable at the orbital period. This periodicity also dominates the optical range, where FO Aqr was previously found to be spin dominated. The time series analysis indicates the presence of other periodicities, the negative $`\omega 2\mathrm{\Omega }`$ sideband being much stronger than the beat $`\omega \mathrm{\Omega }`$ in both the UV and optical ranges. The presence of sidebands with different amplitudes at different epochs is rather common in FO Aqr, although the beat is usually the strongest (Patterson & Steiner 1983; Warner 1986; Semeniuk & Kaluzny 1988; Chiappetti et al. 1989; Paper 1; Marsh & Duck 1996). In particular, the intermittent occurrence of a stronger pulsation at the $`\omega 2\mathrm{\Omega }`$ frequency was already noticed (Warner 1986; Patterson et al. 1998). 
The strong negative sideband $`\omega 2\mathrm{\Omega }`$ cannot be produced by an amplitude modulation of the rotational pulses at the $`2\mathrm{\Omega }`$ frequency, since in that case the positive sideband $`\omega +2\mathrm{\Omega }`$ should also have been present. Also, an orbital variability of the amplitude of the $`\omega \mathrm{\Omega }`$ modulation cannot alone be responsible, since it is too weak. Hence the $`\omega 2\mathrm{\Omega }`$ pulsation should be dominated by the effects of an unmodulated illumination from the white dwarf, which naturally gives rise to the orbital variability (Warner 1986). The occurrence of coherence between the spin and beat pulsations appears to be different from epoch to epoch (Semeniuk & Kaluzny 1988; Osborne & Mukai 1989; Paper 1). It was proposed that phase coherence close to the optical orbital minimum could be possible if the reprocessing site(s) are viewing the lower accreting pole (Paper 1). The observed shift of half an orbital cycle would then imply that the reprocessing region(s) are now viewing the main accreting pole, as predicted by the standard reprocessing scenario (Warner 1986). The behaviour of the UV emission lines is different between fluxes and V/R ratios. The line fluxes are strongly variable at the orbital period, the spin variability being 1.6 times lower. On the other hand, their V/R ratios only show a rotational S-wave, and that of the C IV line is surprisingly weak. The lack of detection of an orbital S-wave can be ascribed to the low amplitude ($``$ 300–400 $`\mathrm{km}\,\mathrm{s}^{1}`$) velocity displacements known from optical data (Hellier et al. 1989; Marsh & Duck 1996), which are not detected because of the low spectral resolution of the FOS data. ### 8.2 The rotational pulses Both the shapes and the amplitudes of the UV and optical continuum spin pulses indicate the presence of two components: one dominating the near-UV and optical ranges, already identified in Paper 1, and a new contribution dominating the far-UV pulses, which lags the first one by $``$ 0.2 in phase. Furthermore, a different behaviour between emission line fluxes and V/R ratios is observed. While the latter show a spin S-wave in phase with the far-UV continuum, the line fluxes follow the near-UV and optical pulsations. The maximum blue-shift, found at the rotational maximum of the far-UV pulses, indicates that the bulk of the velocity motions in the emission lines maps the innermost regions of the accretion curtain. The outer curtain regions are then responsible for the X-ray illumination effects seen in the line fluxes and in the near-UV and optical continua. A direct comparison with the previous X-ray observations reported by Beardmore et al. (1998) is not possible since, adopting their linear spin ephemeris, the UV and optical maxima lag their predicted optical maximum by $`\mathrm{\Delta }\mathrm{\Phi }_{\mathrm{spin}}`$=0.4. However, the X-ray pulse maxima observed by Beardmore et al. (1998) typically lag their optical phase zero by $`\mathrm{\Delta }\mathrm{\Phi }_{\mathrm{spin}}`$ =0.2 (see their Fig. 3), consistently with the lag observed between the far-UV and optical pulses. Hence this difference is an indication that the far sides of the accretion curtain come into view earlier than the innermost regions. The spectrum of the pulsation reveals regions at $``$ 37 000 K covering a relatively large area, $`0.1\mathrm{A}_{\mathrm{wd}}`$, with respect to typical X-ray fractional areas $`f<10^{3}`$ (Rosen 1992). Such hot components have also been observed in the IPs PQ Gem (Stavroyiannopoulos et al. 
1997) and EX Hya (de Martino 1998). The presence of the $`\mathrm{Ly}\alpha `$ absorption feature in this spectrum can be partially due to the photospheric absorption of the heated white dwarf, which has a temperature and fractional area similar to those of the blackbody representation. On the other hand, both the orbital and rotational modulated spectra give similar values of $`N_\mathrm{H}`$ if this absorption is of circumstellar nature. Hence, while only in AE Aqr are the UV pulses clearly associated with the heated white dwarf (Eracleous & Horne 1994, 1996), those in FO Aqr can be associated either with the innermost regions of the accretion curtain onto the white dwarf or with its heated polar regions. The second component, identified as a cool 12 000 K region, covers $`11\mathrm{A}_{\mathrm{wd}}`$, a factor $`1.5`$ lower than previously found in Paper 1. Thus the decrease in the optical amplitudes does not involve substantial changes in temperature, but rather in the size of the accretion curtain. Such lower temperatures characterizing the near-UV and optical pulses are also recognized in other IPs (de Martino et al. 1995; Welsh & Martell 1996; Stavroyiannopoulos et al. 1997; de Martino 1998). Although a two component pulsed emission might be a crude representation, it is clear that temperature gradients are present within the accretion curtain, which extends up to $`6\mathrm{R}_{\mathrm{wd}}`$. The bolometric flux involved in the spin modulation due to both components amounts to 6.3$`\times 10^{11}`$ $`\mathrm{ergs}\,\mathrm{cm}^{2}\,\mathrm{s}^{1}`$. Although no contemporary X-ray observations are available, this accounts for $`26\%`$ of the total accretion luminosity as derived from the ASCA 1993 observations (Mukai et al. 1994). ### 8.3 The sideband variability The UV continuum pulsations observed at the sideband frequencies $`\omega 2\mathrm{\Omega }`$, $`\omega \mathrm{\Omega }`$ and $`\omega +\mathrm{\Omega }`$ indicate the presence of a relatively hot component at $``$ 20 000–25 000 K. The lack of adequate data in the optical range does not allow one to confirm the cool ($``$ 7 000 K), and hence possible second, component in the beat pulsed energy distribution found in Paper 1. The phase lags of the far-UV maximum with respect to that in the near-UV in the $`\omega \mathrm{\Omega }`$ and $`\omega +\mathrm{\Omega }`$ pulsations are similar to that observed in the rotational pulses. This is consistent with these pulsations being produced by amplitude variations of the spin pulses at the orbital period. In contrast, the prominent negative sideband $`\omega 2\mathrm{\Omega }`$ variability is not affected by phase shift effects, indicating that such variability is indeed mainly due to an aspect dependence of the reprocessing site at the orbital period. No strong pulsation in the UV emission lines is observed at these frequencies, except for the interesting anti-phased behaviour of these lines at the beat period, where they are observed as weak absorption features in the modulated spectrum. Such behaviour, although much more prominent, is also observed in the IP PQ Gem (Stavroyiannopoulos et al. 1997). Though this is not easy to understand, a possibility could be that the reprocessing site producing the $`\omega \mathrm{\Omega }`$ component in the emission lines is viewing the lower pole instead of the main X-ray illuminating pole, as also suggested by Stavroyiannopoulos et al. (1997). 
### 8.4 The orbital variability The present study confirms previous results in which the orbital modulation is composed of two contributions, identified as the illuminated bulge and the heated face of the secondary star, the former being at superior conjunction at $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.86, while the latter is at superior conjunction at $`\mathrm{\Phi }_{\mathrm{orb}}`$ = 0.0. The double-humped maximum in the optical light curve can be understood in terms of the relative proportion of a strong bulge contribution with respect to that of the secondary star. Indeed no changes in the temperatures are found (the hot one is better constrained with the present data), but a substantial change, by a factor of $`2.6`$ since 1990, is found in the emitting area of the bulge itself. All this indicates that the illumination effects are basically unchanged whilst the inflated part of the disc has increased. The total bolometric flux involved in the orbital variability amounts to 2.7$`\times 10^{10}`$ $`\mathrm{ergs}\,\mathrm{cm}^{2}\,\mathrm{s}^{1}`$, which is a factor $``$ 4 larger than that of the rotational pulsation. Neglecting the sideband contributions to a first approximation, the total modulated flux amounts to 3.3$`\times 10^{10}`$ $`\mathrm{ergs}\,\mathrm{cm}^{2}\,\mathrm{s}^{1}`$ and corresponds to a reprocessed luminosity of $`4.2\times 10^{33}`$ $`\mathrm{ergs}\,\mathrm{s}^{1}`$, which is approximately of the order of magnitude of the accretion luminosity derived from X-rays (Mukai et al. 1994). Then, assuming the balance of the energy budgets of the reprocessed and primary X-ray radiations, and hence of the accretion luminosity, an estimate of the accretion rate of $`\dot{\mathrm{M}}=6.7\times 10^{10}\mathrm{M}_{\odot }\mathrm{yr}^{1}`$ is derived. ### 8.5 The long term variability FO Aqr has displayed a change in its power spectrum at optical and UV wavelengths on a time scale of five years. It was also brighter, by 0.3 mag and 0.2 mag in the two ranges, with respect to 1990. This difference is accounted for by the orbital modulated flux. The study of the spectra of the periodic variabilities has shown that a shrinking of the accretion curtain by a factor of $`1.5`$ has occurred, while the inflated part of the disc has increased in area by a factor of 2.6. Such changes indicate variations in the accretion parameters. Worth noticing is that the unmodulated UV and optical continuum component has not changed with time, indicating that the steady emission from the accretion disc (Paper 1) has not been affected. These results are in agreement with the long term trend of the X-ray power spectra (Beardmore et al. 1998), which showed that FO Aqr was dominated by the spin pulsation in 1990, while in 1988 and in 1993 prominent orbital and sideband variabilities were present. Changes from a predominantly disc-fed accretion to a disc-overflow (or stream-fed) accretion have been the natural explanation for such changes in the X-ray power spectra. As proposed in Paper 1, the bulge provides a source for a variable mass transfer onto the white dwarf, and an increase in its dimensions accounts for a predominant disc-overflow towards the white dwarf. Beardmore et al. (1998) suggested that changes in the accretion mode could be triggered by variations in the mass accretion rate, and the analysis presented here is indeed in favour of this hypothesis. 
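As a quick numerical cross-check of the accretion-rate estimate of sect. 8.4 above (a sketch only: it assumes a white dwarf mass of about 0.6 M⊙, a value not quoted in the text, together with the R<sub>wd</sub> = 8×10<sup>8</sup> cm and d = 325 pc values used earlier, and anticipates the r<sub>mag</sub> ∝ Ṁ<sup>−2/7</sup> scaling of eq. (2) below):

```python
# Cross-check: L_acc = G * M * Mdot / R  =>  Mdot = L * R / (G * M)
G, MSUN, YR = 6.674e-8, 1.989e33, 3.156e7   # cgs constants
L = 4.2e33                                   # reprocessed luminosity [erg/s]
R = 8e8                                      # R_wd [cm]
M = 0.6 * MSUN                               # assumed WD mass (not from the text)
mdot = L * R / (G * M)                       # [g/s]
print(mdot / MSUN * YR)                      # ~6.7e-10 Msun/yr, as quoted

# r_mag ∝ Mdot**(-2/7) (eq. 2 below): a linear shrink by ~1.2 implies
print(1.2 ** 3.5)                            # Mdot increase by a factor ~2
```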
An estimate of the change in the accretion rate producing a shrinking of the accretion curtain can be inferred from the relation between the magnetospheric radius, the accretion rate and the magnetic moment (Norton & Watson 1989): $`r_{mag}=\varphi \mathrm{\hspace{0.17em}2.7}\times 10^{10}\mu _{33}^{4/7}\dot{M}_{16}^{2/7}M_{wd}^{1/7}`$ (2) where $`\varphi \lesssim 1`$ is a dimensionless factor accounting for a departure from spherical symmetry, $`\mu _{33}`$ is the magnetic moment in units of $`10^{33}\mathrm{G}\,\mathrm{cm}^3`$, $`\dot{\mathrm{M}}_{16}`$ is the mass accretion rate in units of $`10^{16}\mathrm{g}\,\mathrm{s}^{1}`$ and $`\mathrm{M}_{\mathrm{wd}}`$ is the white dwarf mass in units of $`\mathrm{M}_{\odot }`$. Hence, assuming that the accretion curtain reaches the magnetospheric boundary, a reduction of its linear extension by a factor of $``$ 1.2 implies that the accretion rate has increased by a factor of $``$ 2 in five years. The long term spin period variations observed in FO Aqr from 1981 to 1997 (Patterson et al. 1998), which changed from a spin-down to a recent spin-up since 1992, were proposed to be due to variations around the equilibrium period produced by long term variations in $`\dot{\mathrm{M}}`$. The increase in brightness level and the results found in the present analysis are strongly in favour of this interpretation. ## 9 Conclusions The present HST/FOS and IUE spectroscopy has allowed us for the first time to infer the characteristics of multiple periodicities in the UV emission of FO Aqr. Coordinated optical photometry has provided further constraints, summarized as follows: 1) The rotational pulsations in the UV and optical are consistent with the current accretion curtain scenario. They reveal the presence of strong temperature gradients within the curtain, moving from the footprints onto the white dwarf surface to several white dwarf radii. A spin S-wave, which maps the innermost regions of the accretion curtain, is observed in the UV emission lines. 2) The orbital sideband variabilities indicate that reprocessing is occurring at fixed regions within the binary frame. A different behaviour is observed in the emission line and continuum beat pulses, indicating a complex reprocessing scenario. 3) The orbital UV variability is confirmed to arise from the illuminated side of the inflated disc (bulge), while the optical modulation is produced by heating effects at the secondary star. 4) A change in the relative proportion of the rotational and orbital modulation amplitudes is found on a timescale of five years. This is interpreted as a reduction in the dimensions of the accretion curtain accompanied by an increase in the bulge size. These observations indicate a long term change in the accretion mode, in which FO Aqr has switched from a disc-fed to a disc-overflow state triggered by changes in the mass accretion rate. ###### Acknowledgements. The authors wish to acknowledge the valuable help of the ST ScI staff, who provided new calibration files for the square apertures of the FOS instrument before publication. J. Patterson is gratefully acknowledged for providing the new revised spin ephemeris quoted in the text.
no-problem/9909/hep-lat9909064.html
ar5iv
text
# OCHA-PP-142 Landau Gauge Fixing supported by Genetic Algorithm ## 1 INTRODUCTION Gauge fixing degeneracies (the existence of Gribov copies) are generic phenomena in non-abelian continuum gauge theories, and thus there exist fundamental problems, e.g., what is the correct gauge fixed measure in the path integral formalism. Although the foundation of lattice gauge theories does not necessitate gauge fixing, it is, however, often required for field theoretic transcriptions of their nonperturbative dynamics. The principle of the Landau gauge fixing algorithm is given as an optimization problem of some function along the gauge orbit of the link variables. There are in general many extrema of the optimization function, and all these extremal points on the gauge orbit correspond to the Landau gauge, i.e., to Gribov copies. The absolute maximum corresponds to the minimal Landau gauge. In order to fix the Landau gauge uniquely, the minimal Landau gauge is the most favourable goal. Local search algorithms were developed based on a chain of gauge transformations. Since there are many Gribov copies, the gauge orbit paths are easily captured by these extrema. We call this method the steepest ascent method (SA). In situations where SA fails to attain an absolute extremum, one can try a simple random search on the gauge orbit with succeeding SA. There is only one method aiming at the minimal Landau gauge fixing, that of Hetrick and de Forcrand (HdeF), which is systematic in the sense that no random trial is involved. However, it works successfully only for large $`\beta `$ samples, i.e., rather smooth configurations. Thus finding an efficient algorithm for the minimal Landau gauge fixing is still an open problem. We report results of an attempt with some GA type methods, in comparison with other methods such as the simple random gauge transformation (RGT) method. We work on $`SU(2)`$ gauge theory on an $`8^4`$ lattice, and define the gauge fields as $`A_{x,\mu }={\displaystyle \frac{1}{2i}}(U_{x,\mu }U_{x,\mu }^{\dagger })`$; the optimization function, the fitness, is then given as $$F_U(G)=\underset{x,\mu }{\sum }\frac{1}{2}Tr(U_{x,\mu }^G),$$ (1) where $`U_{x,\mu }^G=G_x^{\dagger }U_{x,\mu }G_{x+\mu }`$. The fitness $`F_U(G)`$ can be viewed as the negative of the energy of an $`SU(2)`$ spin system $`G`$ sitting on the sites, and thus the problem is equivalent to finding the lowest energy state under the randomized interaction $`U`$. A straightforward GA strategy has been applied to the minimal Landau gauge fixing. The preliminary results of that Landau gauge fixing show that the straightforward application of GA is not good enough to become a practical method. We tested three types of GA methods, which differ from each other in how RGT are incorporated in the algorithms. We compare their performance with that of the simple RGT method. ## 2 Algorithms Our aim is to develop algorithms efficient for small $`\beta `$. The basic building block for the algorithms is RGT. The simple RGT (TRGT) method, which applies RGT on the whole lattice, takes time, while it is difficult for SA paths to escape from a Gribov copy if the RGT is restricted to blocks of too small an area on the lattice. Thus the main features of our algorithms consist in how the blocks to which RGT is applied are determined. Given a Gribov copy link configuration $`U`$, the following three types of blocking for RGT are devised: 1. The whole lattice is partitioned into $`N_{div}^d`$ chequered blocks, where the number of dimensions is $`d=4`$. Then RGT is applied on the white blocks and a constant RGT on each black one, and vice versa. 
We call this method the blocked RGT (BRGT) method. 2. We set two parameters, $`R1`$ and $`R2`$, and the sites $`i`$ where RGT is applied are chosen according to the local fitness density $`f(i)`$, by $`f(i)<R1`$ or $`R2<f(i)`$. We call this method the local fitness density RGT (LFRGT) method. 3. An Ising spin interaction on a randomly chosen coarse lattice is defined from the gauge spin interaction given by $`U`$, such that at least one antiferromagnetic interaction is involved. Through a Monte Carlo simulation of this Ising system, one obtains up-spin blocks $`B_+`$ and down-spin blocks $`B_{}`$. The blocks for RGT are given by one of these sets of blocks. $`\beta _{Ising}`$ is chosen such that the sizes of the two sets of blocks $`B_\pm `$ become comparable. We call this method the Ising RGT (IRGT) method. We use the local exact algorithm for the SA method. Given a Gribov copy $`U`$, the SA method, following RGT with one of the three blockings, brings $`U`$ to an extremum of the fitness by steps of gauge transformations. This new copy in the Landau gauge is put as the initial copy for the next iteration. This sub-procedure is repeated $`M_{itr}`$ times. The original Gribov copy and the maximum fitness copy among the $`M_{itr}`$ new gauge copies are compared in fitness value. If the fitness of the new one is higher, the sub-procedure is started again with this new copy as the initial one. Otherwise the process stops, and the initial copy is considered to be the expected configuration with the maximum fitness value. In addition to the above basic algorithm, two types of modification are devised, as follows: 1. As the initial copy in the sub-process, the better fitness copy between the current copy and the preceding one is always chosen. We call this the inter-selection-on (IS) scheme. 2. As the initial copy in the sub-process, the product of crossing is adopted, where a chequered block crossing is done between the best fitness copy and the second best one among the copies obtained so far. We call this the crossing-on (C) scheme. We tested these algorithms on an $`SU(2)`$ $`8^4`$ lattice with $`\beta =2.0,1.75,1.5`$, and tuned some parameters, such as $`N_{div}`$, $`\beta _{Ising}`$ and $`M_{itr}`$, with or without the IS and/or C schemes. ## 3 Results and Performance Our three GA type methods were executed on the same set of 50 copies randomly produced from a suitably chosen copy from the samples at $`\beta =2.0,1.75,1.5`$. The TRGT method and the method of HdeF were also tested on the same set. The performance of these methods, namely the hit rate of the minimal Landau gauge and the average time consumption for the gauge fixing, is compared. We fix $`M_{itr}=5`$, and in the parameter-search tests for $`\beta =2.0`$ we found that BRGT with $`N_{div}=2`$ shows a sufficient global search power, while BRGT with $`N_{div}=4`$ shows a lower hit rate. The IS scheme can be viewed as a kind of elitism, and it is known that elitism is a suitable strategy when global search power is available. BRGT with the IS scheme and $`N_{div}=2`$ shows a powerful and efficient search, while IRGT with IS does not. For LFRGT, the cut parameters $`R1`$ and $`R2`$ affect its search power. Without $`R2`$, or with too large an $`R2`$, even a high $`R1`$ does not work well, nor does a low $`R2`$ without $`R1`$. Since LFRGT with both parameters suitably chosen achieves an efficient search, the IS scheme helps. From Table 1, our algorithms, BRGT with IS and $`N_{div}=2`$, and LFRGT with IS, $`R1=0.5`$ and $`R2=0.85`$, have high hit rates and exhibit good performance comparable with TRGT. 
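To make the optimization problem and the local exact SA update of Sect. 2 concrete, the following sketch (ours, not the authors' code) stores SU(2) links and gauge spins as unit quaternions (a0, a⃗) with U = a0 + i a⃗·σ⃗, so that (1/2)Tr U = a0. Since a sum of SU(2) elements is proportional to an SU(2) element, the site-wise maximizer of the local fitness is simply the normalized sum of the attached link-neighbour products; the simultaneous whole-lattice update below omits the even/odd ordering a production code would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def su2_random(shape):
    """Uniform SU(2) elements as unit quaternions (a0, a1, a2, a3)."""
    q = rng.normal(size=shape + (4,))
    return q / np.linalg.norm(q, axis=-1, keepdims=True)

def su2_mul(p, q):
    """Quaternion product for the convention U = a0 + i a.sigma."""
    a0, a = p[..., :1], p[..., 1:]
    b0, b = q[..., :1], q[..., 1:]
    return np.concatenate(
        [a0 * b0 - np.sum(a * b, axis=-1, keepdims=True),
         a0 * b + b0 * a - np.cross(a, b)], axis=-1)

def dagger(p):
    return np.concatenate([p[..., :1], -p[..., 1:]], axis=-1)

L = 8
U = su2_random((L, L, L, L, 4))        # links U_{x,mu} (random, untuned beta)
G = su2_random((L, L, L, L))           # gauge spins G_x

def fitness(U, G):
    """F_U(G) of eq. (1): sum_{x,mu} (1/2)Tr(G_x^dag U_{x,mu} G_{x+mu});
    (1/2)Tr of a quaternion is its a0 component."""
    F = 0.0
    for mu in range(4):
        Gf = np.roll(G, -1, axis=mu)   # G_{x+mu}, periodic b.c.
        F += su2_mul(su2_mul(dagger(G), U[..., mu, :]), Gf)[..., 0].sum()
    return F

def local_sa_sweep(U, G):
    """'Local exact' steepest-ascent update: each G_x is replaced by the
    SU(2) element maximizing (1/2)Tr(G_x^dag W_x), where
    W_x = sum_mu [U_{x,mu} G_{x+mu} + U_{x-mu,mu}^dag G_{x-mu}],
    i.e. by W_x normalized."""
    W = np.zeros_like(G)
    for mu in range(4):
        W += su2_mul(U[..., mu, :], np.roll(G, -1, axis=mu))
        W += np.roll(su2_mul(dagger(U[..., mu, :]), G), 1, axis=mu)
    return W / np.linalg.norm(W, axis=-1, keepdims=True)

for sweep in range(100):
    G = local_sa_sweep(U, G)
print(fitness(U, G) / (4 * L**4))      # average (1/2)Tr per link after relaxation
```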
The HdeF method is known not to tend toward the maximum fitness for small $`\beta `$. The hit rate performance of the various methods is given in Figure 1, and the average time consumption is shown in Table 1. This work is supported by the Japan Society for the Promotion of Science, Grant-in-aid for Scientific Research (C) (No. 11640251).
no-problem/9909/hep-ph9909250.html
ar5iv
text
# The Solar Neutrino Problem and Gravitationally Induced Long-wavelength Neutrino Oscillation IFUSP-DFN/99-034 hep-ph/9909250 ## Abstract We have reexamined the possibility of explaining the solar neutrino problem through long-wavelength neutrino oscillations induced by a tiny breakdown of the weak equivalence principle of general relativity. We found that such gravitationally induced oscillations can provide a viable solution to the solar neutrino problem. Nature seems to point strongly toward neutrino oscillations. The compelling evidence coming from solar neutrino experiments, spanning over two decades, and from atmospheric neutrino experiments is difficult, if not impossible, to accommodate without admitting neutrino flavor conversion. Nevertheless, the dynamics underlying such conversion is yet to be established and, in particular, does not have to be a priori related to the electroweak force. The interesting idea that gravitational forces may induce neutrino mixing and flavor oscillations, if the weak equivalence principle of general relativity is violated, was proposed by Gasperini and independently by Halprin and Leung about a decade ago, and thereafter many works have been performed on this subject. In Ref. this was shown to be phenomenologically equivalent to velocity oscillations of neutrinos due to a possible violation of Lorentz invariance. So even a tiny breakdown of the space-time structure of special and/or general relativity may lead to flavor oscillations even if neutrinos are strictly massless. Some theoretical insight on the type of gravitational potential that could violate the weak equivalence principle can be found in Ref. . A discussion of the departure from exact Lorentz invariance in the standard model Lagrangian, in a perturbative framework, is developed in Ref. . Several authors have investigated the possibility of solving the solar neutrino problem (SNP) by such gravitationally induced neutrino oscillations, generally finding it necessary, in this context, to invoke an MSW-like resonance, since they conclude that it is impossible for this type of long-wavelength vacuum oscillation to explain the specific energy dependence of the data. Recently these neutrino oscillation mechanisms have been investigated in the light of the experimental results from Super-Kamiokande (SK) on the atmospheric neutrino anomaly, obtaining stringent limits for the $`\nu _\mu \rightarrow \nu _\tau `$ channel. We consider in this letter the possibility of explaining the most precise and recent solar neutrino data coming from gallium, chlorine and water Cherenkov detectors by means of neutrino mixing due to a “just-so” violation of the weak equivalence principle (VEP). We demonstrate that all the data can be well accounted for by VEP induced long-wavelength neutrino oscillations, in contrast to previous conclusions. We assume that neutrinos of different species will incur different time delays due to the weak, static gravitational field in the intervening space on their way from the Sun to the Earth. Their motion in this gravitational field can be appropriately described by the parametrized post-Newtonian formalism, with a different parameter for each neutrino type. 
In this manner, neutrinos that are weak interaction eigenstates and neutrinos that are gravity eigenstates will be related by a unitary transformation that can be parameterized, assuming only two neutrino flavors, by a single parameter, the mixing angle $`\theta _G`$, which can lead to flavor oscillation. Let us briefly revise the formalism that will be used in this work. We will assume oscillations only between two species of neutrinos, which are degenerate in mass, either between active and active ($`\nu _e\rightarrow \nu _\mu ,\nu _\tau `$) or active and sterile ($`\nu _e\rightarrow \nu _s`$, $`\nu _s`$ being an electroweak singlet) neutrinos. The evolution equation for neutrino flavors $`\alpha `$ and $`\beta `$ propagating through the gravitational potential $`\varphi (r)`$ in the absence of matter is: $$i\frac{d}{dt}\left[\begin{array}{c}\nu _\alpha \\ \nu _\beta \end{array}\right]=E\varphi (r)\mathrm{\Delta }\gamma \left[\begin{array}{cc}\mathrm{cos}2\theta _G& \mathrm{sin}2\theta _G\\ \mathrm{sin}2\theta _G& \mathrm{cos}2\theta _G\end{array}\right]\left[\begin{array}{c}\nu _\alpha \\ \nu _\beta \end{array}\right],$$ (1) where $`E`$ is the neutrino energy and $`\mathrm{\Delta }\gamma `$ is the quantity which measures the magnitude of the VEP; it is the difference of the gravitational couplings of the two neutrinos involved, normalized by their sum. There are many possible sources for $`\varphi `$, but it is generally believed that the Super Cluster contribution ($`\varphi \simeq 3\times 10^{5}`$) would be the dominant one. Therefore, it seems reasonable to ignore any variation of $`\varphi `$ over the whole solar system and take it as a constant. In this case Eq. (1) can be analytically solved to give the survival probability of a $`\nu _e`$ produced in the Sun after traveling the distance $`L`$ to the Earth: $$P(\nu _e\rightarrow \nu _e)=1\mathrm{sin}^22\theta _G\mathrm{sin}^2\frac{\pi L}{\lambda },$$ (2) where the oscillation wavelength $`\lambda `$ is given by $$\lambda =\left[\frac{\pi \text{ km}}{5.07}\right]\left[\frac{10^{15}}{|\varphi \mathrm{\Delta }\gamma |}\right]\left[\frac{\text{MeV}}{E}\right],$$ (3) which, in contrast to the wavelength for mass induced neutrino oscillations in vacuum, is inversely proportional to the neutrino energy. In this case the survival probability is a function of two unknown parameters that can be fitted, or constrained, by experimental data: $`\mathrm{\Delta }\gamma `$ and $`\mathrm{sin}2\theta _G`$. Since the value of the potential $`\varphi `$ in our solar system is somewhat uncertain, we will adopt the procedure used by other authors and work with the product $`\varphi \mathrm{\Delta }\gamma `$. We will perform a fit of the rates and of the SK recoil-electron spectrum, but will not take into account the day-night effect (or zenith angle dependence) in SK. This is justified by the fact that day-night variations cannot be induced by this mechanism and are therefore irrelevant in determining the allowed parameter region. We will comment on the possible seasonal variations at the end. We first examine the observed solar neutrino rates in the VEP framework. In order to do this we have calculated the theoretical predictions for the gallium, chlorine and Super-Kamiokande water Cherenkov solar neutrino experiments, as functions of the two VEP parameters, using the solar neutrino fluxes predicted by the Standard Solar Model of Bahcall and Pinsonneault (BP98 SSM) and taking into account the eccentricity of the Earth's orbit around the Sun. 
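For concreteness, the survival probability of eqs. (2)–(3) is straightforward to evaluate numerically. The sketch below is our illustration: the parameter magnitude |φΔγ| ~ 10<sup>-24</sup> is a representative "just-so" value, not a number quoted in the text.

```python
import numpy as np

AU_KM = 1.4959787e8                    # mean Sun-Earth distance [km]

def survival_prob(E_MeV, phi_dgamma, sin2_2theta, L_km=AU_KM):
    """P(nu_e -> nu_e) from eqs. (2)-(3); the wavelength scales as 1/E."""
    lam_km = (np.pi / 5.07) * (1e-15 / np.abs(phi_dgamma)) / E_MeV
    return 1.0 - sin2_2theta * np.sin(np.pi * L_km / lam_km) ** 2

E = np.logspace(-1, 1.2, 200)          # 0.1-15 MeV: pp up to 8B neutrinos
P = survival_prob(E, phi_dgamma=1e-24, sin2_2theta=1.0)
print(P.min(), P.max())
```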
We have then performed a $`\chi ^2`$ analysis to fit these parameters, and an extra normalization factor $`f_B`$ for the <sup>8</sup>B neutrino flux, to the most recent experimental results coming from Homestake, $`R_{\text{Cl}}=2.56\pm 0.21`$ SNU, GALLEX and SAGE combined, $`R_{\text{Ga}}=72.5\pm 5.5`$ SNU, and SK, $`R_{\text{SK}}=0.475\pm 0.015`$, normalized to the BP98 SSM. The definition of the $`\chi ^2`$ function to be minimized is the same as the one used in Ref. , which essentially follows the prescription given in Ref. , except that our theoretical estimates were computed by convolving the survival probability given in Eq. (2) with the absorption cross sections taken from Ref. and the neutrino-electron elastic scattering cross section with radiative corrections, and with the solar neutrino flux corresponding to each reaction, $`pp`$, $`pep`$, <sup>7</sup>Be, <sup>8</sup>B, <sup>13</sup>N and <sup>15</sup>O; other minor neutrino sources, such as <sup>17</sup>F or $`hep`$ neutrinos, are neglected. We will first discuss our results for active to active conversion. We present in Fig. 1 (a) the allowed region determined only by the rates with free $`f_B`$, and in Table I the best fitted parameters as well as the $`\chi _{\text{min}}^2`$ values for fixed and free $`f_B`$. We found for $`f_B=1`$ that $`\chi _{\text{min}}^2=1.49`$ for 3−2=1 degree of freedom, and for $`f_B=0.81`$ that $`\chi _{\text{min}}^2=0.32`$ for 3−3=0 degrees of freedom. We have also checked that the allowed region for fixed <sup>8</sup>B flux ($`f_B=1`$) is rather similar to the one presented here, and so we only give the values of the corresponding best fitted parameters for this case in Table I. Next we perform a spectral shape analysis, fitting the <sup>8</sup>B spectrum measured by SK using the following $`\chi ^2`$ definition: $$\chi ^2=\underset{i}{\sum }\left[\frac{S^{\text{obs}}(E_i)f_BS^{\text{theo}}(E_i)}{\sigma _i}\right]^2,$$ (4) where the sum is performed over all the 18 experimental points $`S^{\text{obs}}(E_i)`$, normalized by the BP98 SSM prediction, for the recoil-electron energies $`E_i`$; $`\sigma _i`$ is the total experimental error and $`S^{\text{theo}}`$ is our theoretical prediction, which was calculated using the BP98 SSM <sup>8</sup>B differential flux, the $`\nu e`$ scattering cross section, the survival probability as given by Eq. (2) taking into account the eccentricity as we did for the rates, the experimental energy resolution as in Ref. , and the detection efficiency as a step function with threshold $`E_{\text{th}}`$ = 5.5 MeV. After the $`\chi ^2`$ minimization with $`f_B=0.80`$ we have obtained $`\chi _{\text{min}}^2=15.8`$ for 18−3=15 degrees of freedom. The best fitted parameters, which can be found in Table I, permit us to compute the allowed region displayed in Fig. 1 (b). Finally we have performed a combined fit of the rates and the spectrum, obtaining the allowed region presented in Fig. 1 (c). Again the best fitted parameters can be read from Table I. We observe that the combined allowed region is essentially the same as the one obtained by the rates alone. In all cases presented in Figs. 1 (a)-(c) we have two isolated islands of 90% C.L. allowed regions. See Table I for the fitted values corresponding to the local minima in these islands. We note that only the upper corner of Fig. 
1 (c), for $`|\varphi \mathrm{\Delta }\gamma |>2\times 10^{23}`$ and maximal mixing in the $`\nu _e\rightarrow \nu _\mu `$ channel, can be excluded by CCFR, and moreover, there are no restrictions in the range of parameters we considered in the case of $`\nu _e\rightarrow \nu _\tau `$ or $`\nu _e\rightarrow \nu _s`$ oscillations. In Fig. 2 we show the expected recoil-electron spectrum in SK for various fitted parameters of the VEP solution to the SNP. We see that the data from the spectrum alone can be quite well described by the VEP oscillation mechanism (thick solid line), whereas the predictions for the best fitted parameters from the rates alone and from the combined fit give flatter curves (dashed and long-dashed lines). Nevertheless, parameters for a “test point” taken inside the 90% C.L. region of Fig. 1 (c) can give rise to some spectral distortion (thin solid line). We have performed the same analyses, with rates as well as spectrum, also for the $`\nu _e\rightarrow \nu _s`$ channel. Since the allowed regions as well as the fitted recoil-electron spectra obtained in this case are rather similar to the ones for active to active conversion, we do not present them here but only show the best fitted parameters and $`\chi _{\text{min}}^2`$ values in Table II. Although the spectrum alone gives a fit comparable to the active to active case, we see that the rates cannot be so well explained by this type of scenario, and consequently the combination gives a worse fit. In spite of that, this is still much better than the mass induced active to sterile vacuum oscillation solution to the SNP. To understand why it is possible to fit the solar neutrino data, we show in Fig. 3 (a) the survival probabilities for the best fitted parameters of the VEP induced oscillation. Due to the specific energy dependence of the probability assumed here, we can actually strongly suppress the <sup>7</sup>Be line and still keep the $`pp`$ neutrino flux high enough to be in agreement with the Ga data, and at the same time obtain a $``$ 50% reduction of the <sup>8</sup>B neutrino flux, which is in fact the suppression pattern of the solar neutrino fluxes required in order to get a good fit. Because of the contributions from the strong smearing in energy of the scattered electron and from the finite experimental energy resolution, the probability alone cannot give us precise insight into the spectral shape. From the probability shown in Fig. 3 (b) we can only qualitatively expect some distortion. Finally, let us discuss the seasonal variation of the solar neutrino signal. In contrast to the usual vacuum oscillation solution to the SNP, in this scenario no strong seasonal effect is expected in any of the present or future experiments, even the ones that will be sensitive to <sup>7</sup>Be neutrinos, such as Borexino and Hellaz. Contrary to the usual vacuum oscillation case, the oscillation lengths for the low energy $`pp`$ and <sup>7</sup>Be neutrinos are very large, comparable to or only a few times smaller than the Sun-Earth distance, so that the effect of the eccentricity on the oscillation probability is small. On the other hand, for the higher energy neutrinos relevant for SK, the effect of the eccentricity on the probability could be large, but it is averaged out after integration over a certain neutrino energy range. These observations are confirmed in Fig. 4, where we present the expected seasonal variations for the best fitted parameters of the VEP induced oscillation scenario. 
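A schematic of the rate-χ² construction of eq. (4)'s companion rates fit, reusing `survival_prob` from the previous sketch. Everything model-specific here is a placeholder: the Cl and Ga SSM normalizations (7.7 and 129 SNU, quoted from memory), the source weights and effective energies are invented for illustration, and the neutral-current contribution of converted active neutrinos to SK, detection thresholds and the full spectral integrations are all neglected.

```python
import numpy as np
from scipy.optimize import minimize

# Measured rates normalized to the SSM (SK is already a ratio).
data = np.array([2.56 / 7.7, 72.5 / 129.0, 0.475])
err = np.array([0.21 / 7.7, 5.5 / 129.0, 0.015])

# Placeholder fractional contributions of each source (pp, 7Be, 8B,
# minor sources lumped in) to each experiment: Cl, Ga, SK.
W = np.array([[0.02, 0.22, 0.76],
              [0.55, 0.27, 0.18],
              [0.00, 0.00, 1.00]])
E_eff = np.array([0.3, 0.862, 8.0])   # effective source energies [MeV]

def chi2(p):
    phi_dg, s22, f_B = p
    Pee = survival_prob(E_eff, phi_dg, s22)
    flux = np.array([1.0, 1.0, f_B]) * Pee
    return np.sum(((data - W @ flux) / err) ** 2)

res = minimize(chi2, x0=[1e-24, 1.0, 0.8], method='Nelder-Mead')
print(res.x, res.fun)
```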
In conclusion, we have found a new solution to the SNP which is comparable, in the quality of its fit, to the other suggested ones. We thank Plamen Krastev, Eligio Lisi, George Matsas, Hisakazu Minakata, Pedro de Holanda and GEFAN for valuable discussions and useful comments. We also thank Michael Smy for useful correspondence. H.N. thanks Wick Haxton and Baha Balantekin and the Institute for Nuclear Theory at the University of Washington for their hospitality, and the Department of Energy for partial support during the final stage of this work. This work was supported by the Brazilian funding agencies FAPESP and CNPq.
no-problem/9909/astro-ph9909197.html
ar5iv
text
# An Ongoing Parallax Microlensing Event OGLE-1999-CAR-1 Toward Carina ## 1 Introduction Gravitational microlensing was originally proposed as a method of detecting compact dark matter objects in the Galactic halo (Paczyński 1986). However, it has also turned out to be a powerful method to study Galactic structure, the mass functions of stars, and extrasolar planetary systems (for a review, see Paczyński 1996). Earlier microlensing targets include the Galactic bulge, LMC, SMC and M31. Recently, the EROS and OGLE collaborations have started to monitor spiral arms. The EROS collaboration (Derue et al. 1999) has announced the discovery of three microlensing events toward two spiral arms. In this paper, we study the first spiral arm microlensing event discovered by the OGLE II experiment (Udalski, Kubiak & Szymański 1997). This ongoing event, OGLE-1999-CAR-1, was discovered in real-time toward the Carina arm by the OGLE early-warning system (Udalski et al. 1994). We show that this is a unique event that exhibits strong parallax effects. Such events were predicted by Refsdal (1966) and Gould (1992). The first case was reported by the MACHO collaboration toward the Galactic bulge (Alcock et al. 1995). OGLE-1999-CAR-1 is the first parallax event discovered toward any spiral arm. The outline of the paper is as follows. In Sect. 2, we briefly describe the observational data that we use. In Sect. 3, we present two different fits for the light curve, with and without considering the effects of parallax and blending. In Sect. 4, we calculate the expected optical depth, event rate and duration distribution toward Carina. Finally, in Sect. 5, we summarize and discuss the implications of our results. ## 2 Observational Data The observational data were collected by the OGLE collaboration in their second phase of microlensing search (Udalski, Kubiak & Szymański 1997). The search was done with the 1.3-m Warsaw telescope at the Las Campanas Observatory, Chile, which is operated by the Carnegie Institution of Washington. The targets of the OGLE II experiment include the LMC, SMC, Galactic bulge and spiral arms. All the collected data were reduced and calibrated to the standard system. For more details on the instrumentation setup and data reduction, see Udalski et al. (1997, 1998). The event, OGLE-1999-CAR-1, was detected and announced in real-time on Feb. 19, 1999 by the OGLE collaboration (see http://www.astrouw.edu.pl/~ftp/ogle/ogle2/ews/ews.html). Its equatorial coordinates are $`\alpha `$=11:07:26.72, $`\delta `$=-61:22:30.6 (J2000), which correspond to Galactic coordinates $`l=290^{}.8,b=-0^{}.98`$. The ecliptic coordinates of the lensed star are $`\lambda =331^{}.9`$, $`\beta =-58^{}.1`$ (e.g., Lang 1980). Finding chart and I-band data of OGLE-1999-CAR-1 are available at the above web site, and the V-band data were made available to us by Dr. Andrzej Udalski. In total, there are 416 I-band and 85 V-band data points. The baseline magnitudes of the object, in the standard Johnson V and Cousins I bands, are $`V=19.66,I=18.01`$ (we added offsets of $`-0.003`$ to the V-band and $`+0.045`$ to the I-band to obtain the standard magnitudes; Udalski 1999, private communication). ## 3 Models In this section, we fit the OGLE V and I-band light curves simultaneously with theoretical models. We start with the simple standard model, and then consider a fit that takes into account both parallax and blending. 
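As a quick consistency check of the coordinates quoted in Sect. 2 (a sketch using astropy, which is of course not part of the original analysis; the tiny J2000/ICRS difference is ignored):

```python
from astropy.coordinates import SkyCoord

c = SkyCoord('11h07m26.72s', '-61d22m30.6s', frame='icrs')
print(c.galactic)                  # expect l ~ 290.8 deg, b ~ -1.0 deg
print(c.barycentrictrueecliptic)   # expect lambda ~ 331.9 deg, beta ~ -58.1 deg
```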
Most microlensing light curves are well described by the standard form (e.g., Paczyński 1986): $$A(t)=\frac{u^2+2}{u\sqrt{u^2+4}},\qquad u(t)\equiv \sqrt{u_0^2+w^2(t)},$$ (1) where $`u_0`$ is the impact parameter (in units of the Einstein radius) and $$w(t)=\frac{t-t_0}{t_E},\qquad t_E\equiv R_E/v_t$$ (2) with $`t_0`$ being the time of the closest approach (maximum magnification), $`R_E`$ the Einstein radius, $`v_t`$ the lens transverse velocity relative to the observer-source line of sight, and $`t_E`$ the Einstein radius crossing time. The Einstein radius is defined as $$R_E=\sqrt{\frac{4GMD_\mathrm{s}x(1-x)}{c^2}},$$ (3) where $`M`$ is the lens mass, $`D_\mathrm{s}`$ the distance to the source and $`x=D_\mathrm{d}/D_\mathrm{s}`$ is the ratio of the distance to the lens and the distance to the source. To fit both the I-band and V-band data with the standard model, we need five parameters, namely, $$u_0,t_0,t_E,m_{\mathrm{I},0},m_{\mathrm{V},0}.$$ (4) Best-fit parameters are found by minimizing the usual $`\chi ^2`$ using the MINUIT program in the CERN library and are tabulated in Table 1. The resulting $`\chi ^2`$ is 893.9 for 496 degrees of freedom. For convenience, we divide the data into one ‘unlensed’ part and one ‘lensed’ part; the former has $`t=\mathrm{J}.\mathrm{D}.-2450000<1150\mathrm{d}`$ and the latter has $`t>1150\mathrm{d}`$. For the standard fit, the lensed part has $`\chi ^2=407.9`$ for 161 data points, and the unlensed part has $`\chi ^2=486.0`$ for 340 data points; the somewhat high $`\chi ^2`$ for this part may be due to contamination by nearby bright stars (as can be seen in the finding chart), particularly under poor seeing conditions. The $`\chi ^2`$ per degree of freedom for the ‘lensed’ part is about 2.5, indicating that the fit is not satisfactory. This can also be seen from Fig. 1, where we show the model light curve together with the data points. As can be seen, the observed values are consistently brighter than the predicted ones for $`t>1325\mathrm{d}`$ in the I-band. Further, the prediction is fainter by about 0.05 magnitude at the peak in the V-band. We show next that both inconsistencies can be removed by incorporating the parallax effect and blending. To take into account the Earth's motion around the Sun, we have to modify the expression for $`u(t)`$ in eq. (1). This modification, to first order in the Earth’s orbital eccentricity ($`ϵ=0.017`$), is given by Alcock et al. (1995) and Dominik (1998): $$u^2(t)=u_0^2+w^2(t)+\stackrel{~}{r}_{\perp }^2\mathrm{sin}^2\psi +2\stackrel{~}{r}_{\perp }\mathrm{sin}\psi \left[w(t)\mathrm{sin}\theta +u_0\mathrm{cos}\theta \right]+\stackrel{~}{r}_{\perp }^2\mathrm{sin}^2\beta \mathrm{cos}^2\psi +2\stackrel{~}{r}_{\perp }\mathrm{sin}\beta \mathrm{cos}\psi \left[w(t)\mathrm{cos}\theta -u_0\mathrm{sin}\theta \right],$$ (5) where $`\theta `$ is the angle between $`v_t`$ and the line formed by the north ecliptic axis projected onto the lens plane, and $`u_0`$ is now more appropriately the minimum distance between the lens and the Sun-source line.
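As an aside, the standard model of Eqs. (1)–(3) and the $`\chi ^2`$ minimization described above are simple enough to sketch in a few lines. The following illustration (Python with NumPy, our choice of language for code examples; the parameter values are placeholders, not the fitted values of Table 1) evaluates the magnitude light curve and the $`\chi ^2`$ statistic:

```python
import numpy as np

def magnification(t, u0, t0, tE):
    """Standard point-lens magnification, Eqs. (1)-(2)."""
    w = (t - t0) / tE
    u = np.sqrt(u0**2 + w**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def model_mag(t, u0, t0, tE, m0):
    """Magnitude light curve for baseline magnitude m0 (no blending)."""
    return m0 - 2.5 * np.log10(magnification(t, u0, t0, tE))

def chi2(params, t, mag, err):
    """The chi^2 statistic that is minimized (by MINUIT in the paper)."""
    u0, t0, tE, m0 = params
    return np.sum(((mag - model_mag(t, u0, t0, tE, m0)) / err) ** 2)

# Illustrative values only, in the J.D. - 2450000 time convention of the paper:
t = np.linspace(1150.0, 1450.0, 300)
print(model_mag(t, 0.3, 1300.0, 100.0, 18.0).min())  # brightest point at t0
```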
The expressions for $`\stackrel{~}{r}_{\perp }`$ and $`\psi `$ are given by $$\stackrel{~}{r}_{\perp }=\frac{1\mathrm{AU}}{\stackrel{~}{v}t_E}\{1-ϵ\mathrm{cos}[\mathrm{\Omega }_0(t-t_p)]\},$$ (6) and $$\psi =\varphi +\mathrm{\Omega }_0(t-t_p)+2ϵ\mathrm{sin}[\mathrm{\Omega }_0(t-t_p)],$$ (7) where $`t_p`$ is the time of perihelion, $`\stackrel{~}{v}=v_t/(1-x)`$ is the transverse speed of the lens projected to the solar position, $`\mathrm{\Omega }_0=2\pi /\mathrm{yr}`$, and $`\varphi `$ is the longitude measured in the ecliptic plane from the perihelion toward the Earth’s motion; this is given in the appendix of Dominik (1998), $$\varphi =\lambda +\pi +\varphi _\gamma ,$$ (8) where $`\varphi _\gamma `$ is the longitude of the vernal equinox measured from the perihelion. $`\varphi _\gamma =1.33`$ (rad), and the Julian day of perihelion is $`t_p=2451181.57`$; the reader is referred to The Astronomical Almanac (1999) for the relevant data. Note that the inclusion of the parallax effect introduces two more parameters, $`\stackrel{~}{v}`$ and $`\theta `$. The two-color light curves show that the lensed object became bluer by $`0.05`$ mag at the peak of magnification; such chromaticity is easily produced by blending. The additional source of light may be from the lens itself and/or it can come from another star which lies in the seeing disk of the lensed star by chance alignment. When blending is present, the observed magnification is given by $$A_i=f_i+(1-f_i)A(t),\qquad i=I,V.$$ (9) To model the blending in two colors, we need two further parameters – the fractions of light contributed by the unlensed component in I and V, $`f_I`$ and $`f_V`$, at the baseline. Therefore, a fit that takes into account both parallax and blending effects requires 9 parameters: $`u_0,t_0,t_E,m_{\mathrm{I},0}`$, $`m_{\mathrm{V},0}`$, $`\stackrel{~}{v},\theta ,f_I`$, and $`f_V`$. The best-fit parameters for this model are given in Table 1. Compared with the standard fit, the $`\chi ^2`$ is reduced from 893.9 to 640.8. The reduction in the lensed part is dramatic: the $`\chi ^2`$ drops from 407.9 to 177.7 for 161 data points. The $`\chi ^2`$ for the unlensed part is 463.2 (as compared to 486.0 for the standard fit) for 340 data points. The $`\chi ^2`$ per degree of freedom is satisfactory. The predicted light curve (solid line in Fig. 1) matches the observed data in both the I-band and the V-band. From Table 1, the blending fractions in the V and I bands are not well constrained, $`f_I=0.36\pm 0.2`$ and $`f_V=0.29\pm 0.2`$. The differential blending, however, is reasonably constrained, $`f_V/f_I=0.81_{-0.27}^{+0.1}`$, due to the observed differential magnification (0.05 mag) between the I-band and V-band. The projected lens velocity is well constrained while its direction has somewhat larger errors. For completeness, we mention that the best model that accounts for blending but not parallax has $`\chi ^2=791.8`$ while the best model that accounts for parallax but not blending has $`\chi ^2=682.2`$. Hence the parallax effect reduces the $`\chi ^2`$ much more effectively than blending. This can be easily understood since the observed light curve is asymmetric, which cannot be produced by blending. ## 4 Optical Depth and Event Rate Toward Carina To put OGLE-1999-CAR-1 into context, in this section we estimate the optical depth, event rate and event duration distribution toward Carina. These can be compared with future observations when more events become available.
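Before turning to the numbers, here is a hedged numerical sketch (Python/NumPy) of the standard line-of-sight optical-depth integral $`\tau =(4\pi G/c^2)\int _0^{D_\mathrm{s}}\rho (D)D(1-D/D_\mathrm{s})dD`$ for the double-exponential disk adopted below; the geometry and the integration scheme are our own illustrative choices, while all parameter values are those quoted in the text:

```python
import numpy as np

# Parameter values quoted in the text
SIGMA, Z0, RD, R0 = 50.0, 325.0, 3500.0, 8500.0  # M_sun/pc^2, pc, pc, pc
DS = 6800.0                                      # adopted source distance, pc
L, B = np.radians(290.8), np.radians(-0.98)      # Galactic coordinates
G_OVER_C2 = 4.79e-14                             # G/c^2 in pc per M_sun

def rho(D):
    """Double-exponential disk density (Eq. 10) at distance D along the line of sight."""
    x = D * np.cos(B) * np.cos(L)                # component toward the Galactic center
    y = D * np.cos(B) * np.sin(L)
    z = D * np.sin(B)
    r = np.hypot(R0 - x, y)                      # galactocentric radius in the plane
    return SIGMA / (2.0 * Z0) * np.exp(-(r - R0) / RD) * np.exp(-np.abs(z) / Z0)

D = np.linspace(0.0, DS, 4000)
integrand = rho(D) * D * (1.0 - D / DS)
tau = 4.0 * np.pi * G_OVER_C2 * np.sum(integrand) * (D[1] - D[0])
print(f"tau ~ {tau:.1e}")                        # a few times 10^-7, cf. Eq. (11)
```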
One major uncertainty for microlensing toward spiral arms such as Carina is that we do not know the distance to the sources. Several molecular clouds in the same direction have distances between $`6.8\mathrm{kpc}`$ and $`8.7\mathrm{kpc}`$ (see Table 1 and Fig. 4 in Grabelsky et al. 1988). We adopt $`D_\mathrm{s}=6.8\mathrm{kpc}`$ as the distance to the sources. This assumption is also consistent with the lensed star being a main-sequence star as required by its position in the color-magnitude diagram (Udalski 1999, private communication); we return to this issue in the discussion. Since the direction toward the lensed star is nearly in the Galactic plane ($`b=-0^{\circ }.98`$) and far away from the Galactic center, we assume that the lenses are entirely contributed by disk stars, and their density profile follows the standard double exponential disk distribution $$\rho (r,z)=\frac{\mathrm{\Sigma }}{2z_0}\mathrm{exp}\left(-\frac{r-r_0}{r_d}\right)\mathrm{exp}\left(-\frac{|z|}{z_0}\right),$$ (10) where $`\mathrm{\Sigma }=50M_{\odot }/\mathrm{pc}^2`$, $`r`$ is the lens distance to the Galactic center, $`r_0=8.5\mathrm{kpc}`$, the disk scale-length is $`r_d=3.5\mathrm{kpc}`$ and the scale-height is $`z_0=0.325\mathrm{kpc}`$. The optical depth can then be obtained (e.g., eq. 9 in Paczyński 1986): $$\tau =3.4\times 10^{-7},$$ (11) independent of the lens mass function and kinematics. To estimate the event rate and event duration distribution, we have to make assumptions about the lens kinematics and mass function. The motion of lenses can be divided into an overall Galactic rotation of $`220\mathrm{km}\mathrm{s}^{-1}`$ and a random motion. We assume that the random component follows Gaussian distributions with velocity dispersions of $`\sigma _R=40\mathrm{km}\mathrm{s}^{-1},\sigma _\theta =30\mathrm{km}\mathrm{s}^{-1},\sigma _z=20\mathrm{km}\mathrm{s}^{-1}`$ (cf. Derue et al. 1999). The motion of the Sun relative to the Local Standard of Rest is taken as $`v_{R,\odot }=9\mathrm{km}\mathrm{s}^{-1},v_{\theta ,\odot }=11\mathrm{km}\mathrm{s}^{-1},v_{z,\odot }=16\mathrm{km}\mathrm{s}^{-1}`$. We study three mass functions: 1. $`n(M)dM\propto \delta (M-1M_{\odot })dM`$, 2. $`n(M)dM\propto M^{-2.35}dM,0.08M_{\odot }<M<1M_{\odot }`$, 3. $`n(M)dM\propto M^{-0.54}dM,0.1M_{\odot }<M<0.6M_{\odot }`$. Note that the second is a Salpeter mass function while the third describes the local disk mass function determined using HST star counts (Gould et al. 1997). The predicted event duration distributions for these three mass functions are shown in the left panel of Fig. 2. It is clear that there is a tail toward long durations for all three distributions. The probabilities of having $`t_E`$ longer than $`100\mathrm{d}`$ are approximately 50%, 10% and 20%, respectively; so the observed long duration is not statistically rare. The total event rates per million stars per year are found to be $$\mathrm{\Gamma }=0.64,1.4,1.1,$$ (12) for the three mass functions, respectively. The predicted duration distribution and event rate are sensitive to the assumed velocity dispersions. For example, if we adopt $`\sigma _R=\sigma _\theta =\sigma _z=20\mathrm{km}\mathrm{s}^{-1}`$ (as in Kiraga & Paczyński 1994), then the events will be on average 30% longer and the event rate decreases by about 30%. ## 5 Discussion We have shown that the microlensing event, OGLE-1999-CAR-1, has a light curve shape that is significantly modified by the Earth's motion around the Sun. This event is still ongoing at the time of writing (Aug.
12, 1999); the later evolution of the light curve will test our predictions and reduce the uncertainties in the parameters. For comparison, the three microlensing events seen by the EROS collaboration (Derue et al. 1999) are toward different spiral arms which are somewhat closer to the Galactic center direction. The three events have $`t_E`$ between 70$`\mathrm{d}`$ and 100$`\mathrm{d}`$, and none of the events shows parallax effects. We have assumed a source distance of $`6.8\mathrm{kpc}`$ in the previous section. We show now that this is a reasonable assumption. From $`V=19.66`$ and $`I=18.01`$, and taking into account the blending, we find that the lensed star has $`M_V=5.9,M_I=4.3`$. Assuming an extinction of $`A_V=1.5`$ mag (see Fig. 4 in Wramdemark 1980), and $`A_I/A_V=0.482`$, we obtain the intrinsic magnitude and color $`M_V=4.4,(V-I)_0=0.75`$. The star is consistent with being a main sequence star (with mass $`M\approx 1.05M_{\odot }`$), as can be seen from the color-magnitude diagram (Udalski 1999, private communication) in the Carina region. We can combine the expressions for $`\stackrel{~}{v}`$ and $`t_E`$ to obtain the lens mass as a function of its distance $$M=\frac{1-x}{x}\frac{\stackrel{~}{v}^2t_E^2c^2}{4GD_\mathrm{s}}\approx 1.8M_{\odot }\frac{1-x}{x}.$$ (13) Using eq. (6) in Alcock et al. (1995), we have calculated the likelihood of obtaining the observed transverse velocity and direction as a function of the lens distance. The result is shown in the right panel of Fig. 2 for the Gould et al. (1997) mass function. The lens distance is approximately $`x=0.5\pm 0.2`$; we caution that this likelihood function is sensitive to the assumed kinematics. For example, if we take $`\sigma _R=\sigma _\theta =\sigma _z=20\mathrm{km}\mathrm{s}^{-1}`$, then the best-fit lens distance changes to $`x=0.7`$. It is possible that the lens is a (massive) dark object such as a white dwarf, a neutron star or a black hole; in this case the blended light is contributed by another unrelated star. Another possibility is that the blending is contributed by the lens itself. If the lens is a main sequence star and $`x\approx 0.7`$, which implies $`M\approx 0.8M_{\odot }`$, then the lens will contribute about the right amount of light to explain the light curves (including the color change in the V and I bands). Further high-resolution imaging of the lensed object should allow us to differentiate between these two possibilities. ###### Acknowledgements. We greatly appreciate the OGLE collaboration for permission to use their data for this publication. I also thank Matthias Bartelmann, Martin Dominik, Bohdan Paczyński, Andrzej Udalski, Hans Witt and particularly the referee Eric Aubourg for valuable discussions and comments on the paper.
no-problem/9909/cond-mat9909210.html
ar5iv
text
# Energy-level statistics at the metal-insulator transition in anisotropic systems ## I Introduction It is well known that the isotropic Anderson model of localization exhibits a metal-insulator transition (MIT) for spatial dimensions larger than two. A critical amount of disorder $`W_c`$ is necessary to localize all the eigenstates. The asymptotic envelopes of the localized states for $`W>W_c`$ decay exponentially in space due to the destructive interference of the disorder-backscattered wave functions with themselves. An electron confined in such a state cannot contribute to charge transfer at temperature $`T=0`$. For $`W<W_c`$ there exist states that are extended through the whole sample, allowing charge transfer through the system at $`T=0`$. In spatial dimensions up to two an arbitrarily small amount of disorder localizes all states and there is no MIT at finite $`W`$ for non-interacting systems. In the present study we consider the problem of Anderson localization in three-dimensional (3D) disordered systems with anisotropic hopping. One might expect that increasing the hopping anisotropy, namely reducing the hopping probability in one or two directions, is effectively equivalent to changing the dimensionality of the system continuously from 3D to 2D or 1D, similarly to the case of (bi)-fractal lattices. However, this is not the case: the topology of the lattice is not changed as long as the hopping is non-zero, and the dimensionality is still three. Only if the coupling is reduced to zero does the dimensionality jump from 3D to 2D or 1D. Previous studies using the transfer-matrix method (TMM) and multifractal analysis (MFA) showed that an MIT exists even for very strong anisotropy. The values of the critical disorder $`W_c`$ were found to decrease by a power law in the anisotropy, reaching zero only for the limiting 1D or 2D cases. In the present work, we focus our attention on the critical properties of this MIT. As an independent check of previous results, we employ energy level statistics (ELS) together with the finite-size scaling (FSS) approach. ELS has been previously applied with much success at the MIT of the isotropic model, and it was shown that a size-independent statistics exists at the MIT. This critical statistics is intermediate between the two limiting cases of Poisson statistics for the localized states and the statistics of the Gaussian orthogonal ensemble (GOE), which describes the spectrum of extended states. We check whether the critical ELS is influenced by the anisotropy. A major part of our study is dedicated to the determination of the critical exponent $`\nu `$ of this second-order phase transition. In general it is assumed that $`\nu `$ depends only on general symmetries, described by the universality class, but not on microscopic details of the sample. Thus, anisotropic hopping might shift $`W_c`$ but should not change $`\nu `$. In order to verify this assumption it is necessary to determine $`\nu `$ with high accuracy for various anisotropies. This is a computationally demanding task, and we emphasize that even in numerical studies of the isotropic Anderson model the correct value of $`\nu `$ is still not entirely clear. Recent highly accurate TMM studies report $`\nu =1.54\pm 0.08`$ and $`\nu =1.58\pm 0.06`$, but from ELS $`\nu =1.35\pm 0.15`$ and $`\nu =1.4\pm 0.15`$ were found. From TMM, $`\nu =1.3\pm 0.1`$ was determined for the isotropic system, and $`\nu =1.3\pm 0.1`$ and $`\nu =1.3\pm 0.3`$ for two different data sets for the anisotropic system.
As we will show, an accurate estimation of $`\nu `$ requires the computation of large system sizes before FSS can be reliably employed. Using high precision data and taking into account non-linear corrections to scaling we then find $`\nu =1.45\pm 0.2`$. The paper is organized as follows. In Sec. II we introduce our notation. We recall the use of ELS for the characterization of the MIT in Sec. III. Using the fact that the statistical properties at the transition do not depend on the system size, we corroborate the existence of the MIT and determine the anisotropy dependence of the critical disorder and of the critical statistics. In particular, we show that the critical ELS changes continuously with increasing anisotropy from its functional form at the isotropic MIT towards Poisson statistics. In Sec. IV we describe the concept of FSS and the numerical methods to determine the critical properties such as $`\nu `$. Then we demonstrate that the scaling concept applies to the integrated $`\mathrm{\Delta }_3`$ statistics from ELS and we estimate $`\nu `$ from highly accurate ELS data obtained for very large system sizes. In Sec. V we also check whether our results are compatible with highly accurate TMM data. Finally, we summarize our results in Sec. VI. ## II The anisotropic Anderson model of localization The Anderson model is a standard model for the description of disordered systems with Hamiltonian given as $$H=\sum _iϵ_i|i\rangle \langle i|+\sum _{ij}t_{ij}|i\rangle \langle j|.$$ (1) The states $`|i\rangle `$ are orthonormal and correspond to particles located at sites $`i=(x,y,z)`$ of a regular cubic lattice with size $`N^3`$. We use periodic boundary conditions $`(x+N,y,z)=(x,y+N,z)=(x,y,z+N)=(x,y,z)`$. The potential site energies $`ϵ_i`$ are uniformly distributed in the interval $`[-W/2,W/2]`$ with $`W`$ defining the disorder strength, i.e., the amplitude of the fluctuations of the potential energy. The transfer integrals $`t_{ij}`$ are restricted to nearest neighbors and depend only on the three spatial directions, so $`t_{ij}`$ can either be $`t_x`$, $`t_y`$ or $`t_z`$. We study two possibilities of anisotropic transport: (i) weakly coupled planes with $$t_x=t_y=1,\qquad t_z=1-\gamma $$ (2) and (ii) weakly coupled chains with $$t_x=t_y=1-\gamma ,\qquad t_z=1.$$ (3) This defines the strength of the hopping anisotropy $`\gamma \in [0,1]`$. For $`\gamma =0`$ we recover the isotropic case, $`\gamma =1`$ corresponds to $`N`$ independent planes or $`N^2`$ independent chains. ## III Energy level statistics ### A ELS and MIT The statistical properties of the energy spectra reflect the character of the eigenstates and have been proven to be a powerful tool for characterizing the MIT. On the insulating side of the MIT, one finds that localized states that are close in energy are usually well separated in space, whereas states that are localized in neighboring regions in space have well separated eigenvalues. Consequently, the eigenvalues on the insulating side are uncorrelated, there is no level repulsion and the probability of eigenvalues to be close together is high. This is called level clustering and is described by the Poisson statistics. On the other hand, extended states occupy the same regions in space and their eigenvalues become correlated. This results in level repulsion such that the spectral properties are given by the GOE statistics.
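To make the model concrete, here is a minimal sketch (Python with NumPy/SciPy) of how the anisotropic Anderson Hamiltonian of Eqs. (1)–(3) can be assembled as a sparse matrix with periodic boundary conditions; this is an illustration, not the production code used for the calculations below:

```python
import numpy as np
import scipy.sparse as sp

def anderson_hamiltonian(N, W, gamma, coupled="planes", seed=0):
    """Anisotropic Anderson Hamiltonian on an N^3 cubic lattice, Eq. (1),
    with periodic boundary conditions and box-distributed site energies."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, size=N**3)      # diagonal disorder
    if coupled == "planes":                           # Eq. (2)
        t = {"x": 1.0, "y": 1.0, "z": 1.0 - gamma}
    else:                                             # Eq. (3), coupled chains
        t = {"x": 1.0 - gamma, "y": 1.0 - gamma, "z": 1.0}

    idx = lambda x, y, z: (x % N) * N * N + (y % N) * N + (z % N)
    rows, cols, vals = [], [], []
    for x in range(N):
        for y in range(N):
            for z in range(N):
                i = idx(x, y, z)
                # one bond per direction per site covers the whole lattice
                for (dx, dy, dz), hop in [((1, 0, 0), t["x"]),
                                          ((0, 1, 0), t["y"]),
                                          ((0, 0, 1), t["z"])]:
                    j = idx(x + dx, y + dy, z + dz)
                    rows += [i, j]; cols += [j, i]; vals += [hop, hop]
    H = sp.coo_matrix((vals, (rows, cols)), shape=(N**3, N**3)).tocsr()
    return H + sp.diags(eps)

H = anderson_hamiltonian(N=13, W=8.6, gamma=0.9)
print(H.shape)  # (2197, 2197)
```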
In an infinitely large disordered system, the MIT corresponds to a sharp transition from GOE statistics on the metallic side to Poisson statistics on the insulating side, via some intermediate critical statistics only exactly at the critical point. In a finite system, this abrupt change is smeared out, because the divergence of the characteristic lengths — such as the localization length — of the wave functions at the phase transition is cut off at the system size. If for a given $`W`$ the localization length in the infinite system is much larger than the system size under consideration, the states appear to be extended in the finite system. Their eigenvalues become correlated and the ELS is changed from Poisson towards GOE statistics. To obtain a reliable characterization of the MIT one should therefore investigate the system-size dependence of the spectral properties: with increasing system size there is a trend towards the limiting cases of GOE and Poisson statistics for the extended and localized regions, respectively. Directly at the critical disorder there are no characteristic length scales, the wave functions are scale-invariant multifractal entities, and the statistical properties of the spectrum are independent of the system size. ### B The numerical approach In ELS a system is characterized by the local fluctuations of the energy spectrum around its average density of states (DOS) $`\overline{\rho }(E)`$. Usually, $`\overline{\rho }(E)`$ is not constant for a given sample and has different width or even different shape for different samples. We therefore apply a so-called unfolding procedure to map the set of eigenvalues $`\{E_i\}`$ to a new set $`\{\epsilon _i\}`$ with constant average density equal to unity, as required for the application of random matrix theory. We then characterize the unfolded spectrum $`\{\epsilon _i\}`$ by the distribution $`P(s)`$, which measures the level repulsion in terms of the nearest-neighbor level-spacing $`s`$, by the cumulative level-spacing distribution $`I(s)=\int _s^{\infty }P(s^{})ds^{}`$, and by the $`\mathrm{\Delta }_3`$ statistics, which measures the deviation from a sequence of $`L`$ uniformly spaced levels, $$\mathrm{\Delta }_3(L)=\frac{1}{L}\left\langle \underset{A,B}{\mathrm{min}}\int _\epsilon ^{\epsilon +L}[D(\epsilon ^{})-A\epsilon ^{}-B]^2d\epsilon ^{}\right\rangle _\epsilon .$$ (4) Here, $`D(\epsilon )`$ is the integrated DOS and $`\langle \mathrm{}\rangle _\epsilon `$ denotes averaging over the spectrum. For the eigenvalue computation the Lanczos algorithm in the Cullum-Willoughby implementation is applied, which is very effective for our sparse matrices. We use system sizes up to $`N=50`$, for which a 400 MHz Pentium II machine needs about five days for the diagonalization of a single system. The character of the eigenstates has been shown not to change in a large energy interval around $`E=0`$. For the computation of the spectral properties we therefore use an interval centered at $`E=0`$ containing 50% of the eigenvalues. Furthermore we average over a number of configurations of the potential site energies such that at least $`10^5`$, but typically $`2\times 10^5`$ to $`4\times 10^5`$, eigenvalues contribute to $`P(s)`$, $`I(s)`$ or $`\mathrm{\Delta }_3(L)`$ for every $`N`$, $`W`$, and $`\gamma `$. Altogether we investigated about 750 such parameter combinations. ### C Dependence of $`W_c`$ on anisotropy As expected, we find a crossover from GOE statistics to Poisson statistics with increasing $`W`$ for all values of $`\gamma `$ considered, for both coupled planes and coupled chains.
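The unfolding and spacing analysis of Sec. III B can be sketched as follows (Python/NumPy; the polynomial unfolding and the bin settings are illustrative choices, not the exact implementation used here). A production analysis would apply the $`\mathrm{\Delta }_3`$ minimization of Eq. (4) to the same unfolded spectrum:

```python
import numpy as np

def unfold(E, fit_order=9):
    """Map eigenvalues E to a spectrum with unit mean level density by
    fitting a smooth polynomial to the integrated density of states."""
    E = np.sort(E)
    n = np.arange(1, len(E) + 1)
    coef = np.polyfit(E, n, fit_order)   # smooth part of the counting function
    return np.polyval(coef, E)

def spacing_distribution(E, bins=50, smax=4.0):
    """Histogram estimate of P(s) from the unfolded spectrum."""
    s = np.diff(unfold(E))
    hist, edges = np.histogram(s, bins=bins, range=(0, smax), density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist

# Quick check with an uncorrelated (Poisson-like) spectrum: P(s) ~ exp(-s)
E = np.cumsum(np.random.default_rng(1).exponential(size=20000))
s_mid, P = spacing_distribution(E)
print(P[0])   # close to 1 for Poisson statistics
```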
As an example we show $`I(s)`$ for weakly coupled chains in Fig. 1. This crossover is a first hint of the existence of an MIT. In order to check this further we investigate the system-size dependence of the $`\mathrm{\Delta }_3`$ statistics. In Fig. 2 we show $`\mathrm{\Delta }_3(L)`$ for weakly coupled planes for various system sizes. There is a clear trend towards GOE and Poisson statistics for $`W=6`$ and $`W=12`$, respectively. But there is hardly any system-size dependence visible for $`W=8.625`$. As described above, this indicates an MIT with a critical disorder in the vicinity of $`W=8.625`$. For a more accurate determination of $`W_c`$, we consider the integral $`\alpha _N(W)=\int _0^{30}\mathrm{\Delta }_3(L,W,N)dL`$ as a function of $`W`$ for several system sizes $`N`$. As can be seen from Fig. 2, the value of $`\alpha _N`$ monotonically increases as the ELS changes from GOE to Poisson statistics. In the localized region $`\alpha _N`$ will therefore increase with $`N`$, whereas it will decrease with $`N`$ for extended states. One can then determine $`W_c`$ from plots of $`\alpha _N(W)`$ for different $`N`$ as shown, e.g., in Fig. 3 for $`\gamma =0.9`$. All curves cross at one point, at which the size effects change sign. This indicates the transition, which occurs at $`W_c=8.6\pm 0.2`$ in this case. Our results for other values of $`\gamma `$ are compiled in Fig. 4. We find that with increasing anisotropy the critical disorder decreases according to a power law $`W_c=16.5(1-\gamma )^\beta `$ with $`\beta \approx 0.25`$ for coupled planes and $`\beta \approx 0.6`$ for coupled chains. As shown in Fig. 4 this is an appropriate description for our ELS data and agrees well with the results of our previous MFA. For coupled planes this result is also consistent with a previous TMM study and a perturbative analysis employing the coherent potential approximation (CPA), but in the case of coupled chains $`\beta \approx 0.6`$ appears more appropriate than the result $`\beta =0.5`$ of Ref. . ### D Dependence of $`P_c(s)`$ on anisotropy Let us now turn our attention to the question whether the form of the size-independent statistics at the MIT, $`P_c(s)`$, depends on the anisotropy. It seems to be settled — at least for the isotropic case — that the small-$`s`$ behavior of $`P_c(s)`$ is equal to that of the metallic phase, with $`P_c(s)\propto s`$ as usual for the orthogonal ensemble, $`\propto s^2`$ for the unitary, and $`\propto s^4`$ for the symplectic ensembles. Furthermore, it was shown that the large-$`s`$ behavior of $`P_c(s)`$ and $`I_c(s)`$ can be described by an exponential decay $`P_c(s)\propto I_c(s)\propto e^{-A_cs}`$ with $`A_c\approx 1.9`$ for all three universality classes. All these studies were performed for 3D using periodic boundary conditions and cubic samples. On the other hand, $`P_c(s)`$ has been shown to depend on the sample shape and on the applied boundary conditions as well. A trend towards Poisson behavior was found when the cube was deformed by increasing or decreasing the length in one direction or when periodic boundary conditions were changed to Dirichlet in one, two, or three directions. Thus $`A_c`$ decreases from $`1.9`$ towards $`1`$. Furthermore, $`A_c`$ depends on the dimensionality, since in the orthogonal 4D case the critical $`P_c(s)`$ was found to be closer to Poisson statistics than in 3D. From the 4D results one might expect the opposite effect for our coupled planes and chains. This is not the case and we also find a trend towards Poisson statistics for increasing $`\gamma `$, as can be seen, e.g., in Fig. 5 for coupled planes.
This finding is consistent with the MFA results where the singularity spectra at the transition were found to tend towards the localized behavior with increasing anisotropy. While investigating the dependence on the sample shape we observe another interesting behavior. In the isotropic case, when deforming the cubic sample to a cuboid, the statistical properties of the spectra always tend towards Poisson statistics, irrespective of whether the sample becomes a long quasi-1D bar or a flat quasi-2D sample. Here we compute $`I(s)`$ for coupled planes with $`\gamma =0.9`$ at $`W=W_c=8.625`$ for two cases: (i) bar-shaped samples of size $`10\times 10\times 100`$ extending in the direction with reduced hopping and (ii) flat samples of size $`50\times 50\times 5`$ with large weakly coupled planes. We insert the results into Fig. 5. Surprisingly we find an opposite trend for the two cases: (i) for the bars $`I(s)`$ is close to Poisson statistics, very similar to $`I_c(s)`$ for $`\gamma =0.99`$; (ii) for the flat samples $`I(s)`$ is close to GOE statistics and the isotropic $`I_c(s)`$ is nearly recovered. This result is probably due to the fact that the system sizes in case (ii) are proportional to the localization lengths, which depend on the direction. The extension of the wave function measured in units of its characteristic lengths is then equal in all directions. We remark that a similar observation exists in 2D anisotropic samples: the isotropic scaling function is recovered if the dimensions of the system are proportional to the localization lengths. We expect that a further increase of the aspect ratio, i.e., reduction of the system towards 2D, will drive $`I(s)`$ again towards Poisson statistics. But this needs further study. We remark that the increasing fluctuations for $`s>6`$ in the $`I(s)`$ curves in Fig. 5 are due to the fact that there are only very few such large spacings; consequently, the statistics there are poor. In tests with large Poisson sequences of up to $`10^6`$ eigenvalues, we find similar small but increasing deviations from the theoretical result for $`s>6`$. ## IV One-parameter scaling at the MIT The MIT in the Anderson model of localization is expected to be a second-order phase transition, which is characterized by a divergence in an appropriate correlation length $$\xi _{\infty }(W)=C|W-W_c|^{-\nu }$$ (5) with critical exponent $`\nu `$, where $`C`$ is a constant. Here $`\xi _{\infty }(W)`$ is the correlation length of the infinite system, but in practice only finite, and still relatively small, systems are numerically accessible. In order to construct a thermodynamic limit, scaling laws $`X(W,bN)=F(X(W,N),b)`$ are applied to the finite-size data $`X(W,N)`$. Here $`X`$ denotes a dimensionless system property to be specified later and $`b`$ is an arbitrary scale factor. The scaling law has solutions of the form $$X=f(N/\xi _{\infty })$$ (6) which implies that the system size $`N`$ can be scaled by $`\xi _{\infty }(W)`$ such that all $`X(W,N)`$ collapse onto a single scaling function $`f`$. For a system with an MIT, this scaling function consists of two branches corresponding to the localized and the extended phase. In numerical experiments the reduced localization length $`\mathrm{\Lambda }_N(W)`$ obtained by the TMM is often used as quantity $`X`$. Scaling has also been shown for quantities derived from ELS, particularly for $`\alpha _N(W)`$ defined in Sec. III C.
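To make the collapse construction concrete, the following hedged sketch (Python/SciPy) generates toy data of the linearized form $`X=g((W-W_c)N^{1/\nu })`$ and estimates $`W_c`$ and $`\nu `$ by minimizing the spread of all points around a common polynomial master curve; the toy data and the minimization recipe are illustrative assumptions, not the procedure of any of the cited works:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data constructed to obey one-parameter scaling with Wc=8.6, nu=1.45
N = np.array([13.0, 17.0, 21.0, 24.0, 30.0, 40.0])
W = np.array([7.0, 7.8, 8.6, 9.4, 10.2])
x_true = np.outer(W - 8.6, N ** (1.0 / 1.45))
alpha = 0.62 + 0.25 * np.tanh(0.05 * x_true)    # smooth two-branch master curve

def spread(params):
    """Sum of squared residuals around a cubic master curve in the
    scaling variable (W - Wc) * N^(1/nu)."""
    Wc, nu = params
    x = np.outer(W - Wc, N ** (1.0 / nu)).ravel()
    y = alpha.ravel()
    coef = np.polyfit(x, y, 3)
    return np.sum((y - np.polyval(coef, x)) ** 2)

res = minimize(spread, x0=[8.0, 1.0], method="Nelder-Mead")
print(res.x)   # should recover approximately (8.6, 1.45)
```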
For the estimation of the critical exponent $`\nu `$, one can numerically perform the FSS procedure in order to determine the scaling function $`f(N/\xi _{\infty })`$ by minimizing the deviations of the data from the common scaling curve. After construction of the scaling curve, one can then fit the obtained scaling parameters $`\xi _{\infty }(W)`$ according to Eq. (5). However, the divergence of $`\xi _{\infty }(W)`$ at $`W_c`$ is rounded because of numerical inaccuracies. On the other hand, Eq. (5) is not expected to be valid far away from $`W_c`$, and it is a priori very difficult to determine an appropriate disorder range for the fit. There are several possibilities of determining $`\nu `$ directly from $`X(W,N)`$, thereby avoiding the numerical inaccuracies introduced by the scaling procedure. Linearizing Eq. (6) at the transition and using Eq. (5), one finds that close to $`W_c`$ the quantity $`X`$ behaves as $$X(W,N)=X(W_c,N)+\stackrel{~}{C}(W-W_c)N^{1/\nu }.$$ (7) Fitting the linear range of $`X(W)`$ for various fixed values of $`N`$ now allows us to determine $`\nu `$. One can also find a similar expression where $`X`$ is replaced by $`\mathrm{ln}X`$. Although both expressions are equivalent close to the MIT, they can give different results due to finite precision numerics. Another complication appears due to the presence of a systematic shift of the crossing point of the $`X(W)`$ curves visible, e.g., in highly accurate TMM data. Such a shift occurs also in our ELS data, but it is less prominent than in the mentioned TMM studies and can barely be seen in the inset of Fig. 3. The shift is not described by Eq. (7). The first attempt to overcome this problem was to add a correction term $`B`$ to Eq. (7) which depends on $`N`$ but not on $`W`$. This correction allows us to determine $`\nu `$ without an assumption about the nature of the shift. However, the value of $`W_c`$ is not accessible in this procedure since a change in $`W_c`$ can be compensated by changing the correction $`B(N)`$ accordingly. Alternatively one can assume that the small deviations from one-parameter scaling are caused by an irrelevant scaling variable, i.e., the presence of additional terms in (7) with system-size dependence $`N^y`$, with $`y<0`$, which vanish for large system sizes. This approach takes into account that the shift is not random but rather appears to be systematic. We use such a method introduced recently. A family of fit functions is constructed, which includes two kinds of corrections to scaling: (i) an irrelevant scaling variable and (ii) nonlinearities of the disorder dependence of the scaling variables. The starting point is the renormalization group equation (or scaling function) for $`X`$ $$X=\stackrel{~}{f}(\chi _\mathrm{r}N^{1/\nu },\chi _\mathrm{i}N^y).$$ (8) $`\chi _\mathrm{r}`$ and $`\chi _\mathrm{i}`$ are the relevant and irrelevant scaling variables with corresponding critical and irrelevant exponents $`\nu `$ and $`y`$, respectively.
$`\stackrel{~}{f}`$ is then Taylor expanded up to order $`n_\mathrm{i}`$ in terms of the second argument $$X=\underset{n=0}{\overset{n_\mathrm{i}}{\sum }}\chi _\mathrm{i}^nN^{ny}\stackrel{~}{f}_n(\chi _\mathrm{r}N^{1/\nu }),$$ (9) and each $`\stackrel{~}{f}_n`$ is Taylor expanded up to order $`n_\mathrm{r}`$: $$\stackrel{~}{f}_n=\underset{i=0}{\overset{n_\mathrm{r}}{\sum }}a_{ni}\chi _\mathrm{r}^iN^{i/\nu }.$$ (10) Finally, nonlinearities are taken into account by expanding $`\chi _\mathrm{r}`$ and $`\chi _\mathrm{i}`$ in terms of $`w=(W_c-W)/W_c`$ up to order $`m_\mathrm{r}`$ and $`m_\mathrm{i}`$, respectively, $$\chi _\mathrm{r}(w)=\underset{n=1}{\overset{m_\mathrm{r}}{\sum }}b_nw^n,\qquad \chi _\mathrm{i}(w)=\underset{n=0}{\overset{m_\mathrm{i}}{\sum }}c_nw^n,$$ (11) with $`b_1=c_0=1`$. Choosing the orders $`n_\mathrm{i},n_\mathrm{r},m_\mathrm{r},m_\mathrm{i}`$ up to which the expansions are carried out, one can adjust the fit function to the data set. If there is a single crossing point, an irrelevant scaling variable is not necessary and $`n_\mathrm{i}`$ is set to zero. In order to recover the linear behavior (7), one chooses $`n_\mathrm{r}=m_\mathrm{r}=1`$ and $`n_\mathrm{i}=m_\mathrm{i}=0`$. However, we emphasize that the linear region around $`W_c`$ is very small and a simple scaling with Eq. (7) will not give accurate results. For larger $`W`$ ranges, $`2`$nd or $`3`$rd order terms are necessary for the fit. Setting $`n_\mathrm{r}`$ or $`m_\mathrm{r}`$ to two or three yields appropriate fit functions. The total number of fit parameters including $`W_c`$, $`\nu `$, and $`y`$ is $`N_p=(n_\mathrm{i}+1)(n_\mathrm{r}+1)+m_\mathrm{r}+m_\mathrm{i}+2`$ and should of course be kept as small as possible. There is an alternative method developed for ELS which should give very accurate results with a relatively small number of data points. The data $`X(W)`$ for constant $`N`$ are fitted with third-order polynomials and these functions are used to generate a large number of new data points for which the FSS procedure is employed. This saves a lot of computer time but does not solve the problems of the FSS procedure discussed above. Furthermore, this smoothing suggests a higher quality of the data than is actually achieved. ## V Computation of the critical properties at the MIT We have decided to determine the critical exponent $`\nu `$ for coupled planes with strong anisotropy $`\gamma =0.9`$ with highest accuracy. We therefore doubled the number of samples for this value of $`\gamma `$ compared to the other cases and used large system sizes, namely $`N=13,17,21,24,30,40`$, and $`50`$. The disorder range is $`W\in [6,12]`$ except for $`N=50`$, where we reduced it to $`W\in [8,9.25]`$. These data are shown in Fig. 3. Here, $`5\times 10^5`$ to $`7\times 10^5`$ eigenvalues contribute to each $`\alpha _N(W)`$ and the statistical error from the average over the $`\alpha _N`$ values from each sample is between 0.2% and 0.4%. The number of samples ranges from $`699`$ for $`N=13`$ to $`10`$ for $`N=50`$. ### A The non-linear fit For the non-linear fit to the data, we use the Levenberg-Marquardt method (LMM) as implemented, e.g., in the Mathematica function NonlinearRegress. The LMM minimizes the $`\chi ^2`$ statistic, which measures the deviation between model and data, taking the errors of the data points into account. In order to judge the quality of the fit, we use the goodness-of-fit parameter $`Q`$, which incorporates, besides the value of $`\chi ^2`$, also the number of data points and fit parameters.
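Before discussing the fit quality, here is a minimal sketch of this fit family in Python (using SciPy's curve_fit, which for unbounded problems also employs a Levenberg-Marquardt minimizer); it implements the $`n_\mathrm{r}=3`$, $`m_\mathrm{r}=1`$, $`n_\mathrm{i}=m_\mathrm{i}=0`$ member — the cubic fit function given below in Sec. V B — on placeholder data, since the actual $`\alpha _N(W)`$ values are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_family(X, a00, a01, a02, a03, Wc, nu):
    """n_r = 3, m_r = 1, n_i = m_i = 0 member of Eqs. (9)-(11):
    alpha_N = a00 + a01*t + a02*t^2 + a03*t^3 with t = w * N^(1/nu)."""
    W, N = X
    t = (Wc - W) / Wc * N ** (1.0 / nu)
    return a00 + a01 * t + a02 * t**2 + a03 * t**3

# Placeholder data, flattened over all (W, N) combinations:
W = np.repeat([7.0, 8.0, 8.6, 9.2, 10.0, 11.0], 6)
N = np.tile([13, 17, 21, 24, 30, 40], 6).astype(float)
alpha = fit_family((W, N), 0.62, -0.02, 1e-3, -2e-5, 8.6, 1.45)
alpha += np.random.default_rng(3).normal(0.0, 0.002, alpha.size)
sigma = np.full(alpha.size, 0.002)          # ~0.3% errors, as in the text

p, cov = curve_fit(fit_family, (W, N), alpha, sigma=sigma,
                   p0=[0.6, -0.01, 0.0, 0.0, 8.5, 1.3], absolute_sigma=True)
print("Wc = %.3f +/- %.3f, nu = %.2f +/- %.2f"
      % (p[4], np.sqrt(cov[4, 4]), p[5], np.sqrt(cov[5, 5])))
```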
For reliable fits $`Q`$ should fall into the range $`0.01<Q<1`$. The output of the LMM routine also contains confidence intervals for the estimated fit parameters. We check these independently by computing a number of new data sets by randomly varying each data point $`\alpha _N(W)`$ within its error bar. The LMM fit procedure is then applied to these new sets and the variance of the resulting $`W_c`$ and $`\nu `$ values is compared to the confidence intervals of the original fit. We find that they do not differ significantly. More importantly, we also check whether the fit parameters are compatible when using different expansions of the fit function as outlined in the last section. As we will show below, it turns out that then the fitted values of $`W_c`$ and $`\nu `$ differ from each other by more than the confidence intervals obtained from the individual LMM fits. This is important when combining all of them into a final result. ### B Results In the ELS data we do not find a clear shift of the crossing point within the accuracy of our data, as shown in Fig. 3. Therefore, an irrelevant scaling variable is probably not necessary when using the FSS of Sec. IV and we usually set $`n_\mathrm{i}=0`$. However, the nonlinearities in the $`W`$ dependence of $`\alpha _N`$ require fit functions with $`W^3`$ terms, i.e., $`n_\mathrm{r}=3`$ or $`m_\mathrm{r}=3`$. E.g., our choice $`n_\mathrm{r}=3`$, $`m_\mathrm{r}=1`$ yields the fit function $$\alpha _N=a_{00}+a_{01}N^{1/\nu }w+a_{02}N^{2/\nu }w^2+a_{03}N^{3/\nu }w^3.$$ (12) To achieve a good quality of fit while avoiding expansion orders higher than 3, we use the reduced $`W`$ interval $`[7,11]`$ in the fits. Additionally, the interval $`W\in [8,9.25]`$, where we have data for $`N=50`$, is employed, and we vary the $`N`$ interval in order to find a possible trend when using larger system sizes only. In Table I the parameters of the fit function, $`\chi ^2`$, $`Q`$ and the results for $`W_c`$ and $`\nu `$ with their confidence intervals are summarized. We report the best fits obtained for five combinations of $`N`$ and $`W`$ intervals as denoted by the characters A to E. For completeness, a few worse fits, denoted by lowercase characters, are added. First one notices that the quality of fit is rather good for most cases, i.e., $`Q>0.8`$. Only for group B do we find smaller values, $`Q=0.13`$–$`0.22`$, which nevertheless still indicate a useful fit. Thus the fit functions describe the data very well. This can also be seen in Fig. 3, where we added the fitted model functions of fit A. As shown in Fig. 6, the data collapse onto a single scaling function with two branches corresponding to the localized and extended regimes. Together with the divergent scaling parameter shown in the inset of Fig. 6, this clearly indicates the MIT. The values of the critical disorder obtained from the different fits are scattered from $`8.54`$ to $`8.62`$, although the 95% confidence intervals as given by the LMM are $`\pm 0.02`$ only. Apparently, these error estimates do not characterize the real situation. Thus, we conclude $`W_c=8.58\pm 0.06`$. For the critical exponent the situation is even worse, as visualized in Fig. 7. The results scatter from $`1.26`$ up to $`1.51`$. Some of the 95% confidence intervals, which range from $`\pm 0.04`$ to $`\pm 0.10`$, overlap, but others are far apart. Consider for instance group C: while both fits have a $`Q`$ value of nearly one, their estimates for $`\nu `$ differ by twice the width of the confidence intervals.
This is unexpected since the standard deviation of the $`\alpha _N`$ values is taken into account by the LMM. We have also tested the use of an energy interval containing only 20% of the eigenvalues instead of 50% for the determination of the $`\alpha _N`$ data, because one might argue that the ELS, and thus the critical behavior, depends on $`E`$ more strongly in the anisotropic case than in the isotropic system, where no changes were found for a large $`E`$ interval. However, we find no significant changes of the results in the anisotropic case, either. An interesting trend can be seen by considering the mean value of the fitted $`\nu `$ within the groups. For A and B it is 1.28 and 1.32; for the three other groups C, D, and E, where the data for smaller system sizes ($`N=13,17,21`$) are neglected, the mean $`\nu `$ is 1.42, 1.45, and 1.42. The fitted critical exponent apparently increases if only larger system sizes are considered. This is a hint that in the thermodynamic limit $`\nu `$ is probably larger than 1.4. It might also indicate that there are finite-size corrections in the $`\alpha _N`$ data that are not described by the fit functions. Considering this trend and the scattering of the results from all fits, we conclude from our ELS data the critical exponent $`\nu =1.45\pm 0.2`$. This is consistent with other ELS results for the orthogonal case, $`\nu =1.34\pm 0.10`$ and $`1.4\pm 0.15`$. As our results are obtained from more accurate data due to more samples and larger system sizes, and in the light of the above discussion, we believe that the error estimates of Refs. are too small. Our $`\nu `$ is small compared to results from highly accurate TMM studies. For the present case of weakly coupled planes $`\nu =1.62\pm 0.08`$ was obtained. For the isotropic case without magnetic field $`\nu =1.54\pm 0.08`$ and $`\nu =1.58\pm 0.06`$ were determined. However, considering the error bars obtained from the scattering of the results from different fits, the results are consistent. As a further test to check whether the results from TMM and ELS are compatible, we scaled the $`\alpha _N(W)`$ data with the scaling parameter $`\xi _{\infty }(W)`$ available from FSS of TMM data with 0.07% error. As can be seen in Fig. 8, the $`\alpha _N`$ data collapse reasonably well onto a single scaling function. Apparently, the $`N=50`$ data lie systematically slightly above the scaling function. We remark that in our calculations, the seed that initializes the random number generator depends only on the number of the sample but not on the value of $`W`$. The potential energies for the $`k`$th sample are obtained by scaling the $`k`$th sequence of random numbers with $`W`$. For a small number of samples, i.e., 10 for $`N=50`$, this might lead to the observed systematic shift. Another possible reason is that the influence of an irrelevant scaling variable starts to become visible. But using such a variable in the fits gives no improvement of the results, although the number of parameters increases considerably. In group g of Table I we report a few such fits. Compared to group B the $`Q`$ values are much larger. But for the first two fits the confidence intervals are extremely large and the estimates for $`\nu `$ and $`W_c`$ are nearly meaningless. The third fit gives essentially the same results as the corresponding fit of group B. We remark that one has to be very careful when using such non-linear fits. Even an excellent $`Q`$ value does not guarantee useful results for the fitted model parameters.
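The independent error check described above — refitting synthetic data sets obtained by varying each $`\alpha _N(W)`$ within its error bar — can be sketched as follows (Python/SciPy, reusing the illustrative fit_family of the previous sketch; the number of synthetic sets is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import curve_fit

def resampled_errors(fit_func, X, alpha, sigma, p0, n_sets=100, seed=4):
    """Refit n_sets synthetic data sets obtained by perturbing each data
    point within its error bar; return the spread of the fit parameters."""
    rng = np.random.default_rng(seed)
    params = []
    for _ in range(n_sets):
        fake = alpha + rng.normal(0.0, sigma)
        try:
            p, _ = curve_fit(fit_func, X, fake, sigma=sigma, p0=p0)
            params.append(p)
        except RuntimeError:          # skip the rare non-converging refit
            continue
    params = np.array(params)
    return params.mean(axis=0), params.std(axis=0)

# Usage with the fit_family, W, N, alpha, sigma of the previous sketch:
# mean, spread = resampled_errors(fit_family, (W, N), alpha, sigma,
#                                 p0=[0.6, -0.01, 0.0, 0.0, 8.5, 1.3])
# print(spread[-2:])  # compare with the LMM confidence intervals for Wc, nu
```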
For completeness we also used the fit method described at the end of Sec. IV. The third-order polynomials are shown in the inset of Fig. 3 and the scaled data in Fig. 6. The scaling is almost perfect, which is not surprising because of the effective smoothing of the data by means of the cubic fits. The scaling function is very similar to that from fit f, which gives the same critical exponent $`\nu \approx 1.3`$. But as mentioned before, it is very difficult to derive a reliable $`\nu `$ and its error estimate with this method. ## VI Summary In the present work we have studied the metal-insulator transition in the 3D Anderson model of localization with anisotropic hopping. We used ELS together with FSS analysis to characterize the MIT. Our results indicate, in agreement with previous studies, that a transition exists for any anisotropy $`\gamma <1`$ for coupled planes and for coupled chains. We find a power-law decay of the critical disorder with increasing anisotropy. For coupled chains the decay is found to be faster than predicted by previous CPA results. A large part of the present work is devoted to the determination of the critical exponent $`\nu `$. We calculated the integral $`\alpha _N(W)`$ over the $`\mathrm{\Delta }_3`$ statistics with very large system sizes and high accuracy for the case of coupled planes with $`\gamma =0.9`$. In order to determine $`\nu `$ we used a method to fit the data introduced by Slevin and Ohtsuki which allows for corrections to scaling due to an irrelevant scaling variable and nonlinearities in the disorder dependence of the scaling variables. It turns out that it is not necessary to take into account an irrelevant scaling variable. By varying the ranges of system size and disorder we find a large number of fits, which all describe the data remarkably well. One expects that the estimates for $`\nu `$ from all these fits are consistent within the error bars. Unfortunately, the $`95\%`$ confidence intervals do not always overlap. Apparently, the error estimate from the fit procedure does not reflect the possible fluctuations in $`\nu `$. Furthermore, we find a systematic shift towards larger values of $`\nu `$ if only the largest system sizes are considered. This might indicate the existence of finite-size corrections not included in the ansatz. Taking all this into account, we conclude $`\nu =1.45\pm 0.2`$. Considering the accuracy of our data and the large system sizes used, this error estimate appears surprisingly large when compared to previous estimates. We presume that most specified errors in similar studies are somewhat optimistic. Within the error bars, our result is consistent with highly accurate TMM studies for the GOE case. This supports the concept of universality classes, from which one expects no change of the critical exponent for anisotropic hopping. The ELS data at the MIT are independent of the system size, but they depend on anisotropy. For increasing anisotropy we find a trend from the isotropic form towards Poisson statistics. Since it was found that the critical statistics also depends on boundary conditions and on the shape of the sample, as well as on the dimensionality, one might doubt the validity of the very attractive concept of a super universality class characterizing the critical point in all universality classes. That concept is based on the observation of very similar critical exponents and agreement of the large-$`s`$ behavior of $`P(s)`$ at $`W_c`$ in the isotropic Anderson model.
We believe that at present the accuracy in the determination of the critical properties does not allow for an entirely convincing validation of this concept. ###### Acknowledgements. This work was supported by the DFG within the Sonderforschungsbereich 393.
no-problem/9909/hep-ph9909409.html
ar5iv
text
# References Freiburg–THEP 99/11 UM-TH-99-06 September 1999 Considerations on anomalous vector boson couplings J.J. van der Bij Albert–Ludwigs–Universität Freiburg, Fakultät für Physik, Hermann–Herder–Strasse 3, 79104 Freiburg i. Br., Germany A. Ghinculov Randall Laboratory of Physics, University of Michigan, Ann Arbor, Michigan 48109-1120, USA ## Abstract We discuss the meaning of anomalous vector boson self couplings. Implications of present experimental constraints for future colliders are discussed. Results for triple vector boson production at the LHC are given. 1. Introduction The standard model is well established by the experiments at LEP and the Tevatron. Any deviations from the standard model can therefore be introduced only with care. Changes to the standard model come with different degrees of severity. In order to see at what level anomalous vector boson couplings can be reasonably discussed one has to consider these cases separately. Changes to the gauge structure of the theory that do not violate the renormalizability of the theory, i.e. the introduction of extra fermions or possible extensions of the gauge group, are the least severe. They will typically generate small corrections to vector boson couplings via loop effects. In this case also radiative effects will be generated at lower energies. For the LHC the important thing in this case is not to measure the anomalous couplings precisely, but to look for the extra particles. This subject belongs naturally to the ”extensions of the standard model” working group. We will not discuss it further. In the other case, a more fundamental role is expected for the anomalous couplings, implying strong interactions. In this case one has to ask oneself whether one should study a model with or without a fundamental Higgs boson. Simply removing the Higgs boson from the standard model is a relatively mild change. The model becomes nonrenormalizable, but the radiative effects grow only logarithmically with the cut-off. The question is whether this scenario is ruled out by the LEP1 precision data. The LEP1 data appear to be in agreement with the standard model, with a preferred low Higgs mass. One is sensitive to the Higgs mass in three parameters, known as S,T,U or $`ϵ_1,ϵ_2,ϵ_3`$. These receive corrections of the form $`g^2(\mathrm{log}(m_H/m_W)+\mathrm{constant})`$, where the constants are of order one. The logarithmic enhancement is universal and would also appear in models without a Higgs as $`\mathrm{log}(\mathrm{\Lambda })`$, where $`\mathrm{\Lambda }`$ is the cut-off at which new interactions should appear. Only when one can determine the three different constants independently can one say that one has established the standard model. At present the data do not suffice to do this to great enough precision. A much more severe change to the standard model is the introduction of non-gauge vector boson couplings. These new couplings violate renormalizability much more severely than simply removing the Higgs boson. Typically quadratically and quartically divergent corrections would appear in physical observables. It is therefore questionable whether one should study models with a fundamental Higgs boson but with extra anomalous vector boson couplings. It is hard to imagine a form of dynamics that could do this. If the vector bosons become strongly interacting, the Higgs probably would at most exist in an ”effective” way. The most natural way is therefore to study anomalous vector boson couplings in models without a fundamental Higgs.
Actually, when one removes the Higgs boson the standard model becomes a gauged non-linear sigma-model. The nonlinear sigma-model is well known to describe low-energy pion physics. The ”pions” correspond to the longitudinal degrees of freedom of the vector bosons. To $`f_\pi `$ corresponds the vacuum expectation value of the Higgs field. Within this description the standard model corresponds to the lowest order term quadratic in the momenta, anomalous couplings to higher derivative terms. The systematic expansion in terms of momenta is known as chiral perturbation theory and is extensively used in meson physics. Writing down the most general non-linear chiral Lagrangian containing up to four derivatives gives rise to a large number of terms, which are too general to be studied effectively. One therefore has to look for dynamical principles that can limit the number of terms. Of particular importance are approximate symmetry principles. In the first place one expects CP-violation to be small. We limit ourselves therefore to CP-preserving terms. In order to see what this means in practice it is advantageous to describe the couplings in a manifestly gauge-invariant way, using the Stückelberg formalism. One needs the following definitions: $$F_{\mu \nu }=\frac{i\tau _i}{2}(\partial _\mu W_\nu ^i-\partial _\nu W_\mu ^i+gϵ^{ijk}W_\mu ^jW_\nu ^k)$$ (1) is the SU(2) field strength. $$D_\mu U=\partial _\mu U+\frac{ig}{2}\tau _iW_\mu ^iU+ig\mathrm{tan}\theta _wU\tau _3B_\mu $$ (2) is the gauge covariant derivative of the SU(2) valued field $`U`$, which describes the longitudinal degrees of freedom of the vector fields in a gauge invariant way. $$B_{\mu \nu }=\partial _\mu B_\nu -\partial _\nu B_\mu $$ (3) is the hypercharge field strength. $$V_\mu =(D_\mu U)U^{\dagger }/g$$ (4) $$T=U\tau _3U^{\dagger }/g$$ (5) are auxiliary quantities having simple transformation properties. Excluding CP violation, the nonstandard three and four vector boson couplings are described in this formalism by the following set of operators. $$\mathcal{L}_1=Tr(F_{\mu \nu }[V_\mu ,V_\nu ])$$ (6) $$\mathcal{L}_2=i\frac{B_{\mu \nu }}{2}Tr(T[V_\mu ,V_\nu ])$$ (7) $$\mathcal{L}_3=Tr(TF_{\mu \nu })Tr(T[V_\mu ,V_\nu ])$$ (8) $$\mathcal{L}_4=(Tr[V_\mu V_\nu ])^2$$ (9) $$\mathcal{L}_5=(Tr[V_\mu V_\mu ])^2$$ (10) $$\mathcal{L}_6=Tr(V_\mu V_\nu )Tr(TV_\mu )Tr(TV_\nu )$$ (11) $$\mathcal{L}_7=Tr(V_\mu V_\mu )(Tr[TV_\nu ])^2$$ (12) $$\mathcal{L}_8=\frac{1}{2}[(Tr[TV_\mu ])(Tr[TV_\nu ])]^2$$ (13) In the unitary gauge $`U=1`$, one has $$\mathcal{L}_1=i[(cZ_{\mu \nu }+sF_{\mu \nu })W_\mu ^+W_\nu ^-+Z_\nu /c(W_{\mu \nu }^+W_\mu ^--W_{\mu \nu }^-W_\mu ^+)]$$ (14) \+ gauge induced four boson vertices $$\mathcal{L}_2=i(cF_{\mu \nu }-sZ_{\mu \nu })W_\mu ^+W_\nu ^-$$ (15) $$\mathcal{L}_3=i(cZ_{\mu \nu }+sF_{\mu \nu })W_\mu ^+W_\nu ^-$$ (16) c and s are cosine and sine of the weak mixing angle. The standard model without Higgs corresponds to: $$\mathcal{L}_{EW}=-\frac{1}{2}\text{Tr}(𝑾_{\mu \nu }𝑾^{\mu \nu })-\frac{1}{2}\text{Tr}(𝑩_{\mu \nu }𝑩^{\mu \nu })+\frac{g^2v^2}{4}\text{Tr}(𝑽_\mu 𝑽^\mu )$$ (17) 2. Dynamical constraints The list contains terms that give rise to vertices with at least three or four vector bosons. Already with the present data a number of constraints and/or consistency conditions can be put on the vertices. The most important of these come from the limits on the breaking of the so-called custodial symmetry. If the hypercharge is put to zero, the effective Lagrangian has a larger symmetry than $`SU_L(2)\times U_Y(1)`$, i.e. it has the symmetry $`SU_L(2)\times SU_R(2)`$. The $`SU_R(2)`$ invariance is a global invariance.
Within the standard model this invariance is an invariance of the Higgs potential, but not of the full Lagrangian. It is ultimately this invariance that is responsible for the fact that the $`\rho `$-parameter, which is the ratio of charged to neutral current strength, is equal to one at the tree level. Some terms in the Lagrangian, i.e. the ones containing the hypercharge field explicitly or the terms with $`T`$, which projects out the third isospin component, violate this symmetry explicitly. These terms, when inserted in a loop graph, give rise to quartically divergent contributions to the $`\rho `$-parameter. Given the measurements, this means that the coefficients of these terms must be extremely small. It is therefore reasonable to limit oneself to a Lagrangian where hypercharge appears only indirectly, via minimal coupling, i.e. without explicit $`T`$. This assumption means physically that the ultimate dynamics responsible for the strong interactions among the vector bosons acts in the non-Abelian sector. Indeed one would normally not expect precisely the hypercharge to become strong. However, we know that there is a strong violation of the custodial symmetry in the form of the top-quark mass. Actually the top-mass almost saturates the existing corrections to the $`\rho `$-parameter, leaving no room for violations of the custodial symmetry in the anomalous vector boson couplings. We therefore conclude: If there really are strong vector boson interactions, the mechanism for mass generation is unlikely to be the same for bosons and fermions. Eliminating the custodial symmetry violating interactions, we are left with the simplified Lagrangian containing $`\mathcal{L}_1`$, $`\mathcal{L}_4`$, $`\mathcal{L}_5`$. Besides the vertices there are in principle also propagator corrections. We take the two-point functions without explicit $`T`$. Specifically, we add to the theory $$\mathcal{L}_{hc,tr}=\frac{1}{2\mathrm{\Lambda }_W^2}\text{Tr}[(D_\alpha 𝑾_{\mu \nu })(D^\alpha 𝑾^{\mu \nu })]+\frac{1}{2\mathrm{\Lambda }_B^2}\text{Tr}[(\partial _\alpha 𝑩_{\mu \nu })(\partial ^\alpha 𝑩^{\mu \nu })]$$ (18) for the transverse degrees of freedom of the gauge fields and $$\mathcal{L}_{hc,lg}=\frac{g^2v^2}{4\mathrm{\Lambda }_V^2}\text{Tr}[(D^\alpha 𝑽^\mu )(D_\alpha 𝑽_\mu )]$$ (19) for the longitudinal ones, where the $`\mathrm{\Lambda }_X`$ parametrize the quadratic divergences and are expected to represent the scales where new physics comes in. In phenomenological applications these contributions give rise to form factors in the propagators. Introducing such cut-off dependent propagators in the analysis of vector boson pair production is similar to having s-dependent triple vector boson couplings, which is the way the data are usually analysed. This effective Lagrangian is very similar to the one in pion physics. Indeed, if one takes the limit of fixed vev and vanishing gauge couplings, one finds the standard pion Lagrangian. As it stands one can use the LEP1 data to put a limit on the terms in the two-point vertices. Using a naive analysis one finds $`1/\mathrm{\Lambda }_B^2=0`$. For the other two cut-offs one has: A. The case $`\mathrm{\Lambda }_V^2>0,\mathrm{\Lambda }_W^2<0`$: $`\mathrm{\Lambda }_V>0.49TeV,\mathrm{\Lambda }_W>1.3TeV`$. B. The case $`\mathrm{\Lambda }_V^2<0,\mathrm{\Lambda }_W^2>0`$: $`\mathrm{\Lambda }_V>0.74TeV,\mathrm{\Lambda }_W>1.5TeV`$. This information is important for further limits at high energy colliders, as it tells us how one has to cut off off-shell propagators.
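To illustrate how such cut-offs enter a phenomenological analysis, the following sketch (Python; the dipole exponent $`n=2`$ and the coupling value are purely illustrative assumptions, not taken from the text) evaluates the commonly used generalized dipole form factor $`a(\widehat{s})=a_0/(1+\widehat{s}/\mathrm{\Lambda }^2)^n`$ that damps an anomalous coupling at large invariant mass:

```python
def dipole_form_factor(a0, s_hat, Lam, n=2):
    """Generalized dipole form factor a(s_hat) = a0 / (1 + s_hat/Lam^2)^n,
    a standard way of suppressing an anomalous coupling at large invariant
    mass so that unitarity is respected (n = 2 is an illustrative choice)."""
    return a0 / (1.0 + s_hat / Lam**2) ** n

# Suppression of a hypothetical coupling a0 = 0.1 at sqrt(s_hat) = 1 TeV,
# for the longitudinal cut-off scales quoted above and the 2 TeV value
# often used at the Tevatron (all in TeV):
s_hat = 1.0**2
for Lam in (0.49, 0.74, 2.0):
    print(Lam, dipole_form_factor(0.1, s_hat, Lam))
```

The point of the numerical exercise is that the suppression differs by an order of magnitude between a 0.5 TeV and a 2 TeV cut-off, which is why the choice of cut-off matters for the quoted limits.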
We notice that the limits are different for the transverse, longitudinal and hypercharge form factors. The precise limits are somewhat qualitative and should be taken as such. However, they show that effective cut-off form factors should be taken around 500 GeV. It is certainly not correct to put them at the maximum machine energy. Further information comes from the direct measurements of the three-point couplings at LEP2, which tell us that they are small. Similar limits at the Tevatron have to be taken with some care, as there is a cut-off dependence. As there is no known model that can give large three-point interactions, we assume for the further analysis of the four-point vertices that the three-point anomalous couplings are absent. On the remaining two four-point vertices two more constraints can be put. The first comes from the consistency of chiral perturbation theory: not every effective chiral Lagrangian can be generated from a physical underlying theory. A second condition comes from the $`\rho `$-parameter. Even the existing violation of the custodial symmetry, though indirect via the minimal coupling to hypercharge, gives a contribution to the $`\rho `$-parameter. It constrains the combination $`5g_4+2g_5`$. The remaining combination $`2g_4-5g_5`$ is fully unconstrained by experiment and gives in principle a possibility for very strong interactions to be present. However, this particular combination does not seem to have any natural interpretation in terms of underlying dynamics. Therefore one can presumably conclude that both couplings $`g_4,g_5`$ are small. There is a loophole to this conclusion, namely when the anomalous couplings are so large that the one-loop approximation used to arrive at the limits is not consistent and resummation has to be performed everywhere. This is a somewhat exotic possibility that could lead to very low-lying resonances, which ought to be easy to discover at the LHC.

3. LHC processes

Given the situation described above, one has to ask oneself what the LHC can do and in which way the data should be analysed. There are essentially three processes that can be used to study vector boson vertices: vector boson pair production, vector boson scattering, and triple vector boson production. About the first two we have only a few remarks to make. They are discussed more fully in other contributions to the workshop.

3a. vector boson pair production

Vector boson pair production can be studied in a relatively straightforward way. The reason is that here the Higgs boson does not play a role in the standard model, as we take the incoming quarks to be massless. Therefore naive violations of unitarity can be compensated by the introduction of smooth form factors. One produces two vector bosons via normal standard model processes with an anomalous vertex added. The extra anomalous coupling leads to unitarity-violating cross-sections at high energy. As a total energy of 14 TeV is available, this is in principle a serious problem. It is cured by introducing a form factor for the incoming off-shell line connected to the anomalous vertex. Naively this leads to a form-factor dependent limit on the anomalous coupling in question. The LEP1 data give a lower limit on the cut-off to be used inside the propagator. When one wants an overall limit on the anomalous coupling, one should use this value. This is particularly relevant for the Tevatron. Here one typically takes a cut-off of 2 TeV.
This might give too strict a limit, as the LEP1 data indicate that the cut-off can be as low as 500 GeV. For practical purposes the analysis at the Tevatron should give limits on anomalous couplings for different values of the cut-off form factors, including low values of the cut-off. For the analysis at the LHC one has much larger statistics. This means that one can do better and measure limits on the anomalous couplings as a function of the invariant mass of the produced system. This way one measures the anomalous form factor completely.

3b. vector boson scattering

Here the situation is more complicated than in vector boson pair production. The reason is that within the standard model the process cannot be considered without the intermediate Higgs contribution; leaving it out would violate unitarity. However, the incoming vector bosons are basically on-shell, and this allows the use of unitarization methods, as are commonly used in chiral perturbation theory in pion physics. These methods tend to give rise to resonances in longitudinal vector boson scattering. The precise details depend on the coupling constants. The unitarization methods are not unique, but generically give rise to large $`I=J=0`$ and/or $`I=J=1`$ cross-section enhancements. The literature on these methods is quite extensive, ranging from introductory accounts to recent reviews.

3c. Triple vector boson production

In this case it is not clear how one should consistently approach an analysis of anomalous vector boson couplings. Within the standard model the presence of the Higgs boson is essential in this channel. Leaving it out, one has to study the unitarization. This unitarization has to take place not only on the two-to-two scattering subgraphs, as in vector boson scattering, but also on the incoming off-shell vector boson, decaying into three real ones. The analysis here becomes too arbitrary to derive very meaningful results. One cannot confidently calculate anything here without a fully known underlying model of new strong interactions. Also, measurable cross sections tend to be small, so that triple vector boson production is best used as corroboration of results in vector boson scattering. Deviations from standard model cross sections could be seen, but vector boson scattering would be needed for their interpretation. One therefore needs the standard model results. The total number of vector boson triples is given in table 1. We used an integrated luminosity of $`100\,fb^{-1}`$ and an energy of $`14\,TeV`$ throughout.

| $`m_{Higgs}`$ | 200 | 400 | 600 | 800 |
| --- | --- | --- | --- | --- |
| $`W^+W^-W^-`$ | 11675 | 5084 | 4780 | 4800 |
| $`W^+W^+W^-`$ | 20250 | 9243 | 8684 | 8768 |
| $`W^+W^-Z`$ | 20915 | 11167 | 10638 | 10685 |
| $`W^-ZZ`$ | 2294 | 1181 | 1113 | 1113 |
| $`W^+ZZ`$ | 4084 | 2243 | 2108 | 2165 |
| $`ZZZ`$ | 4883 | 1332 | 1087 | 1085 |

Table 1. Total number of events, no cuts, no branching ratios.

One sees from this table that a large part of the events comes from associated Higgs production when the Higgs is light. However, for the study of anomalous vector boson couplings the heavier Higgs results are arguably more relevant. Not all the events can be used for the analysis. If we limit ourselves to events containing only electrons, muons and neutrinos, and assume just acceptance cuts, we find table 2.
| $`m_{Higgs}`$ | 200 | 400 | 600 | 800 |
| --- | --- | --- | --- | --- |
| $`W^+W^-W^-`$ | 68 | 28 | 25 | 25 |
| $`W^+W^+W^-`$ | 112 | 49 | 44 | 44 |
| $`W^+W^-Z`$ | 32 | 17 | 15 | 15 |
| $`W^-ZZ`$ | 1.0 | 0.51 | 0.46 | 0.45 |
| $`W^+ZZ`$ | 1.7 | 0.88 | 0.79 | 0.79 |
| $`ZZZ`$ | 0.62 | 0.18 | 0.13 | 0.12 |

Table 2. Pure leptons, $`|\eta |<3`$, $`p_T>20\,GeV`$, no cuts on neutrinos.

We see that very little is left, in particular in the processes with at least two Z-bosons, where the events can be fully reconstructed. In order to see how sensitive we are to anomalous couplings, we assumed a 4Z coupling with a form-factor cut-off at 2 TeV. We make here no correction for efficiencies etc. Using triple Z-boson production, and assuming no events are seen in $`100\,fb^{-1}`$, we find a limit $`|g_4+g_5|<0.09`$ at the 95% CL, where $`g_4`$ and $`g_5`$ are the coefficients multiplying the operators $`\mathcal{L}_4`$ and $`\mathcal{L}_5`$. This is to be compared with $`-0.15<5g_4+2g_5<0.14`$ or $`-0.066<(5g_4+2g_5)\mathrm{\Lambda }^2(TeV)<0.026`$. So the sensitivity is not better than present indirect limits. Better limits exist in vector boson scattering or at a linear collider. In the following tables we present numbers for observable cross sections in different decay modes of the vector bosons. We used the following cuts: $$|\eta |_{lepton}<3$$ $$|p_T|_{lepton}>20\,GeV$$ $$|\eta |_{jet}<2.5$$ $$|p_T|_{jet}>40\,GeV$$ $$\mathrm{\Delta }R_{jet,lepton}>0.3$$ $$\mathrm{\Delta }R_{jet,jet}>0.5$$ $$|p_T|_{2\nu }>50\,GeV$$ States with more than two neutrinos are not very useful because of the background from two vector boson production. We did not consider final states containing $`\tau `$-leptons. With the given cuts the total number of events to be expected is rather small, in particular since we did not consider the reduction in events due to experimental efficiencies, which may be relatively large because of the large number of particles in the final state. For the processes containing jets in the final state there will be large backgrounds due to QCD processes. A final conclusion on the significance of triple vector boson production for constraining the four vector boson couplings will need more work, involving detector Monte Carlo calculations. However, it is probably fair to say from the above results that no very strong constraints will be found from this process at the LHC, but that it is useful as a cross-check with other processes. It may provide complementary information if non-zero anomalous couplings are found.

| $`m_{Higgs}`$ | 200 | 300 | 400 | 500 | 600 |
| --- | --- | --- | --- | --- | --- |
| $`6\ell `$ | 0.62 | 0.29 | 0.18 | 0.14 | 0.13 |
| $`4\ell ,2\nu `$ | 5.1 | 2.5 | 1.5 | 1.2 | 1.1 |
| $`4\ell ,2j`$ | 6.6 | 3.8 | 2.2 | 1.7 | 1.4 |
| $`2\ell ,2j,2\nu `$ | 34 | 20 | 12 | 9.0 | 7.7 |
| $`2\ell ,4j`$ | 24 | 19 | 11 | 7.6 | 6.0 |
| $`2\nu ,4j`$ | 37 | 34 | 21 | 15 | 11 |
| $`6j`$ | 25 | 31 | 19 | 12 | 8.7 |

Table 3. $`ZZZ`$ production in different decay modes.

| $`m_{Higgs}`$ | 200 | 300 | 400 | 500 | 600 |
| --- | --- | --- | --- | --- | --- |
| $`4\ell ,2\nu `$ | 31 | 20 | 17 | 16 | 15 |
| $`3\ell ,2j,1\nu `$ | 51 | 40 | 31 | 28 | 26 |
| $`2\ell ,4j`$ | 19 | 22 | 17 | 14 | 13 |
| $`2\nu ,4j`$ | 63 | 74 | 60 | 51 | 48 |
| $`2\ell ,2j,2\nu `$ | 102 | 68 | 54 | 49 | 48 |
| $`1\ell ,4j,1\nu `$ | 262 | 196 | 140 | 127 | 127 |
| $`6j`$ | 86 | 104 | 78 | 62 | 56 |

Table 4. $`WWZ`$ production in different decay modes.
| $`m_{Higgs}`$ | 200 | 300 | 400 | 500 | 600 |
| --- | --- | --- | --- | --- | --- |
| $`5\ell ,1\nu `$ | 0.45 | 1.04 | 0.63 | 0.52 | 0.47 |
| | 0.80 | 1.69 | 1.08 | 0.91 | 0.81 |
| $`3\ell ,2j,1\nu `$ | 3.37 | 6.89 | 5.36 | 4.18 | 3.73 |
| | 5.9 | 11.5 | 9.3 | 7.4 | 6.5 |
| $`1\ell ,4j,1\nu `$ | 7.6 | 11.5 | 12.4 | 10.0 | 8.4 |
| | 13.3 | 20.0 | 21.6 | 18 | 15 |
| $`4\ell ,2j`$ | 0.29 | 1.0 | 0.54 | 0.38 | 0.32 |
| | 0.49 | 1.6 | 0.91 | 0.65 | 0.54 |
| $`2\ell ,2j,2\nu `$ | 2.0 | 6.5 | 3.5 | 2.5 | 2.2 |
| | 3.4 | 10.7 | 6.1 | 4.4 | 3.7 |
| $`2\ell ,4j`$ | 2.5 | 7.4 | 5.4 | 3.6 | 2.9 |
| | 4.7 | 9.5 | 9.5 | 6.9 | 5.6 |
| $`4j,2\nu `$ | 8.9 | 27 | 18 | 12.6 | 10.4 |
| | 195. | 54 | 38 | 28 | 23 |
| $`6j`$ | 5.3 | 12.3 | 13.3 | 8.8 | 7.4 |
| | 9.1 | 20.7 | 23 | 16 | 12.5 |

Table 5. $`ZZW^-`$ (upper) and $`ZZW^+`$ (lower) production in different decay modes.

| $`m_{Higgs}`$ | 200 | 300 | 400 | 500 | 600 |
| --- | --- | --- | --- | --- | --- |
| $`3\ell ,3\nu `$ | 66 | 44 | 37 | 35 | 33 |
| $`\ell ^+\ell ^+,2j,2\nu `$ | 57 | 43 | 31 | 26 | 24 |
| $`\ell ^+\ell ^-,2j,2\nu `$ | 13 | 7.9 | 5.3 | 4.4 | 4.0 |
| $`\ell ^+,4j,1\nu `$ | 148 | 129 | 86 | 66 | 58 |
| $`\ell ^-,4j,1\nu `$ | 99 | 61 | 36 | 26 | 23 |
| $`6j`$ | 50 | 74 | 46 | 32 | 25 |

Table 6. $`W^-W^+W^+`$ production in different decay modes.

| $`m_{Higgs}`$ | 200 | 300 | 400 | 500 | 600 |
| --- | --- | --- | --- | --- | --- |
| $`3\ell ,3\nu `$ | 40 | 26 | 22 | 21 | 20 |
| $`\ell ^-\ell ^-,2j,2\nu `$ | 34 | 25 | 17 | 14 | 13 |
| $`\ell ^+\ell ^-,2j,2\nu `$ | 78 | 45 | 30 | 25 | 23 |
| $`\ell ^-,4j,1\nu `$ | 90 | 76 | 49 | 37 | 33 |
| $`\ell ^+,4j,1\nu `$ | 59 | 35 | 20 | 15 | 13 |
| $`6j`$ | 29 | 43 | 26 | 18 | 14 |

Table 7. $`W^+W^-W^-`$ production in different decay modes.
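As a side note to the limit quoted above, the way a null result translates into a coupling bound can be sketched in a few lines: with zero observed events the 95% CL Poisson upper limit on the expected signal is about 3 events, and since the couplings enter the amplitude linearly the anomalous rate grows quadratically with them. The normalization `A` below is a hypothetical placeholder, not the actual 4Z cross section.

```python
import math

# Zero observed events => 95% CL Poisson upper limit of ~3 expected events.
N95 = -math.log(0.05)                  # ~2.996

lumi = 100.0                           # fb^-1, as assumed throughout
A = 50.0                               # fb; hypothetical: sigma_anom = A * g**2

# Solve lumi * A * g**2 = N95 for the coupling bound:
g_lim = math.sqrt(N95 / (lumi * A))
print(f"|g| < {g_lim:.3f} at 95% CL")  # ~0.024 for these placeholder numbers
```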
# Blue Horizontal–Branch Stars: The “Jump” in Strömgren $`u`$, Low Gravities, and Radiative Levitation of Metals (based on data collected at the European Southern Observatory, La Silla, Chile, and the Nordic Optical Telescope, Spain)

## 1 Introduction

It is of significant interest to understand what factors govern the morphology of the horizontal branch in stellar populations, since it is often used as a distance and age indicator. In this contribution we will focus our attention on two problems in our understanding of the HB that have recently become apparent: the occurrence of BHB stars of lower gravities than predicted by canonical models (e.g., Moehler, Heber & de Boer 1995), and the jump in Strömgren $`u`$ first discovered in M13 by Grundahl, VandenBerg, & Andersen (1998). Heber, Moehler, & Reid (1997) suggested that helium mixing (Sweigart 1997a,b) in the HB progenitor stars might explain the low gravities problem; a similar suggestion was advanced by Grundahl et al. (1998) in regard to the $`u`$–jump phenomenon. Given that subsequent observations showed the $`u`$-jump to be present in all clusters with a sufficiently hot HB, we decided to undertake a systematic study of this phenomenon. We shall demonstrate, in the following, that the low measured gravities and the $`u`$-jump are manifestations of one and the same physical mechanism, which most likely is radiative levitation of heavy elements. The observations for this project and a more detailed account of the phenomenon are given in Grundahl et al. (1999).

## 2. The Ubiquitous Nature of the Jump

In Fig. 1 we show the CMDs for all our observed clusters with a BHB. Several important conclusions can be drawn from this Figure: 1) the jump occurs in all clusters, irrespective of any parameter characterizing them, such as metallicity, concentration, luminosity or extent of mixing on the RGB; this makes the helium mixing hypothesis unlikely as the sole explanation for this phenomenon; 2) the “size” of the jump is constant from one cluster to the next; 3) the onset of the jump occurs at $`\mathrm{T}_{\mathrm{eff}}\approx 11{,}500\pm 500`$ K for all clusters in the sample (derived from our calibrated photometry and Kurucz color-temperature relations); 4) the cool onset of the jump appears very abrupt. Even in $`\omega `$ Centauri (which shows a significant metallicity spread) the jump is clearly present and well defined (as predicted in Grundahl et al. 1999); our new photometry for this cluster will be presented elsewhere. We also note that the jump occurs irrespective of the detailed HB morphology, i.e. whether the cluster has a long blue tail or just a stubby BHB.

## 3. Low Gravities and the Jump Phenomenon

Several of our clusters have had stars with spectroscopically determined gravities published in the recent literature. Since our determination of the temperature for the onset of the jump appeared close to the region where gravities lower than predicted by theory were found, we have investigated whether the two phenomena are related. Figure 2 shows a plot of the stars in our sample with spectroscopic determinations of their gravity. All stars which are classified as belonging to the jump region (based on the $`uvby`$ photometry) are plotted as black symbols, whereas stars located outside the jump region are plotted as gray symbols. It is apparent from this Figure that stars classified as “jump stars” also have gravities lower than expected from ZAHB models.
This clearly shows that the two phenomena are related on a star-by-star basis and hence that the two most likely are manifestations of the same physical phenomenon. As the effect seems unlikely to be caused by stellar evolution (no dependence on mixing history or age is found, both in terms of jump size and location), we strongly suspect that a stellar-atmospheres effect is the cause.

## 4. Radiative Levitation of Metals as an Explanation

Glaspey et al. (1989) obtained high resolution spectroscopy of two BHB stars in NGC 6752 and found that the one which lies inside the jump region, at 16,000 K, had super–solar iron abundance, whereas the one outside had normal abundances compared to the other cluster stars. For field BHB stars, Bonifacio et al. (1995) and Hambly et al. (1997) find similar trends, with large overabundances of some of the heavy elements; the detailed abundance patterns are quite complex. These results led Grundahl et al. (1999) to propose that radiative levitation of heavy elements could be the cause of the $`u`$-jump and low gravities phenomenon. Furthermore, simple experiments with enhanced heavy-element abundances, based on Kurucz solar-scaled atmospheres (see Figure 3), succeeded in qualitatively producing a higher flux in $`u`$, as seen observationally. Subsequently this hypothesis was given strong observational support by the spectroscopic investigations of Moehler et al. (1999), Peterson (1999), and Behr et al. (1999). These studies showed that several of the heavy elements were overabundant by large factors in the atmospheres of BHB stars in NGC 6752 and M13. Behr et al. further found that the onset of the jump occurs at $`\mathrm{T}_{\mathrm{eff}}\approx 11{,}500`$ K, in excellent agreement with our estimate. The study of hot HB stars in NGC 6752 by Moehler et al. (1999) further showed that the problem of too low gravities is significantly reduced (although not eliminated) if model atmospheres with appropriately high metallicity are used in the analysis of the spectra. Given the remarkable similarity of the $`u`$ (and gravity) jump(s) from one cluster to the next, we suggest that chemical abundance patterns very similar to those found by the above authors are likely present in the other systems as well. For detailed quantitative comparisons between observations and theory in what concerns photometry and spectroscopy of blue HB stars lying inside the jump region, we warn that more realistic calculations taking into account the very complicated (but observed) abundance patterns, using, e.g., Kurucz’s ATLAS12 code, will likely be necessary (Grundahl et al. 1999).

## 5. Conclusions

We have demonstrated that the $`u`$–jump discovered by Grundahl et al. (1998) occurs in every globular cluster with HB stars hotter than 11,500 K and that this phenomenon is connected, on a star-by-star basis, to the “low gravities” found by Moehler et al. (1995). The most likely explanation is that radiative levitation of heavy elements into the stellar atmosphere changes the emergent flux pattern in such a way as to increase the $`u`$ flux and to cause the measured gravities to be too low if these element enhancements are not taken into account.
Detailed spectroscopic studies of large samples of stars and theoretical diffusion calculations are urgently needed for a further explanation of this phenomenon, since radiative levitation is expected to have implications for the interpretation of integrated ultraviolet spectra of old stellar populations, both Galactic and extragalactic (Landsman 1999).

## Acknowledgements

Support for M.C. was provided by NASA through Hubble Fellowship grant HF–01105.01–98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555.

## References

> Behr B.B., Cohen J.G., McCarthy J.K., Djorgovski S.G., 1999, ApJ 517, L135
>
> Bonifacio P., Castelli F., Hack M., 1995, A&AS 110, 441
>
> Glaspey J.W., Michaud G., Moffat A.F.J., Demers S., 1989, ApJ 339, 926
>
> Grundahl F., VandenBerg D.A., Andersen M.I., 1998, ApJ 500, L179
>
> Grundahl F., Catelan M., Landsman W.B., Stetson P.B., Andersen M.I., 1999, ApJ 524, in press (Oct. 10<sup>th</sup> issue)
>
> Hambly N.C., Rolleston W.R.J., Keenan F.P., Dufton P.L., Saffer R.A., 1997, ApJS 111, 419
>
> Heber U., Moehler S., Reid I. N., 1997. In: Battrick B. (ed.) ESA-SP 402, HIPPARCOS Venice’97, p. 461
>
> Landsman W. B., 1999. In: Hubeny I., Heap S., Cornett R. (eds.) Spectrophotometric Dating of Stars and Galaxies. San Francisco, ASP, in press (astro-ph/9906123)
>
> Moehler S., Heber U., de Boer K.S., 1995, A&A 294, 65
>
> Moehler S., Sweigart A.V., Landsman W.B., Heber U., Catelan M., 1999, A&A 346, L1
>
> Peterson R. C., 1999, these proceedings
>
> Sweigart A.V., 1997a, ApJ 474, L23
>
> Sweigart A.V., 1997b. In: Philip A.G.D., Liebert J., Saffer R.A. (eds.) The Third Conference on Faint Blue Stars. Schenectady, L. Davis Press, p. 3
# The effect of grain drift on the structure of (Post–) AGB winds

## 1. Dust driven winds

Dust driven winds are powered by a fascinating interplay of radiation, chemical reactions, stellar pulsations and atmospheric dynamics. As soon as an AGB star’s atmosphere develops sites suitable for the formation of solid “dust” (i.e. sites with a relatively high density and a low temperature), its dynamics will be dominated by the power of the radiative force. Dust grains absorb stellar radiation efficiently and experience a large radiation pressure. The momentum thus acquired is partially transferred to the ambient gas by frequent collisions. The gas is then blown outward in a dense, slow wind that can reach high mass loss rates.

## 2. Numerical hydrocode

We have written a numerical hydrodynamics code that self-consistently calculates a dust driven wind. In our code both gas and dust are described by their own set of hydro equations (continuity, momentum). The exchange of matter (nucleation and growth of grains) and of momentum (collisions) is taken into account in the source terms. The time dependent continuity and momentum equations are numerically solved using a two–step FCT/LCD algorithm (Boris 1976; Icke 1991). The abundances of H, H<sub>2</sub>, C, C<sub>2</sub>, C<sub>2</sub>H, C<sub>2</sub>H<sub>2</sub> and CO in the gas are calculated using a simple equilibrium chemistry (Dominik et al. 1990). Nucleation and growth of dust grains are described by the moment method (Gail, Keller, & Sedlmayr 1984; Gail & Sedlmayr 1988; Dorfi & Höfner 1991).

## 3. Two fluid hydrodynamics

Two fluid hydrodynamics requires a careful implementation of the momentum transfer term in terms of the drift velocity of the grains with respect to the gas. It turns out that the expression for this drag force that has been used before (e.g. Dominik 1992; Berruyer 1991; Krüger, Gauger, & Sedlmayr 1994) does not always apply, in particular when or where grains have just started to form. Moreover, this expression only takes into account what we will call the “macroscopic” component of the drift velocity and does not incorporate the contribution to the momentum transfer due to the radiative acceleration between two subsequent collisions of a grain (“microscopic drift”). We have derived a new implementation for the momentum transfer from dust to gas; see Simis, Icke, & Dominik (2000):
$$f_{\mathrm{drag}}=n_\mathrm{d}g_{\mathrm{rad}}\frac{m_\mathrm{g}}{m_\mathrm{g}+m_\mathrm{d}}\left(1+\frac{v}{\sqrt{v^2+2\lambda g_{\mathrm{rad}}}-v}\right)$$ (1)
Here, $`n_\mathrm{d}`$ is the number density of grains, $`m_{\mathrm{d},\mathrm{g}}`$ is the mass of a dust/gas particle, $`\rho _{\mathrm{d},\mathrm{g}}`$ are the corresponding mass densities, $`g_{\mathrm{rad}}`$ is the radiative acceleration of a grain, $`v=v_\mathrm{d}-v_\mathrm{g}`$ is the drift velocity, $`\lambda `$ is the mean free path of a grain and $`\mathrm{\Omega }=(\rho _\mathrm{g}m_\mathrm{d}-\rho _\mathrm{d}m_\mathrm{g})/(\rho _\mathrm{g}(m_\mathrm{g}+m_\mathrm{d}))`$. This expression takes into account the contribution of the radiative acceleration between individual gas–grain encounters as well. Especially when the mean free path of the grains is large, this radiation pressure contribution is important and may be the dominant factor in the momentum transfer.
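To make the structure of Eq. (1) concrete, the following minimal Python sketch evaluates it; all numerical values are hypothetical placeholders, not parameters of our models. As a consistency check it also evaluates Eq. (1) at the equilibrium drift velocity of Eq. (2) below, where it reduces to the simplified form of Eq. (3).

```python
import math

def omega(rho_g, rho_d, m_g, m_d):
    # Mass-loading parameter Omega defined after Eq. (1)
    return (rho_g * m_d - rho_d * m_g) / (rho_g * (m_g + m_d))

def f_drag(n_d, g_rad, m_g, m_d, v, lam):
    # Eq. (1): momentum transfer rate; 2*lam*g_rad is the "microscopic"
    # velocity gained from radiation pressure between two collisions
    s = math.sqrt(v * v + 2.0 * lam * g_rad)
    return n_d * g_rad * m_g / (m_g + m_d) * (1.0 + v / (s - v))

# Hypothetical, purely illustrative numbers (cgs-like units):
n_d, g_rad, lam = 1.0e-6, 1.0e-2, 1.0e6
m_g, m_d = 3.3e-24, 1.0e-20
rho_g, rho_d = 1.0e-12, 1.0e-15

Om = omega(rho_g, rho_d, m_g, m_d)
v_eq = math.sqrt(Om**2 / (1.0 - Om**2) * 2.0 * lam * g_rad)  # Eq. (2) below
print(f_drag(n_d, g_rad, m_g, m_d, v_eq, lam))  # Eq. (1) at v = v_eq ...
print(n_d * g_rad * rho_g / (rho_g + rho_d))    # ... reproduces Eq. (3) below
```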
The equilibrium drift velocity corresponding to Eq. (1) is (Simis et al. 2000)
$$v_{\mathrm{eq}}=\sqrt{\frac{\mathrm{\Omega }^2}{1-\mathrm{\Omega }^2}2\lambda g_{\mathrm{rad}}}$$ (2)
This formalism will in the future allow us to treat the coupling force also at non–equilibrium drift speeds. For the current calculations we have still assumed that the grains reach the equilibrium speed quickly enough for the rate of momentum transfer to be given by $`v=v_{\mathrm{eq}}`$, i.e.
$$f_{\mathrm{drag}}=n_\mathrm{d}g_{\mathrm{rad}}\frac{\rho _\mathrm{g}}{\rho _\mathrm{g}+\rho _\mathrm{d}}$$ (3)

## 4. The effect of drift on the dynamics of the wind

The rates of grain nucleation and growth depend on the velocity with which grains and gas particles collide. In the absence of drift, gas–grain collisions are thermally driven. Now that we have an expression for the drift velocity that is consistent with the micro–dynamics, we can take into account the effect of drift on grain nucleation and growth. Since the drift velocity, through the radiation pressure, depends on the number density and size spectrum of the grains, we are now dealing with a very strong coupling between the dynamics and the chemistry rates in the outflow. When drift is taken into account, and in the absence of sputtering, grains become bigger and more abundant than in the case when only thermal collisions are considered. In the case of drift-driven grain chemistry, a wind establishes itself whose terminal velocity and mass loss rate fluctuate around constant values. In the case of purely thermally driven grain chemistry, the gas outflow velocity and mass loss rate keep decreasing. At the same time dust continues to flow out at a high rate. This is illustrated in Figure 1, which shows the mass loss history of an object with $`M_{*}=1M_{\odot }`$, $`T_{*}=2200`$ K, $`L_{*}=1.0\times 10^4L_{\odot }`$ and $`ϵ_C/ϵ_O=2`$. Because of the strong coupling of chemistry and dynamics, a small change in the parameters or flow variables may result in large changes in the flow. To illustrate this, we compare the outflows of a non–rotating object and an object with a rotation period of 50 years. The non–rotating object may be interpreted to represent the outflow in the polar direction, and the rotating object to represent the outflow in the equatorial plane, of an AGB star with a 50 year rotation period. The rotation is simulated by a suitable adjustment of the effective gravity. Figure 2 shows the dynamical evolution of the density and velocity structure of the outflows. It turns out that the mass loss rate in the equatorial plane is twice the mass loss rate in the polar direction. The velocity in the polar direction is higher and the density is lower than in the equatorial plane. One may conclude that this zeroth order model of a rotating AGB object indicates that the initial polar-to-equatorial density gradient gives rise to a significant difference in mass loss rate, which may lead to a disk-like structure. More generally this illustrates the tight coupling between dynamics and chemistry.

## References

Berruyer, N. 1991, A&A, 249, 181

Boris, J.P. 1976, NRL Mem. Rep., 3237

Dominik, C., Gail, H.-P., Sedlmayr, E., & Winters, J.M. 1990, A&A, 240, 365

Dominik, C. 1992, Thesis, Technischen Universität Berlin

Dorfi, E.A., & Höfner, S. 1991, A&A, 248, 105

Gail, H.-P., Keller R., & Sedlmayr E. 1984, A&A, 133, 320

Gail, H.-P., & Sedlmayr E. 1988, A&A, 206, 153

Icke, V. 1991, A&A, 251, 369

Krüger, D., Gauger, A., & Sedlmayr, E. 1994, A&A, 290, 573

Simis, Y.J.W., Icke, V., & Dominik, C. 2000, A&A, in preparation
# BNL-HET-99/23 RBRC preprint The parity partner of the nucleon in quenched QCD with domain wall fermions (thanks to RIKEN, Brookhaven National Laboratory and to the U.S. Department of Energy for providing the facilities essential for the completion of this work)

## Abstract

We present preliminary results for the mass spectrum of the nucleon and its low-lying excited states from quenched lattice QCD using the domain wall fermion method, which preserves the chiral symmetry at finite lattice cutoff. A definite mass splitting is observed between the nucleon and its parity partner. This splitting grows with decreasing valence quark mass. We also present preliminary data regarding the first positive-parity excited state.

This work focuses on a notable feature in the mass spectrum of the nucleon and its excited states: the mass splitting between the nucleon $`N`$(939) and its parity partner $`N^{*}`$(1535) is remarkably large. As is well known, this splitting would be absent if chiral symmetry were preserved. Yet models with explicit chiral symmetry breaking, such as non-relativistic quark models or bag models, fail to reproduce this splitting. In a typical non-relativistic quark model with a harmonic-oscillator quark wave function, the lowest negative-parity state is obtained by adding one oscillator quantum to the ground state. The known proton charge radius and magnetic moment lead to an oscillator quantum of a few hundred MeV, far underestimating the mass difference. It also gives the wrong ordering of positive- and negative-parity excited states: while the positive-parity $`N^{\prime}`$(1440) lies below $`N^{*}`$(1535) in nature, the model needs two oscillator quanta for $`N^{\prime}`$(1440). A similar problem arises in bag models, where the excitation energy is linked to the inverse of the bag radius, which in turn is determined by the proton charge radius. Thus it is an interesting question whether lattice QCD, which appears so successful in describing spontaneous breaking of chiral symmetry, can reproduce this mass splitting. Most conventional lattice fermion schemes are inadequate for this interesting challenge: they break chiral symmetry explicitly at finite lattice cutoff and thus are prone to failure in explaining the splitting. Fortunately, however, the domain wall fermion (DWF) method seems capable of circumventing this pathology. Here we report preliminary results of the first quenched calculation of this issue using DWF. In this paper, 24 well separated quenched gauge configurations on a $`16^3\times 32`$ lattice at $`6/g^2`$=6.0 are used. We use a fifth (DWF) dimension of $`N_s`$=16 sites and a domain-wall height of $`M`$=1.8. We focus on the spin-half isodoublet baryons. There are then only two possible choices for positive-parity baryons if we restrict them to contain no derivatives: $`B_1^+`$ = $`\epsilon _{abc}(u_a^TC\gamma _5d_b)u_c`$ and $`B_2^+`$ = $`\epsilon _{abc}(u_a^TCd_b)\gamma _5u_c`$, where $`abc`$, $`ud`$, $`C`$ and $`\gamma _5`$ have their usual meanings as color, flavor, charge conjugation and Dirac matrices. In previous lattice calculations of ground-state hadrons, the operator $`B_1^+`$ was used for the nucleon ground state. Since the operator $`B_2^+`$ vanishes in the non-relativistic limit, it was considered ineffective. Indeed, nobody had succeeded in extracting the nucleon mass using it. We will come back to this point later.
The negative-parity baryon interpolating operators are defined with an extra $`\gamma _5`$: $`B_1^{-}`$ = $`\epsilon _{abc}(u_a^TC\gamma _5d_b)\gamma _5u_c`$ and $`B_2^{-}`$ = $`\epsilon _{abc}(u_a^TCd_b)u_c`$. As a result of the definition $`B_{1,2}^{-}=\gamma _5B_{1,2}^+`$, each two-point baryon correlator constructed from any one of them actually contains both positive- and negative-parity contributions. This means that there is contamination from the opposite-parity state propagating backwards in time. Thus, to extract parity-eigenstate signals we use a linear combination of quark propagators, one obtained with periodic and another with anti-periodic boundary conditions in the time direction. We use seven values for the valence quark mass $`m`$ in the range $`0.02\le m\le 0.125`$, corresponding to $`\pi `$–$`\rho `$ meson mass ratios $`m_\pi /m_\rho \approx 0.59`$–$`0.90`$. Quark propagators are calculated with a wall source and point sink, and two different source positions are used for each gauge configuration. Definite plateaus are seen in the effective mass plots for the $`B_1^+`$, $`B_1^{-}`$ and $`B_2^{-}`$ operators. In Figure 1, we present our estimates of the nucleon ($`N`$) and its parity partner ($`N^{*}`$) mass values, obtained by taking a weighted average of the effective mass in appropriate time ranges. The nucleon mass is extracted from the $`B_1^+`$ operator. The $`N^{*}`$ mass estimates from the $`B_1^{-}`$ and $`B_2^{-}`$ operators agree within errors in the whole quark mass range, as expected from their common quantum numbers. An important feature is that the $`N`$–$`N^{*}`$ mass splitting is observed in the whole range, even for light valence quark mass values. Another is that the splitting grows as the valence quark mass decreases, suggesting that the large splitting observed in nature indeed comes from the spontaneous breaking of chiral symmetry. Linear extrapolation in the valence quark mass gives $`m_N`$=0.56(2) and $`m_{N^{*}}`$=0.77(2) in lattice units for the values in the chiral limit, which are consistent with the experimental values ($`a^{-1}\approx 1.9`$ GeV from the $`\rho `$-meson mass). In Figure 2, we compare two mass ratios, one from the baryon parity partners, $`m_{N^{*}}/m_N`$, and the other from the pseudo-scalar and vector mesons, $`m_\pi /m_\rho `$. Experimental points are marked with stars, corresponding to the non-strange (left) and strange (right) sectors. In the strange sector we use $`\mathrm{\Sigma }`$ and $`\mathrm{\Sigma }(1750)`$ as baryon parity partners and $`K`$ and $`K^{*}`$ for the mesons. We find that the baryon mass ratio grows with decreasing meson mass ratio, toward reproducing the experimental values. In contrast to our naive expectation that the operators $`B_1^+`$ and $`B_2^+`$ should give the same mass estimate, we find different plateaus in the effective mass plots from these two operators. Figure 3 shows that the two masses extracted from $`B_1^+`$ and $`B_2^+`$ are quite different. For heavy quarks ($`m\ge 0.04`$), we identify $`B_2^+`$ with the first positive-parity excited state of the nucleon ($`N^{\prime}`$) for the following reasons: the operator $`B_2^+`$ is expected to couple weakly to the ground state of the nucleon, as we mentioned earlier. We suspect that the reason why we see a clear $`B_2^+`$ signal for the first time in this study, while previous studies failed to do so, is related to mixing induced by explicit chiral symmetry breaking at finite lattice cutoff, which is absent in the former but severe in the latter.
Although $`B_1^+`$ and $`B_2^+`$ do not mix in the continuum because of their different chiral structures, it is known that unwanted mixing between them comes about through the breaking of chiral symmetry by conventional lattice fermions. On the other hand, DWF suppresses this breaking exponentially and thus significantly reduces the unwanted mixing. As a result, we are able to confirm numerically the expected feature $`\langle 0|B_2^+|N\rangle \approx 0`$ at a valence quark mass of $`m`$=0.04. So for this valence quark mass we believe the $`B_2^+`$ operator gives an $`N^{\prime}`$ mass signal. For heavier quark mass values, the mass splitting between $`N^{\prime}`$ and $`N^{*}`$ approaches the splitting between $`N^{*}`$ and $`N`$, just as in the naive quark or bag models. Unfortunately, however, we have yet to perform the $`\langle 0|B_2^+|N\rangle `$ calculation for lighter quark mass values and hence have not ruled out the possibility that $`B_2^+`$ couples to the ground-state nucleon. In conclusion, we have studied the spectrum of the nucleon and its excited states using DWF. We found a large mass splitting between $`N`$ and $`N^{*}`$ for light quarks by using two distinct interpolating operators. Our $`N^{*}`$ mass, $`m_{N^{*}}`$=0.77(2) in the chiral limit, is closer to the experimental value than that of any other study using other fermion schemes.
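To make the extraction procedure behind the quoted plateau masses explicit, here is a minimal sketch of the standard effective-mass analysis; the correlator values, errors and time window are hypothetical placeholders, not our data.

```python
import numpy as np

# Hypothetical two-point correlator C(t) ~ A*exp(-m*t) with noise,
# standing in for a baryon correlator projected onto definite parity.
t = np.arange(4, 16)
m_true, A = 0.56, 1.0e-3
rng = np.random.default_rng(0)
C = A * np.exp(-m_true * t) * (1.0 + 0.01 * rng.standard_normal(t.size))
dC = 0.01 * C                        # assumed statistical errors

# Effective mass m_eff(t) = ln[C(t)/C(t+1)]; constant on a plateau.
m_eff = np.log(C[:-1] / C[1:])
dm = np.sqrt((dC[:-1] / C[:-1])**2 + (dC[1:] / C[1:])**2)

# Error-weighted plateau average, as used for the quoted mass estimates.
w = 1.0 / dm**2
mass = np.sum(w * m_eff) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))
print(f"m = {mass:.3f} +/- {err:.3f}")   # ~0.56 in lattice units
```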
# Warsaw Variability Surveys

## 1. Warsaw-LCO survey of globular clusters

The main goal of the survey is the identification of detached eclipsing binaries in globular clusters. Such binaries can be used as excellent distance and age indicators (Paczyński 1997). Thirteen globular clusters have been surveyed since 1993. Starting in 1997, most of the data have been collected with the Las Campanas 1.0m Swope telescope equipped with a SITe $`2048\times 3150`$ CCD, giving a field of view of $`14.4\times 22.8`$ arcmin<sup>2</sup>. Some additional follow-up observations are conducted on the 2.5m du Pont telescope. For a given cluster the total length of monitoring ranges from 30 to 150 hours, with a median value around 70 hours. Precise and well sampled light curves of RR Lyr and SX Phe stars are collected as a side result of the survey. These data become available through the Internet (http://sirius.astrouw.edu.pl/~jka/) immediately after the results for a given object are published. A list of the monitored clusters and the numbers of variables identified in them is given in Table 1. While monitoring M55 we identified five variables that are likely members of the Sagittarius dwarf galaxy: two RRab stars and three SX Phe variables.

### 1.1. Non radial pulsators in M55

Our sample of pulsating variables from the globular cluster M55 includes five newly identified RRc stars. The light curves of three of these stars exhibit changes in amplitude of over $`0.1`$ mag on time scales shorter than a week. Detailed analysis indicates that the observed changes are most probably due to non radial pulsations (Olech et al. 1999). At least 12 out of the 24 SX Phe variables identified in M55 show the presence of two or more periodicities in their light curves. Table 2 lists the principal periods, the ratio $`P_1/P_2`$ and the amplitudes of the first terms of the Fourier series calculated for both periods, after appropriate pre-whitening was applied. The derived values of $`P_1/P_2`$ indicate that we are dealing with non radial pulsations. In Fig. 1 we present the light curves obtained on two consecutive nights for one of the multi-periodic SX Phe stars from M55. We note that the light curve of an SX Phe variable identified in M3 by Kaluzny et al. (1998) also exhibits evidence for non radial pulsations of that star.

### 1.2. M3/M55 dichotomy

Our sample of monitored clusters includes M3 and M55. These objects have similar metallicities and exhibit a similar morphology of their color-magnitude diagrams (in particular, the horizontal branches are similar to each other). Both clusters are rich in blue stragglers, yet show very different relative frequencies of SX Phe stars. This can be illustrated by comparing the numbers of blue stragglers for which we obtained good quality light curves with the numbers of identified SX Phe stars: M55: $`N_{BS}=40`$, $`N_{SX}=24`$, $`[\mathrm{Fe}/\mathrm{H}]=-1.81`$; M3: $`N_{BS}=25`$, $`N_{SX}=1`$, $`[\mathrm{Fe}/\mathrm{H}]=-1.57`$. Our data for M3 show several blue stragglers located inside the instability strip which do not show any variability exceeding about 0.02 mag in the V band.

### 1.3. How complete is a sample of SX Phe stars identified in GCs?

Figure 2 shows the full amplitude versus period diagram for 96 SX Phe stars identified by our group in globular clusters. 53 of these stars exhibit light curves with a full range not exceeding $`0.10`$ mag in the V band. Many variables show amplitudes approaching the detection limit of our survey, which for most clusters reaches $`0.02`$–$`0.03`$ mag.
This indicates that a significant fraction of the SX Phe stars residing in the surveyed clusters was most likely missed.

## 2. OGLE-1 survey

The OGLE project (Udalski et al. 1993; http://sirius.astrouw.edu.pl/~ftp/ogle/) is aimed primarily at the detection and photometry of microlensing events in the Galactic bulge and LMC/SMC. The first phase of the survey, referred to here as OGLE-1, was conducted at Las Campanas Observatory on the 1.0m Swope telescope during 4 seasons covering the period 1992-1995. About $`2\times 10^6`$ stars in 20 $`15\times 15`$ arcmin<sup>2</sup> fields toward the Galactic bulge were monitored. 20 microlensing events were detected (Wozniak & Szymanski 1998). Five parts of the Variable Star Catalog were published, containing 2861 stars from the Galactic bulge (Udalski et al. 1997a). That catalog includes 269 pulsating objects, mostly RR Lyr stars. Several side projects were undertaken by the OGLE-1 team. We list here those of them which yielded potentially interesting results for pulsating variables. Mateo et al. (1995) searched with success for RR Lyr stars belonging to the Sagittarius dwarf galaxy. They reported VI photometry for 7 variables from that galaxy. V band data for 226 RR Lyr stars from the Sculptor dSph galaxy were obtained by Kaluzny et al. (1995a). That galaxy may prove to be an ideal target for the calibration of the luminosity-metallicity relation for RR Lyr stars. The population of stars hosted by Sculptor shows a significant range of metallicities, and the interstellar reddening toward the galaxy is very low. Kaluzny et al. (1996, 1997) identified 34 SX Phe stars in the globular cluster $`\omega `$ Cen. V band data for 141 RR Lyr stars (33 newly identified) and Pop II Cepheids in the same cluster were published by Kaluzny et al. (1997a). In 1996 the OGLE project entered its second phase, known as OGLE-2 (Udalski, Szymanski & Kubiak 1997). OGLE-2 results are described in this volume by Paczyński (1999).

## 3. DIRECT

The DIRECT project (http://cfa-harvard.edu/~kstanek/DIRECT/) aims at the determination of the distances to the M31 and M33 galaxies by using detached eclipsing binaries and Cepheids. The project is currently conducted by a group including astronomers from the CfA and Warsaw. About 200 nights on the 1.2m FLWO and 1.3m MDM telescopes were used between September 1996 and November 1999 to search both galaxies for variables suitable for more detailed follow-up. So far five catalogs of variables in M31 have been released (Kaluzny et al. 1998a, 1999; Stanek et al. 1998, 1999; Mochejska et al. 1999). 410 variables (most of them new) were identified, including 206 Cepheids and 48 eclipsing binaries. Photometry of many RV Tau and LPV stars was also reported. The remaining catalogs will be released over the coming year. Some results of DIRECT concerning Cepheids are presented in this volume by Sasselov (1999).

## 4. ASAS

ASAS (the All Sky Automated Survey; Pojmanski 1997; http://www.astrouw.edu.pl/~gp/asas) is a project whose ultimate goal is low-cost monitoring of the whole sky on a nightly basis, down to about 15th magnitude. A prototype robotic telescope consisting of a 135mm telephoto lens, an off-the-shelf CCD camera (512 x 768 pixels) and a small automated mount was set up at the Las Campanas Observatory in April 1997. It has been monitoring 24 selected fields covering about 150 deg<sup>2</sup> of the sky. Useful photometry in the I band was obtained for over 45000 stars brighter than 13th magnitude.
The first two months of observations revealed 126 short period variables (Pojmański 1998), of which 70% were previously unknown. The catalogue includes several newly identified RR Lyr stars and Cepheids. In 1998 the survey was extended to cover an additional 150 deg<sup>2</sup>, and the updated version of the catalogue will include data for a large number of pulsating stars with periods up to 300 days (Pojmanski 1999, private communication). At the end of 1999 two new robotic telescopes with 2K<sup>2</sup> CCDs will be installed at Las Campanas.

#### Acknowledgments.

JK was supported by the Polish Committee of Scientific Research through grant 2P03D000.00 and by NSF grants AST-9528096 and AST-9819787 to Bohdan Paczyński.

## References

Kaluzny, J., 1997, A&AS, 122, 1

Kaluzny, J., Krzeminski, W., & Mazur, B. 1993, MNRAS, 264, 785

Kaluzny, J., Krzeminski, W., & Mazur, B. 1995, AJ, 110, 2206

Kaluzny, J., Kubiak, M., Szymanski, M., Udalski, A., Krzeminski, W., & Mateo, M. 1995a, A&AS, 112, 407

Kaluzny, J., Kubiak, M., Szymanski, M., Udalski, A., Krzeminski, W., Mateo, M. 1996, A&AS, 120, 139

Kaluzny, J., Kubiak, M., Szymanski, M., Udalski, A., Krzeminski, W., Mateo, M. 1997, A&AS, 122, 471

Kaluzny, J., Kubiak, M., Szymanski, M., Udalski, A., Krzeminski, W., Mateo, M., & Stanek, K.Z. 1997a, A&AS, 125, 343

Kaluzny, J., Krzemiński, W., & Nalezyty, M. 1997b, A&AS, 125, 337

Kaluzny, J., Hilditch, R., Clement, C., & Ruciński, S.M. 1998, MNRAS, 296, 345

Kaluzny, J., Stanek, K.Z., Krockenberger, M., Sasselov, D.D., Tonry, J., & Mateo, M.M. 1998a, AJ, 115, 1016

Kaluzny, J., Mochejska, B., Stanek, K.Z., Krockenberger, M., Sasselov, D.D., Tonry, J., & Mateo, M.M. 1999, AJ, 118, 346

Kaluzny, J., Olech, A., Thompson, I., Pych, W., Krzemiński, W., & Schwarzenberg-Czerny, A. 1999a, A&A, in press

Kaluzny, J., Thompson, I., Krzemiński, W., & Pych, W. 1999b, A&A, in press

Mateo, M.M., Kubiak, M., Szymanski, M., Kaluzny, J., Krzeminski, W., & Udalski, A. 1995, AJ, 110, 1141

Mazur, B., Krzemiński, W., & Kaluzny, J. 1999, MNRAS, 306, 727

Mochejska, B., Kaluzny, J., Stanek, K.Z., Krockenberger, M., & Sasselov, D.D. 1999, AJ, in print

Olech, A., Kaluzny, J., Thompson, I., Pych, W., & Krzemiński, W. 1999, AJ, 118, 442

Olech, A., Wozniak, P., Alard, C., Kaluzny, J., & Thompson, I. 1999a, MNRAS, in press

Olech, A., Kaluzny, J., Thompson, I., & Pych, W. 1999a, AJ, 118, 442

Paczyński, B. 1997, in The Extragalactic Distance Scale, ed. M. Livio, M. Donahue, & N. Panagia (Cambridge Univ. Press) 273

Paczyński, B. 1999, in IAU Coll. 176, The Impact of Large Scale Surveys on Pulsating Stars Research, ASP Conf. Ser. Vol. XXX, ed. L. Szabados & D. Kurtz (San Francisco: ASP) xxx.

Pojmański, G. 1997, Acta Astronomica, 47, 467

Pojmański, G. 1998, Acta Astronomica, 48, 35

Sasselov, D.D. 1999, in IAU Coll. 176, The Impact of Large Scale Surveys on Pulsating Stars Research, ASP Conf. Ser. Vol. XXX, ed. L. Szabados & D. Kurtz (San Francisco: ASP) xxx.

Stanek, K.Z., Kaluzny, J., Krockenberger, M., Sasselov, D.D., Tonry, J., & Mateo, M.M. 1998, AJ, 115, 1894

Stanek, K.Z., Kaluzny, J., Krockenberger, M., Sasselov, D.D., Tonry, J., & Mateo, M.M. 1999, AJ, 117, 2810

Thompson, I., Kaluzny, J., Pych, W., & Krzeminski, W. 1999, AJ, 118, 462

Udalski, A. et al. 1993, Acta Astronomica, 43, 289

Udalski, A., Szymanski, M., & Kubiak, M. 1997, Acta Astronomica, 47, 319

Udalski, A. et al. 1997, Acta Astronomica, 47, 1

Wozniak, P., & Szymanski, M. 1998, Acta Astronomica, 48, 269
# Universality classes for the “ricepile” model with absorbing properties

## 1 Introduction

Self organized criticality (SOC) has been a widely studied phenomenon over the last ten years. The theory of SOC, proposed by Bak, Tang and Wiesenfeld, describes the dynamical behaviour of many-particle systems with local interactions. Paradigmatically, the description is based on the dynamics of a pile of sand. If the sandpile is randomly driven by the slow addition of sand grains, the slope of the pile grows, and after some time local stability conditions are violated somewhere on the pile surface. An avalanche starts to slide down the slope. This type of dynamics is easily modeled by a cellular automaton. Such a model “sandpile” is defined on a large n-dimensional lattice. The avalanche dynamics is, under the action of slow drive, governed by local critical conditions (such as the local critical slope, for example) and local toppling rules. This dynamics leads to a steady state called the self organized critical state, characterized by the critical scaling of the avalanche size distribution:
$$p(s,L)=s^{-\tau }f(\frac{s}{L^D})$$ (1)
In (1), $`L`$ is the system size and $`s`$ denotes the size of the avalanche. The critical exponents $`\tau `$ and $`D`$ depend on the particular model. Different models can be divided into various universality classes, each defined by a specific set of critical exponents. A natural step from the sandpile model systems leads to the investigation of real piles of granular material from the point of view of SOC dynamics. Several efforts have been made in this direction, but no clear evidence of self organized critical dynamics (1) had been found. Finally, in 1996 an experiment was carried out by a group of experimentalists and theorists in Oslo. In the Oslo experiment the dynamical behaviour of a driven quasi-one-dimensional pile of rice was investigated. The avalanche sizes in the steady state were measured in terms of dissipated potential energy. Two types of grains were used (elongated and round ones), showing completely different dynamics. In the case of the ricepile consisting of elongated grains, an SOC state was established, in which the avalanche size distribution has a power law character with critical exponent $`\tau \approx 2.02`$. Soon after the experimental results were published, model “ricepile” cellular automata were suggested and numerically studied. The model ricepile is, in principle, a cellular automaton defined on a one-dimensional lattice, with randomness incorporated into the toppling rules and with deterministic drive. Changes in the toppling rules are often manifested by a different dynamical behaviour of the model and a different universality class to which the model belongs. In this paper we study a model which is a modification of the two threshold ricepile model. In it, gravity effects, grain friction and the local conditions on the pile are described by the parameter $`p`$. Two thresholds, namely the critical threshold and the gravity threshold, are defined. The thresholds govern the movement of a grain on the pile surface. We removed the second, gravity threshold of the ricepile model. As a result the model, depending on the value of the parameter $`p`$, has absorbing properties and interesting dynamical regimes. A one threshold model in two dimensions has been studied by Tadić and Dhar. Here, different versions of the absorbing model are defined using different toppling rules.
The dynamical behaviour of the local limited (LLIM), local unlimited (LUNLIM), nonlocal limited (NLIM) and nonlocal unlimited (NUNLIM) absorbing models is studied, and two distinct universality classes are recognized.

## 2 Ricepile models

The experimental results of the Oslo group motivated further theoretical studies of the avalanches and dynamics of granular material. The main question is which physical properties of the granular material of the pile are important for an SOC state to be established. What are, for example, the roles of friction, grain shape, gravity and the inertia of rolling grains? The ricepile experiment reveals that no SOC is possible if the ricepile consists of round rice grains. On the contrary, the dynamics of a pile consisting of elongated grains is self organized critical. Certainly, the shape of the grains, and the additional effects related to the shape, such as the better packing of grains due to the elongated shape, the suppressed rolling of grains and thus suppressed inertia effects, are of great importance. Ricepile models are cellular automata in which friction and gravity effects are taken into account in a simple way, through the parameter $`p`$. The value of the parameter $`p`$ decides whether a grain will stop on a site or roll further down the slope. The two threshold ricepile model is defined on a one-dimensional lattice of size $`L`$, with a wall at the zero position and an open boundary at the other end. At the open boundary, particles are free to flow out of the system. As in the experimental set-up, the system is driven by adding particles at position one, at the closed end. Every time unit one particle is added. Two thresholds are defined in the model: the critical threshold $`z_c`$, which is the local condition for the onset of an avalanche, and the gravity threshold $`z_g`$ ($`z_c<z_g`$). If the local slope
$$z_i=h_i-h_{i+1},$$ (2)
(where $`h_i`$, $`i=1,2,3,\mathrm{},L`$ is the height profile of the pile) is less than $`z_c`$, it is too low and friction stops the grain movement. The grain stays at position $`i`$. If $`z_i>z_g`$, the local slope is too high and a grain moving downslope is not allowed to stop at the $`i`$-th position. But in the case that $`z_c<z_i<z_g`$, the grain moves from $`i`$ to $`i+1`$ with probability $`p`$. Through the parameter $`p`$, effective friction is introduced into the model. All supercritical slopes can topple with probability $`p`$, but for local slopes which are too big ($`z_i>z_g`$), gravity becomes decisive and the site topples with probability $`p=1.0`$. The dynamics of the ricepile model is as follows: 1. Each avalanche starts at $`i=1`$. If $`z_1>z_c`$, site one is activated and topples a particle to the next nearest position (two) with probability $`p`$. If even $`z_1>z_g`$, then $`p=1.0`$. 2. Every particle sliding from position $`i`$ to $`i+1`$ activates three columns, namely $`i-1`$, $`i`$ and $`i+1`$. Position $`i-1`$ is activated because it can possibly become supercritical when a grain is removed from the $`i`$-th column. Columns $`i`$ and $`i+1`$ are activated because they are destabilized by the sliding or stopping particle, respectively. In the next time step, all supercritical active sites topple a particle to their next nearest downslope position with probability $`p`$ ($`p=1.0`$ if the local slope exceeds $`z_g`$). 3. Step two is repeated until there are no active sites in the system, that is, until the avalanche is over (a minimal simulation sketch of these rules is given after this list).
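The following Python sketch is a concrete restatement of rules 1–3 above. It is only an illustration: the parameter values are arbitrary, the sequential sweep stands in for the parallel update, and the avalanche size is counted in topplings rather than in the dissipated potential energy used in the experiment. Setting `z_g=None` removes the gravity threshold, which gives exactly the one-threshold absorbing model introduced in the next section.

```python
import random

def relax(h, z_c, p, rng, z_g=None):
    """Drop one grain at the closed end (site 0) and relax the pile.

    h[i] are column heights; the local slope is z_i = h[i] - h[i+1],
    with h[L] = 0 beyond the open boundary.  An active supercritical
    site (z_i > z_c) topples one grain to the right with probability p,
    or with certainty if z_i > z_g (two-threshold rule).  z_g=None
    removes the gravity threshold: an active site that fails the coin
    toss is simply deactivated, which is how subcritical "absorbing"
    configurations survive inside the pile.
    Returns (number of topplings, grains leaving at the open end).
    """
    L = len(h)
    h[0] += 1
    active, size, out = {0}, 0, 0
    while active:
        nxt = set()
        for i in sorted(active):          # sequential sweep stands in
            z = h[i] - (h[i + 1] if i + 1 < L else 0)  # for the parallel update
            if z > z_c and (rng.random() < p or (z_g is not None and z > z_g)):
                h[i] -= 1
                size += 1
                if i + 1 < L:
                    h[i + 1] += 1
                else:
                    out += 1              # grain flows out of the system
                nxt.update(j for j in (i - 1, i, i + 1) if 0 <= j < L)
        active = nxt
    return size, out

# Illustrative run of the absorbing (one-threshold) local limited model:
rng = random.Random(1)
L, z_c, p, n_grains = 64, 1, 0.8, 20000
h = [0] * L
sizes, n_out = [], 0
for _ in range(n_grains):
    s, o = relax(h, z_c, p, rng)
    sizes.append(s)
    n_out += o
print("transport ratio J(p) =", n_out / n_grains)  # cf. Eq. (3) below
```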
By slightly changing the local toppling rules, different versions of the ricepile model are defined: a) the number of particles toppled from position $`i`$ is constant and independent of the supercritical local slope $`z_i`$; the model is called limited. b) The number of toppled particles is a function of the supercritical local slope $`z_i`$; the model is called unlimited. c) If the particle (or more particles) toppled from site $`i`$ moves only to the next nearest position $`i+1`$, the model is defined as local. d) The model is called nonlocal if the $`n`$ toppled particles moving from the $`i`$-th site are added subsequently to the $`n`$ nearest downslope positions (one particle per site) $`i+1`$, $`i+2`$,…, $`i+n`$. Thus four different ricepile models are recognized: 1. local limited model (LLIM) 2. local unlimited model (LUNLIM) 3. nonlocal limited model (NLIM) 4. nonlocal unlimited model (NUNLIM). Universality classes for the ricepile model were studied by Amaral and Lauritsen. Their results show that the local models (LLIM, LUNLIM) belong to the wide universality class called the local linear interface universality class (LLI class). The authors also found that the nonlocal toppling rules lead to two new universality classes, with a different set of critical exponents. But none of these universality classes is that of the real ricepile.

## 3 Absorbing model

Our absorbing model is a simplified, one threshold version of the ricepile model. The gravity threshold is removed and all supercritical active sites are allowed to topple with probability $`p<1`$. There remains, therefore, a small but nonzero probability that extremely large local slopes occur. Physically this seems to be quite plausible. It is not probable that in real piles of granular material there exists a strict gravity threshold. There is rather a continuous transition to local slopes which are already so large that they, when activated, almost always topple. We investigated numerically all four versions of the absorbing model: LLIM, LUNLIM, NLIM, NUNLIM. Several quantities were measured for all of the models: 1. The material transport, as the ratio of the number of outgoing to ingoing particles
$$J(p)=\frac{n_{out}(p)}{n_{in}(p)},$$ (3)
and its dependence on the parameter $`p`$. 2. The average material transport $`J(p)`$ as a function of $`p`$. 3. The avalanche size distribution for different parameter values $`p`$. Avalanche sizes are measured in terms of dissipated potential energy, in accordance with the experiment. 4. Changes in the pile profile due to changes in the value of the parameter $`p`$. If the probability parameter $`p`$ changes slowly in the interval $`(0,1)`$, the model typically passes through different dynamical regimes: i) isolating, in which all particles are absorbed in the system and none of them reaches the open boundary; ii) partially conductive, in which the pile profile grows up as a bulk, because a certain fraction of the particles, depending on $`p`$, is absorbed in the system (absorbing properties); iii) totally conductive, in which the numbers of ingoing and outgoing particles are balanced.

## 4 Universality classes for the absorbing model

### 4.1 LLIM

The dynamical properties of the local limited absorbing model are described in detail in an earlier paper. Here I only briefly list the main results. The local limited toppling rules are defined as follows: an active supercritical site topples one particle to the next nearest position with probability $`p`$.
Looking at the average material transport $`J(p)`$, three dynamical regimes of the LLIM model are recognized ($`Fig.1a`$): a) For $`0<p<p^{\prime}`$, $`p^{\prime}\approx 0.53865`$, the system is completely isolating. The average transport $`J(p)`$ is zero. For $`p`$ close to zero, almost all ingoing particles are absorbed. The avalanches die out soon; their size is exponentially bounded. Close to the first phase transition point $`p^{\prime}`$, the steepness of the pile is still high enough that the local slopes are almost everywhere higher than the critical threshold $`z_c`$. This is the reason that the spreading of active sites in time is practically determined by the probability $`p`$, in the same way as in a percolation process. In the space–time coordinate system we therefore have a picture of directed percolation with three descendants and an absorbing boundary. $`p^{\prime}`$ is thus simply the critical percolation threshold. Close to the percolation threshold $`p^{\prime}`$, the average transport $`J(p)`$ scales with $`p`$ as
$$J(p)-J^{\prime}\propto (p-p^{\prime})^{\delta ^{\prime}}$$ (4)
$$\delta ^{\prime}=0.9\pm 0.01$$
where $`J^{\prime}`$ is the current flowing due to the finite size of the system. b) In the probability interval $`p^{\prime}<p<p_c`$, $`p_c\approx 0.7185`$, the system is partially conductive, with constant average slope. That means the height profile grows as a bulk with velocity $`v(p)`$. Fluctuations of the transport $`J(p)`$ exhibit long range correlations. Above the percolation threshold $`p^{\prime}`$, the percolation picture breaks down. The subcritical, absorbing states are randomly distributed throughout the system and an avalanche can stop anywhere. As $`p\to p_c`$, the long range correlations in the transport fluctuations are destroyed, and the region of small local slopes spans the whole system. The pile stops growing, and at $`p=p_c`$ it is pinned at the position $`i=L`$. The critical point $`p_c`$ is thus understood as a depinning transition point. Close to the depinning critical point, the average transport scales as
$$1-J(p)\propto |p-p_c|^{\delta }$$ (5)
$$\delta =0.9\pm 0.01$$
c) In the interval $`(p_c,1)`$ the system is completely conductive. Transport fluctuations are of white noise type and the average transport is $`J(p)=1.0`$. In the dynamical regimes b) and c) the system is in the SOC state, having a power law distribution of avalanche sizes (1) with critical exponent $`\tau =1.57\pm 0.05`$.

### 4.2 LUNLIM

The local unlimited toppling rules are defined in the absorbing model as follows: in order to get a realistic profile of the pile, each supercritical active site topples $`k`$, $`k=int(\frac{z_i}{2.0})`$, grains to the next nearest downslope position with probability $`p`$. This way one gets a smooth profile without cavities ($`Fig.2a`$). Numerical investigations of $`J(p)`$ reveal that only two dynamical regimes are clearly recognized ($`Fig.1b`$): the pile is either completely isolating ($`J(p)=0`$) or completely conductive ($`J(p)=1.0`$). The partially conductive dynamical regime is missing. a) For $`0<p<p_c`$, $`p_c\approx 0.6995`$ ($`Fig.1b`$), the system is completely isolating. From the definition of the local toppling rules it is clear that absorbing states ($`z_i<z_c`$) are easily created even for very small values of the parameter $`p`$. This means that the percolation picture in the space–time coordinate system is not correct in the case of the LUNLIM model.
Near the transition point $`p_c`$ the average transport $`J(p)`$ scales with $`p`$ as $`(Fig.3)`$ $$J(p)\propto (p-p_c)^{\delta }$$ (6) $$\delta =1.93\pm 0.07$$ b) In the second dynamical regime $`(p_c<p<1.0)`$ the system is completely conductive, with the pile profile pinned at $`i=L`$ $`(Fig.2a)`$. $`J(p)`$ as a function of time exhibits white-noise features $`(Fig.4)`$. The avalanche size distribution is critical (1) with critical power-law exponent $`\tau =1.54\pm 0.02`$ $`(Fig.5a)`$. ### 4.3 NLIM The nonlocal limited toppling rule means that a supercritical active site topples, with probability $`p`$, $`N`$ particles to the $`N`$ nearest downslope positions. The nonlocal limited toppling rules preserve three dynamical regimes, just as in the LLIM case. In $`Fig.1c`$, the isolating, partially conductive and totally conductive regimes are recognized. a) For $`0<p<p^{^{}}`$, $`p^{^{}}\approx 0.267`$ $`(Fig.1c)`$, the pile is in the isolating regime. To understand the nature of the first phase transition point $`p^{^{}}`$, the percolation picture in the time-space coordinate system is still useful. But now the number of descendants is, in principle, greater than three. This is the reason why the percolation threshold is shifted to lower parameter values, as one can see when comparing $`Fig.1a`$ and $`Fig.1c`$, i.e. $`p_{\mathrm{NLIM}}^{^{}}<p_{\mathrm{LLIM}}^{^{}}`$. In the model studied here, the number of particles toppling from the activated supercritical site is $`4`$. Five sites are thus activated by every toppling from the position $`i`$, namely $`i-1`$, $`i`$, $`i+1`$, $`i+2`$, $`i+3`$. There are therefore five descendant sites in the directed percolation with absorbing boundary . In order to estimate $`p^{^{}}`$ with greater accuracy, systematic studies of the dependence of the percolation threshold on the number of descendant sites are necessary. The average transport near the percolation threshold $`p^{^{}}`$ scales as $`(Fig.6a)`$: $$J(p)\propto (p-p^{^{}})^{\delta ^{^{}}}$$ (7) with the critical exponent $$\delta ^{^{}}=1.18\pm 0.04$$ b) In the interval $`p^{^{}}<p<p_c`$, the pile grows with constant velocity $`v(p)`$, maintaining the global slope at a constant value for a constant probability parameter $`p`$. The transport $`J(p)`$ as a function of time shows long-range correlations, in contrast to the totally conductive regime, where it has the character of white noise $`(Fig.7)`$. c) The depinning transition occurs at $`p_c\approx 0.365`$ $`(Fig.6b)`$. Close to the critical point the average transport scales with $`p`$ as: $$1-J(p)\propto |p-p_c|^{\delta }$$ (8) $$\delta =1.12\pm 0.06$$ For the probability interval $`p_c<p<1`$, the profile of the pile is pinned at $`i=L`$ $`(Fig.2b)`$, and the average transport is $`J(p)=1.0`$ $`(Fig.1c)`$. $`Fig.5b`$ demonstrates the avalanche size distribution in the partially conductive and totally conductive dynamical regimes. In both cases the dynamics of the pile is self-organized critical, as demonstrated by the critical power-law scaling (1). The critical exponent is $`\tau =1.35\pm 0.05`$. ### 4.4 NUNLIM The nonlocal unlimited toppling rule is defined as follows: $`N(z_i)`$ particles are released (with probability $`p`$) from the activated supercritical position and are added to the $`N(z_i)`$ nearest downslope positions. The nonlocal unlimited version of the absorbing model shows completely different behaviour. First, no distinct dynamical regimes are recognized. The pile is completely conductive already for $`p`$ close to zero, as can be seen from $`Fig.8a`$.
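The nonlocal move common to the NLIM and NUNLIM rules can be sketched as follows. The boundary handling (grains pushed past the open end counted as outflow) and the slope-dependent choice of $`N`$ in the last line are illustrative assumptions, since the text does not specify the function $`N(z_i)`$.

```python
def topple_nonlocal(h, i, L, n_grains):
    """Nonlocal rule: n_grains leave site i, one landing on each of the
    n_grains nearest downslope columns; grains pushed past the open
    boundary at column L-1 are counted as transported out."""
    out = 0
    h[i] -= n_grains
    for k in range(1, n_grains + 1):
        if i + k < L:
            h[i + k] += 1
        else:
            out += 1
    return out

h = [9, 6, 3, 2, 1, 0]
print(topple_nonlocal(h, 1, 5, 4), h)    # NLIM-style move: fixed N = 4
print(topple_nonlocal(h, 0, 5, max(1, (h[0] - h[1]) // 2)), h)  # NUNLIM-like: N from slope
```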
The pile profile is pinned at $`i=L`$ $`(Fig.2c)`$. The avalanche size distribution shows the critical scaling (1) with critical exponent $`\tau =1.51\pm 0.05`$ $`(Fig.5c)`$. ## 5 Discussion and conclusion The probability density function (1) scales with the system size as $$p(s,L)=L^{-\beta }g\left(\frac{s}{L^D}\right)$$ (9) with $`\beta =D\tau `$. For the LLIM, LUNLIM and NUNLIM absorbing models the best data collapse has been found for $`D=2.24`$, which indicates that these models belong to the same universality class, the LLI universality class . On the contrary, for the NLIM absorbing model the best data collapse was found for $`D=1.55`$. Its critical exponents $`\tau `$ and $`D`$ differ from those of the LLI class and define a new universality class, to which the NLIM version of the two-threshold ricepile model also belongs . The reason for the lower $`\tau `$ exponent of the NLIM model in comparison with the LLIM model is as follows: the average slope of the NLIM and the LLIM pile is similar. For example, for $`p=0.8`$ the average slope of the NLIM pile is $`67.87^o`$ and that of the LLIM pile is $`60.53^o`$. Due to the nonlocal toppling rules in the NLIM model, more columns are perturbed and thus the probability of larger avalanches is enhanced. Therefore the exponent $`\tau `$ is lowered. The same argument could be used in the case of the NUNLIM and the LUNLIM models. But here the situation is different: the average slope changes significantly when the toppling rules change from local to nonlocal. For example, if $`p=0.8`$, the average slope of the LUNLIM model is $`82.22^o`$ while that of the NUNLIM model equals $`53.6^o`$. Because the number of toppled particles is proportional to the slope in the unlimited model, it seems that the relatively small average slope of the NUNLIM pile leads to relatively few particles released on average in one toppling. This fact should enhance the probability of small avalanches, and the avalanche size distribution function should then have a $`\tau `$ exponent greater than $`1.55`$. This really happens for the NUNLIM two-threshold ricepile model . In that model the probability of a big local slope decreases exponentially with $`z_i`$. But looking at $`(Fig.2c)`$ one can see that big local slopes are not exceptional in the NUNLIM absorbing pile. During a toppling event, a site with a big local slope releases a number of particles (proportional to the local slope) which disturb many downslope columns. This effect increases the probability of large avalanches. It seems that in the case of the absorbing model the two described effects balance each other, and thus the exponent $`\tau `$ remains unaffected by the change of toppling rules. This is different from the NUNLIM two-threshold model . There the first effect is decisive and the model belongs to a new universality class $`(\tau =1.63)`$. Another question which should be discussed is the nonexistence of the partially conductive regime in the LUNLIM model. First, the model is local. That means that in every toppling only three columns are activated by each toppled particle. Therefore, the probability of avalanches reaching the end of the system, and thus transporting material, is not enhanced by a larger number of sites activated per toppled particle. As already noted, in the LLIM model there are no absorbing $`(z_i<z_c)`$ states in the system in the isolating dynamical regime. This is not the case of the LUNLIM model.
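Returning for a moment to the scaling form (9): the data-collapse test used above to assign universality classes can be sketched as follows. The histograms here are synthetic power laws with an $`L^D`$ cutoff, used only to illustrate the rescaling; with the correct $`D`$ and $`\tau `$ all system sizes trace the same scaling function $`g`$.

```python
import numpy as np

D, tau = 2.24, 1.55                      # LLI-class values quoted above
x = np.logspace(-3.0, 0.0, 5)            # fixed rescaled sizes s / L^D
for L in (100, 300, 500):
    s = x * L ** D
    p_s = s ** -tau * np.exp(-s / L ** D)    # toy avalanche-size pdf with cutoff
    g = p_s * L ** (D * tau)                 # eq. (9): g(x) = L^beta p(s, L)
    print(L, np.round(g, 2))                 # identical rows -> perfect collapse
```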
These LUNLIM absorbing states therefore present another obstacle for bigger avalanches to develop and transport particles. The existence of absorbing states, even for small parameter values, destroys the percolation picture of the spreading of active sites; this is also why there is no critical percolation probability $`p^{^{}}`$, and only the transition to the completely conductive dynamical regime is present. Finally, a few words about the pile slopes in all three dynamical regimes. In the isolating regime the average transport is $`J(p)=0`$: all added particles are absorbed in the system. Moreover, the avalanche sizes in this regime are exponentially bounded, which means that the majority of the particles are absorbed in the first few columns of the pile. Therefore, if the driving time $`t`$ tends to infinity, the average slope of the pile grows to infinity; in consequence, the local slopes also become arbitrarily large. In the partially conductive regime a constant fraction of the particles, depending on the parameter $`p`$, is absorbed in the system. As $`t\to \mathrm{}`$, the average slope of the pile remains constant; the pile grows up as a bulk. The local slopes are finite, except for $`z(L)`$ (see eq. (2)), which tends to infinity. In the conductive regime $`J(p)=1.0`$. The average slope of the pile is constant, depending only on the parameter $`p`$. All local slopes are finite in this regime. In conclusion, we have studied numerically the LLIM, LUNLIM, NLIM and NUNLIM absorbing models. We have found that the models belong to two different universality classes, characterized by different critical exponents. Both universality classes differ from that of a real pile of rice. We have studied the transport properties of all the models and found phase transitions between different dynamical regimes. We state that the dynamics of the LLIM and NLIM models maps directly onto the dynamics of a directed percolation process with an absorbing boundary for the stated interval of the parameter $`p`$. This work has been supported by the grant of VEGA number 2/6018/99. ## 6 Figure captions Fig.1 Average transport ratio of particles through the system as a function of the parameter $`p`$. $`(a)`$ LLIM model: three different dynamical regimes are recognized: isolating, $`p\in (0,0.53865)`$; partially conductive, $`p\in [0.53865,0.7185)`$; and conductive, $`p\in [0.7185,1.0]`$. $`(b)`$ LUNLIM model: only two different dynamical regimes are recognized: isolating, $`p\in (0,0.6995)`$, and conductive, $`p\in [0.6995,1.0]`$. $`(c)`$ NLIM model: again, three different dynamical regimes are depicted: isolating, $`p\in (0,0.267)`$; partially conductive, $`p\in [0.267,0.365)`$; and totally conductive, $`p\in [0.365,1.0]`$. Fig.2 Pile profiles of the LUNLIM $`(Fig.2a)`$, NLIM $`(Fig.2b)`$ and NUNLIM $`(Fig.2c)`$ absorbing models for different values of the probability parameter $`p`$. Notice that in the totally conductive regime the pile profile is pinned at $`i=L`$ $`(a)`$, $`(c)`$. The pile grows as a bulk with velocity $`v(p)`$ in the partially conductive regime and is pinned in the totally conductive regime $`(b)`$. Fig.3 Ln-ln plot of the average transport as a function of the distance from the critical point $`p_c`$ in the LUNLIM absorbing model, $`ϵ=(p-p_c)`$. We find that the best scaling is obtained for $`p_c=0.6995`$. Fig.4 Transport $`J(p)`$ as a function of time (in iterations) in the conductive regime of the LUNLIM model for two different values of $`p`$.
For $`p\geq p_c`$, $`p_c=0.6995`$, white noise is observed. For $`p<p_c`$ all particles are absorbed and therefore there are no fluctuations. Fig.5 Ln-ln plots of the power-law parts of the avalanche size distributions (unnormalized). The critical exponent $`\tau `$ of the LLIM, LUNLIM $`(a)`$ and NUNLIM $`(c)`$ models is $`\tau =1.55`$. The NLIM model $`(b)`$ belongs to a different universality class with $`\tau =1.35`$. The system size in $`Figs.5b,c`$ is $`L=300`$ and in $`Fig.5a`$ $`L=500`$. In the NLIM model $`(b)`$ the shift of the bump with system size has been numerically tested: with growing system size, the bump shifts to higher values of $`s`$, thus indicating SOC. Fig.6 $`(a)`$ Ln-ln plot of the average transport as a function of the distance from the critical point $`p^{^{}}`$ in the NLIM absorbing model, $`ϵ=(p-p^{^{}})`$. We find that the best scaling is obtained for $`p^{^{}}=0.276`$. $`(b)`$ Ln-ln plot of the average transport as a function of the distance from the critical point $`p_c`$ in the NLIM absorbing model, $`ϵ=(p_c-p)`$. We find that the best scaling is obtained for $`p_c=0.3441`$. Fig.7 Transport $`J(p)`$ as a function of time (in iterations) in the partially conductive regime $`(p=0.339)`$ and the totally conductive regime $`(p=0.4)`$ of the NLIM absorbing model. For $`p\geq p_c`$, $`p_c=0.365`$, white noise is observed. For $`p^{^{}}<p<p_c`$ the character of the fluctuations is different: long-range correlations (reminiscent of Brownian motion) are observable in the time signal. For $`p<p^{^{}}`$ all particles are absorbed and therefore there are no fluctuations. Fig.8 Transport $`J(p)`$ as a function of time (in iterations) in the NUNLIM model for three different values of the parameter $`p`$. The pile is totally conductive over a wide range of the parameter $`p`$ and the signal has the character of white noise, with fluctuations depending on $`p`$.
# Transformation-Induced Nonthermal Neutrino Spectra and Primordial Nucleosynthesis ## I Introduction The role of neutrinos in primordial nucleosynthesis has been exploited to infer constraints on the number of leptons and light neutrinos in the Standard Model of particle physics . Complementary to this, the width of the $`Z_0`$ was found to allow only three light weakly-coupled neutrinos . This experimental determination of the number of neutrinos, $`N_\nu `$, has led some to describe primordial nucleosynthesis as determined by only one parameter: the baryon-to-photon ratio, $`\eta `$. Along with the assumptions of homogeneity and isotropy, the theory of Standard Big Bang Nucleosynthesis (SBBN) predicts the abundance of the lightest nuclides within reasonable error through the single parameter $`\eta `$. Now, however, indications of neutrino oscillations from the atmospheric muon neutrino deficit, the solar neutrino deficit and the LSND excess may prove that our understanding of neutrino physics is incomplete . This uncertainty in neutrino physics threatens to destroy the aesthetics of a one-parameter SBBN with the several parameters of the neutrino mass and mixing matrix. Neutrino mixing in the early universe can induce effects including matter-enhanced neutrino flavor transformation, partial thermalization of extra degrees of freedom, and lepton number generation. Efforts are sometimes made to crowd the multiple effects of neutrino mixing into a single parameter, $`N_\nu ^{\mathrm{eff}}`$, the effective number of neutrinos. The kinetic equations describing the effects of neutrino mixing in the early universe were first discussed by Dolgov . Larger $`\delta m^2`$ and $`\mathrm{sin}^22\theta `$ accelerate the population of sterile states. Resonant conversion of active neutrinos to sterile neutrinos may also take place when the mass eigenvalue corresponding to $`\nu _s`$ is smaller than that for the $`\nu _\alpha `$, i.e., $`m_s^2-m_\alpha ^2=\delta m_{s\alpha }^2<0`$. Both resonant and non-resonant conversion have been used to place constraints on active-sterile neutrino properties . Several formulations of neutrino dynamics in the early universe evolve the system at a monochromatic average neutrino energy with interactions occurring at an integrated rate. This approach is appropriate when the time scale of interactions between the neutrinos and the thermal environment is much shorter than the dynamical time scale of neutrino flavor transformations. However, when transformations and the Hubble expansion are more rapid than thermalization, neutrino energy distributions can be distorted from a Fermi-Dirac form. Furthermore, such spectral distortions can survive until nucleosynthesis . The role of neutrino energy distributions in primordial nucleosynthesis has been discussed since the earliest work on the subject . Recent work has emphasized the importance of neutrino energy distributions for the weak nucleon interconversion rates $`(n\rightleftharpoons p)`$. Kirilova and Chizhov included the distortion of neutrino spectra from nonresonant transformations in their nucleosynthesis calculation. A time-evolving distortion was also included in a calculation of helium production following a neutrino-mixing-generated lepton asymmetry in electron-type neutrinos from $`\nu _e\leftrightarrow \nu _s`$ or $`\overline{\nu }_e\leftrightarrow \overline{\nu }_s`$ (active-sterile) mixing. In this paper we explore in detail the effects on primordial nucleosynthesis of the nonthermal neutrino spectra produced by the resonant transformations discussed in .
The resonant transformation of electron-type neutrinos to steriles can leave a near absence of neutrinos in the low-energy end of the active neutrino energy spectrum. We find that this suppression of low-energy neutrinos or anti-neutrinos in the otherwise Fermi-Dirac spectrum alters the neutron-to-proton weak interconversion reactions. These alterations stem from initial- and final-state phase space effects. Specifically, we find that the absence of low-energy electron anti-neutrinos lifts the Fermi blocking of final-state neutrinos in neutron decay. In addition, the absence of low-energy electron neutrinos capturing on neutrons significantly alters the rate of this interaction. Both cases lead to an alteration of the neutron-to-proton ratio at weak freeze-out, which is directly related to the produced mass fraction of primordial helium. ## II The Dynamics of Neutrino Transformation The general equations describing the evolution of $`M`$ flavors of neutrinos in a high-density environment such as the early universe are derived from the evolution equation for the density operator $$i\frac{\partial \rho }{\partial t}=[H,\rho ].$$ (1) Here $`\rho `$ can be the density operator for the entire system of $`M`$ neutrinos or any smaller neutrino system that evolves rather independently of the other neutrino mixings. Several authors explicitly follow the evolution of neutrinos through the density matrix. The evolution of the density operator for a two-state active-sterile system has been popularized in a vector formalism where the density operator is $$\rho =\frac{1}{2}P_0(1+𝐏\cdot \sigma ),\overline{\rho }=\frac{1}{2}\overline{P}_0(1+\overline{𝐏}\cdot \sigma )$$ (2) where $`\overline{\rho }`$, $`\overline{𝐏}`$ and $`\overline{P}_0`$ refer to the anti-neutrino states. For an active-sterile two-state system, the Hamiltonian of eqn. (1) includes forward scattering off the $`e^\pm `$ background, and elastic and inelastic scattering of neutrinos from all leptons (see, e.g., Ref. ). For this system, substituting (2) into the evolution equation resulting from Eqn. 1 gives $$\frac{d}{dt}𝐏=𝐕\times 𝐏+(1-P_z)\left(\frac{d}{dt}\mathrm{ln}P_0\right)\widehat{𝐳}-\left(D^E+D^I+\frac{d}{dt}\mathrm{ln}P_0\right)(P_x\widehat{𝐱}+P_y\widehat{𝐲})$$ (3) $$\frac{d}{dt}P_0=\sum _{\alpha =e,\nu _\mu ,\nu _\tau }\mathrm{\Gamma }(\nu _e\overline{\nu }_e\to \alpha \overline{\alpha })(\lambda _\alpha n_\alpha n_{\overline{\alpha }}-n_{\nu _e}n_{\overline{\nu }_e})$$ (4) where an average over the momentum ensemble of neutrinos is taken for the vector potential $`𝐕`$ and the interaction rates $`\mathrm{\Gamma }`$. Here $`D^I`$ and $`D^E`$ are the damping coefficients due to inelastic and elastic neutrino scattering off $`e^\pm `$ and off themselves (see ). Also, $`\lambda _\nu =\frac{1}{4}`$ and $`\lambda _e=1`$. This set of equations (3,4) can be followed for specific neutrino momenta. An initial thermal neutrino spectrum can be discretized into $`𝒩`$ energy bins. Along with the number-density evolution equations of the neutrinos that do not undergo transformation, these form $`8𝒩`$ equations describing the evolution of the system. Here we assume that the sterile neutrino sea is unpopulated at a temperature $`T\sim 100\mathrm{MeV}`$ (see Ref. ). Active neutrinos can undergo resonant transformation to sterile neutrinos as the density of $`e^\pm `$ decreases with the expansion. As in the sun, the resonance point is energy dependent. Before turning to the resonance condition, the structure of eq. (3) for a single momentum bin can be sketched as follows.
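In the minimal sketch below, the (thermally averaged) potential vector $`𝐕`$, the total damping $`D=D^E+D^I`$ and $`d\mathrm{ln}P_0/dt`$ must be supplied by the caller; in the full problem they are functions of the bin energy and temperature (see the references in the text). The numerical values used in the one Euler step are placeholders.

```python
import numpy as np

def dP_dt(P, V, D, dlnP0_dt):
    """Right-hand side of eq. (3); P and V are 3-vectors (x, y, z)."""
    rhs = np.cross(V, P)                  # coherent precession V x P
    rhs[2] += (1.0 - P[2]) * dlnP0_dt     # repopulation term along z
    damp = D + dlnP0_dt                   # damping of the transverse coherences
    rhs[0] -= damp * P[0]
    rhs[1] -= damp * P[1]
    return rhs

# one Euler step for a single energy bin, with placeholder inputs
P = np.array([0.1, 0.0, 1.0])
print(P + 1e-3 * dP_dt(P, V=np.array([0.0, 0.1, 2.0]), D=0.5, dlnP0_dt=1e-2))
```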
The resonance, for $`\delta m_{s\alpha }^2<0`$, occurs at $$\left(\frac{E}{T}\right)_{\mathrm{res}}\approx \frac{|\delta m^2/\mathrm{eV}^2|}{16(T/\mathrm{MeV})^4L_{\nu _\alpha }},$$ (5) where $`E/T`$ is the neutrino energy scaled by the ambient temperature. This resonance adiabatically converts $`\nu _\alpha (\overline{\nu }_\alpha )`$ to $`\nu _s(\overline{\nu }_s)`$ as long as the timescale of the resonance is much longer than the neutrino oscillation period at resonance and the resonance position moves slowly through the neutrino energy spectrum (see Ref. ). As the universe expands and the ambient temperature decreases, the resonance moves from lower to higher neutrino energies. The energy dependence of the resonance becomes important at temperatures below $`T_{\mathrm{dec}}\approx 3\mathrm{MeV}`$ . Below this temperature, the neutrinos decouple from the $`e^\pm `$ background. The weak interaction rates of elastic and inelastic neutrino scattering above $`T_{\mathrm{dec}}`$ are much faster than the expansion time scale, so that the neutrino energy distributions can reshuffle into a thermal Fermi-Dirac spectrum before the onset of nucleosynthesis. However, if the transformation occurs after decoupling, the expansion rate is too rapid, and the neutrino spectra therefore enter the epoch of nucleosynthesis in a non-thermal state. This will usually be the case when the mass-squared difference between the mass eigenvalues corresponding to $`\nu _e`$ and $`\nu _s`$ is $`|\delta m_{es}^2|\lesssim 1\mathrm{eV}^2`$. Direct $`\nu _e\to \nu _s(\overline{\nu }_e\to \overline{\nu }_s)`$ transformation was explored by Foot and Volkas as a possible means to allow for a sterile neutrino within SBBN with minimal modification to the theory, and also to reduce the difference between the <sup>4</sup>He abundance predicted by the observed primordial deuterium abundance and the primordial <sup>4</sup>He abundance inferred by Olive et al. . The change in primordial helium production in this $`\nu _e\to \nu _s(\overline{\nu }_e\to \overline{\nu }_s)`$ scenario, including alterations of the neutrino spectra, was described in . The resonant transformation of $`\nu _e\to \nu _s(\overline{\nu }_e\to \overline{\nu }_s)`$ produces a lepton number asymmetry in electron-type neutrinos, $`L_{\nu _e}`$. Here we define the net lepton number residing in neutrino species $`\nu _\alpha `$ to be $`L_{\nu _\alpha }\equiv (n_{\nu _\alpha }-n_{\overline{\nu }_\alpha })/n_\gamma `$, where $`n_{\nu _\alpha }`$, $`n_{\overline{\nu }_\alpha }`$, and $`n_\gamma =(2\zeta (3)/\pi ^2)T^3`$ are the proper number densities of $`\nu _\alpha `$, $`\overline{\nu }_\alpha `$, and photons, respectively. The sign of this asymmetry depends sensitively on initial thermal conditions and can differ in causally disconnected regions . In the following sections, we address how non-thermal spectra affect the production of primordial helium. ## III Weak Nucleon Interconversion Rates and Primordial Helium Since nearly all of the neutrons present during primordial nucleosynthesis go into <sup>4</sup>He nuclei, the production of primordial helium in the early universe is dominated by the abundance of neutrons relative to protons. The weak nucleon interconversion reactions $$n+\nu _e\rightleftharpoons p+e^{-}\qquad n+e^{+}\rightleftharpoons p+\overline{\nu }_e\qquad n\rightleftharpoons p+e^{-}+\overline{\nu }_e$$ are in steady-state equilibrium at temperatures $`T>0.7\mathrm{MeV}`$. The neutron-to-proton ratio $`(n/p)`$ freezes out of equilibrium when the weak nucleon interconversion rates fall below the expansion rate (see Fig. 1).
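As a rough orientation, not taken from the text, the standard textbook estimate chains together the equilibrium ratio $`n/p\approx e^{Q/T}`$ at a freeze-out temperature near 0.7 MeV, free-neutron decay in the interval before nucleosynthesis, and $`Y_p\approx 2(n/p)/(1+n/p)`$; the neutron lifetime and the assumed delay below are illustrative round numbers.

```python
import math

Q, TAU_N = 1.293, 887.0     # MeV and s; 887 s is an illustrative neutron lifetime
T_FREEZE, DT = 0.7, 180.0   # freeze-out temperature (MeV) and assumed delay (s)

n_over_p = math.exp(-Q / T_FREEZE)        # equilibrium n/p at weak freeze-out
n_over_p *= math.exp(-DT / TAU_N)         # free-neutron decay before nucleosynthesis
Y_p = 2.0 * n_over_p / (1.0 + n_over_p)   # all surviving neutrons bound into 4He
print(round(Y_p, 3))                      # ~0.23, the right ballpark
```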
The “frozen” $`n/p`$ ratio slowly evolves as free neutrons decay until the epoch ($`T0.1`$ MeV) when almost all neutrons are incorporated into alpha particles (“nucleosynthesis”). In steady-state equilibrium, $`n/p`$ can be approximated by the ratio of the nucleon interconversion rates, $$\frac{n}{p}\approx \frac{\lambda _{pn}}{\lambda _{np}}.$$ The evolution of $`n/p`$ is followed numerically in detail in Kawano’s update of Wagoner’s code . The suppression of low-energy electron-type neutrinos affects the rates of the six weak nucleon reactions differently. If sterile neutrinos are partially or completely thermalized, they will increase the expansion rate, since they contribute to the energy density of the primordial plasma. The expansion rate, $`H`$, is related to the total energy density, $`\rho _{\mathrm{tot}}`$, and the cosmological constant, $`\mathrm{\Lambda }`$, as $$H=\sqrt{\frac{8\pi }{3}G\left(\rho _{\mathrm{tot}}+\frac{\mathrm{\Lambda }}{3}\right)}.$$ In general, an increase in the energy density and expansion rate will increase $`n/p`$ at freeze-out, and thus increase the primordial <sup>4</sup>He yield, $`Y_p`$. The contribution to the expansion rate is not significant for $`\nu _e\to \nu _s(\overline{\nu }_e\to \overline{\nu }_s)`$ conversion after decoupling, since the energy density in electron-type neutrinos is simply transferred to steriles without further thermal creation of neutrinos (they have decoupled from the $`e^\pm `$ plasma before the onset of transformation). For the case where $`L_{\nu _e}<0`$, the reactions influenced by the $`\nu _e`$ distortion are the forward and reverse reactions of $`n+\nu _e\rightleftharpoons p+e^{-}`$. Because of the neutron/proton mass difference, very low energy neutrinos readily participate in $`n+\nu _e\to p+e^{-}`$. Here, the neutrino energy is $`E_\nu =E_e-Q>0`$, where $`Q\equiv m_n-m_p`$ is the nucleon mass difference, and $`E_e`$ is the electron energy. The suppression of low-energy neutrinos impacts this reaction by significantly limiting the phase space of initial states. The neutrino energy distribution enters the $`n+\nu _e\to p+e^{-}`$ rate integral in the following manner: $$\lambda _{n\nu pe}=A\int v_eE_e^2p_\nu ^2dp_\nu [e^{E_\nu /kT_\nu }+1]^{-1}[1+e^{-E_e/kT}]^{-1}.$$ (6) The rate of the reaction $`p+e^{-}\to n+\nu _e`$ is given by $$\lambda _{pen\nu }=A\int E_\nu ^2p_e^2dp_e[e^{E_e/kT}+1]^{-1}[1+e^{-E_\nu /kT_\nu }]^{-1}.$$ (7) In these equations we have used Weinberg’s notation . The deficit of low-energy neutrinos renders the integrand of (6) nearly zero in the energy range where the $`\nu _e`$ have transformed to steriles (see Fig. 2). The reaction $`p+e^{-}\to n+\nu _e`$ is altered by a lifting of the Fermi blocking $`[1+e^{-E_\nu /kT_\nu }]^{-1}`$ at low energies, because of the reduction in low-energy neutrino numbers. However, the rate $`\lambda _{pen\nu }`$ is changed less significantly than $`\lambda _{n\nu pe}`$ and thus affects $`n/p`$ less. This can be seen in Fig. 4 (a), where the change in rate with respect to the overall $`n\to p`$ ($`p\to n`$) rate is shown for rate (2) $`\lambda _{n\nu pe}`$ and rate (6) $`\lambda _{pen\nu }`$. For $`L_{\nu _e}>0`$, the reactions affected by the $`\overline{\nu }_e`$ distortion are $`n+e^{+}\to p+\overline{\nu }_e`$ and $`n\to p+e^{-}+\overline{\nu }_e`$. In the forward and reverse reactions $`n+e^{+}\rightleftharpoons p+\overline{\nu }_e`$, the escaping or incident $`\overline{\nu }_e`$ must have an energy $`E_{\overline{\nu }_e}=Q+E_e\geq 1.804\mathrm{MeV}`$.
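The initial-state suppression entering eq. (6) can be illustrated numerically: below, the $`\nu _e`$ occupation is zeroed under a sharp cutoff $`E_{cut}`$ to mimic the post-resonance spectrum. Units are MeV, the temperature $`kT=kT_\nu =1`$ MeV is an illustrative choice, and the overall constant $`A`$ drops out of the ratio.

```python
import numpy as np

Q, M_E = 1.293, 0.511    # n-p mass difference and electron mass, MeV

def lam_n_nu_pe(E_cut, kT=1.0, n=4000):
    E_nu = np.linspace(1e-4, 25.0, n)
    E_e = E_nu + Q
    v_e = np.sqrt(E_e ** 2 - M_E ** 2) / E_e
    f_nu = 1.0 / (np.exp(E_nu / kT) + 1.0)
    f_nu[E_nu < E_cut] = 0.0                    # low-energy nu_e converted to steriles
    block_e = 1.0 / (1.0 + np.exp(-E_e / kT))   # final-state electron blocking
    integrand = v_e * E_e ** 2 * E_nu ** 2 * f_nu * block_e   # p_nu^2 dp_nu = E_nu^2 dE_nu
    return integrand.sum() * (E_nu[1] - E_nu[0])

print("rate ratio with/without cutoff:",
      round(lam_n_nu_pe(E_cut=2.0) / lam_n_nu_pe(E_cut=0.0), 3))
```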
Returning to the antineutrino distortion: even for the case we examined with the greatest spectral distortion ($`|\delta m_{se}^2|=1\mathrm{eV}^2`$), the spectral cutoff never extends above $`E_{\overline{\nu }_e}=1.35\mathrm{MeV}`$. Therefore, $`n+e^{+}\rightleftharpoons p+\overline{\nu }_e`$ is not altered by this low-energy distortion (see Fig. 4 (b)). The interaction $`p+e^{-}+\overline{\nu }_e\to n`$ makes a very small contribution to $`\lambda _{pn}`$, and modifications of its rate are not important to $`n/p`$ at freeze-out. The only significant reaction rate affected by the $`\overline{\nu }_e`$ distortion is neutron decay: $`n\to p+e^{-}+\overline{\nu }_e`$ (Fig. 4). The reaction is limited in SBBN by Fermi blocking of the low-energy $`\overline{\nu }_e`$ products. This blocking is lifted when low-energy $`\overline{\nu }_e`$ transform to $`\nu _s`$. The integrand of the neutron decay rate $$\lambda _{npe\nu }=A\int v_eE_\nu ^2E_e^2dp_\nu [1+e^{-E_\nu /kT_\nu }]^{-1}[1+e^{-E_e/kT}]^{-1}$$ (8) can be seen in Fig. 3 for the most dramatic case of $`|\delta m^2|=1\mathrm{eV}^2`$. The Fermi-blocking term, $`[1+e^{-E_\nu /kT_\nu }]^{-1}`$, is unity for low-energy neutrinos that have transformed to steriles. The change in $`\lambda _{n\nu pe}`$ (for $`L_{\nu _e}<0`$) is significantly larger than that in $`\lambda _{npe\nu }`$ (for $`L_{\nu _e}>0`$). The change in the helium mass fraction ($`Y_p`$) produced by BBN for $`L_{\nu _e}<0`$ is $`\delta Y_p0.022`$, while $`L_{\nu _e}>0`$ correspondingly causes a smaller change of $`\delta Y_p0.0021`$ . In summary, if electron-type neutrino spectra are altered through matter-enhanced transformation and the alteration survives until nucleosynthesis, the $`n\rightleftharpoons p`$ interconversion rates can be modified through the availability of initial- and final-state neutrino energy states. This kind of scenario is realized in $`\nu _e\to \nu _s(\overline{\nu }_e\to \overline{\nu }_s)`$ matter-enhanced transformations with $`|\delta m^2|\lesssim 1\mathrm{eV}^2`$. Non-thermal $`\nu _e/\overline{\nu }_e`$ spectra will arise in any case where flavor transformations occur near or below $`T_{\mathrm{dec}}`$ between a more-populated electron-type neutrino and some other less-populated neutrino flavor, and they will produce the same kinds of limitations on reaction phase space as the resonance point moves from lower to higher neutrino energies. The effects of non-thermal electron-type neutrino distributions will then need to be included. ## ACKNOWLEDGMENTS K. A., X. S., and G. M. F. are partially supported by NSF grant PHY98-00980 at UCSD. K. A. acknowledges partial support from the NASA GSRP program.
# ON THE RELATIONSHIP BETWEEN GALAXY FORMATION AND QUASAR EVOLUTION ## 1 INTRODUCTION For a long time quasar evolution and galaxy formation have been seen as quite unrelated items of observational cosmology. Besides the difficulty of sampling homogeneous redshift intervals for the two source populations, the main reason for this dichotomy was a lack of understanding of the physical processes ruling the quasar phenomenon and galaxy formation and evolution. In this context, an important discovery by the refurbished HST was that most (if not all) massive galaxies in the local universe harbor a nuclear super-massive ($`10^7-10^9M_{\odot }`$) dark object (probably a BH), with a mass proportional to that of the hosting spheroid ($`M_{BH}\simeq (0.002-0.005)M_{spheroid}`$; Kormendy and Richstone 1995; Faber et al. 1997; Magorrian et al. 1998). This is indeed the prediction of the canonical model for AGNs – assuming energy production by gas accretion onto a super-massive BH – and is consistent with evolutionary schemes which interpret the quasar phase as a luminous short-lived event occurring in a large fraction of all normal galaxies, rather than one concerning a small minority of ”pathological” systems (e.g. Cavaliere & Padovani 1988). This association of nuclear massive dark objects with luminous spheroids is also directly proven by HST out to moderate redshifts ($`z0.4`$), where the host galaxies of radio-quiet and radio-loud quasars are found to be large massive ellipticals (e.g. Mc Lure et al. 1999). On the distant-galaxy side, important progress has been made in the characterization of the star-formation history as a function of redshift (Lilly et al. 1996; Madau et al. 1996). Deep HST imaging has also allowed this history to be differentiated as a function of morphological type, for both cluster (Stanford et al. 1998) and field galaxies (Franceschini et al. 1998). Interpretations of the relationship between quasar activity and galaxy formation have been attempted in the framework of the hierarchical dark-matter cosmogony (e.g. Haehnelt, Natarajan & Rees, 1998). However, the QSO-galaxy connection is still enigmatic in several respects. In particular, while hierarchical clustering seems to account successfully for the onset of the quasar era at $`z3`$ \[assuming that supermassive BHs form in proportion to the forming dark matter halos\], the progressive decay of the quasar population at lower $`z`$ (the QSO evolution) is not accounted for as naturally, and requires a substantially more detailed physical description (e.g. including the role of galaxy interactions; see Cavaliere & Vittorini 1998). To provide further constraints on this interpretative effort, we confront here the most detailed information on the evolutionary histories of the AGN emissivity and of the stellar populations in galaxies. The available information on the evolution of the Star Formation Rate (SFR) is summarized in Sect. 2, and compared in Sect. 3 with that of the AGN emissivity. Some consequences, in particular concerning the stellar and BH remnants of the past activity and implications for models of structure formation, are discussed in Sect. 4, while Sect. 5 summarizes our conclusions. $`H_0=50\mathrm{km}/\mathrm{s}/\mathrm{Mpc}`$ and two values ($`q_0`$=0.5 and 0.15) of the deceleration parameter are assumed. ## 2 STELLAR FORMATION HISTORIES OF GALAXY POPULATIONS The systematic use of the Lyman ”drop-out” technique to detect $`z>2`$ galaxies (Steidel et al.
1994), combined with spectroscopic surveys of galaxies at $`z1`$ (Lilly et al. 1995) and $`H\alpha `$-based measurements of the present-day SFR, has made it possible to determine the evolution of the SFR in galaxies over most of the Hubble time (Madau et al. 1996). Figures 1 and 2 summarize some results of these analyses (for two values of $`q_0`$) in the form of small filled circles, where the SFR is measured from fits of the UV rest-frame flux (assuming a Salpeter IMF with low-mass cutoff $`M_{low}=0.1M_{}`$). We see, in particular, an increase by a factor of $`10-20`$ in the SFR density going from z=0 to $`z1-1.5`$. The main uncertainty in these estimates is due to the a priori unknown effect of dust, erasing the optical-UV flux and plaguing the identification of distant reddened galaxies in the ”drop-out” catalogues. Corrections for this effect have been estimated to range from 1 to 3 magnitudes, and mostly apply above $`z1`$. Although rather uncertain at the moment, the SFR density above redshift 1 seems to keep roughly constant up to $`z3`$ (the datapoint at z=3 in Fig. 1 comes from Meurer et al. 1999). Exploiting the exceptional imaging quality and spectral coverage available in the Hubble Deep Field, Franceschini et al. (1998) have analysed a sample of morphologically-selected early-type galaxies with $`z<1.3`$. By fitting synthetic galaxy spectra to these data, it has been possible to estimate the baryonic mass and age distribution of the stellar populations, and hence the star-formation history per comoving volume (reported as shadowed regions in Figs. 1 and 2): the evolutionary SFR density of field early-type galaxies is roughly constant at $`z1.5`$, while showing a fast convergence at lower redshifts (correspondingly, 90% of the stars in E/S0 galaxies were formed between $`z1`$ and 3 for $`q_0=0.15`$; $`z1`$ and 4 for $`q_0=0.5`$). This evolution pattern is quite different from that of the general field population (small circles in Figs. 1 and 2), which displays a much shallower dependence on cosmic time. We note that at $`z>2`$ the uncertainties in all estimates of the SFR become very serious. The SFR estimate for early-type galaxies by Franceschini et al. is based on fits to the SEDs of galaxies at $`z<1.3`$, and therefore becomes uncertain at $`z>2`$. Similar problems affect the SFR estimates for the Lyman ”drop-out” galaxies, because of the reddening problem. The present analysis will therefore mostly concentrate on data at $`z<2`$. Thus galaxies with early morphological types – dominating the galaxy populations in high-density environments – show an accelerated formation history, while later types, more typical of low-density regions, are slower in forming their stellar content. In the merging picture of elliptical and S0 galaxy formation (e.g. Toomre & Toomre 1972; Barnes & Hernquist 1992) an accelerated SF history in high-density environments is naturally accounted for by a higher rate of galaxy interactions and mergers at early epochs. ## 3 FORMATION AND EVOLUTION OF ACTIVE GALACTIC NUCLEI Quasars, and AGNs in general, have been studied in all accessible wavebands, in all of them showing evidence for strong cosmological evolution. The most efficient AGN selection comes from X-ray surveys: a purely flux-limited sample at X-ray energies above 0.5 keV contains a vast majority of AGNs with no particular bias as a function of redshift.
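Before contrasting this with other selections, the difference between the two star-formation histories of Sect. 2 can be illustrated with a toy integration over cosmic time in an Einstein-de Sitter universe ($`q_0=0.5`$, $`H_0=50\mathrm{km}/\mathrm{s}/\mathrm{Mpc}`$, so $`t13(1+z)^{3/2}`$ Gyr). Both schematic SFR($`z`$) shapes below are assumptions chosen only to mimic the qualitative trends in Figs. 1 and 2, not fits to the data.

```python
import numpy as np

T0 = 13.0                                   # Gyr, EdS age for H0 = 50 km/s/Mpc
z = np.linspace(0.0, 5.0, 1000)
t = T0 * (1.0 + z) ** -1.5                  # cosmic time at redshift z

sfr_early = np.where(z > 1.0, 1.0, 0.05)            # E/S0-like: done by z ~ 1
sfr_field = 0.1 * (1.0 + np.minimum(z, 1.5)) ** 3   # field-like: flat above z ~ 1.5

dz = z[1] - z[0]
dt_slice = np.abs(np.gradient(t, z)) * dz   # Gyr per redshift slice
for name, sfr in (("early-type", sfr_early), ("field", sfr_field)):
    mass = sfr * dt_slice                   # mass formed per slice (arbitrary units)
    print(name, "fraction formed at z > 1:",
          round(mass[z > 1.0].sum() / mass.sum(), 2))
```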
Such an unbiased selection is not achieved by optical searches, which suffer incompleteness both at low luminosities (because the contribution of the host galaxy reddens the colours) and at high redshifts (because of the effects of dust and intergalactic opacity), nor by surveys of radio sources, since only a minority of AGNs emit in the radio. A further crucial advantage of the X-ray selection is provided by the X-ray background (XRB), which sets an integral constraint on the AGN emissivity at fluxes much fainter than observable with the present facilities. Miyaji et al. (1999) have used a large X-ray AGN database of 7 ROSAT samples – including wide-area surveys (Appenzeller et al. 1998) and very deep PSPC and HRI integrations on small selected areas (Hasinger et al. 1998) – to analyse the evolution of X-ray AGNs over large redshift ($`0<z<5`$) and luminosity ($`10^{42}<L_{0.5-2keV}<10^{47}erg/s`$) intervals. Various kinds of evolution patterns have been tested, including Pure Density Evolution (PDE), Pure Luminosity Evolution (PLE), and Luminosity-Dependent Density Evolution (LDDE). These X-ray surveys indicate for the first time that the AGN evolution is inconsistent with strict number-density conservation (PLE), and rather requires a combined increase with redshift of the source number and luminosity (LDDE). The X-ray luminosity function $`n(L_x,z)`$ (sources per unit volume and unit logarithmic $`L_x`$ interval) is modelled by Miyaji et al. as a two power-law expression $$n(L_x,z)\propto [(L_x/L_{\ast })^{\gamma _1}+(L_x/L_{\ast })^{\gamma _2}]^{-1}e(L_x,z),$$ where $`L_x`$ is the luminosity in the 0.5-2 keV band, $`e(L_x,z)`$ is the luminosity-dependent evolution term, and $`\gamma _10.6,\gamma _22.3`$. The best-fit LDDE model implies a lower evolution rate at lower $`L_x`$, and simple density evolution (PDE) for the higher luminosity sources: $`e(L_x,z)=(1+z)^p`$, where $`p`$ increases linearly with $`logL_x`$ for $`L_x10^{44-44.5}erg/s`$ and is $`5.5`$ above. The number density of high-$`L_x`$ QSOs as a function of $`z`$ has been used by Hasinger (1998) to estimate the evolution of the AGN emissivity. We report in Table 1 and in Figs. 1 and 2 (as large filled squares) estimates of the soft X-ray emissivity per comoving volume versus $`z`$ for AGNs more luminous than $`L_{th}=10^{44.25}erg/s`$ (this is the luminosity threshold above which pure density evolution applies). It is apparent in these plots that there is a remarkable similarity in the $`z`$-dependence of the emissivity of high-luminosity AGNs and the SFR history of elliptical/S0 galaxies (shadowed areas). Both functions show a considerably faster decrease of the emissivity with cosmic time than that of the general field population. A very steeply declining emissivity of luminous AGNs is also confirmed by the fast evolution of optical quasars (thick dashed lines). This result is in agreement with the local evidence for an association between nuclear super-massive dark objects and massive spheroids, and with the identification of the hosts of luminous quasars as galaxies with an early-type morphology. On the other hand, a direct comparison with the population emissivities of lower luminosity AGNs is difficult, as the present sensitivity limits prevent measurement of the LF of very faint AGNs. Here the information from the XRB becomes essential. Adopting the LDDE best-fit model as a description of the LF of medium- to high-L AGNs, we extrapolated it to lower $`L_x`$ in such a way as to reproduce most of the observed low-energy XRB. The top two continuous lines in Figs.
1 and 2 are the AGN 0.5-2 keV luminosity density as a function of $`z`$ corresponding to two such extrapolations, one producing 60% (henceforth LDDE1) and the other 90% (LDDE2) of the XRB at 1 keV. We see an accurate match between the redshift dependences of the global AGN emissivity and of the rate of formation of stars in the field galaxy population at $`z<1.2`$. At $`z1.5`$, X-ray observations indicate that the AGN emissivity keeps roughly constant, as has been found for the SFR density. Then, within the uncertainties, the close association of galaxy and AGN evolution may hold over a substantial redshift interval. ``` _________________________________________________________ TABLE 1. Luminosity density Ld between 0.5 and 2 keV per comoving volume in erg/s/Mpc^3 of AGN with log L > log L_th = 44.25. Note that for any value of log L_th > 44, Ld scales proportionally to (L_th/10^44.25)^-1.3 zmin zmax Ld sig(Ld) Ld sig(Ld) -----q0=0.5------ -----q0=0.15----- 0.05 0.20 1.71e37 3.20e36 1.68e37 3.08e36 0.20 0.40 3.24e37 6.02e36 3.15e37 5.66e36 0.40 0.75 5.95e37 1.33e37 6.39e37 1.33e37 0.75 1.20 2.85e38 5.58e37 3.23e38 5.39e37 1.20 1.50 9.52e38 1.87e38 1.09e39 2.02e38 1.50 2.00 1.10e39 2.60e38 1.71e39 3.18e38 2.00 3.00 2.01e39 4.62e38 2.13e39 4.43e38 3.00 4.60 1.52e39 6.79e38 2.27e39 8.56e38 ________________________________________________________ ``` How significant is the difference in the evolution rates between high and low X-ray luminosity AGNs? Assuming a luminosity threshold at $`L_{th}=10^{44.25}erg/s`$, we find: a) the LDDE1 $`q_0=0.15`$ model implies an increase in the volume emissivity going from $`z=0`$ to $`z=1.5`$ by a factor of 100 for sources with $`L>L_{th}`$ and by a factor of 10 for sources with $`L<L_{th}`$; in the $`q_0=0.5`$ case, the two factors become 100 and 16; b) for the LDDE2 $`q_0=0.15`$ model the evolution in the volume emissivity is by a factor of 100 for $`L>L_{th}`$ and 30 for $`L<L_{th}`$, while for $`q_0=0.5`$ the two factors become 100 and 50. Altogether, there is at least a factor of 2 difference in the evolution rates of high-$`L_x`$ and low-$`L_x`$ AGNs, which may be as high as 10 for the open LDDE1 case. ## 4 DISCUSSION The similarities of the evolution rates between high-L AGNs and massive spheroidal galaxies, as well as those between low-L AGNs and the field galaxy population, call for a close relationship of the processes triggering star formation with those responsible for the gas flow fueling the massive BH. Were the SF and BH accretion processes really concomitant, we would expect the luminosity functions of starbursting galaxies and of quasars and AGNs to have similar shapes and perhaps similar normalizations when the luminosities are expressed in bolometric units. This is indeed indicated by the comparative study of the bolometric luminosity functions of luminous far-IR galaxies (the far-IR flux being a good measure of the SFR), of bright optical quasars (Schmidt & Green 1983), and of lower luminosity Markarian Seyferts, as reported by Soifer et al. (1987) and Sanders & Mirabel (1996): these LFs have been found to share not only very similar power-law shapes at high luminosities ($`n[L_{bol}]\propto L^{-2.1}`$), but even similar (within a factor of $`2`$) normalizations. We interpret this coincidence as further support for our results. ### 4.1 The energetics associated with SF and AGN activity Although substantial uncertainties are inherent in the absolute normalization of the various curves in Figs.
1 and 2, it may be worth making an order-of-magnitude comparison of the energetics associated with the processes of stellar formation in galaxies and BH accretion in AGNs. Interesting constraints will ensue from matching them with the local remnants (i.e. low-mass stars and supermassive BHs) of the past activities. Let us first evaluate the global bolometric emissivity of AGNs as a function of $`z`$ by applying a bolometric correction to the 0.5-2 keV emissivity (top two lines in the figures). From the average X-ray to optical spectral index $`\alpha _{ox}=log(L_{2500A}/L_{2keV})/2.605\simeq 1.4`$ (La Franca et al. 1995) we obtain the $`2500A`$ flux, and then correct it by a further factor of 5.6 to get the bolometric flux (Elvis et al. 1994). For type-I AGNs (dominating the soft X-ray samples) we then find $`L_{BOL,typeI}\simeq 50L_{0.5-2keV}.`$ Since type-I AGNs contribute only $`20\%`$ to the total hard X-ray background (e.g. Schmidt & Green 1983), we have another factor of $`5`$ to account for the contribution of type-II AGNs to the global emissivity: $$L_{BOL,AGN}\simeq 5L_{BOL,typeI}\simeq 250\left(\frac{f_{BOL}}{250}\right)L_{0.5-2keV},$$ (1) where $`f_{BOL}`$ parametrizes the average bolometric correction factor to the 0.5-2 keV flux. As for galaxies, the scaling factor from $`\dot{M}(M_{\odot }/yr)`$ to bolometric luminosity is simply given by $$L_{BOL,SFR}[erg/s]=c^2ϵ\dot{M}\simeq 6\times 10^{43}\frac{ϵ}{0.001}\frac{\dot{M}}{[M_{\odot }/yr]},$$ (2) for a stellar radiative efficiency of $`ϵ=0.001`$, consistent with the assumption by Franceschini et al. (1998) and Madau et al. (1996) of a Salpeter IMF with lower mass limit $`M_{low}=0.1M_{\odot }`$ and primordial initial composition (newly born stars are assumed to release most of their energy immediately). Assuming a fiducial $`\eta =10\%`$ radiative efficiency for BH accretion, and taking into account that there is a factor of $`2.4\times 10^{40}`$ to bring the left-hand axis scale in Figs. 1 & 2 to coincide with that of the right-hand axis, we find that the ratio of the mass $`M_{BH}`$ of the remnant locked in a supermassive BH after the AGN phase to the remnant in low-mass stars ($`M_{\ast }`$) should be: $$M_{BH}\simeq 0.001\left(\frac{ϵ}{0.001}\right)\left(\frac{0.1}{\eta }\right)\left(\frac{f_{BOL}}{250}\right)M_{\ast }.$$ (3) If compared with the locally observed ratio of $`M_{BH}`$ to the mass of the hosting spheroid $$M_{BH}\simeq (0.002-0.005)M_{spheroid},$$ (4) this result is consistent with the fact that nuclear massive BHs are found primarily associated with galactic bulges, and not e.g. with the disk components, which substantially contribute to $`M_{\ast }`$ in eq. (3). A similar exercise comparing the low-mass stellar remnants in E/S0’s with the BH remnants of massive/luminous quasars (respectively shadowed regions and filled squares in the figures) would involve defining the X-ray luminosity threshold $`L_{th}`$ above which AGNs are hosted by massive spheroidal galaxies, and the ratio of type-II to type-I quasars. Assuming as above $`L_{th}=10^{44.25}`$ would lead to a relation between the BH mass and the mass of the hosting spheroid similar to that in eq. (3). If we consider that the ratio of type-II to type-I objects among luminous quasars is lower than for lower luminosity AGNs, then consistency with the large observed ratio of eq. (4) would require either: a) a very low radiative efficiency in AGNs ($`\eta <0.1`$, e.g. due to a violent super-Eddington accretion phase), or b) a higher bolometric correction for AGNs, $`f_{BOL}>250`$ (e.g.
because of a heavily dust-extinguished phase during quasar formation; Haehnelt et al. 1998, Fabian & Iwasawa 1999), or finally, c) a very high ($`ϵ>0.001`$) radiative efficiency of stars in spheroidal galaxies, as allowed by top-heavy stellar IMFs (e.g., for $`M_{low}=8M_{\odot }`$, $`ϵ`$ in eq. (2) would increase to $`0.006`$). Note that lowering the value of $`L_{th}`$, while improving the match of the predicted BH mass in eq. (3) with the observation in eq. (4), would also unacceptably spoil the match between the evolution rates of QSOs and galaxies in Figs. 1 and 2, and hence is not a solution. ### 4.2 Quasars and models of structure formation We have discussed evidence that luminous quasars, associated with massive BHs in massive spheroidal galaxies, evolve on a shorter cosmic timescale than lower luminosity, lower mass objects. This result does not seem to fit simple predictions based on the gravitationally-driven Press-Schechter formalism. For example, Haiman & Menou (1998) obtain from the Press-Schechter theory a much faster decay with cosmic time of the AGN accretion rate (hence of the AGN luminosity) for low-mass BHs in low-mass dark-matter halos, while that of massive objects is barely expected to decrease from z=3 to z=0. In fact, quite the opposite trend is indicated by our analysis in the previous sections. A similar problem likely arises when explaining the origin of galaxies, and of their spheroidal components in particular, with a process of purely gravitational clustering. What is the trigger and the ruling mechanism for generating a galaxy spheroid together with a super-massive collapsed remnant containing $`0.2`$% of the mass of the host? As discussed by many authors (Kormendy & Sanders 1992; Barnes & Hernquist 1992), a ruling effect lies in the gas/stellar dynamical processes related to galaxy interactions and mergers. This is probably the only way to achieve stellar systems with the very high central concentrations observed in early-type galaxies, an obviously favourable birth-place for massive nuclear star clusters and a super-massive collapsed object. The merging/interaction concept, together with the progressive exhaustion of the fuel, allows one to understand the ”accelerated” evolution of luminous AGNs (Cavaliere & Vittorini 1998), but also the fast decay with time of the SF in massive spheroidal galaxies. In cosmic environments with higher-than-average density the forming galaxies have experienced a high rate of interactions already at high redshifts ($`z>1`$), quickly leading to a population of spheroid-dominated galaxies (observable at low $`z`$ mostly in groups and clusters of galaxies), each containing a massive BH whose accretion history is the same as that of the host galaxy. In lower density environments the interaction rate was slower, and proportionally lower was the mass locked in a bulge with respect to that making stars quiescently in a disk. Galaxies in these low-density environments keep forming stars and accreting matter onto the BH down to the present epoch, and they form the low-redshift AGN population. ## 5 CONCLUSIONS We have found a close match between the evolution rates of the star formation in galaxies and of the volume emissivity in AGNs. This similarity seems to hold not only for high-luminosity AGNs ($`L_x>10^{44.25}erg/s`$) compared with the SF in massive spheroidal galaxies, but also when comparing the average properties of the global populations, including low mass/luminosity systems.
A concomitance of SF and AGN/quasar activity is also indicated locally by the near coincidence of the bolometric luminosity functions of luminous star-forming IR galaxies with those of optical quasars and Markarian Seyferts. Then the same processes triggering the formation of stars also drive a fraction of the available gas to accrete and fuel the AGN. These processes are likely to be the interaction and merging events between gas-rich systems (Barnes and Hernquist 1992; Cavaliere and Vittorini 1998). Assuming standard energy-production efficiencies for BH accretion and star formation, we have found rough agreement between the volume emissivities of distant quasars and star-forming galaxies and the observed ratios of low-mass stellar and supermassive BH remnants in local objects. We have finally compared our finding that high mass/luminosity systems evolve on a faster cosmic timescale than lower-mass ones with the predictions of structure formation theories based on the gravitational clustering and coalescence of dark matter halos. Altogether, if the latter provide the needed background conditions for the development of structures, much more physics is needed – in terms of gas/stellar dynamical processes related to galaxy interactions, in terms of the progressive exhaustion of the available baryonic fuel, and in terms of feedback reaction – to explain the observations of AGN evolution and (spheroidal) galaxy formation. Acknowledgements We are indebted to an anonymous referee for useful comments and criticisms, and to A. Cavaliere for discussions. Work partly supported by ASI Grant ARS-96-74 and the EC TMR Networks ’Galaxy Formation’ and ’ISO Surveys’.
# Evolution of Giant Radio Sources ## 1 Introduction Giant radio sources (GRSs), defined to be those with a projected linear size $`\geq `$1 Mpc (H=50 km s<sup>-1</sup> Mpc<sup>-1</sup> and q=0.5), are the largest single objects in the Universe, and are extremely useful for studying a number of astrophysical problems. These range from understanding the evolution of radio sources and constraining orientation-dependent unified schemes to probing the intergalactic medium at different redshifts (see Subrahmanyan & Saripalli 1993 and Ishwara-Chandra & Saikia 1999 for more detailed discussion). In this paper we present radio images of two giant quasars from the Molonglo/1Jy sample, 0437$`-`$244 and 1025$`-`$229, and examine the evolution of GRSs and their consistency with the orientation-based unified schemes for radio galaxies and quasars. ## 2 Giant quasars from the Molonglo/1Jy sample 0437$`-`$244 : This giant radio quasar (Fig. 1, left) is at a redshift of 0.84, making it at present the highest-redshift giant quasar known, and has a well-defined FR II morphology. Its projected linear size is 1.06 Mpc, and it has a core contributing about 10% of the total flux density at an emitted frequency of 8 GHz. The age estimates from synchrotron radiative losses in the Kardashev-Pacholczyk model are 5.8$`\times 10^7`$ and 2.7$`\times 10^7`$ yr for the northern and southern lobes respectively. The rotation measures between 1.4 and 5 GHz are about 13.6$`\pm `$1.3 and 4.2$`\pm `$3.4 rad m<sup>-2</sup> for the northern and southern lobes respectively. The quasar shows no significant depolarization down to about 1.4 GHz. 1025$`-`$229 : This is also a well-defined FR II radio source with two hotspots in the southern lobe, a core contributing about 12% of the total flux density of the source, and a possible jet-like structure close to the north of the radio core (Fig. 1, right). The redshift of this quasar is 0.309 and its projected linear size is 1.11 Mpc. The spectral age estimates from synchrotron radiative losses are 7.8$`\times 10^7`$ and 1.2$`\times 10^8`$ yr for the northern and southern lobes respectively. The rotation measures are $`-`$21.3$`\pm `$2.3, $`-`$15.3$`\pm `$0.9 and $`-`$21.5$`\pm `$3.6 rad m<sup>-2</sup> for the northern and southern lobes and the steep-spectrum jet-like feature close to the radio core respectively. Neither lobe shows significant depolarization between 1.4 and 5 GHz. ## 3 Evolution of giant radio sources We investigate the evolution of the GRSs by plotting all known GRSs from the literature in a luminosity-linear size or P-D diagram along with the complete sample of 3CR radio sources (Laing, Riley & Longair 1983) with sizes between 50 kpc and 1 Mpc (Fig. 2). There is a clear deficit of GRSs with high radio luminosity, suggesting that the luminosity of radio sources decreases as they evolve. We superimpose the evolutionary tracks suggested by Kaiser et al. (1997) for three different jet powers on the P-D diagram and find that our sample of GRSs is roughly consistent with their self-similar models, in which the lobes lose energy through expansion and through radiative losses due to both inverse-Compton and synchrotron processes. In the models developed by Blundell et al. (1999) the luminosity declines more rapidly than along the Kaiser et al. tracks, providing a somewhat better fit to the upper envelope at large linear sizes. There is also a sharp cutoff in the sizes of the GRSs at about 3 Mpc, with only one exception, namely 3C236, which has a size of 5.7 Mpc.
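As an aside, spectral ages like those quoted in Sect. 2 derive from the standard synchrotron ageing relation, combining synchrotron and inverse-Compton losses. The sketch below uses a commonly adopted numerical convention (B in microgauss, break frequency in GHz, CMB equivalent field B<sub>IC</sub> = 3.18(1+z)<sup>2</sup> microgauss); the constant and the example field and break-frequency values are illustrative assumptions, not the measured parameters of 0437$`-`$244 or 1025$`-`$229.

```python
def spectral_age_myr(B_uG, nu_break_GHz, z):
    """Synchrotron + inverse-Compton ageing; returns an age in Myr."""
    B_ic = 3.18 * (1.0 + z) ** 2            # CMB equivalent field in microgauss
    return (1590.0 * B_uG ** 0.5 /
            ((B_uG ** 2 + B_ic ** 2) * (nu_break_GHz * (1.0 + z)) ** 0.5))

# illustrative lobe parameters at the redshift of 1025-229
print(round(spectral_age_myr(B_uG=10.0, nu_break_GHz=1.0, z=0.309), 1), "Myr")
```

The (1+z)<sup>4</sup> growth of the inverse-Compton term is what drives the dominance of inverse-Compton losses in the largest, lowest-field sources discussed next.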
To investigate whether there are sources larger than this cutoff which may have been missed, one requires low-frequency surveys with higher sensitivity to diffuse, low-brightness emission. Systematic searches for GRSs $`>`$ 3 Mpc using telescopes such as the GMRT would help clarify the late stages of their evolution. We study the relative importance of synchrotron and inverse-Compton losses in the evolution of GRSs using a plot of linear size against the ratio $`B_{eq}^2/(B_{ic}^2+B_{eq}^2)`$, which represents the ratio of the energy loss due to synchrotron radiation to the total energy loss due to both inverse-Compton and synchrotron processes (Fig. 3). It is clearly seen that synchrotron losses dominate over inverse-Compton losses for almost all sources below about a Mpc, while the reverse is true for the GRSs. This also illustrates that inverse-Compton losses are likely to severely constrain the number of GRSs at high redshifts, since the microwave background energy density increases as $`(1+z)^4`$. ## 4 Constraints on orientation and environment We have examined the suggestion that GRSs have more powerful radio cores than smaller sources (Gopal-Krishna, Wiita & Saripalli 1989), which may be responsible for their large linear sizes. In Fig. 4, the fraction of the emission from the core at an emitted frequency of 8 GHz is plotted against the total radio luminosity at 1.4 GHz for GRSs as well as for smaller sources. There is an inverse correlation between the degree of core prominence and the total radio luminosity. However, GRSs have core strengths similar to those of the smaller sources when matched in redshift or luminosity, implying that GRSs are similar objects to the normal radio sources except for being larger and perhaps older. In order to check the consistency of the GRSs with the unified scheme for radio galaxies and quasars, we have compared some of the orientation-dependent features, such as core strength and core variability, of giant radio galaxies and giant quasars. Although the available data are limited, these properties are consistent with the unified scheme. We have also examined the environments around GRSs using their arm-length ratios and misalignment angles. The arm-length ratios for GRSs are similar to those for the smaller sources, indicating that the environment might be asymmetric on Mpc scales. The misalignment angles for the GRSs are also similar to those of the smaller sources, suggesting that their large sizes are unlikely to be due to a steadier ejection axis.
no-problem/9909/astro-ph9909267.html
ar5iv
text
# Joint formation of bright quasars and elliptical galaxies in the young Universe ## 1 Introduction Quasars, and AGN in general, are often supposed to be powered by accretion of gas onto supermassive black holes (BHs). In this case, large dormant BHs are expected in the nuclei of nearby galaxies (Soltan 1982; Rees 1984; Cavaliere & Padovani 1986). Assuming accretion at a known fraction of the Eddington rate and a radiative efficiency of 10% in units of $`\dot{M}c^2`$, it is possible to estimate the expected mass function of dormant BHs. This mass function implies a number density of large BHs ($`>10^8\,M_{\odot }`$) compatible with the hypothesis that a BH is present in each bright bulge. Recent detailed observations of the cores of nearby galaxies have led to the discovery of massive dark objects in most cases (Magorrian et al. 1998; van der Marel 1999). Even though many details are still uncertain, many authors agree in claiming a correlation between the mass of the massive dark object and the host bulge. Interpreting these dark objects as the expected dormant BHs, the BH – bulge correlation strongly suggests a connection between quasar activity and the formation of galactic bulges. ## 2 The mass function of dormant black holes. It is assumed that the accretion of matter onto a BH of mass $`M_{\bullet }`$ is a fixed fraction $`f_{\mathrm{ED}}`$ of the Eddington rate, so that the quasar luminosity is $`L=f_{\mathrm{ED}}L_{\mathrm{ED}}`$, where $`L_{\mathrm{ED}}\simeq 3.4\times 10^4(M_{\bullet }/M_{\odot })L_{\odot }`$. The Eddington ratio $`f_{\mathrm{ED}}`$ is assumed to increase from 0.1 for the smallest BHs ($`10^6\,M_{\odot }`$) to 1 for the largest ones ($`10^{10}\,M_{\odot }`$). Then, the mass function of dormant BHs is calculated by integrating the luminosity function of quasars (see Salucci et al. 1999a for details). We assume that a significant fraction of AGNs are heavily obscured and give the dominant contribution to the hard X-ray cosmological background (see, e.g., Celotti et al. 1995; Comastri et al. 1995; Fiore et al. 1998). We include such objects using the model of Comastri et al. (1995). The resulting expected mass function of dormant BHs is shown in Fig. 1 (dashed line), for $`H_0=70`$ km/s/Mpc and $`\mathrm{\Omega }=1`$. The mass function of dormant BHs residing in nearby galaxies is estimated with two independent methods (see Salucci et al. 1999a for details). Firstly, the mass function of galactic bulges is convolved with a fiducial BH – bulge relation (a lognormal, with width 0.3 dex and average $`M_{\bullet }/M_{\mathrm{bulge}}\simeq 3\times 10^{-3}`$). Fig. 1 shows the resulting mass function (continuous line). Secondly, exploiting the correlation between the radio power from the cores of elliptical galaxies and the BH mass ($`P\propto M_{\bullet }^\alpha `$, where $`\alpha \simeq 2`$–$`2.2`$; see also Franceschini et al. 1998), the radio luminosity function of elliptical cores is convolved with a BH – radio power relation to obtain another estimate of the mass function of the dormant objects. The result is again shown in Fig. 1 (points with errorbars). ## 3 A favoured scenario The three determinations of the mass function of dormant BHs agree for reasonable values of the parameters involved. This highlights a dichotomy (in a statistical sense) in the behaviour of BHs.
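The luminosity-to-mass mapping underlying the dashed curve of Fig. 1 is simple enough to sketch; the example luminosity and the solar-luminosity constant below are ours, and the full calculation in Salucci et al. (1999a) of course integrates over the quasar luminosity function and redshift.

```python
# A quasar radiating at a fraction f_ED of the Eddington rate, with
# L_ED ~ 3.4e4 * (M_BH / M_sun) * L_sun, implies a BH mass
# M_BH = L / (f_ED * 3.4e4 * L_sun).
L_SUN = 3.85e26  # solar luminosity, W

def bh_mass(L_watts, f_ed):
    """Black-hole mass in solar masses implied by L = f_ED * L_Edd."""
    return L_watts / (f_ed * 3.4e4 * L_SUN)

# e.g. a bright quasar of ~1e39 W accreting near the Eddington limit
print(f"M_BH ~ {bh_mass(1e39, f_ed=1.0):.2e} M_sun")  # ~1e8 M_sun
```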
Larger objects ($`M_{\bullet }>10^8\,M_{\odot }`$) are hosted in ellipticals, shine as bright quasars at high redshift, almost at the Eddington luminosity, and are hardly reactivated and hardly obscured, while smaller BHs ($`M_{\bullet }<10^8\,M_{\odot }`$) are hosted in the bulges of spiral galaxies, shine also at low redshift with a lower luminosity (in Eddington units), and may be reactivated and obscured. The abundance of BHs in the bulges of spiral galaxies is more difficult to estimate. Upper limits have been determined by Salucci et al. (1999b) by analyzing nearly a thousand rotation curves of spirals. ## 4 A model for the joint formation of quasars and elliptical galaxies We have constructed an analytical model for the joint formation of ellipticals and quasars in the framework of hierarchical CDM models. The details are given in Monaco, Salucci & Danese (1999). In a bulge, quasar activity and the main burst of star formation, which mark the main “shining phase” of a galactic dark matter halo, are likely to be close in time (see, e.g., Hamann & Ferland 1993; see also Best, these proceedings). It is supposed that the shining phase of a galactic halo is delayed with respect to its dynamical formation. This delay is assumed to be small for the halos corresponding to large ellipticals, and increasingly larger for smaller halos. In this way the hierarchical order is inverted for halo shining. This is done to reproduce the apparent anti-hierarchical evolution of quasars while preserving a correlation between bulge and BH mass. The mass of the BH formed during the shining phase is assumed to depend on the halo mass, and to be modulated by the same variable which determines the morphological type, so as to obtain a BH – bulge relation. This variable is assumed to be either the spin of the dark matter halo or the fraction of the merging masses at the formation time. The model successfully reproduces the main observable quantities relative both to elliptical galaxies and quasars; see Monaco et al. (1999) for details. The results shown in this paper are for a flat CDM model with $`\mathrm{\Omega }=0.3`$, a cosmological constant and $`H_0=70`$ km/s/Mpc. Fig. 2 shows the predicted mass function for the dark matter halos of ellipticals, compared with that inferred from the luminosity function and reasonable hypotheses on the mass-to-light ratios of ellipticals. Fig. 3 shows the comparison between the predicted and observed quasar luminosity functions at different redshifts. The data are taken from Pei (1995), Boyle et al. (1993) ($`z=1`$ to 3) and Kennefick, Djorgovski and Meylan (1996) ($`z=4.4`$). ###### Acknowledgements. The authors thank E. Szuszkiewicz and C. Ratnam for discussions.
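To illustrate the first of the two estimates described in Sec. 2, the sketch below convolves a bulge mass function with the lognormal BH – bulge relation (mean ratio $`3\times 10^{-3}`$, width 0.3 dex). The Schechter-like bulge mass function is a toy placeholder of our own, not the one actually adopted by Salucci et al. (1999a).

```python
# Dormant-BH mass function as a lognormal convolution of a (toy)
# bulge mass function; all normalizations are arbitrary.
import numpy as np

def phi_bulge(logM):
    """Toy Schechter-like bulge mass function per dex (arbitrary units)."""
    m = 10.0 ** (logM - 11.0)
    return m ** (-0.2) * np.exp(-m)

def phi_bh(logMbh, ratio=3e-3, sigma=0.3):
    """BH mass function per dex: convolution over the lognormal relation."""
    logMbulge, dlog = np.linspace(8.0, 13.0, 500, retstep=True)
    mu = logMbulge + np.log10(ratio)        # mean log BH mass per bulge
    kernel = np.exp(-0.5 * ((logMbh - mu) / sigma) ** 2) \
             / (sigma * np.sqrt(2.0 * np.pi))
    return np.sum(phi_bulge(logMbulge) * kernel) * dlog

for logm in (7.0, 8.0, 9.0):
    print(f"phi_BH(10^{logm:.0f} M_sun) = {phi_bh(logm):.3e}")
```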
no-problem/9909/hep-th9909158.html
ar5iv
text
# 1 Introduction ## 1 Introduction There has been an accumulation of evidence in favor of the quark model of hadrons and we can no longer think of any other substitute for it. Yet, no isolated quarks have been observed to date, and we are inclined to think that observation of isolated quarks is, in principle, impossible. This is the hypothesis of quark confinement, and it has been further extended to that of color confinement that implies not only the unobservability of quarks but also of all the isolated colored particles such as quarks and gluons. Then a natural question is raised of whether or not we can account for this hypothesis within the framework of the conventional quantum chromodynamics (QCD) dealing with the gauge interactions of quarks and gluons. The answer to this question is affirmative and the detailed mathematical proof of color confinement has been published elsewhere \[2-6\]. In this article, therefore, we shall follow the flow of ideas underlying the proof in a qualitative manner. The problem of color confinement may be decomposed into two steps. The first step consists of finding a consensus of interpretations of color confinement. Unless it is properly settled we do not know what we have to prove in the second step. Because of the importance of this subject many authors have proposed various interpretations. A typical example is Wilson’s area law for the loop correlation function in the lattice gauge theory . When it is obeyed the interaction between a quark and an antiquark is given by a confining linear potential. Another example is given by coherent superposition of magnetic monopoles in the vacuum state \[8-13\]. This is dual to the superconducting vacuum based on coherent superposition of charged objects such as the Cooper pairs. Corresponding to the superconductor of the second kind a pair of magnetic monopoles can be connected by a quantized magnetic flux forming a hadronic string whose energy is proportional to the distance between them. Then the situation is similar to the preceding example. In these examples one introduces a topological structure through monopoles, strings and instantons into the configuration space. In the present paper, however, we shall consider a different topological structure in the state vector space. For this purpose we look for a known example of confinement within the framework of known field theories, and we find a prototype example in quantum electrodynamics (QED) . When the electromagnetic field is quantized in a covariant gauge, say, in the Fermi gauge, three kinds of photons emerge, namely, transverse, longitudinal and scalar photons, but only the transverse photons are subject to observation leaving the other two unobservable. We recognize that this is indeed a typical example of confinement, and we may be able to find some clues to color confinement by studying closely the mechanism of confinement of longitudinal and scalar photons in QED. For this reason we analyze its mechanism in Sec. 2 so that we can generalize it and apply it to QCD. One of the profound features of gauge theories is the Becchi-Rouet-Stora (BRS) invariance and its introduction is vital to the interpretation of confinement. Therefore, we shall describe some of the basic properties of this invariance in Sec. 3. The strong interactions described by QCD possess a novel feature called asymptotic freedom , and in Sec. 4 we shall discuss how this aspect of strong interactions drew our attention and how the non-Abelian gauge theory entered the game. 
Finally in Sec. 5 we shall combine BRS invariance with asymptotic freedom to prove color confinement. ## 2 Quantum Electrodynamics and Indefinite Metric When the electromagnetic field is quantized in a covariant gauge, say, in the Fermi gauge, we find transverse, longitudinal and scalar photons, but the latter two are never observed. We may interpret it as an example of confinement, and we have at least three alternative ways of explaining it. First, we can refer to the representations of the Poincaré group for massless particles . Then, massless particles are known to have only two directions of polarization no matter what their spin is. Thus photons are always transversely polarized and the same would be true with gluons if they could be observed. The second method is to employ the Coulomb gauge by keeping only the transverse photons from the start. The remnants of unobservable photons manifest themselves in the form of the Coulomb potential. This method is applicable, however, only to the linear Abelian gauge theories such as QED. The third and the most useful method is the introduction of a subsidiary condition such as the Lorentz condition. Quantization of the electromagnetic field in a covariant gauge forces us to introduce indefinite metric which is inherited from the Minkowski metric. Thus the whole state vector space in QED can no longer possess the positive-definite metric, and for the physical interpretation of the theory we have to eliminate indefinite metric by imposing the Lorentz condition on the state vectors to select observable or physical states. In order to execute this program let us quantize the free electromagnetic field in the Fermi gauge and for a given momentum we have four directions of polarization, namely, two transverse, one longitudinal and one scalar. Thus we have four kinds of photons specified by the directions of polarization. The canonical quantization then implies that the scalar photons are represented by negative norm states. This is a consequence of the manifest covariance of the quantization of the vector field in the Minkowski space. The emergence of indefinite metric indicates that observable states occupy only a portion of the whole state vector space called the physical subspace. In order to define such a subspace we introduce a subsidiary condition known as the Lorentz condition. Let us consider the four-divergence of the vector field, then it represents a free massless field even in the presence of the interactions. We decompose it into a sum of positive- and negative- frequency parts corresponding to destruction and creation operators, respectively. We find that the photons involved in this operator are special combinations of the longitudinal and scalar photons in the amplitude. We shall call them a-photons, then an a-photon state has zero norm. We can introduce an alternative combination of longitudinal and scalar photons called b-photons in such a way that a b-photon state also has zero norm. Thus for a given momentum we have two transverse (t-) photons, an a-photon and a b-photon. Although both an a-photon state and a b-photon state have zero norm, their inner product is non-vanishing so that they are metric partners. A physical state is defined as such a state that is annihilated by applying the positive frequency part of the four-divergence of the vector field. This is the Lorentz condition. We can easily verify that the S matrix in QED transforms a physical state into another physical state since it commutes with the four-divergence. 
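To make the zero-norm combinations concrete, the following is a standard Gupta–Bleuler sketch for a single momentum $`k`$ in the Fermi gauge (the normalization and phase conventions are ours, not necessarily the author's): $$a(k)=\frac{1}{\sqrt{2}}\left[a_3(k)-a_0(k)\right],\qquad b(k)=\frac{1}{\sqrt{2}}\left[a_3(k)+a_0(k)\right],$$ where $`a_3`$ and $`a_0`$ destroy longitudinal and scalar photons and $`[a_0(k),a_0^{\dagger }(k)]=-1`$ encodes the indefinite metric. One then finds $$\lVert a^{\dagger }(k)|0\rangle \rVert ^2=\lVert b^{\dagger }(k)|0\rangle \rVert ^2=0,\qquad \langle 0|a(k)\,b^{\dagger }(k)|0\rangle =1,$$ so a-photon and b-photon states each have zero norm but are metric partners of one another, and the Lorentz condition takes the simple form $`a(k)|\mathrm{phys}\rangle =0`$ for every $`k`$.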
This is one of the general features of the subsidiary condition. Also we can easily verify that the b-photons are excluded from the physical subspace. Therefore, we have only t-photons and a-photons in the physical states. Then we can show that the inner product of a physical state involving at least one a-photon with another physical state vanishes identically. In other words, a-photons give no contributions to observable quantities, and both a- and b-photons escape detection. This is the confinement mechanism of the longitudinal and scalar photons. In QED only the transverse photons remain observable. In QCD, however, not only longitudinal and scalar gluons but also transverse gluons are unobservable. Thus, there are some essential differences in the nature of confinement between QED and QCD. In the former case confinement is kinematical in the sense that it could be understood without recourse to dynamics of the system, whereas in the latter case it is dynamical in nature as the proof depends sensitively on the dynamical properties of the system. ## 3 Quantum Chromodynamics and BRS Invariance As we shall see in the next section strong interactions of quarks are mediated by a non-Abelian gauge field corresponding to the $`SU(3)`$ color symmetry. Thus we shall discuss one of the most characteristic features of gauge theories known as the BRS invariance in this section . Classical electrodynamics is gauge-invariant. Field strengths expressed in terms of the vector field are invariant under the local or space-time-dependent gauge transformations of the latter. Given a source term, therefore, the solution of the equation for the vector field is not uniquely given, and this non-uniqueness is an obstacle to quantization. In order to overcome this difficulty we add to the gauge-invariant Lagrangian a term violating the local gauge invariance. This extra term is called the gauge-fixing term and was first introduced by Fermi. Later it has been generalized so as to include an arbitrary parameter called the gauge parameter. In the original form introduced by Fermi this parameter is equal to unity. After quantization we find that we have to introduce indefinite metric into the state vector space and that the divergence of the vector field commutes with the S matrix. Because of the inclusion of the gauge-fixing term the field equation deviates from the classical Maxwell equation by a term proportional to the four-divergence of the vector field. It so happens that a matrix element of this four-divergence between two physical states vanishes identically because of the Lorentz condition, and the classical Maxwell equation is recovered in the physical subspace. In this way we find, despite the introduction of the gauge-fixing term, that expectation values of gauge-invariant quantities and the S matrix elements in the physical subspace are independent of the choice of the gauge parameter because of the congeniality between the gauge-fixing term and the subsidiary condition. In what follows we shall extend this approach to QCD. There are many essential differences between QED and QCD, however. The former is an Abelian gauge theory described by a linear field equation, whereas the latter is a non-Abelian gauge theory described by a non-linear field equation. In both cases the gauge-invariant part of the Lagrangian is given by the square of the field strength. So, let us introduce the gauge-fixing term in QCD assuming the same structure as in QED. 
Then we recognize that it does not work because observable quantities depend explicitly on the gauge parameter. Another difficulty arises from the fact that the four-divergence of the gauge field is no longer a free field, and this prevents us from defining its positive frequency part. In other words, the Lorentz condition cannot be employed to define physical states in QCD. Thus we are obliged to find a device to overcome these difficulties and to this end we shall introduce the Faddeev-Popov ghost fields. In order to eliminate the gauge-dependence of physically relevant quantities Faddeev and Popov have proposed a procedure of averaging the path integral over the manifold of gauge transformations. We skip the mathematical detail here and refer to the original paper , but we should mention that this procedure resulted in a new additional term in the Lagrangian called the Faddeev-Popov (FP) ghost term. This term involves a pair of Hermitian scalar fields, but they are anticommuting and consequently violate Pauli’s theorem on the connection between spin and statistics. For this reason they are called ghost fields. Pauli’s theorem is based on three postulates, (1) Lorentz invariance, (2) local commutativity or microscopic causality and (3) positive-definite metric for state vectors, and the FP ghost fields violate the last one obliging us to introduce indefinite metric into the theory. Thus we face again the problem of eliminating indefinite metric from the theory with the help of an appropriate subsidiary condition to select physical states out of the whole state vector space. When physical states are so defined as those that are annihilated by applying a certain operator, that operator should commute with the S matrix as does the four-divergence of the vector field in QED. In order to find such an operator a novel symmetry discovered by Becchi, Rouet and Stora is extremely useful. Although this symmetry was originally utilized in renormalizing QCD, it plays an essential role in the proof of color confinement in QCD. In a classical gauge theory a local gauge transformation is specified by a function of the space-time coordinates called the gauge function and the classical theory is invariant under such a transformation. This local gauge invariance is lost when the gauge-fixing and FP ghost terms are introduced. Besides, local gauge transformations are defined only for the color gauge field and the quark fields, but they are not even defined for FP ghost fields. The BRS transformations for the color gauge field and the quark fields are given by replacing the gauge function by one of the FP ghost fields in infinitesimal gauge transformations. Since we have a pair of ghost fields we introduce, correspondingly, a pair of BRS transformations. Then a question is raised of how to define BRS transformations of the ghost fields since their gauge transformations are not defined. Fortunately, this problem has a simple but beautiful solution. Their BRS transformations are introduced by demanding the invariance of the total Lagrangian under them. The total Lagrangian including the gauge-fixing and FP ghost terms is no longer invariant under local gauge transformations, but it is invariant under the global BRS transformations. Noether’s theorem then tells us that there must be a pair of conserved quantities corresponding to a pair of BRS symmetries. They are Hermitian and called the BRS charges. 
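For concreteness, the transformation described in words above can be written down explicitly; the following is the standard textbook form (sign and coupling conventions vary between authors; here $`\psi `$ denotes a quark field, $`T^a`$ the color generators, $`f^{abc}`$ the structure constants, $`c`$ and $`\bar{c}`$ the FP ghost pair, and $`B^a`$ the Nakanishi–Lautrup auxiliary field): $$\delta _BA_\mu ^a=D_\mu c^a\equiv \partial _\mu c^a+gf^{abc}A_\mu ^bc^c,\qquad \delta _B\psi =igc^aT^a\psi ,$$ $$\delta _Bc^a=-\frac{g}{2}f^{abc}c^bc^c,\qquad \delta _B\bar{c}^a=B^a,\qquad \delta _BB^a=0.$$ Acting twice with $`\delta _B`$ gives zero on every field, which is the nilpotency discussed in the next paragraph.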
As mentioned before there are two kinds of Hermitian FP ghost fields and correspondingly a BRS charge must involve one of the ghost fields. In what follows we keep only one of these two charges for simplicity. The BRS charge that we keep is anticommuting just as the FP ghost field, and consequently the square of the BRS charge vanishes and it is called nilpotent. The Hermiticity and nilpotency of the BRS charge would imply indefinite metric since otherwise it would be a null operator . The nilpotency is important and allows us to introduce the concept of cohomology in the theory. After a long detour we are going to introduce an appropriate subsidiary condition. Physical states are defined as those states that are annihilated by applying the BRS charge . The FP ghost fields do not appear in the conventional QED but we can also introduce them although they are non-interacting fields. Then we can combine the Lorentz condition with the additional condition implying the absence of FP ghosts to define the physical states. When these conditions are satisfied, we can prove that physical states so defined are annihilated by the BRS charge in QED. The BRS charge is the generator of the BRS transformation and the BRS transform of an operator is given by the commutator or anticommutator of that operator with the BRS charge, and this transformation is also nilpotent. An operator which is the BRS transform of another operator is called an exact operator, then it is clear that the matrix element of an exact operator between a pair of physical states vanishes. The equation for the non-Abelian gauge field deviates from the classical Maxwell equation and in fact the divergence of the field strength plus the color current does not vanish but is equal to a certain exact operator, which will be referred to as an exact current hereafter. Therefore, the classical Maxwell equation is recovered when we take the matrix element of the field equation between a pair of physical states. Furthermore, the BRS charge commutes with the S matrix. Thus the scenario in QED is reproduced almost exactly. When single quark states and single gluon states are unphysical these particles are unobservable and consequently confined. Thus the problem of color confinement reduces to that of proving that they are unphysical states. We shall evaluate the expectation value of the exact current in a single quark state or a single gluon state. If they should belong to physical states the expectation values in these states would vanish identically, so that non-vanishing of the expectation values would be a direct indication that these particles are unphysical and confined. The four-divergence of the exact current vanishes, and we can give a set of Ward-Takahashi identities for Green’s functions involving the exact current \[2-4\]. By making use of the above set of Ward-Takahashi identities we can prove that the expectation value of the exact current in a single colored particle state survives when the exact current as applied to the vacuum state does not generate a massless spin zero particle. Therefore, the absence of such a massless particle is a sufficient condition for color confinement \[2-4\]. In order to check its absence we introduce the vacuum expectation value of the time-ordered product of the gauge field and the exact current and evaluate the residue C of the massless spin zero pole of the Fourier transform of this two-point function. 
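The structure just described can be summarized compactly (a schematic sketch in modern notation): $$Q_B^2=0,\qquad \mathcal{V}_{\mathrm{phys}}=\mathrm{Ker}\,Q_B=\{|\mathrm{phys}\rangle :Q_B|\mathrm{phys}\rangle =0\},\qquad \mathcal{H}_{\mathrm{phys}}=\mathrm{Ker}\,Q_B/\mathrm{Im}\,Q_B.$$ Since $`Q_B`$ is nilpotent, $`\mathrm{Im}\,Q_B\subset \mathrm{Ker}\,Q_B`$ and consists of zero-norm states, so the quotient above (the cohomology of $`Q_B`$) is the space of genuinely observable states. Matrix elements of exact operators, i.e., those of the form $`\{Q_B,\mathcal{O}\}`$ (or $`[Q_B,\mathcal{O}]`$ for bosonic $`\mathcal{O}`$), between physical states vanish, which is the statement used repeatedly in the text.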
The four-divergence of this two-point function is proportional to this constant C except for a trivial kinematical factor, and the divergence can be cast in the form of an equal-time commutator. By checking this equal-time commutator closely we find that C is the sum of a constant a and the Goto-Imamura-Schwinger (GIS) term. The constant a is equal to the inverse of the renormalization constant of the color gauge field. These constants C and a satisfy distinct renormalization group (RG) equations and boundary conditions. We shall not enter this subject here since the mathematical detail has been given elsewhere \[2-5\], but we note that vanishing of a automatically leads to vanishing of C, so that color confinement is realized. Indeed, it has been known for some time that gluons are confined when a vanishes , but now with the help of the BRS invariance we could conclude that not only gluons but also all the colored particles are simultaneously confined. We shall come back to this subject again in Sec. 5. ## 4 Asymptotic Freedom In this section we shall review briefly how and why our attention was drawn to the non-Abelian gauge theory in describing strong interactions. In particle physics strongly interacting particles such as nucleons and pions are called hadrons. Hadrons are composite particles of quarks and antiquarks, however, and we have to study the origin of the strong interactions of quarks. We already know that strong interactions are mediated by the color gauge field and the quanta of this field are called the gluons since they glue up quarks together to form hadrons. Dynamics of quarks and gluons is called QCD as mentioned before. In the sixties, experiments on the deep inelastic scattering of electrons on protons were carried out. The differential cross-section was measured by specifying the energy and direction of electrons without observing the hadrons in the final states. Then, apart from kinematical factors, this differential cross-section can be expressed as a linear combination of two structure functions. They are functions of the square of the momentum transfer and the energy loss of the electron in the laboratory system. When these two variables increase indefinitely the two structure functions tend to be functions of the ratio of these two variables except for trivial kinematical factors. This characteristic behavior of structure functions is called the Bjorken scaling , and it is considered to be an empirical manifestation of the properties of strong interactions. What do we learn from this? In 1969 Feynman proposed the parton model and assumed that a nucleon consists of point-like partons moving almost freely inside the nucleon . In order to keep the partons inside the nucleon, however, the four-momentum of a parton must be equal to a fraction x of the total four-momentum of the nucleon. The partons may be identified with the quarks and, since x is identified with the ratio of the two kinematical variables referred to above, the distribution of the fraction x has been shown to be related to the structure functions. From the success of the parton model in reproducing the Bjorken scaling we may infer that quarks inside the hadrons are almost free and that the interactions of quarks turn out to be weaker at shorter distances. This is a distinctive feature of strong interactions and we may express it in the momentum space as follows: The probability of a process involving large momentum transfer in strong interactions is small.
We look for a model satisfying this condition and find that only non-Abelian gauge interactions meet this requirement with the help of RG . The concept of RG was first introduced by Stueckelberg and Petermann in 1953 , and it was further advanced by Gell-Mann and Low in QED in 1954 . Let us consider a dielectric medium and put a positive test charge inside, then the medium is polarized, namely, negative charges are attracted and positive ones are repelled by this test charge. As a consequence it induces a new charge distribution in the medium. The total charge inside a sphere of radius r around the test charge is a function of r and we call it the running charge. The vacuum is an example of the dielectric media because of its ability of being polarized – the vacuum polarization. In this case the test charge is called the bare charge and the total charge inside a sphere of a sufficiently large radius is called the renormalized charge. The running charge is a function of the radius r, but it can also be regarded as a function of momentum transfer through the Fourier transformation. The bare charge then corresponds to the limiting value of the running charge for infinite momentum transfer. Gell-Mann and Low have proved on the basis of the RG method that given a finite renormalized charge the bare charge is equal to a certain finite constant independent of the value of the renormalized one or it is divergent . The Bjorken scaling phrased in terms of RG implies that the bare coupling constant must be equal to zero. We shall refer to this property as asymptotic freedom (AF), and the non-Abelian gauge theory is the only known example in which AF is realized as clarified by Gross and Wilczek and by Politzer . The origin of AF may be traced back to the fact that the vector field introduces indefinite metric needed to realize AF and that the non-Abelian gauge theory is the only example involving non-linear interactions of the vector field. Thus starting from the empirical Bjorken scaling we have finally reached the non-Abelian gauge theory of strong interactions, namely, QCD. ## 5 Color Confinement Now we are ready to present the proof of color confinement, at least verbally, by combining arguments given in preceding sections. In QED the square of the ratio of the renormalized charge to the bare one is equal to the renormalization constant of the electromagnetic field. It is equal to the inverse of the dielectric constant of the vacuum relative to the empty geometrical space. Usually the dielectric constant of a dielectric medium is defined relative to the vacuum, but here we define it relative to the empty geometrical space or the void. This dielectric constant of the vacuum is larger than unity as a consequence of the positive-definite metric of the physical subspace, or more intuitively, it is a consequence of the screening effect due to the vacuum polarization. Then, let us consider a fictitious case in which the dielectric constant of the vacuum is smaller than unity. In this case we have antiscreening instead of screening when a test charge is placed in this fictitious vacuum, and such a vacuum is realized when a pair of virtual charged particles of indefinite metric should contribute to the vacuum polarization. In this case the running charge would be an increasing function of the radius r at least for small values of r. 
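As a concrete illustration of asymptotic freedom (not part of the original argument, which is deliberately non-perturbative), the standard one-loop running coupling can be evaluated numerically; the value of $`\mathrm{\Lambda }`$ below is illustrative and quark-mass thresholds are ignored.

```python
# One-loop running QCD coupling,
# alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)),
# which decreases logarithmically with the momentum scale Q.
import math

def alpha_s(Q_gev, n_f=5, lam_gev=0.2):
    """One-loop running coupling at scale Q (GeV); Lambda is illustrative."""
    return 12.0 * math.pi / ((33 - 2 * n_f) * math.log(Q_gev**2 / lam_gev**2))

for q in (2.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({q:7.1f} GeV) = {alpha_s(q):.3f}")
```

The printed values fall monotonically with increasing Q, which is the momentum-space statement of the antiscreening discussed above: the running charge grows at large distances and shrinks at short ones.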
Next we shall consider an extreme case of the vanishing dielectric constant, then a small test charge would attract an unlimited amount of like charges around it thereby bringing the system into a catastrophic state of infinite charge. Nature would take safety measures to prevent such a state from emerging, and a possible resolution is to bring another test particle of the opposite charge. The total charge of the whole system is equal to zero and charge confinement would be realized. In QED, however the dielectric constant of the vacuum or the inverse of the renormalization constant is larger than unity, and the above scenario reduces to a mere fiction. The situation in QCD is completely different since it allows introduction of indefinite metric in the vacuum polarization and AF is one of its manifestations. In QCD what corresponds to the dielectric constant of the vacuum in QED is the inverse of the renormalization constant of the color gauge field denoted by a in Sec. 3. If a should vanish we would encounter a scenario similar to the one mentioned above and a test color charge would induce an intolerable catastrophic state. In Sec. 3 we have shown that such a state is excluded by means of the subsidiary condition that selects physical states. Therefore, what can be realized are states of zero color charge and this is precisely color confinement. Unlike electric charge, color charge is not a simple additive quantum number but a member of a Lie algebra $`su(3)`$, so that physically realizable states should belong to the one-dimensional representation of this algebra. Thus the entire problem of color confinement reduces to the proof that the constant a vanishes. Before presenting its proof we have to introduce the concept of the equivalence class of gauges . When the difference between two Lagrangian densities is an exact operator we say that these two Lagrangian densities belong to the same equivalence class of gauges. For instance, two Lagrangian densities corresponding to two distinct values of the gauge parameter belong to the same equivalence class. In QCD hadrons are represented by BRS invariant composite operators \[30-32\], and the S matrix elements for hadron reactions are obtained by applying the reduction formula of Lehmann, Symanzik and Zimmermann to Green’s functions defined as the vacuum expectation values of the time-ordered products of the BRS invariant composite operators. Then we can readily prove that the S matrix elements for hadron reactions are the same within the same equivalence class of gauges . Color confinement signifies that the unitarity condition for the S matrix in the hadronic sector is saturated by hadronic intermediate states. That means that quarks and gluons have no place to show up in the unitarity condition just as longitudinal and scalar photons never appeared in the S matrix elements in QED. Therefore, we may take it for granted that the concept of color confinement is gauge-independent within the same equivalence class. Then we come back to the evaluation of the constant a. First, it should be stressed that a can be evaluated exactly as a function of the gauge coupling constant and the gauge parameter thanks to AF . These two parameters define a two-dimensional parameter space, which is then decomposed into three domains according to the value of a, namely, zero, infinity and finite. It should be stressed here that the existence of these three domains can be proved without recourse to perturbation theory. 
Of these three domains, color confinement is manifestly realized in the first, and it should also prevail in the other two because of the gauge-independence of the concept of color confinement. Evaluation of a by means of RG based on AF is a very interesting mathematical problem, but we shall refer to the original paper for the technical detail. Finally, it should be stressed that confinement as has been discussed in this paper is realized only when we have an unbroken non-Abelian gauge symmetry . When a certain gauge symmetry is spontaneously broken the exact current generates a massless spin zero particle as the Nambu-Goldstone boson and our proof of confinement breaks down. For instance, the electroweak interactions are formulated on the gauge group $`SU(2)\times U(1)`$, but spontaneous symmetry breaking reduces the gauge symmetry to the Abelian $`U(1)`$ corresponding to the electromagnetic gauge symmetry. Thus the electroweak interactions do not possess any unbroken non-Abelian gauge symmetry and are not capable of confining any particle. To conclude, we have presented the flow of ideas towards an intuitive understanding of the mechanism of color confinement without recourse to mathematical detail, but interested readers are encouraged to refer to the original articles. Acknowledgement: The financial support of the Academy of Finland under the Project no. 163394 is gratefully acknowledged. The authors are grateful to Professor A. N. Mitra for kindly inviting us to contribute this article to the INSA book.
no-problem/9909/cond-mat9909232.html
ar5iv
text
# Exciton Beats in GaAs Quantum Wells: Bosonic Representation and Collective Effects ## Abstract We discuss light-heavy hole beats observed in transient optical experiments in GaAs quantum wells in terms of a free-boson coherent state model. This approach is compared with descriptions based on few-level representations. Results lead to an interpretation of the beats as due to classical electromagnetic interference. The boson picture correctly describes photon excitation of extended states and accounts for experiments involving coherent control of the exciton density and Rayleigh scattering beating. The optical properties of semiconductor quantum wells (QW) and, in particular, the coherent dynamics of excitons following resonant excitation with ultrafast laser pulses have attracted much attention in recent years . It is generally accepted that there is a transfer of coherence between the optical field and the QW that disappears in a characteristic time $`T_2`$ (picoseconds for GaAs) after the laser is turned off. However, the questions of how the coherence is actually induced and of the nature of the resulting coherent state of the solid are poorly understood. In this paper, we address these points by re-examining the long-standing problem of the (classical vs. quantum) nature of the ubiquitous beats associated with the light-hole (LX) and heavy-hole (HX) excitons, which are observed in transient optical experiments on QW . To this end, we consider the coherent behavior of excitons using the simplest albeit non-trivial model where they are treated as non-interacting bosons. Hence, our work relates directly to studies for which nonlinear effects are not important, but it is not aimed at explaining the four-wave-mixing (FWM) experiments that dominate the field . Nevertheless, since nonlinear effects are, typically, weak compared with harmonic contributions, the free-boson picture provides in all cases the correct lowest-order wavefunction of the photoexcited solid. It should be emphasized that our results apply only to excitons in weakly-localized states. The structure of this paper is as follows. First, we discuss the bosonic representation of excitons in QW’s, obtain the exact collective state of a QW driven by an arbitrary laser pulse, and show that its properties vis-a-vis coherence are identical to those of coherent optical fields . From this, it follows that laser-induced coherence is a collective property of the exciton field that is not possessed by individual excitons. Using the many-exciton wavefunction, we provide a quantitative description of recent experiments where two laser pulses are used to coherently control the HX density in a GaAs QW. We also analyze the case where a single pulse excites both the LX and HX states and argue that the resulting beats are not due to (single-exciton) quantum interference, as advocated by few-level models, but to polarization interference associated with the emission of phased arrays of classical antennas. Finally, we consider Rayleigh scattering experiments and show that the bosonic approach accounts for the quadratic rise in the intensity at short times that is observed in the experiments . At small electric fields and near band-gap excitation, the quanta of the induced polarization field, $`\mathbf{P}`$, are the excitons . Since their number is proportional to the illuminated volume of the sample, $`V`$, it is clear that the quantum description of an excited QW (and, as we shall see, that of the beats) becomes, in some sense, a many-exciton problem.
Our discussion concerns itself with extended states. The atomic-like scheme where excitons are represented by a collection of distinguishable few-level systems is the correct representation in the strongly localized regime, as in the case of quantum dots or excitons bound to islands . It should be emphasized that, in the latter picture, the optically-induced coherence relies on intra-level quantum entanglement and, therefore, that it is a single- or, at most, a few-exciton effect. For our approach to work, the areal density of photogenerated excitons, $`n`$, must be sufficiently low so that the excited state of the solid can be described as a system of non-interacting bosons . Hence, our discussion applies to a GaAs QW excited with low intensity pulses using photon energies in the vicinity of the LX and HX absorption lines. Note that, under these conditions, the bosonic picture follows directly from the semiconductor Bloch equations and BCS-like fermionic theories . We are aware that our approach ignores all but a small fraction of the QW Hilbert space. However, the experiments considered here are well described by models that rely on the same restricted basis and, thus, we conclude that the neglected sectors of the Hilbert space (e.g., the electron-hole continuum) play only a secondary role in many cases of interest. For a perfect QW (the weak-disorder limit is discussed later), the Hamiltonian describing free excitons coupled to a classical electromagnetic field is : $$\widehat{H}=\sum_{\mathbf{k},\alpha ,M}\hbar \omega _{\mathbf{k},\alpha }A_{\mathbf{k},\alpha ,M}^{\dagger }A_{\mathbf{k},\alpha ,M}-\int \mathbf{P}\cdot \mathbf{E}(\mathbf{r},t)\,dV$$ (1) where $`\mathbf{E}`$ is the electric field. $`A_{\mathbf{k},\alpha ,M}^{\dagger }`$ is a boson operator that creates a QW exciton with in-plane momentum $`\mathbf{k}`$, valence band index $`\alpha `$ and pseudo angular momentum $`M`$. Relevant to our problem are the lowest-lying optically active ($`M=\pm 1`$) heavy- ($`\alpha =H`$) and light-hole ($`\alpha =L`$) QW states. The single-particle energy is $`\hbar \omega _{\mathbf{k},\alpha }=E_\alpha ^g-ϵ_\alpha +K_E`$, where $`K_E=\hbar ^2k^2/2m_\alpha `$ is the center-of-mass kinetic energy, $`m_\alpha `$ is the exciton mass, $`E_\alpha ^g`$ is the relevant QW gap and $`ϵ_\alpha `$ is the exciton binding energy. Consider normal incidence, i.e., the light couples only to the state at $`\mathbf{k}=\mathbf{0}`$. Using the fact that typical QW widths are considerably smaller than the light wavelength, we write the interaction term as $`-V\sum_MP_ME_M`$, where $$P_M=\frac{1}{\sqrt{V}}\sum_{\alpha }G_{\alpha ,M}\left(A_{\mathbf{0},\alpha ,M}^{\dagger }+A_{\mathbf{0},\alpha ,M}\right)$$ (2) and $`E_M(\mathbf{r}=0,t)`$ are, respectively, the $`M=\pm 1`$ components of $`\mathbf{P}`$ and $`\mathbf{E}`$, and $`G_{\alpha ,M}`$ are constants proportional to the dipole matrix element . The Hamiltonian (1) is equivalent to that of a set of independent harmonic oscillators (the exciton $`H`$- and $`L`$-modes at $`\mathbf{k}=\mathbf{0}`$) driven by an external field. For arbitrary driving force and initial state, this problem can be solved exactly by applying a time-dependent Glauber transformation .
In particular, if the QW is initially in its ground state and the external field is turned on at $`t=-\mathrm{\infty }`$, the exact state of the exciton field at time $`t`$ is $$|\mathrm{\Xi }\rangle =e^{-i\widehat{H}_0t/\hbar }\prod_{\alpha ,M}e^{-\left|K_{\alpha ,M}\right|^2/2}e^{iK_{\alpha ,M}A_{\mathbf{0},\alpha ,M}^{\dagger }}|0\rangle $$ (3) where $`\widehat{H}_0`$ is the free exciton Hamiltonian and $$K_{\alpha ,M}(t)=\frac{\sqrt{V}G_{\alpha ,M}}{2\hbar }\int_{-\mathrm{\infty }}^tE_M(s)e^{i\omega _{\mathbf{0},\alpha }s}\,ds.$$ (4) The wavefunction (3) is a product of states of individual modes that is formally identical to the so-called (multimode) coherent state proposed by Glauber as the quantum counterpart to classical light . As for the photon case, exciton coherent states are fully characterized by the complex function $`K_{\alpha ,M}(t)`$ which defines the classical phase. Because the system is not nonlinear, the induced polarization is exactly given by $$\langle \mathrm{\Xi }|P_M|\mathrm{\Xi }\rangle =\frac{2}{\sqrt{V}}\sum_{\alpha }G_{\alpha ,M}\,\mathrm{Re}\left(ie^{-i\omega _{\mathbf{0},\alpha }t}K_{\alpha ,M}(t)\right),$$ (5) while the areal density of $`\alpha `$-excitons with momenta $`\mathbf{k}`$ and $`M`$ is $`n_{\mathbf{k},\alpha ,M}=(N_{\mathbf{0},\alpha ,M}l/V)\delta _{\mathbf{k},\mathbf{0}}`$, where $$N_{\mathbf{0},\alpha ,M}=\langle \mathrm{\Xi }|A_{\mathbf{0},\alpha ,M}^{\dagger }A_{\mathbf{0},\alpha ,M}|\mathrm{\Xi }\rangle =|K_{\alpha ,M}(t)|^2$$ (6) and $`l`$ is the width of the well. Here, we note that the linear susceptibility, as easily obtained from (5), is identical to that of the analogous few-level model. These expressions apply strictly to non-interacting excitons. Coupling with the environment and interactions between excitons lead to dephasing in that the pure state (3) evolves into a statistical mixture of coherent states with random phases. It is beyond the scope of this work to provide a microscopic account of these interactions. Instead, we will rely on the exponential decay approximation and treat dephasing phenomenologically. Coherent control theory.— We now analyze recent experiments where light pulses are used to control the exciton density in a GaAs-QW . Within the bosonic description, these results constitute a striking demonstration of collective behavior. The experiments rely on two phase-locked pulses tuned close to an exciton mode of energy $`\hbar \omega `$ and separated by a time delay $`\tau `$ which serves as the control parameter for the exciton density ($`n=\sum_{\mathbf{k},\alpha ,M}n_{\mathbf{k},\alpha ,M}`$ is probed indirectly by monitoring the reflectivity of a third pulse or the luminescence intensity ). The data can be fitted to $`n=2n_s[1+\mathrm{cos}(\omega \tau )e^{-\tau /T_2}]`$ where $`n_s`$ represents the exciton density generated by a single pulse. Hence, small changes in the time delay ($`\pi /\omega \sim 1`$ fs) lead to large variations of $`n`$ from zero, corresponding to destructive interference between the pulses, to nearly four times the value for one pulse . These results can be easily explained using the expressions derived previously. To account for the double pulse, we write $`E=E_1(t)+E_2(t)`$ where $`E_1(t)=F(t)`$, $`E_2(t)=F(t-\tau )`$ and $$F(t)=E_0\,\mathrm{sin}(\mathrm{\Omega }t)\,e^{-(t/T)^2}.$$ (7) $`T`$ and $`\mathrm{\Omega }`$ are the pulse width and central frequency.
From (4), (6) and (7), and assuming that the pulses couple only to a single $`\mathbf{k}=\mathbf{0}`$ mode of frequency $`\omega `$ (the extension to many modes is straightforward), we obtain the areal density $$n=|K(\mathrm{\infty })|^2(l/V)=2n_s[1+\mathrm{cos}(\omega \tau )].$$ (8) $`n_s=(gE_0T/\hbar )^2\mathrm{exp}[-\omega ^2T^2(1+r^2)/4]\mathrm{sinh}[r\omega ^2T^2/2]`$ is the average density created by one pulse, $`g`$ is a constant and $`r=\mathrm{\Omega }/\omega `$ measures the detuning between the laser and the exciton resonance. The result (8) contains the essential feature of coherent control, namely, the oscillatory term. Decay can be incorporated into the model by considering, e.g., a distribution of modes of different energies . It should be emphasized that, in the coherent-state representation, control of the density follows from the fact that the wavefunction (3) is a linear superposition of states with different numbers of excitons (not surprisingly, the same result can be obtained from a classical analysis of a kicked harmonic oscillator, as is done in phonon control studies ). Also note that the state induced by the pair of pulses $$|\mathrm{\Xi }_{E_1+E_2}\rangle \propto e^{i(K_1+K_2)A^{\dagger }}|0\rangle $$ (9) cannot be approximated by the superposition state $`[1+i(K_1+K_2)A^{\dagger }]|0\rangle `$ because $`K_{1,2}\propto \sqrt{V}`$. Hence, our interpretation differs significantly from that of the two-level model, where control is understood as a single-exciton quantum interference effect. LX-HX beats.— We now turn to the main subject of this work, namely, the beats of frequency $`\omega _L-\omega _H`$ ($`\omega _L`$ and $`\omega _H`$ are, respectively, the frequencies of the LX and HX excitons) as reported in a wide variety of experiments , which are conventionally characterized as a quantum interference phenomenon much like the so-called quantum beats of atomic physics . Within the atomic-like interpretation, the QW is treated as a set of three-level systems whose excited states are the LX- and HX-states. The role of the optical pulses is to bring each system into a sum state, i.e., a linear combination of LX, HX and the ground state and, thus, the beats are a consequence of intra-exciton quantum entanglement . As we shall see, the actual state of the solid is very different from that of the atomic-like picture. Consider first a perfect QW and circularly polarized pulses of bandwidth large enough so that both LX- and HX-modes of well-defined angular momentum $`M`$ are resonantly excited. From (3), it directly follows that the wavefunction in this case is a product of LX and HX coherent states $$|\mathrm{\Xi }_{LH}\rangle \propto e^{-i\omega _HtN_H}e^{iK_H(t)A_H^{\dagger }}e^{-i\omega _LtN_L}e^{iK_L(t)A_L^{\dagger }}|0\rangle $$ (10) where $`N_\alpha \equiv A_\alpha ^{\dagger }A_\alpha `$. Using (5), it can be shown that (10) leads to LX-HX beats that reflect interference of coherent light emitted by two phased arrays of antennas. We note that the beating field, associated with $`\langle \mathrm{\Xi }_{LH}|P|\mathrm{\Xi }_{LH}\rangle \neq 0`$, is classical in nature, as opposed to the field due to spontaneous emission that characterizes quantum beats . Moreover, because $`\mathrm{\Xi }_{LH}`$ cannot be expressed in terms of sums of LX and HX states (since $`K_{L,H}\propto \sqrt{V}`$, this is not even possible as an approximation), it is apparent that quantum-superposition arguments cannot be used to describe beats associated with extended excitons.
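Eq. (8) is easy to verify numerically; the following sketch (our units, with $`\hbar =1`$ and an arbitrary coupling) integrates Eq. (4) for two delayed pulses of the form (7) and reproduces the $`1+\mathrm{cos}(\omega \tau )`$ modulation of the density.

```python
# Coherent control of the exciton density: |K|^2 after two phase-locked
# pulses oscillates between ~0 and ~4x the single-pulse value.
import numpy as np

def pulse(t, omega_c=1.5, T=5.0):
    """The pulse of Eq. (7) with E_0 = 1."""
    return np.sin(omega_c * t) * np.exp(-(t / T) ** 2)

def K_final(field, omega=1.5, tmax=60.0, n=120000):
    """K(t -> +infinity) from Eq. (4), up to constant prefactors."""
    t, dt = np.linspace(-tmax, tmax, n, retstep=True)
    return np.sum(field(t) * np.exp(1j * omega * t)) * dt

n_s = abs(K_final(pulse)) ** 2                       # one-pulse density
for tau in (2 * np.pi * 5 / 1.5, np.pi * 11 / 1.5):  # in / out of phase
    n_pair = abs(K_final(lambda t: pulse(t) + pulse(t - tau))) ** 2
    print(f"tau = {tau:6.3f}:  n / n_s = {n_pair / n_s:.2f}")
```

The two delays differ by half an optical period, and the printed ratios come out close to 4 and 0 respectively, i.e., fully constructive and fully destructive interference.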
Rayleigh scattering.— To consider the experiments on resonant secondary emission involving elastic (Rayleigh) scattering , we add to (1) the term $$\widehat{U}=\sum_{\alpha ,\mathbf{k}\neq \mathbf{0}}V_\alpha (\mathbf{k})\left(A_{\mathbf{k},\alpha }^{\dagger }A_{\mathbf{0},\alpha }+A_{\mathbf{k},\alpha }A_{\mathbf{0},\alpha }^{\dagger }\right)$$ (11) describing the elastic scattering from the state coupled to the laser ($`\mathbf{k}=\mathbf{0}`$) to other states ($`\mathbf{k}\neq \mathbf{0}`$) due to interaction with defects such as impurities and interface roughness. Since the disorder described by (11) does not affect the internal degrees of freedom, our results do not apply to quantum dots . $`\widehat{U}`$ gives rise to Rayleigh scattering, i.e., emission of photons with the same energy but different in-plane momentum than the incident light . Following , we adopt the Heisenberg picture and, in accordance with (3), we assume that the exciton field at $`t=0`$ is described by $`A_{\mathbf{0},\alpha }=K_{\mathbf{0},\alpha }(t=0)`$ and $`A_{\mathbf{k}\neq \mathbf{0},\alpha }=0`$ (all but the $`\mathbf{k}=\mathbf{0}`$ mode are empty after the pulse strikes). This approximation is valid for fast pulses and weak disorder. Since $`\widehat{H}+\widehat{U}`$ does not mix LX and HX, we solve for each $`\alpha `$ ($`=L,H`$) the problem of a single exciton of momentum $`\mathbf{k}=\mathbf{0}`$ coupled to a continuum of $`\alpha `$-excitons at $`\mathbf{k}\neq \mathbf{0}`$. The time evolution is given by $$A_{\mathbf{k}\neq \mathbf{0},\alpha }(t)=\frac{\mathrm{\Lambda }_\alpha V_\alpha (\mathbf{k})}{\delta \mathrm{\Omega }_\alpha -i\mathrm{\Gamma }_\alpha }\,e^{-i\omega _\alpha t}\left[e^{-(i\delta \mathrm{\Omega }_\alpha +\mathrm{\Gamma }_\alpha )t}-1\right]$$ (12) $$A_{\mathbf{0},\alpha }(t)=\mathrm{\Lambda }_\alpha \,e^{-(i\omega _\alpha +i\delta \mathrm{\Omega }_\alpha +\mathrm{\Gamma }_\alpha )t}$$ (13) where $`\mathrm{\Lambda }_\alpha =A_{\mathbf{0},\alpha }(t=0)`$. $`\delta \mathrm{\Omega }_\alpha `$ and $`\mathrm{\Gamma }_\alpha `$ are, respectively, the small energy renormalization and the decay constant of the state at $`\mathbf{k}=\mathbf{0}`$ due to $`\widehat{U}`$. The following conclusions can be drawn from (12) and (13). First, elastic (disorder-induced) scattering leads to transfer of coherence from the mode initially excited by the laser to states with $`\mathbf{k}\neq \mathbf{0}`$. This accounts for the observed light emission in the non-phase-matched direction $`\mathbf{k}\neq \mathbf{0}`$ . Second, within our model, the scattered field is coherent with the laser field, in agreement with recent interferometric experiments . The intensity of the secondary emission is given by $$I\propto \langle A_{\mathbf{k}\neq \mathbf{0},L}^{\dagger }A_{\mathbf{k}\neq \mathbf{0},L}\rangle (t)+\langle A_{\mathbf{k}\neq \mathbf{0},H}^{\dagger }A_{\mathbf{k}\neq \mathbf{0},H}\rangle (t)$$ (14) $$+\langle A_{\mathbf{k}\neq \mathbf{0},L}^{\dagger }A_{\mathbf{k}\neq \mathbf{0},H}\rangle +\langle A_{\mathbf{k}\neq \mathbf{0},H}^{\dagger }A_{\mathbf{k}\neq \mathbf{0},L}\rangle $$ (15) where the last two terms add up to the beating term $`\mathrm{cos}[(\omega _L-\omega _H)t]`$ observed in the experiments. It is clear that, within our model and as for the perfect QW, these Rayleigh beats are due to interference between the fields associated with the HX and LX polarizations, which behave as a distribution of classical antennas. Another conclusion that can be drawn from (12) is that, at short times, $`|A_{\mathbf{k}\neq \mathbf{0},\alpha }|\propto t`$. This is in agreement with the quadratic ($`\propto t^2`$) rise in the Rayleigh signal observed at short times and very low exciton densities , which is expected when disorder is the only (or the fastest) source of scattering.
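A short numerical sketch of Eqs. (12), (14) and (15), with illustrative parameters of our own choosing, shows both the quadratic rise at short times and the LX-HX beating of the Rayleigh signal.

```python
# Secondary-emission intensity: each exciton line contributes
# |exp(-(i*dOmega+Gamma)*t) - 1|^2, giving a t^2 rise at short times,
# while the LX/HX cross terms beat at omega_L - omega_H.
import numpy as np

def a_k(t, omega, d_omega=0.0, gamma=0.05):
    """A_{k!=0,alpha}(t) from Eq. (12) with Lambda_alpha * V_alpha(k) = 1."""
    return (np.exp(-1j * omega * t)
            * (np.exp(-(1j * d_omega + gamma) * t) - 1.0)
            / (d_omega - 1j * gamma))

t = np.linspace(0.0, 60.0, 6001)
amp_H = a_k(t, omega=2.0)                 # heavy-hole exciton line
amp_L = a_k(t, omega=2.3)                 # light-hole line, split by 0.3
intensity = np.abs(amp_H + amp_L) ** 2    # Eqs. (14)-(15): beats at 0.3

# short-time check: I(t) ~ 4 t^2, i.e. a quadratic rise from zero
print(intensity[1] / (4 * t[1] ** 2))     # close to 1
```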
The emission for $`\mathbf{k}=\mathbf{0}`$ decays exponentially with time constant $`1/\mathrm{\Gamma }_\alpha `$ due to the coupling with the continuum of modes at $`\mathbf{k}\neq \mathbf{0}`$. As discussed earlier, the perfect QW cannot be described as a collection of few-level systems because the light creates macroscopic populations of $`\mathbf{k}=\mathbf{0}`$ exciton modes that are uncorrelated. The situation is somewhat different for $`\widehat{U}\neq 0`$. Here, the wavefunction can be formally obtained by applying the transformation $`A_{\mathbf{k},\alpha }=\sum_\xi c_{\xi ,\alpha }(\mathbf{k})B_{\xi ,\alpha }/\sqrt{V}`$ so that the Hamiltonian $`\widetilde{H}_0=\widehat{H}_0+\widehat{U}`$ takes on the diagonal form $`\widetilde{H}_0=\sum_{\xi ,\alpha }\hbar \mathrm{\Omega }_{\xi ,\alpha }B_{\xi ,\alpha }^{\dagger }B_{\xi ,\alpha }`$ ($`B_{\xi ,\alpha }`$ and $`B_{\xi ,\alpha }^{\dagger }`$ are boson operators and $`c_{\xi ,\alpha }(\mathbf{k})`$ are constants). Following the impulsive excitation, the exciton field is $$|\widetilde{\mathrm{\Xi }}\rangle \propto e^{-i\widetilde{H}_0t/\hbar }\prod_{\xi ,\alpha }e^{i\widetilde{K}_{\xi ,\alpha }(t)B_{\xi ,\alpha }^{\dagger }}|0\rangle $$ (16) where $`\widetilde{K}_{\xi ,\alpha }=c_{\xi ,\alpha }^{*}(\mathbf{0})K_{\alpha ,M}/\sqrt{V}`$. Like (10), $`\widetilde{\mathrm{\Xi }}`$ is a product state of individual modes that carries a macroscopic polarization . Unlike the perfect case, however, the occupation of individual modes, $`\langle B_{\xi ,\alpha }^{\dagger }B_{\xi ,\alpha }\rangle `$, is not macroscopic since $`\widetilde{K}_{\xi ,\alpha }`$ does not depend on $`V`$; see (4). Hence, the question arises as to what extent $`\widetilde{\mathrm{\Xi }}`$ can be distinguished from the sum-state $$|\widetilde{\mathrm{\Psi }}\rangle \propto \prod_\xi [1+i\widetilde{K}_{\xi ,L}(t)B_{\xi ,L}^{\dagger }+i\widetilde{K}_{\xi ,H}(t)B_{\xi ,H}^{\dagger }]|0\rangle $$ (17) that carries the same polarization and gives the same beats at $`\omega _L-\omega _H`$ (note that $`\widetilde{\mathrm{\Psi }}`$ represents, in some sense, a collection of atomic 3-level systems with randomly-oriented dipole moments). The main problem with (17) is that, since the total Hamiltonian involves a sum over the light- and heavy-hole modes, the time-dependent wavefunction must be in a product form. Further, there are observable differences between states (17) and (16). The two expressions give, in general, different answers for the mode occupation, which is always $`<1`$ for $`\widetilde{\mathrm{\Psi }}`$ but can have any value for $`\widetilde{\mathrm{\Xi }}`$; moreover, the spectrum of polarization fluctuations $`\langle P^2\rangle -\langle P\rangle ^2`$ evaluated with $`\widetilde{\mathrm{\Xi }}`$ does not have components at $`\omega _L+\omega _H`$ while $`\widetilde{\mathrm{\Psi }}`$ does. These differences could be used in the experiments to identify (16). In conclusion, light-induced exciton coherence must be understood in terms of a collective description of the exciton field. This gives product as opposed to sum states. The resulting LX-HX beats are not due to quantum mechanical but to classical electromagnetic interference. We acknowledge discussions with P. Berman, N. H. Bonadeo, B. Deveaud, N. Garro, S. Haacke, R. Philips, D. Porras, J. Shah, L. J. Sham and D. G. Steel. Work supported by MEC of Spain under contract PB96-0085, by the Fundación Ramón Areces and by the U. S. Army Research Office under Contract Number DAAH04-96-1-0183.