no-problem/9911/nucl-th9911025.html
## 1 Introduction

The recent progress of radioactive nuclear beam facilities has provided us with marvelous findings in nuclear physics. Exotic structures such as neutron halos and neutron skins have been found in experimental studies of light unstable nuclei in the neutron-rich region. Much new information on the shapes and structures of nuclei far from stability is being revealed by systematic measurements of the radii and moments of unstable nuclei. Planned facilities around the world will give access to a large number of unstable nuclei over the whole region of the nuclear chart and enable us to explore where and how exotic phenomena of nuclear structure appear far from the stability line. A question of great interest is where deformation appears in unstable nuclei and how the shape of these nuclei changes along the isotopic and isotonic chains.

At the same time, the relativistic many-body framework has been extensively applied to the study of nuclei and nuclear matter. This has been motivated by the recent success of the relativistic Brueckner-Hartree-Fock (RBHF) theory, in which the strong density-dependent repulsion arises automatically from the relativistic many-body treatment, in reproducing the saturation property of nuclear matter. Among other properties, the relativistic mean field (RMF) theory, which is the phenomenological counterpart of the RBHF theory, has been shown to be excellent at describing the properties of unstable nuclei as well as stable ones. The RMF theory has also been successfully applied to the study of the deformation of nuclei, as well as other properties of stable and unstable nuclei. Furthermore, the RMF theory has been used to calculate the equation of state (EOS) of nuclear matter over wide ranges of density and temperature, tabulated for application to supernova simulations.

Recently, a systematic study of all even-even nuclei up to the drip lines in the nuclear chart has been performed in the RMF theory with axial deformation. The ground state properties of about 2000 even-even nuclei from $Z=8$ to $Z=120$ have been studied, and all possible deformations of each nuclide have been surveyed using a constrained, axially deformed RMF model. Through the systematic analysis of the ground state deformations thus found, the pattern of the appearance of prolate and oblate deformations has been obtained. In the same study, it was also found that the coexistence of prolate and oblate shapes with similar binding energies occurs in many nuclei across the nuclear chart. This coexistence suggests the possible appearance of deformation beyond the axial kind, such as triaxial or even higher-order multipole deformations.

The appearance of triaxial deformation in this context has been studied in the case of the neutron-rich sulfur isotopes. In the axial RMF calculation for neutron-rich sulfur isotopes, the energy curves as a function of the $\beta$ deformation have two minima, at both prolate and oblate deformations, with energies very close to each other. Judging solely from the energy curve, one cannot conclude which ground state deformation is realized or whether yet another type of deformation appears. RMF calculations with triaxial deformation for the same isotopes have been performed to clarify this point, and a smooth shape transition from prolate to oblate shapes through a triaxial shape has been found along the sulfur isotopic chain.
This example motivates us to study further the appearance of triaxial deformation in other regions of the nuclear chart. It is interesting to explore where triaxial deformation appears, especially in relation to the behavior of axial deformation. In the present study, we have chosen to explore the proton-rich Xe region. We have made a systematic study of 25 even-even nuclei covering $Z=50$–$58$ and $N=64$–$72$, using the RMF theory with triaxial deformation, in order to clarify how their shapes change as functions of $N$ and $Z$ in this region. We have calculated the energy surfaces of these nuclei as functions of the deformation parameters $\beta$ and $\gamma$ to determine the ground state deformation.

The previous study of the $_{54}$Xe, $_{55}$Cs and $_{56}$Ba isotopes using the RMF theory with axial deformation was successful in reproducing the general features of the ground state properties. However, a disagreement with the measured isotope shift in the proton-rich region, which might be due to triaxial deformation, was observed. In the systematic RMF calculation with axial deformation, which we will discuss in Sect. 3, the shape change from oblate to prolate occurs as $Z$ increases, in a region in which the two shapes coexist. Thus, the axially symmetric RMF calculations strongly suggest that this region could contain triaxially deformed nuclei. This region has been discussed as a possible region for triaxial deformation in studies using conventional frameworks. There have also been experimental efforts to measure excitation energies in order to study the collective nature of the nuclei in this region. Studies of triaxial deformation within the mean field approach have also been performed in other regions of the nuclear chart.

In Sect. 2, we describe the framework of the RMF theory with deformation. We discuss the behavior of the shape within the RMF theory under the assumption of axial symmetry in Sect. 3. We present the results of the calculations in the RMF theory with triaxial deformation in Sect. 4. The results are discussed in Sect. 5. We summarize the paper in Sect. 6.

## 2 Relativistic mean field theory

We briefly describe the framework of the RMF theory and the procedure of the calculation; all details can be found in the original references. In the RMF theory, the system of nucleons is described by meson and nucleon fields under the mean field approximation. We start with a relativistically covariant effective Lagrangian composed of meson and nucleon fields. We adopt a Lagrangian with non-linear $\sigma$ and $\omega$ terms,

$$\begin{aligned}
\mathcal{L}_{RMF} &= \bar{\psi}\left[i\gamma_\mu\partial^\mu - M - g_\sigma\sigma - g_\omega\gamma_\mu\omega^\mu - g_\rho\gamma_\mu\tau_a\rho^{a\mu} - e\gamma_\mu\frac{1-\tau_3}{2}A^\mu\right]\psi \\
&\quad + \frac{1}{2}\partial_\mu\sigma\,\partial^\mu\sigma - \frac{1}{2}m_\sigma^2\sigma^2 - \frac{1}{3}g_2\sigma^3 - \frac{1}{4}g_3\sigma^4 \\
&\quad - \frac{1}{4}W_{\mu\nu}W^{\mu\nu} + \frac{1}{2}m_\omega^2\omega_\mu\omega^\mu + \frac{1}{4}c_3\left(\omega_\mu\omega^\mu\right)^2 \\
&\quad - \frac{1}{4}R^{a}_{\mu\nu}R^{a\mu\nu} + \frac{1}{2}m_\rho^2\rho^{a}_\mu\rho^{a\mu} - \frac{1}{4}F_{\mu\nu}F^{\mu\nu},
\end{aligned}$$

where the notation follows the standard one.
On top of the Walecka $\sigma$-$\omega$ model with photons and isovector-vector $\rho$ mesons, non-linear $\sigma$ meson terms are introduced to reproduce the properties of nuclei quantitatively and to give a reasonable value for the incompressibility. The inclusion of the non-linear $\omega$ meson term is motivated by the recent success of the relativistic Brueckner-Hartree-Fock theory. Deriving the Euler-Lagrange equations from this Lagrangian under the mean field approximation, we obtain the Dirac equation for the nucleons and Klein-Gordon equations for the mesons. The coupled Dirac and Klein-Gordon equations are solved self-consistently by expanding the fields in terms of harmonic-oscillator wave functions.

The RMF model contains the meson masses, the meson-nucleon coupling constants and the meson self-coupling constants as free parameters. We adopt the parameter set TMA, which was determined by fitting the experimental masses and charge radii of nuclei over a wide mass range. The parameters are listed in Table 1. We remark that this parameter set has an explicit mass dependence so as to reproduce nuclear properties quantitatively from the light mass region to the superheavy region. With the TMA parameter set, the symmetry energy is 30.68 MeV and the incompressibility is 318 MeV. Note that the bulk properties of nuclear matter at saturation with the parameter set TMA are calculated for uniform matter in the limit of infinite mass number.

The TMA parameter set has been used for the systematic study of all even-even nuclei up to the drip lines in the nuclear chart within the RMF framework under the assumption of axial symmetry. The overall agreement of the calculated results using TMA with the experimental masses and charge radii is excellent, and is much better than that of spherical RMF calculations with TMA. In the present study, we extend the RMF calculation with the TMA parameter set from the axially symmetric case to the triaxial one, in order to explore the appearance of triaxial deformation in the region where axial deformations with similar binding energies coexist. We discuss the correspondence between the axial and triaxial RMF calculations in the subsequent section.

In order to take triaxial deformation into account, the fields are expanded in terms of the eigenfunctions of a triaxially deformed harmonic oscillator potential. We perform calculations that constrain the quadrupole moments of the nucleon distribution, in order to survey the coexistence of multiple shapes and to identify the ground state deformation. We use a quadratic constraint to calculate a complete map of the energy surface as a function of the deformation. We take a basis of up to $N=12$ major shells of the harmonic oscillator wave functions, which is normally enough for the constrained calculations in this mass range. We have performed calculations with $N=14$ major shells in the case of no pairing and found that the energy surface does not change significantly for moderate deformation. We have also performed unconstrained calculations with pairing up to $N=16$, and the convergence was generally good with $N=12$. We take the pairing window, following Gambhir et al., as $\epsilon - \lambda \le 2\,(41 A^{-1/3})$ MeV. As for the pairing correlations, we perform the RMF calculations with triaxial symmetry using a BCS formalism.
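As a concrete illustration of this BCS step, here is a minimal constant-$G$ sketch in Python on a toy, equally spaced single-particle spectrum. It is an illustration under stated assumptions rather than the authors' procedure: the spectrum, the particle number, and the mass number are hypothetical, and only the pairing strength $G = 23/A$ MeV (quoted in the next paragraph) follows the text.

```python
import numpy as np

# Toy constant-G BCS solver (illustrative sketch, not the authors' code).
A, N_part = 124, 20                   # hypothetical mass number and particle number
G = 23.0 / A                          # pairing strength [MeV], as quoted in the text
eps = np.linspace(-10.0, 10.0, 40)    # toy doubly degenerate levels [MeV] in the window

def residuals(lam, delta):
    """Particle-number and gap-equation residuals for (lambda, Delta)."""
    Ek = np.sqrt((eps - lam) ** 2 + delta ** 2)   # quasiparticle energies
    v2 = 0.5 * (1.0 - (eps - lam) / Ek)           # BCS occupation probabilities
    n_res = 2.0 * v2.sum() - N_part               # factor 2 for pair degeneracy
    gap_res = 0.5 * G * (1.0 / Ek).sum() - 1.0    # gap equation: (G/2) sum_k 1/E_k = 1
    return n_res, gap_res

lam, delta = 0.0, 1.0
for _ in range(5000):                 # crude relaxation; adequate for a sketch
    n_res, gap_res = residuals(lam, delta)
    lam -= 0.01 * n_res
    delta = max(delta * (1.0 + 0.1 * gap_res), 1e-8)

print(f"Fermi energy ~ {lam:.2f} MeV, pairing gap Delta ~ {delta:.2f} MeV")
```

In the actual calculations the single-particle energies would come from the triaxial RMF spectrum within the pairing window, and the procedure would be repeated at every point of the constrained deformation grid.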
Since we calculate the energy surface over the full $\beta$–$\gamma$ range, we take the pairing interaction strength for a given nucleus as $G = 23/A$ MeV for both protons and neutrons. Although a BCS-type treatment has often been used in RMF calculations, it would be preferable to incorporate the pairing correlations into the relativistic many-body framework in a consistent manner. A study with the relativistic Hartree-Bogoliubov theory has been performed under the assumption of spherical symmetry, and a systematic study of nuclear deformation within such a relativistic many-body framework is currently being made.

## 3 Numerical results

Before presenting the results of the RMF calculation with triaxial symmetry, we discuss the corresponding results of the RMF calculation with axial symmetry. We examine the behavior of the calculated ground state properties of even-even nuclei in the proton-rich Xe region. We note that the calculated deformations, as well as the masses and radii, in this region agree very well with the experimental data. In the region of $Z=50$–$58$ and $N=64$–$72$, roughly speaking, the deformation changes according to the proton number, with a few exceptions. The Sn isotopes ($Z=50$) are dominated by a spherical shape, and the Te isotopes ($Z=52$) have, for the most part, an oblate shape. The Xe, Ba and Ce isotopes ($Z=54$–$58$) all have a prolate shape. We show the energy curves for the Te and Xe isotopes as functions of the $\beta$ deformation in Fig. 1. The general trend shows a transition in shape from spherical, through oblate, to prolate as the proton number increases. Furthermore, there is always more than one energy minimum, and shape coexistence is generally observed in this region of nuclides, as can be seen in Fig. 1. For each nucleus there is a corresponding second minimum whose deformation parameter $\beta$ has the opposite sign to that of the absolute minimum. The energy difference between the absolute minimum and the second minimum is generally small in this region.

We next present the RMF calculations with triaxial deformation for 25 nuclei covering the range $Z=50$–$58$ and $N=64$–$72$. We present the energy surfaces of these nuclei in the $\beta$–$\gamma$ deformation parameter space, calculated using constraints on the two deformation parameters. Figure 2 displays the energy surfaces of the Sn, Te, Xe, Ba and Ce ($Z=50$–$58$) isotopes with $N=64$–$72$, arranged in the form of the nuclear chart. The spacing of the contours is 1 MeV in total energy in all figures. The energy minimum is marked by the black region, in which the energy lies within 1 MeV of the absolute minimum. For the Sn isotopes, the minima appear consistently at the spherical shape ($\beta=0$). The Te isotopes show a $\gamma$-soft character, having similar energies along the $\gamma$ direction. Most of the Xe, Ba and Ce isotopes have minima at prolate deformation. In some cases ($^{124}$Ba, $^{128}$Ba, $^{126}$Ce, $^{128}$Ce and $^{130}$Ce), the region of the shallow minimum extends to quite large $\gamma$ deformation. These results indicate that the triaxial shape is not stable in those nuclei, but the energy surfaces are very $\gamma$ soft.

## 4 Discussion

We discuss here the influence of the pairing correlations and of the effective interaction on the triaxial RMF calculations.
Since a consistent treatment of pairing correlations with deformation within the relativistic many-body framework is still under development, we have performed the RMF calculations with pairing correlations in the BCS formalism as a first study of the appearance of axial and triaxial deformation. We examine the effect of the pairing correlations on the magnitude of the binding energy and on the deformation by comparing the results of calculations with and without pairing. We show in Fig. 3 the energy surfaces in the $\beta$–$\gamma$ plane of $^{120}$Te, $^{122}$Xe and $^{124}$Ba without and with pairing correlations. The qualitative behavior of the two cases is very similar. Generally speaking, the magnitude of the $\beta$ deformation is reduced by the pairing correlations. The energy minimum seen at finite $\gamma$ deformation in $^{124}$Ba is washed out by the inclusion of the pairing correlations.

In Fig. 4 we show the proton deformation parameters $\beta$ extracted from the RMF calculation with axial deformation, together with the experimental data obtained from the B(E2) values. In this figure we see that our results compare well with the experimental data.

We also calculate the energy surface of $^{124}$Ba with the alternative NL1 parameter set of the RMF theory, in order to test the dependence of the triaxial deformation on the choice of RMF parameter set. In Fig. 5, we compare the energy surfaces obtained using the TMA and NL1 parameter sets. A well-distinguished minimum is seen at prolate deformation in the case of the NL1 parameter set; this feature is slightly stronger than in the TMA case. We mention that triaxial deformation is not found in calculations within the Skyrme-Hartree-Fock (SHF) theory. It would be interesting to compare the energy surfaces of the RMF and SHF theories over a wide mass range.

In Fig. 6 we show the total binding energies of the nuclei studied. The theoretical values are taken from the axially symmetric calculations, since no distinct triaxial shapes were found in the triaxial calculations. We see a slight overbinding for the Sn isotopes, which may be due to the use of the constant pairing strength and will be studied further. We show in Fig. 7 the charge radii as functions of the neutron number. The general tendency is found to be quite satisfactory.

## 5 Summary

We have studied systematically the triaxial deformation of 25 even-even nuclei in the proton-rich Xe region. We have calculated their ground state structures in the RMF theory with triaxial deformation and with pairing correlations, and obtained their energy surfaces in the plane of the deformation parameters $\beta$ and $\gamma$ by constraining the quadrupole moments. We have explored the appearance of triaxial deformation in the region covering $Z=50$–$58$ and $N=64$–$72$ by looking for the minima of the derived energy surfaces in the triaxial deformation parameter space. Through comparisons with the results obtained in the RMF calculations with axial symmetry, we have discussed the correspondence between the coexistence of axial shapes and the appearance of triaxial shapes. We have found no distinct energy minima at triaxial deformations; however, the energy surfaces are often very $\gamma$ soft. This feature is caused by the pairing correlations: when we remove the pairing correlations from the RMF calculations, we find well-distinguished triaxial deformation in this mass region.
We have compared the energy surfaces obtained with the two parameter sets, TMA and NL1; the TMA parameter set provides more softness in the $\gamma$ direction than NL1 does. Comparisons of the binding energies and deformations with the experimental data are, in general, quite satisfactory. We note that we have not carried out the angular momentum and particle number projections, which have already been developed in the non-relativistic approach. The restoration of these symmetries may somewhat change the results on the deformation, as has been discussed in non-relativistic descriptions of deformed nuclei. The relativistic approach to triaxial deformation is not yet at such a level of systematic study, and these refinements have not been considered here. This is certainly a direction for future work.

## Acknowledgment

We would like to thank J. Meng for fruitful discussions. The entire calculation was performed on the Fujitsu VPP500/30 and VPP700E/128 supercomputers at RIKEN, Japan. K. S. would like to express special thanks to the Computing Facility of RIKEN for a special allocation of VPP500/30 computing time for the first stage of this study.

Table 1

| Parameter | Value |
| --- | --- |
| $m_N$ [MeV] | 938.900 |
| $m_\sigma$ [MeV] | 519.151 |
| $m_\omega$ [MeV] | 781.950 |
| $m_\rho$ [MeV] | 768.100 |
| $g_\sigma$ | 10.055 $+$ 3.050/$A^{0.4}$ |
| $g_\omega$ | 12.842 $+$ 3.191/$A^{0.4}$ |
| $g_\rho$ | 3.800 $+$ 4.644/$A^{0.4}$ |
| $g_2$ | $-$0.328 $-$ 27.879/$A^{0.4}$ |
| $g_3$ | 38.862 $-$ 184.191/$A^{0.4}$ |
| $c_3$ | 151.590 $-$ 378.004/$A^{0.4}$ |

## Figure captions

Fig. 1. The energy curves obtained in axial RMF calculations as functions of the deformation parameter $\beta$ for the $_{52}$Te and $_{54}$Xe isotopes. Calculated points are connected by dashed curves to guide the eye. Note that the curves for $^{122}$Te and $^{124}$Te are shifted downward by 0.02 MeV to distinguish them from the other curves.

Fig. 2. The energy surfaces in the plane of the deformation parameters $\beta$ and $\gamma$, calculated in the RMF theory with triaxial deformation for nuclei in the range $Z=50$–$58$ and $N=64$–$72$, arranged in the form of the nuclear chart. The energy difference between the contours is 1 MeV in total binding energy. The energy minimum is marked by the black region, in which the energy difference is less than 1 MeV.

Fig. 3. The energy surfaces obtained in the triaxial RMF calculation without and with pairing for $^{120}$Te, $^{122}$Xe and $^{124}$Ba. The energy difference between the contours is 1 MeV in total binding energy. The energy minimum is marked by the black region.

Fig. 4. The proton deformation parameter $\beta$, obtained from the RMF calculation with axial deformation, as a function of the neutron number. The experimental data are taken from the literature.

Fig. 5. The energy surfaces obtained in the triaxial RMF calculation with the NL1 and TMA parameter sets for $^{124}$Ba. The energy difference between the contours is 1 MeV in total binding energy. The energy minimum is marked by the black region.

Fig. 6. The total binding energy calculated using the RMF theory with axial deformation, as a function of the neutron number. The experimental data are taken from the literature.

Fig. 7. The charge radius calculated using the RMF theory with axial deformation, as a function of the neutron number. The experimental data are taken from the literature.
no-problem/9911/quant-ph9911101.html
# Quantum Coins, Dice and Children: Probability and Quantum Statistics

## I Quantum coin tossing

We will start with the simplest possible example — the quantum coin tossing problem. (Our quantum coin tossing problem has little to do with another problem of the same name in quantum information theory.) Each quantum coin is a particle in one of two possible quantum states, labeled “heads” (H) or “tails” (T), which are a priori equally likely. It is clear that the probability of getting a “heads” is 50%, regardless of the statistics of the coin.

Now consider tossing a set of two coins, by which we mean preparing a mixed state for which all distinct allowable quantum two-particle states are a priori equally likely. These conditions are physically realizable for systems with two low-lying discrete single-particle levels which are well separated from the other levels. More specifically, both the energy splitting between the two states, $\delta$, and the interaction energy between particles in these states, $\epsilon$, are much smaller than the temperature, so that by equipartition both states are equally likely to be occupied. The temperature is in turn much smaller than $\Delta$, the energy splitting between these two low-lying states and the rest of the spectrum, so that the higher states are essentially empty. In other words, the temperature $T$ should be chosen such that $\epsilon, \delta \ll T \ll \Delta$. If such conditions are satisfied, what is the probability that the outcome is two “heads”? The answer depends on which statistics the coins obey.

• With classical statistics, i.e., where the particles are distinguishable, there are four possible outcomes:
$$\text{HH}, \quad \text{HT}, \quad \text{TH}, \quad \text{TT}.$$ (1)
Since all four outcomes are a priori equally likely, the probability for HH is $1/4$. This is applicable to tossing macroscopic coins, where quantum effects are negligible.

• With Bose–Einstein statistics, where the allowable states must be symmetric under exchange, there are only three possible outcomes:
$$\text{HH}, \quad (\text{HT}+\text{TH})/\sqrt{2}, \quad \text{TT}.$$ (2)
Consequently, the probability for HH increases to $1/3$. This is applicable, for example, to a simple system of two bosons in an external potential with doubly degenerate ground states labeled as H and T. It is also applicable to two photons in a rectangular optical cavity with dimensions $a\times a\times b$ ($a \neq b$). Such a cavity has two degenerate ground states, which can be labeled as H and T, respectively. Then the probability of finding both photons in the H state is $1/3$. (This example has been studied in Dirac’s “The Principles of Quantum Mechanics”.)

• With Fermi–Dirac statistics, the outcomes HH and TT are forbidden, as the allowable states must be antisymmetric under exchange; there is only one possible state:
$$(\text{HT}-\text{TH})/\sqrt{2}.$$ (3)
The probability for HH is obviously zero. This is applicable to a system of two fermions in an external potential with doubly degenerate ground states.
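Each of the three counts can be checked by enumerating the allowed microstates directly. The following is a small counting sketch in Python (not a quantum simulation), using the state labels introduced above.

```python
from itertools import product
from fractions import Fraction

sides = ['H', 'T']

# Classical statistics: distinguishable coins, four ordered outcomes.
classical = list(product(sides, repeat=2))
p_classical = Fraction(classical.count(('H', 'H')), len(classical))

# Bose-Einstein: only occupation numbers matter, so the distinct states
# are HH, (HT+TH)/sqrt(2) and TT -- three equally likely outcomes.
bose = [('H', 'H'), ('H', 'T'), ('T', 'T')]
p_bose = Fraction(bose.count(('H', 'H')), len(bose))

# Fermi-Dirac: only the antisymmetric state (HT-TH)/sqrt(2) survives,
# so two heads can never occur.
p_fermi = Fraction(0, 1)

print(p_classical, p_bose, p_fermi)   # 1/4 1/3 0
```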
The above analysis clearly shows that the outcomes of measurements on the two coins are not statistically independent. Classically, two systems are usually regarded as statistically independent if they do not interact with each other. This, however, is not necessarily true for quantum mechanical systems of identical particles, where the two-particle wavefunction is entangled unless it can be written as the product of two single-particle wavefunctions. More precisely, for probability applications where one studies mixed states, correlations occur unless the two-particle density matrix can be factored into two density matrices, each describing one of the particles. Bosonic (fermionic) wavefunctions, however, are obtained via symmetrization (antisymmetrization) of independent two-particle wavefunctions, and such symmetrizations or antisymmetrizations destroy statistical independence. This is manifestly clear in the case of fermions: the Pauli exclusion principle, decreeing that two identical fermions cannot be in the same state, is incompatible with statistical independence. The analogous effect for bosons is Bose enhancement, which states that bosons are more likely to be found in the same state than statistically independent particles. This simple example of quantum coin tossing illustrates, in a very compelling way, the differences between classical and quantum statistics.

We mention in passing that one can easily generalize the above analysis to the following problem: for $n$ dice, each equally likely to be in any of $k$ states (one of which is singled out and called “$\ast$”), what is the probability that all of them end up in the “$\ast$” state? For distinguishable particles there are $k^n$ distinct possible outcomes, and the probability for any one of them is $k^{-n}$. For fermions the probability of an “all $\ast$” state is trivially 0 (for $n>1$), and for bosons it is easy to show that there are $\binom{k+n-1}{n}$ distinct possible outcomes. Since these outcomes are all equally likely, the probability of the “all $\ast$” state is $1/\binom{k+n-1}{n}$, which is always larger than $k^{-n}$. In other words, Bose statistics always increases the chance of finding identical bosons in the same state; Bose enhancement is really an enhancement. (It is important to note that the above analysis holds if and only if there are exactly $k$ accessible states, as stated in the problem. The answer will be different if, for example, there are $k$ doublets (i.e., $2k$ accessible states) and one of the doublets is labeled “heads”.)

Lastly, a word of caution: real coins and dice do not behave like quantum coins and dice — they are essentially classical objects. Coins and dice are always distinguishable from one another, while the discussion above is applicable only to indistinguishable particles. Even familiar quantum systems such as electrons in an external magnetic field do not behave as the quantum coins described above. The analysis above is valid only if there are exactly two allowable states, while an electron in a magnetic field has two spin states for each accessible spatial quantum state. There are even more allowable states for real coins and dice, which are distinguished not only by their spatial locations but also by physical variations. As a result, terminologies like “quantum coins” should be taken in a metaphorical sense only.

## II Conditional probabilities: The Quantum Crib

Now we will move on to conditional probabilities, which are even more intriguing and counterintuitive. Consider the following famous problem:

(I) Two children sleep in a crib. If one is chosen at random and turns out to be a boy, what is the probability that both are boys?

(II) Two children sleep in a crib. If at least one of them is a boy, what is the probability that both are boys?

The answer is well known: $1/2$ for question (I), $1/3$ for question (II).
These answers presume that the two genders are a priori equally likely, and also that the children are distinguishable objects whose genders are statistically independent. (Whether these assumptions are strictly true in the real world is beyond the scope of this paper.) But what if we assume the children obey quantum statistics instead? In order to study this question, we reformulate the above puzzle in the following way to make it applicable to quantum particles: consider two identical particles, each equally likely to be in one of two quantum states, either boy (B) or girl (G), at the same spatial position, with all distinct allowable gender combinations being a priori equally likely. (Here the terminologies “boy” and “girl” are used in a metaphorical sense only — real children are distinguishable classical objects; recall the discussion at the end of the previous section.) We adopt the following shorthand: “the particle is a B” stands for “the particle is in state B”. Then:

(I) One particle is selected at random. If it is a B, what is the probability that the other one is also a B?

(II) Both particles are measured and at least one of them is a B. What is the probability that the other is also a B?

Both of these questions can be easily answered by listing the elements of the spaces of possible combinations. For distinguishable children, the space of possible combinations is $\{$BB, BG, GB, GG$\}$. Out of the four combinations, three have at least one B, but only one of them is BB, so the answer to question (II) is $1/3$, as anticipated above. On the other hand, since all four combinations are equally probable, and for each outcome both particles are equally likely to be selected, there are $4\times2=8$ equally likely cases:

$$\underline{\text{B}}\text{B}, \quad \underline{\text{B}}\text{G}, \quad \underline{\text{G}}\text{B}, \quad \underline{\text{G}}\text{G},$$ (4)
$$\text{B}\underline{\text{B}}, \quad \text{B}\underline{\text{G}}, \quad \text{G}\underline{\text{B}}, \quad \text{G}\underline{\text{G}};$$ (5)

where the underlined particle is the one being selected. Since B is selected in four of these cases, and in only two of them the remaining particle is a B, the answer to question (I) is $2/4=1/2$. This answer reflects the presumption that the two particles are statistically independent: knowledge of one of the two children does not have any implication for the other child.

The situation changes dramatically if the children obey quantum statistics instead. It is easy to see that for fermionic children the BB combination is forbidden by the Pauli exclusion principle, and hence the answer to both questions above is 0. For bosonic children, with the space of possible combinations being $\{$BB, (BG+GB)$/\sqrt{2}$, GG$\}$, two out of the three combinations have at least one B, and one of them is BB, so the answer to question (II) is $1/2$, in contrast to $1/3$ for the case of distinguishable children. The analog of Eqs. (4) and (5) is

$$\underline{\text{B}}\text{B}, \quad (\underline{\text{B}}\text{G}+\text{G}\underline{\text{B}})/\sqrt{2}, \quad \underline{\text{G}}\text{G},$$ (6)
$$\text{B}\underline{\text{B}}, \quad (\text{B}\underline{\text{G}}+\underline{\text{G}}\text{B})/\sqrt{2}, \quad \text{G}\underline{\text{G}}.$$ (7)

In three of these cases a B is selected, and since in two of them the remaining particle is also a B, the answer to question (I) is $2/3$, not $1/2$. Again, we see that Bose statistics enhances the probability of finding two identical bosons in the same state.

In the above, we have analyzed the problems by listing all the possible combinations.
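The listing itself is easily mechanized. The sketch below enumerates the (combination, selected particle) cases for the classical and bosonic spaces above and reads off both conditional probabilities; the fermionic case is trivial, since BB is excluded.

```python
from fractions import Fraction

# Equally likely gender combinations; 'BG' stands for (BG+GB)/sqrt(2) in the Bose case.
classical = ['BB', 'BG', 'GB', 'GG']
bose      = ['BB', 'BG', 'GG']

def question_II(states):
    """P(both are B | at least one B)."""
    with_boy = [s for s in states if 'B' in s]
    return Fraction(with_boy.count('BB'), len(with_boy))

def question_I(states):
    """P(other is B | a randomly selected particle is B)."""
    num = den = 0
    for s in states:
        for picked in (0, 1):          # either particle may be the selected one
            if s[picked] == 'B':
                den += 1
                num += (s[1 - picked] == 'B')
    return Fraction(num, den)

print(question_II(classical), question_I(classical))   # 1/3 1/2
print(question_II(bose), question_I(bose))             # 1/2 2/3
```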
This becomes less practical for more complicated problems, and one may wonder whether it is possible to re-analyze these problems in a way that generalizes to more complex settings. Since we are studying mixed states, a natural description is via density matrices. Both questions (I) and (II) will be re-analyzed in the appendix using the density matrix formalism. However, the remainder of this paper (except the appendix) is accessible without reference to the density matrix formalism.

## III Conditional Probabilities: The Quantum Day Care Center

We will now move on from the crib to the quantum day care center. Consider $n$ quantum children (where $n \gg 1$) in a day care center, where by the equal opportunity laws all distinct allowable gender combinations are a priori equally likely. We define $R$ as the ratio of the number of quantum boys to the total number of children in the day care center. Then:

(III) What is the probability distribution of $R$?

(IV) One child is selected at random and is found to be a boy. What is the probability distribution of $R$ for the remaining children?

For distinguishable children obeying classical statistics, statistical independence implies that the outcomes for the remaining $n-1$ children are not affected by the outcome of the first child. As a result, the probability distribution of $R$ is a sharply peaked Gaussian around $R=1/2$ for both questions (III) and (IV).

On the other hand, one can study this problem for bosonic children by enumeration. There are $n+1$ distinct allowable gender combinations:
$$C_k = \{k\ \text{boys},\ n-k\ \text{girls}\}, \qquad 0 \le k \le n,$$ (8)
with all of these combinations a priori equally likely, i.e., $P(k)=1/(n+1)$. As a result, the probability for $R=k/n$ is $P(R=k/n)=1/(n+1)$, where $k$ is an integer between 0 and $n$ and hence $0\le R\le 1$. As $n\to\infty$, this approaches the uniform probability distribution:
$$f_0(R) \equiv \frac{dP(R)}{dR} = 1, \qquad \langle R\rangle_0 \equiv \int R\, f_0(R)\, dR = 1/2.$$ (9)

Question (IV) asks for the probability distribution of $R$ on the condition that the first child selected is a boy. After the selection, only $n-1$ quantum children remain in the quantum day care center, and hence the number of boys left can be any integer between 0 and $n-1$. The probability is $P(k)=1/(n+1)$ for each gender combination $C_k$, which is left with $k-1$ boys after one boy is selected, so by Bayes’ formula one has
$$\begin{aligned}
\tilde{P}(m) &\equiv P(m\ \text{boys left} \mid \text{first child selected is a boy}) \qquad (10)\\
&= P(m+1\ \text{boys before selection} \mid \text{first child selected is a boy}) \qquad (11)\\
&= \frac{P(\text{first child selected is a boy} \mid m+1\ \text{boys before selection})\, P(m+1\ \text{boys before selection})}{\sum_{j=0}^{n} P(\text{first child selected is a boy} \mid j\ \text{boys before selection})\, P(j\ \text{boys before selection})} \qquad (12)\\
&= \frac{(m+1)/n \times 1/(n+1)}{\sum_{j=0}^{n}\, j/n \times 1/(n+1)} = \frac{2(m+1)}{n(n+1)}. \qquad (13)
\end{aligned}$$

(We give a brief description of Bayes’ formula for readers who are not familiar with probability theory. Let $H_j$ ($j=1,\ldots,N$) be $N$ mutually exclusive events, with probabilities $P(H_j)$.
Then $P(H_k|A)$, the conditional probability of a particular $H_k$ given that another event $A$ occurs, is given by Bayes’ formula:
$$P(H_k|A) = \frac{P(H_k)\,P(A|H_k)}{\sum_j P(H_j)\,P(A|H_j)}.$$ (14)
Discussions of Bayes’ formula can be found in most standard textbooks on probability theory; see, for example, Fraser or Roe.)

Returning to Eq. (13), one can easily check that the probabilities of the different possible outcomes add up to unity:
$$\sum_{m=0}^{n-1} \tilde{P}(m) = \sum_{m=0}^{n-1} \frac{2(m+1)}{n(n+1)} = 1.$$ (15)
The conditional expectation value of $m$ is
$$\sum_{m=0}^{n-1} m\,\tilde{P}(m) = \sum_{m=0}^{n-1} \frac{2m(m+1)}{n(n+1)} = \frac{2}{3}(n-1).$$ (16)
Since $n-1$ quantum children remain in the quantum day care center, $R=m/(n-1)$, and the conditional expectation value of $R$ is 2/3; i.e., we expect two-thirds of the remaining quantum children to be boys, having determined that a single child (out of a huge day care center) is male! A little quantum knowledge goes a long way in this problem. As the number of children in the quantum day care center tends to infinity, i.e., $n\to\infty$, the conditional probability $\tilde{P}$ approaches a linear probability distribution:
$$f_1(R) \equiv \frac{d\tilde{P}(R)}{dR} = 2R, \qquad \langle R\rangle_1 \equiv \int R\, f_1(R)\, dR = 2/3,$$ (17)
in agreement with the conditional expectation value obtained above.

As a last example, we generalize the previous case to the quantum die rolling problem:

(V) A quantum die is a quantum mechanical particle, a priori equally likely to be in one of $k$ possible states. (Again, the terminology “quantum die” is used in a metaphorical sense only — real dice are distinguishable classical objects.) Consider tossing a set of $n$ quantum dice (where $n\gg1$), by which we mean preparing a mixed state for which all distinct allowable quantum $n$-particle states are a priori equally likely. Label one of the states “state 1” and define $R$ to be the fraction of quantum dice in state 1. Then $n'$ dice are selected at random, and $N_1$ of them turn out to be in state 1, $N_2$ of them in state 2, etc., such that $N_1+N_2+\cdots+N_k=n'\ll n$. What is the probability distribution of $R$ for the remaining dice?

This is a straightforward generalization of questions (III) and (IV), which are recovered by setting $k=2$, $N_2=0$, and $N_1=0$ for question (III) or $N_1=1$ for question (IV). For distinguishable dice, by statistical independence, the probability distribution of $R$ is sharply peaked around $1/k$. We will show in the appendix, using the density matrix formalism, that the conditional probability distribution of $R$ for the remaining bosonic dice is
$$f_{N_1,N_2,\ldots,N_k}(R) = R^{\nu_1-1}(1-R)^{\nu_2+\cdots+\nu_k-1}/B(\nu_1,\ \nu_2+\cdots+\nu_k),$$ (18)
where $\nu_j = N_j+1$ and $B(x,y)$ is the Beta function, and that the conditional expectation value of $R$ is
$$\langle R\rangle_{N_1,N_2,\ldots,N_k} \equiv \int R\, f_{N_1,N_2,\ldots,N_k}(R)\, dR = \nu_1/(\nu_1+\nu_2+\cdots+\nu_k).$$ (19)
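These results admit a direct finite-$n$ check by brute-force Bayes updates over the equally likely compositions; in the sketch below, the values of $n$ and $k$ are arbitrary test choices. The two expectation values come out exactly $2/3$ and $1/2$ even at finite $n$.

```python
from fractions import Fraction
from itertools import product

# Day care (k = 2): posterior over the number of boys left after one boy is drawn.
n = 200
post = {m: Fraction(m + 1, n) for m in range(n)}            # uniform prior cancels
norm = sum(post.values())
post = {m: p / norm for m, p in post.items()}
assert all(post[m] == Fraction(2 * (m + 1), n * (n + 1)) for m in post)   # Eq. (13)
print(float(sum(Fraction(m, n - 1) * p for m, p in post.items())))        # 2/3, Eq. (17)

# Dice (k = 3), one die drawn in state 1, cf. Eq. (19): expect nu_1/sum(nu) = 1/2.
k, n = 3, 60
comps = [c + (n - sum(c),) for c in product(range(n + 1), repeat=k - 1) if sum(c) <= n]
w = {c: Fraction(c[0], n) for c in comps}                   # likelihood of the draw
z = sum(w.values())
print(float(sum(Fraction(c[0] - 1, n - 1) * p / z for c, p in w.items())))  # 0.5
```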
## IV Discussions

We emphasize that the examples above are not merely academic but may be experimentally realizable and testable. For example, a Bose–Einstein condensate of $F$-spin-1 atoms ($F$-spin is the total spin of the atom, the quantum mechanical sum of the total angular momentum of the electron system and the nuclear spin) provides a natural realization of a system of quantum dice with $k=3$, where the three states correspond to $F_z=1$, $0$ and $-1$ along some axis $\hat{z}$. All distinct allowable combinations of $F$-spins are a priori equally likely as long as the system is isotropic, or alternatively the temperature is sufficiently high that the anisotropic term in the Hamiltonian is negligible, while at the same time being low enough to support a Bose–Einstein condensate. If such a scenario is realizable, a randomly extracted atom from the condensate is equally likely to be in any of the three spin states. If the first atom turns out to be in the $F_z=1$ state, however, Eq. (19) (with $k=3$ and $(N_1,N_2,N_3)=(1,0,0)$) predicts that half of the remaining atoms will also be in the $F_z=1$ state.

In our discussion, we have referred to the particles as “coins” (with states heads and tails), “children” (with states boy and girl) and “dice” (with states labeled by dots). It must be understood that these terminologies are being used in a merely metaphorical sense. Real children do not spontaneously fluctuate between boy states and girl states. Macroscopic coins and dice are always distinguishable from one another, both by physical variations and by their locations in space. Quantum statistics applies only to particles that are indistinguishable and share the same physical location.

We have seen that counterintuitive results often arise when one studies probabilities for systems of identical particles obeying quantum statistics. Given the simplicity of our examples, one may wonder why they are not discussed or even mentioned in most undergraduate textbooks on quantum physics or statistical mechanics. We have attempted a literature search for similar discussions; as far as we know, there is no mention of these topics in most standard textbooks on quantum mechanics and/or statistical physics. On the other hand, as mentioned before, the quantum coin tossing problem with two coins was discussed by Dirac. There are also discussions in Griffiths and Stowe which share the philosophy of this paper; the specific examples considered, however, are different from the ones discussed here. In particular, none of these discussions studied conditional probabilities, which give the clearest and most counterintuitive manifestation of the differences between classical and quantum statistics.

Returning to the question of why these issues are not brought up in most undergraduate textbooks: the reason, we believe, lies in the observation that these particularly counterintuitive results occur only in systems with a finite number of accessible levels. This condition is rarely met in important physical systems; as a result, these subtleties are seldom discussed in most undergraduate textbooks, which understandably tend to focus on systems with more immediate applications. However, we believe our examples highlight the differences between classical and quantum statistics and deserve some discussion in undergraduate classrooms.

The main lesson of this discussion is the lack of statistical independence between identical particles in quantum statistics.
For the outcomes of measurements on two particles to be statistically independent, the wavefunctions of the two particles must be disentangled. In other words, the density matrix describing the two-particle mixed state must be factorizable into two separate density matrices, each describing one of the particles. For identical particles obeying quantum statistics, however, the density matrices are always entangled due to (anti)symmetrization. As a result, the outcomes of measurements on identical particles are always correlated, violating statistical independence.

Support of this research by the U.S. Department of Energy under grant DE-FG02-93ER-40762 is gratefully acknowledged.

APPENDIX

In this appendix, we re-analyze questions (I)–(IV) in the density matrix formalism, which has the advantage of being a systematic procedure that can easily be generalized to systems with arbitrary numbers of particles and accessible states. Let us start with a single quantum coin, with the density matrix
$$\rho = \tfrac{1}{2}|H\rangle\langle H| + \tfrac{1}{2}|T\rangle\langle T|.$$ (20)
For two quantum coins obeying classical statistics (i.e., distinguishable coins), the two-particle density matrix is
$$\rho_{cl} \equiv \rho\otimes\rho = \tfrac{1}{4}|HH\rangle\langle HH| + \tfrac{1}{4}|HT\rangle\langle HT| + \tfrac{1}{4}|TH\rangle\langle TH| + \tfrac{1}{4}|TT\rangle\langle TT|,$$ (21)
and the coefficient $\tfrac{1}{4}$ of $|HH\rangle\langle HH|$ gives the probability of getting two “heads” when a set of two coins is tossed. Notice that statistical independence is manifest, as the two-particle density matrix $\rho_{cl}$ is the product of two single-particle density matrices $\rho$. On the other hand, for quantum coins obeying bosonic (fermionic) statistics, the two-particle density matrix is obtained from $\rho_{cl}$ by (anti)symmetrization:
$$\rho_{BE} \equiv \lambda_{BE}\, S\rho_{cl}S = \tfrac{1}{3}|HH\rangle\langle HH| + \tfrac{1}{3}|S\rangle\langle S| + \tfrac{1}{3}|TT\rangle\langle TT|,$$ (22)
$$\rho_{FD} \equiv \lambda_{FD}\, A\rho_{cl}A = |A\rangle\langle A|;$$ (23)
where $S$ and $A$ are the symmetrization and antisymmetrization projection operators, respectively; the $\lambda$’s are normalization constants ensuring that the density matrices are properly normalized, i.e., $\mathrm{Tr}\,\rho_{BE} = \mathrm{Tr}\,\rho_{FD} = 1$; and
$$|S\rangle = (|HT\rangle + |TH\rangle)/\sqrt{2}, \qquad |A\rangle = (|HT\rangle - |TH\rangle)/\sqrt{2}.$$ (24)
Again, the probabilities of getting two “heads” can be read off as the coefficients of the operator $|HH\rangle\langle HH|$. The coefficients are $1/3$ and 0 for bosonic and fermionic statistics, respectively, confirming the values obtained through listing. After all, the density matrix formalism is simply a systematic way to generate and organize the list of all possible combinations, with the probability of each combination appearing as the coefficient of the respective projection operator.

As for the problems on conditional probabilities, the condition decreed in question (II), namely that at least one of the quantum children is a boy, can be imposed by projecting out the “all girls” subspace with the projection operator $P = \mathbf{1} - |GG\rangle\langle GG|$, with $\mathbf{1}$ denoting the identity operator. Acting with $P$ on the two-particle density matrices $\rho_{BE}$ and $\rho_{FD}$ above (and renaming “H” as “B” and “T” as “G”), the projected density matrices are
$$\bar{\rho}_{BE} \equiv \bar{\lambda}_{BE}\, P\rho_{BE}P = \tfrac{1}{2}|BB\rangle\langle BB| + \tfrac{1}{2}|S\rangle\langle S|,$$ (25)
$$\bar{\rho}_{FD} \equiv \bar{\lambda}_{FD}\, P\rho_{FD}P = |A\rangle\langle A|,$$ (26)
where the $\bar{\lambda}$’s are again normalization constants. Again, the conditional probabilities of BB can be easily read off.
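The two-coin algebra of Eqs. (20)–(26) is small enough to transcribe literally. The following sketch works in the product basis $|HH\rangle, |HT\rangle, |TH\rangle, |TT\rangle$ (read H as B and T as G for the crib problem) and builds the symmetrizers and the projection exactly as in the text.

```python
import numpy as np

rho1 = 0.5 * np.eye(2)                      # Eq. (20): single coin, basis |H>, |T>
rho_cl = np.kron(rho1, rho1)                # Eq. (21): distinguishable coins

# Exchange operator X swaps the two particles; S and A project onto the
# symmetric and antisymmetric subspaces.
X = np.zeros((4, 4))
X[0, 0] = X[3, 3] = X[1, 2] = X[2, 1] = 1.0
S = 0.5 * (np.eye(4) + X)
A = 0.5 * (np.eye(4) - X)

rho_BE = S @ rho_cl @ S; rho_BE /= np.trace(rho_BE)    # Eq. (22)
rho_FD = A @ rho_cl @ A; rho_FD /= np.trace(rho_FD)    # Eq. (23)
print(rho_cl[0, 0], rho_BE[0, 0], rho_FD[0, 0])        # P(HH) = 1/4, 1/3, 0

# Question (II): project out |GG><GG| (index 3) and renormalize, Eqs. (25)-(26).
P = np.diag([1.0, 1.0, 1.0, 0.0])
rho_bar = P @ rho_BE @ P; rho_bar /= np.trace(rho_bar)
print(rho_bar[0, 0])                                   # P(BB | at least one B) = 1/2
```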
The condition decreed in question (I), that a randomly chosen quantum child turns out to be a boy, can be imposed by using the “boy annihilation operator” $a_B$, satisfying
$$a_B\,|m_B\ \text{boys},\ m_G\ \text{girls}\rangle = \sqrt{m_B}\,|m_B-1\ \text{boys},\ m_G\ \text{girls}\rangle.$$ (27)
After randomly taking a child out of the crib and finding that it is a boy, the density matrix of the remaining child is
$$\tilde{\rho}_{BE} \equiv \tilde{\lambda}_{BE}\, a_B\rho_{BE}a_B^{\dagger} = \tfrac{2}{3}|B\rangle\langle B| + \tfrac{1}{3}|G\rangle\langle G|,$$ (28)
$$\tilde{\rho}_{FD} \equiv \tilde{\lambda}_{FD}\, a_B\rho_{FD}a_B^{\dagger} = |G\rangle\langle G|,$$ (29)
where as before the $\tilde{\lambda}$’s are normalization constants. The conditional probabilities that the remaining quantum child is a boy (so that both quantum children are boys) again appear as coefficients.

In the remainder of this appendix, we derive the answers (9) and (17) to the quantum day care center problem for bosonic children. Instead of tackling questions (III) and (IV) specifically, we study the more general question (V); questions (III) and (IV) are recovered by setting $k=2$, $N_2=0$, and $N_1=0$ for question (III) or $N_1=1$ for question (IV). Recall that the “state 1 annihilation operator”, $a_1$, annihilates a quantum die in state 1; one can analogously define $a_j$ for the other states, with $1\le j\le k$. For any complex unit vector $\vec{r}=(r_1,\ldots,r_k)$ satisfying $\sum_{j=1}^k |r_j|^2 = 1$, the linear combination $A_{\vec{r}} = \vec{r}\cdot\vec{a}$ (where $\vec{a}=(a_1,\ldots,a_k)$) satisfies $[A_{\vec{r}}, A_{\vec{r}}^{\dagger}]=1$ (with $\hbar$ also set to unity), and is the annihilation operator for a quantum die in the particular state described by the “polarization vector” $\vec{r}$. It is convenient to parameterize the components of $\vec{r}$ in the following way:
$$\begin{aligned}
r_1 &= e^{i\alpha_1}\cos\theta_1, \qquad (30)\\
r_2 &= e^{i\alpha_2}\sin\theta_1\cos\theta_2, \qquad (31)\\
&\ \,\vdots \qquad (32)\\
r_{k-1} &= e^{i\alpha_{k-1}}\sin\theta_1\sin\theta_2\cdots\sin\theta_{k-2}\cos\theta_{k-1}, \qquad (33)\\
r_k &= e^{i\alpha_k}\sin\theta_1\sin\theta_2\cdots\sin\theta_{k-2}\sin\theta_{k-1}. \qquad (34)
\end{aligned}$$
Note that, while a real unit vector in $\mathbf{R}^k$ lies on a $(k-1)$-dimensional sphere and hence is described by $k-1$ angles $\theta_j$, a complex unit vector in $\mathbf{C}^k$ lies on a $(2k-1)$-dimensional sphere and $k$ extra phases $\alpha_j$ are needed. The density matrix of a state with $n$ atoms ($n\gg1$), all polarized in the $\vec{r}$ direction, is given by
$$\rho_{\vec{r}} = (1/n!)\,(A_{\vec{r}}^{\dagger})^n\,|0\rangle\langle 0|\,(A_{\vec{r}})^n.$$ (35)
However, as stated in the problem, all distinct allowable combinations are equally likely. As a result, the density matrix for such a state is a superposition of $\rho_{\vec{r}}$ over all $\vec{r}$:
$$\rho_0 = \frac{\Gamma(k/2)\,k!}{2\pi^{3k/2}\,n!}\int dr_1\,dr_1^{*}\cdots dr_k\,dr_k^{*}\ \delta(|\vec{r}|-1)\,(A_{\vec{r}}^{\dagger})^n\,|0\rangle\langle 0|\,(A_{\vec{r}})^n,$$ (36)
where the asterisks denote complex conjugation and $\Gamma(k/2)\,k!/(2\pi^{3k/2}n!)$ is an overall normalization factor ensuring that $\mathrm{Tr}\,\rho_0=1$.
Since the angles $\alpha_j$ are the phases of $r_j$, this density matrix can be rewritten as
$$\rho_0 = \frac{2^k\,\Gamma(k/2)}{2\pi^{k/2}}\,\frac{k!}{n!}\int_0^{\infty} |r_1|\,d|r_1|\cdots|r_k|\,d|r_k|\int_0^{2\pi}\frac{d\alpha_1}{2\pi}\cdots\frac{d\alpha_k}{2\pi}\ \delta(|\vec{r}|-1)\,(A_{\vec{r}}^{\dagger})^n\,|0\rangle\langle 0|\,(A_{\vec{r}})^n.$$ (37)
We can then express the real unit vector $(|r_1|,\ldots,|r_k|)$ in terms of the angles $\theta_j$. Integrating over the Dirac delta distribution $\delta(|\vec{r}|-1)$ gives a factor of $2\pi^{k/2}/(2^k\,\Gamma(k/2))$, and $\rho_0$ can be recast as
$$\rho_0 = \frac{1}{n!}\int d\Omega_{\mathbf{C}}\int_0^{2\pi}\frac{d\alpha_1}{2\pi}\cdots\frac{d\alpha_k}{2\pi}\ (A_{\vec{r}}^{\dagger})^n\,|0\rangle\langle 0|\,(A_{\vec{r}})^n = \int d\Omega_{\mathbf{C}}\int_0^{2\pi}\frac{d\alpha_1}{2\pi}\cdots\frac{d\alpha_k}{2\pi}\ \rho_{\vec{r}},$$ (38)
where
$$d\Omega_{\mathbf{C}} \equiv \prod_{j=1}^{k-1} dP^{(j)} \equiv \prod_{j=1}^{k-1} 2(k-j)\,\sin^{2(k-j)-1}\theta_j\,\cos\theta_j\,d\theta_j, \qquad \int_{\theta_j=0}^{\pi/2} dP^{(j)} = 1.$$ (39)
The interpretation of Eq. (38) is clear. The measure $d\Omega_{\mathbf{C}}$ gives the probability distribution $f^{(j)}(\theta_j)$ on the domain $[0,\pi/2]$,
$$f^{(j)}(\theta_j)\,d\theta_j \equiv dP^{(j)} = 2(k-j)\,\sin^{2(k-j)-1}\theta_j\,\cos\theta_j\,d\theta_j, \qquad \int_0^{\pi/2} f^{(j)}(\theta_j)\,d\theta_j = 1,$$ (40)
while the phases $\alpha_j$ are equally likely to take any value between 0 and $2\pi$:
$$f(\alpha_j) = 1/2\pi.$$ (41)
Note that the probability distributions of all the angles $\theta_j$ and phases $\alpha_j$ are independent of each other.

We are interested in evaluating the expectation of the number operator $n_1$ under the density matrix (38). It is convenient to introduce the observable $R = n_1/n$, denoting the fraction of quantum dice in state 1; notice that this definition of $R$ coincides with that in problem (III). Since $n_1 = n\cos^2\theta_1$, the observable $R$ can be re-expressed in terms of the angle $\theta_1$ as $R=\cos^2\theta_1$. It is straightforward to rewrite $dP^{(1)}(\theta_1)$ in terms of $R$:
$$dP^{(1)} = f_{0,\ldots,0}(R)\,dR, \qquad f_{0,\ldots,0}(R) = (k-1)(1-R)^{k-2},$$ (42)
where the subscripts remind us that no die of any state has yet been removed. In particular, for $k=2$ we have the answer to question (III): the probability distribution of $R$ is given by $f_0(R)=1$.

Now, with the distribution function $f_{0,\ldots,0}(R)$, it is straightforward to solve problem (V), which is to evaluate the conditional probability distribution given that $n'$ dice have been removed and that, among them, $N_j$ are found to be in state $j$. The density matrix after the selection can be written as $\rho_{N_1,N_2,\ldots,N_k} = \lambda\,(a_1^{N_1}a_2^{N_2}\cdots a_k^{N_k})\,\rho_0\,(a_1^{N_1}a_2^{N_2}\cdots a_k^{N_k})^{\dagger}$, where $\lambda$ is a normalization constant and $\rho_0$ is the density matrix defined in Eq. (38). Since $\rho_{N_1,N_2,\ldots,N_k}\ne\rho_0$, the conditional probability distribution of $R$ is no longer given by $f_{0,\ldots,0}(R)$, but instead by
$$\begin{aligned}
f_{N_1,N_2,\ldots,N_k}(R) &= \frac{R^{N_1}(1-R)^{N_2+\cdots+N_k}\,f_{0,\ldots,0}(R)}{\int_0^1 R^{N_1}(1-R)^{N_2+\cdots+N_k}\,f_{0,\ldots,0}(R)\,dR} \qquad (43)\\
&= \frac{R^{\nu_1-1}(1-R)^{\nu_2+\cdots+\nu_k-1}}{B(\nu_1,\ \nu_2+\cdots+\nu_k)}, \qquad (44)
\end{aligned}$$
reproducing Eq. (18) with $\nu_j = N_j+1$.
In particular, with $k=2$ and $(N_1,N_2)=(1,0)$, question (V) reduces to question (IV), with the answer
$$f_1(R) = 2R,$$ (45)
which agrees with Eq. (17).
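The measure underlying this derivation, a uniformly random complex unit vector $\vec r$ with $R = |r_1|^2$, is easy to sample, which gives a quick Monte Carlo check of Eq. (18). In the sketch below, the conditioning enters as the weight $R^{N_1}(1-R)^{N_2+\cdots+N_k}$, and the choice $k=3$, $(N_1,N_2,N_3)=(1,0,0)$ matches the condensate example of Sect. IV.

```python
import numpy as np

rng = np.random.default_rng(0)
k, Ns = 3, (1, 0, 0)                  # k states; one die observed in state 1

# Uniform points on the complex unit sphere in C^k via normalized Gaussians;
# R = |r_1|^2 is then distributed as f_{0,...,0}(R) = (k-1)(1-R)^(k-2).
z = rng.normal(size=(200_000, k)) + 1j * rng.normal(size=(200_000, k))
r = z / np.linalg.norm(z, axis=1, keepdims=True)
R = np.abs(r[:, 0]) ** 2

# Conditioning on the removed dice reweights by R^{N_1} (1-R)^{N_2+...+N_k}.
w = R ** Ns[0] * (1.0 - R) ** sum(Ns[1:])
print(np.average(R, weights=w))       # ~ nu_1 / sum(nu) = 2/4 = 0.5, cf. Eq. (19)
```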
no-problem/9911/astro-ph9911343.html
# SIMULATIONS OF DAMPED LYMAN-ALPHA AND LYMAN LIMIT ABSORBERS IN DIFFERENT COSMOLOGIES: IMPLICATIONS FOR STRUCTURE FORMATION AT HIGH REDSHIFT

## 1 Introduction

Systems producing absorption in the spectra of distant quasars offer an excellent probe of the early Universe. At high redshifts, they easily outnumber other observed tracers of cosmic structure, including both normal and active galaxies. The interpretation of low column density quasar absorption systems has undergone somewhat of a revolution during the past several years, with the recognition that they may consist of gas aggregating into mildly nonlinear structures analogous in their dynamical structure to today's galaxy superclusters (Cen et al., 1994; Petitjean et al., 1995; Zhang et al., 1995, 1997; Hernquist et al., 1996; Miralda-Escudé et al., 1996; Bi & Davidsen, 1997; Hui, Gnedin, & Zhang, 1997). However, damped Ly$\alpha$ (DLA) absorbers, with neutral hydrogen column densities $N_{\rm HI} \ge 10^{20.3}\,{\rm cm}^{-2}$, are usually thought to be associated with the dense interstellar gas of high-redshift galaxies, based on several lines of circumstantial evidence: the similarity between the column densities of damped systems and the column densities through typical spiral disks today, the rough agreement between the total mass of atomic hydrogen in damped absorbers at $z\sim3$ and the total mass of stars today (Wolfe & Prochaska, 1998), measurements of radial extents $\sim10\,h^{-1}\,{\rm kpc}$ in two DLA systems (Briggs et al., 1989; Wolfe et al., 1993), and direct imaging of a number of DLA hosts from ground-based and HST observations (Rao & Turnshek, 1998; Turnshek et al., 2000; Djorgovski et al., 1996; Fontana et al., 1996; Moller & Warren, 1998; Le Brun et al., 1997). The nature of Lyman limit (LL) absorbers, with $N_{\rm HI} \ge 10^{17.2}\,{\rm cm}^{-2}$, is less well understood, though most models associate them with the outer regions of galaxies (e.g., Mo & Miralda-Escudé 1996).

Analytic studies based on the Press-Schechter (1974) formalism suggested that the abundance of DLA systems might be a strong test of cosmological models, potentially ruling out those models with little power on galaxy scales at $z=3$ (Kauffmann & Charlot, 1994; Mo & Miralda-Escudé, 1994). The most sophisticated of these calculations, that of Kauffmann (1996), implied that the "standard" cold dark matter model (SCDM, with $\Omega_m=1$, $h\equiv H_0/100\,{\rm km\,s^{-1}\,Mpc^{-1}}=0.5$, and a power spectrum normalization $\sigma_8\approx0.7$) could account for the observed abundance of high-redshift DLA systems, with about 30% of the absorption at $z=2.5$ occurring in galaxies with halo circular velocities $v_c>100\,{\rm km\,s^{-1}}$. Katz et al. (1996, hereafter KWHM) presented the first predictions of the amount of DLA and LL absorption based on 3-dimensional hydrodynamic simulations, concluding that these simulations of the SCDM model came within a factor of two of matching the observed DLA abundance but fell nearly an order of magnitude short of reproducing the observed LL absorption. Ma et al. (1997) "calibrated" DLA estimates from collisionless N-body simulations against the KWHM SCDM simulations, then applied this calibration to N-body simulations of cold+hot dark matter (CHDM) models.
They concluded that the CHDM scenario failed to reproduce the observed DLA abundance even with a neutrino fraction as low as $\Omega_\nu=0.2$, strengthening the earlier analytic arguments, which focused on CHDM with $\Omega_\nu=0.3$.

Quinn, Katz, & Efstathiou (1996; QKE hereafter) and Thoul & Weinberg (1996) find that halos with circular velocities $v_c<40\,{\rm km\,s^{-1}}$ are unlikely to harbor DLA absorbers: gas in halos below this limit does not collapse sufficiently to shield itself from the UV background and reach the necessary HI column densities. The shortcoming of the KWHM calculation was that it could not include the contribution from DLA and LL systems below its resolution limit, corresponding to a halo circular velocity $v_c\approx100\,{\rm km\,s^{-1}}$. Consequently, the simulations themselves can provide only a lower limit to the total amount of DLA and LL absorption in the Universe. In Gardner et al. (1997a; GKHW hereafter), we addressed this shortcoming by combining the KWHM results with high resolution simulations of individual, low mass objects similar to those of QKE. We used these simulations to obtain a relation between absorption cross-section $\alpha$ and halo circular velocity $v_c$, which we combined with the Press-Schechter halo abundance to compute the total DLA and LL absorption in the SCDM model. The correction for previously unresolved halos increased the predicted absorption by about a factor of two, bringing the predicted DLA abundance into good agreement with observations but leaving the predicted number of LL systems substantially below the observed number. In Gardner et al. (1997b; GKWH hereafter), we applied the $\alpha(v_c)$ relation derived for SCDM to other cosmological models, obtaining more general predictions for DLA and LL absorption under the assumption that the relation between halo $v_c$ and gas absorption cross-section is independent of cosmological parameters.

In this paper, we present results of simulations of several variants of the inflation+CDM scenario (see, e.g., Katz, Hernquist & Weinberg 1999) and improve upon the GKWH results by using these simulations to predict DLA and LL absorption in these models. We continue to use a Press-Schechter based extrapolation (with the mass function of Jenkins et al. 2001) to compute the contribution of smaller halos to the DLA and LL statistics, employing an improved methodology that significantly changes the GKHW predictions for absorption by low mass systems. Using an improved fitting procedure, we obtain more accurate error estimates for the $\alpha(v_c)$ relation fitted to the simulated data. We find that the largest error in estimating the universal amount of DLA and LL absorption arises from the uncertainty in the exact $v_c$ below which halos cease to harbor these absorbers. Given the large number of halos at $v_{c,min}\approx40\,{\rm km\,s^{-1}}$, a small variation in the exact value or form of this cutoff leads to significant deviations in the estimated total DLA and LL absorption cross sections. In light of these results, we find that we are not yet able to test the four cosmologies we consider against the observed DLA and LL abundances. Instead, we have adopted the approach of determining the value of $v_{c,min}$ in each model that yields the best agreement with the observations.

The nature of the galaxies that host DLA systems has been a controversial topic for many years.
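To see why the low-$v_c$ cutoff matters so much, one can integrate a toy version of the Press-Schechter estimate. The sketch below is purely illustrative and is not the paper's calculation: the power-law cross-section $\alpha(v_c)$ and the halo velocity function $n(v_c)$ are hypothetical stand-ins, chosen only so that low-mass halos dominate the integrand, as the text describes.

```python
import numpy as np

# Toy incidence-rate integral: dN/dz ~ integral of n(v_c) * alpha(v_c) dv_c.
# Both ingredient functions are illustrative assumptions, not the paper's
# fitted alpha(v_c) or its Press-Schechter mass function.
def dN_dz(v_min, p_alpha=2.5, p_n=-4.5, v_star=250.0):
    v = np.linspace(v_min, 600.0, 4000)              # circular velocity [km/s]
    n_v = v ** p_n * np.exp(-((v / v_star) ** 2))    # toy halo velocity function
    alpha = (v / 100.0) ** p_alpha                   # toy absorption cross-section
    return np.trapz(n_v * alpha, v)

for v_min in (35.0, 40.0, 45.0, 50.0):
    print(f"v_min = {v_min:4.0f} km/s : dN/dz ratio = {dN_dz(v_min)/dN_dz(40.0):.2f}")
```

With these assumed slopes the integrand grows toward low $v_c$ roughly as $v_c^{-2}$, so shifting the cutoff by only 5 km/s changes the predicted incidence rate at the ten percent level, which is the kind of sensitivity the text describes.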
Two competing hypotheses have defined the poles of the debate: the idea that most DLA systems are large, rotating gas disks (e.g. Schiano, Wolfe, & Chang 1990), and the idea that a large fraction of DLA absorption arises in dwarf galaxies (e.g. Tyson (1988)). The strongest empirical argument for the dwarf hypothesis is that some imaging studies reveal small galaxies near the line of sight but no clear candidates for large galaxies producing the absorption (e.g. Fontana et al. 1996; Le Brun et al. 1997; Moller & Warren 1998). The recent study of two DLA systems at $`z=0.091`$ and $`z=0.221`$ by Rao & Turnshek (1998) and Turnshek et al. (2000) places especially stringent upper limits on the luminosities of the host galaxies. The strongest argument for the rotating disk hypothesis is the analysis of metal-line kinematics in DLA systems by Prochaska & Wolfe (1997, 1998), who consider a variety of simplified models for the velocity structure of the absorbers and find that only a population of cold, rotating disks with typical circular velocities $`v_c\sim 200\mathrm{km}\mathrm{s}^{-1}`$ can account for the observed distribution of velocity spreads and for the high frequency of “lopsided” kinematic profiles. However, hierarchical models of galaxy formation predict that such massive disks should be rare at $`z\sim 3`$. In an important paper, Haehnelt, Steinmetz, & Rauch (1998) showed that hydrodynamic simulations of high-redshift galaxies could account for the lopsided kinematic profiles and large velocity spreads found by Prochaska & Wolfe (1997, 1998) even with halo circular velocities substantially below $`200\mathrm{km}\mathrm{s}^{-1}`$, because of large scale asymmetries and departures from dynamical equilibrium (see also Ledoux et al. (1998)). This result makes the $`100\mathrm{km}\mathrm{s}^{-1}`$ median halo circular velocities found by Kauffmann (1996) and GKHW for the SCDM model potentially compatible with the observed metal-line kinematics. Our present simulations do not yet have enough resolution for us to repeat the Haehnelt et al. (1998) analysis; we hope to do so with future simulations, carrying out a statistical comparison between results from a randomly chosen cosmological volume and the Prochaska & Wolfe (1997, 1998) data. However, we can already extend the GKHW analysis to other cosmological models, predicting the fraction of DLA absorption arising in halos of different circular velocities.

We also revisit an important issue explored by KWHM for the SCDM model, the predicted distribution of projected separations between DLA and LL systems and high-redshift galaxies. Given the number of recent attempts to directly image DLA host galaxies, our predictions will be useful in testing the compatibility of the size and probable luminosity of our simulated DLA hosts with the imaging data.

Section 2 describes the simulations and our analysis methods. Section 3 presents our analysis of the DLA and LL systems resolved by the simulations. Section 4 describes and applies our procedures for computing the contribution from unresolved halos. We discuss the implications of our results and present our conclusions in §5.

## 2 Simulations and Methods

### 2.1 The Simulations

Our simulations follow the same general prescription as in GKHW, where a periodic cube whose edges measure 11.11$`h^{-1}`$Mpc in comoving units is drawn randomly from a CDM universe and evolved to a redshift $`z=2`$.
We examine the effects of cosmology using five principal simulations detailed in the first five lines of Table 1, where $`\sigma _8`$ is the power spectrum normalization, $`h\equiv H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }_m`$ is the fraction of present-day closure density in matter, $`\mathrm{\Omega }_b`$ is the fraction in baryons, and $`\mathrm{\Omega }_\mathrm{\Lambda }\equiv \mathrm{\Lambda }/(3H_0^2)`$, where $`\mathrm{\Lambda }`$ is the cosmological constant. The quantity $`n`$ is the index of the inflationary fluctuation spectrum, with $`n=1`$ corresponding to scale-invariant fluctuations. These are the same simulations presented by Katz et al. (1999), who studied the clustering properties of the galaxies. We will often refer to the three $`\mathrm{\Omega }_m=1`$ models (SCDM, CCDM, TCDM) collectively as the “critical” models and the two $`\mathrm{\Omega }_m=0.4`$ models (OCDM, LCDM) as the “subcritical” models.

The five principal simulations employ $`64^3`$ gas and $`64^3`$ dark matter particles, with a gravitational softening length of 5$`h^{-1}`$ comoving kpc (3$`h^{-1}`$ comoving kpc equivalent Plummer softening, $`1h^{-1}`$ physical kpc at $`z=2`$). The particle masses are $`1.5\times 10^8M_{\odot }`$ and $`2.8\times 10^9M_{\odot }`$ for the gas and dark matter, respectively, in the critical models and $`6.7\times 10^7M_{\odot }`$ and $`8.3\times 10^8M_{\odot }`$ in the subcritical models. These simulations were performed using TreeSPH (Hernquist & Katz, 1989), a code that unites smoothed particle hydrodynamics (SPH; Lucy (1977); Gingold & Monaghan (1977)) with the hierarchical tree method for computing gravitational forces (Barnes & Hut, 1986; Hernquist, 1987).

The five principal simulations were done to study the effects of cosmology on DLA and LL systems. Because of uncertainties arising from resolution issues, we add to this study two further “next-generation” simulations, L64 and L128, run in the same cosmology but with different mass resolutions to investigate the stability of our results with resolution. These were performed much more recently using PTreeSPH (Davé et al., 1999), a new parallelized version of TreeSPH, and their details are also given in Table 1. L64 has the same mass resolution as OCDM and LCDM, while L128 has a factor of eight greater mass resolution (a factor of two greater spatial resolution) and is valuable for examining absorbers in the lower-mass halos that the five principal simulations cannot resolve.

Detailed descriptions of the simulation code and the radiation physics can be found in Hernquist & Katz (1989); Katz, Weinberg, & Hernquist (1996; hereafter KWH); and Davé, Dubinski, and Hernquist (1997). We only summarize the techniques here. For both simulation codes, dark matter, stars, and gas are all represented by particles; collisionless material is influenced only by gravity, while gas is subject to gravitational forces, pressure gradients, and shocks. We include the effects of radiative cooling, assuming primordial abundances, and Compton cooling. Ionization and heat input from a UV radiation background are incorporated in the simulation. We adopt the UV background spectrum of Haardt & Madau (1996), but reduce it in intensity by a factor of two at all redshifts so that the mean Ly$`\alpha `$ forest flux decrement is close to the observed value given our assumed baryon density (Croft et al., 1997).
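As an aside, the particle masses quoted above follow directly from the box size and the density parameters. The following minimal consistency check is our own illustration, not code from the paper; it assumes the critical-model values $`h=0.5`$ and $`\mathrm{\Omega }_b=0.0125h^{-2}`$ given in the text, and the subcritical comment line further assumes $`h=0.65`$, our guess at the Table 1 value.

```python
RHO_CRIT0 = 2.776e11                 # critical density today, h^2 Msun/Mpc^3

h, omega_m, omega_b = 0.5, 1.0, 0.0125 / 0.5**2    # critical models
L = 11.111 / h                       # box edge, comoving Mpc
N = 64**3                            # particles of each species

m_box = omega_m * RHO_CRIT0 * h**2 * L**3          # total mass in the volume
m_gas = (omega_b / omega_m) * m_box / N            # ~1.45e8 Msun (quoted: 1.5e8)
m_dm = (1.0 - omega_b / omega_m) * m_box / N       # ~2.76e9 Msun (quoted: 2.8e9)
print(f"{m_gas:.2e} {m_dm:.2e}")
# With omega_m = 0.4 and our guessed h = 0.65, the same arithmetic
# reproduces the subcritical masses, 6.7e7 and 8.3e8 Msun.
```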
During the analysis stage, we apply small further adjustments to the background intensity to precisely match the Press, Rybicki, & Schneider (1993) measurements of the mean decrement (see Croft et al. 1997 for further discussion of this procedure). For example, at $`z=3`$, the background intensity is reduced to 20% of the Haardt & Madau value, first by 50% during the simulation, then by a further 40% during post-processing. If we adopted a higher baryon density, $`\mathrm{\Omega }_b=0.02h^{-2}`$ instead of $`\mathrm{\Omega }_b=0.0125h^{-2}`$, then the background intensity matching the observed mean decrement would be a factor $`\sim 2.2`$ higher.

We use a simple prescription to turn cold, dense gas into collisionless “star” particles. The prescription and its computational implementation are described in detail by KWH. Details of the numerical parameters can be found in Katz et al. (1999).

### 2.2 Halo and Absorber Identification

From the simulation outputs at $`z=4,`$ 3, and 2, we identify dark matter halos and the individual concentrations of cold, collapsed gas that they contain. We initially identify the halos by applying a friends-of-friends (FOF) algorithm to the combined distribution of dark matter and SPH particles, with a linking length equal to the mean interparticle separation on an isodensity contour of an isothermal sphere with an enclosed average density contrast of $`\delta =180`$. Then, the position of the most bound particle in each FOF-identified halo is passed on to the spherical overdensity method (SO; Lacey & Cole (1994)), which finds the sphere about the most bound particle that contains an overdensity of $`\delta =180`$. The halos used in our analysis are those output by SO, and the circular velocities, which we denote $`v_c`$, are the actual circular velocities at the $`\delta =180`$ radius ($`R_{180}`$) of each halo. This method of characterizing halo mass has been shown to be the best for computing the halo mass function (Jenkins et al., 2001).

To detect discrete regions of collapsed gas capable of producing Lyman limit and damped Ly$`\alpha `$ absorption, we apply the algorithm of Stadel et al. (2001; see also KWH and http://www-hpcc.astro.washington.edu/tools/SKID) to the distribution of cold gas and star particles. SKID identifies gravitationally bound groups of particles that are associated with a common density maximum. Gas particles are only considered as potential members of a SKID group if they have a smoothed overdensity $`\rho _g/\overline{\rho }_g-1>\delta _{vir}`$ and temperature $`T<30,000`$ K, and we discard groups with fewer than four members (we will apply a more stringent resolution cut below). All of the gas concentrations found by this method reside within a larger friends-of-friends halo, even at $`z=4`$. We match each absorber with its parent (SO) halo and discard halos that contain no absorbers. Including or excluding the “absorberless” halos in our mass function does not change the results above our resolution cutoff (explained below), since nearly all halos above our cutoff contain at least one absorber.

We calculate the HI column densities for the halos by enclosing each halo within a sphere centered on the most tightly bound gas particle and of sufficient size to contain all the gas particles that might contribute to high column density absorption within the halo.
We then project the gas distribution within this sphere onto a uniform grid with a cell size of 5.43 comoving kpc, equal to the highest resolution achieved anywhere in the simulation, using the same spline kernel interpolation employed by the TreeSPH code for the hydrodynamics. For the L128 run, we use a pixel size of 2.715 comoving kpc, as the peak spatial resolution is a factor of two better in each dimension than in the other simulations. Following KWHM, we calculate an initial HI column density for each grid point assuming that the gas is optically thin, then apply a self-shielding correction to yield a true HI column density (see KWHM for details). For each halo we compute the projected area over which it produces damped absorption, with $`N_{\mathrm{HI}}>10^{20.3}\mathrm{cm}^{-2}`$, and Lyman limit absorption, with $`N_{\mathrm{HI}}>10^{17.2}\mathrm{cm}^{-2}`$. For simplicity, we project all halos from a single direction, although we obtain a similar fit of absorption area to circular velocity (see below) if we project randomly in the $`x`$, $`y`$, and $`z`$ directions or average the projections in $`x`$, $`y`$, and $`z`$. Projecting a rectangular prism instead of a sphere yields the same results. To test for convergence, we reprojected several halos at 2 and 4 times smaller grid spacings and found that the cross section for DLA and LL absorption changed by less than 1% in the majority of cases and by at most 2.5%.

### 2.3 Numerical Resolution Considerations

Our five principal simulations, which each contain $`64^3`$ gas and $`64^3`$ dark matter particles, lack the dynamic range needed to model simultaneously the full mass range of objects that can contribute to DLA and LL absorption. Simulations by QKE and GKHW show that halos with circular velocities as low as $`35\mathrm{km}\mathrm{s}^{-1}`$ can host DLA absorbers, while photoionized gas is unable to collapse and cool in smaller halos. However, if we adopted a particle mass low enough to resolve $`35\mathrm{km}\mathrm{s}^{-1}`$ halos while retaining the same particle number, then our simulation volume would be too small to include a representative sample of more massive halos.

In our analysis of the simulation results, we find that halos consisting of at least 60 dark matter particles nearly always (more than 98% of the time) contain a cold, dense gas concentration. Below this threshold, however, a substantial fraction of halos have no cold gas concentration. Furthermore, in our bootstrap analysis of the variance in the relation between halo $`v_c`$ and absorption cross-section, described in §4.1 below, we find much larger scatter about the mean relation for halos with fewer than 60 dark matter particles than for halos with more. To safeguard against additional systematic effects near the resolution boundary, we compare the high-resolution L128 run to L64. We find that the mass cut of $`M_{res}=60(m_{dark}+m_{SPH})`$ also allows the L64 halo properties to match smoothly onto those of same-mass halos in L128. Near perfect agreement for DLAs (cf. Figure 7 and later discussion) is found with a mass corresponding to 70 particles. So, to be as conservative as possible, we adopt $`M_{res}=70(m_{dark}+m_{SPH})`$ as an estimate of the limiting mass below which we cannot accurately compute the amount of absorption in a simulated halo. The 70 particle criterion is more conservative than the 34 particle criterion that we adopted in GKHW, and this change will affect our predictions for absorption in lower mass halos in §4 below.
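For orientation, a mass cut of this kind translates into a limiting circular velocity through the $`\delta =180`$ sphere of §2.2. The sketch below is our own illustration of that conversion, not code from the paper; it assumes the overdensity is taken relative to the mean matter density (for the critical models the two conventions coincide) and reproduces the limiting velocities quoted in the next paragraph to within a few $`\mathrm{km}\mathrm{s}^{-1}`$.

```python
import numpy as np

G = 4.30e-9            # gravitational constant, Mpc (km/s)^2 / Msun
RHO_CRIT0 = 2.776e11   # critical density today, h^2 Msun / Mpc^3

def vc_res(m_res, z, omega_m=1.0, h=0.5, delta=180.0):
    """Circular velocity (km/s) at the delta=180 radius of a halo of
    total mass m_res (Msun) at redshift z."""
    rho = omega_m * RHO_CRIT0 * h**2 * (1.0 + z)**3   # physical Msun/Mpc^3
    r180 = (3.0 * m_res / (4.0 * np.pi * delta * rho))**(1.0 / 3.0)
    return np.sqrt(G * m_res / r180)

for z in (2, 3, 4):
    print(z, f"{vc_res(2.7e11, z):.0f}")   # -> 142, 164, 183 km/s
```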
Applying this same mass cut to the L128 run, we find the limiting resolution to be roughly $`50\mathrm{km}\mathrm{s}^{-1}`$ at $`z=3`$, which is unfortunately still above the mass at which halos cease to host DLA and LL absorption. Consequently, even the increased dynamic range of L128 does not allow us to model all absorbers. In the critical density models, the mass resolution limit is $`M_{res}=2.7\times 10^{11}M_{\odot }`$, corresponding to a circular velocity at the virial radius of $`v_{c,res}=140`$, 160, and $`180\mathrm{km}\mathrm{s}^{-1}`$ at $`z=2`$, 3, and 4, respectively. In the subcritical models, $`M_{res}=8.2\times 10^{10}M_{\odot }`$, corresponding to $`v_{c,res}=89`$, 100, and $`112\mathrm{km}\mathrm{s}^{-1}`$ at $`z=2,`$ 3, 4. In L128, the mass resolution is a factor of eight better, hence $`M_{res}=1.0\times 10^{10}M_{\odot }`$ and $`v_{c,res}=50\mathrm{km}\mathrm{s}^{-1}`$ at $`z=3`$. For our statistical analyses of the simulation results below, we always eliminate absorbers in halos whose total mass (dark matter plus baryons, with the spherical overdensity mass definition given above) is $`M<M_{res}`$. Our quoted results apply only to halos above the resolution limit. In §4 we attempt to compute, as a function of $`v_c`$, the contribution of halos with $`M<M_{res}`$ to the total amount of DLA and LL absorption by combining the Jenkins et al. (2001) mass function with our numerical results.

We assume throughout our subsequent analysis and discussion that our results for absorption in halos with $`M>M_{res}`$ are only minimally influenced by the residual effects of finite numerical resolution. We give evidence in §4 from the L128 run that the five principal runs are not influenced by resolution effects within a factor of 10 of their resolution cutoff. However, real absorption systems could have substructure that produces large fluctuations in HI column density on scales far below the resolution limit of even our highest resolution simulation. In this scenario, the total amount of neutral gas in absorption systems would not be very different from our predictions, except to the extent that clumping shifts gas above or below the DLA/LL column density thresholds, but the distribution of column densities above the thresholds could be quite different. This issue will be difficult to address by direct numerical simulation alone because of the large range of scales involved. However, good agreement between predicted and observed column density distributions would support the contention that the absorbers do not have a great deal of substructure on scales below the simulation resolution limits. KWHM find good agreement between the predicted and observed shape of the column density distribution in the DLA regime, but a compelling case along these lines will require simulations that do resolve the full mass range of objects responsible for damped absorption, and which accurately resolve the low-end cutoff where halos cease to contain such absorption.

## 3 Simulation Results

### 3.1 Absorption in Collapsed Objects

Figures 1 and 2 show the incidence of DLA and LL absorption in our five cosmological models: $`n(z)`$ is the mean number of absorbers intercepted per unit redshift above the DLA (Fig. 1) or LL (Fig. 2) column density threshold. The numerical results for halos above the mass resolution limit $`M_{res}`$ are shown at $`z=2,`$ 3, and 4. Observational results for DLA absorption are taken from Storrie-Lombardi & Wolfe (2000; cf. also Storrie-Lombardi, Irwin, & McMahon 1996a; Wolfe et al.
1995) and for LL absorption from Storrie-Lombardi et al. (1994). When comparing the subcritical models to the critical models, note that the subcritical models have lower $`M_{res}`$ and therefore sample the distribution of absorbers down to a lower mass cutoff, boosting the $`n(z)`$ prediction relative to that of the critical models. Taken directly from the simulations and from halos only above $`M_{res}`$, the values in these plots are hard lower limits to the predicted $`n(z)`$. The limiting circular velocities $`v_{c,res}`$ are below the value $`v_c\sim 120\mathrm{km}\mathrm{s}^{-1}`$ inferred by Prochaska & Wolfe (1997, 1998) for typical DLA circular velocities based on a rotating disk model for metal-line kinematics, and even so the predicted number of DLA absorbers is usually a factor of two or more below the observed value. We conclude that if the inflationary CDM models considered here are even approximately correct, then the asymmetries and large velocity spreads found by Prochaska & Wolfe must be a result of complex geometry and non-equilibrium dynamics, as proposed by Haehnelt et al. (1998).

Figure 3 shows the distribution of impact parameters $`D_{\mathrm{proj}}`$, in physical units, between high column density absorbers and the centers of neighboring galaxies. Specifically, the contour level represents the percentage of lines of sight of a given $`N_{\mathrm{HI}}`$ for which the closest simulated galaxy, in projection, lies within a projected distance $`D_{\mathrm{proj}}`$. Note that in this figure and in subsequent figures we represent distance in kpc and not $`h^{-1}`$ kpc. Nearly all of the high column density systems in our simulations are associated with a galaxy, with the highest column density systems sampling the innermost regions of the galaxy and the lower column density systems occurring at larger impact parameters. At $`z=2`$, nearly all DLA systems lie within 15-20 kpc of a galaxy center, and nearly all LL systems lie within 30 kpc. At higher redshifts, the most likely impact parameter increases, which could indicate a physical contraction of DLA systems as they age or could alternatively reflect the higher neutral fraction associated with a given overdensity at higher redshift (similar to the interpretation of evolution of the low column density forest given by Hernquist et al. (1996) and Davé et al. (1999)). This increase can easily be seen in Figure 4, which plots the mean impact parameter for systems at the DLA and LL cutoffs and compares them with the virial radius of a $`v_c=150\mathrm{km}\mathrm{s}^{-1}`$ halo. The mass, and therefore the size, of an isothermal sphere with a given circular velocity goes as $`(1+z)^{-3/2}`$. Not only does the mean absorption cross section increase at redshift $`z>3`$, but the size of the parent halos decreases, meaning that at $`z=4`$ the fraction of the area of the halo subtended by DLA and especially LL absorption is much larger than at $`z=2`$. We will further examine absorber area vs. halo virial radius in Section 4.1.

Figure 5 shows the fraction of critical density in cold collapsed gas, $`\mathrm{\Omega }_{ccg}`$ (solid line), and the fraction of the critical density in stars, $`\mathrm{\Omega }_{\ast }`$ (dotted line), as a function of redshift in the various cosmological models. In subcritical models, we define $`\mathrm{\Omega }_x(z)\equiv \rho _x(z)\times (1+z)^{-3}/\rho _c(z=0)`$, i.e., $`\mathrm{\Omega }_x`$ represents the comoving density of component $`x`$ relative to the critical density at $`z=0`$.
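To make this definition concrete, the sketch below shows one way such a quantity can be accumulated from the projected halo maps of §2.2. It is our own illustration with a hypothetical helper name, counts HI only, and ignores helium; the paper's actual procedure is described next.

```python
import numpy as np

M_H, MSUN, KPC = 1.673e-24, 1.989e33, 3.086e21   # cgs constants

def hi_mass(nhi_grid, cell_kpc_physical, nhi_max=None):
    """HI mass (Msun) in one projected halo map.  nhi_grid holds N_HI in
    cm^-2; cell_kpc_physical is the pixel size in *physical* kpc (the
    comoving 5.43 kpc divided by 1+z).  With nhi_max set, sightlines
    above that column are excluded, mimicking the 'observational'
    Omega_obs introduced below."""
    m = M_H * nhi_grid * (cell_kpc_physical * KPC)**2 / MSUN
    if nhi_max is not None:
        m = m[nhi_grid <= nhi_max]
    return m.sum()

# Summing hi_mass over all halos with v_c >= v_c,res, converting the
# result to a comoving density, and dividing by rho_crit(z=0) gives
# Omega_ccg (or Omega_obs) in the sense defined above.
```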
In the analysis itself, we obtain $`\mathrm{\Omega }_{ccg}`$ by integrating the column density distribution $`f(N_{\mathrm{HI}})`$ over all of the halos in the simulation. Error crosses show the values derived by Storrie-Lombardi, McMahon, & Irwin (1996b) from a sample of DLA systems. The largest observed column density for DLA systems in the Storrie-Lombardi et al. (1996ab) sample is $`10^{21.8}\mathrm{cm}^{-2}`$, probably because higher column density systems are too rare to have been detected. For a more direct comparison to the data, we therefore compute an “observational” value, $`\mathrm{\Omega }_{obs}`$, for which we only count gas along lines of sight with $`N_{\mathrm{HI}}\leq 10^{21.8}\mathrm{cm}^{-2}`$. The contribution to $`\mathrm{\Omega }_{ccg}`$ from higher column density systems is generally small, but it is significant in the SCDM model. In all cases, we include only gas in halos with $`v_c\geq v_{c,res}`$.

For all the models in Figure 5, $`\mathrm{\Omega }_{\ast }`$ increases steadily as the Universe evolves. However, $`\mathrm{\Omega }_{ccg}`$ remains constant to within a factor of two from redshift 4 down to redshift 2, indicating that additional gas cools and collapses to replace the gas that is turned into stars. Gas reaches higher densities in the SCDM and CCDM models, leading to a larger difference between $`\mathrm{\Omega }_{ccg}`$ and $`\mathrm{\Omega }_{obs}`$. In nearly all cases, our “observed” cold gas densities $`\mathrm{\Omega }_{obs}`$ fall at least a factor of two below the observational data of Storrie-Lombardi et al. (1996b), which could themselves be underestimates of the true cosmological values of $`\mathrm{\Omega }_{ccg}`$ if dust extinction is important (Pei & Fall, 1995). If any of these models are to be viable, a substantial fraction of the high redshift HI must reside in systems below our resolution limit, an issue to which we turn in §4.

To obtain the simulation values of $`\mathrm{\Omega }_{ccg}`$ in Figure 5, we had to alter the procedure described in §2.2 for computing HI column densities. While our standard grid spacing of 5.43 comoving kpc is sufficient to resolve objects with HI columns of $`10^{20.3}\mathrm{cm}^{-2}`$ and lower, a finer mesh is required to resolve the cold dense knots of gas at higher column densities, which contribute significantly to the $`\mathrm{\Omega }_{ccg}`$ integral. As described in KWHM, we generate the initial HI map assuming complete transparency, then use the mass, HI mass fraction, and temperature of each grid cell to correct the HI column density for its ability to shield itself from the surrounding radiation. Typically, a high column density grid cell contains some regions where the hydrogen should be partly ionized and some where it should be completely neutral owing to self-shielding effects. At the standard resolution of 5.43 comoving kpc, our procedure may average the contributions of these two regions before the self-shielding correction is applied, resulting in a lower neutral column density than if the identical correction procedure were applied with a smaller grid spacing. This effect is not important for computing $`n(z)`$, the number of systems with $`N_{\mathrm{HI}}`$ above the DLA cutoff, but it can be important for computing the total mass density of neutral gas, $`\mathrm{\Omega }_{ccg}`$. We examined the effect by reprojecting some of the halos at $`z=2`$, 3, and 4 in the SCDM model at 2 and 4 times the original spatial resolution.
The original resolution of 5.43 kpc underpredicts the total HI in the simulation, while the cold collapsed gas mass in the 2X and 4X cases is nearly identical. Consequently, we regard the 2X case as numerically converged. Unfortunately, reprojecting all the simulation outputs at this higher resolution is not computationally feasible. We therefore developed an approximate procedure based on the original grid spacing, calibrated against the few higher resolution SCDM projections. In grid cells where the self-shielding corrected HI column is greater than a threshold value $`N_{\mathrm{HI},c}`$, we treat as fully neutral all gas particles that contribute to that grid cell and meet the following criteria: temperature $`T<30,000`$ K and gas density $`\rho _g>(1000/177)\rho _{vir}(\mathrm{\Omega }_b/\mathrm{\Omega }_m)`$, where $`\rho _{vir}`$ is the virialization overdensity described in §2.2. For critical models, this density cut corresponds to $`1000\mathrm{\Omega }_b`$ times the critical density. In subcritical models, the density cut occurs at the same fraction of the critical density as in the $`\mathrm{\Omega }_m=1`$ models. We find that for $`\mathrm{log}N_{\mathrm{HI},c}=(20.4,20.7,20.7)`$ at $`z=(2,3,4)`$ this procedure reproduces the SCDM high resolution values for $`\mathrm{\Omega }_{ccg}`$ and $`\mathrm{\Omega }_{obs}`$ to within 10%.

### 3.2 Other Possible Sources of Absorption

It is possible that some Lyman limit and/or DLA absorption originates from regions other than galactic halos. To investigate this alternative within our simulations, we project the entire simulation volume and compare the area of LL and DLA absorption to the sum of the absorption calculated by projecting each halo individually. In the analysis presented here, we use all halos that have at least one group identified by SKID as described in §2.2 (i.e. at least one concentration of cold gas that is gravitationally bound), whether or not the halo itself has $`M\geq M_{res}`$. Above $`M=M_{res}`$, 98% of the dark matter halos harbor at least one SKID-identified group. We have removed TCDM from the analysis in this section due to the extreme paucity of structure in the model.

For the remaining four models, we calculate the total area subtended by DLA absorption in the halos with SKID-identified groups. Comparing this value to the total area subtended in an entire volume projection of each simulation at redshifts $`z=2`$ and $`z=4`$, we find agreement within 4.5% for all the models at both redshifts, and to better than 2% in five of the eight cases. We attribute the remaining differences to having more than one absorber along a given line of sight. Hence, all DLA absorption in the simulation occurs within halos with at least one concentration of cold, gravitationally-bound gas. In LL absorption, five of the eight outputs agree to better than 6% when compared in this manner. However, at $`z=4`$ the results of volume projection and halo projection differed by 15%, 30%, and 13% for SCDM, CCDM, and OCDM respectively. We took the worst case, $`z=4`$ CCDM, and projected all the halos that contained at least 32 particles (gas $`+`$ dark matter), whether or not they contained a SKID-identified gas concentration. When we sum the area subtended by LL absorption in these halos, we find that it now accounts for all but 1.2% of the LL absorption found by projecting the entire volume.
Hence Lyman limit absorption still occurs exclusively in halos, but in this instance 30% of it occurs in halos in which our resolution of gas dynamics and cooling is only marginal and in which there are no SKID-identified gas concentrations. If our study had higher resolution, it is likely that some of these halos would have been able to form DLA systems as well. However, it is just to correct for these unresolved or under-resolved halos that we developed our Press-Schechter correction technique. The important conclusion is that all the Lyman limit absorption we find in the simulations resides within dark matter halos. If Lyman limit absorption were to occur outside galactic halos, it would have to be in regions that are much too small for us to resolve.

To summarize, all DLA and LL absorption in our simulations occurs in dynamically bound dark matter halos, even below our resolution cutoff. At $`z=2`$ and $`z=4`$, in all four of the models tested, DLA absorption arises entirely in objects identified by SKID as bound concentrations of cold gas. LL absorption in the simulations also occurs exclusively in dark matter halos, although at $`z=4`$ some of the gas within these halos is able to reach LL column densities without becoming a SKID-identified concentration.

## 4 DLA and LL Absorption by Low Mass Halos

### 4.1 Motivation from Higher Resolution Simulation

We have so far focused on DLA and LL absorption in halos above our simulation mass resolution limits $`M_{res}`$. However, if we want to test cosmological models against the observed incidence $`n(z)`$, we must also consider the absorption that arises in lower mass halos, which are smaller in cross section but much more numerous. The L128 run is a factor of eight finer in mass resolution than the other simulations in this study, allowing us to examine trends in DLA and LL systems down to lower masses.

Figure 6 compares, at $`z=3`$, the circular velocity at $`R_{180}`$ of the halos in L128 with the cross section subtended by DLA absorption (left panel) and LL absorption (right panel) when the halo is projected. The number of vertices on each point indicates the number of SKID-identified concentrations of cold collapsed gas within the halo. We can see that the absorption characteristics of galactic halos are not well approximated by assuming a single galaxy per halo. Instead, the correlation of absorption cross section $`\alpha (z,v_c)`$ with halo mass seems to arise not from a single galaxy in each halo becoming larger as its parent halo increases in mass, but rather from more massive halos harboring more galaxies. Consequently, the multiple-absorber nature of halos is extremely important in modeling the connection between halo mass and absorption cross section. To approach the problem semi-analytically, it is necessary to model the full interaction history of the halos, as is done in Maller et al. (2000). The absorption cross sections of the individual galaxies are virtually independent of halo mass. If any trend exists, it appears that $`\alpha (z,v_c)`$ in halos containing only one galaxy may actually decrease slightly in more massive halos. Higher mass halos have deeper potential wells, causing the concentrations of cold gas to contract more efficiently. This complex gas dynamical behavior demonstrates the value of a fully numerical treatment in modeling these objects.
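Because the scalings discussed next are quoted as fractions of the virial radius, it helps to fix the geometry. The sketch below is our own illustration, assuming an isothermal sphere with the $`\delta =180`$ density contrast of §2.2 taken relative to the mean matter density; nothing in it is fit to the simulation data.

```python
import numpy as np

G = 4.30e-9            # Mpc (km/s)^2 / Msun
RHO_CRIT0 = 2.776e11   # h^2 Msun / Mpc^3

def r_vir(vc, z, omega_m=1.0, h=0.5, delta=180.0):
    """Comoving virial (delta=180) radius, in Mpc, of an isothermal
    sphere whose circular velocity at that radius is vc (km/s)."""
    rho = delta * omega_m * RHO_CRIT0 * h**2 * (1.0 + z)**3   # physical
    return vc * np.sqrt(3.0 / (4.0 * np.pi * G * rho)) * (1.0 + z)

def alpha_disk(vc, z, frac):
    """Comoving area pi*(frac*R_vir)^2; frac ~ 0.1 (centrifugal
    estimate), ~0.29 (DLA trend), ~0.63 (LL trend), as discussed next."""
    return np.pi * (frac * r_vir(vc, z))**2
```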
The dashed lines in Figure 6 show the area subtended by a face-on disk of radius $`R=0.1R_{vir}`$, where $`R_{vir}`$ is the virial radius of an isothermal sphere with circular velocity $`v_c`$ at the virial radius; 10% of the virial radius is the typical extent of a galaxy based on centrifugal arguments. The solid line in the left (DLA) panel of Figure 6 shows the area subtended by a face-on disk of radius $`0.29R_{vir}`$, while in the right panel it denotes the area $`\alpha =\pi (0.63R_{vir})^2`$. Although the solid lines were not fit to the data, one can see that the general trend is for the DLA and LL cross-sections to be roughly proportional to $`(0.29R_{vir})^2`$ and $`(0.63R_{vir})^2`$ respectively.

### 4.2 Testing Extrapolations to Lower Mass Halos

We would like to be able to extrapolate the contribution to the total incidence from halos with $`v_{c,min}<v_c<v_{c,res}`$. Given the results from Section 4.1, it is plausible to assume a power law fitting function $`\alpha _{\mathrm{PL}}(z,v_c)\equiv Av_c^B`$. QKE find that a photoionizing background suppresses the collapse and cooling of gas in halos with circular velocities $`v_c<v_{c,min}=37\mathrm{km}\mathrm{s}^{-1}`$. Thoul & Weinberg (1996) find a similar cutoff in simulations of much higher resolution that assume spherical symmetry. As discussed in §2.3, $`v_{c,res}`$ is approximately $`140\mathrm{km}\mathrm{s}^{-1}`$ in the critical models and $`89\mathrm{km}\mathrm{s}^{-1}`$ in the subcritical models at $`z=2`$ for the five principal runs. In L128, which has a resolution limit of $`v_{c,res}\approx 50\mathrm{km}\mathrm{s}^{-1}`$, we find no evidence of a photoionization cutoff; simulations with better mass resolution are required to detect it. For the $`\alpha (v_c)`$ dependence we find for resolved halos in our simulations, low mass halos dominate the total cross-section for DLA and LL absorption. Therefore, the predicted incidence $`n(z)`$ depends sensitively on the assumed value of $`v_{c,min}`$. Since we cannot robustly predict $`n(z)`$ without exact knowledge of $`v_{c,min}`$, we adopt the less ambitious goal of determining, for each cosmological model, what value of $`v_{c,min}`$ yields a good match to the observed values of $`n(z)`$. Our approach to this calculation is similar to that of GKHW: we use our numerical simulations to calibrate a fit to the mean cross section for DLA (or LL) absorption of halos with circular velocity $`v_c`$ at redshift $`z`$, then integrate over an analytic halo mass function to compute $`n(z)`$.

Figure 7 compares $`\alpha (v_c)`$ in the L128 and L64 simulations at redshift $`z=3`$. Figure 8 compares the cumulative incidence $`n(z,v_c)`$, the total incidence from absorbers in halos at least as massive as $`v_c`$, in L128 and L64. Note that in this figure the incidence is measured directly from the simulations and not by convolving an analytic mass function with $`\alpha _{\mathrm{PL}}(z,v_c)`$ as is done later in this section. The runs were performed using the same cosmology but a factor of eight different mass resolution. In the absence of systematic resolution effects above our cutoff $`v_{c,res}`$, the data from the L64 simulation should smoothly overlap with L128 above the L64 $`v_{c,res}`$ value. For the DLA case, although L64 appears to have more low-mass outliers than L128, the simulations agree quite well both in $`\alpha (v_c)`$ and incidence. In the LL case, however, we find that we systematically underestimate the absorption cross section of LL absorbers in L64 by roughly 25%.
This leads to the cumulative incidence of the L64 simulation also being 25% less than at the same $`v_c`$ in L128. Therefore, at the L64 resolution we are not resolving all LL absorption regions that exist inside halos with $`v_c\geq v_{c,res}`$. One possibility is that areas whose average HI column density is slightly below the LL cutoff ($`N_{\mathrm{HI}}\lesssim 10^{17.2}\mathrm{cm}^{-2}`$) on $`\sim 5`$ kpc scales have smaller, patchy clumps with higher column density. Hence, the lower resolution simulations would systematically underestimate the LL absorption in these regions. For DLA absorption, however, we detect no signatures of numerical resolution artifacts.

To find the best fitting function for the absorption cross section, we bin halos in 0.05 dex increments in $`\mathrm{log}v_c`$, beginning with $`v_{c,res}`$ and subject to the constraint that there be at least 10 halos in each bin. We are sometimes forced to widen the bin size to satisfy the latter constraint. Let $`\sigma _{DLA}=\alpha (v_c,z)`$ denote the “cross section” of DLA absorption, i.e. the comoving area subtended by HI column densities $`N_{\mathrm{HI}}\geq 10^{20.3}\mathrm{cm}^{-2}`$ when a halo is projected onto a plane. For the binned distribution of halos, we determine the log of the average halo DLA absorption cross section, $`\mathrm{log}\sigma _{DLA}`$, for each bin. Then we calculate the statistical uncertainty of $`\mathrm{log}\sigma _{DLA}`$ in each bin by using the bootstrap method with 1000 random realizations of the data set of halos with $`M\geq M_{res}`$. The distribution of halo cross sections $`\sigma _{DLA}`$ at a given $`v_c`$ is approximately log-normal and hence best described by a Gaussian in log space. Since we will use the bootstrap errors in the next section to calculate confidence limits, which assume Gaussianity, we express the errors in log space. We fit the points $`\mathrm{log}\sigma _{DLA}`$ by linear least squares to determine the parameters $`A`$ and $`B`$, the amplitude and index of the power law fitting function $`\alpha _{\mathrm{PL}}(z,v_c)\equiv Av_c^B`$.

The error crosses in Figure 7 show the mean absorption cross section in each bin of circular velocity at redshift $`z=3`$ for DLA and LL systems. The horizontal error bars show the width of each bin, and the vertical error bars show the $`1\sigma `$ logarithmic uncertainty in $`\sigma _{DLA}`$ for the bin determined by the bootstrap procedure. The solid error crosses are for L128 data and the dashed error crosses for L64. The solid and dashed lines denote the best fits to the L128 and L64 data respectively. For DLA systems, the fit to the L64 data is nearly identical to the L128 fit, showing that data at the resolution of the principal runs can be used reliably to estimate the absorption cross section below their resolution cutoff. The LL fit is systematically lower, reflecting the mismatch between L128 and L64 halo absorption. However, the two lines parallel each other, meaning that the fitting procedure is robust and allows an accurate extrapolation to lower halo masses. It is important to note that we are not seeking to model the distribution of absorption cross sections in each bin, but only to characterize the mean cross section in each bin so that the total absorption can be accurately reconstructed from equation (1) below.
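A minimal sketch of the binned bootstrap fit just described follows. It is our own illustration rather than the analysis code itself; the bin-widening logic in particular is a guess at one reasonable implementation, and it assumes every halo in the sample has a nonzero cross section.

```python
import numpy as np

def fit_alpha_pl(vc, sigma, dex=0.05, min_count=10, n_boot=1000, seed=0):
    """vc: halo circular velocities (km/s); sigma: projected DLA or LL
    cross sections.  Returns (logA, B) of alpha_PL = A * vc**B, plus the
    bin centers, log-mean cross sections, and bootstrap errors."""
    rng = np.random.default_rng(seed)
    order = np.argsort(vc)
    logv, sigma = np.log10(vc)[order], sigma[order]
    x, y, err, lo = [], [], [], 0
    while lo < len(logv):
        hi = np.searchsorted(logv, logv[lo] + dex)
        while hi - lo < min_count and hi < len(logv):  # widen bin if needed
            hi = np.searchsorted(logv, logv[hi] + dex)
        if hi - lo < min_count:
            break                                      # drop underfilled tail
        s = sigma[lo:hi]
        boot = [np.log10(rng.choice(s, s.size).mean()) for _ in range(n_boot)]
        x.append(logv[lo:hi].mean())
        y.append(np.log10(s.mean()))
        err.append(np.std(boot))
        lo = hi
    x, y, err = map(np.array, (x, y, err))
    B, logA = np.polyfit(x, y, 1, w=1.0 / err)         # weighted least squares
    return logA, B, x, y, err
```

The returned amplitude and index play the role of the fit parameters $`A`$ and $`B`$ tabulated in Table 2.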
The large spread and asymmetry in the distribution of absorption cross sections for individual halos is inconsequential for our purposes: $`\alpha _{\mathrm{PL}}(v_c,z)`$ is the number which, when multiplied by the halo number density at $`v_c`$, yields the total absorption at that $`v_c`$ that matches the total absorption present in the simulation. The bootstrap technique yields a robust estimate of the statistical uncertainty in this mean cross section caused by the finite number of halos in the simulation. Although the spread in absorption cross sections of individual halos may be large, their average absorption is a well determined quantity.

### 4.3 Results

Figure 9 shows the fitted relation $`\alpha _{\mathrm{PL}}(z,v_c)`$ for each of the four cosmologies at redshifts $`z=(2,3,4)`$. Again we have removed TCDM from the analysis in this section due to the extreme paucity of structure in the model; with so few halos above the mass limit, we felt it pointless to attempt a fit to the data. The values of the fit parameters are detailed in Table 2. At $`z=4`$, the fits for each model tend to follow the same slope (the exception being LL SCDM). At later redshifts, $`\alpha _{\mathrm{PL}}(z,v_c)`$ evolves differently for different models. In general, the LCDM fits tend to be among the steepest. Interestingly, this steepness is not paralleled by OCDM at $`z=2`$, where the cross-sections of the more massive halos in OCDM have decreased markedly. CCDM tends to be flatter than SCDM; the increased amplitude of structure apparently results in halos having smaller gas cross-sections. It is difficult to draw any general conclusions about the behavior of high column density absorbers in critical models vs. subcritical models, except that LCDM tends to have the highest cross-sections in massive halos.

We are now in a position to correct the $`n(z)`$ estimates from §3 to include the contribution from halos with $`M<M_{res}`$. Our approach to this problem is similar to that of GKHW, though there are important differences of detail that significantly affect the end results, as we discuss later. We compute the number density of halos $`N(M,z)`$ as a function of mass at a specified redshift using the mass function of Jenkins et al. (2001; see their equation 9). Multiplying $`N(M,z)`$ by our numerically calibrated functions $`\alpha _{\mathrm{PL}}(v_c,z)`$, and integrating from $`v_c`$ to infinity, yields the number of DLA (LL) absorbers per unit redshift residing in halos of circular velocity greater than $`v_c`$:

$$n(z,v_c)=\frac{dr}{dz}\int _{M(v_c)}^{\infty }N(M^{\prime },z)\,\alpha _{\mathrm{PL}}(v_c^{\prime },z)\,dM^{\prime },$$ (1)

where $`v_c^{\prime }`$ is the circular velocity at $`R_{180}`$, the $`\delta =180`$ radius, of a halo of mass $`M^{\prime }`$, and $`r`$ is the comoving distance (see GKHW for detailed discussion). If one takes the lower limit of the integral to be $`v_{c,min}`$, the minimum circular velocity for gas cooling and condensation, this yields $`n(z)`$, the total incidence of DLA (or LL) absorption at redshift $`z`$.

Figure 10 shows the results of this exercise. The curves show the cumulative incidence $`n(z,v_c)`$ as a function of $`v_c`$ for redshifts $`z=(2,3,4)`$. The error bars for SCDM and LCDM are shown at three representative locations and show the $`1\sigma `$ error region resulting from the bootstrap uncertainty in the fits for $`\alpha _{\mathrm{PL}}(z,v_c)`$.
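A sketch of how equation (1) can be evaluated numerically is given below. The helper functions are hypothetical stand-ins for the paper's actual choices (the Jenkins et al. 2001 mass function, the $`v_c(M)`$ relation of §2.2, and the cosmology-dependent $`dr/dz`$), and a large finite upper limit replaces the formal infinity.

```python
import numpy as np
from scipy.integrate import quad

def incidence(vc_min, z, A, B, mass_fn, m_of_vc, vc_of_m, dr_dz):
    """Eq. (1): absorbers per unit redshift in halos with circular
    velocity above vc_min.  mass_fn(M, z) is the comoving halo mass
    function dN/dM; m_of_vc and vc_of_m invert the v_c(M) relation;
    dr_dz(z) is the comoving distance element; A*vc**B is alpha_PL."""
    def integrand(ln_m):                  # integrate in ln M for stability
        m = np.exp(ln_m)
        return mass_fn(m, z) * A * vc_of_m(m, z)**B * m
    ln_lo = np.log(m_of_vc(vc_min, z))
    val, _ = quad(integrand, ln_lo, ln_lo + np.log(1e6))  # 1e6 ~ "infinity"
    return dr_dz(z) * val
```

Scanning $`v_{c,min}`$ and comparing the result with the observed $`n(z)`$ then yields the best-fit minimum circular velocities discussed below.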
It is comforting to note that the $`n(z,v_c)`$ curve for LCDM, which is closest to the cosmology of L64 and L128, does indeed mirror the simulated $`n(z,v_c)`$ shown in Figure 8. The cross-hatched region of Figure 10 denotes the $`1\sigma `$ range allowed observationally by Storrie-Lombardi & Wolfe (2000; DLA) and Storrie-Lombardi et al. (1994; LL). To match the DLA observations, contributions from halos with $`v_c>60\mathrm{km}\mathrm{s}^{-1}`$ are required. In the LL case, the minimum halo circular velocity is somewhat lower, more in the range of $`v_{c,min}\sim 40\mathrm{km}\mathrm{s}^{-1}`$, although given the results from Figure 7, the estimate of $`n(z,v_c)`$ may be depressed by $`\sim 25\%`$. Raising $`n(z,v_c)`$ by this amount would actually bring the minimum LL-harboring halo mass in line with the DLA estimate for most models. On the other hand, it is possible that LL absorbers could reside in halos of lower mass than DLAs.

The results show that although halos in models like LCDM generally have higher absorption cross sections, the increased number density of halos in the critical models tends to give them more total absorption than the subcritical models. Although their $`\sigma _8`$ values are close to that of SCDM (Table 1), and the growth factor reduction from $`z=0`$ to $`z=2`$–4 is smaller in subcritical models, OCDM and LCDM also have redder power spectra and thus less power on these relatively small scales. The difference in power spectrum shape is especially important at the low-mass end of the mass function.

## 5 Conclusions

We can divide our conclusions into two classes: those that rely only on the results of our simulations, and those that rely on our extrapolation of these results via the Press-Schechter method (with the Jenkins et al. 2001 mass function) to account for absorption by low mass halos. We will treat these two classes of conclusions in turn.

### 5.1 Simulation Results

Our five principal simulations resolve the formation of cold, dense gas concentrations in halos with $`v_c\geq v_{c,res}=140,`$ 160, $`180\mathrm{km}\mathrm{s}^{-1}`$ (89, 100, $`110\mathrm{km}\mathrm{s}^{-1}`$) at $`z=2,`$ 3, 4 in the critical density (subcritical) models. We employ two further simulations run in identical cosmologies but with a factor of eight difference in mass resolution to examine resolution effects. The lower resolution run, L64, has the same resolution as the five principal simulations and a $`v_{c,res}`$ equivalent to that of the subcritical (OCDM and LCDM) models. The higher resolution run, L128, has $`v_{c,res}=50\mathrm{km}\mathrm{s}^{-1}`$.

Our clearest conclusion is that absorption in halos above the circular velocity thresholds of the $`64^3`$ simulations cannot account for the observed incidence $`n(z)`$ of DLA or LL absorption or for the amount of cold, collapsed gas, $`\mathrm{\Omega }_{ccg}`$, in observed DLA systems, for any of our five cosmological models (Figures 1, 2, 5). Higher resolution simulations are unlikely to change this conclusion, since clumping of the gas on scales below our gravitational softening length would tend to reduce the absorption cross section rather than increase it, unless this small scale clumping could produce neutral condensations in the outskirts of halos where we predict the gas to be mostly ionized. The evidence from L128 indicates that there are no resolution effects above $`v_{c,res}`$ that affect DLA systems, although estimates of the LL incidence in halos with $`v_c\geq v_{c,res}`$ may be underestimated by 25% in the principal simulations.
Our models assume $`\mathrm{\Omega }_b=0.0125h^{-2}`$, and a higher baryon abundance (e.g. Burles & Tytler 1998ab) might increase the predicted absorption. We have investigated SCDM models with different $`\mathrm{\Omega }_b`$ values and find that higher $`\mathrm{\Omega }_b`$ leads to more absorption per halo, as expected, but even a model with $`\mathrm{\Omega }_b=0.03125h^{-2}=0.125`$ has too little absorption at this circular velocity threshold to match the observations. We will report further results from this study in a future paper. If any of these cosmological models is correct, then a substantial fraction of high redshift DLA absorption must arise in halos with $`v_c\lesssim 100`$–$`150\mathrm{km}\mathrm{s}^{-1}`$. This conclusion appears consistent with the imaging of DLA fields, which often reveals no large, bright galaxies near the line of sight (Fontana et al., 1996; Le Brun et al., 1997; Moller & Warren, 1998; Rao & Turnshek, 1998; Turnshek et al., 2000). However, it implies that the asymmetric metal-line profiles found by Prochaska & Wolfe (1997, 1998) must be interpreted as a signature of non-equilibrium dynamics (Haehnelt et al., 1998) rather than smooth rotation.

We find a clear and intuitively sensible relationship between high HI column density absorption and proximity to galaxies (Figure 3). Damped systems typically lie within 10-15 kpc of the center of a host galaxy at $`2\lesssim z\lesssim 4`$, while lower column densities near the Lyman limit regime typically occur farther from the host galaxy. All DLA and LL absorption in our simulation occurs within collapsed dark matter halos. If it were to occur outside halos in the actual Universe, it would have to be on size scales smaller than we resolve.

The stellar mass in our simulation is generally a steep function of time in the redshift range $`2<z<4`$, corresponding to a power law in $`z`$ (Figure 5). The mass in cold collapsed gas, however, remains relatively fixed, indicating that the rate at which gas is converted into stars is roughly equal to the rate at which new gas cools out of ionized halos and condenses into galaxies. This result is expected if the star formation rate is an increasing function of gas density, as it is in our numerical formulation (KWH).

In the first 3-d hydrodynamic study of high column density absorption, KWHM found that the predictions of $`n(z)`$ from their simulations of the SCDM model fell a factor of two short of the observed DLA abundance but a factor of ten short of the observed LL abundance. They speculated that the DLA shortfall could be made up by absorption in lower mass halos but that the LL shortfall might imply a distinct physical mechanism for the formation of LL systems, such as thermal instability on mass scales far below the simulation’s resolution limits (Mo & Miralda-Escudé, 1996). It appears, however, that at the resolution of the KWHM runs and the principal simulations presented here, LL absorption may not have numerically converged. If the resolution is increased by a factor of eight (as in the L128 run), LL absorption in halos over the same range in mass increases by 33%. This suggests the possibility that standard cosmological models can explain the observed LL systems with the physical processes that already occur in these simulations, albeit in halos somewhat below our current resolution limits. Thus LL and DLA absorption are closely related rather than physically distinct phenomena, with LL absorption arising preferentially at larger galactocentric distances and in less massive halos.
We find no evidence in the simulations for LL absorption outside of galaxy dark matter halos.

### 5.2 Absorption in Low Mass Halos

In our simulations, the halo absorption cross sections $`\alpha (v_c,z)`$ are determined by complex and competing physical processes. If we consider only halos that contain a single gas concentration, then the absorption cross section can actually decrease slightly with increasing circular velocity (solid points in Figure 7). However, more massive halos are more likely to contain multiple gas concentrations, with the net effect that $`\alpha (v_c,z)`$ increases with increasing $`v_c`$. At $`z=2`$, the model with the weakest mass fluctuations (LCDM) tends to have high $`\alpha (v_c)`$ (Figure 10). This fact, and the trend for single-absorber halos, imply that DLA and LL absorption cross sections are substantially affected by non-equilibrium dynamics: absorbers get smaller if they have time to cool and condense in a quiescent dark matter potential well. Although others have argued that this behavior may be a numerical artifact (cf. Maller et al. 2000), the agreement between our L64 and L128 runs and the appearance of the same trend in L128 suggest that it is not. Consequently, we suspect that this non-equilibrium behavior is a real feature of DLA and LL systems, possibly the geometric counterpart to the complex kinematic behavior found by Haehnelt et al. (1998).

The physical complexity of $`\alpha (v_c,z)`$ implies that an accurate, fully analytic description of high column density absorption in CDM models will be difficult to achieve. Even the simple expectation that more small scale power produces a higher incidence of DLA and LL absorption does not always hold. In the L64 and L128 runs, for which the DLA results agree well in the mass regime of overlap, the mean cross-section for DLA absorption is $`\alpha \approx \pi (0.29R_{vir})^2`$, much larger than the simple estimate $`\alpha \approx \pi (0.1R_{vir})^2`$ based on collapse of the baryons to a centrifugally supported disk. For LL absorption, where we find that absorption in equal-mass halos is 25% lower in L64 than in L128 (which has a factor of eight finer mass resolution), the cross sections in L128 are described by $`\alpha \approx \pi (0.63R_{vir})^2`$.

To estimate the amount of absorption in halos below the resolution limits of our simulations, we adopted a procedure similar to that of GKHW, using the numerical results to calibrate $`\alpha (v_c,z)`$ and the Jenkins et al. (2001) mass function to compute the halo abundance. However, relative to GKHW we employed a much more conservative estimate of $`v_{c,res}`$ and an improved error estimation procedure based on bootstrap analysis instead of Poisson errors. These changes lead to superior $`\alpha (v_c,z)`$ fits that generally increase the predicted amount of absorption in halos with $`v_c<v_{c,res}`$. Our new results for $`n(z)`$ in the SCDM model supersede those of GKHW, since our new procedures are certainly an improvement, and our results for $`n(z)`$ in other models supersede those of GKWH, since in addition to these technical improvements we now have numerical simulations of these other models to constrain $`\alpha (v_c,z)`$ for $`v_c\geq v_{c,res}`$. The bootstrap procedure yields believable statistical uncertainties in the $`n(z,v_c)`$ predictions. Taking our results and error estimates at face value, we find that four of the cosmological models that we consider are compatible with observational estimates of the incidence of DLA and LL absorption at $`z=2,`$ 3, and 4.
What hinders us from better quantifying the total incidence in the Universe is the uncertainty in our estimate of $`v_{c,min}`$, the circular velocity at which halos cease to harbor high column density systems. Previous studies (QKE; Thoul & Weinberg 1996) have found this cutoff to be approximately $`40\mathrm{km}\mathrm{s}^{-1}`$. However, due to the extreme number density of halos at this mass, a slight uncertainty in this cutoff results in huge uncertainties in the estimated total incidence of DLA and LL systems. Instead we determine, for each cosmology, the value of $`v_{c,min}`$ that best matches the $`n(z)`$ observations. Reproducing the data of Storrie-Lombardi & Wolfe (2000) and Storrie-Lombardi et al. (1994) requires $`v_{c,min}\approx 60\mathrm{km}\mathrm{s}^{-1}`$ for DLA systems and $`v_{c,min}\approx 40\mathrm{km}\mathrm{s}^{-1}`$ for LL systems, with some dependence on cosmology and redshift (see Fig. 10). Since the DLA values of $`v_{c,min}`$ are above the expected threshold caused by photoionization, there is some risk that all of these models would predict too much DLA absorption in simulations that fully resolved the population of absorbing systems. A model with somewhat less small scale power, such as the lower amplitude LCDM model favored by recent Ly$`\alpha `$ forest studies (McDonald et al., 2000; Croft et al., 2001), might fare better in this regard, perhaps matching the observed DLA abundance with a $`v_{c,min}`$ closer to the expected photoionization value. We are unable to make predictions for the total absorption in the TCDM model with our current simulations because the paucity of structure above our resolution threshold makes our extrapolation procedure unreliable.

Our current simulations provide a number of insights into the physics of DLA and LL absorption in halos with $`v_c\gtrsim 100\mathrm{km}\mathrm{s}^{-1}`$. Unfortunately, they also imply that robust numerical predictions of the incidence of high-redshift DLA and LL absorption will require simulations that resolve gas dynamics and cooling in halos with $`v_c\approx 30`$–$`100\mathrm{km}\mathrm{s}^{-1}`$, where our analytic modeling predicts a large fraction of the high column density absorption to occur. Simulations that resolve such halos exist (e.g. QKE; Navarro & Steinmetz 1997), but they do not yet model large enough volumes to predict statistical quantities like $`n(z)`$. Achieving the necessary combination of resolution and volume is challenging but within reach of current computational techniques. Simulations that meet these requirements will also teach us a great deal about the internal structure of more massive DLA and LL systems and about the connection between these systems and the population of high redshift galaxies.

We thank Eric Linder for useful discussions. This work was supported by NASA Astrophysical Theory Grants NAG5-3922, NAG5-3820, and NAG5-3111, by NASA Long-Term Space Astrophysics Grant NAG5-3525, and by the NSF under grants ASC93-18185, ACI96-19019, and AST-9802568. Gardner was supported under NASA Grant NGT5-50078 and NSF Award DGE-0074228 for the duration of this work. The simulations were performed at the San Diego Supercomputer Center.
# 1 Integrability and low-energy effective actions for N=2 SUSY gauge theories

## 1 Integrability and low-energy effective actions for N=2 SUSY gauge theories

The description of the strong coupling regime in quantum field theory remains a challenging problem, and the main hope is connected with the discovery of new proper degrees of freedom which would provide a perturbative expansion distinct from the initial one. The first successful derivation of the low energy effective action in N=2 SUSY Yang-Mills theory clearly shows that the solution of the theory involves ingredients not previously familiar in this context, such as Riemann surfaces and meromorphic differentials on them . The general structure of the effective actions is defined by symmetry arguments; in particular, they should respect the Ward identities coming from the bare field theory. For example, the chiral symmetry fixes the chiral Lagrangian in QCD, and the conformal symmetry provides the dilaton effective actions in N=0 and N=1 YM theories. Since the effective actions have a symmetry origin, one can expect universality properties, and generically different UV theories can flow to the same IR ones.

It is the symmetry origin of the effective actions that leads to the appearance of integrable systems on the scene. The point is that the phase spaces of the integrable systems coincide with some moduli space or the cotangent bundle to a moduli space. We can mention the KdV hierarchy, related to the moduli of complex structures of Riemann surfaces; the Toda lattice, related to the moduli of flat connections; or Hitchin-like systems, connected with the moduli of holomorphic vector bundles. In any case, moduli spaces come from some additional symmetry of the problem.

Identification of the variables in the integrable system responsible for some effective action is a complicated problem. At the moment there is no universal way to introduce the proper variables in theories which are not topological, but there is some experience in 2d theories which suggests identifying the nonperturbative transition amplitudes among the vacuum states as the dynamical variables. As for the “space-time” variables, coupling constants and sources are the most promising candidates. It is expected that the partition function evaluated in the low-energy effective theory is the so-called $`\tau `$-function of the integrable hierarchy, which is the generating function for the conserved integrals of motion. The particular solution of the equations of motion in the dynamical system is selected by applying the Ward identities to the partition function of the effective theory.

The arguments above explain the reason for the search for integrable structures behind the Seiberg-Witten solution of N=2 SUSY Yang-Mills theory. These integrable structures, which capture the hidden symmetry of the problem, have been found in , where it was shown that the $`A_{N_c}`$ affine Toda chain governs the low energy effective action and BPS spectrum of pure N=2 SYM theory. The generalization to the theories with matter involves the Calogero-Moser integrable system for the adjoint matter and the XXX spin chain for the fundamental matter . In five dimensions the relativistic Toda chain appears to be relevant for the pure gauge theory, while the anisotropic XXZ chain describes SQCD . At the next step, the completely anisotropic XYZ chain has been suggested as a guide for 6d SQCD, while the generalization to the group product case is described by higher spin magnets .
The candidate system for the 6d theory with adjoint matter, based on the geometry of an elliptically fibered K3 manifold, was suggested in (see also ). Therefore there is little doubt about the validity of the mapping between low-energy effective theories and finite-dimensional integrable systems. The list of correspondences between the two seemingly different subjects looks as follows. The solution to the classical equations of motion in the integrable system can be expressed in terms of a higher-genus Riemann surface, which can be mapped to the complex Liouville tori of the dynamical system. It is this Riemann surface that enters the Seiberg-Witten solution, and the meromorphic differential introduced to formulate the solution coincides with the action differential of the dynamical system in the separated variables. The Coulomb moduli space in N=2 theories is identified with the space of integrals of motion of the dynamical system; for example $`Tr\varphi ^2`$, where $`\varphi `$ is the adjoint scalar field, coincides with the Hamiltonian of the periodic Toda system. The parameters of the field theory, such as the masses or $`\mathrm{\Lambda }_{QCD}`$, determine the parameters and couplings of the integrable system. For instance, in SQCD the fundamental masses provide the local Casimirs of the periodic spin chains.
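To make one entry of this dictionary concrete, the following display is a schematic reminder of the pure $`SU(N_c)`$ case in one common normalization (the normalization is our editorial choice, not a statement from the text): the periodic Toda Hamiltonian on the integrable side and the Seiberg-Witten curve on the gauge side,
$$H_{\mathrm{Toda}}=\sum _{i=1}^{N_c}\frac{p_i^2}{2}+\mathrm{\Lambda }^2\sum _{i=1}^{N_c}e^{q_i-q_{i+1}},\qquad q_{i+N_c}\equiv q_i,$$
$$y^2=P_{N_c}(x)^2-\mathrm{\Lambda }^{2N_c},\qquad P_{N_c}(x)=\mathrm{det}(x-\varphi ),$$
with the coefficient of $`x^{N_c-2}`$ in $`P_{N_c}`$ proportional to $`Tr\varphi ^2`$, so the quadratic Casimir matches the Toda Hamiltonian up to normalization, while $`\mathrm{\Lambda }_{QCD}`$ sets the Toda coupling.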
In spite of the many supporting facts, it is necessary to find a more transparent explanation of the origin of integrability in this context. To this aim let us discuss the moduli spaces in the problem at hand. Classically there is only the Coulomb branch of the moduli space in the pure gauge theory, so one could expect a dynamical system associated with such a phase space. The Coulomb branch can be considered as a special Kähler manifold, while the Hitchin-like dynamical system responsible for the model has a hyper-Kähler phase space . The resolution of this contradiction comes from a hidden Higgs-like branch which has a purely nonperturbative nature . It is the dynamical system on this hidden phase space that provides the integrable system of Hitchin or spin-chain type. Therefore there are two moduli spaces in our problem, and one expects a pair of dynamical systems. This is what we have indeed: the dynamical system on the Higgs branch yields the Hitchin-like dynamics with the associated Riemann surfaces, while the integrable system on the Coulomb branch gives rise to the Whitham dynamics. The "physical" meaning of the Hitchin system is to incorporate the nonperturbative instanton-like contributions to the effective action in a supersymmetric way, while the Whitham dynamics is nothing but the RG flow of the model (the latest developments within the Whitham approach, as well as a list of references, can be found in ).

The next evident question is about the degrees of freedom in both dynamical systems. The claim is that all degrees of freedom can be identified with the collective coordinates of a particular brane configuration. First let us explain where the Higgs branch comes from. The basic illustrative example for the derivation of a hyper-Kähler moduli space in terms of branes is the description of the ADHM data as the moduli of a system of coupled D1-D5 or D0-D4 branes . If the gauge fields are independent of one of the dimensions, one derives the Nahm description of the monopole moduli space in terms of a D1-D3 brane configuration . The transition from the ADHM data to the Nahm data can be treated as a T-duality transformation. At the next step, the hyper-Kähler Hitchin space can be obtained by reducing the dependence on one more dimension (or by an additional T-duality transformation). This corresponds to a system of D2 branes wrapped around some surface $`\mathrm{\Sigma }`$ holomorphically embedded in some manifold. The most relevant example concerns $`T^2`$ embedded into a K3 manifold . T-duality along the torus transforms it to a system of D0 branes on the dual torus, which is the picture closest to the Toda dynamics in terms of D0 branes. A related discussion of the derivation of the Hitchin spaces in terms of instantons on $`R^2\times T^2`$ can be found in .

Let us now proceed to the explicit brane picture for the N=2 theories. There are different ways to obtain it: one involves 10d string theory compactified on a manifold containing the Toda-chain spectral curve , another the M theory with an M5 brane wrapped around the noncompact surface which can be obtained from the spectral curve by deleting a finite number of points . This picture can be considered as the perturbative one, and nonperturbative degrees of freedom have to be added. For this purpose it is useful to consider the IIA projection of the M theory, which involves $`N_c`$ D4 branes stretched between two NS5 branes located at a distance $`\frac{l_s}{g^2}`$ apart along, say, the $`x_6`$ direction. The field theory is defined on the D4 brane worldvolume, and an extensive review concerning the derivation of field theories from branes can be found in . The additional ingredient yielding the hidden Higgs branch comes from a set of $`N_c`$ D0 branes, one per D4 brane . It is known that a D0 brane on a D4 brane behaves as an abelian point-like instanton, but here we have a system of interacting D0 branes. The coupling constant is provided by the $`\mathrm{\Lambda }_{QCD}`$ parameter, which can be most naturally obtained from the mass of the adjoint scalar breaking N=4 to N=2 via a dimensional transmutation procedure.

One way to explain the need for the additional D0 branes in IIA theory, or KK modes in M theory, looks as follows. It is known that any finite-dimensional integrable system with a spectral parameter allows a canonical transformation to the variables (spectral curve, linear bundle). The role of the spectral curve is transparent, and the KK modes provide the linear bundle. As we have already noted, they are responsible for the nonperturbative contributions, but the summation of the infinite instanton sums into a finite number of degrees of freedom remains a challenging problem. It is worth noting that both canonical coordinates in the dynamical system come from the coordinates of the D0 branes in different dimensions. The necessity for the additional nonperturbative degrees of freedom has also been discussed in .

To show how objects familiar in the integrability world translate into the brane language, consider two examples. First let us consider the equations of motion of the Toda chain, which have the Lax form
$$\frac{dT}{ds}=[T,A]$$ (1)
with some $`N_c\times N_c`$ matrices T and A. The Lax matrix T can be related to the Nahm matrix for a chain of monopoles using the identification of the spectral curves for the cyclic monopole configuration and the periodic Toda chain .
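As a quick sanity check of the key property of the Lax form (1), isospectral evolution, the following minimal Python sketch (our illustration, not from the paper; it assumes one standard Flaschka-type parameterization of the periodic Toda Lax matrix at spectral parameter $`z=1`$) integrates a 3-site periodic Toda chain and verifies that the eigenvalues of T are constants of motion:

```python
# Check that the periodic Toda flow is isospectral: the eigenvalues of
# the Lax matrix are constants of motion. Flaschka variables
# a_i = (1/2) exp((q_i - q_{i+1})/2), b_i = -p_i/2 are one common choice.
import numpy as np
from scipy.integrate import solve_ivp

N = 3

def hamilton(t, y):
    # H = sum p_i^2/2 + sum exp(q_i - q_{i+1}), indices mod N
    q, p = y[:N], y[N:]
    dq = p
    dp = np.exp(np.roll(q, 1) - q) - np.exp(q - np.roll(q, -1))
    return np.concatenate([dq, dp])

def lax_matrix(q, p):
    a = 0.5 * np.exp((q - np.roll(q, -1)) / 2)
    L = np.diag(-0.5 * p)
    for i in range(N - 1):
        L[i, i + 1] = L[i + 1, i] = a[i]
    L[0, N - 1] = L[N - 1, 0] = a[N - 1]   # periodic corner entries (z = 1)
    return L

y0 = np.random.default_rng(0).normal(size=2 * N)
sol = solve_ivp(hamilton, (0, 10), y0, rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0, 10, 5))
for t, y in zip(sol.t, sol.y.T):
    ev = np.sort(np.linalg.eigvalsh(lax_matrix(y[:N], y[N:])))
    print(f"t={t:5.2f}  spectrum of T: {np.round(ev, 8)}")
```

The printed spectra coincide to the accuracy of the integrator, which is precisely the statement that the integrals of motion are encoded in the spectral curve $`\mathrm{det}(T(z)-\lambda )=0`$.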
The identifications above result in the following expression for the Toda Lax operator in terms of the Nahm matrices $`T_i`$
$$\begin{array}{c}T=T_1+iT_2-2iT_3\rho +(T_1-iT_2)\rho ^2\end{array}$$ (2)
$$\begin{array}{c}T_1=\frac{i}{2}\sum _jq_j(E_{+j}+E_{-j})\\ T_2=\sum _jq_j(E_{+j}-E_{-j})\\ T_3=\frac{i}{2}\sum _jp_jH_j,\end{array}$$ (3)
where E and H are the standard SU(N) generators, $`p_i,q_i`$ represent the Toda phase space, and $`\rho `$ is the coordinate on an auxiliary $`CP^1`$. This $`CP^1`$ is involved in the twistor construction for monopoles, and a point on $`CP^1`$ defines the complex structure on the monopole moduli space. With these definitions the Toda equations of motion and the Nahm equations acquire the simple form
$$\begin{array}{c}\frac{dT}{dt}=[T,A]\end{array}$$ (4)
with fixed A. Having in mind the brane interpretation of the Nahm data, we can claim that the equations of motion provide the conditions for the required supersymmetry of the whole configuration.

As another example of the validity of the brane-integrability correspondence, let us mention the possibility of incorporating fundamental matter in the gauge theory via branes in two ways. The first involves semi-infinite D4 branes, while the second involves a set of $`N_f`$ D6 branes. One can expect two different integrable systems behind them, and they were found in and . It was shown in that they perfectly correspond to the brane pictures, and it appears that the equivalence of the two representations agrees with a duality property of the dynamical system. Recently one more ingredient of the brane approach, the orientifold, has been recognized within the integrability approach . To conclude the discussion of the many-body dynamical systems, let us mention that one can invert the logic and use the possible integrable deformations of the dynamical system to construct their field theory counterparts. Along this line of reasoning we can expect some unusual field theories with several $`\mathrm{\Lambda }`$-type scales .

## 2 Dualities in integrable systems and SYM theories

We are going to study the phenomenon of duality, whose precise definition is presented shortly. Duality is a subject of much recent investigation in the context of (supersymmetric) gauge theories, in which case the duality is an involution which maps the observables of one theory to those of another. The duality is powerful when the coupling constant in one theory is the inverse of that in the other (or, more generally, when small coupling is mapped to strong coupling). For example, a weakly coupled (magnetic) theory can be dual to a strongly coupled (electric) theory, thus making it possible to understand the strong-coupling behavior of the latter. In particular, it was shown that using the concept of duality one can find the exact low-energy Lagrangian of $`𝒩=2`$, $`d=4`$ $`SU(2)`$ gauge theory. A more fascinating recent development is that the duality connecting weak- and strong-coupling regimes of one or of different theories may have a geometric origin. The most famous example of this is provided by $`M`$-theory. Having in mind the relation between many-body systems and effective actions in SYM theories, it is natural to look for the corresponding dualities within the integrability approach. Both dynamical systems and gauge theories benefit from the establishment of this correspondence, which was formulated in . The brane picture for the Hitchin-like systems presented above plays an important role in the derivation of the proper degrees of freedom.
### 2.1 T duality and separation of variables

There are three essentially different dualities which manifest themselves in dynamical systems of the Hitchin type. Let us start with the analogue of T-duality in the Hitchin-like systems . It appears that the proper analogue of T-duality can be identified with the separation of variables in the dynamical systems.

A way of solving a problem with many degrees of freedom is to reduce it to a problem with a smaller number of degrees of freedom. The solvable models allow one to reduce the original system with $`N`$ degrees of freedom to $`N`$ systems with $`1`$ degree of freedom, which reduce to quadratures. This approach is called separation of variables (SoV). Recently, E. Sklyanin formulated a "magic recipe" for the SoV in a large class of quantum integrable models with a Lax representation . In the classical case the method reduces to the technique of separation of variables using the poles of the Baker-Akhiezer function (see also for recent developments and more references). The basic strategy of this method is to look at the Lax eigenvector (which is the Baker-Akhiezer function) $`\mathrm{\Psi }(z,\lambda )`$:
$$L(z)\mathrm{\Psi }(z,\lambda )=\lambda (z)\mathrm{\Psi }(z,\lambda )$$ (5)
with some choice of normalization. The poles $`z_i`$ of $`\mathrm{\Psi }(z,\lambda )`$ together with the eigenvalues $`\lambda _i=\lambda (z_i)`$ are the separated variables. In all the examples studied so far, the most naive choice of normalization leads to canonically conjugate coordinates $`\lambda _i,z_i`$.

Recall that the phase space of the Hitchin system can be identified with the cotangent bundle $`T^{}M`$ to the moduli space of holomorphic vector bundles on the surface $`\mathrm{\Sigma }`$. The following symplectomorphisms can be identified with the separation of variables procedure. The phase space above allows two more formulations: as the pair $`(C,\mathcal{L})`$, where C is the spectral curve of the dynamical system and $`\mathcal{L}`$ is the linear bundle on it, or as the Hilbert scheme of points on $`T^{}\mathrm{\Sigma }`$, where the number of points follows from the rank of the gauge group. It is the last formulation that provides the separated variables. The role of Hilbert schemes on $`T^{}\mathrm{\Sigma }`$ in the context of the Hitchin system was established for surfaces without marked points in and generalized to systems of Calogero type in . In brane terms, separation of variables can be formulated as a reduction to a system of D0 branes on some four-dimensional manifold. It resembles a reduction to a system of point-like instantons on a (generically noncommutative) four-manifold. One more essential point is that the separated variables provide some explanation of the relation between the periodic Toda chain above and monopole chains. Indeed, the monopole moduli space has a structure resembling that of the Toda chain in separated variables: both are Hilbert schemes of points on similar four-manifolds.

The abovementioned constructions of the separation of variables in integrable systems on moduli spaces of holomorphic bundles with some additional structures can be described as a symplectomorphism between the moduli spaces of bundles (more precisely, torsion-free sheaves) having different Chern classes. To be specific, let us concentrate on the moduli space $`\mathcal{M}_\stackrel{}{v}`$ of stable torsion-free coherent sheaves $`\mathcal{F}`$ on $`S`$. Let $`\widehat{A}_S=1+2[\mathrm{pt}]\in H^{}(S,Z)`$ be the $`A`$-roof genus of $`S`$, so that $`\sqrt{\widehat{A}_S}=1+[\mathrm{pt}]`$.
The vector $`\stackrel{}{v}=Ch(\mathcal{F})\sqrt{\widehat{A}_S}=(r;\stackrel{}{w};d+r)\in H^{}(S,Z)`$, $`\stackrel{}{w}\in \mathrm{\Gamma }^{3,19}`$, corresponds to the sheaves with the Chern numbers:
$$ch_0(\mathcal{F})=r\in H^0(S;Z)$$ (6)
$$ch_1(\mathcal{F})=\stackrel{}{w}\in H^2(S;Z)$$ (7)
$$ch_2(\mathcal{F})=d\in H^4(S;Z)$$ (8)
Type $`IIA`$ string theory compactified on $`S`$ has BPS states, corresponding to the $`Dp`$-branes, with $`p`$ even, wrapping various supersymmetric cycles in $`S`$, labelled by $`\stackrel{}{v}\in H^{}(S,Z)`$. The actual states correspond to the cohomology classes of the moduli spaces $`\mathcal{M}_\stackrel{}{v}`$ of the configurations of branes. The latter can be identified with the moduli spaces $`\mathcal{M}_\stackrel{}{v}`$ of appropriate sheaves. The string theory compactified on $`S`$ has a moduli space of vacua, which can be identified with
$$\mathcal{M}_A=O\left(\mathrm{\Gamma }^{4,20}\right)\backslash O(4,20;R)/O(4;R)\times O(20;R)$$
where the arithmetic group $`O(\mathrm{\Gamma }^{4,20})`$ is the group of discrete automorphisms. It maps the states corresponding to different $`\stackrel{}{v}`$ to each other. The only invariant of its action is $`\stackrel{}{v}^2`$.

We have studied three realizations of an integrable system. The first one uses the non-abelian gauge fields on the curve $`\mathrm{\Sigma }`$ embedded into the symplectic surface $`S`$. Namely, the phase space of the system is the moduli space of stable pairs $`(\mathcal{E},\varphi )`$, where $`\mathcal{E}`$ is a rank $`r`$ vector bundle over $`\mathrm{\Sigma }`$ of degree $`l`$, while $`\varphi `$ is a holomorphic section of $`\omega _\mathrm{\Sigma }\otimes \mathrm{End}(\mathcal{E})`$. The second realization is the moduli space of pairs $`(C,\mathcal{L})`$, where $`C`$ is the curve (divisor) in $`S`$ which realizes the homology class $`r[\mathrm{\Sigma }]`$ and $`\mathcal{L}`$ is a line bundle on $`C`$. The third realization is the Hilbert scheme of points on $`S`$ of length $`h`$, where $`h=\frac{1}{2}\mathrm{dim}\mathcal{M}_\stackrel{}{v}`$. The equivalence of the first and second realizations corresponds to the physical statement that the bound states of $`N`$ $`D2`$-branes wrapped around $`\mathrm{\Sigma }`$ are represented by a single $`D2`$-brane which wraps a holomorphic curve $`C`$ that is an $`N`$-sheeted covering of the base curve $`\mathrm{\Sigma }`$. The equivalence of the second and third descriptions is natural to attribute to $`T`$-duality.

Let us mention that the separation of variables above provides some insight into the Langlands duality, which involves the spectrum of the Hitchin Hamiltonians. The attempt to reformulate the Langlands duality as a quantum separation of variables has been successful for the Gaudin system corresponding to the spherical case . The considerations in suggest that the proper classical version of the Langlands correspondence is the transition to the Hilbert scheme of points on a four-dimensional manifold. This viewpoint implies that the quantum case can be considered as a correspondence between the eigenfunctions of the Hitchin Hamiltonians and solutions to the Baxter equation in the separated variables.

### 2.2 S-duality

Now let us explain that S-duality, well established in field theory, also has a clear counterpart in the holomorphic dynamical system. The action variables of the dynamical system are the integrals of the meromorphic differential $`\lambda `$ over the $`A`$-cycles on the spectral curve. The reason for the $`B`$-cycles to be discarded is simply the fact that the $`B`$-periods of $`\lambda `$ are not independent of the $`A`$-periods.
On the other hand, one can choose as the independent periods the integrals of $`\lambda `$ over any Lagrangian subspace in $`H_1(𝐓_b;Z)`$. This leads to the following structure of the action variables in the holomorphic setting. Locally, over a disc in $`B`$, one chooses a basis in $`H_1`$ of the fiber together with the set of $`A`$-cycles. This choice may differ over another disc. Over the intersection of these discs one has an $`Sp(2m,Z)`$ transformation relating the bases. Altogether they form an $`Sp(2m,Z)`$ bundle. It is an easy exercise on the properties of the period matrix to show that the two-form
$$dI^i\wedge dI_i^D$$ (9)
vanishes. Therefore one can always locally find a function $`\mathcal{F}`$, the prepotential, such that:
$$I_i^D=\frac{\partial \mathcal{F}}{\partial I^i}$$ (10)
The angle variables are uniquely reconstructed once the action variables are known. To illustrate the meaning of the action-action (AA) duality we look at the two-body system relevant for the $`SU(2)`$ $`𝒩=2`$ supersymmetric gauge theory :
$$H=\frac{p^2}{2}+\mathrm{\Lambda }^2\mathrm{cos}(q)$$ (11)
with $`\mathrm{\Lambda }^2`$ being a complex number, the coupling constant of the two-body problem and at the same time the dynamically generated scale of the gauge theory. The action variable is given by one of the periods of the differential $`pdq`$. Let us introduce more notation: $`x=\mathrm{cos}(q)`$, $`y=\frac{p\mathrm{sin}(q)}{\sqrt{2}\mathrm{\Lambda }}`$, $`u=\frac{H}{\mathrm{\Lambda }^2}`$. Then the spectral curve associated to the system, which is also a level set of the Hamiltonian, can be written as follows:
$$y^2=(x-u)(x^2-1)$$ (12)
which is exactly the Seiberg-Witten curve . The periods are:
$$I=\int _{-1}^1\sqrt{\frac{x-u}{x^2-1}}dx,\qquad I^D=\int _1^u\sqrt{\frac{x-u}{x^2-1}}dx$$ (13)
They obey the Picard-Fuchs equation:
$$\left(\frac{d^2}{du^2}+\frac{1}{4(u^2-1)}\right)\left(\begin{array}{c}I\\ I^D\end{array}\right)=0$$
which can be used to write down an asymptotic expansion of the action variable near $`u=\mathrm{\infty }`$ or $`u=\pm 1`$, as well as that of the prepotential .

The AA duality is manifested in the fact that near $`u=\mathrm{\infty }`$ (which corresponds to high-energy scattering in the two-body problem and also to the perturbative regime of the $`SU(2)`$ gauge theory) the appropriate action variable is $`I`$ (it experiences a monodromy $`I\to -I`$ as $`u`$ goes around $`\mathrm{\infty }`$), while near $`u=1`$ (which corresponds to the dynamics of the two-body system near the top of the potential and to the strongly coupled $`SU(2)`$ gauge theory) the appropriate variable is $`I^D`$ (which corresponds to a weakly coupled magnetic $`U(1)`$ gauge theory and is actually well defined near the $`u=1`$ point) . The monodromy-invariant combination of the periods,
$$II^D-2\mathcal{F}=u$$ (14)
(whose origin is in the periods of Calabi-Yau manifolds on the one hand and in the properties of the anomaly in the theory on the other), can be chosen as a global coordinate on the space of integrals of motion . At $`u\to \mathrm{\infty }`$ the prepotential has an expansion of the form:
$$\mathcal{F}\sim \frac{1}{2}u\mathrm{log}u+\mathrm{}\sim I^2\mathrm{log}I+\sum _n\frac{f_n}{n}I^{2-4n}$$
Let us emphasize that S-duality maps the dynamical system to itself. We have seen that the notion of the prepotential can be introduced for any holomorphic many-body system; however, its physical meaning as well as its properties deserve further investigation.
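As a concrete check of the relations above, the following short Python sketch (our illustration, not part of the original text) evaluates the period $`I(u)`$ of (13) numerically for real $`u>1`$, where the substitution $`x=\mathrm{sin}t`$ removes the endpoint singularities, and verifies the Picard-Fuchs equation by finite differences:

```python
# Verify numerically that I(u) = int_{-1}^{1} sqrt((x-u)/(x^2-1)) dx
# satisfies the Picard-Fuchs equation I''(u) + I(u)/(4(u^2-1)) = 0.
import numpy as np
from scipy.integrate import quad

def period_I(u):
    # substitute x = sin(t): I(u) = int_{-pi/2}^{pi/2} sqrt(u - sin t) dt
    val, _ = quad(lambda t: np.sqrt(u - np.sin(t)), -np.pi / 2, np.pi / 2,
                  epsabs=1e-12, epsrel=1e-12)
    return val

def pf_residual(u, h=0.05):
    d2I = (period_I(u + h) - 2 * period_I(u) + period_I(u - h)) / h**2
    return d2I + period_I(u) / (4 * (u**2 - 1))

for u in (2.0, 5.0, 10.0):
    print(f"u = {u:5.1f}   I(u) = {period_I(u):10.6f}   PF residual = {pf_residual(u):.1e}")
# The residual is O(h^2), i.e. zero to the accuracy of the finite differences.
```

The dual period $`I^D`$ can be treated in the same way after deforming the contour into the complex $`x`$-plane.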
### 2.3 "Mirror" symmetry in dynamical systems

The last type of duality we would like to discuss concerns dualities between pairs of dynamical systems . To start with, let us recall how this symmetry is formulated within field theory. The initial motivation comes from the 3d theory example, where mirror symmetry interchanges the Coulomb and Higgs branches of the moduli space. The specific feature of three dimensions is that both the Coulomb and Higgs branches are hyper-Kähler manifolds, and the mirror symmetry can be formulated as a kind of hyper-Kähler rotation. An attempt to formulate a similar symmetry for the 4d theory was made in . In , the general procedure for the analogous symmetry within integrable systems was formulated in terms of Hamiltonian and Poissonian reductions. The symmetry maps one dynamical system with coordinates $`x_i`$ to another one whose coordinates coincide with the action variables of the initial system, and vice versa. It appears that, taking into account the relation between dynamical systems and low-energy effective actions, this duality generically maps the Higgs and Coulomb branches of the moduli spaces of gauge theories in different dimensions.

Qualitatively this symmetry is even more transparent in terms of the separated variables. As was discussed above, the proper object in separated variables is the hyper-Kähler four-dimensional manifold which provides the phase space. In the most general situation this manifold involves two tori, or an elliptically fibered K3 manifold. One torus provides the momenta, while the second provides the coordinates. The duality at hand actually interchanges the momentum and coordinate tori, and in the generic case self-duality is expected. The corresponding field theory counterpart is the hypothetical six-dimensional theory with adjoint matter. All other cases correspond to some degeneration. Degeneration of the momentum torus to $`C/Z_2`$ corresponds to the transition to the five-dimensional theory, while degeneration to $`R^2`$ corresponds to the four-dimensional theory. Since the modulus of the coordinate torus has the meaning of the complexified bare coupling of the theory, the interpretation of the degeneration of the coordinate torus is different: degeneration to the cylinder corresponds to switching off the instanton effects, while the rational degeneration corresponds to a further degeneration. We will see below that the "mirror" symmetry maps theories in different dimensions to each other: instanton effects in one theory "map" into the additional compact dimension of the dual counterpart.

We will discuss mainly the classical case, with only a few comments on the quantum picture. Since the wave functions in the Hitchin-like systems can be identified with certain solutions to the KZ or qKZ equations, the quantum duality would mean some relation between solutions to the rational, trigonometric or elliptic KZ equations. Recently the proper symmetries for the KZ equations were discussed in .

#### 2.3.1 Two-body system (SU(2))

Let us first discuss the two-body system corresponding to the SU(2) case. The two-particle systems which we are going to consider reduce (after exclusion of the center-of-mass motion) to a one-dimensional problem. The action-angle variables can be written explicitly, and the dual system emerges immediately once the natural Hamiltonians are chosen. The problem is the following. Suppose the phase space is coordinatized by $`(p,q)`$. The dual Hamiltonian (in the sense of action-coordinate (AC) duality) is a function of $`q`$ expressed in terms of $`I,\phi `$, where $`I,\phi `$ are the action-angle variables of the original system: $`H_D(I,\phi )=H_D(q)`$. In all the cases below there is a natural choice of $`H_D(q)`$.
Consider as an example the elliptic Calogero model, whose Hamiltonian is:
$$H(p,q)=\frac{p^2}{2}+\nu ^2\wp _\tau (q).$$ (15)
Here $`p,q`$ are complex and $`\wp _\tau (q)`$ is the Weierstrass function on the elliptic curve $`E_\tau `$:
$$\wp _\tau (q)=\frac{1}{q^2}+\sum _{\stackrel{(m,n)\in Z^2}{(m,n)\ne (0,0)}}\left[\frac{1}{(q+m\pi +n\tau \pi )^2}-\frac{1}{(m\pi +n\tau \pi )^2}\right]$$ (16)
Let us introduce the Weierstrass notation: $`x=\wp _\tau (q)`$, $`y=\wp _\tau ^{}(q)`$. We have an equation defining the curve $`E_\tau `$:
$$y^2=4x^3-g_2(\tau )x-g_3(\tau )=4\prod _{i=1}^3(x-e_i),\qquad \sum _{i=1}^3e_i=0$$ (17)
The holomorphic differential $`dq`$ on $`E_\tau `$ equals $`dq=dx/y`$. Introduce the variable $`e_0=2E/\nu ^2`$. The action variable is one of the periods of the differential $`\frac{pdq}{2\pi }`$ on the curve $`E=H(p,q)`$:
$$I=\frac{1}{2\pi }\oint _A\sqrt{2(E-\nu ^2\wp _\tau (q))}dq=\frac{1}{4\pi i}\oint _A\frac{dx\sqrt{x-e_0}}{\sqrt{(x-e_1)(x-e_2)(x-e_3)}}$$ (18)
The angle variable can be determined from the condition $`dp\wedge dq=dI\wedge d\phi `$:
$$d\phi =\frac{1}{2iT(E)}\frac{dx}{\sqrt{\prod _{i=0}^3(x-e_i)}}$$ (19)
where $`T(E)`$ normalizes $`d\phi `$ in such a way that the $`A`$-period of $`d\phi `$ is equal to $`2\pi `$:
$$T(E)=\frac{1}{4\pi i}\oint _A\frac{dx}{\sqrt{\prod _{i=0}^3(x-e_i)}}$$ (20)
Thus:
$$2iT(E)d\phi =\frac{dx}{\sqrt{4\prod _{i=0}^3(x-e_i)}}$$ (21)
$$\omega d\phi =\frac{dt}{\sqrt{4\prod _{i=1}^3(t-t_i)}}$$ (22)
where
$$\omega =2iT(E)\sqrt{e_{01}e_{02}e_{03}}=\frac{1}{2\pi }\oint _A\frac{dt}{\sqrt{4\prod _{i=1}^3(t-t_i)}}$$ (23)
$$t=\frac{1}{x-e_0}+\frac{1}{3}\sum _{i=1}^3\frac{1}{e_{0i}};\qquad t_i=\frac{1}{3}\sum _{j=1}^3\frac{e_{ji}}{e_{0i}e_{0j}}$$ (24)
where $`e_{ij}=e_i-e_j`$. Introduce a meromorphic function on $`E_\tau `$:
$$\widehat{cn}_\tau (z)=\sqrt{\frac{x-e_1}{x-e_3}}$$ (25)
where $`z`$ has periods $`2\pi `$ and $`2\pi \tau `$. It is an elliptic analogue of the cosine (in fact, up to a rescaling of $`z`$, it coincides with the Jacobi elliptic cosine). Then we have:
$$H_D(I,\phi )=\widehat{cn}_\tau (z)=\widehat{cn}_{\tau _E}(\phi )\sqrt{1-\frac{\nu ^2e_{13}}{2E-\nu ^2e_3}}$$ (26)
where $`\tau _E`$ is the modular parameter of the relevant spectral curve $`v^2=4\prod _{i=1}^3(t-t_i)`$:
$$\tau _E=\left(\oint _B\frac{dt}{\sqrt{4\prod _{i=1}^3(t-t_i)}}\right)/\left(\oint _A\frac{dt}{\sqrt{4\prod _{i=1}^3(t-t_i)}}\right).$$ (27)
For large $`I`$, $`2E(I)\approx I^2`$. Therefore the elliptic Calogero model, with rational dependence on the momentum and elliptic dependence on the coordinate, maps into the "mirror" dual system with elliptic dependence on the momentum and rational dependence on the coordinate. On the field theory side, the d=4 theory with adjoint matter maps into the d=6 theory with adjoint matter with the instanton corrections switched off. The coordinates on the Coulomb branch of the d=4 theory become the coordinates on the "Higgs branch" of the d=6 theory, which explains the origin of the term "mirror" symmetry in this context.

#### 2.3.2 Many-body systems

Now we would like to demonstrate how the "mirror" transform can be formulated in terms of the Hamiltonian or Poissonian reduction procedure. It appears that it corresponds, in a certain sense, to a simultaneous change of the gauge fixing and of the Hamiltonians. The precise meaning of these words will become clear from the examples below.
We summarize the systems and their duals in the rational and trigonometric cases in the following table:
$$\begin{array}{ccccccc}& & \mathrm{rat}.\mathrm{CM}& \longleftrightarrow & \mathrm{rat}.\mathrm{CM}& & \\ & R\to 0& \uparrow & & \uparrow & \beta \to 0& \\ & & \mathrm{trig}.\mathrm{CM}& \longleftrightarrow & \mathrm{rat}.\mathrm{RS}& & \\ & \beta \to 0& \uparrow & & \uparrow & R\to 0& \\ & & \mathrm{trig}.\mathrm{RS}& \longleftrightarrow & \mathrm{trig}.\mathrm{RS}& & \end{array}$$ (28)
Here $`CM`$ denotes the Calogero-Moser models and $`RS`$ stands for Ruijsenaars-Schneider. The parameters $`R`$ and $`\beta `$ are, respectively, the radius of the circle in which the coordinates of the particles take values and the inverse speed of light. The horizontal arrows in this table are the dualities relating the systems on the two sides; most of them were discussed by Simon Ruijsenaars . We notice that the duality transformations form a group, which in the case of the self-dual systems listed here contains $`\mathrm{SL}_2(Z)`$. The generator $`S`$ is the horizontal arrow described below, while the $`T`$ generator is in fact a certain finite-time evolution of the original system (which is always a symplectomorphism mapping the integrable system to the dual one). We begin by recalling the Hamiltonians of these systems. Throughout this section $`q_{ij}`$ denotes $`q_i-q_j`$.

Consider the space $`𝒜_{𝐓^2}`$ of $`SU(N)`$ gauge fields $`A`$ on a two-torus $`𝐓^2=𝐒^1\times 𝐒^1`$. Let the circumferences of the circles be $`R`$ and $`\beta `$. The space $`𝒜_{𝐓^2}`$ is acted on by the gauge group $`𝒢`$, which preserves the symplectic form
$$\mathrm{\Omega }=\frac{k}{4\pi ^2}\int _{𝐓^2}\mathrm{Tr}\delta A\wedge \delta A,$$ (29)
with $`k`$ being an arbitrary real number for now. The gauge group acts via evaluation at some point $`p\in 𝐓^2`$ on any coadjoint orbit $`𝒪`$ of $`G`$, in particular on $`𝒪=\mathrm{C}\mathrm{IP}^{N-1}`$. Let $`(e_1:\mathrm{}:e_N)`$ be the homogeneous coordinates on $`𝒪`$. Then the moment map for the action of $`𝒢`$ on $`𝒜_{𝐓^2}\times 𝒪`$ is
$$kF_A+J\delta ^2(p),\qquad J_{ij}=i\nu (\delta _{ij}-e_ie_j^{})$$ (30)
$`F_A`$ being the curvature two-form. Here we think of $`e_i`$ as coordinates on $`\mathrm{C}^N`$, constrained so that $`\sum _i|e_i|^2=N`$ and considered up to multiplication by a common phase factor.

Let us now exhibit a set of commuting Hamiltonians. Obviously, the eigenvalues of the monodromy of $`A`$ along any fixed loop on $`𝐓^2`$ commute among themselves. We consider the reduction at the zero level of the moment map. We thus have at least $`N-1`$ functionally independent commuting functions on the reduced phase space $`\mathcal{M}_\nu `$. Let us estimate the dimension of $`\mathcal{M}_\nu `$. If $`\nu =0`$ then the moment equation forces the connection to be flat, and therefore its gauge orbits are parameterized by the conjugacy classes of the monodromies around the two non-contractible cycles on $`𝐓^2`$: $`A`$ and $`B`$. Since the fundamental group $`\pi _1(𝐓^2)`$ of $`𝐓^2`$ is abelian, $`A`$ and $`B`$ must commute. Hence they are simultaneously diagonalizable, which makes $`\mathcal{M}_0`$ a $`2(N-1)`$-dimensional manifold. Notice that a generic point of the quotient space has a nontrivial stabilizer, isomorphic to the maximal torus $`T`$ of $`SU(N)`$. Now, in the presence of $`𝒪`$, the moment equation implies that the connection $`A`$ is flat outside of $`p`$ and has a nontrivial monodromy around $`p`$. Thus:
$$ABA^{-1}B^{-1}=\mathrm{exp}(R\beta J)$$ (31)
(the factor $`R\beta `$ comes from the normalization of the delta-function).
If we diagonalize $`A`$, then $`B`$ is uniquely reconstructed up to right multiplication by elements of $`T`$. The potential degrees of freedom in $`J`$ are "eaten up" by the former stabilizer $`T`$ of the flat connection: if we conjugate both $`A`$ and $`B`$ by an element $`t\in T`$, then $`J`$ gets conjugated as well. Now, it is important that $`𝒪`$ has dimension $`2(N-1)`$. The reduction of $`𝒪`$ with respect to $`T`$ consists of a point and does not contribute to the dimension of $`\mathcal{M}_\nu `$. Thereby we expect to get an integrable system. Without doing any computations we already know that we get a pair of dual systems: indeed, we may choose as the set of coordinates either the eigenvalues of $`A`$ or the eigenvalues of $`B`$.

The two-dimensional picture has the advantage that the geometry of the problem suggests the $`SL_2(Z)`$-like duality. Consider the operations $`S`$ and $`T`$ realized as:
$$S:(A,B)\to (ABA^{-1},A^{-1});\qquad T:(A,B)\to (A,BA)$$ (32)
which correspond to the freedom of choice of generators in the fundamental group of the two-torus. Notice that both $`S`$ and $`T`$ preserve the commutator $`ABA^{-1}B^{-1}`$ and commute with the action of the gauge group. The group $`\mathrm{\Gamma }`$ generated by $`S`$ and $`T`$ contracts, in the limit $`\beta ,R\to 0`$, to $`\mathrm{SL}_2(Z)`$, in the sense that we recover the transformations by expanding
$$A=1+\beta P+\mathrm{},\qquad B=1+RQ+\mathrm{}$$
for $`R,\beta \to 0`$. The disadvantage of the two-dimensional picture is the necessity of keeping too many redundant degrees of freedom.

The first of the contractions actually allows one to replace the space of two-dimensional gauge fields by the cotangent space to the (central extension of the) loop group:
$$T^{}\widehat{G}=\{(g(x),k\partial _x+P(x))\}$$
which is a "deformation" of the phase space of the previous example ($`Q(x)`$ got promoted to a group-valued field). The relation to the two-dimensional construction is the following. Choose a non-contractible circle $`𝐒^\mathrm{𝟏}`$ on the two-torus which does not pass through the marked point $`p`$. Let $`x,y`$ be the coordinates on the torus and let $`y=0`$ be the equation of the $`𝐒^\mathrm{𝟏}`$. The periodicity of $`x`$ is $`\beta `$ and that of $`y`$ is $`R`$. Then
$$P(x)=A_x(x,0),\qquad g(x)=P\mathrm{exp}\int _0^RA_y(x,y)dy.$$
The moment map equation looks as follows:
$$kg^{-1}\partial _xg+g^{-1}Pg-P=J\delta (x),$$ (33)
with $`k=\frac{1}{R\beta }`$. The solution of this equation in the gauge $`P=\mathrm{diag}(q_1,\mathrm{},q_N)`$ leads to the Lax operator $`A=g(0)`$ with $`R,\beta `$ exchanged. On the other hand, if we diagonalize $`g(x)`$:
$$g(x)=\mathrm{diag}\left(z_1=e^{iRq_1},\mathrm{},z_N=e^{iRq_N}\right)$$ (34)
then a similar calculation leads to the Lax operator
$$B=P\mathrm{exp}\frac{1}{k}\int P(x)dx=\mathrm{diag}(e^{i\theta _i})\mathrm{exp}(iR\beta \nu \mathrm{r})$$
with
$$\mathrm{r}_{ij}=\frac{1}{1-e^{iRq_{ji}}},i\ne j;\qquad \mathrm{r}_{ii}=-\sum _{j\ne i}\mathrm{r}_{ij}$$
thereby establishing the duality $`A\leftrightarrow B`$ explicitly.

When Yang-Mills theory is formulated on a cylinder with the insertion of an appropriate time-like Wilson line, it is equivalent to the Sutherland model describing a collection of $`N`$ particles on a circle. The observables $`\mathrm{Tr}\varphi ^k`$ are precisely the integrals of motion of this system. One can look at other supercharges as well. In particular, when the theory is formulated on a cylinder, there is another class of observables annihilated by a supercharge. One can arrange the combination of supercharges which annihilates the Wilson loop operator.
By repeating a procedure similar to the one in , one arrives at the quantum mechanical theory whose Hamiltonians are generated by the spatial Wilson loops. This model is nothing but the rational Ruijsenaars-Schneider many-body system. The self-duality of the trigonometric Ruijsenaars system has an even more transparent physical meaning. Namely, the field theory whose quantum mechanical avatar is the Ruijsenaars system is the three-dimensional Chern-Simons theory on $`𝐓^2\times 𝐑^1`$ with the insertion of an appropriate temporal Wilson line and a spatial Wilson loop. It is the freedom in placing the latter which leads to several equivalent theories. The group of (self-)dualities of this model is very large and is generated by the transformations $`S`$ and $`T`$ .

Finally, let us comment on the six-dimensional theory compactified on a three-dimensional torus $`𝐓^3`$ down to three dimensions. As was discussed extensively in , in the case where two of the three radii of $`𝐓^3`$ are much smaller than the third one, $`𝐑`$, the effective three-dimensional theory is a sigma model with the target space $`𝒳`$ being the hyper-Kähler manifold (in particular, holomorphic symplectic) which is the total space of an algebraic integrable system. The complex structure in which $`𝒳`$ is the algebraic integrable system is independent of the radius $`𝐑`$, while the Kähler structure depends on $`𝐑`$ in such a way that the Kähler class of the abelian fiber is proportional to $`1/𝐑`$. The theory in three dimensions which came from four dimensions upon compactification on a circle, and whose low-energy effective action describes only abelian degrees of freedom, can always be dualized to a theory of scalars/spinors only, due to the vector-scalar duality in three dimensions. In this way different sets of vector and hypermultiplets in four dimensions can lead to the same three-dimensional theory .

I am indebted to V. Fock, N. Nekrasov and V. Roubtsov for the collaboration on this subject. This work is supported in part by grant INTAS-97-0103.
# RADIO SUPERNOVAE AND THE SQUARE KILOMETER ARRAY

## 1. Supernovae

Supernovae (SNe) play a vital role in galactic evolution through explosive nucleosynthesis and chemical enrichment, through energy input into the interstellar medium, through the production of stellar remnants such as neutron stars, pulsars, and black holes, and through the production of cosmic rays. SNe are also being utilized as powerful cosmological probes, through both their intrinsic luminosities and their expansion rates. A primary goal of supernova research is an understanding of the progenitor stars and explosion mechanisms for the different SN types. Unfortunately, little is left of the progenitor star after explosion, and only the progenitors of four (SNe 1987A, 1978K, 1993J, and 1997bs), out of more than 1560 extragalactic SNe, have been directly identified in pre-explosion images. Without direct information about the progenitors, thorough examination of the environments of SNe can provide useful constraints on the ages and masses of the progenitor stars.

SNe come in three basic types (e.g., ): Ia, Ib/c, and II. Both SNe Ia and SNe Ib/c lack hydrogen lines in their optical spectra, whereas SNe II all show hydrogen in their optical spectra with varying strengths and profiles . The SNe Ib and SNe Ic subclasses do not show the deep Si II absorption trough near 6150 Å that characterizes SNe Ia; SNe Ib show moderately strong He I lines, while SNe Ic do not. These spectral differences are theoretically explained by differences in the progenitors. SNe Ia are currently thought to arise from the total disruption of white dwarf stars which accrete matter from a binary companion. In contrast, SNe II, SNe Ib, and SNe Ic are likely the explosions of massive stars. SNe II presumably result from the core collapse of massive, hydrogen-rich supergiant stars with masses $`8<M(M_{\odot })<40`$. On the other hand, SNe Ib/c are believed to arise from a massive progenitor which has lost all of its hydrogen envelope prior to explosion (e.g., ). One candidate progenitor for SNe Ib/c is an exploding Wolf-Rayet star (which evolves from stars with $`M>40M_{\odot }`$; e.g., ). An alternative candidate is an exploding, relatively less massive helium star in an interacting binary system . Possible variants of the normal SNe II are the "Type IIn" and the "Type IIb" , which both show unusual optical characteristics. SNe IIn show the normal broad Balmer line profiles, but with a narrow peak sitting atop a broad base. The narrow component presumably arises from interaction with a dense ($`n>10^7`$ cm<sup>-3</sup>) circumstellar medium (CSM) surrounding the SN. SNe IIb look optically like normal SNe II at early times, but evolve to more closely resemble SNe Ib at late times.

## 2. Radio Supernovae

Radio emitting supernovae (RSNe) have been searched for extensively since at least 1970, and several weak detections of SN 1970G were obtained . However, due to low resolution, background confusion, and sensitivity limitations, it was only with the Very Large Array (VLA; operated by the National Radio Astronomy Observatory of the Associated Universities, Inc., under a cooperative agreement with the National Science Foundation) that the first example was found which could be studied in detail at multiple radio frequencies (SN 1979C; ; see also ).
So far, about 27 RSNe have been detected with the VLA and other radio telescopes, and only $`\sim `$17 objects have been extensively studied, including the SNe II 1980K and 1979C , the SNe Ib/c 1983N , 1990B , and 1994I , the SNe IIn 1986J and 1988Z , and the SN IIb 1993J . The SNe IIn are unusual not only in the optical, but also in the radio, in being exceptionally powerful radio sources ($`10^{28}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>, or several thousand times the luminosity of Cas A, at 6 cm). Figure 1 shows several examples of detailed RSN light curves. Figure 2 shows an example of a particularly well-measured RSN, the SN IIb 1993J, which is only 3.6 Mpc distant, in M81.

Analysis of the radio emission provides vital insight into the interaction of the SN shock with preexisting circumstellar matter lost by the progenitor star or progenitor system, and, therefore, into the nature of presupernova evolution. All RSNe appear to have the common properties of 1) nonthermal synchrotron emission with high brightness temperature; 2) a decrease in absorption with time, resulting in a smooth, rapid turn-on first at shorter wavelengths and later at longer wavelengths; 3) a power-law decline of the flux density with time at each wavelength after maximum flux density (optical depth $`\sim `$1) is reached at that wavelength; and 4) a final, asymptotic approach of the spectral index $`\alpha `$ to an optically thin, non-thermal, constant negative value .

The observed RSNe can in general be represented by the "mini-shell" model , which involves the acceleration of relativistic electrons and the enhanced magnetic field necessary for synchrotron emission, arising from the SN shock interacting with a relatively high-density CSM, which has been ionized and heated by the initial SN UV/X-ray flash. The CSM is presumed to be the pre-SN mass lost in the late stages of the progenitor's evolution. The rapid rise in radio flux density results from the shock overtaking progressively more of the ionized wind matter, leaving less of it along the line of sight to absorb the emission from the shock region. The slow decline in flux density at each wavelength after the peak is then due to the SN shock expanding into generally lower density regions of the now optically thin CSM. This model has been parameterized by as:
$$S=K_1(\nu /5\mathrm{GHz})^{\alpha }(t-t_0)^{\beta }e^{-\tau }\left(\frac{1-e^{-\tau ^{\prime }}}{\tau ^{\prime }}\right)\text{mJy, }$$ (1)
$$\tau =K_2(\nu /5\mathrm{GHz})^{-2.1}(t-t_0)^{\delta }$$ (2)
and
$$\tau ^{\prime }=K_3(\nu /5\mathrm{GHz})^{-2.1}(t-t_0)^{\delta ^{\prime }}.$$ (3)
$`K_1`$, $`K_2`$, and $`K_3`$ formally correspond to the unabsorbed flux density ($`K_1`$), and to the uniform ($`K_2`$) and non-uniform ($`K_3`$) optical depths, respectively, at $`5`$ GHz one day after the explosion date $`t_0`$. The term $`e^{-\tau }`$ describes the attenuation of a local medium with optical depth $`\tau `$ that uniformly covers the emitting source ("uniform external absorption"), and the $`(1-e^{-\tau ^{\prime }})\tau ^{\prime -1}`$ term describes the attenuation produced by an inhomogeneous medium with optical depths distributed between 0 and $`\tau ^{\prime }`$ ("clumpy absorption"). All absorbing media are assumed to be purely thermal, ionized hydrogen with opacity $`\propto \nu ^{-2.1}`$. The parameters $`\delta `$ and $`\delta ^{\prime }`$ describe the time dependence of the optical depths for the local uniform and non-uniform media, respectively. Normally $`0>\delta >\delta ^{\prime }`$, so that $`\tau ^{\prime }`$ is the dominant opacity when $`(t-t_0)\lesssim (K_3/K_2)^{1/(\delta -\delta ^{\prime })}\text{ days}`$. At later times, the dominant opacity is $`\tau `$, until the CSM becomes optically thin and the radio emission is described by its characteristic power-law decline with index $`\beta `$. In both Figures 1 and 2 we show the model fits to the observed data.
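As an illustration, the following Python sketch (not from the paper) evaluates the parameterized model of equations (1)-(3) for a hypothetical set of fitting parameters, chosen purely for demonstration, and reproduces the characteristic turn-on at progressively later times for longer wavelengths:

```python
# Evaluate the parameterized RSN light-curve model, eqs. (1)-(3).
# The parameter values below are illustrative placeholders, not a fit
# to any actual supernova.
import numpy as np

def flux_mJy(nu_GHz, t_days,
             K1=1e3, alpha=-0.7, beta=-0.7,
             K2=1e2, delta=-2.5, K3=1e5, delta_p=-3.5):
    """S(nu, t) in mJy; t_days is time since explosion, t - t0."""
    tau = K2 * (nu_GHz / 5.0) ** -2.1 * t_days ** delta      # uniform CSM
    tau_p = K3 * (nu_GHz / 5.0) ** -2.1 * t_days ** delta_p  # clumpy CSM
    S0 = K1 * (nu_GHz / 5.0) ** alpha * t_days ** beta
    return S0 * np.exp(-tau) * (1.0 - np.exp(-tau_p)) / tau_p

t = np.logspace(1, 4, 200)   # 10 .. 10000 days after explosion
for nu, band in [(1.4, "20 cm"), (5.0, " 6 cm"), (15.0, " 2 cm")]:
    S = flux_mJy(nu, t)
    print(f"{band}: peak {S.max():8.1f} mJy near day {t[np.argmax(S)]:6.0f}")
# Shorter wavelengths (2 cm) peak earlier than longer ones (20 cm),
# i.e. the "turn-on first at shorter wavelengths" behavior.
```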
As more radio information has become available, some interesting variations on this model have appeared, including clumpiness in the CSM , variations in the mass-loss rate (and thus in the stellar evolution phase) in the last few thousand years before explosion , and possibly synchrotron self-absorption (SSA) in the earliest phases of the SN evolution (the only example of SSA known outside of compact galactic nuclei and quasars). However, even with the considerable improvement in VLA sensitivity over the past 20 years, the field of RSN studies is still very much sensitivity limited. More than 100 nearby SN events have been observed in the radio, with a detection rate of only $`\sim `$1/4, and we have been able to develop relatively complete, multi-frequency radio light curves for fewer than half of those detected. With more than 1300 SNe discovered optically since the first modern SN discovery, SN 1885A (S Andromedae) in M31, there is insufficient radio sensitivity, even with the VLA, to have a chance of detecting more than a small fraction of them. Such sensitivity limitations restrict the scope of most RSN studies to distances smaller than that of the Virgo Cluster, a cosmologically insignificant distance. Furthermore, because of sensitivity limitations, the statistics of radio emission from the different types of optical SNe are very poor, with only 7 examples of SNe Ib/c and no examples of SNe Ia ever detected. Even the generally radio-brighter Type II SNe have fewer than two dozen detections, and fewer than half of that number have well-measured, multi-frequency radio light curves.

## 3. New Observations Possible with the SKA

### 3.1. Radio Emission from Type Ia SNe

With the Square Kilometer Array (SKA), RSN studies would enter a new era. We would be able to monitor RSNe at a practical threshold out to distances ten times greater than is currently possible. As a result, the statistics for both the Type II and the Type Ib/c RSNe would increase substantially, giving a better indication of progenitor types and environments. An aspect in which the SKA would greatly advance RSN studies, and our general understanding of SNe, is the possible detection of radio emission from Type Ia SNe. These are the luminous objects currently serving as powerful cosmological probes out to $`z\sim 1`$ and providing interesting constraints on $`\mathrm{\Omega }_\mathrm{M}`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ . Yet we are still uncertain as to what type of stars give rise to Type Ia SNe. We suspect theoretically that they involve the deflagration or detonation of a white dwarf in a mass-transfer binary system, but being able to observe the properties of the mass lost from the presupernova system, through the shock-generated radio emission, could greatly improve our knowledge of the progenitor system's properties. Some scenarios for Type Ia SN progenitor systems (see, e.g., ) could generate a CSM around the SN sufficiently dense to produce faint radio emission, currently below the sensitivity limit of the VLA. The level of the SN shock/CSM interaction for Type Ia SNe, and its implication for the nature of the progenitor system, thus await a more sensitive radio array.
### 3.2. RSN Distance Determinations

Evidence has recently been presented that the radio emission from SNe may have quantifiable properties which allow for distance determinations. Type II RSNe, based on a small sample of twelve objects, appear to obey a relation $`L_{6\mathrm{cm}\mathrm{peak}}\approx 5.5\times 10^{23}(t_{6\mathrm{cm}\mathrm{peak}}-t_0)^{1.4}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>, with time in days (Figure 3). Thus, measurement of the radio turn-on time ($`t_{6\mathrm{cm}\mathrm{peak}}-t_0`$) and the peak flux density $`S_{6\mathrm{cm}\mathrm{peak}}`$ may yield a luminosity estimate and therefore a distance. The reality of this relation can be tested simply through the study of more objects, and some examples of the class are bright enough that $`\sim `$1 per year can presently be detected in the radio, slowly increasing the available statistics. For the radio-fainter Type II SNe, however, there exists a large gap in our knowledge between the very faint, somewhat oddball SN 1987A ($`\sim 3\times 10^{23}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> at 6 cm peak; it could only be detected in the radio because it was extremely nearby, in the LMC) and the faintest of the normal Type II RSNe, such as SN 1980K ($`\sim 1\times 10^{26}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>), which can be observed in more distant galaxies with the VLA sensitivity and which are more than two orders of magnitude radio brighter than SN 1987A at 6 cm peak.

Additionally, the SKA holds the possibility of detecting the very luminous RSNe IIn at quite large distances. Figure 4 illustrates that, at a sensitivity level of 1 $`\mu `$Jy, one can detect the brightest of the RSNe, such as the Type IIn SN 1988Z and SN 1986J, at the cosmologically interesting distance of $`z=1`$. At a sensitivity level of 0.1 $`\mu `$Jy one can even study more normal Type II RSNe, such as SNe 1979C and 1980K, at such cosmologically interesting distances. If we can extend our horizons to observe SNe up to a redshift of $`z\sim 1`$, we will fill in the gaps in our knowledge of SN progenitors and improve the statistics for RSNe of all types. RSNe may then also provide a powerful and independent technique for addressing the long-standing problem of distance estimation in astrophysics.
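A back-of-envelope sketch of both points follows (illustrative only; the 1 mJy peak flux, the 100-day rise time, the SN IIn luminosity of 10<sup>28</sup> erg s<sup>-1</sup> Hz<sup>-1</sup>, and the assumed cosmology with a luminosity distance of roughly 6.6 Gpc at $`z\sim 1`$ are placeholder values, and K-corrections are ignored):

```python
# 1) Distance from the Type II relation
#    L_6cm,peak ~ 5.5e23 (t_peak - t0)^1.4 erg/s/Hz (time in days),
#    for a hypothetical RSN peaking at 1 mJy at 6 cm, 100 days after explosion.
import numpy as np

MPC_CM = 3.086e24          # cm per Mpc
JY = 1e-23                 # erg/s/cm^2/Hz per Jansky

t_peak = 100.0                       # days since explosion (assumed)
S_peak = 1e-3 * JY                   # 1 mJy (assumed)
L_peak = 5.5e23 * t_peak**1.4        # erg/s/Hz from the relation
D = np.sqrt(L_peak / (4 * np.pi * S_peak))
print(f"L_peak = {L_peak:.2e} erg/s/Hz -> D = {D / MPC_CM:.0f} Mpc")   # ~17 Mpc

# 2) Flux of a luminous SN IIn (L ~ 1e28 erg/s/Hz) at z ~ 1,
#    taking an assumed luminosity distance of ~6.6 Gpc:
d_L = 6.6e3 * MPC_CM
S = 1e28 / (4 * np.pi * d_L**2)
print(f"S(z~1) ~ {S / (1e-6 * JY):.2f} microJy")   # a few tenths of a microJy
```

The second estimate lands at a few tenths of a microJy, consistent with the statement that a 1 $`\mu `$Jy sensitivity reaches the brightest RSNe at $`z\sim 1`$.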
### 3.3. The SN-SNR Connection

Old SNe, such as SN 1968D in NGC 6946, SN 1970G in M101, and SN 1923A in M83, provide a connection to young supernova remnants (SNRs). Currently, a large gap in time exists between the oldest RSNe and the youngest radio SNRs, such as Cas A (SN $`\sim `$1680), Kepler (SN 1604), Tycho (SN 1572), etc. (see Figure 5). Bridging this gap and understanding the evolution of SNe into SNRs is vital for our understanding of SNe, their interaction with the CSM, and their energy and chemical input into the ISM, with the resulting influence on star formation and galaxy evolution. The SKA would potentially allow detection of decades-old SNe which may still be radio emitters but are currently well below the VLA sensitivity limit.

## 4. Summary

The limitations of our studies of RSNe are intimately tied to the present sensitivity of the VLA, such that: 1) the realistic study limit for short, multi-frequency monitoring is $`\sim `$1 mJy peak flux density; 2) we can only detect normal SNe out to $`\sim `$ Virgo Cluster distances ($`\sim `$20 Mpc); 3) we can only study very luminous RSNe to $`\sim `$100 Mpc; 4) there is a significant delay from observing to mapping; and 5) no realistic radio SN search modes are possible.

The current problems for RSN studies are: normal Type II SNe can only be studied to optical magnitude $`m_\mathrm{V}\sim 12`$-14; $`\sim `$1300 optical SNe are known, but there are only $`\sim `$27 radio detections; and $`\sim `$150 SNe are discovered each year, but there are only $`\sim `$1 to 2 new radio detections yearly. The SKA could extend RSN detections to $`m_\mathrm{V}\sim 19`$, such that $`\sim `$50 radio detections per year would become possible. The SKA could possibly provide better SN statistics than optical searches by not being limited by dust absorption and, as a result, could discover "hidden" SNe. The SKA could therefore provide better galaxy SN rates, which would yield improved modeling of the chemical and dynamical evolution of galaxies.

Radio data are vital for understanding the nature of SN progenitor stars and stellar systems by probing the pre-SN mass loss in the late stages of the progenitor's stellar evolution. As a result, radio data place important constraints on the SN progenitor properties and masses. Improved radio data could also extend the monitoring time of young RSNe and provide improved detection of "old" SNe, to bridge the SN-SNR time gap. With the SKA, normal SNe should be radio detectable to $`\sim `$200 Mpc; bright SNe should be radio detectable to $`z\sim 1`$; and radio distance estimates could be made from the radio peak luminosity vs. turn-on time relation. $`H_0`$ determinations could be made independent of optical limitations, and estimation of other cosmological parameters, such as $`q_0`$ and $`\mathrm{\Omega }`$, might be possible.

## 5. Conclusions and Recommendations

The current VLA is severely sensitivity limited for SN studies, and the current lack of on-line mapping at the VLA precludes RSN searches. Thus, for RSN studies one would like to see: 1) sensitivity of 1 $`\mu `$Jy rms or better in 30 minutes; 2) resolution of $`1^{\prime \prime }`$ or better at 1.4 GHz (preferably also at 327 MHz); 3) simultaneous, multi-frequency observations; 4) real-time, on-line editing, calibration, and mapping; and 5) a nearly circular snapshot beam. The SKA and, for example, a VLA expansion would improve SN environment/progenitor studies, would improve SN statistics, would lead to improvements in our understanding of galactic chemical and dynamical evolution, and would provide independent distance and cosmological parameter estimates.

#### Acknowledgments.

KWW and MJM wish to thank the Office of Naval Research (ONR) for the 6.1 funding supporting this research. Further information about RSNe can be found at http://rsd-www.nrl.navy.mil/7214/weiler/.

## References

Bertelli, G., et al., A&A, 106, 275 (1994)
Filippenko, A. V., AnnRevAstAp, 35, 309 (1997)
Schlegel, E. M., AJ, 111, 1660 (1996)
Porter, A. C., & Filippenko, A. V., AJ, 93, 1372 (1987)
Conti, P. S., et al., ApJ, 274, 302 (1983)
Humphreys, R., Nichols, M., & Massey, P., AJ, 90, 101 (1985)
Uomoto, A., ApJ, 310, L35 (1986)
Podsiadlowski, Ph., Joss, P. C., & Hsu, J. J. L., ApJ, 391, 246 (1992)
Schlegel, E. M., MNRAS, 244, 269 (1990)
Filippenko, A. V., AJ, 96, 1941 (1988)
de Bruyn, A. G., A&A, 26, 105 (1973)
Allen, R. J., Goss, W. M., Ekers, R. D., & de Bruyn, A. G., A&A, 48, 253 (1976)
Weiler, K. W., van der Hulst, J. M., Sramek, R. A., & Panagia, N., ApJ, 243, L151 (1981)
Weiler, K. W., Sramek, R. A., Panagia, N., van der Hulst, J. M., & Salvati, M., ApJ, 301, 790 (1986)
Weiler, K. W., Van Dyk, S. D., Panagia, N., Sramek, R. A., & Discenna, J. L., ApJ, 380, 161 (1991)
Weiler, K. W., Van Dyk, S. D., Pringle, J. E., & Panagia, N., ApJ, 399, 672 (1992)
Montes, M. J., Weiler, K. W., Van Dyk, S. D., Sramek, R. A., Panagia, N., & Park, R., ApJ, submitted (1999)
Weiler, K. W., Van Dyk, S. D., Panagia, N., & Sramek, R. A., ApJ, 398, 248 (1992)
Montes, M. J., Van Dyk, S. D., Weiler, K. W., Sramek, R. A., & Panagia, N., ApJ, 506, 874 (1998)
Van Dyk, S. D., Sramek, R. A., Weiler, K. W., & Panagia, N., ApJ, 409, 162 (1993)
Van Dyk, S. D., & Rupen, M. P., et al., in preparation (2000)
Weiler, K. W., Panagia, N., & Sramek, R. A., ApJ, 364, 611 (1990)
Van Dyk, S. D., Weiler, K. W., Sramek, R. A., & Panagia, N., ApJ, 419, L69 (1993)
Van Dyk, S. D., Weiler, K. W., Sramek, R. A., Rupen, M. P., & Panagia, N., ApJ, 432, L115 (1994)
Chevalier, R. A., ApJ, 259, 302 (1982)
Chevalier, R. A., ApJ, 259, L85 (1982)
Chevalier, R. A., ApJ, 499, 810 (1998)
Riess, A. G., et al., AJ, 116, 1009 (1998)
Perlmutter, S., et al., ApJ, in press (1999)
Branch, D., Livio, M., Yungelson, L. R., Boffi, F. R., & Baron, E., PASP, 107, 1019 (1995)
Weiler, K. W., Van Dyk, S. D., Montes, M. J., Panagia, N., & Sramek, R. A., ApJ, 500, 51 (1998)
# The Relativistic Astrophysics Explorer: A New Mission for X-Ray Timing

## 1. Scientific Motivation for a New X-Ray Timing Mission

The Rossi X-Ray Timing Explorer (RXTE) has made substantial and unique contributions to the study of the behavior of matter in strong gravitational fields near accreting compact objects, the formation of relativistic jets, the emission mechanisms of active galactic nuclei, the evolution of neutron stars in binaries, the x-ray emission regions in cataclysmic variables, and many other aspects of high-energy astrophysics (for a review see Bradt 1999). The key feature of RXTE is a large effective-area x-ray detector coupled with a high telemetry bandwidth. The prowess of RXTE for fast timing opened a new "discovery space" in rapid x-ray variability, allowing timing studies at the dynamical time scales of the innermost orbits around stellar-mass compact objects, and leading to the discovery of millisecond quasiperiodic oscillations (QPOs) from accreting neutron stars and black holes. The large x-ray detector area also made possible many other advances, such as the discovery of coherent millisecond pulsations from an accreting neutron star and the study of rapid spectral variations, such as the $`\sim `$200 s cycles in the microquasar GRS 1915+105 related to the ejection of the inner regions of the accretion disk.

The great success of RXTE is a strong indication that further progress in x-ray timing will lead to new scientific advances. Here we describe a next-generation x-ray timing mission which would offer an order of magnitude increase in x-ray timing capabilities via an x-ray detector with a geometric area of at least 60,000 cm<sup>2</sup>, equal to ten times that of RXTE. The most important advances made with this order of magnitude increase in collecting area are likely to be true discoveries and thus cannot be anticipated. However, an order of magnitude increase in area would benefit many scientific investigations. Here we describe three particular examples.

Fast quasiperiodic oscillations from black hole candidates (BHCs) have been discovered in three systems, with frequencies of 67-300 Hz (Remillard et al. 1999). The fast QPOs from BHCs are rather weak (rms amplitudes near 1%) and difficult to study in detail with RXTE. A number of models of the QPOs have been proposed, all of which involve strong-field general relativistic effects, but distinguishing among the various models will be difficult with the RXTE data. The increase in photon statistics with RAE would make possible much more accurate measurements of the QPO parameters and of their variations with time or correlations with spectral or other timing parameters. This may lead to a unique identification of the QPO generation mechanism. Understanding these QPOs would provide a unique probe of strong-field gravity.

Millisecond oscillations in x-ray bursts have been discovered from a number of neutron stars. The oscillations have periods in the range 1.7-3 ms and are interpreted as due to inhomogeneous nuclear burning of matter initially located on the neutron star surface. The burst oscillations provide a means to constrain the neutron star mass-radius relation. Currently, the best constraint comes from a deep modulation ($`75\%\pm 17\%`$) seen in the initial 62.5 ms of one burst (Strohmayer et al. 1998). RAE would detect roughly 1000 counts in each oscillation cycle near the peak of a typical bright burst.
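A rough check of this estimate is sketched below (all inputs are assumed, back-of-envelope values, not taken from the text):

```python
# Back-of-envelope check of "~1000 counts per oscillation cycle".
# Assumptions (illustrative): RXTE/PCA full-array Crab rate ~ 1.3e4 counts/s,
# RAE area ~ 10x RXTE, bright burst peak ~ 3 Crab, and an oscillation
# period of 2.2 ms (within the 1.7-3 ms range quoted above).

rxte_crab_rate = 1.3e4      # counts/s for the Crab (assumed)
area_ratio = 10.0           # RAE geometric area / RXTE
burst_peak_crab = 3.0       # burst peak flux in Crab units (assumed)
cycle_s = 2.2e-3            # oscillation period in seconds

rae_rate = rxte_crab_rate * area_ratio * burst_peak_crab
print(f"RAE burst peak rate ~ {rae_rate:.2e} counts/s")
print(f"counts per cycle   ~ {rae_rate * cycle_s:.0f}")  # ~860, i.e. roughly 1000
```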
Counting statistics at this level would permit detailed examination of individual oscillation cycles and allow accurate measurement of the modulation amplitude in the first few oscillation cycles. Both our understanding of the burst oscillations and constraints on the neutron star mass-radius relation would improve. Eclipse mapping of the accreting magnetic white dwarf XY Arietis showed that the x-ray flux recovers at eclipse egress in $`<2`$ s (Hellier 1997). For the previous 15 years, the fraction, $`f`$, of the white dwarf surface involved in x-ray emission had been debated with values ranging from 0.001 to 0.3. Hellier’s result, obtained by combining 20 RXTE observations, shows that $`f<0.002`$. Using RAE, an accurate estimate could be made of the emitting region location on each egress, which would allow direct mapping of the movement of the emitting region. Similar mapping can also be done in neutron star and black hole binaries. The best constraints currently available on the size of the x-ray emitting regions in black hole systems come from x-ray dips (e.g. Tomsick et al. 1998). RAE would lead to significant advances in mapping x-ray emission from many different x-ray sources. ## 2. Mission Overview The Relativistic Astrophysics Explorer (RAE) will consist of two scientific instruments: a large area x-ray detector and a wide-field x-ray monitor. RAE will be designed to have telemetry sufficient to transmit the large event rate and flexible operations with multiple repointings each day to permit study of transient sources and rare states of known sources. The goal for the large area x-ray detector is to provide an order of magnitude increase in x-ray timing capabilities relative to RXTE. The design goals are: a useful detector area of at least 6 m<sup>2</sup>, sensitivity from 2 keV to 30 keV, absolute timing better than $`10\mu \mathrm{s}`$, minimal dead time effects for sources 10 times as bright as the Crab nebula, an energy resolution of 1.2 keV (preferably 300 eV) at 6 keV, no imaging, and a field of view of $`1^{\circ }`$ or smaller. All-sky x-ray monitoring is needed for several reasons. First, the x-ray monitor provides continual long-term light curves. As many x-ray sources are highly variable, knowledge of the long term behavior is important for understanding the physical nature of the sources and for placing pointed observations in the context of the source state. Second, an x-ray monitor provides a means to trigger pointed observations when a selected source reaches a state of particular interest. Finally, an x-ray monitor allows discovery of new sources or new, unpredicted, outbursts of known sources. Many of the sources of interest are transients with unknown or irregular recurrence intervals. An x-ray monitor is essential to detect transient events. The design goal for the x-ray monitor is a sensitivity of several mCrab for daily observations, sufficient to monitor a large sample of AGNs ($`\sim 40`$) on a daily basis. ## 3. Large Area X-Ray Detector Array An effective area of 6 m<sup>2</sup> will require a total geometric detector area near 10 m<sup>2</sup>. A detector with a cross sectional area of 10 m<sup>2</sup> and a thickness of 0.75 m fits within the 3 m diameter fairing of a two-stage Delta II. Thus, a 6 m<sup>2</sup> detector can be accommodated in a “medium-sized” mission without a deployment mechanism. The x-ray detector must have low mass per unit effective area, reasonable cost, highly reliable and stable operation, efficient rejection of particle backgrounds, and good energy resolution.
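Before describing the detector technology, it is worth seeing concretely why the resolution goal is driven by readout noise. The following back-of-envelope sketch uses the standard silicon pair-creation energy and Fano factor; the ENC (equivalent noise charge) values are illustrative assumptions chosen to bracket the detector options discussed below:

```python
import math

W_SI, FANO = 3.6, 0.115    # eV per electron-hole pair and Fano factor for silicon

def fwhm_ev(e_kev, enc):
    """FWHM resolution: Fano statistics and readout noise (enc, rms electrons)
    added in quadrature, converted back to eV."""
    n_pairs = e_kev * 1e3 / W_SI
    sigma = math.sqrt(FANO * n_pairs + enc ** 2)   # rms fluctuation in electrons
    return 2.355 * sigma * W_SI

for enc in (0, 12, 200):   # noiseless limit, drift-chamber-like, long-strip-like
    print(f"ENC = {enc:3d} e-: {fwhm_ev(6.0, enc):5.0f} eV FWHM at 6 keV")
# ~117 eV (Fano limit), ~155 eV, ~1700 eV: low detector capacitance, and hence
# low ENC, is what separates the resolutions of the technologies described below.
```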
After extensive review of the available detector technologies, we have selected silicon detectors as the most promising candidate for large format x-ray astronomy detectors. A 2 mm thick silicon detector provides 40% efficiency up to 30 keV at a mass of 0.5 gm/cm<sup>2</sup>; this compares favorably to the PCA on RXTE at a mass of 90 gm/cm<sup>2</sup>. Silicon is widely used and can be obtained at low cost due to large economies of scale; 10 m<sup>2</sup> of commercially available silicon strip detectors (see below) can be procured for less than US$4M. Silicon has a low ionization potential which leads to good energy resolution and allows silicon detectors to be operated without internal amplification. While the lack of internal amplification mandates the use of low capacitance detectors and low noise electronics for good performance, it also eliminates the need for high voltage and facilitates reliable and stable operation. Silicon detectors can be configured in different geometries including PIN diodes, silicon strip detectors, and silicon drift chambers. Silicon strip detectors (SSDs) are widely used in particle physics. SSDs offer one-dimensional imaging which may provide a means of effective discrimination against particle backgrounds. However, SSDs employ charge collection strips which run the length of a wafer and, thus, have relatively high capacitance which leads to high electronic noise and poor energy resolution. Given mission constraints, we estimate that the best energy resolution achievable with SSDs will be 1–2 keV at 6 keV (see Costa et al. these proceedings). Silicon drift chambers (SDCs) (Gatti & Rehak 1984) have an internal electric field arranged so that electrons, produced by interaction of radiation within the silicon, drift toward a single charge collection point. SDCs have a great advantage in that the readout electrode can be made very small even if the detection area is large; thus, the readout capacitance is small, which leads to low readout noise and good energy resolution. Cylindrical SDCs, in which a single detector electrode is used to collect charge from a cylindrical drift region, are particularly well-optimized for x-ray spectroscopy (Rehak et al. 1985; Lechner et al. 1996). Excellent performance has been demonstrated from cylindrical SDCs in the laboratory (Gauthier et al. 1994; Fiorini et al. 1997) and in the field (Longoni et al. 1998). Operating at $`-15^{\circ }\mathrm{C}`$, a resolution of 155 eV (FWHM) at the Mn-$`K\alpha `$ line has been obtained (Fiorini et al. 1997). The main question for x-ray astronomy is whether effective particle background rejection can be achieved. We currently are engaged in a technology development program to study application of SDCs to x-ray astronomy and to develop effective techniques for charged particle background rejection. ## 4. Conclusion We have presented a conceptual design for a next generation x-ray timing mission and identified key technologies which must be developed. The outstanding results from the Rossi X-Ray Timing Explorer are strong motivation for a new mission with an order of magnitude increase in x-ray timing capabilities. ## REFERENCES Bradt, H.V. 1999, astro-ph/9901174 Fiorini, C., et al. 1997, Rev. Sci. Instrum., 68, 2461 Gatti, E., & Rehak, P. 1984, NIM, A225, 608 Gauthier, C., et al. 1994, NIM, A349, 258 Hellier, C. 1997, MNRAS, 291, 71 Lechner, P., et al. 1996, NIM, A377, 346 Longoni, A., et al. 1998, NIM, A409, 407 Rehak, P., et al. 1985, NIM, A235, 224 Remillard, R.A., et al. 1999, ApJ, 517, L127 Strohmayer, T.E., et al.
1998, ApJ, 498, L135 Tomsick, J.A., Lapshov, I., & Kaaret, P. 1998, ApJ, 494, 747
# TASI Lectures on Matrix Theory ## 1 Introduction – Limitations on Lagrangian Quantum Mechanics This lecture series is about Matrix Theory, a nonperturbative, Lagrangian formulation of M Theory. There has been a lot of confusion about this theory in the literature, to the extent that it has been characterized as controversial in the popular press. Much of this confusion has been caused by misinterpretation and misunderstanding of what the theory was supposed to do, and too little appreciation of how important it is to take the large $`N`$ limit in order to calculate amplitudes of interest to a Lorentz invariant theory. However, some of the difficulties of Matrix Theory are more important, and reflect crucial issues about any quantum theory of gravity. I therefore want to begin with a critical discussion of what we can expect to be the limitations of ordinary Lagrangian quantum mechanics in the description of any quantum theory of gravity. Any theory which contains general relativity (GR) must be time reparametrization invariant. Mathematically this means that the time translation generator for an arbitrary definition of time is a constraint, which must vanish on physical states. Physically it means that any nonvanishing definition of energy must be conjugate to a physical clock variable which measures time. This is certain to cause problems in the quantum theory, where there must in general be variables which do not commute with the clock. Thus, the very definition of physical time translation implies some sort of semiclassical approximation in which the clock evolves classically. In a closed cosmology, we cannot expect such an approximation to be valid with arbitrary precision. However, if the universe has a boundary and is of infinite size, then the boundary conditions at infinity define frozen classical variables which can be used as clocks. Typically, we insist that the metric at infinity approaches that of a noncompact symmetric space (Minkowski or Anti-de Sitter (AdS)) and the natural time translation generators are chosen from the asymptotic symmetry group of the metric. In these lectures we will be concerned primarily with asymptotically Minkowski spaces. Let us first consider an ordinary Lorentz frame at spacelike infinity. Then there is a special Poincaré subgroup of the asymptotic diffeomorphisms of the metric, and up to a Lorentz transformation, a unique choice of Hamiltonian. Quantum M Theory (or, for those who are still skeptical, any quantum theory of gravity) will have a Hilbert space on which this generator acts as a Hermitian operator. We also expect it to have a ground state $`|0\rangle `$, whose energy eigenvalue must, for consistency, be zero. It might in fact have a discrete or continuous ground state degeneracy, labelled by expectation values of Poincaré invariant operators. We will assume that, as in local field theory, there is a large class of interesting operators (hereafter called localizable operators) that do not disturb the boundary conditions at infinity. If we restrict attention to localizable operators, the Hilbert space breaks up into superselection sectors, each of which has a unique ground state.
Given any localizable operator $`O`$, we can formally define the time dependent Heisenberg operator $`O(t)`$ by $$O(t)\equiv e^{iHt}Oe^{-iHt}.$$ (1) To investigate the degree of formality of this definition, we compute the two point function $$\langle O(t)O^{\dagger }(0)\rangle =\int _0^{\mathrm{\infty }}dE\,e^{-iEt}\rho _O(E),$$ (2) where the spectral density is defined by $$\rho _O(E)\equiv \sum _n\delta (E-E_n)\left|\langle 0|O|n\rangle \right|^2.$$ (3) The crucial question is now the convergence of this integral representation, or equivalently, the high energy behavior of the spectral density. In quantum field theory, the high energy behavior of the theory is determined by a conformally invariant fixed point. The density of states in volume $`V`$ behaves like $$\rho \sim e^{cV(E/V)^{\frac{d-1}{d}}},$$ (4) where $`d`$ is the dimension of spacetime. Generic operators localized in the volume will have a spectral density $`\rho _O`$ with the same behavior. However, there is a special class of local operators of fixed dimension, which connect the vacuum only to the states in a given irreducible representation of the conformal group. The spectral density of these operators grows only like a power of the energy. Note that in either case, we can define the Green’s function by analytic continuation of an absolutely convergent integral in Euclidean time. In a quantum theory of gravity, it is extremely plausible that, for a theory with four or more asymptotically Minkowski dimensions, the high energy density of states is dominated by highly metastable black holes. The existence and gross properties of these states follow from semiclassical GR. The density of black hole states is given by the Bekenstein-Hawking formula $$\rho \sim e^{k(E/M_P)^{\frac{d-2}{d-3}}},$$ (5) where $`M_P`$ is the $`d`$ dimensional Planck mass. There are two interesting features of this formula. The first is its independence of the volume. This is a consequence of the Jeans instability. If we try to construct an extended translation invariant state other than the vacuum in a theory containing gravity, we eventually get to an object whose Schwarzschild radius exceeds its physical size and the system collapses into a black hole. The only translation invariant states are those which are superpositions of a single black hole at different positions in spacetime. Secondly, any operator whose matrix elements between the vacuum and states of energy $`E`$ are not drastically cut off at energies above the Planck scale will not have a well defined two point function. Thus, although the general formalism of quantum mechanics will be valid, we should not expect a conventional Lagrangian description of the system to be applicable. Indeed, the Lagrangian formalism produces Green’s functions of Heisenberg operators as its fundamental output, and we are supposed to deduce the energy spectrum and the structure of the Hilbert space from this more fundamental data (similar remarks are of course applicable to any more abstract formalism which takes Green’s functions as its basic starting point). In fact, implicit in the definition of the Lagrangian formalism is an assumption that the short time behavior of the system is approximately free. This assumption is not even valid for a nontrivial fixed point theory. However, in field theory we can restore the validity of Lagrangian methods by realizing the theory as the limit of a cutoff system or (in many cases) by realizing the nontrivial fixed point as an infrared limit of an asymptotically free theory.
No such workarounds appear to be available for the quantum theory of gravity. Another fascinating possibility is that the space of states of a very large black hole has a group of symmetries which partitions it into irreducible representations in much the same way that the conformal group partitions the states of charged black branes with AdS horizons. Then one could construct operators which connected the vacuum only to those states in an irrep of the group. If the density of states in an irrep had subexponential growth then these operators would have sensible correlation functions. In the absence of such a large black hole symmetry group, there will be no conventional Green’s functions in the holographic dual of asymptotically flat spacetimes. It is interesting to contrast these results with our expectations in a light cone frame. We will discuss light cone formalism more extensively in the next section. For the moment we will need only the formula for light cone energy: $$P^{-}=\frac{\mathbf{P}^2+M^2}{P^+}$$ (6) where $`\mathbf{P}`$ (which we will set equal to zero) and $`P^+`$ are the transverse and longitudinal momenta respectively. Again assuming that the high energy density of states is dominated by black holes, we can write the light cone density of states as $$\rho \sim e^{k(P^{-}/M_P)^{\frac{d-2}{2(d-3)}}}.$$ (7) Note that we now expect a convergent two point function in light cone time for any $`d\geq 5`$. Even for $`d=4`$, we find only a Hagedorn spectrum (rather than the more divergent form of the black hole spectrum in ordinary energy) and the Green’s function will be defined for sufficiently long Euclidean time separations. The conclusion that I want you to draw from this is that we should only expect a conventional Lagrangian quantum mechanics for M Theory in light cone time, and perhaps only for $`5`$ or more noncompact Minkowski dimensions. Another point to remember is that the high energy density of states seems to increase as the number of noncompact dimensions decreases. This suggests that compactified M Theory has more fundamental degrees of freedom than uncompactified M Theory, a conclusion which we will see is borne out in the sequel. Some readers will be curious about the case of two or three asymptotically Minkowski dimensions, where there are no black holes. Here the story is quite different, at least in those situations with enough supersymmetry (SUSY) to guarantee that there is a massless scalar field in the supergravity (SUGRA) multiplet. In these cases one can argue that the system has very few states, because would-be localized excitations so distort the geometry of spacetime that the asymptotic boundary conditions are not satisfied. In some sense, the resulting theory is topological. Another interesting example of the arguments used in this section is M Theory in spaces with 3 or more asymptotically AdS dimensions ($`AdS_2`$ has the same sort of problems as two or three dimensional Minkowski space). Here the boundary of spacetime is timelike and there is no analog of a light cone frame. There are two natural inequivalent choices of Hamiltonian, corresponding to global and Poincaré time. The corresponding black objects are AdS Schwarzschild black holes and near extremal black branes of appropriate dimension. Both have positive specific heat, which is to say that their density of states grows less rapidly than an exponential. And in precisely these cases, we expect that there is an exact description of the system in terms of a conventional quantum field theory.
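The convergence criteria in this section reduce to comparing the growth exponents in (4), (5), and (7); the following sketch is nothing but bookkeeping for those formulas:

```python
from fractions import Fraction as F

def verdict(a):
    return "convergent" if a < 1 else ("Hagedorn" if a == 1 else "divergent")

for d in (4, 5, 11):
    cft, bh, bh_lc = F(d - 1, d), F(d - 2, d - 3), F(d - 2, 2 * (d - 3))
    print(f"d={d:2d}  CFT: {cft} ({verdict(cft)})  BH: {bh} ({verdict(bh)})"
          f"  BH, light cone: {bh_lc} ({verdict(bh_lc)})")
# exponent < 1: the Euclidean continuation of (2) converges; exponent = 1:
# Hagedorn behavior; exponent > 1: no conventional Green's functions.
```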
In the next section, we will begin an exploration of M Theory in the light cone frame, starting with the simplest case of eleven noncompact directions. ## 2 Matrix Theory in Eleven Dimensions ### 2.1 Quantum Field Theory in Light Cone Frame and Discrete Light Cone Quantization To set the stage for our discussion of Matrix Theory we begin with a brief introduction to light cone field theory. As hinted in the previous section, the basic idea is to choose a light cone frame pointing in a specific spatial direction. In this basis, the momentum takes the form $`(P^{-},P^+,\mathbf{P})`$. $`P^{-}`$ is taken to be the time translation generator and takes the form $`P^{-}=\frac{\mathbf{P}^2+M^2}{P^+}`$, where $`M^2`$ is the mass squared operator. The subgroup of the Lorentz group which leaves the light cone frame invariant is obviously isomorphic to the Galilean group in $`d-2`$ dimensions, with $`P^{-}`$ transforming like the Galilean energy, $`\mathbf{P}`$ like the Galilean momentum, $`P^+`$ like the Galilean mass, and $`M^2`$ like a Galilean invariant potential. Compared to a true nonrelativistic field theory the new feature here is the continuous Galilean mass spectrum and the associated possibility of particles breaking up into others with smaller Galilean mass. There are also two other sets of Lorentz transformations. The first is the longitudinal boost generator, which rescales $`P^\pm `$ in opposite directions. The other is the set of null plane rotating transformations which rotate the direction of the null vector into the transverse directions. These are typically the most difficult symmetries to realize in building an actual Lagrangian. In supersymmetric theories, we must also include the supertranslations. Since they are Lorentz spinors, they break up into left moving and right moving pieces under longitudinal boosts. These satisfy the following commutation relations $$[q_a,q_b]_+=\delta _{ab}P^+$$ (8) $$[Q_a,Q_b]_+=\delta _{ab}P^{-}$$ (9) $$[Q_a,q_b]_+=\gamma _{ab}^iP_i$$ (10) It will turn out in Matrix Theory that it is relatively easy to implement these symmetries, but that they give strong constraints on the dynamics of the system. For the purposes of this brief introduction to light cone field theory, we will restrict our attention to a simple scalar Lagrangian of the form $$\mathcal{L}=\partial _+\varphi \partial _{-}\varphi -(\nabla \varphi )^2-V(\varphi ).$$ (11) Standard (Dirac) quantization of this Lagrangian gives us the commutation relations $$[\varphi (z,\mathbf{x},t),\partial _z\varphi (z^{\prime },\mathbf{x}^{\prime },t)]=\frac{i}{2}\delta (z-z^{\prime })\delta (\mathbf{x}-\mathbf{x}^{\prime }),$$ (12) which are solved by $$\varphi =\int _0^{\mathrm{\infty }}\frac{dP^+}{\sqrt{P^+}}\left[a(\mathbf{x},P^+)e^{iP^+z}+a^{\dagger }(\mathbf{x},P^+)e^{-iP^+z}\right]$$ (13) where $`a`$ and $`a^{\dagger }`$ have the commutation relations of ordinary second quantized nonrelativistic fields (the normalization of the measure is fixed, up to numerical factors, by (12)). The $`z`$ independent part of $`\varphi `$ has no canonical momentum and is a constraint variable. Often one solves for it at the classical level, but this procedure is unsatisfactory. A better strategy (in principle) is to derive the light cone formalism from a covariant path integral. Then one sees explicitly that the zero longitudinal momentum degrees of freedom must be integrated out and that there are contributions to the effective interaction for the nonzero modes at all orders in the loop expansion (and furthermore that the higher order terms are larger than those obtained in the tree approximation; the semiclassical expansion is not applicable).
In the formalism developed so far, the zero mode problem is mixed up with an equally vexing problem from modes with nonzero, but arbitrarily small, longitudinal momentum. The method of Discrete Light Cone Quantization (DLCQ) is an attempt to repair this difficulty by compactifying the lightlike longitudinal direction (studying the field theory on a space where $`x^{-}`$ is identified with $`x^{-}+R`$), thus rendering the longitudinal momentum discrete. This has another property which at first sight renders DLCQ extremely attractive. As one can see from the expansion (13), the Fock space of light cone field theory contains only particles with positive longitudinal momentum. Operators with negative longitudinal momentum are annihilation operators. If longitudinal momentum is conserved, positive and discrete, then states with $`P^+=N/R`$ can have at most $`N`$ particles in them (a counting sketch below makes this concrete). Thus in DLCQ in the sector with $`N`$ units of momentum, field theory reduces to nonrelativistic quantum mechanics with a fixed number of particles. As there is no such thing as a free lunch, there must be a catch somewhere. In fact, in field theory there are two. First of all, in order to have a hope of recapturing Lorentz invariant physics one must take the large $`N`$ limit. Field theory in a space with a periodic lightlike direction is weird, very close to a space with periodic time, which has apparent grandfather paradoxes (though resolved in the way first proposed by R.A. Heinlein). If $`N`$ is large, one can hope to make wave packets which are localized along the lightlike direction. Furthermore, since systems with large longitudinal momenta would be expected to Lorentz contract, their physical size in the longitudinal direction might also be small. The physics of such large $`N`$ systems could very well be oblivious to the lightlike identification and reproduce that of the uncompactified, Lorentz invariant, system. The use of words like might, could and hope in the last three sentences signals that there is no rigorous argument guaranteeing that this is the case. The second catch is integrating out the zero modes. One can get some insight into how difficult this is in field theory by viewing lightlike compactification as an infinite boost limit of spacelike compactification on an infinitesimally small circle. If we compactify an ordinary field theory on a circle of very small radius $`R_S`$ and concentrate only on the Lagrangian of the zero modes, then $`R_S`$ appears as a multiplicative factor. In other words, the theory of the zero modes is at infinitely strong coupling. Thus, even if the original field theory is weakly coupled, the problem of calculating the effective Lagrangian for the nonzero modes in DLCQ appears intractable in general (we will later encounter a field theory where DLCQ leads to a weakly coupled system). Given all of these problems, why are we interested in light cone quantization in M Theory? Apart from the general motivation given in the introduction, there are many indications that M Theory is much better behaved than field theory in the light cone frame. The many successes of light cone string theory attest to this. In particular, in perturbative string theory in light cone gauge, longitudinal momentum is the spatial coordinate on the string world sheet. DLCQ is a discretization of the string world sheet.
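Here is the counting sketch promised above. For a single bosonic species, ignoring transverse momenta and all internal labels (simplifications made purely for illustration), the states of the sector with $`P^+=N/R`$ are just the partitions of $`N`$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def n_states(n, kmax):
    """Multiparticle states of total longitudinal momentum n/R, each particle
    carrying a positive integer multiple of 1/R no larger than kmax/R."""
    if n == 0:
        return 1
    return sum(n_states(n - k, k) for k in range(1, min(n, kmax) + 1))

for N in (1, 2, 3, 10):
    print(f"P+ = {N}/R: {n_states(N, N)} states, never more than {N} particles")
# 1, 2, 3, 42 states: every DLCQ sector is a finite quantum mechanics problem,
# and only the N -> infinity limit can hope to restore Lorentz invariance.
```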
Since the uncompactified light cone string theory is a conformal field theory, the process of taking the large $`N`$ limit is controlled by the world sheet renormalization group and the problems with zero longitudinal momentum modes are encoded in local contact terms. Furthermore, although the actual computation of these counterterms to all orders in perturbation theory would be tedious, the form of light cone string perturbation theory leads immediately to the guess that the correct answer is given by conformal field theory on higher genus Riemann surfaces, a prescription which automatically fixes the contact terms by analytic continuation, at least in many cases. The other reason to be somewhat more hopeful about our prospects for success is SUSY. SUSY nonrenormalization theorems give us some control over the possible effects of integrating out the zero modes. In the cases we will study, this is probably enough to fix the effective Lagrangian uniquely. ### 2.2 The Holographic Principle and the Matrix Theory Lagrangian For a number of years, Charles Thorn championed an approach to string theory in light cone frame based on the notion of string bits, which were taken to be pieces of string carrying the lowest possible value of longitudinal momentum. Full strings, with higher values of longitudinal momenta, were supposed to be bound states of these more fundamental constituents. Thorn explicitly noted that in such a formalism, one of the dimensions of spacetime appeared dynamically. The fundamental constituents propagated on a surface of one lower dimension. In an independent development some years later, G. ’t Hooft proposed that the apparent paradoxes of black hole physics in local field theory might be resolved if the fundamental quantum theory of gravity had degrees of freedom which lived on hypersurfaces of dimension one lower than that of the full spacetime, with a density equal to the Planck density. The motivation for the latter restriction was the Bekenstein-Hawking formula for the entropy of a black hole. He characterized a theory of this type as holographic. Susskind then realized that light cone gauge string theory embodied at least half of the holographic principle of ’t Hooft, essentially because of the properties described in the paragraph above. The Bekenstein bound is not satisfied in perturbative string theory. If we think of string bits as the fundamental degrees of freedom, then a string made up of $`N`$ bits has a transverse extent of order $`\mathrm{ln}N`$. On the other hand, the Bekenstein bound would suggest that the transverse area had to grow like $`N`$. It is not terribly surprising that this part of the holographic principle can only be realized in a nonperturbative manner in string theory. The Bekenstein bound is formulated in Planck units and formally goes to infinity when the string coupling is taken to zero with the string tension fixed. Morally speaking, it is similar to restrictions on operators in large $`N`$ field theory which stem from the fact that the traces, $`\text{tr }M^k`$, of an $`N\times N`$ matrix are not all independent. It is well known that such restrictions are nonperturbative in the $`1/N`$ expansion. In nonperturbative formulations of M Theory, such as Matrix Theory and the AdS/CFT correspondence, the second half of the holographic principle is derived by explicit dynamical calculations.
In the latter case it is rather easy to derive and one finds that the bound is saturated, while in Matrix Theory the argument is based on crude approximations and one finds the restriction on the number of degrees of freedom only as a bound. Let us now proceed to the construction of the Matrix Theory Lagrangian in eleven flat spacetime dimensions. The original construction used the holographic principle as its starting point and used the language of the Infinite Momentum Frame rather than light cone quantization. Susskind then suggested that the finite $`N`$ Matrix Theory Lagrangian was the DLCQ of M Theory. Here we will follow a much cleaner argument due to Seiberg which begins from the idea that DLCQ of a Lorentz invariant theory can be obtained by a boost applied to a system compactified on a spacelike circle, if the radius, $`R_S`$, of the spacelike circle is taken to zero, with the rapidity $`\omega `$ of the boost scaling like $`\mathrm{ln}(1/R_S)`$. If we wish to be in the sector with $`N`$ units of longitudinal momentum in DLCQ, then we should work in the sector with $`N`$ units of spacelike momentum. This argument is a derivation of Susskind’s claim. The crucial feature which distinguishes M Theory from most field theories is that the limiting theory on a small spacelike circle is free. Indeed, it is free Type IIA string theory (I am assuming that all the students at this school are familiar with this paper or will shortly become so). The sector with $`N`$ units of momentum around the circle is the sector of IIA string theory with $`N`$ units of D0 brane charge. In free string theory we can characterize this sector as containing $`N`$ D0 branes and the strings connecting them, as well as any number of closed strings and D0 brane anti D0 brane pairs. However, we are interested only in degrees of freedom with finite light cone energy. The light cone energy is of order $`e^\omega (E-N/R_S)\sim E/R_S-N/R_S^2`$. The lowest energy in the sector with $`N`$ D0 branes is $`N/R_S`$. We are clearly interested only in states whose splitting from this ground state is inversely proportional to the D0 brane mass. For a single $`D0`$ brane, examples of such states are the states of the D0 brane moving with (transverse from the point of view of the 11 dimensional light cone frame) momenta fixed as $`R_S\rightarrow 0`$. Note that the eleven dimensional Planck scale remains fixed in this limit, so this is the same as requiring the transverse momenta to be a finite number of Planck units in the weak string coupling limit. For multiple D0 branes separated by distances of order the Planck scale, we must also include degrees of freedom which create and annihilate minimal length open strings between the branes. The Lagrangian for this system was written down in this context. It is the dimensional reduction of ten dimensional Super Yang Mills theory on a nine torus, and, as such, had been written down considerably earlier. We will write it both in string and eleven dimensional Planck units $$L=\frac{l_S^3}{g_S}\mathrm{Tr}\left(\dot{\varphi }^2+[\varphi ^i,\varphi ^j]^2+i\mathrm{\Theta }\dot{\mathrm{\Theta }}-\mathrm{\Theta }[\varphi ^i,\gamma ^i\mathrm{\Theta }]\right)$$ (14) $$L=\mathrm{Tr}\left(\frac{\dot{\mathbf{X}}^2}{R}+R\frac{[X^i,X^j]^2}{L_p^5}+i\theta \dot{\theta }-R\frac{\theta [X^i,\gamma ^i\theta ]}{L_p^3}\right).$$ (15) Here $`\varphi ^i`$ and $`X^i`$ are nine Hermitian $`N\times N`$ matrices, the former with dimensions of mass and the latter with dimensions of length.
Similarly $`\mathrm{\Theta }`$ is a sixteen component $`SO(9)`$ spinor, which is an Hermitian $`N\times N`$ matrix, and has dimensions of $`[m]^{3/2}`$, while $`\theta `$ has the same transformation properties, but is dimensionless. Witten’s motivation for this Lagrangian was that it summed up the leading infrared singularities of string perturbation theory, that are caused by zero energy open strings when D0 branes are separated by distances less than the string length. A careful argument was subsequently given, to all orders in string perturbation theory, that this Lagrangian in fact captured all of the dynamics at energy scales equal to the kinetic energy of a D0 brane with Planck momenta. Seiberg’s argument is often criticized as being “too slick”, and field theory examples are cited to illustrate the dangers of naively ignoring the integration out of the zero modes in DLCQ. In fact, the observation that M Theory on a small circle is weakly coupled Type IIA string theory, together with the careful analysis just mentioned (which shows that the kind of perturbative divergences of the small $`R_S`$ limit found in field theory are absent to all orders in perturbation theory), suggests that this criticism is completely irrelevant in the present context. About the only loophole one could imagine in the argument is the possibility that weakly coupled IIA string theory has nonperturbative corrections to (14) which somehow survive the $`R_S\rightarrow 0`$ limit. Even this loophole can probably be closed by proving the following conjecture: the Lagrangian (14) is the only Lagrangian for this set of degrees of freedom consistent with the symmetries we will list below. The phrase “for this set of degrees of freedom” means in particular that the Lagrangian may not contain time derivatives higher than the first, though it may contain higher powers of the first time derivatives. The conjecture has been partially proven. In trying to give a more complete proof one should use certain facts which were not employed in that work. In particular, the Lagrangian we have written has a thirty two generator odd subalgebra of its symmetry algebra and has translation invariance in the transverse directions, as well as Galilean boost invariance. Furthermore, its gauge group is $`U(N)`$ and not $`SU(N)\times U(1)`$, so that arbitrary separations of the trace parts of matrices from their traceless parts are not allowed. These facts seem to give a fairly straightforward argument for the conjecture if one restricts attention to Lagrangians which can be written as a single trace. I do not pretend to have a complete proof of this conjecture (although I am convinced it is correct) and leave it to an enterprising student. To my mind, the strongest arguments for Matrix Theory come from its successes in reproducing known facts and conjectures about M Theory as dynamical results of a complete Lagrangian system. There are difficult questions about whether the large $`N`$ limit really reproduces the Lorentz invariant dynamics which interests us. But there seems to be little doubt that in a variety of backgrounds Matrix Theory is a correct DLCQ of M Theory. To proceed with the exposition of the results of Matrix Theory we begin with a list of its symmetries: The most important of these are SUSYs. The full supertranslation algebra is preserved in DLCQ. Only the spectrum of the translation generators is different from that expected in the uncompactified theory.
As usual in light cone frame, spinors can be decomposed as right moving and left moving under the $`SO(1,1)`$ group of boosts in the longitudinal direction (which is not a symmetry of DLCQ). Thus, there are two sets of spinor SUSY generators, each transforming as the $`\mathbf{16}`$ of the transverse $`SO(9)`$ rotation group (which is preserved by DLCQ). The first of these is simply realized in terms of the $`16`$ canonical matrix variables of Matrix Theory as $$q_a=\sqrt{\frac{1}{R}}\text{tr }\theta _a$$ (16) The anticommutator of these is $`\delta _{ab}\frac{N}{R}`$, which identifies $`N`$ as the integer valued, positive longitudinal momentum $`P^+`$ of DLCQ. The anticommutator of the left and right moving SUSY generators is $$[q_a,Q_b]_+=(\gamma \cdot \mathbf{P})_{ab}$$ (17) This is realized by $$Q_a=\sqrt{\frac{R}{N}}\text{tr }\left((\gamma \cdot \mathbf{P})_{ab}\theta _b+i\gamma _{ab}^{ij}[X^i,X^j]\theta _b\right)$$ (18) The second term does not contribute to (17) but is probably required by the final relation of the supertranslation algebra $$[Q_a,Q_b]_+=\delta _{ab}P^{-}.$$ (19) In fact, the latter is realized only on $`U(N)`$ invariant states of the model, which identifies $`U(N)`$ as a gauge group. The word probably in the penultimate sentence reflects the incompleteness of the uniqueness proof I referred to above. In addition to these symmetries, the model is invariant under $`SO(9)`$ rotations and transverse Galilean boosts. The missing parts of the eleven dimensional super-Poincaré group are the longitudinal boosts and the null plane rotating parts of the spatial rotation group. These may be restored in the large $`N`$ limit. Note that the Galilean transformations act only on the $`U(1)`$ center of mass variables and restrict their Hamiltonian to be quadratic in canonical momenta. Finally, I note a discrete symmetry under $`\theta \rightarrow \theta ^T`$, $`X^i\rightarrow (X^i)^T`$, which commutes with half of the supertranslations and with $`P^\pm `$. This symmetry is instrumental in the matrix theory description of Hořava-Witten domain walls. ### 2.3 Gravitons and Their Scattering The classical Lagrangian of Matrix Theory has a moduli space consisting of commuting matrices. The high degree of supersymmetry of the system guarantees that this moduli space is preserved in the quantum theory. This means that if we integrate out all of the non moduli space variables, then the effective Lagrangian on the moduli space has no potential (a numerical sketch of these flat directions appears below). Furthermore, the terms quadratic in time derivatives are not renormalized, and the terms quartic in time derivatives appear only at one loop. In addition, for $`N>2`$ there are other terms in the effective Lagrangian which receive only a unique loop correction and thus are exactly calculable. The justification for the description by an effective Lagrangian is the Born-Oppenheimer approximation. When we go off in some moduli space direction $`\mathbf{X}_0=\sum _k\mathbf{r}_kI_{N_k\times N_k}`$, then variables which do not commute (as matrices) with $`\mathbf{X}_0`$ have frequencies of order $`|\mathbf{r}_k-\mathbf{r}_l|`$. Thus, if these distances are large, these variables can be safely integrated out in perturbation theory. For the $`SU(N_k)`$ variables, which commute with $`\mathbf{X}_0`$, the Born-Oppenheimer argument depends on a fundamental conjecture about this system, due to Witten. That is, that the $`SU(N)`$ version of this supersymmetric quantum mechanics has exactly one (up to an obvious spinor degeneracy to be discussed below) normalizable SUSY ground state.
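Here is the promised sketch of the flat directions. It merely exhibits the classical statement that the bosonic potential $`-\mathrm{Tr}[X^i,X^j]^2`$ is nonnegative and vanishes precisely on commuting configurations; the matrix sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(X):
    """Bosonic Matrix Theory potential -sum_{i<j} Tr[X_i, X_j]^2: nonnegative
    for Hermitian matrices, zero iff all the X_i commute."""
    V = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            C = X[i] @ X[j] - X[j] @ X[i]
            V -= np.trace(C @ C).real
    return V

def hermitian(N):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / 2

N, d = 4, 9
print("generic:   V =", potential([hermitian(N) for _ in range(d)]))                 # > 0
print("commuting: V =", potential([np.diag(rng.normal(size=N)) for _ in range(d)]))  # 0
```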
Witten’s conjecture lies at the heart of the M Theory-IIA duality, and the whole web of string dualities would collapse if it proved false. More impressively, the conjecture has been more or less rigorously proven for $`N=2`$, and arguments exist for higher values of $`N`$. Finally, arguments can be given that the typical scale of energy of excitation of these bound states is of order $`1/N^p`$ with $`p<1`$. Since we will see that energies on the moduli space are of order $`1/N`$, this justifies the use of the Born-Oppenheimer approximation for large $`N`$. If we accept the bound state conjecture it follows that the large $`N`$ limit of the model contains in its spectrum the Fock space of free eleven dimensional supergravitons. In fact, the theorems we have cited show that the Lagrangian along the moduli space direction $`\mathbf{X}_0`$ with large separations is $$\underset{k=1}{\overset{n}{\sum }}\left(\frac{N_k}{2R}\dot{\mathbf{r}}_k^2+i\theta _k\dot{\theta }_k\right)$$ (20) which is that of a collection of massless eleven dimensional superparticles in light cone frame. Each of the $`\theta _k`$ variables is a $`16`$ component $`SO(9)`$ spinor. The Hamiltonian of this system is $`\theta `$ independent, so the fermionic variables serve merely to parametrize the degeneracy of particle states. They are quantized as $`16`$ Clifford variables so their representation space is $`256`$ dimensional. It decomposes under $`SO(9)`$ as $`\mathbf{44}\oplus \mathbf{84}\oplus \mathbf{128}`$, which are the states of a symmetric traceless tensor, a totally antisymmetric three tensor, and a vector spinor satisfying $`\gamma _{ab}^i\psi _b^i=0`$. This is precisely the content of the 11D SUGRA multiplet (the arithmetic is checked in the sketch below). The required Bose or Fermi symmetrization of multiparticle states follows from the residual $`S_n`$ gauge invariance on the moduli space (commuting matrices are diagonal matrices modulo permutations) and the fermionic nature of the spinor coordinates. I want to note in particular, that SUSY was crucial to the cluster property of these multiparticle states. In the nonsupersymmetric version of the matrix model, an $`|\mathbf{r}_k-\mathbf{r}_l|`$ potential is generated on the moduli space and the whole system collapses into a single clump. I think that this may be one of the most interesting results of Matrix Theory. In perturbative string theory, explicit SUSY breaking is usually associated with the nonexistence of a stable, interacting vacuum state, and often with tachyonic excitations which violate the cluster property. Matrix Theory suggests even more strongly that SUSY may be crucial to the existence of a theory of quantum gravity in which propagation in large classical spacetimes is allowed. In fact, it appears that only asymptotic SUSY is strictly necessary for the cluster property. Indeed the cancellation of the large distance part of the potential has to do with the SUSY degeneracy between states at extremely high energy. However, simple attempts to break SUSY even softly appear to lead to disaster. Before beginning our discussion of graviton scattering, I want to clarify what we can expect to extract from perturbative or finite $`N`$ calculations in the matrix quantum mechanics. The basic idea of the calculations that have been done is to study zero longitudinal momentum transfer scattering by concentrating on the region of configuration space where some number of blocks are very far away from each other. The intra-block wave functions are taken to be the normalizable ground states in each block.
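The arithmetic behind this counting is worth displaying; the sketch below uses only the standard $`SO(9)`$ dimension formulas:

```python
from math import comb

clifford_dim = 2 ** (16 // 2)      # 16 real Clifford generators act on 2^8 states

graviton   = 9 * 10 // 2 - 1       # symmetric traceless SO(9) tensor: 44
three_form = comb(9, 3)            # totally antisymmetric three-tensor: 84
gravitino  = 9 * 16 - 16           # vector-spinor with gamma-trace removed: 128

assert clifford_dim == graviton + three_form + gravitino == 256
print(f"{clifford_dim} = {graviton} + {three_form} + {gravitino}")
```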
What remains are the coefficients of the unit matrix in each block and the off block diagonal variables. In the indicated region of configuration space the frequencies of the latter are very high, and one attempts to integrate them out perturbatively. There are two rather obvious reasons why these calculations should fail to give the answers we are interested in. The first is that the nominal perturbation parameter for this expansion is $`\frac{NL_P^3}{r^3}`$ where $`r`$ is a transverse distance between some pair of blocks. In order to make comparisons with SUGRA we want to take $`r/L_P\gg 1`$, but independent of $`N`$ as $`N`$ tends to infinity. Indeed the scattering amplitudes in this regime of impact parameters should become independent of $`N`$ (or rather scale with very particular powers since they refer to exactly zero longitudinal momentum transfer), as a consequence of Lorentz invariance. This is evidently not true for individual terms in the perturbation expansion. It should be emphasized that this means we are not interested in the ’t Hooft limit of this theory. In addition to this, the perturbation expansion is not even a correct asymptotic expansion of the amplitudes in the large $`r`$ region. To leading order in inverse distances, the interactions between the high frequency off diagonal variables and the $`SU(N_i)`$ variables in individual blocks do not enter the expressions for amplitudes. At higher orders this is no longer the case. The complete calculation involves expectation values of operators in the individual block wave functions. The fact that the off diagonal variables have very high frequencies allows us to make operator product expansions and limits the number of unknown expectation values that come in at a given power of $`r`$. Terms like this could give fractional powers of the naive expansion parameter. However, since the short time limit of quantum mechanics is free, all the operators have integer dimensions and we are led to expect only integer powers of $`L_P`$ in the expansion. The $`N`$ dependence of these terms is completely unknown. The second of these problems is inescapable, but the first could be avoided if it were possible to make direct comparisons between finite $`N`$ Matrix Theory and DLCQ SUGRA. It is important to realize that there is absolutely no reason to expect this to be so. The intuitive reason is that the gravitons of Matrix Theory are complicated bound states rather than structureless particles. They only behave like structureless particles when their relative velocities are very small, because then the scattering state is almost a BPS state. In the large $`N`$ limit the velocities become arbitrarily small and one can argue that to all orders in energies over $`M_P`$ they should behave like particles of an effective field theory. However, for finite $`N`$, there is no reason for this to be so, even at low energy and momentum transfer. More mathematically, we can note that SUGRA is a limit of M Theory in which all momenta are small compared to the Planck mass. On the other hand, we have seen that the DLCQ system can be viewed as a system compactified on a tiny circle, with a finite number of units of momentum. Thus every state in the system contains momenta large compared to the Planck scale. There is no reason to expect the limit which gives DLCQ to commute with the SUGRA limit. The reason that some amplitudes are calculable in perturbation theory is that the system has a host of nonrenormalization theorems.
That is, the large SUSY of the Matrix Theory Lagrangian so constrains certain terms in the effective Lagrangian for the relative positions that they are given exactly by their value at some order of the loop expansion. I will not give a description of the state of these calculations, but refer the reader to the literature. The fact that only quantities determined by symmetries are calculable gives one pause, I must admit (unless one is able to prove the conjecture above that the symmetries completely determine the Lagrangian), but one must recognize that this is a rather generic state of affairs in recent results about M Theory. However, what I consider important about Matrix Theory is that it reduces all questions about M Theory (in the backgrounds where it applies) to concrete, albeit difficult, problems in mathematical physics. We are no longer reduced to guesswork and speculation. Of course, this is no better than the statement that lattice gauge theory reduces hadron physics to a computational problem. Obviously Matrix Theory will only be truly useful if one can find analytic or numerical algorithms for efficiently extracting the S-matrix from the Lagrangian. On the other hand, it may be possible to attack certain conceptual problems before a practical calculation scheme is found. ### 2.4 General Properties of the S-Matrix and the Graviton Wave Function We can easily write down an LSZ-like path integral formula for the S-matrix of gravitons in Matrix Theory. Simply perform the path integral with the following boundary conditions: as $`t\rightarrow -\mathrm{\infty }`$, the matrices approach the moduli space $$\mathbf{X}\rightarrow \mathbf{X}_0^I=\sum _k\mathbf{r}_k(t)I_{N_k\times N_k}$$ (21) with similar formulae for the fermionic variables. $`\mathbf{r}_k(t)`$ are classical solutions of the moduli space equations of motion. They are labelled by the transverse momenta of the incoming states, while the longitudinal momenta are the $`N_k`$. Similarly, for $`t\rightarrow +\mathrm{\infty }`$ we have $$\mathbf{X}\rightarrow \mathbf{X}_0^F=U^{\dagger }\sum _k\mathbf{r}_k(t)I_{N_k\times N_k}U$$ (22) where we must integrate over the $`U(N)`$ matrix, $`U`$, in order to impose gauge invariance. Of course, the number of outgoing particles, as well as their transverse and longitudinal momenta, will in general be different from those in the initial state (an alternate approach to the Matrix Theory S-matrix can be found in the literature). This formula does not quite give the S-matrix since the $`SU(N_k)`$ variables are sent to zero asymptotically by the boundary conditions. In principle we should allow them to be free, and convolve the path integral with the bound state wave function for each external state. Thus, our path integral computes the S-matrix multiplied by a (momentum dependent) factor for each external leg equal to the bound state wave function at the origin. It is likely that in the large $`N`$ limit these renormalization factors will vanish, so we would have to be careful to extract them before computing the limiting S-matrix. Several of the S-matrix elements for multigraviton scattering at zero longitudinal and small transverse momentum transfers can be computed with the help of nonrenormalization theorems. All of these computations agree precisely with the formulae from 11D SUGRA. As noted above, we cannot expect to make more detailed comparisons until we understand the large $`N`$ limit much better than we do at present. We can however try to understand what could possibly go wrong with the limiting S-matrix.
Assuming the bound state conjecture, we know that the large $`N`$ theory has the correct relativistic Fock space spectrum. Furthermore, the S-matrix exists and is unitary for every finite $`N`$. This implies that individual S-matrix elements cannot blow up in the limit. Furthermore, we know that some T-matrix elements are nonzero so the S-matrix cannot approach unity (I do not have an argument that amplitudes with nonzero longitudinal momentum transfer cannot all vanish in the limit). The absence of pathological behavior in which individual S-matrix elements oscillate infinitely often in the large $`N`$ limit is more or less equivalent to longitudinal boost invariance, which states that as the $`N_k`$ get large, S-matrix elements should only depend on their ratios. Thus, proving the existence of generic S-matrix elements is probably equivalent to proving longitudinal boost invariance. Assuming the existence of limiting S-matrix elements, there is another disaster that could occur in the limit. This is an infrared catastrophe. That is, the cross section for reactions initiated by only a few particles might be dominated by production of a number of particles scaling like a positive power of $`N`$. Then, the S-matrix would not approach a well defined operator in Fock space. In 11D SUGRA the infrared catastrophe is prevented by Lorentz invariance. Again we see a possible connection between the mere existence of the S-matrix, and its Lorentz invariance. A possible avenue for investigating the infrared catastrophe is to exploit the fact that the production of a large number of particles at fixed energy and momentum means that each of the produced particles has smaller and smaller energy and momentum. It is barely possible that the nonrenormalization theorems will give us sufficient information about scattering in this regime to put a bound on the multiparticle production amplitudes. Personally, I believe that a demonstration of the existence and Lorentz invariance of the limiting S-matrix must await the development of more sophisticated tools for studying these very special large $`N`$ systems. ### 2.5 Membranes One of the attractive features of Matrix Theory is the beautiful way in which membranes are incorporated into its dynamics. This connection has its origin in groundbreaking work on membrane dynamics done in the late 1980s. In that work, the Matrix Theory Lagrangian was derived as a discretization of the light cone Lagrangian for supermembranes. The idea was to build a theory analogous to string theory, with membranes as the fundamental objects. The theory appeared to fail when it was shown that the Lagrangian had a continuous spectrum. Today we realize that this is actually a sign that the theory exceeds its design criteria: it actually describes multibody states of membranes and gravitons, and the continuum states are simply the expected scattering states of a multibody system. The membrane/matrix connection has been described so many times in the literature that I will only give a brief summary of it here. It is simplest to describe toroidal membranes, though in principle any Riemann surface can be treated. One of the amusing results of this construction is that, for finite $`N`$, the topology of the membrane has no intrinsic meaning. States describing any higher genus surface can be found in the toroidal construction. It is only in the large $`N`$ limit that one appears to get separate spaces of membranes with different topology.
The question of whether topology changing interactions (which certainly exist for finite $`N`$) survive the large $`N`$ limit has not been studied, but there is no reason to presume that they do not. The heart of the membrane construction is the famous Von Neumann-Weyl basis for $`N\times N`$ matrices in terms of unitary clock and shift operators satisfying $$U^N=V^N=U^{\dagger }U=V^{\dagger }V=1.$$ (23) $$UV=e^{\frac{2\pi i}{N}}VU$$ (24) Any matrix can be expanded in a series $$A=\sum _{m,n}a_{mn}U^mV^n$$ (25) If, as $`N\rightarrow \mathrm{\infty }`$ we restrict attention to matrices whose coefficients $`a_{mn}`$ approach the Fourier coefficients of a smooth function, $`\widehat{A}(p,q)`$, on a two torus, then it is easy to show that $$[A,B]\rightarrow \frac{i}{N}\{\widehat{A},\widehat{B}\}_{P.B.}$$ (26) and $$\text{tr }A\rightarrow N\int dp\,dq\,\widehat{A}(p,q)$$ (27) (a numerical check of (26) is sketched below). Using these equivalences, one can show that, on this subclass of large $`N`$ matrices, the Matrix Theory Lagrangian approaches that of the supermembrane. One can extend the construction to more general Riemann surfaces (the original matrix papers worked on the sphere) by noting that the equations (23), (24) arise in the theory of the lowest Landau level of electrons on a torus propagating in a uniform background magnetic field of strength proportional to $`N`$. One can then study an analogous problem on a general Riemann surface. Note that since all of these Landau systems have finite dimensional Hilbert spaces, they can be mapped into each other. Thus, for finite $`N`$, the configuration spaces of membranes of general topology are included inside the toroidal case. It is interesting to note that, at the level of the classical dynamics of the matrix model, the condition (26) is sufficient to guarantee that the membrane states constructed as classical solutions of the equations of motion obeying this restriction, will have energies of order $`1/N`$ and are thus candidates for states which survive in the (hoped for) Lorentz invariant large $`N`$ limit. This suggests that a more general condition, viz. that the matrices be replaced by operators whose commutator is trace class, may be a useful formulation of Matrix Theory directly in the infinite $`N`$ limit. However, it is not clear that this sort of classical consideration is useful when $`N`$ is large. Indeed, it can be argued that in a purely bosonic matrix model, the classical energy of membrane states is renormalized by an amount which grows with $`N`$. By contrast, in the supersymmetric model, the infinite flat membrane is a BPS state and the energies of large smooth membranes are all of order $`1/N`$ in the quantum theory. The direct formulation of the infinite $`N`$ theory in eleven dimensions is an outstanding problem. It is clear that it is not simply the light cone supermembrane Lagrangian. But perhaps supermembranes do give us a clue to the ultimate formulation. ### 2.6 Fivebranes We will attack the problem of finding the 5-brane of M Theory in Matrix Theory by applying Seiberg’s algorithm. In fact for longitudinal 5-branes, this was done by Berkooz and Douglas long before Seiberg’s argument was conceived of. A longitudinal 5-brane is one which is wrapped around the longitudinal circle. In the IIA string language, it is an M5 brane wrapped around the small circle, and thus a D4 brane. Berkooz and Douglas proposed that the Matrix Theory model for such a 5-brane was the large $`N`$ limit of the system of $`N`$ D0 branes and a D4 brane.
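As promised, the limit (26) is easy to test numerically. The sketch below checks it on clock-shift monomials, for which both sides can be computed exactly; the particular mode numbers are arbitrary choices, and the overall sign and the factor of $`2\pi `$ depend on conventions for the torus coordinates:

```python
import numpy as np

def clock_shift(N):
    """U = diag(1, w, w^2, ...), V = cyclic shift; then U V = exp(2 pi i/N) V U."""
    w = np.exp(2j * np.pi / N)
    U = np.diag(w ** np.arange(N))
    V = np.roll(np.eye(N, dtype=complex), 1, axis=0)
    return U, V

def relative_error(N, m1=1, n1=2, m2=3, n2=1):
    U, V = clock_shift(N)
    P = np.linalg.matrix_power
    A, B = P(U, m1) @ P(V, n1), P(U, m2) @ P(V, n2)
    comm = A @ B - B @ A
    # Poisson bracket of the modes exp(i(m p + n q)) gives (m1 n2 - n1 m2) times AB
    target = (2j * np.pi / N) * (m1 * n2 - n1 * m2) * (A @ B)
    return np.linalg.norm(comm - target) / np.linalg.norm(comm)

for N in (8, 32, 128, 512):
    print(N, relative_error(N))   # the mismatch falls like 1/N, as (26) requires
```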
The Berkooz-Douglas model is a supersymmetric quantum mechanics obtained as the dimensional reduction of $`𝒩=2,d=4`$ SUSY Yang Mills, with an adjoint and a fundamental hypermultiplet. For $`k`$ such longitudinal five branes we simply introduce $`k`$ fundamental hypermultiplets. Seiberg’s argument shows that this is the appropriate DLCQ of M Theory with $`k`$ longitudinal 5-branes. In the large $`N`$ limit, one can argue that a different procedure, which dispenses with the fundamentals, may be a sufficiently good description of the system. We now turn to the more problematic question of fivebranes in the transverse dimensions. After all, the longitudinal branes have infinite energy (relative to the Lorentz invariant spectrum) in the large $`N`$ limit. Again, we use Seiberg’s argument and find ourselves faced with a system containing an NS 5-brane and $`N`$ D0 branes in IIA string theory with vanishing coupling. This is the system described by (a certain soliton sector of) the $`k=1`$ IIA little string theory. The system of $`k`$ NS 5 branes in any string theory with vanishing coupling is a six dimensional Lorentz invariant quantum system which “decouples from gravity”. That is to say, although it contains states with the quantum numbers of bulk gravitons (and other closed string modes) in a ten dimensional spacetime with a linear dilaton field, they are described holographically in terms of a quantum theory with $`5+1`$ dimensional Lorentz invariance. It is not a quantum field theory, because it has T-duality when compactified on circles and because it is an interacting theory with a Hagedorn spectrum. The absence of a simple description of fivebranes in the original eleven dimensional Matrix Theory Lagrangian is probably the first indication of a general principle. In quantum field theory, the fundamental degrees of freedom are local. When we study the theory on a compact space we can encounter new degrees of freedom, like Wilson lines, but they are all implicit in the local variables which describe the dynamics in infinite flat space. In a theory of fundamental extended objects, we may expect that this will cease to be true. If there are fundamental degrees of freedom associated with objects wrapped around nontrivial cycles of a compact manifold, and if, as may be expected, the energies of all states associated with these degrees of freedom scale to infinity with the volume of the manifold, then the theory describing infinite flat space may be missing degrees of freedom. Infinite fivebranes have infinite energy. We may imagine them to arise as limits of finite energy wrapped fivebranes on a compact space whose volume has been taken to infinity. Their description may involve degrees of freedom which decouple in the infinite volume limit. We shall see that this appears to be the case. Nonetheless, one feels a certain unease with the asymmetrical treatment of membranes and fivebranes (only partly relieved by noting that the only true duality between the two is realized in the standard formulation of Matrix Theory on a three torus, and that this duality is captured by the Matrix Theory formulation we will present below). Perhaps there is a completely different formulation of light cone M Theory in which one somehow discretizes the light cone dynamics of the M5 brane.
Indeed, since membrane charges are certainly incorporated in the world volume theory of the M5 brane (M5 branes are in a sense D branes of the M2 brane, and so carry charges which measure the number of M2 branes ending on them; these couple to the two form potential on the world volume), one might hope to obtain a more complete formalism in this way. 

## 3 M Theory on a Circle

Let us now imagine trying to compactify one of the transverse dimensions of M Theory on a circle of size $`aL_P`$. We are led to study D0 branes in IIA string theory with coupling $`g_S\sim (R_S/L_P)^{3/2}\to 0`$ on a circle of radius $`ag_S^{1/3}l_S`$. This situation is T dual to $`N`$ D strings in Type IIB string theory with coupling $`G_S\sim g_S^{2/3}`$. The D strings are wound on a circle of radius $`g_S^{-1/3}l_S/a`$. The states of this system whose energy gap above the ground state of the D0 brane system is of order $`R_S`$ are described by the $`1+1`$ dimensional world volume theory of nonrelativistic D strings. This is $`1+1`$ dimensional SYM theory with 16 SUSYs. After rescaling to light cone energy, the Hamiltonian of the system is $$RL_p\int _0^{L_p^2/R_9}𝑑s𝑑t\mathrm{Tr}\left(f^2+(\frac{D𝐗}{L_p^2})^2+\frac{[X^i,X^j]^2}{L_p^4}+\theta [\gamma D+𝚪𝐗,\theta ]L_p\right).$$ (28) In this formula, boldface characters are $`SO(8)`$ vectors, and $`𝐗`$ has dimensions of length. The electric field strength $`f`$ has dimensions of $`[m]^2`$ and $`\theta `$ has dimensions of mass (so that the kinematic SUSY generator $`q=\frac{1}{\sqrt{R}}\int 𝑑s\mathrm{Tr}\theta `$ has dimensions of $`[m]^{1/2}`$). $`R`$ is the radius of the lightlike circle. $`\gamma `$ are $`1+1`$ dimensional Dirac matrices. $`\theta `$ is a sixteen component spinor which transforms as $`(L,8_c)+(R,8_s)`$ under the Lorentz and $`SO(8)`$ symmetries. The model has $`(8,8)`$ SUSY as a two dimensional field theory. It is obvious that as $`R_9`$ is taken to infinity this system reduces to the 11 dimensional matrix theory we studied in the previous section. Indeed, the compactified theory has more degrees of freedom than the uncompactified one. In the Seiberg analog model these correspond to strings connecting the D0 branes which wind around the circle, and they obviously become infinitely massive in the limit $`R_9\to \infty `$. Amusingly, in terms of the $`1+1`$ dimensional field theory this decoupling is the standard decoupling of Kaluza-Klein states when the radius of the circle on which a field theory is compactified goes to zero. A catchy phrase for describing this phenomenon is that in Matrix Theory dimensional oxidation is T dual to dimensional reduction. More interesting is the opposite limit $`R_9/L_P\to 0`$. According to the duality relation between M Theory and IIA string theory, this limit is supposed to be the free IIA string theory. That argument is based on the BPS formula, which shows that the IIA string tension is the lightest scale in the theory in this limit, plus the relations between the low energy 11D and IIA SUGRA Lagrangians. It is important to realize that Matrix Theory provides us with a true derivation of this relation. In a sense, the relation between duality arguments and Matrix Theory is similar to that between symmetry arguments based on current algebra and the QCD Lagrangian. The derivation is easy. In the indicated limit the mass scale of the SYM theory goes to infinity in Planck units, and we should be left with an effective conformal field theory describing any massless degrees of freedom. 
The only obvious massless degrees of freedom are those on the moduli space, which is a $`1+1`$ dimensional orbifold CFT with target space (supersymmetrized) $`R^{8N}/S_N`$. This is a classical statement, but the nonrenormalization theorems for a field theory with sixteen SUSYs ($`(8,8)`$ SUSY in the language of $`1+1`$ dimensional field theory) assure us that the Lagrangian on the moduli space is not renormalized. Indeed, we will see in a moment that the leading perturbation of this system consistent with the symmetries is an irrelevant operator of dimension $`(3/2,3/2)`$. First we want to show that the spectrum of the orbifold quantum field theory at order $`P^{-}\sim 1/N`$ is precisely that of the Fock space of free Type IIA Green-Schwarz string field theory . To do this we note that the orbifold theory has topological sectors, not contained in the $`R^{8N}`$ CFT, in which the diagonal matrix fields are periodic only up to an orbifold gauge transformation in $`S_N`$. These are labelled by the conjugacy classes of the permutation group. A general permutation can be written as a product $`C_{N_1}\mathrm{\cdots }C_{N_p}`$ of cyclic permutations. Within each such sector we recognize that there is a residual $`Z_{N_1}\times \mathrm{\cdots }\times Z_{N_p}`$ gauge symmetry of cyclic permutations within each block of the matrix. The importance of these topological sectors is that, as $`N_k\to \infty `$, they contain states of energy of order $`1/N_k`$. Indeed, a diagonal matrix function on an interval of length $`2\pi `$, satisfying $`x_i(\sigma +2\pi )=x_{i+1}(\sigma )`$ (indices mod $`N_k`$), is equivalent to a single function $`X_i(s)`$ on an interval of length $`2\pi N_k`$. The Hamiltonian for the Fourier modes of $`X_i`$ is $`H=\frac{1}{N_k}\sum _nn\alpha _{-n}^i\alpha _n^i`$. For a general topological sector the Hamiltonian is a sum of $`p`$ such single string Hamiltonians. It is these long strings which are the strings of perturbative string theory (a toy numerical illustration of their $`1/N_k`$ spectrum appears below). There are two important constraints which follow from the remnants of the $`U(N)`$ gauge symmetry of the original matrix Lagrangian. First, on the subspace of states generated by finite Fourier modes of the $`X^i`$, the $`Z_{N_k}`$ residual gauge symmetries just become translations in the variable $`s`$. In the language of string theory these are the light cone Virasoro constraints, $`L_0-\overline{L}_0=0`$, on physical states. Secondly, for configurations in which several of the long strings are identical, there is a residual permutation gauge symmetry which exchanges them. This is the conventional statistics symmetry of quantum mechanics. It picks up the right minus signs because half integral spin in the model is carried by Grassmann variables. Finally, we want to note that the single string Lagrangians derived from this model are those of Type IIA Green-Schwarz superstrings. Indeed, the $`U(N)`$ SYM theory from which we began has an $`SO(8)`$ R symmetry group under which the left and right SUSY generators transform as the two different spinor representations. Thus, we may summarize the results we have derived by the statement that in the $`R_{10}\to 0`$, $`N\to \infty `$ limits, the Hilbert space of states of the Matrix Theory Hamiltonian with energies of order $`1/N`$ is precisely the Fock space of free light cone gauge Type IIA string field theory. This is a derivation of the famous duality conjecture relating M Theory to Type IIA string theory. 
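Before moving on, the $`1/N_k`$ scaling of the long string spectrum derived above can be illustrated with a toy discretization (a sketch of the kinematics only, not of the supersymmetric matrix model itself; the resolution of eight sites per short string is an arbitrary choice of mine):

```python
import numpy as np

def ring_frequencies(M):
    """Normal mode frequencies of M unit masses on a ring of unit springs:
    the lattice Laplacian has eigenfrequencies w_m = 2*|sin(pi*m/M)|."""
    K = 2 * np.eye(M) - np.roll(np.eye(M), 1, axis=0) - np.roll(np.eye(M), -1, axis=0)
    return np.sqrt(np.clip(np.linalg.eigvalsh(K), 0.0, None))

sites_per_string = 8
for N_k in [1, 2, 4, 8, 16]:
    # the Z_{N_k} twisted sector sews N_k short strings into one long ring
    gap = np.sort(ring_frequencies(sites_per_string * N_k))[1]
    print(N_k, N_k * gap)  # N_k * gap is roughly constant, i.e. gap ~ 1/N_k
```

In the continuum this is just the statement made above: a single field on an interval of length $`2\pi N_k`$ has oscillator energies $`n/N_k`$.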
Dijkgraaf, Verlinde and Verlinde went one step further, and showed that the first correction to the free string Hamiltonian at finite $`R_{10}`$ is an irrelevant operator which precisely reproduces the Mandelstam three string vertex. Their argument used effective field theory: this is the lowest dimension operator compatible with the symmetries of the underlying SYM theory. Thus, they were unable to compute the coefficient of the Mandelstam vertex. However, because $`R_{10}`$ determines the $`1+1`$ dimensional mass of the excitations of the SYM theory which decouple in the zero radius limit, the dimension $`(3/2,3/2)`$ of the irrelevant operator determines that the string coupling scales as $`g_S\sim (R_{10}/L_p)^{3/2}`$, which is the scaling anticipated from duality considerations. The importance of this result is twofold. It shows that, at least to leading order in the small radius expansion, the dynamics of Matrix Theory is, in the large $`N`$ limit, invariant under the ten dimensional super-Poincaré group. And it shows that the general structure of string perturbation theory follows from Matrix Theory. Indeed, up to contact terms on the long string world sheet, the Mandelstam vertex generates the Riemann surface expansion of string perturbation theory. As yet, no one has found an argument that Matrix Theory provides the correct contact terms to all orders in perturbation theory. Let us stress the important points that we have learned in this section. We have seen in a very explicit way how Matrix Theory interpolates between the quantum mechanics of the previous section, which describes 11D SUGRA, and free string theory. In particular, we have given a very explicit dynamical argument for why the IIA strings are free in the zero radius limit. In previous discussions of the duality between IIA strings and 11D SUGRA this was more or less a postulate, supported only by the behavior of the low energy effective Lagrangian. Another important lesson is that in the limit of small radius, the positions of objects on the compactified circle become wildly fluctuating quantum variables. Indeed, these positions are Wilson lines in the gauge theory, and we are going to the strong coupling limit. There is no longer any sensible geometrical meaning to the small circle, but the theory itself is perfectly smooth in the limit. 

## 4 M Theory on a Two Torus

To study the theory on $`T^2`$ we apply the same set of arguments. D0 branes in weakly coupled Type IIA theory on an 11D Planck scale torus are treated by double T duality and related to D2 branes in a dual (but still weakly coupled) Type IIA theory. The states of finite light cone energy are described by maximally supersymmetric $`2+1`$ dimensional SYM theory, with finite coupling, compactified on a torus which is dual to the M Theory torus. (In fact, here is an exercise: go through Seiberg's argument for a general torus and show that the states of finite light cone energy are those whose energy scale is given by the coupling of the SYM theory obtained by doing T duality on all the radii. For $`T^4`$ and up this SYM theory is nonrenormalizable, and we will see the implications of this in the next section.) The two Wilson lines of the SYM theory represent the coordinates of particles on the M Theory torus. Aspinwall and Schwarz argued, using string duality, that Type IIB theory in ten dimensional space is obtained as the zero area limit of M Theory on a two torus. 
The seeming contradiction that $`11-2=10`$ is resolved by noting that in the zero area limit a continuum of light wrapped M2 brane states appears and plays the role of momentum in a new tenth dimension. One of the puzzles of this approach is why the theory should be symmetric under rotations which rotate this new dimension into the other $`9`$. Fundamental (F) and D strings are identified as M2 branes wrapped around the short resp. long cycle of the torus. This identifies the Type IIB string coupling as the ratio of the short and long cycles (more generally, the imaginary part of the complex structure), explains the $`SL(2,Z)`$ duality of the theory, and explains why there is a T duality between weakly coupled IIA and IIB theories. If we try to take the zero area limit of the M Theory torus, we are led to study the SYM theory on a torus of infinite area. Again, because the SYM coupling is relevant, this is equivalent to an infinite coupling limit in which all but conformal degrees of freedom decouple. Again, the existence of a moduli space assures us that there is some sort of conformal limit rather than a completely trivial topological theory. Now, however, there is a difference. In $`2+1`$ dimensions there is a finite superconformal algebra which has 16 ordinary supercharges. It has an $`SO(8)`$ R subalgebra under which the supercharges transform as $`(8,2)`$ (with the 2 standing for their transformation under the $`2+1`$ dimensional Lorentz group). Furthermore, since the theory is conformal, it depends only on the complex structure of the limiting $`T^2`$, and there is an obvious $`SL(2,Z)`$ invariance which acts on the complex structure . The finiteness of the superconformal algebra allows for the possibility of an interacting theory, and indeed the same interacting superconformal theory was postulated to describe the interactions of $`N`$ M2 branes at separations much smaller than the 11D Planck scale. Here we want the theory to be interacting, because on a two torus with finite complex structure we are trying to describe interacting Type IIB string theory. We can understand the weak coupling limit, and simultaneously get a better understanding of the $`SO(8)`$ symmetry, by going to large complex structure. In the limit where the $`a`$ cycle of the M Theory torus is much smaller than the $`b`$ cycle, the corresponding SYM torus has cycles with the opposite ratio. Thus we can do a Kaluza-Klein reduction on the cycle of the dual torus corresponding to the $`b`$ cycle and obtain a $`1+1`$ dimensional field theory. This is best done by going to the moduli space, which is a $`U(1)^N`$ SYM theory before the Aspinwall-Schwarz limit. To take the limit we perform an electromagnetic duality transformation, replacing the Abelian gauge fields by compact scalars ($`F_{\mu \nu }=ϵ_{\mu \nu \lambda }\partial ^\lambda X^8`$), which decompactify in the limit. It is then obvious that there is an $`SO(8)`$ symmetry which rotates this boson into the seven original scalars. Indeed, the Lagrangian on the moduli space is $$\mathcal{L}=(\partial _\mu X^i)^2+\overline{\theta _a}\mathrm{\Gamma }^\mu \partial _\mu \theta _a$$ (29) Here $`i`$ and $`a`$ each run from $`1`$ to $`8`$, and the $`\theta `$’s are two component, $`2+1`$ dimensional spinors. Of course, $`2+1`$ Lorentz invariance is broken by the compactification on a torus, but under the $`SL(2,Z)`$ which transforms the radii of the torus the spinors transform as a doublet. 
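To make the duality transformation explicit (schematically, with normalizations suppressed): in $`2+1`$ dimensions the Maxwell action for each $`U(1)`$ maps to that of a free scalar,

$$\frac{1}{4g^2}\int d^3x\,F_{\mu \nu }F^{\mu \nu }=\frac{1}{2g^2}\int d^3x\,\partial _\lambda X^8\partial ^\lambda X^8,\qquad F_{\mu \nu }=ϵ_{\mu \nu \lambda }\partial ^\lambda X^8.$$

The Bianchi identity for $`F`$ becomes the free equation of motion for $`X^8`$, and flux quantization makes $`X^8`$ periodic with a radius that grows with the coupling, which is why it decompactifies in the Aspinwall-Schwarz limit, leaving the free moduli space Lagrangian (29).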
There is an obvious $`SO(8)`$ symmetry of this Lagrangian under which the $`X^i`$ and $`\theta _a`$ could each transform in any of the eight dimensional representations. Note that both components of $`\theta _a`$ must transform in the same way. Superconformal invariance in the Aspinwall-Schwarz limit assures us that this $`SO(8)`$ group survives in the interacting theory. Furthermore, it tells us that the scalars must transform in the vector representation of $`SO(8)`$ and the fermions in one of the two chiral spinor representations (which one is a matter of pure convention). When we further make the Kaluza-Klein reduction corresponding to large complex structure, the moduli space theory is identical to the light cone gauge IIB Green-Schwarz string. Actually, it is $`N`$ copies of this theory, related by a residual $`S_N`$ gauge symmetry (as in the previous section). We can rerun the analysis given there and show that the theory is the Fock space of free strings in this limit, and that the first correction to the limit is the correct Mandelstam interaction for IIB strings, with the right scaling of the couplings. The $`SO(8)`$ symmetry is seen to be the spacetime rotation symmetry of IIB string theory, and superconformal invariance in $`2+1`$ dimensions has given us an understanding of the reason for the emergence of this symmetry (without recourse to a weak coupling expansion) and of the chirality of the resulting spacetime physics. An alternative derivation of the $`SO(8)`$ symmetry, using the compactification of the theory on the three torus, can be found in . One thing which does not work is the continuous perturbative shift symmetry of the theory under translations of the Ramond-Ramond scalar. This can be attributed to the virtual presence in the theory of various longitudinally wrapped branes which are sensitive to the value of $`\theta `$. Presumably this will all go away in the large $`N`$ limit, when the energies of these states go off to infinity (relative to the Lorentz invariant states). This example shows us that, at least in perturbation theory, there can be many versions of DLCQ. A perturbative DLCQ of Type IIB string theory would have preserved the continuous symmetry. 

## 5 Three and Four Tori

The theory on the three torus is somewhat less interesting. One again obtains a maximally SUSY YM theory, which is scale invariant and has an $`SL(2,Z)`$ Olive-Montonen duality symmetry. This combines with the geometrical $`SL(3,Z)`$ to give the proper U duality group of M Theory. There are now no new limits. Olive-Montonen duality is identified with the M duality mapping M2 branes to M5 branes wrapped on a three torus, giving a new version of M Theory. The four torus is much more interesting. Going through the Seiberg scaling, one obtains D0 branes in weakly coupled IIA theory on a Planck scale four torus. Performing four T dualities to get to a large manifold, we come back to IIA theory, but this time with a large coupling $`G_S`$ (because the coupling rescales by four powers of $`g_S^{-1/3}`$). The zero branes have of course become D4 branes. Large coupling means that we are going back to M Theory: a new dimension is opening up, and the D4 branes become M5 branes. We are thus led to the theory of $`N`$ M5 branes, at distances well below the Planck length, in a new copy of M Theory. This is a $`5+1`$ dimensional superconformally invariant field theory, $`(2,0)_N`$, whose moduli space is that of $`N`$ self dual tensor multiplets, with Wilson surfaces lying in the self dual $`U(N)`$ weight lattice. 
This proposal was first made in . In terms of the original IIA theory, we can understand the extra momentum quantum number in the field theory as arising from D4 branes of the (pre T duality) IIA theory wrapped around the Planck scale torus. The $`(2,0)_N`$ theory is superconformal and lives on a five torus. The smallest radius of this torus defines the Planck length of M Theory, and the remaining four torus is the dual of the one on which M Theory lives. The $`SL(5,Z)`$ U duality of M Theory compactified on $`T^4`$ is manifest in this presentation of the theory. Govindarajan and Berkooz and Rozali have suggested that a very similar construction can be made for M Theory compactified on K3 manifolds. One first uses the Seiberg scaling to relate the problem to D0 branes on a K3 of scale $`g_s^{1/3}l_S`$ in the zero coupling limit of IIA string theory. Then one uses K3 T duality to relate this to D4 branes wrapped on a dual K3 in a strongly coupled IIA string theory, and thus to the $`(2,0)_N`$ theory compactified on $`S^1\times \widehat{K3}`$. They show that many properties of the theory, including all of the expected dualities and the F theory limit, can be qualitatively understood in this formulation. 

## 6 Five and Six (Where We Run Out of Tricks)

We now come to the five torus, where things really start to get interesting. The standard limiting procedure leads us to D0 branes in weakly coupled IIA string theory on a Planck scale five torus, which is T dual to D5 branes wrapped on a five torus in strongly coupled IIB string theory, which in turn is S dual to NS 5-branes wrapped on a five torus in weakly coupled IIB string theory. If one goes through the dualities carefully, one sees that the scale being held fixed in the latter theory is the IIB string scale. We can see this as follows. As usual, in the original picture we are taking the DKPS limit, and the scale which is held fixed is the kinetic energy of a single D0 brane with Planck scale momenta. After T duality this always leads, at low enough energy, to a SYM theory in which the SYM coupling is held fixed. In Type IIB theory with string coupling $`G_S`$, the SYM coupling is given by $$g_{SYM}^2=G_SL_S^2$$ (30) Let us rewrite this in terms of the parameters of the S dual IIB theory: $$\stackrel{~}{G_S}=1/G_S,$$ (31) $$\stackrel{~}{L_S}^2=G_SL_S^2.$$ (32) Combining these relations gives $`g_{SYM}^2=\stackrel{~}{L_S}^2`$: a collection of $`N`$ coincident NS 5-branes in weakly coupled IIB string theory has, on its collective world volume, a SYM theory whose coupling depends only on the string tension, and does not go to zero with the string coupling. As a consequence we learn (believing always in the consistency of string theory) that the $`G_S\to 0`$ limit of a collection of $`N`$ coincident NS 5-branes in Type IIB string theory is a consistent interacting quantum theory with manifest $`5+1`$ dimensional Lorentz invariance. We call this the $`U(N)_B`$ little string theory. The Matrix Theory description of M Theory on a five torus with Planck scale radii and $`N`$ units of longitudinal momentum is the $`U(N)_B`$ little string theory compactified on a dual five torus. The parameter $`\stackrel{~}{L_S}`$ and the radii $`\mathrm{\Sigma }^A`$ of the little string theory are related to those of M Theory by $$\frac{1}{\stackrel{~}{L_S}^2}=\frac{R^2L_1L_2L_3L_4L_5}{L_p^9}$$ (33) $$\mathrm{\Sigma }^A=\frac{L_p^3}{RL_A}.$$ (34) Here $`R`$ is the lightlike compactification radius. The little string theory retains the manifest $`O(5,5)`$ T duality of compactified IIB string theory at zero coupling. 
This $`O(5,5)`$ is now interpreted as the duality group of M Theory, and is in fact the correct U duality group of M Theory on a five torus. The symmetry has several interesting consequences. First of all, the little string theory cannot be a quantum field theory. In quantum field theory, the variation of correlation functions with respect to the metric is given uniquely in terms of insertions of the stress tensor. But T-duality transformations change the metric of the torus without changing the theory. If we make a small variation of a radius of the torus around some T-self-dual point, then we find (assuming that we are dealing with a quantum field theory) that every matrix element of the operator $`\delta g^{\mu \nu }\theta _{\mu \nu }`$ vanishes. It is easy to argue that this is incompatible with the properties of field theory. We will find more evidence below that little string theories are not field theories. Another consequence of T duality is the existence of another type of little string theory, called $`U(N)_A`$. Indeed, if we do a T duality transformation on a single radius of the five torus, we get $`N`$ NS fivebranes in IIA theory. If we now take the infinite torus limit, we obtain a distinct theory. This could have been obtained directly by considering the zero coupling limit of NS fivebranes in the IIA theory in infinite ten dimensional spacetime. Indeed, one can construct similar little string theories from the zero coupling limit of NS fivebranes in the heterotic theories. The low energy limit of the $`U(N)_A`$ little string theory is not a SYM theory, but rather the $`(2,0)_N`$ superconformal field theory. An interesting way of understanding why the interactions of NS fivebranes survive the limit of zero string coupling is to write down the low energy SUGRA solution corresponding to a collection of $`N`$ NS fivebranes. In the string conformal frame this has the form $$g_{ij}=\delta _{ij}e^{2\varphi }$$ (35) $$e^{2\varphi }=e^{2\varphi _0}+\frac{N}{r^2}$$ (36) $$H_{ijk}=Nv_{ijk}$$ (37) Here the indices span the four dimensional space transverse to the fivebrane, $`r`$ is the Euclidean distance from the fivebrane in this space (in string units), and $`v_{ijk}`$ is the volume form of the unit three sphere in this space. $`e^{2\varphi _0}=\stackrel{~}{G_S}^2`$ is the square of the string coupling. The metric components along the fivebrane are Minkowskian. Near $`r=0`$ the transverse space has the form of a flat infinite one dimensional space times a three sphere of fixed radius, and the dilaton varies linearly along this flat direction. This limiting background is in fact an exact solution of the classical string equations of motion to all orders in $`\alpha ^{\prime }`$ . Indeed, the background $`H`$ field converts the three sphere $`\sigma `$ model into the level $`N`$ $`SU(2)`$ Wess-Zumino-Witten (WZW) conformal field theory. The linear dilaton is such that the value of the super central charge is $`\widehat{c}=10`$. This is called the linear dilaton background. For finite $`\stackrel{~}{G_S}`$ this infinite space is cut off at one end and merges smoothly into an asymptotically flat space with finite string coupling. However, no matter how small the asymptotic value of the string coupling, the region near the fivebrane is strongly coupled. The effect of taking $`\stackrel{~}{G_S}`$ to zero is to make the linear dilaton background valid everywhere. 
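To see the linear dilaton explicitly (restoring $`l_S`$ in (36), so that $`e^{2\varphi }\simeq Nl_S^2/r^2`$ near $`r=0`$, and ignoring numerical factors), note that the transverse part of the string frame metric (35) becomes

$$e^{2\varphi }dx^idx^i=Nl_S^2\left(\frac{dr^2}{r^2}+d\mathrm{\Omega }_3^2\right)=dy^2+Nl_S^2d\mathrm{\Omega }_3^2,\qquad y\equiv \sqrt{N}l_S\mathrm{ln}\frac{r}{\sqrt{N}l_S}.$$

This is a three sphere of fixed radius $`\sqrt{N}l_S`$ times a half line along which $`\varphi =\mathrm{const}-y/(\sqrt{N}l_S)`$ is exactly linear, growing without bound as $`y\to -\infty `$, i.e. as one approaches the brane.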
As a consequence of the fact that the coupling goes to infinity at $`r=0`$, the higher order terms in the formal genus expansion are infinite, and the perturbation expansion is not useful for correlation functions which probe the $`r=0`$ region. We will see later that for large $`N`$, duality allows us to study this region in terms of a different low energy SUGRA expansion. In addition to the failure of quantum field theory to capture the dynamics of DLCQ M Theory on $`T^5`$, this geometry presents us with another new phenomenon. In all previous cases, Seiberg's argument allowed us to relate DLCQ M Theory to string theory, but we then found that most of the string states decoupled in the DLCQ limit. Here we have found that the scale which is kept finite is the string scale, and we are left with a little string theory. Another name we might have chosen for this is a “Kondo string theory”. The famous Kondo model in condensed matter physics is a free $`1+1`$ dimensional field theory interacting with a localized defect with a finite number of degrees of freedom. The full system is a rather nontrivial interacting quantum problem, and the degrees of freedom of the field theory cannot be thrown away, even though they are free everywhere except at the position of the defect. Maldacena and Strominger have argued that the same is true for the little string theory. If we analyze the scattering problem of string modes off a fivebrane in the zero coupling limit, it is easy to convince oneself that every asymptotic state of the string theory maps into an asymptotic state of the linear dilaton background. Maldacena and Strominger argue that at large $`N`$ any state of the fivebrane with energy density above a certain cutoff can be described, in a sufficiently good approximation, as a $`1+1`$ dimensional black hole in the low energy effective field theory. (“All the experts” agree that the same conclusions are valid for localized states of finite energy on the fivebrane, although no calculations have been done for this case. The case of finite energy density is directly relevant to the toroidally compactified little string theory, which is our essential concern in Matrix Theory.) The point is that for large enough $`N`$ and energy density, the black hole horizon is in the region where the coupling is still weak. Thus the classic Hawking analysis of black hole radiation is valid, and it indicates that such fivebrane states decay into asymptotic string states. A standard calculation shows that the Hawking temperature is of order $`1/\sqrt{N\stackrel{~}{L_S}^2}`$. This has two interesting consequences. First, it shows that in the large $`N`$ limit there is a true decoupling of these string states. Second, since the temperature is independent of the energy, it implies a Hagedorn spectrum of black hole states, which must be interpreted as states of the little string theory. This is a second indication that little string theories are not field theories. We will find independent confirmation of this spectrum by a very different method below. The large $`N`$ decoupling of the string states is extremely important, because it is easy to see that in the Matrix Theory context these states do not have Lorentz invariant dispersion relations. First of all, since their D0 brane charge vanishes, they have no longitudinal momentum. Secondly, in the DLCQ limit they actually have vanishing transverse momentum as well. 
Indeed, the typical states to which perturbative string theory applies have transverse momenta of order the string scale as the string coupling goes to zero. The weakly coupled string theory which we use in the description of M Theory on $`T^5`$ is an S-dual Type IIB theory, whose string length in terms of the original Type IIB theory is given by equation (32). In turn we have, in terms of the original IIA string coupling, $`G_S=o(g_S^{-5/3})`$. Thus, a momentum of order $`\stackrel{~}{L}_S^{-1}`$ is of order $`g_S^{5/6}L_S^{-1}`$. By contrast, the D0 branes have momenta of order the eleven dimensional Planck scale, which scales like $`g_S^{-1/3}L_S^{-1}`$. Thus, in the limit, string states of the little string theory carry zero Planck units of M Theory transverse momentum. This remark explains the otherwise paradoxical fact that the transverse momentum of string states in the NS 5-brane background is not conserved (note that the $`O(4)`$ angular momentum is conserved). From the point of view of the flat space string theory we began from, the string modes are interacting with states which carry infinitely more transverse momentum than they do, and therefore they can gain or lose arbitrary amounts of (string scale) transverse momentum. From the M Theory point of view, then, the string states of the little string theory are troublesome. They carry finite light cone energy but exactly zero transverse and longitudinal momentum. They are not consistent with M Theory Lorentz invariance. Fortunately, they seem to decouple in the large $`N`$ limit. The Maldacena-Strominger calculation seems to indicate that excitations on the fivebranes do not excite such states (even if it were energetically possible) in the large $`N`$ limit. (O. Aharony has argued to me that since the spectrum of stringy states appears to begin only at energies of order $`1/\sqrt{N}`$ times the string scale, there is an energetic argument for decoupling, independent of the Maldacena-Strominger calculation.) On the six torus, things get even more out of hand. After performing the Seiberg limit and using T duality, we obtain the theory of D6 branes in a strongly coupled Type IIA string theory. We are instructed to keep the SYM coupling on the D6 brane world volume finite. It is well known that in this limit D6 branes can be viewed as KK monopoles of 11D SUGRA compactified on a very large circle. Be very careful to note that this is not the 11D SUGRA we are trying to model. In fact, the gravitons of this 11D SUGRA arise, from the IIA DLCQ point of view, as D6 branes wrapped on the M Theory six torus. The SYM coupling on the KK monopole world volume is just the Planck scale of this (new) 11D SUGRA. This is a new wrinkle. Previously, the theories which described DLCQ M Theory did not contain gravity. This was an advance, because the conceptual problems of quantizing gravity seemed to be avoided. This is no longer the case on $`T^6`$. The only saving grace is that one can again argue that these fake gravitons had better decouple in the large $`N`$ limit. Indeed, the reader may verify that, just like the string states of the little string theories, they carry vanishing longitudinal and transverse momenta from the M Theory point of view. This means they had better decouple. A hand waving argument that they do in fact decouple is the following. KK monopoles are manifolds which are circle bundles over the space transverse to the six brane. 
The radius of the circle is fixed at infinity (though we must take the limit in which this asymptotic radius is itself infinite) and goes to zero near the six brane. For a monopole of charge $`N`$, the rate at which the circle shrinks to zero, as the transverse distance is varied, is multiplied by $`N`$. Thus gravitons with nonzero momentum around the circle will be repelled from the KK monopoles, and the repulsion will set in at a larger distance for large $`N`$. (It is easy to see that gravitons with zero momentum around the circle decouple in the limit that the SUGRA circle goes to infinity.) From the point of view of M Theory, we want to study the scattering of $`N`$ KK monopoles (wrapped on the dual 11D SUGRA $`T^6`$) at transverse separations much smaller than the dual Planck scale (although we want to keep energies which are of order the dual Planck scale). It seems plausible that these scattering processes will not involve graviton emission in the large $`N`$ limit. Obviously, we could do with a stronger argument. The example of $`T^6`$ kills once and for all the idea that the finite $`N`$ DLCQ should reduce to finite $`N`$ DLCQ SUGRA in the limit of low energy and large transverse separations. It is clear that at finite $`N`$, DLCQ M Theory contains states of arbitrarily low light cone energy (wrapped D6 branes in the original description – gravitons in the T dual description) which are simply not there in DLCQ SUGRA. One might have thought that the simple scaling arguments above go through for any compactification on a six manifold. However, Seiberg's argument implicitly contains assumptions about the moduli space of string theory compactified on manifolds smaller than the string scale — assumptions which are valid only if there is enough SUSY to provide nonrenormalization theorems for the space and the metric on it. The argument indeed goes through for $`K3\times T^2`$, but the authors of have pointed out that things are quite different for a general Calabi-Yau threefold, where there are only eight supercharges. Indeed, it is well known that the Kähler moduli space of string theory on CY threefolds is corrected when the sizes of cycles reach the string scale. The exact form of the Kähler moduli space, and the metric on it, can be read off from the complex structure moduli space of the mirror manifold . The authors of suggest that the point in moduli space corresponding to a “Planck scale Calabi-Yau” is a mirror CY whose complex structure is very close to the conifold point. This conjecture is based on the notion that mirror symmetry is obtained by writing the CY as a $`T^3`$ fibration and doing T duality on the three torus. It is not precisely clear what this means, since the manifold has no Killing vectors with which to perform an honest T duality transformation. Nonetheless, the idea that mirror symmetry would map a very small (real) sixfold into a sixfold with a shrinking three cycle sounds plausible. If this suggestion is correct, then we know at least that the effective theory for the DLCQ will not contain gravity. Indeed, it has been known since the seminal work of Strominger that the effective theory of the new massless states coming from wrapping Type IIB three branes on the shrinking three cycle is a $`3+1`$ dimensional gauge field theory with a massless hypermultiplet. The authors of suggest that the whole Matrix Theory on a CY threefold may be some sort of $`3+1`$ dimensional field theory with four supercharges. This is an interesting idea, but not much follow up work has been done on it. 
In my opinion, it is a direction which may lead to some interesting progress. We have seen that Matrix Theory becomes more and more complicated as we compactify more and more dimensions. This is quite interesting, since it is not the way field theory behaves. When we compactify a field theory we generally lose degrees of freedom rather than gain them. (To make this statement more precise, count the number of degrees of freedom below a certain energy, and ask how this number changes as we shrink the size of the compactification manifold.) This is not completely true: in gauge field theory compactification adds Wilson lines, and in gravity it adds the moduli of the compactification manifold. However, this addition is far outweighed by the loss of modes with nontrivial variation on small manifolds. Perhaps more importantly, those modes did exist as gauge degrees of freedom on the noncompact manifold, but with gauge functions which cannot live on the compactified space. Some of the extra degrees of freedom we have discovered in Matrix Theory are artifacts of DLCQ. In the low energy SYM approximation, the momentum modes of the field theory represent (from the original M Theory point of view) branes wrapped around both transverse and longitudinal cycles. These states have energy of order one as $`N\to \infty `$, and should decouple from the hypothetical Lorentz invariant limiting theory. Examples where this can be worked out rather explicitly are the weak coupling limits of Type II and heterotic strings, as derived from various $`1+1`$ and $`2+1`$ dimensional SYM theories. There it is seen that only certain quasi-topological modes of the SYM theory, which vary at a rate $`1/N`$ along the SYM torus (and manage to be periodic by wandering a distance of order $`N`$ in the space of matrices), survive the large $`N`$ limit. In my opinion, the key question in the dynamics of Matrix Theory is to find a way to isolate, and describe with an effective Lagrangian, the spectrum of order $`1/N`$, away from the weak coupling limit. 

### 6.1 The Seven Torus and Beyond

From the point of view described in the introduction, the problems we have encountered as we increased the number of compactified dimensions beyond four are connected to the density of states of the theory at large energies. The little string theory has, as we shall see below, a Hagedorn spectrum. This is the essential feature which prevents it from being a quantum field theory. The DLCQ of M Theory on a six torus does not decouple from gravity. As a consequence, its light cone density of states grows faster than an exponential, because its high energy light cone spectrum is identical to that of SUGRA in an ordinary reference frame. As we have emphasized, these problems should go away in the large $`N`$ limit: the Lorentz invariant spectral density of the models grows more slowly than an exponential, and for both the five and six tori we have suggested that the offending states decouple as $`N\to \infty `$. On the seven torus we face a problem of a somewhat different nature. It has long been known that massive excitations of a Lorentz invariant vacuum in $`2+1`$ dimensional gravity do not preserve globally asymptotically flat boundary conditions. Worse, in theories with massless scalar fields in $`2+1`$ dimensions (which includes SUGRA with all but the minimal SUSY), excitations tend to have logarithmically growing scalar Coulomb fields and infinite energy. 
This has been argued to imply that the Hilbert space of Lorentz invariant, asymptotically flat $`1+1`$ or $`2+1`$ dimensional string theory is topological in nature and contains no local propagating excitations. In DLCQ we compactify one more dimension than necessary to describe the Lorentz invariant system we are trying to model. Thus, the paucity of states with asymptotically flat boundary conditions should become a problem in compactifications to four spacetime dimensions. Indeed, following Seiberg’s argument for Matrix Theory on the seven torus we are led to a theory of seven branes in Type IIB string theory. The BPS formula tells us that these have logarithmically divergent tension. Thus, there is no sensible DLCQ of M Theory with $`1+1`$, $`2+1`$, or $`3+1`$ dimensional Lorentz invariant asymptotics. Note that we do expect a noncompact formulation of light cone M Theory with $`3+1`$ dimensional asymptotics to exist (it should have a Hagedorn spectrum, like little string theory), but it cannot be found as the large $`N`$ limit of DLCQ. ## 7 DLCQ and Holography of $`(2,0)_k`$ Theories and Little String Theories In our discussion of compactification of Matrix Theory we encountered two new types of Lorentz invariant quantum theories which seemed to be decoupled from gravity in the sense that they could be formulated on fixed spacetime manifolds. This is certainly true for the $`(2,0)_k`$ theories and their less supersymmetric cousins, which are ordinary quantum field theories in six dimensions. It is likely to be true for the little string theories as well. In this section we will introduce two complementary methods for studying these theories. At the moment, both methods make sense only in flat six dimensional Minkowski spacetime. Even toroidal compactification results in new singularities which are not well understood. Although we could treat the field theories as limits of the little string theories we will instead find it useful to introduce both methods of computation in the simpler context of field theory. We begin with DLCQ. ### 7.1 DLCQ of $`(2,0)_k`$ Theories We have remarked above that DLCQ is not a terribly useful tool for ordinary field theory because the theory compactified on a small spatial circle is usually strongly coupled and intractable. This is not the case for the $`(2,0)_k`$ theories. Indeed, dimensional reduction on a small circle leads us to an infrared free $`4+1`$ dimensional SYM theory. One simple argument for this comes from the derivation of the $`(2,0)_k`$ theory as the effective theory of $`k`$ coincident M5 branes. If we compactify on a small spatial circle along the brane then we are studying $`k`$ coincident D4 branes in weakly coupled Type IIA theory. Things become even simpler if we ask what in the SYM theory corresponds to momentum around the small circle. The only obvious conserved quantum number is the instanton number (remember that instantons are particles in $`4+1`$ dimensions). That this is indeed the right identification follows from the BPS formula for the instanton mass $`M_I=8\pi ^2/g_{SYM}^2`$. Remembering the identification of the coupling in terms of the radius of the fifth dimension, we see that this is just the formula for the mass of a KK mode, $`M_{KK}=2\pi /R_5`$. Since the SYM coupling is small when the radius is small, and the $`4+1`$ dimensional SYM theory is infrared free, a semiclassical analysis of the dynamics of the instantons is valid. 
Thus, in the sector with longitudinal momentum $`N`$, the DLCQ of the $`(2,0)_k`$ theory would seem to reduce to quantum mechanics on the moduli space of $`N`$ instantons in $`U(k)`$ gauge theory. The fact that this moduli space and the quantum mechanics on it are calculable from classical considerations follows from the high degree of SUSY of the problem. (All of these arguments come from the papers and , while the regularization described below was invented in the second of those two papers.) Well, almost. The fly in the ointment is that this moduli space is singular. Fortunately, there is an elegant and unique regularization of the moduli space of instantons in four Euclidean dimensions, which appears to make the system completely finite and sensible. The $`(2,0)_k`$ theory has 16 ordinary SUSYs. In light cone frame we expect only half of them to be realized linearly, so we expect to find a quantum mechanics with 8 SUSYs. The target space of the quantum mechanics must therefore be a hyperkähler quotient. There is a famous construction (the ADHM construction) of instanton moduli space as a singular hyperkähler quotient. It is the solution space of the algebraic equations $$[X,X^{\dagger }]+[Y,Y^{\dagger }]+q_iq_i^{\dagger }-(p^i)^{\dagger }p^i=0$$ (38) and $$[X,Y]=q_ip^i$$ (39) modded out by a $`U(N)`$ gauge symmetry which acts on $`X`$ and $`Y`$ as adjoints, and on the $`k`$ $`q_i`$ and $`k`$ $`p^i`$ as fundamentals and antifundamentals respectively. The products of fundamentals appearing in these equations are tensor products of $`U(N)`$ representations, and are to be interpreted as matrices in the adjoint representation. These equations also define the Higgs branch of the moduli space of $`𝒩=2,d=4`$ $`U(N)`$ SYM theory with $`k`$ fundamental hypermultiplets. The latter interpretation also introduces the natural regularization of the space, for we can add a Fayet-Iliopoulos term by modifying the first ADHM equation to read $$[X,X^{\dagger }]+[Y,Y^{\dagger }]+q_iq_i^{\dagger }-(p^i)^{\dagger }p^i=\zeta I_N,$$ (40) where $`I_N`$ is the $`N\times N`$ unit matrix and $`\zeta `$ is a real number. Note that this is a regularization of the moduli space, but not of the Yang-Mills equations as local differential equations. Instead, it corresponds to solving the Yang-Mills equations on a certain noncommutative geometry . An important facet of this DLCQ of the $`(2,0)_k`$ theories is the fact that when they are KK reduced on a circle, the low energy effective theory is five dimensional SYM theory, which is infrared free. Thus, the difficulties encountered in should be absent, and the semiclassical identification of the system as quantum mechanics on the ADHM moduli space is valid. SUSY nonrenormalization theorems guarantee that the metric on this space is unique, and the regularization of the singularities by the FI term is the unique way to deform the instanton moduli space into a smooth hyperkähler manifold. The key to finding the spectrum of chiral primary operators of the $`(2,0)_k`$ theory from DLCQ is the following observation of . These authors observe that the DLCQ procedure preserves a subgroup of the superconformal group of the full theory. They identify these generators as explicit operators in the quantum mechanics on instanton moduli space. In particular, they show that a vertex operator is primary only if it is concentrated on the singular submanifold of zero scale size instantons. 
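(As a quick check on the ADHM data above, one can count dimensions; this is a standard exercise, with factors as I count them. The matrices $`X,Y`$ and the vectors $`q_i,p^i`$ contribute $`4N^2+4Nk`$ real parameters. The real equation (40) removes $`N^2`$ of them, the complex equation (39) removes $`2N^2`$, and dividing by the $`U(N)`$ gauge group removes another $`N^2`$:

$$(4N^2+4Nk)-N^2-2N^2-N^2=4Nk,$$

which is the expected real dimension of the moduli space of $`N`$ $`U(k)`$ instantons.)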
Chiral primary operators can then be identified in terms of the cohomology with compact support of the Fayet-Iliopoulos regulated instanton moduli space, which has been investigated in the mathematical literature. We will not explore the details of these calculations. Suffice it to say that they find the correct spectrum of chiral primary operators. The way we know this is that the spectrum calculated from DLCQ coincides with that implied by the AdS/CFT correspondence. This is not just a matter of agreement between two unrelated conjectures (which in itself would be impressive). Rather, the basis for the AdS/CFT identification of primary operators comes from a low energy analysis of the interaction of 11D SUGRA with fivebranes. There must be one primary for each SUGRA field which is in a short multiplet of $`AdS_7\times S^4`$ SUSY. As usual with short multiplets, the number and properties of these multiplets are independent of parameters, and can be calculated in the low energy approximation. 

### 7.2 DLCQ of the Little String Theories

The papers and described the DLCQ of the $`U(k)_A`$ little string theories, and performed the same task for the $`U(k)_B`$ theories. We will restrict attention to the $`U(k)_A`$ case. One way to understand the derivation for the Type A theory is to consider the DLCQ of Type IIA string theory, as derived in the Matrix String picture, and to add fivebranes wrapped around the longitudinal direction. The result is the $`1+1`$ dimensional field theoretical generalization of the model of Berkooz and Douglas for longitudinal 5-branes in Matrix Theory. One obtains a $`1+1`$ dimensional field theory with $`(4,4)`$ SUSY. It is a $`U(N)`$ gauge theory with one adjoint and (in the sector with $`k`$ fivebranes) $`k`$ fundamentals. As in , one takes the weak string coupling limit by descending to the moduli space. Now, however, we want to be on the Higgs branch of the moduli space (the Higgs and Coulomb branches obviously decouple from each other in the limit), and we obtain a $`\sigma `$ model with target space the ADHM moduli space. Of course this moduli space is singular, but we can regularize it by adding FI terms. Thus, the DLCQ of the $`U(k)_A`$ LST at longitudinal momentum $`N`$ is a sigma model on the moduli space of $`N`$ $`U(k)`$ instantons on $`R^4`$. This moduli space can be regularized by the addition of FI terms to the ADHM equations, so that the DLCQ theory is realized as a limit of a well defined, conformally invariant $`(4,4)`$ supersymmetric sigma model. Note that, in contrast to the regularized quantum mechanics, the sigma model retains its conformal invariance after regularization. However, the conformal generators of the sigma model are not symmetries of the spacetime M Theory. From the M Theory point of view, the spatial momentum on the sigma model world sheet is a quantum number which counts longitudinally wrapped branes, and it should decouple in the limit of large $`N`$. Before regularization, the ADHM moduli space is locally flat. Thus the central charge of the SCFT is just $`c=6Nk`$. Because the variation of the FI term is a marginal perturbation of the sigma model, this remains the value of $`c`$ in the regularized model. We can immediately turn this into a computation of the high energy density of states in the DLCQ model. The entropy is given by $$S(P^{-})\sim \sqrt{2cP^{-}}=\sqrt{6k}El_S$$ (41) In the last equality we have used the relation between light cone and ordinary energy at vanishing transverse momentum. 
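Since (41) is linear in the energy, the temperature it defines is independent of $`E`$:

$$T_H^{-1}=\frac{\partial S}{\partial E}=\sqrt{6k}l_S,\qquad \rho (E)\sim e^{\sqrt{6k}l_SE}.$$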
This is the Hagedorn spectrum that we advertised for the little string theories. In the DLCQ approach it arises, as in perturbative string theory, because the light cone energy is identified with the ordinary energy of a $`1+1`$ dimensional CFT. The only problem with this derivation is that it applies to the asymptotic density of states of the DLCQ theory, which are not actually states with $`P^{-}`$ of order $`1/N`$. It has been argued by Ofer Aharony that the sigma model contains such states, as strings which wander through of order $`N`$ instantons before closing (note that the instanton moduli space has an $`S_N`$ orbifold symmetry). As in our treatment of matrix string theory, or the Maldacena-Susskind description of fat black holes , these configurations should have an entropy of the Hagedorn form even at energies much lower than those at which (41) is naively valid in the CFT. We will see below that a completely different argument produces the same formula for the entropy. The sigma model on regularized instanton moduli space is a fascinating CFT, which also arises in the study of D1-D5 black holes. Its properties have recently been studied in . 

### 7.3 Holography

The AdS/CFT correspondence will be covered by other lecturers at this school. Suffice it to say that for the $`(2,0)_N`$ theory it provides the leading term in a large $`N`$ expansion of the correlation functions of the chiral primary operators. (In the previous section, in order to conform to the literature on the subject, we used $`k`$ to denote the number of fivebranes, while $`N`$ was reserved for the longitudinal momentum. Here we revert to the standard use of $`N`$ in the AdS/CFT correspondence: it denotes the number of fivebranes.) The calculation is performed by solving the classical equations of 11D SUGRA in the presence of certain perturbations of an $`AdS_7\times S^4`$ spacetime, with $`N`$ units of fivebrane flux on the $`S^4`$. In order to calculate corrections of higher order in $`1/N`$, one needs the higher terms in the derivative expansion of the effective action, about which we have only a limited amount of information. Even at leading order, few calculations have been done for this case. Just like the DLCQ solution of these theories, the AdS/CFT calculations only work in uncompactified spacetime. The SUGRA background corresponding to toroidally compactified $`(2,0)_N`$ theories is singular, and the derivative expansion does not seem sensible even for large $`N`$. In this case we can get an inkling of the reason for the greater degree of singularity of the compactified case. We are used to the fact that in M Theory, singularities correspond to light degrees of freedom which have not been included in the effective Lagrangian. In the toroidally compactified $`(2,0)_N`$ theory it is obvious that there are such degrees of freedom. The zero modes along the moduli space, which are frozen expectation values in the infinite volume theory, are here zero frequency quantum variables. Indeed, it is precisely the scattering matrix of these variables which one would hope to compute in Matrix Theory. It seems unlikely that this dynamics will be easily captured by reliable calculations in SUGRA. We now turn to the holographic description of little string theories. It was suggested in that these are simply the exact description of string theory in the linear dilaton backgrounds. More precisely, they are Type II string theories in the following background. 
$$ds^2=-dt^2+d𝐱^2+d\varphi ^2+Nl_s^2d\mathrm{\Omega }_3^2$$ (42) $$H=N\omega _{\mathrm{\Omega }_3}$$ (43) $$g_S^2=e^{-\varphi /\sqrt{N}l_S}$$ (44) Here $`𝐱`$ are coordinates on the worldvolume of the $`N`$ coincident fivebranes, $`\mathrm{\Omega }_3`$ are coordinates on a three sphere transverse to the fivebranes, and $`\omega _{\mathrm{\Omega }_3}`$ is its volume form. There are some interesting differences in the way holography works in this context, as compared to the AdS/CFT correspondence. Most of them stem from the fact that the asymptotic geometry of these backgrounds is Minkowski space with an exponentially vanishing string coupling. Therefore, even for finite $`N`$, there is an infinite region of spacetime in which the description of the system in terms of freely propagating particles becomes exact. The linear dilaton systems have an S-matrix. By contrast, even though the geometry of AdS space has infinite volume, the boundary conditions which define a Cauchy problem in this space are reminiscent of those for a system in a box. In the AdS/CFT correspondence we do not expect to see any sort of large spacetime unless $`N`$ is large, but even for $`N=1`$ (note that, unlike other systems of this type, the $`N=1`$ little string theory does not appear to be a trivial gaussian system) or $`N=2`$ the little string theories should have asymptotic multiparticle states propagating in the weak coupling region. Another dramatic difference between the two types of theories is that the little string theory has a Hagedorn spectrum and is not a quantum field theory. Thus, in many ways, the little string theory is much closer to string theory in Minkowski space than the AdS systems are. We have seen the Hagedorn spectrum in the DLCQ calculation above. In the holographic description one calculates the asymptotic density of states by using the Bekenstein-Hawking formula for black hole entropy. This is justified in the linear dilaton background because the mass of the black hole is inversely proportional to the string coupling at the horizon: the world outside a large mass black hole is completely contained in the weak coupling regime. It is well known that the Hawking temperature of linear dilaton black holes is independent of their mass. This is equivalent to the statement that the entropy is linear in the energy, i.e. we have a Hagedorn spectrum. (It should be noted in passing that this simple calculation shows that all extant nonperturbative formulations of the $`c=1`$ string theory are wrong. The entropy in all such calculations is that of a $`1+1`$ dimensional field theory, rather than the much more degenerate Hagedorn spectrum. The $`c=1`$ model was solved by trying to resum a divergent perturbation expansion; clearly some nonperturbative states (Liouville “D branes”??) have been missed in this resummation.) The Hagedorn temperature can be computed in terms of the coefficient which governs the rate of increase of the dilaton. We again find that $`S=\sqrt{6N}El_S`$. The Hagedorn spectrum actually resolves a potential paradox in the claim of . These authors argue that the S-matrix of string theory in the linear dilaton background can be interpreted as the set of correlation functions of observables in the LST. The $`p`$ particle S-matrix elements are of course symmetric under interchange of arguments (the $`S_p`$ group of statistics). They are also $`5+1`$ dimensional Lorentz invariant. 
If these are to be interpreted as correlation functions in a quantum theory, their $`S_p`$ symmetry implies that they must be Fourier transforms of time ordered products of Heisenberg operators. Lorentz invariance would then imply that LST is a local field theory, because time ordered products are only Lorentz invariant if the operators commute at spacelike separations. However, the Hagedorn spectrum prevents us from performing the Fourier transform, and this conclusion cannot be reached. (Some readers may be confused by our apparent denial of the possibility of having a Hagedorn spectrum in local field theory. What, they will ask, about the Hagedorn spectrum of large $`N`$ QCD? In fact there is no contradiction. Two point functions of operators in large $`N`$ QCD are indeed controlled by the asymptotically free fixed point at short distances. However, the crossover scale above which free behavior sets in depends on the operator. At infinite $`N`$ there are some operators which never get to the crossover point, because it scales with a positive power of $`N`$. This phenomenon, of operators which have rapidly vanishing matrix elements between the vacuum and most of the high energy states, appears to be connected to the fact that infinite $`N`$ QCD is a free theory, with an infinite number of conservation laws. I do not expect such behavior in a finite system with interaction.) Indeed, by calculating two point functions of operators by analogy with AdS/CFT (solving linearized wave equations in the linear dilaton background), Peet and Polchinski showed explicitly that they were not Fourier transformable. Their behavior is exponential, with the Hagedorn temperature controlling the rate of growth of the exponent. Peet and Polchinski's calculation is easy to summarize. The scalar wave equation in the linear dilaton background is $$\left[-\partial _\varphi ^2+\frac{(2l+3)(2l+1)}{4}+k^2\alpha ^{\prime }N\right]e^{3\varphi /2}\psi =0.$$ (45) For large $`k`$ and $`\varphi `$ its solutions have the form $$\psi \sim e^{(\alpha ^{\prime }Nk^2)^{1/2}\varphi }$$ (46) In the holographic interpretation, the S-matrix computed from these wave functions, which has the same behavior in momentum space as the wave functions themselves, is supposed to be the two point function of some operator in the LST. Thus the two point function is not Fourier transformable, and it grows in the way we would expect from the Hagedorn spectrum. To conclude this brief summary of our knowledge of little string theories, I want to discuss the question of what the scale of nonlocality is in these theories. What we know so far suggests two rather different answers. Seiberg's original argument about T-duality suggests string nonlocality on a scale $`l_S`$. On the other hand, the length scale defined by the Hagedorn temperature is of order $`l_S\sqrt{N}`$, which is much longer. Note, however, that the argument for the latter scale is based on high energy asymptotics. Thus, although the Hagedorn temperature is low for large $`N`$, it might be that the exponential behavior of the density of states does not set in until energies of order $`l_S^{-1}`$. The Hagedorn temperature controls the rate of growth of the asymptotic density of states, but does not tell us anything about the finite scale at which the asymptotic behavior begins to dominate. Minwalla and Seiberg have done a calculation which suggests that the Hagedorn behavior does in fact not set in until scales far above the Hagedorn temperature. 
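Before turning to their argument, note that the growth (46) is easy to confirm numerically from (45). Here is a minimal sketch (the parameter values are arbitrary illustrative choices of mine), integrating $`\chi ^{\prime \prime }=m^2\chi `$ for $`\chi =e^{3\varphi /2}\psi `$ and comparing the measured logarithmic slope with $`(\alpha ^{\prime }Nk^2)^{1/2}`$:

```python
import numpy as np

# chi'' = m2 * chi with m2 = (2l+3)(2l+1)/4 + alphaP*N*k^2, cf. eq. (45)
alphaP, N, k, l = 1.0, 10.0, 3.0, 0
m2 = (2 * l + 3) * (2 * l + 1) / 4.0 + alphaP * N * k**2

phi = np.linspace(0.0, 10.0, 200001)
h = phi[1] - phi[0]
chi = np.empty_like(phi)
chi[0], chi[1] = 1.0, np.exp(np.sqrt(m2) * h)   # seed the growing branch
for i in range(1, len(phi) - 1):                 # simple finite differences
    chi[i + 1] = 2.0 * chi[i] - chi[i - 1] + h * h * m2 * chi[i]

mid = len(phi) // 2
slope = (np.log(chi[-1]) - np.log(chi[mid])) / (phi[-1] - phi[mid])
print(slope, np.sqrt(alphaP * N) * k)  # slope -> sqrt(alphaP*N)*k at large k
```

The measured slope approaches $`\sqrt{\alpha ^{\prime }N}k`$ at large $`k`$; it is exactly this exponential growth in momentum space which obstructs the Fourier transform.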
Minwalla and Seiberg argued that if, in the $`LST_A`$ theory, one takes the limit $`l_S\rightarrow 0`$ with $`l_S\sqrt{N}`$ fixed, then the SUGRA approximation to scattering amplitudes with energies of order this fixed scale becomes exact. The point is that in $`LST_A`$, the strong coupling behavior of the theory is described at low energy by 11D SUGRA. Minwalla and Seiberg show that in the large $`N`$ limit described above, there is a SUGRA description of the scattering amplitude which is valid for arbitrary values of the dilaton. Thus, the full amplitude is calculable by solving partial differential equations. The resulting equation is complicated, but Minwalla and Seiberg obtained a qualitative understanding of its behavior and were able to solve it approximately in various regimes. They calculated the amplitude for a single massless string to scatter off the NS 5-brane in this limit, and found a Fourier transformable answer. This suggests that for large $`N`$, the density of states in the vicinity of the Hagedorn energy scale increases more slowly than the Hagedorn formula. (This is not a definitive argument against a Hagedorn spectrum, because the matrix elements of operators between the vacuum and the high energy spectrum might fall sufficiently rapidly to give a Fourier transformable two point function. It is however suggestive that the limiting Minwalla-Seiberg calculation shows a different behavior than that found by Peet and Polchinski.) It is tempting to suggest that the Hagedorn behavior of the spectrum sets in only above the string scale, which is the scale of nonlocality indicated by T-duality. Indeed, in the spacetime picture of this system, the high energy CGHS black hole spectrum can only be computed reliably for energies above the string scale. If this conjecture is correct, there is a puzzle about the nature of the large $`N`$ limiting theory defined by Minwalla and Seiberg. Naive application of the logic we applied to the full LST would suggest that it is a quantum field theory, since its correlation functions have spacetime Fourier transforms, which can then be interpreted as Lorentz invariant time ordered products. But large $`N`$ limits are tricky, and I expect that if the Minwalla-Seiberg limit of all the correlation functions of $`LST_A`$ exists, it does not define a quantum field theory. Finally, I want to discuss an issue raised by the analysis of Minwalla and Seiberg, which is not particularly related to the bulk of the material in these lectures. There is some confusion in the literature, and in discussions I have participated in, about whether the AdS/CFT correspondence (and in particular the fact that the theory is formulated in terms of a Hermitian Hamiltonian in a well defined Hilbert space) says something definitive about the issue of unitarity in Hawking radiation. I would claim that it does not, because the AdS theory does not have an S-matrix. (Attempts to extract the flat space S-matrix from AdS/CFT have not progressed to the point where one can decide if the S-matrix is unitary.) The little string theories do have an S-matrix, and one can begin to address the question. In particular, Minwalla and Seiberg find a nonzero absorption cross section for the black fivebrane. This could be taken as a signal of a lack of unitarity. Like many extremal black holes, the extremal fivebrane metric has an analytic completion with multiple asymptotic regions.
One could try to interpret the absorption cross section as matter being scattered into another asymptotic region, violating unitarity in any given region. I would like to present a more conservative interpretation of the absorption cross section: the fivebrane absorbs only because it is infinite. There is indeed another asymptotic region, but this is the region along the infinite brane. This is most clearly seen in the IIB case, where the low energy theory on the brane is infrared free $`5+1`$ dimensional SYM theory. A particle coming in from infinity in $`\varphi `$ can excite gluons on the brane which propagate off to asymptotic infinity in $`5`$ spatial dimensions, without ever propagating back out to infinite values of $`\varphi `$. In the IIA theory analyzed by Minwalla and Seiberg, the low energy theory is conformal and has no conventional S-matrix. However it should still be true that localized disturbances in a CFT eventually dissipate out to infinity (I thank A. Zamolodchikov for a discussion of this point), so similar physics is to be expected. The key test of this interpretation is to see what happens when the theory is compactified. Indeed, the origin of the absorption cross section is a logarithm in momentum space which comes from the continuum of low energy modes. Since these are described by field theory, we understand what happens to them upon compactification. The modes on the brane become discrete. There will be no absorption at generic energies. All that will happen is the excitation of resonant modes of the brane, which will eventually decay back to the vacuum. The only continuum in the system is that describing modes propagating far from the brane, in the weak coupling region. On the gravitational side of the holographic correspondence, compactification causes a singularity to appear on the horizon (the compactification circles have zero radius there). One cannot conclude anything from this about unitarity, but it certainly does not contradict the conclusion based on the nongravitational side of the duality. Thus, there is no evidence against unitarity of the LST S-matrix. Little string theories are a fascinating area for future work. They are our only example of Lorentz invariant quantum theories (in $`5+1`$ dimensions) which are neither quantum field theories nor theories of gravity. Conventional Lagrangian techniques are applicable only in the light cone frame. It would be of the utmost interest to find an alternative, manifestly Lorentz invariant, framework for formulating and solving these theories.

## 8 Conclusions

Matrix theory is a nonperturbative DLCQ formulation of M Theory in backgrounds with six or more asymptotically flat directions. It provides proofs of a large number of duality conjectures, and has led to a new class of Lorentz invariant, gravity free theories. It demonstrates the existence of a new class of large $`N`$ limits of ordinary gauge field theories, in which one concentrates on states with energies of order $`1/N`$. There is a lot of evidence that the theory becomes simpler in the large $`N`$ limit, in the sense that many of the finite $`N`$ degrees of freedom decouple. A Lorentz invariant formulation awaits the development of techniques to study these new kinds of large $`N`$ limit. In the meantime, we can try to use Matrix Theory to study a variety of issues in gravitational physics which do not require us to compactify to low dimensions. A beginning of the study of black holes in Matrix Theory may be found in .
There are a number of important general lessons about M Theory that may be learned from Matrix Theory. Among these are:

1. The statistical gauge symmetry of identical particles arises as a subgroup of a much larger, continuous, gauge symmetry.
2. The cluster property, and the existence of spacetime itself, seems to be closely intertwined with supersymmetric cancellations.
3. The number of degrees of freedom of the theory increases as we compactify. This is quite odd from the point of view of quantum field theory.
4. Short distance divergences in the effective gravitational theory turn out to be infrared divergences caused by the neglect of degrees of freedom which become light when particles are brought together. These correspond to light branes stretched between the particles, and again are very different from the kinds of degrees of freedom encountered in field theory.
5. As in any generally covariant theory, we expect a conventional Hamiltonian description only when space is asymptotically flat or AdS. In the asymptotically flat case we have argued that conventional Hamiltonian quantum mechanics will only be applicable in the light cone frame, and only when there are five or more noncompact dimensions. The phenomenologically relevant case of four dimensions has a Hagedorn spectrum in light cone energy and may be describable by some kind of little string theory.

The outstanding problem in Matrix Theory is to find a way to isolate the dynamics of the states with DLCQ energy of order $`1/N`$ and to write a Lagrangian (for $`d_{noncompact}\geq 5`$) for the infinite $`N`$ system. For the phenomenologically relevant case of $`d=4`$ one must obtain a sensible substitute for Lagrangian methods for systems with a Hagedorn spectrum. Another unsolved problem is the formulation of DLCQ M Theory on Calabi-Yau threefolds. Beyond this, Matrix Theory cannot go, for light cone methods do not appear to be useful for cosmology or for studying the problem of SUSY breaking (where the typical ground state of the system may not have null Killing vectors).

###### Acknowledgments.

I am grateful to the organizers, J. Harvey, S. Kachru, E. Silverstein, and especially K.T. Mahantappa, for inviting me to this stimulating school. This work was supported in part by the DOE under grant number DE-FG02-96ER40559.
# Formation Scenarios

## 1 Introduction

In keeping with the focus of the Colloquium, I will discuss primarily the formation of the thick disk and of the stellar halo, the more metal-poor components of the Milky Way Galaxy. However, we should bear in mind that at approximately the same epoch at which these stars formed, around 12 or 14 Gyr ago, the stars in the central bulge were also forming (Ortolani et al. 1995; Feltzing & Gilmore 1999), despite the bulge stars being significantly more metal-rich in the mean than are stars in either the thick disk or the stellar halo. Further, there also exist old stars in the local thin disk, at least as old as 10 Gyr and perhaps as old as stars in the halo (e.g. Edvardsson et al. 1993), which could imply that there was no significant hiatus between halo formation and the onset of disk formation (we have little information on the age distribution of stars in the more distant thin disk, either interior or exterior to the solar circle). The existence of these old disk stars poses significant problems for some models of disk galaxy formation, such as those (Weil, Eke & Efstathiou 1998) that posit the delay of disk formation until after a redshift of unity, or lookback times of only $`\sim 7`$ Gyr for a flat matter-dominated Universe with a Hubble constant of 65 km/s/Mpc. Current structure formation scenarios favour hierarchical clustering, such as that in a universe dominated by cold dark matter (CDM). In this picture, the first objects to turn around and collapse have a mass that is a small fraction of the mass of a galaxy like the Milky Way (e.g. Tegmark et al. 1997), and larger-scale structure grows by clustering and merging of this small-scale structure. The dynamics of the dark matter, interacting only through gravity, is rather straightforward to model, through N-body simulations and semi-analytic techniques such as the Press-Schechter formalism (e.g. Lacey & Cole 1993). Following the behaviour of the baryons and predicting the evolution of luminous galaxies is much more difficult, either with gas-dynamic simulations (e.g. Navarro & Steinmetz 1997; Pearce et al. 1999) or with star-formation prescriptions combined with the N-body simulations and/or semi-analytic treatment of the merging of the dark haloes (e.g. Kauffmann et al. 1999; Baugh et al. 1998). Cold dark matter models have found much success in the analysis of observations of large-scale structure, from the microwave background down to clusters of galaxies. Further, there are many examples in both the local and distant Universe of interacting and merging galaxies. The Milky Way is clearly interacting with its satellite galaxies, such as the LMC/SMC (Putman et al. 1998) and the Sagittarius dwarf spheroidal galaxy (Ibata, Gilmore & Irwin 1994; 1995). However, disk galaxies as observed, with a broad range of stellar ages in the thin disk, cannot have experienced merger events that were too frequent or too violent, since this would have destroyed the disk (cf. Ostriker 1990; Toth & Ostriker 1992). The past merging with dissipationless stellar or dark-matter systems is restricted to the accretion of small objects onto a dominant central system. And the accreted objects have to be assimilated quite efficiently, since at least in the Milky Way there is little evidence of successive, significant, past mergers.
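(A parenthetical check on the $`\sim 7`$ Gyr lookback time quoted at the start of this Introduction; only the stated cosmology enters.) For an Einstein-de Sitter universe,

$$t_{\mathrm{lb}}(z)=\frac{2}{3H_0}\left[1-(1+z)^{-3/2}\right],\qquad \frac{1}{H_0}\simeq \frac{978}{65}\,\mathrm{Gyr}\simeq 15\,\mathrm{Gyr},$$

so that $`t_{\mathrm{lb}}(z=1)\simeq 10\,\mathrm{Gyr}\times (1-2^{-3/2})\simeq 6.5`$ Gyr.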
This lack of evidence for repeated significant mergers is particularly difficult to reconcile with new high-resolution N-body simulations by two groups (Klypin et al. 1999; Moore et al. 1999a), which for the first time have enough dynamic range to model both large and small scales within the same simulation. These simulations have been restricted so far to flat ($`\mathrm{\Omega }=1`$) cosmologies, with CDM the dominant mass, and include a universe dominated by $`\mathrm{\Lambda }`$ at the present day ($`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`\mathrm{\Omega }_{CDM}=1-\mathrm{\Omega }_\mathrm{\Lambda }`$), as favoured by a variety of constraints on large scales (Bahcall et al. 1999). In these model universes, small-scale dark haloes are very persistent (essentially reflecting their high redshifts of formation and hence high density), and the simulations predict that a galaxy like the Milky Way should today have around a factor of ten more satellite galaxies than we observe. Of course, the simulations are strictly restricted to dark haloes only, and one might postulate that the ‘missing’ satellites (cf. Klypin et al. 1999) are dark, or perhaps related to the extremely high-velocity clouds identified by Blitz et al. (1999). However, even if dark, those satellites on radial orbits that interact with the disk of the Milky Way could make a thin disk impossible to sustain (Moore et al. 1999a). Allowing an open universe, still dominated by CDM, would change the timescales of the growth of structure, but not remove the basic problem of the over-prediction of the number of long-lived satellite galaxies. The simulations of a flat universe provide very good agreement with observations on larger scales, such as the galaxy luminosity function within clusters of galaxies (Moore et al. 1999a), leading to the suggestion that some modification be made to the CDM power spectrum, to reduce small-scale power. Such a modification, truncating the power spectrum at small scales, may also be favoured by the discrepancies between the shapes of the galaxy rotation curves predicted by standard CDM-dominated models (Navarro, Frenk & White 1997) and the observations for dwarf galaxies (Moore 1994) and for other apparently dark-matter-dominated galaxies, such as low-surface-brightness disks (Burkert & Silk 1997; 1999; Moore et al. 1999b). Indeed, many of the properties of present-day disks, for example their scale-lengths and rotational velocities, can be explained by simple dissipational collapse of gas within a fixed dark halo potential, with detailed conservation of angular momentum (Fall & Efstathiou 1980; Dalcanton et al. 1997; Mo, Mao & White 1998), followed by modest re-arrangement by, for example, viscous processes (e.g. Zhang & Wyse 2000). Merging of dark haloes and luminous galaxies is usually accompanied by angular-momentum transport, driven by gravitational torques and dynamical friction, removing angular momentum from the disk material, and ‘standard’ CDM-dominated models produce disks that are too small (Navarro & Steinmetz 1997). Again, this suggests only ‘minor mergers’ in the history of a disk galaxy. Of course the formation of stars is set by local gravitational instability, on the scale of giant molecular clouds and their cores. Thus essentially all models of galaxy formation and evolution invoke early star formation in substructure, whether the smaller scales formed by fragmentation (perhaps reflecting the Jeans mass) or were primordial density fluctuations (such as the CDM power spectrum).
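As an aside on the scale set by fragmentation: for gas of sound speed $`c_s`$ and density $`\rho `$, the Jeans mass is, up to definition-dependent factors of order unity,

$$M_J\sim \frac{c_s^3}{G^{3/2}\rho ^{1/2}}\sim \mathrm{few}\times 10^7\,M_{\odot }\left(\frac{c_s}{10\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^3\left(\frac{n}{1\,\mathrm{cm}^{-3}}\right)^{-1/2},$$

where the inserted numbers are purely illustrative, with $`c_s\sim 10`$ km/s corresponding to the $`\sim 10^4`$ K floor set by atomic hydrogen cooling. Sub-galactic fragments of dwarf-galaxy or globular-cluster scale thus arise naturally; denser, higher-pressure environments push the characteristic mass lower still, the essence of the Fall & Rees-type arguments revisited in Sect. 3.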
Questions that both observation and theory should address include: what fraction of this substructure survives to the present? what are the possible relationships of this early substructure with present-day smaller systems, e.g. dwarf galaxies or globular clusters? and how important are interactions with satellite galaxies? The faint stellar Initial Mass Function (IMF) in satellite galaxies of the Milky Way, which are possible examples of surviving ‘building blocks’, is also now accessible to direct determination. Local observations of old stars provide complementary constraints to observations of high redshift objects.

## 2 The Thick Disk

This stellar component was first detected in the Milky Way Galaxy through star counts (Gilmore & Reid 1983), although surface photometry of external S0 galaxies had earlier revealed ‘thick disks’ in them (Burstein 1979; Tsikoudi 1979). Perhaps the most recent study of the structure of the thick disk through star counts is that of Phleps et al. (1999), who utilised the R-band data towards the North Galactic Pole from the Calar Alto (faint galaxy) survey; they detect the thick disk, and derive parameter values that are reasonably consistent with earlier results, in that the scale-height of the thick disk is around 1 kpc, and the local (solar neighbourhood) normalisation by number is several percent.

### 2.1 Connection to Other Galactic Components

At the time of its discovery, it was postulated that the thick disk was formed by local compression of the stellar halo by the potential of the thin disk (Gilmore & Reid 1983). This was soon disproven (Gilmore & Wyse 1985) by the determination of distinct metallicity distributions for the thick disk and stellar halo (see Fig. 1 here). But is the thick disk simply the extreme thin disk, meaning they have a common origin? An increase in scale-height and velocity dispersion with stellar age within the thin disk is well-established (e.g. Wielen 1977). However, the thick disk is discontinuous in its kinematics from the thin disk (Wyse & Gilmore 1986; Gilmore, Wyse & Kuijken 1989). Further, the value of the vertical velocity dispersion of the thick disk, some $`40`$–$`45`$ km/s (e.g. the review of Majewski 1993 and references therein), is too high to be explained by the known heating mechanisms for the stars in the thin disk, namely interactions with local gravitational perturbations in the disk, such as Giant Molecular Clouds (Spitzer & Schwarzschild 1953; Lacey 1991). More exotic phenomena, such as close encounters with massive black holes in the dark halo, can provide the required high amplitude of random motions for a small fraction of the thin-disk stars (Ostriker & Lacey 1985; see also Sanchez-Salcedo 1999), but then the thick disk should be a random sample of the thin disk, and have a very similar stellar population. Again, the different metallicity distributions of the thick disk and thin disk argue against this – the metallicity distribution of the thick disk peaks at \[Fe/H\]$`\sim -0.6`$ dex, and is rather broad (Gilmore & Wyse 1985; Carney, Latham & Laird 1989; Wyse & Gilmore 1995; Bonifacio et al. 1999), while that of the thin disk peaks around $`-0.2`$ dex (e.g. Wyse & Gilmore 1995; see Fig. 1 above). Further, the thick disk is apparently composed of only old stars, with ages older than $`\sim 12`$ Gyr (Gilmore & Wyse 1985; Gilmore, Wyse & Kuijken 1989; Carney, Latham & Laird 1989; Edvardsson et al. 1993; Gilmore, Wyse & Jones 1995; Fuhrmann 1998; Carney, this volume; Bartasiute & Lazauskaite, this volume).
The age distribution older than this is not well-defined with present data, but a younger component must be only minor. This last point is illustrated in Fig. 2, and argues strongly against an extended period of thick disk formation, and in favour of a unique event, long ago.

### 2.2 Heating or Cooling?

Two possibilities remain: first, that the thick disk formed as part of the (dissipational) settling of the proto-thin disk; second, that the thick disk formed during a traumatic heating event early in the evolution of the thin disk. In the former (cooling) scenario the scaleheight of the stellar disk decreases with time, and is set by a balance between cooling (and star formation) and gravity; the discontinuity between thick and thin disks could reflect the change in the cooling law as metallicity increases above $`-1`$ dex and line radiation from metals becomes dominant (Gilmore & Wyse 1985; Burkert, Truran & Hensler 1992; Burkert & Yoshii 1996). One might then expect all (moderately metal-rich) disk galaxies to have a thick disk, which they do not (e.g. Shaw & Gilmore 1990; Fry et al. 1999). The latter (heating) scenario draws some support from the fact that, as noted above, interactions between the Milky Way and its satellites are ongoing. The vertical velocity dispersion of the thick disk can be accounted for if a significant part of the orbital energy of a moderate-mass satellite galaxy is transformed into additional internal energy of the stellar thin disk (Gilmore & Wyse 1985; Ostriker 1990; Majewski 1993). The effect of the accretion of a companion galaxy on the disk depends on many parameters, such as those of the satellite’s orbit (initial inclination to the disk plane, pericenter and apocenter distances, sense of angular momentum) and the satellite’s density profile and total mass. Simulations of the merging process between a stellar disk and a satellite have become increasingly sophisticated in recent years, including more physics such as allowing the excitation of the internal degrees of freedom of the dark halo, which lessens the heating effect on the disk (e.g. Huang & Carlberg 1997; Walker, Mihos & Hernquist 1996; Velazquez & White 1999). The extant simulations suggest that the accretion by the present-day stellar disk of a stellar satellite with mass some 20% of that of the disk can produce a thick disk similar to that observed in the Milky Way. However, gas has yet to be included in the simulations investigating disk heating, which is an important shortcoming, since gas, if present (which is likely), would absorb and subsequently radiate away some of the orbital energy, again lessening the impact of the merger. Further, the initial conditions of the published simulations assume a fully-assembled stellar disk, and especially given the results above on the old age of the Galactic thick disk, we need simulations that better model conditions at an early stage of disk galaxy evolution. The stellar population in the local thin disk is consistent with a roughly constant star-formation rate over a Hubble time (e.g. Rocha-Pinto & Maciel 1997), implying that the local stellar disk at the lookback time corresponding to the formation of the thick disk was around 10% of its present mass. This is close to the mass of the local thick disk, expressed as a fraction by mass of the present-day local thin disk, suggesting essentially all of the pre-existing thin disk was heated.
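The 10% figure is simple bookkeeping (the ages inserted below are illustrative, taken from the discussion above): with a constant star-formation rate, the stellar mass in place a time $`t`$ after the onset of disk star formation is

$$\frac{M_{}(t)}{M_{}(\mathrm{today})}=\frac{t}{T}\sim \frac{1\,\mathrm{Gyr}}{13\,\mathrm{Gyr}}\sim 10\%,$$

taking the thin disk to have begun forming stars $`T\sim 13`$ Gyr ago and the thick-disk heating event to have occurred $`\sim 12`$ Gyr ago.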
Of course one needs to know how to generalise this local result to the entire disk, which requires global knowledge of star-formation histories and of the thin- and thick-disk structural parameters. Further, the energy losses due to gas have yet to be taken into account. Really one needs a cosmologically self-consistent model including disk buildup, with appropriate star formation. As a corollary to this heating scenario, if a significantly massive satellite were responsible for the thick disk, the accompanying torques could drive a substantial fraction of the gas in the disk at that time to the central regions, perhaps triggering rapid star formation (cf. Hernquist & Mihos 1995) and even the formation of the ‘bulge’ (Minniti 1996) or ‘thick disk’ (Armandroff 1989) globular clusters. It may well be no coincidence that the ages of field stars in the bulge (e.g. Feltzing & Gilmore 1999) and in the thick disk are similar. The lack of young or even intermediate-age stars in the thick disk limits the last significant merger to have occurred a long time ago, at lookback times greater than around 12 Gyr, or redshifts $`\gtrsim 6`$ in a standard matter-dominated flat cosmology. This is rather difficult for $`\mathrm{\Omega }_{CDM}=1`$ models, requiring the Milky Way to be an unusual galaxy (cf. Toth & Ostriker 1992). Note that Moore et al. (1999a) argue that any hierarchical clustering cosmology, even the open CDM-dominated model, has problems, since not all disk galaxies have thick disks. However, I feel that, given the many parameters determining the effect of a merger on a disk, a widely disparate population of thick disks, including some too small to have been detected, must result.

### 2.3 Where is the Shredded Satellite?

Is there then a signature of the remnant of the putative satellite that caused the Milky Way thick disk? Stars that are removed by tides will remain on orbits close to that of the centre-of-mass of the satellite at the time the stars are removed (e.g. Tremaine 1993; Johnston 1998); the orbit of the satellite is expected to decay through dynamical friction (at a rate dependent on its mass, but the effect should be significant for the $`\sim 20\%`$ fractional masses under consideration), depositing ‘shredded satellite’ stars over a reasonably large spatial region. These stars will phase mix. Published simulations of low-inclination satellite orbits indeed show that the satellite is dispersed into a broadly flattened distribution, mixed in with the heated thin-disk stars. Satellites on prograde (rather than retrograde) orbits couple better to the disk and provide more heating, and are favoured to cause the thick disk (e.g. Velazquez & White 1999). Thus one might expect a signature to be visible in the mean orbital rotational velocity of the stars, and for a typical satellite orbit the stripped stars would, in the mean, lag the Sun by more than does the canonical thick disk. The relative number of stars in the ‘shredded satellite’ versus the heated thin disk (now the thick disk) depends on the details of the shredding and heating processes, and is a diagnostic of them, and may well vary strongly with location.
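For orientation on the rate of the orbital decay invoked above, a standard Chandrasekhar-type estimate for a satellite of mass $`M_s`$ on a circular orbit of initial radius $`r_i`$ in an isothermal halo of circular speed $`v_c`$ gives (the inserted numbers are purely illustrative)

$$t_{\mathrm{df}}\simeq \frac{1.17}{\mathrm{ln}\mathrm{\Lambda }}\frac{r_i^2v_c}{GM_s}\sim 2\,\mathrm{Gyr}$$

for $`r_i\sim 30`$ kpc, $`v_c\sim 220`$ km/s, $`M_s\sim 10^{10}M_{\odot }`$ and $`\mathrm{ln}\mathrm{\Lambda }\sim 3`$; a satellite massive enough to heat the disk thus sinks within a few orbital periods, consistent with its debris being deposited over a wide range of radii.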
Note that a merger without accompanying heating of the disk, but producing the thick disk in its entirety from the shredded satellite, is rather contrived: this would require a fairly massive satellite, given what we know of the mass-metallicity relationship and the relatively high mean metallicity of the thick disk – which at $`-0.6`$ dex is greater than that of the old population in the LMC, and about equal to that of the young population in the SMC (see, e.g. de Freitas Pacheo, Barbuy & Idiart 1998) – and also the total luminosity of the thick disk, extrapolated from its locally-defined structural parameters (several percent of that of the present thin disk). Only a very narrow part of parameter space could allow a satellite this massive to penetrate even to the solar circle, and then be tidally disrupted without imparting any damage to the thin disk. We (Gilmore, Wyse, Norris & Freeman) are quantifying the phase-space structure of the Milky Way, through a comprehensive statistical study of the kinematics and metallicity distributions of stars in the interface between the thick disk and stellar halo, those stellar components for which mergers are most often implicated. We are using the 2-degree-field fibre-fed spectrograph on the Anglo-Australian Telescope, providing 400 spectra simultaneously. These spectra are used to obtain radial velocities and absorption line-strengths for samples of F/G main sequence stars at distances from the Sun of 5–10 kpc (dependent on metallicity and magnitude), beyond significant contamination by the thin disk, down several key lines-of-sight. Our targets include fields towards and against Galactic rotation, to provide optimal halo/thick disk discrimination through orbital angular momentum. Further, we have fields at the same Galactic longitude but different latitudes, to determine if any kinematic features are halo-like or disk-like in their spatial distributions. We also include lines-of-sight interior to and exterior to the Sun, to allow characterisation of the velocity dispersion tensor, detection of gradients and of any metallicity–kinematics correlation, and crucially to test if kinematic structure is restricted to angular momentum, or is equally present in the other components of the velocity tensor. Our approach investigates the time-integrated structure of the halo and thick disk and is quite complementary to surveys of the far outer Galaxy. Our first observations have apparently detected a new kinematic component of the Milky Way Galaxy, plausibly the shredded remnant of the satellite whose merger with the Galaxy produced the canonical thick disk, by heating the pre-existing thin disk. As described above, depending on the mass, density profile and orbit of the satellite, ‘shredded-satellite’ stars will leave a kinematic signature distinct from the canonical thick disk that results from the heated thin disk. As shown in Fig. 3 and Fig. 4 below, the radial velocity distributions for our samples indeed show evidence for an excess number of stars moving on orbits with V-velocity between those of the canonical thick disk and halo, with a low velocity dispersion, plausibly smaller than that of either of these Galactic components; further, this signature is strongest in the blue stars. A similar kinematic signature was seen in our earlier in situ (AUTOFIB) survey of the kinematics and metallicity distributions of stars in the thin disk/thick disk interface (Wyse & Gilmore 1990).
Further, Fuchs et al. (1999) have reported a similar result, finding an excess number of stars with intermediate kinematics in a narrow metallicity range, for a local sample, based on the kinematically-selected sample of Carney et al. (1996), supplemented with Hipparcos data. These stars are best interpreted as being the actual debris of the satellite, and would provide the ‘smoking gun’ signature of the most significant merger in the Galaxy’s past. They must be part of a fairly large system, being detected in disparate samples, but this requires a larger, statistically-significant sample for confirmation, to allow quantification of the properties of the satellite galaxy, and to distinguish between this interpretation and a vertical gradient in the mean rotation of the thick disk (e.g. Majewski 1993; Mendez et al. 1999). The metallicity distribution will be an important constraint, reflecting that of the parent satellite in one case (cf. Freeman 1993), and that intrinsic to the thick disk in the other (that of the old thin disk, should the canonical thick disk be the heated thin disk).

## 3 The Stellar Halo

To first order, the stellar halo is the metal-poor, old, slowly-rotating, extended but centrally-concentrated stellar system represented at the solar neighbourhood by the high-velocity subdwarfs. The stellar halo is usually distinguished from the central bulge when discussing the Milky Way, but it is common practice simply to refer to them as one entity – the bulge or spheroid – in discussions of external disk galaxies (see Wyse, Gilmore & Franx 1997 for a review). This is of course related to the difficulty of detecting a component like the stellar halo in external galaxies – the mass ratio between bulge and stellar halo in the Milky Way is around a factor of ten, and the surface brightness of the stellar halo at the solar circle is around 28 mag/sq arcsec in the B-band (Morrison 1993). Essentially all models of the formation and evolution of the stellar halo invoke early star formation in small substructure, with subsequent disruption of these systems and the mixing and assimilation of the stars formed therein into the field stellar halo. The small substructure may have formed through fragmentation of an initially ‘monolithic’ baryonic component, perhaps reflecting the Jeans mass of shock-heated gas in the potential well of a larger-scale protogalaxy (e.g. Fall & Rees 1985), or could reflect simply the initial power spectrum of primordial density fluctuations (e.g. White & Rees 1978). Or, a combination of the above. It is highly likely that the stellar halo is ‘multi-component’, and we need to quantify the stellar masses in the different components, and characterise their origins. Questions that can be addressed, and hopefully answered, by the fossil record of the halo stars include: when did the stars form? Did internal (e.g. feedback from massive stars) or external (e.g. collisions or tidal disruption) processes cause the destruction of the substructure? When was the substructure disrupted? What is the relation, if any, between this putative substructure and surviving systems in the halo such as globular clusters and satellite galaxies? What is the relationship to ‘Population III’, those stars formed very early, at redshifts prior to the re-heating and re-ionization of the InterGalactic Medium? Has accretion of stars and stellar systems played an important role?
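To quantify the detection difficulty noted earlier in this section (a routine unit conversion; the only input is the solar B-band absolute magnitude, $`M_{B,\odot }=5.48`$):

$$\mathrm{\Sigma }_B=10^{-0.4(\mu _B-27.05)}\,L_{\odot }\,\mathrm{pc}^{-2},\qquad \mu _B=28\Rightarrow \mathrm{\Sigma }_B\simeq 0.4\,L_{\odot }\,\mathrm{pc}^{-2},$$

where 27.05 mag/sq arcsec corresponds to one solar luminosity per square parsec. Less than half a solar luminosity per square parsec is orders of magnitude below the night-sky brightness, which is why such a component is so hard to see in external galaxies.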
### 3.1 Elemental Abundances

The chemical elemental abundances of a typical halo star are as expected if only Type II supernovae enriched the gas out of which the stars formed (e.g. Nissen et al. 1994; Norris 1999). The most straightforward explanation of this observation is that these stars formed in star-formation events that were of short duration, shorter than the time needed for significant production of newly-synthesised material by Type Ia supernovae. While there is not yet a generally-accepted model of Type Ia supernovae (other than involving a white dwarf driven over the Chandrasekhar mass limit via accretion of some sort), several scenarios predict timescales after the formation of the progenitor main sequence stars, for significant chemical enrichment by Type Ia supernovae, of around 1 Gyr (e.g. Smecker & Wyse 1991; Yungelson & Livio 1998). One does not require that the entire stellar halo formed on this timescale, only that self-enriching regions formed stars this rapidly, and that there was little cross-contamination between non-synchronized regions. Indeed, an attractive mechanism to produce only a short duration of star formation is a Type II supernovae-driven wind, more naturally produced if the star-forming regions have local potential wells significantly shallower than that of the halo as a whole. Thus one is led to a picture wherein the field stars of the halo form in fragile fragments/blobs, within which feedback from massive stars can be sufficiently disruptive that star formation is truncated, and a large part of the remaining interstellar medium ejected. The feedback and mass loss could be sufficient to unbind the ‘fragment’ totally, or it could be that the new virial equilibrium of the ‘fragment’ is sufficiently fragile that external processes such as tidal forces can disrupt it. The ejected gas will cool and dissipate, and with angular momentum conservation will settle into the central regions of the overall larger-scale potential well of the proto-Galaxy, somewhat as envisaged by Eggen, Lynden-Bell & Sandage (1962). This could form the central bulge. As pointed out by Hartwick (1976), one can understand the low mean metallicity of the halo within models with a fixed stellar IMF if there is gas removal during the formation of the halo stars – a reduction of a factor of around 10 in the mean metallicity, compared to theoretical expectations with no gas loss, is required for the stellar halo, and this is achieved by removing around this ratio of mass during star formation. Thus one would predict a central bulge some 10 times more massive than the stellar halo, in agreement with estimates of the masses of the stellar halo and bulge (Carney et al. 1990; Wyse & Gilmore 1992). There is little scatter in the trend of element ratios, such as \[Mg/Fe\] against \[Fe/H\], for the bulk of the stellar halo (e.g. Nissen et al. 1994), consistent with enrichment by stars with a fixed massive-star IMF, and furthermore, one close to that observed in star-forming regions locally today (see Wyse 1998 for a review). The lack of scatter also implies rather efficient mixing; one might, however, expect to see some scatter at the lowest levels of enrichment, when very few supernovae have contributed (cf. Audouze & Silk 1995; Tsujimoto, Shigeyama & Yoshii 1999), and this is indeed observed (McWilliam et al. 1995; Ryan, Norris & Beers 1996; Ryan this volume).
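Hartwick’s factor-of-ten argument can be put in one line. In the ‘simple’ chemical-evolution model, modified so that a mass $`c`$ of gas is ejected per unit mass of stars formed, the mean stellar metallicity is set by an effective yield (a textbook result; $`p`$ is the true nucleosynthetic yield):

$$\langle Z_{}\rangle \simeq p_{\mathrm{eff}}=\frac{p}{1+c},$$

so reducing the halo mean metallicity to $`\sim p/10`$ requires $`c\sim 10`$: roughly ten times the mass now in halo stars was ejected, which is just the bulge-to-halo mass ratio quoted above.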
In this picture of star-forming fragments in the halo, there is the possibility that a few fragments had a deep enough potential well to sustain star formation and self-enrich with the products of Type Ia supernovae. Further, with asynchronous onsets of star formation in different regions, there could be enrichment of a given ‘fragment’ by a Type Ia supernova whose progenitors were formed in a different fragment. Thus one might expect to see at least some stars in the halo now with values of the element ratios reflecting ‘extra’ iron from Type Ia supernovae (note that the fact that this is not generally observed for halo stars is further motivation for the prompt removal of gas from the halo to the bulge; Wyse & Gilmore 1992). Indeed, a subset of halo stars, apparently biased towards the metal-rich halo, i.e. \[Fe/H\]$`>-1`$ dex (Nissen & Schuster 1997), and/or extremely high-energy orbits (Carney et al. 1997; King 1997; Ivans et al. this volume), have now been observed to have low values of \[Mg/Fe\], close to or below the solar value, consistent with pre-enrichment by a combination of Type Ia and Type II supernovae. Fig. 5 is taken from Gilmore & Wyse (1998) and is based on the data of Nissen & Schuster (1997) for their survey of disk and halo stars of similar metallicities, where ‘disk’ and ‘halo’ are defined kinematically in terms of orbital rotational velocity. This is one of the few datasets where both disk and halo stars have been analysed together, allowing a direct comparison of their elemental abundances. As can be seen, there are both disk and halo stars with enhanced magnesium, \[Mg/Fe\]$`\sim +0.3`$, consistent with being enriched by massive stars with the same, invariant IMF. The halo stars and disk stars with low values of \[Mg/Fe\] are as expected for some enrichment from Type Ia supernovae. Note that in Fig. 5 the halo stars lie along the locus for a lower star-formation rate than do the disk stars, but this may be a manifestation of the fact that gas outflows, as invoked above to have operated in halo star-forming regions, reduce the efficiency of star formation and mimic a lower rate. The lower values of the $`\alpha `$-elements in these low-metallicity stars (remember that the higher metallicity part of the halo is still well below solar metallicity) are as predicted for the metal-poor stars in dwarf companion galaxies to the Milky Way (Gilmore & Wyse 1991). This, combined with the fact that the serendipitously-identified metal-poor halo stars with anomalously low values of the ratio of magnesium-to-iron are on retrograde orbits, led to the speculation that these stars may have been captured (accreted) from external satellite galaxies (Carney et al. 1997; King 1997). However, there is no tendency for the low \[Mg/Fe\] stars in the Nissen & Schuster sample to be on retrograde orbits. All the ‘low-alpha-elements’ stars are, however, on very high-energy radial orbits, with apoGalacticon greater than 15 kpc and periGalacticon less than 1 kpc. These are very unlikely orbits for stars accreted from satellite galaxies, a term that implies a separate identity for a significant time. In models which invoke fragmentation within a gaseous proto-halo, fragments which probe the denser inner Galaxy are naturally themselves more dense (e.g. Fall & Rees 1985), and are likely to have deeper local potential wells, and so able to self-enrich for longer.
Thus a trend that the halo stars with evidence for enrichment by Type Ia supernovae be on orbits of low periGalacticon (but not necessarily high apoGalacticon) may be understood, without any need to appeal to ‘accretion’. Further, Stephens (1999) has analysed a sample of halo stars selected to be on extreme orbits, and interprets the elemental abundance patterns as being no different from those of the rest of the halo. However, there is clearly a need for a large, uniformly-analysed sample over the entire range of kinematics and metallicity of the stellar halo. A further point evident from Fig. 5 is that the typical elemental abundances of disk and halo stars of the same iron abundance are different, supporting a disk/halo discontinuity in chemical enrichment, and consistent with the different specific angular momentum content of disk and halo (Wyse & Gilmore 1992) – the local halo did not pre-enrich the local disk. These metal-rich halo stars are a small fraction of the locally-defined stellar halo. However, it must be noted that the global metallicity distribution of the stellar halo is poorly defined (in particular for the inner halo), as are the wings of the metallicity distribution function for more locally-defined samples. The overlap between the stellar halo and (thick) disk at metallicities around \[Fe/H\]$`=-1`$ is a particular focus of our ongoing 2dF project (Gilmore, Wyse, Norris & Freeman).

### 3.2 Kinematics & Age

Moving groups of stars in the halo have been searched for by many groups, with limited success in that the signatures are usually of low statistical weight (e.g. Arnold & Gilmore 1992; Majewski, Munn & Hawley 1996; Helmi & White 1999; Helmi, White, de Zeeuw & Zhao 1999). A complication in the interpretation of moving groups is that all substructure in the Galaxy is subjected to tidal effects. For example, the present system of globular clusters may well be a mere shadow of the initial retinue (e.g. Gnedin & Ostriker 1997), and one expects e.g. tidal arms and streamers from the surviving globular clusters (see Grillmair, Freeman, Irwin & Quinn 1995; Meylan, this volume). The unique substructure that has been found by virtue of its location in position–radial velocity phase space is the Sagittarius Dwarf Spheroidal galaxy (Ibata, Gilmore & Irwin 1994, 1995). This satellite companion to the Milky Way was noticed by its discoverers, in the course of their survey of stars in the bulge of the Milky Way, as a distinct set of stars with kinematics too ‘anomalous’ for them to be actual members of the bulge. The stars with these very well-defined, but anomalous, kinematics were localised in a subset of their lines-of-sight, and could be identified with a feature (the red clump of helium-burning stars) in the colour-magnitude diagram for all stars in those fields. The centre of the Sagittarius dwarf is located some 24 kpc from the Sun, on the other side of the Galactic centre. The preliminary proper motion and orbit derived by Ibata et al. (1997) for this dwarf galaxy imply that it has a radial period of less than 1 Gyr, and a periGalacticon of only $`\sim 12`$ kpc. Without a significant amount of dark matter to bind it, the Sagittarius dSph would be unable to survive more than a couple of such close periGalactic passages (e.g. Velazquez & White 1995), but its dominant stellar population has an age of $`\sim 10`$ Gyr. Either the orbital parameters have been changed recently (e.g. Zhao 1998), or the dwarf is more robust than it looks (Ibata et al. 1997).
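The sub-Gyr radial period is easy to check at the order-of-magnitude level (assuming, purely for illustration, a flat rotation curve with $`v_c\sim 220`$ km/s, a mean orbital radius of $`\sim 20`$ kpc, and ignoring the eccentricity of the orbit):

$$T\sim \frac{2\pi R}{v_c}\sim \frac{2\pi \times 20\,\mathrm{kpc}}{220\,\mathrm{km}\,\mathrm{s}^{-1}}\sim 0.6\,\mathrm{Gyr},$$

so the dwarf has plausibly completed of order ten periGalactic passages over the $`\sim 10`$ Gyr age of its dominant stellar population.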
The more diffuse outer regions of the Sagittarius dSph are more susceptible to tidal stripping, and intriguing observations of stars with kinematics such that they plausibly could have been removed from the dwarf galaxy on a previous periGalactic passage have been reported by Majewski et al. (1999), and analysed in Johnston et al. (1999). Further, this interpretation is consistent with the photometric detection of member stars some 30° from the centre of the Sagittarius dwarf by Mateo, Olszewski & Morrison (1998). A few of the globular clusters of the Milky Way have clearly younger ages than the vast majority of globular clusters, and have been suggested as candidates for being accreted from companion galaxies (e.g. Fusi Pecci et al. 1995). Indeed it is now recognised that this group includes actual members of the Sagittarius dwarf’s retinue of clusters (e.g. Ibata et al. 1995; Da Costa & Armandroff 1995), which should not be included in the census of Milky Way clusters. The globular cluster M54 is situated at the centre of the Sagittarius dwarf, no doubt partly motivating the interpretation of the globular cluster $`\omega `$ Cen as the nucleus of an accreted dwarf galaxy (Majewski et al. this volume; Wallerstein & Hughes, this volume). The very different stellar populations in a typical companion galaxy and in the stellar halo provide strong constraints on general accretion and disruption of satellite galaxies as a means to form the stellar halo (Unavane, Wyse & Gilmore 1996; Gilmore this volume). Indeed, significant (greater than $`\sim 10`$% of the stellar halo) accretion cannot have occurred from typical dwarf galaxies, with their large range of stellar ages, subsequent to the formation of these intermediate-age stars, and is thus restricted to $`\gtrsim 8`$ Gyr ago. Several analyses of chemistry and kinematics for samples of halo stars have found evidence for differences between the ‘far’ halo and the ‘inner’ halo, e.g. Majewski (1992), Norris (1994), Carney et al. (1996), Layden (1998), Carney (this volume). This would plausibly reflect differences in the dominant physical mechanisms at formation. However, for any reasonable halo profile the fractional mass of the ‘far’ halo is small (see e.g. Fig. 1 of Unavane, Wyse & Gilmore 1996). Further, Carney (private communication) has found intriguing differences in binary fraction between the ‘far’ halo and the ‘inner’ halo, in that the ‘far’ halo has a significantly lower fraction. Mass transfer in close binary systems could certainly affect surface elemental abundances, and the ‘kick’ from disruption could perhaps produce extreme kinematics. Clearly more work is warranted.

### 3.3 Faint Stellar Luminosity Function and IMF

As discussed briefly above in Sect. 3.1, the elemental abundances for the bulk of the stellar halo and the disk suggest that the massive star IMF was, and is, invariant. Low-mass stars, those with main-sequence lifetimes that are of order the age of the Universe, provide more direct constraints on the IMF when they formed. Star counts in systems with simple star-formation histories are particularly straightforward to interpret, and those in ‘old’ systems allow one to determine the low-mass stellar IMF at large look-back times and thus at high redshift. The dwarf spheroidal satellite galaxies of the Milky Way are now accessible for this experiment using the Hubble Space Telescope.
These galaxies are particularly interesting since their internal kinematics suggest that they are among the most dark-matter-dominated systems known (reviewed by Mateo 1998), and this dark matter must be cold to form structures on such small scales (e.g. Tremaine & Gunn 1978; Gerhard & Spergel 1992); yet at least the gas-rich dwarfs for which rotation curves can be measured do not fit the predictions of non-baryonic CDM (e.g. Moore 1994). Could the dark matter be cold because it is baryonic and has radiated away binding energy? Might the dark matter then be associated with faint stars? The Ursa Minor dwarf spheroidal galaxy (dSph) is suitable for study, being relatively nearby (distance $`\sim 70`$ kpc), and, unusually for a dwarf spheroidal galaxy, having a stellar population with narrow distributions of age and of metallicity (e.g. Hernandez, Gilmore & Valls-Gabaud 1999), remarkably similar to that of a classical halo globular cluster such as M92 or M15, i.e. old and metal-poor (\[Fe/H\] $`\sim -2.2`$ dex). The integrated luminosity of the Ursa Minor dSph ($`L_V\sim 3\times 10^5L_{\odot }`$) is also similar to that of a globular cluster. However, the central surface brightness of the Ursa Minor dSph is only 25.5 V-mag/sq arcsec, corresponding to a central luminosity density of $`0.006L_{\odot }\mathrm{pc}^{-3}`$, many orders of magnitude lower than that of a typical globular cluster. Further, again in contrast to globular clusters, its internal dynamics are dominated by dark matter, with $`(M/L)_V\sim 80`$, based on the relatively high value of its internal stellar velocity dispersion (Hargreaves et al. 1994; see the review of Mateo 1998). We obtained deep imaging data with the Hubble Space Telescope, using WFPC2 (V-606 & I-814), STIS (LP optical filter) and NICMOS (H-band), in a field close to the centre of the Ursa Minor dSph (program GO 7419: PI Wyse; Co-Is Gilmore, Tanvir, Gallagher, Smecker-Hane, Feltzing & Houdashelt). As shown in Fig. 6 (from Feltzing, Gilmore & Wyse 1999), the faint optical stellar luminosity function of the Ursa Minor dSph is also remarkably similar to that of M92, down to the limiting apparent magnitude of our WFPC2 data, which corresponds to around four-tenths of a solar mass. The M92 data (Piotto, Cool & King 1997) should be a reliable estimate of the global initial luminosity (and mass) function in this cluster, being obtained at intermediate radius within the globular cluster, minimising internal dynamical effects; the cluster itself is on an orbit that minimises external tidal effects. The similarity of age and metallicity between these two systems means that the comparison of the main sequence faint luminosity functions is effectively a comparison of stellar initial mass functions. And as can be seen in the figure, these two luminosity functions are remarkably similar, down to our completeness limit corresponding to around $`0.4M_{\odot }`$. Thus two systems that differ in mass-to-light ratio by a factor of roughly 50, and in stellar surface density by orders of magnitude, formed stars with the same initial mass function. A consistent result, but for a significantly less-deep luminosity function reaching to $`0.6M_{\odot }`$, was obtained for the Draco dSph by Grillmair et al. (1998). The apparent insensitivity of the stellar IMF to any parameter that physical intuition tells one should be important is remarkable (see papers in Gilmore & Howell 1998). However, it allows a reliable simplifying assumption – an invariant IMF – to be made when modelling the evolution of galaxies.
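The quoted mass-to-light ratio rests on a virial-type estimate. Schematically, for a system of velocity dispersion $`\sigma `$ and characteristic radius $`r`$ (taking $`\sigma \sim 9`$ km/s, as measured by Hargreaves et al. 1994, and $`r\sim 200`$ pc as representative of Ursa Minor; the exact coefficient depends on the adopted density profile):

$$M\sim \frac{\sigma ^2r}{G}\sim \mathrm{few}\times 10^6\,M_{\odot },$$

already an order of magnitude above the stellar mass implied by $`L_V\sim 3\times 10^5L_{\odot }`$ for a stellar mass-to-light ratio of 1–2; modelling of the full, extended light and velocity-dispersion profiles raises the global value to the $`(M/L)_V\sim 80`$ quoted above.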
The identity of the dark matter in dSph galaxies remains a puzzle; we are obtaining deeper data and should be able to push the stellar luminosity function somewhat fainter. However, low-mass stars do not look to be likely candidates.

## 4 Summary and Conclusions

‘Scenario’ rather than ‘model’ is appropriate for the title of this paper, since we do not yet have a clear understanding of the mechanisms by which disk galaxies form and evolve. However, it is clear that substructure plays an important role in galaxy formation and evolution. For example, the thick disk is plausibly a remnant of the last significant merger event in the history of the Milky Way; the remnant satellite may have been detected. That this last significant merger was a long time ago is apparently in conflict with flat CDM-dominated models. Further, while accretion of stars from fragile satellite galaxies may make a significant contribution to the outer halo, this is not the case for the bulk of the halo; late accretion in particular is ruled out by the uniformly old age of the bulk of the halo stars. A simplifying factor for models of galaxy evolution is that the stellar IMF is apparently invariant.

## Acknowledgements

I would like to thank the organisers for their financial help to enable me to participate in the meeting, and for arranging such a stimulating and enjoyable conference. I acknowledge partial support from NASA ATP program grant NAG 5-3928. Support for my work on the Ursa Minor dSph luminosity function was provided by NASA through grant GO-07419.01-96A from the Space Telescope Science Institute, which is operated by AURA, under NASA contract NAS5-26555.

## References

> Armandroff, T., 1989, AJ 97, 375
> Arnold, R. & Gilmore, G., 1992, MNRAS 257, 225
> Audouze, J. & Silk, J., 1995, ApJL 451, L49
> Bahcall, N., Ostriker, J., Perlmutter, S. & Steinhardt, P., 1999, Science 284, 1481
> Baugh, C.M., Cole, S., Frenk, C.S. & Lacey, C.G., 1998, ApJ 498, 504
> Blitz, L., Spergel, D.N., Teuben, P.J., Hartmann, D. & Burton, W.B., 1999, ApJ 514, 818
> Bonifacio, P., Pasquini, L., Molaro, P. & Marconi, G., 1999, in ‘Galaxy Evolution: Connecting the Distant Universe with the Local Fossil Record’, Obs. de Meudon, 1998
> Burkert, A. & Silk, J., 1997, ApJL 488, L55
> Burkert, A. & Silk, J., 1999, in ‘Dark Matter in Astro and Particle Physics’, ed. H.V. Klapdor-Kleingrothaus, in press
> Burkert, A. & Yoshii, Y., 1996, MNRAS 282, 1349
> Burkert, A., Truran, J. & Hensler, G., 1992, ApJ 391, 651
> Burstein, D., 1979, ApJ 234, 829
> Carney, B., Latham, D. & Laird, J., 1989, AJ 97, 423
> Carney, B., Latham, D. & Laird, J., 1990, AJ 99, 572
> Carney, B., Latham, D., Laird, J. & Aguilar, L., 1994, AJ 107, 2240
> Carney, B., Laird, J., Latham, D. & Aguilar, L., 1996, AJ 112, 668
> Carney, B., Wright, J., Sneden, C., Laird, J., Aguilar, L. & Latham, D., 1997, AJ 114, 363
> Da Costa, G., 1999, in ‘The Galactic Halo’, Third Stromlo Symposium, ASP Conference Series vol. 165, eds B. Gibson, T. Axelrod & M. Putman (ASP, San Francisco), p. 153
> Da Costa, G. & Armandroff, T., 1995, AJ 109, 2533
> Dalcanton, J.J., Spergel, D.N. & Summers, F.J., 1997, ApJ 482, 659
> de Freitas Pacheo, J., Barbuy, B. & Idiart, T., 1998, A&A 332, 19
> Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D.L., Nissen, P.E. & Tomkin, J., 1993, A&A 275, 101
> Eggen, O., Lynden-Bell, D. & Sandage, A., 1962, ApJ 136, 748
> Fall, S.M. & Efstathiou, G., 1980, MNRAS 193, 189
> Fall, S.M. & Rees, M.J., 1985, ApJ 298, 18
> Feltzing, S. & Gilmore, G., 1999, A&A, in press
> Feltzing, S., Gilmore, G. & Wyse, R.F.G., 1999, ApJL 516, L17
> Freeman, K., 1993, in ‘Galaxy Evolution: The Milky Way Perspective’, ASP Conference Series vol. 49, ed. S. Majewski (ASP, San Francisco), p. 125
> Fry, A., Morrison, H., Harding, P. & Boroson, T., 1999, AJ 118, 1209
> Fuchs, B., Jahreiss, H. & Wielen, R., 1999, in ‘Galaxy Evolution: Connecting the Distant Universe with the Local Fossil Record’, Obs. de Meudon, 1998
> Fuhrmann, K., 1998, A&A 338, 161
> Fusi Pecci, F., Bellazzini, M., Cacciari, C. & Ferraro, F.R., 1995, AJ 110, 1664
> Gerhard, O.E. & Spergel, D.N., 1992, ApJL 389, L9
> Gilmore, G. & Howell, D. (eds), 1998, ‘The Stellar IMF’, ASP Conference Series vol. 142 (ASP, San Francisco)
> Gilmore, G. & Reid, I.N., 1983, MNRAS 202, 1025
> Gilmore, G. & Wyse, R.F.G., 1985, AJ 90, 2015
> Gilmore, G. & Wyse, R.F.G., 1991, ApJL 367, L55
> Gilmore, G. & Wyse, R.F.G., 1998, AJ 116, 748
> Gilmore, G., Wyse, R.F.G. & Jones, J.B., 1995, AJ 109, 1095
> Gilmore, G., Wyse, R.F.G. & Kuijken, K., 1989, ARAA 27, 555
> Gnedin, O.Y. & Ostriker, J.P., 1997, ApJ 474, 223
> Grillmair, C., Freeman, K.C., Irwin, M. & Quinn, P.J., 1995, AJ 109, 2553
> Grillmair, C., et al., 1998, AJ 115, 144
> Hargreaves, J., Gilmore, G., Irwin, M. & Carter, D., 1994, MNRAS 271, 693
> Hartwick, F.D.A., 1976, ApJ 209, 418
> Helmi, A. & White, S.D.M., 1999, MNRAS 307, 495
> Helmi, A., White, S.D.M., de Zeeuw, P.T. & Zhao, H.-S., 1999, Nat 402, 53
> Hernandez, X., Gilmore, G. & Valls-Gabaud, D., 1999, MNRAS, in press
> Hernquist, L. & Mihos, J.C., 1995, ApJ 448, 41
> Huang, S. & Carlberg, R., 1997, ApJ 480, 503
> Ibata, R. & Gilmore, G., 1995, MNRAS 275, 605
> Ibata, R., Gilmore, G. & Irwin, M., 1994, Nat 370, 194
> Ibata, R., Gilmore, G. & Irwin, M., 1995, MNRAS 277, 781
> Ibata, R., Wyse, R.F.G., Gilmore, G., Irwin, M.J. & Suntzeff, N.B., 1997, AJ 113, 634
> Johnston, K.V., 1998, ApJ 495, 297
> Johnston, K.V., Majewski, S., Siegel, M., Reid, I.N. & Kunkel, W.E., 1999, AJ, in press
> Kauffmann, G., Colberg, J.M., Diaferio, A. & White, S.D.M., 1999, MNRAS 303, 188
> King, J., 1997, AJ 113, 2302
> Klypin, A., Kravtsov, A.V., Valenzuela, O. & Prada, F., 1999, ApJ 522, 82
> Lacey, C.G., 1991, in ‘Dynamics of Disc Galaxies’, ed. B. Sundelius (Goteborg University, Sweden), p. 257
> Lacey, C.G. & Cole, S., 1993, MNRAS 262, 627
> Layden, A.C., 1998, in ‘Galactic Halos’, ASP Conference Series vol. 136, ed. D. Zaritsky (ASP, San Francisco), p. 14
> McWilliam, A., Preston, G., Sneden, C. & Searle, L., 1995, AJ 109, 2757
> Majewski, S.R., 1992, ApJS 78, 87
> Majewski, S.R., 1993, ARAA 31, 575
> Majewski, S.R., Munn, J. & Hawley, S., 1996, ApJL 459, L73
> Majewski, S., Siegel, M., Kunkel, W.E., Reid, I.N., Johnston, K.V., Thomson, I., Landolt, A. & Palma, C., 1999, AJ, in press
> Mateo, M., 1998, ARAA 36, 435
> Mateo, M., Olszewski, E. & Morrison, H., 1998, ApJL 508, L55
> Mendez, R., Platais, I., Girard, T., Kozhurina-Platais, V. & Altena, W., 1999, AJ, in press
> Minniti, D., 1996, ApJ 459, 175
> Mo, H.J., Mao, S. & White, S.D.M., 1998, MNRAS 295, 319
> Moore, B., 1994, Nat 370, 629
> Moore, B., Ghigna, S., Governato, F., Lake, G., Quinn, T., Stadel, J. & Tozzi, P., 1999a, ApJL 524, L19
> Moore, B., Quinn, T., Governato, F., Stadel, J. & Lake, G., 1999b, MNRAS, submitted
> Morrison, H., 1993, AJ 106, 578
& Steinmetz, M., 1997, ApJ 478, 13 > > Navarro, J.F., Frenk, C.S. & White, S.D.M., 1997, ApJ 490, 493 > > Nissen, P. & Schuster, W., 1997, A&A 326, 751 > > Nissen, P., Gustafsson, B., Edvardsson, B. & Gilmore, G., 1994, A&A 285, 440 > > Norris, J., 1994, ApJ 431, 645 > > Norris, J., 1999, in ‘The Galactic Halo’, Third Stromlo Symposium, ASP Conference Series vol 165, eds B. Gibson, T. Axelrod & M. Putman (ASP, San Francisco) p213 > > Ortolani, S., Renzini, A., Gilmozzi, R., Marconi, G., Barbuy, B., Bica, E. & Rich, R.M., 1995, Nat 377, 701 > > Ostriker, J.P., 1990, in ‘Evolution of the Universe of Galaxies’, ASP Conference Proceedings Vol. 10, eds. Kron, R.G., p25 > > Ostriker, J.P. & Lacey, C., 1985, MNRAS 299, 633 > > Pearce, F. et al. (the VIRGO Consortium), 1999, ApJL 521, 99L > > Phleps, S., Meisenheimer, K., Fuchs, B., Wolf, C. & Jahreiss, H., 1999, in ‘Galaxy Evolution: Connecting the Distant Universe with the Local Fossil Record’, Obs de Meudon, 1998. > > Piotto, G., Cool, A. & King, I., 1997, AJ, 113, 1345 > > Putman, M., et al., 1998, Nat 394, 752 > > Rocha-Pinto, H. & Maciel, W., 1997, MNRAS 289, 882 > > Ryan, S., Norris, J. & Beers, T., 1996, ApJ 471, 254 > > Sanchez-Salcedo, F.J., 1999, MNRAS 303, 755 > > Shaw, M. & Gilmore, G., 1990, MNRAS 242, 59 > > Smecker, T. & Wyse, R.F.G., 1991, ApJ 372, 448 > > Spitzer, L. & Schwartzschild, M., 1953, ApJ 118, 106 > > Stephens, A., 1999, AJ 117, 1771 > > Tegmark, M., Silk, J., Rees, M.J., Blanchard, A., Abel, T. & Palla, F., 1997, ApJL 474, 1L > > Toth, G. & Ostriker, J.P., 1992, ApJ 389, 5 > > Tremaine, S. 1993, in ‘Back to the Galaxy’, eds S. Holt & F. Verter (AIP, New York) p599 > > Tremaine, S. & Gunn, J., 1978, Phys Rev Lett 42, 407 > > Tsikoudi, V., 1979, ApJ 234, 842 > > Tsujimoto, T., Shigeyama, T. & Yoshii, Y., 1999, ApJL 519, L63 > > Unavane, M., Wyse, R.F.G. & Gilmore, G., 1996, MNRAS 278, 727 > > VandenBerg, D., 1985, ApJS 58, 711 > > VandenBerg, D. & Bell, R., 1985, ApJS 58, 561 > > Velazquez, H. & White, S.D.M., 1995, MNRAS 275, L23 > > Velazquez, V. & White, S.D.M., 1999, MNRAS 304, 254 > > Walker, I., Mihos, J.C. & Hernquist, L., 1996, ApJ 460, 121 > > Weil, M., Eke, V. & Efstathiou, G., 1998, MNRAS 300, 773 > > White, S.D.M. & Rees, M.J., 1978, MNRAS 183, 341 > > Wielen, R., 1977, A&A 60, 263 > > Wyse, R.F.G., 1998, in ‘The Stellar IMF’, ASP Conference series vol 142, eds G. Gilmore & D. Howell, (ASP, San Francisco) p89 > > Wyse, R.F.G. & Gilmore, G., 1990, in ‘Chemical & Dynamical Evolution of Galaxies’, eds F. Ferrini, J. Franco & F. Matteucci (ETS Editrice, Pisa, Italy) p19 > > Wyse, R.F.G. & Gilmore, G., 1986, AJ 92, 1215 > > Wyse, R.F.G. & Gilmore, G., 1992, AJ 104, 114 > > Wyse, R.F.G. & Gilmore, G., 1995, AJ 110, 2771 > > Wyse, R.F.G., Gilmore, G. & Franx, M., 1997, ARAA 35, 637 > > Yungelson, L. & Livio, M. 1998, ApJ 497, 168 > > Zhang, B. & Wyse, R.F.G., 2000, MNRAS in press > > Zhao, H.-S. 1998, ApJL 500, L149
# Purely Infinite, Simple 𝐶^∗-algebras Arising from Free Product Constructions, III
# Gauge Theory and String Theory; An Introduction to the AdS/CFT CorrespondencePlenary talk at LATTICE 99 held in Pisa, Italy from June 29 to July 3, 1999. To appear in the Proceedings of the conference. ## 1 Introduction In this talk, I would like to show you some of the recent developments in superstring theory, in particular the relation between gauge theory and string theory. String theory was originally invented as a theory of hadrons, but it was superseded by the gauge theory. It then found its employment in quantum gravity. Now it seems that string theory and gauge theory are meeting again, and I hope this new direction will provide an interesting arena where lattice gauge theorists and string theorists can interact and exchange ideas, benefiting both. Let me begin my talk by posing a question: What is string theory? Until recently, one of the serious defects of string theory had been that we did not know what it was. We did not have a definition of the theory. This was an unpleasant situation, since there was a logical possibility that string theory might not exist after all, even as a purely theoretical framework, let alone the possibility of describing the real world. Fortunately we now know that string theory exists in certain situations, and that is what I would like to tell you about today. But before we get into that, let me tell you what we knew before. The original “definition” of the string theory was entirely in terms of the perturbative expansion. We knew that the theory, if it exists, should contain oscillating strings, which in the limit of vanishing coupling constant $`g_{string}\rightarrow 0`$ propagate freely in spacetime. We also knew how to compute string amplitudes using the perturbative expansion in $`g_{string}`$. In superstring theory, each term in the perturbative expansion was known to be finite. However this, by itself, cannot be a complete definition. The perturbative series is not convergent in most cases. So, even though superstring theory was proposed as the unified theory including gravity, we could not use it to study effects in strong gravitational fields to address the mysteries of quantum gravity. There was a concern that there may be some unknown strong coupling phenomenon which makes the theory ill-defined. Does string theory exist? Recently a remarkable correspondence between gauge theory and string theory was discovered (see for a review) and it has partially resolved this problem. It also realized the earlier expectation that the ’t Hooft large-$`N`$ limit of gauge theory is a string theory. The realization, however, came with a twist: the string theory and the gauge theory live in different dimensions. The correspondence was discovered in the study of black holes in string theory. ## 2 Black $`p`$-branes in string theory It has been recognized for a long time that quantizing gravity is not a straightforward task. There are many aspects of quantum gravity which we do not know how to deal with in the standard field theory method. If string theory truly unifies quantum mechanics and general relativity, string theory should be able to address these puzzles of quantum gravity. One of the important questions in quantum gravity is the information paradox of black holes. The problem is roughly as follows. The black hole is characterized by the presence of an event horizon, which separates its interior region from an outside observer. Now suppose we throw some object into the black hole.
Once the object passes the horizon, we won’t be able to access the information carried by the object. In classical general relativity, the information is not lost, but is merely hidden behind the horizon. In quantum mechanics, however, the situation is different. It was shown by Hawking, based on the semi-classical approximation, that the black hole emits purely thermal radiation, namely radiation with maximum randomness. So it seems that the information carried by the object is completely dissipated as thermal radiation. The loss of information may not be a problem in classical physics, but it is a big problem in quantum mechanics, since it means that we cannot maintain quantum coherence under time evolution. It violates the basic axioms of quantum mechanics. This is the information paradox of the quantum black hole. String theory in fact has a large class of black hole solutions. They are called black $`p`$-branes, where $`p`$ is an integer that can be $`0,1,2`$, etc. The $`0`$-brane is the standard black hole, which is a point source. The spacetime becomes flat when one goes away from the center, but it is strongly curved near the center and there is a horizon. When $`p=1`$, we have the black $`1`$-brane. It is a string-like source, spread in one spatial dimension. Combined with the time direction, the source looks like a two-dimensional surface. The $`2`$-brane is a membrane. In general, the $`p`$-brane is extended in $`p`$ spatial directions and $`1`$ time direction. So string theory indeed has black hole solutions. To address the question of the information paradox, one needs to know what happens if we throw a string into the black $`p`$-brane. There are two ways to approach this problem. One is to study the propagation of the string in the black $`p`$-brane geometry. There is, however, another way, which is to use the collective coordinates of the $`p`$-brane. In this approach, we first identify the intrinsic degrees of freedom of the $`p`$-brane, which describe the motion and the fluctuation of the brane. One can then try to formulate the problem in terms of interactions between the string and the collective coordinates of the $`p`$-brane. This approach was taken by Polchinski, and he called this description of the $`p`$-brane the D-brane. In a certain situation, in particular when we study physics at low energy, this D-brane description tells you that the dynamics of the $`p`$-brane is described by a gauge theory in $`(p+1)`$ dimensions. So we have these two descriptions of the same object. Viewed one way, we have strings moving in the background geometry of the black $`p`$-brane. Viewed another way, we have the collective coordinates of the $`p`$-brane, which form the $`(p+1)`$-dimensional gauge theory, interacting with strings. The equivalence of these two descriptions is the origin of the connection between gauge theory and string theory. In the past few years, various pieces of evidence have emerged in support of the idea that gauge theory and quantum gravity are closely related. At the same time, the gauge theory description has provided us with very strong computational tools to study string theory. ## 3 AdS/CFT correspondence The evidence in support of the correspondence between gauge theory and string theory crystallized in the work of Maldacena.
He formulated the conjecture that superstring theory in a curved $`10`$-dimensional space, which is $`5`$-dimensional anti-de Sitter space ($`AdS_5`$) times the $`5`$-dimensional sphere ($`S^5`$), is equivalent to a gauge theory in $`4`$ dimensions with $`𝒩=4`$ supersymmetry. The anti-de Sitter space is a homogeneous space with negative curvature, which I will describe in more detail below. The gauge theory with $`𝒩=4`$ supersymmetry is a conformal field theory (CFT). Thus the conjecture by Maldacena is called the AdS/CFT correspondence. To formulate the conjecture, Maldacena looked at the black $`3`$-brane. The D-brane description suggests that the low energy physics of the $`3`$-brane is described by the gauge theory, in this case the 4-dimensional gauge theory with $`𝒩=4`$ supersymmetry. This description becomes exact in the low energy limit. He then noticed that, in precisely the same limit, the black hole description also becomes nice; in this limit, the region near the event horizon of the 3-brane is amplified. The geometry near the horizon is that of $`AdS_5\times S^5`$. In this way, the essence of the correspondence between gauge theory and string theory has been extracted. Let me describe what the anti-de Sitter space is. The 5-dimensional anti-de Sitter space ($`AdS_5`$) has $`4`$ spatial dimensions and $`1`$ time dimension. The spacelike section of $`AdS_5`$ is simply the hyperbolic space of Lobachevsky and Bolyai. It is historically the first counter-example to Euclid’s 5th postulate about parallel lines. In Figure 1, I show Poincare’s disk model of the hyperbolic space. In this model, geodesics are represented by semi-circles which intersect the boundary at right angles. In our case, the disk is $`4`$-dimensional, so its boundary is a 3-dimensional sphere. $`AdS_5`$ is obtained by simply adding $`1`$ time direction to this. You may picture it as a solid cylinder, as in Figure 2. The boundary of $`AdS_5`$ is the 4-dimensional space $`R\times S^3`$, and it is identified as the spacetime for the gauge theory. More precisely, the conjecture states that Type IIB superstring theory (consisting only of closed strings with the same chirality in both left and right moving sectors on the worldsheet) is equivalent to the $`𝒩=4`$ supersymmetric gauge theory in 4 dimensions with gauge group $`SU(N)`$. Type IIB superstring on $`AdS_5\times S^5`$ has three dimensionful parameters, $`l_{AdS},l_{string}`$ and $`l_{Planck}`$. By the stringy generalization of Einstein’s equation, the radius of $`S^5`$ is required to be the same as the curvature radius $`l_{AdS}`$ of $`AdS_5`$. The string length $`l_{string}`$ characterizes the size of the zero point oscillation of the string, $`i.e.`$ (string tension)$`=l_{string}^{-2}`$. Finally the Planck length $`l_{Planck}`$ in 10 dimensions is expressed as a combination of the string coupling constant $`g_{string}`$ and the string length as $`l_{Planck}=g_{string}^{1/4}l_{string}`$. The theory is then characterized by two dimensionless combinations of these, for example $`l_{AdS}/l_{string}`$ and $`l_{AdS}/l_{Planck}`$. On the other hand, the gauge theory also has two parameters, the size of the gauge group $`N`$ and the gauge coupling constant $`g_{gauge}`$. (Since the $`𝒩=4`$ theory is ultraviolet finite, $`g_{gauge}`$ is really a constant.)
These two sets of parameters are related as $$\frac{l_{AdS}}{l_{string}}=(g_{gauge}^2N)^{1/4},\frac{l_{AdS}}{l_{Planck}}=N^{1/4}.$$ In particular, if we take the limit $`N\rightarrow \mathrm{\infty }`$ keeping $`g_{gauge}^2N`$ finite, quantum gravity effects in $`AdS_5\times S^5`$ are suppressed and we have strings freely propagating in the spacetime. This realizes the idea that the ’t Hooft large-$`N`$ limit of gauge theory is a tree-level string theory. After Maldacena’s work, the conjecture was formulated in a more precise manner and various tests of this conjecture have been made. The AdS/CFT correspondence states that the two quantum theories, namely the string theory in $`10`$ dimensions and the gauge theory in $`4`$ dimensions, are equivalent. This means that, first of all, the Hilbert spaces of the two theories must be identical. The Hilbert space of the string theory contains gravitons and strings propagating in the AdS space, and black holes and various black p-branes. On the other hand, the Hilbert space of the gauge theory is constructed from the gauge invariant observables such as the field strength of the gauge field. So there has to be a dictionary between the two. In fact, we have succeeded in identifying what the gravitons correspond to on the gauge theory side, and we have also understood some aspects of black holes from the gauge theory point of view. The dictionary of the two Hilbert spaces is still incomplete, and there are many things to be understood here. The quantum theories are characterized by the operator algebras on the Hilbert spaces. Thus the equivalence of the two theories implies that the correlation functions of the two theories must be the same. After the conjecture was formulated, various computations were done on both sides, and many non-trivial agreements have been found. String theory computations also give various theoretical predictions about the gauge theory, and these are currently being examined using gauge theory methods. There are two surprising features of the AdS/CFT correspondence. The first surprise is that the string theory and the gauge theory live in different spacetime dimensions. I should point out that this is not the same as the Kaluza-Klein reduction. In the Kaluza-Klein mechanism, one curls up a part of the spacetime into a tiny circle. If we ignore field fluctuations on the circle (or compact manifolds, in general), the dimensionality of the space is reduced. Here we are not truncating a theory. The equivalence states that the string theory is fully equivalent to the gauge theory, without any reduction. The second surprise is that string theory contains gravity and the gauge theory doesn’t. These two surprises seem to be related to an earlier observation on quantum gravity by ’t Hooft and Susskind. They suggested, based on properties of black holes, that in quantum gravity the information of the theory can be stored in lower dimensions. This idea is called the Holography of Quantum Gravity. It seems that the AdS/CFT correspondence is an explicit realization of this idea. ## 4 Summary and Outlook Let me summarize what has been accomplished. I think that the most important fact coming out of this development is that we now have a complete definition of string theory in certain cases. For superstring theory on $`AdS_5\times S^5`$, the 4-dimensional gauge theory with $`𝒩=4`$ supersymmetry can in principle give the non-perturbative definition.
In addition, the AdS/CFT correspondence realizes the idea that the ’t Hooft large-$`N`$ limit of the $`SU(N)`$ gauge theory is string theory. It also realizes the idea that quantum gravity is holographic. I would like to close this talk by pointing out some future directions. Since we know string theory and gauge theory are equivalent, we may try to use string theory to do difficult computations in gauge theory or vice versa, and thereby double our theoretical knowledge. In fact, string theory computations have shown us various new results about gauge theories with conformal symmetry. For applications to QCD, to go beyond the qualitative analysis, we need to understand string dynamics in curved backgrounds better, and I hope there will be some progress in this direction. One should also hope to learn much more about quantum gravity using gauge theory. Some aspects of quantum black holes have been studied using the gauge theory method, in particular the microscopic derivation of the black hole entropy by Strominger and Vafa. The correspondence has so far been limited to the case when string theory is on a space which is asymptotically anti-de Sitter. It is desirable to extend this to other geometries, such as asymptotically flat space. More generally, one should hope to find a formulation of string theory independent of its background geometry. The main aim of string theory research is still the search for the unified theory. In the course of this research, we have found that string theory is also useful for studying various gauge theories. I hope that useful collaborations between lattice gauge theorists and string theorists will emerge from this correspondence. ## Acknowledgments I would like to thank the organizers of LATTICE 99 for the very stimulating conference and for their hospitality. This research was supported in part by NSF grant PHY-95-14797, DOE grant DE-AC03-76SF00098, and the Caltech Discovery Fund.
# Topology and phase transitions: a paradigmatic evidence ## Abstract We report upon the numerical computation of the Euler characteristic $`\chi `$ (a topological invariant) of the equipotential hypersurfaces $`\mathrm{\Sigma }_v`$ of the configuration space of the two-dimensional lattice $`\phi ^4`$ model. The pattern $`\chi (\mathrm{\Sigma }_v)`$ vs $`v`$ (potential energy) reveals that a major topology change in the family $`\{\mathrm{\Sigma }_v\}_v`$ is at the origin of the phase transition in the model considered. The direct evidence given here - of the relevance of topology for phase transitions - is obtained through a general method that can be applied to any other model. Suitable topology changes of equipotential submanifolds of configuration space can entail thermodynamic phase transitions. This is the novel result of the present Letter. The method we use, though applied here to a particular model, is of general validity and is of prospective interest for the study of phase transitions in those systems that challenge the conventional approaches, as might be the case for finite systems (like atomic and molecular clusters), off-lattice polymers and proteins, glasses, and in general amorphous and disordered materials. Let us begin by giving a theoretical argument and then proceed by numerically proving its truth for the $`2d`$ lattice $`\phi ^4`$ model. Consider classical many-particle systems described by standard Hamiltonians $$H(p,q)=\underset{i=1}{\overset{N}{\sum }}\frac{1}{2}p_i^2+V(q)$$ (1) where the $`(p,q)\equiv (p_1,\mathrm{\dots },p_N,q_1,\mathrm{\dots },q_N)`$ coordinates assume continuous values and $`V(q)`$ is bounded below. The statistical behaviour of physical systems described by Hamiltonians as in Eq.(1) is encompassed, in the canonical ensemble, by the partition function in phase space $`Z_N(\beta )`$ $`=`$ $`{\displaystyle \int \underset{i=1}{\overset{N}{\prod }}dp_i\,dq_i\,e^{-\beta H(p,q)}}=\left({\displaystyle \frac{2\pi }{\beta }}\right)^{\frac{N}{2}}{\displaystyle \int \underset{i=1}{\overset{N}{\prod }}dq_i\,e^{-\beta V(q)}}` (2) $`=`$ $`\left({\displaystyle \frac{2\pi }{\beta }}\right)^{\frac{N}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}dv\,e^{-\beta v}{\displaystyle \int _{\mathrm{\Sigma }_v}}{\displaystyle \frac{d\sigma }{\Vert \mathrm{\nabla }V\Vert }}` (3) where the last term is written using a co-area formula , and $`v`$ labels the equipotential hypersurfaces $`\mathrm{\Sigma }_v`$ of configuration space, $`\mathrm{\Sigma }_v=\{(q_1,\mathrm{\dots },q_N)\in \mathrm{R}^N|V(q_1,\mathrm{\dots },q_N)=v\}`$. Equation (3) shows that for Hamiltonians (1) the relevant statistical information is contained in the canonical configurational partition function $`Z_N^C=\int \mathrm{\Pi }dq_i\mathrm{exp}[-\beta V(q)]`$. Remarkably, $`Z_N^C`$ is decomposed – in the last term of Eq.(3) – into an infinite summation of geometric integrals, $`\int _{\mathrm{\Sigma }_v}d\sigma /\Vert \mathrm{\nabla }V\Vert `$, defined on the $`\{\mathrm{\Sigma }_v\}_v`$. Once the microscopic interaction potential $`V(q)`$ is given, the configuration space of the system is automatically foliated into the family $`\{\mathrm{\Sigma }_v\}_v`$ of these equipotential hypersurfaces. Now, from standard statistical mechanical arguments we know that, at any given value of the inverse temperature $`\beta `$, the larger the number $`N`$ of particles the closer to $`\mathrm{\Sigma }_v\equiv \mathrm{\Sigma }_{u_\beta }`$ are the microstates that significantly contribute to the averages – computed through $`Z_N(\beta )`$ – of thermodynamic observables.
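As a concrete illustration of the co-area decomposition in Eq. (3), the following minimal Python sketch (our own illustration, not part of the Letter; the toy potential and the value of $`\beta `$ are assumptions chosen so that both sides can be evaluated directly) compares the ordinary configurational integral with its decomposition into geometric integrals over the level sets.

```python
import numpy as np
from scipy.integrate import quad

# Toy check of the co-area decomposition for V(q) = (q1^2 + q2^2)/2
# in N = 2 dimensions.  The level sets Sigma_v are circles of radius
# r(v) = sqrt(2v), on which ||grad V|| = r, so the geometric integral
# of 1/||grad V|| over Sigma_v equals 2*pi*r / r = 2*pi for every v.

beta = 1.7

# direct Gaussian integral: int d^2q exp(-beta*V) = 2*pi/beta
z_direct = 2.0 * np.pi / beta

# co-area form: int_0^inf dv exp(-beta*v) * int_{Sigma_v} dsigma/||grad V||
z_coarea, _ = quad(lambda v: np.exp(-beta * v) * 2.0 * np.pi, 0.0, np.inf)

print(z_direct, z_coarea)  # both ~ 3.696: the two decompositions agree
```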
The hypersurface $`\mathrm{\Sigma }_{u_\beta }`$ is the one associated with $`u_\beta =(Z_N^C)^{-1}\int \mathrm{\Pi }dq_i\,V(q)\,e^{-\beta V(q)}`$, the average potential energy computed at a given $`\beta `$. Thus, at any $`\beta `$, if $`N`$ is very large the effective support of the canonical measure shrinks very close to a single $`\mathrm{\Sigma }_v=\mathrm{\Sigma }_{u_\beta }`$. Hence, and on the basis of what we found in , let us make explicit the Topological Hypothesis: the basic origin of a phase transition lies in a suitable topology change of the $`\{\mathrm{\Sigma }_v\}`$, occurring at some $`v_c`$. This topology change induces the singular behavior of the thermodynamic observables at a phase transition. By change of topology we mean that the $`\{\mathrm{\Sigma }_v\}_{v<v_c}`$ are not diffeomorphic to the $`\{\mathrm{\Sigma }_v\}_{v>v_c}`$. In other words, the claim is that the canonical measure should “feel” a big and sudden change – if any – of the topology of the equipotential hypersurfaces of its underlying support, the consequence being the appearance of the typical signals of a phase transition, i.e. almost singular (at finite $`N`$) energy or temperature dependences of the averages of appropriate observables. The larger $`N`$, the narrower is the effective support of the measure and hence the sharper these signals can be, until true singularities appear in the $`N\rightarrow \mathrm{\infty }`$ limit. This point of view has the interesting consequence that – also at finite $`N`$ – in principle different mathematical objects, i.e. manifolds of different cohomology type, could be associated with different thermodynamical phases, whereas from the point of view of measure theory the only mathematical property available to signal the appearance of a phase transition is the loss of analyticity of the grand-canonical and canonical averages, a fact which is compatible with analytic statistical measures only in the mathematical $`N\rightarrow \mathrm{\infty }`$ limit. In order to prove or disprove the conjectured role of topology, we have to explicitly work out adequate information about the topology of the members of the family $`\{\mathrm{\Sigma }_v\}_v`$ for some given physical system. Below it is shown how this goal is practically achieved by means of numerical computations. As it is conjectured that the counterpart of a phase transition is a breaking of diffeomorphicity among the surfaces $`\mathrm{\Sigma }_v`$, it is appropriate to choose a diffeomorphism invariant to probe if and how the topology of the $`\mathrm{\Sigma }_v`$ changes as a function of $`v`$. This is a very challenging task because we have to deal with high-dimensional manifolds. Fortunately a topological invariant exists whose computation is feasible, yet demands a big effort. This is the Euler characteristic, a diffeomorphism invariant, expressing fundamental topological information. In order to make the reader acquainted with it, we recall that a way to analyze a geometrical object is to fragment it into other more familiar objects and then to examine how these pieces fit together. Take for example a surface $`\mathrm{\Sigma }`$ in Euclidean three-dimensional space. Slice $`\mathrm{\Sigma }`$ into pieces that are curved triangles (this is called a triangulation of the surface). Then count the number $`F`$ of faces of the triangles, the number $`E`$ of edges, and the number $`V`$ of vertices on the tessellated surface.
Now, no matter how we triangulate a compact surface $`\mathrm{\Sigma }`$, $`\chi (\mathrm{\Sigma })=F-E+V`$ will always equal a constant which is characteristic of the surface and which is invariant under diffeomorphisms $`\varphi :\mathrm{\Sigma }\rightarrow \mathrm{\Sigma }^{\prime }`$. This is the Euler characteristic of $`\mathrm{\Sigma }`$. At higher dimensions this can again be defined by using higher dimensional generalizations of triangles (simplexes) and by defining the Euler characteristic of the $`n`$-dimensional manifold $`\mathrm{\Sigma }`$ to be $$\chi (\mathrm{\Sigma })=\underset{k=0}{\overset{n}{\sum }}(-1)^k(\mathrm{\#}\text{ of faces of dimension }k).$$ (4) In differential topology a more standard definition of $`\chi (\mathrm{\Sigma })`$ is $$\chi (\mathrm{\Sigma })=\underset{k=0}{\overset{n}{\sum }}(-1)^kb_k(\mathrm{\Sigma })$$ (5) where also the numbers $`b_k`$ – the Betti numbers of $`\mathrm{\Sigma }`$ – are diffeomorphism invariants. While it would be hopeless to try to practically compute $`\chi (\mathrm{\Sigma })`$ from Eq.(5) in the case of non-trivial physical models at large dimension, there is a possibility given by a powerful theorem, the Gauss-Bonnet-Hopf theorem, that relates $`\chi (\mathrm{\Sigma })`$ with the total Gauss-Kronecker curvature of the manifold, i.e. $$\chi (\mathrm{\Sigma })=\gamma \int _\mathrm{\Sigma }K_G\,d\sigma $$ (6) which is valid for even dimensional hypersurfaces of Euclidean spaces $`\mathrm{R}^N`$ \[here $`\mathrm{dim}(\mathrm{\Sigma })=n\equiv N-1`$\], and where: $`\gamma =2/Vol(𝕊_1^n)`$ is twice the inverse of the volume of an $`n`$-dimensional sphere of unit radius; $`K_G`$ is the Gauss-Kronecker curvature of the manifold; $`d\sigma =\sqrt{\mathrm{det}(g)}\,dx^1dx^2\mathrm{\cdots }dx^n`$ is the invariant volume measure of $`\mathrm{\Sigma }`$ and $`g`$ is the Riemannian metric induced from $`\mathrm{R}^N`$. Let us briefly sketch the meaning and definition of the Gauss-Kronecker curvature. The study of the way in which an $`n`$-surface $`\mathrm{\Sigma }`$ curves around in $`\mathrm{R}^N`$ is measured by the way the normal direction changes as we move from point to point on the surface. The rate of change of the normal direction $`𝝃`$ at a point $`x\in \mathrm{\Sigma }`$ in direction $`𝐯`$ is described by the shape operator $`L_x(𝐯)=-\mathrm{\nabla }_𝐯𝝃`$, where $`𝐯`$ is a tangent vector at $`x`$ and $`\mathrm{\nabla }_𝐯`$ is the directional derivative, hence $`L_x(𝐯)=-(\mathrm{\nabla }\xi _1\cdot 𝐯,\mathrm{\dots },\mathrm{\nabla }\xi _{n+1}\cdot 𝐯)`$; gradients and vectors are represented in $`\mathrm{R}^N`$. As $`L_x`$ is an operator of the tangent space at $`x`$ into itself, there are $`n`$ independent eigenvalues $`\kappa _1(x),\mathrm{\dots },\kappa _n(x)`$ which are called the principal curvatures of $`\mathrm{\Sigma }`$ at $`x`$. Their product is the Gauss-Kronecker curvature: $`K_G(x)=\prod _{i=1}^n\kappa _i(x)=\mathrm{det}(L_x)`$. The practical computation of $`K_G`$ for the equipotential hypersurfaces $`\mathrm{\Sigma }_v`$ proceeds as follows. Let $`𝝃=\mathrm{\nabla }V/\Vert \mathrm{\nabla }V\Vert `$ be the unit normal vector to $`\mathrm{\Sigma }_v`$ at a given point $`x`$, and let $`\{𝐯_1,\mathrm{\dots },𝐯_n\}`$ be any basis for the tangent space of $`\mathrm{\Sigma }_v`$ at $`x`$. Then $$K_G(x)=\frac{(-1)^n}{\Vert \mathrm{\nabla }V\Vert ^n}\left|\left(\begin{array}{c}\mathrm{\nabla }_{𝐯_1}\mathrm{\nabla }V\\ \mathrm{\vdots }\\ \mathrm{\nabla }_{𝐯_n}\mathrm{\nabla }V\\ \mathrm{\nabla }V\end{array}\right)\right|\left|\left(\begin{array}{c}𝐯_1\\ \mathrm{\vdots }\\ 𝐯_n\\ \mathrm{\nabla }V\end{array}\right)\right|^{-1}.$$ (7) Let us now consider the family of $`\{\mathrm{\Sigma }_v\}_v`$ associated with a particular physical system and show how things work in practice.
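To make Eq. (7) concrete, here is a small numerical sketch (our own illustration, not the authors' code) that evaluates the Gauss-Kronecker curvature of a level set using finite-difference gradients and an orthonormal tangent basis. It is checked on $`V=|q|^2/2`$, whose level sets are spheres of radius $`r`$ with $`K_G=r^{-n}`$, so that Eq. (6) then gives $`\chi =2`$.

```python
import numpy as np

def grad(V, q, h=1e-5):
    # central finite-difference gradient of the potential
    g = np.zeros_like(q)
    for i in range(q.size):
        e = np.zeros_like(q); e[i] = h
        g[i] = (V(q + e) - V(q - e)) / (2 * h)
    return g

def gauss_kronecker(V, q, h=1e-5):
    # Eq. (7): ratio of two determinants built from directional
    # derivatives of grad V along a tangent basis v_1, ..., v_n
    N = q.size; n = N - 1
    gV = grad(V, q)
    basis = np.linalg.svd(gV[None, :])[2][1:]   # orthonormal complement of gV
    rows_num = [(grad(V, q + h * v) - grad(V, q - h * v)) / (2 * h)
                for v in basis]                 # Hessian acting on each v_i
    num = np.linalg.det(np.vstack(rows_num + [gV]))
    den = np.linalg.det(np.vstack(list(basis) + [gV]))
    return (-1.0) ** n * num / (den * np.linalg.norm(gV) ** n)

V = lambda q: 0.5 * np.dot(q, q)
q = np.array([1.0, 2.0, 2.0])                   # a point at radius r = 3
print(gauss_kronecker(V, q), 1.0 / 3.0 ** 2)    # both ~ 0.1111 = 1/r^2
```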
We consider the so-called $`\phi ^4`$ model on a $`d`$-dimensional lattice $`\mathrm{Z}^d`$ with $`d=1,2`$, described by the potential function $$V=\underset{i\in \mathrm{Z}^d}{\sum }\left(-\frac{\mu ^2}{2}q_i^2+\frac{\lambda }{4}q_i^4\right)+\underset{\langle ik\rangle \in \mathrm{Z}^d}{\sum }\frac{1}{2}J(q_i-q_k)^2$$ (8) where $`\langle ik\rangle `$ stands for nearest-neighbor sites. This system has a discrete $`\mathrm{Z}_2`$-symmetry and short-range interactions; therefore, according to the Mermin-Wagner theorem, in $`d=1`$ there is no phase transition whereas in $`d=2`$ there is a symmetry-breaking transition of the same universality class as the $`2d`$ Ising model. Independently of any statistical measure, let us now probe, by computing $`\chi (\mathrm{\Sigma }_v)`$ vs $`v`$ according to Eq.(6), if and how the topology of the hypersurfaces $`\mathrm{\Sigma }_v`$ varies with $`v`$. To this aim we first devised an algorithm of Monte Carlo type by constructing a Markov chain on any desired surface $`\mathrm{\Sigma }_v`$. This is obtained by means of a “demon” algorithm corrected with a projection technique which provides a simple and efficient method to constrain a random walk on a level hypersurface, here of the potential function. Each new step so obtained on $`\mathrm{\Sigma }_v`$ represents a trial step which is accepted or rejected according to a Metropolis-like “importance sampling” criterion adapted to the weight $`\sqrt{\mathrm{det}(g)}`$. With any Monte Carlo scheme we can actually compute densities, that is we can only estimate $`\int _{\mathrm{\Sigma }_v}K_G\,d\sigma /\int _{\mathrm{\Sigma }_v}d\sigma `$, the average of $`K_G`$, rather than its total value (6) on $`\mathrm{\Sigma }_v`$. Hence the need for an estimate of $`Area(\mathrm{\Sigma }_v)=\int _{\mathrm{\Sigma }_v}d\sigma `$ as a function of $`v`$. To this aim we worked out a geometric formula that links the relative variation of $`Area(\mathrm{\Sigma }_v)`$ with respect to an arbitrary initial value $`Area(\mathrm{\Sigma }_{v_0})`$, to another Monte Carlo average on $`\mathrm{\Sigma }_v`$: $`\langle M_1/\Vert \mathrm{\nabla }V\Vert \rangle _{MC}^{\mathrm{\Sigma }_v}`$, where $`M_1=\frac{1}{n}\sum _{i=1}^n\kappa _i`$ is the mean curvature of $`\mathrm{\Sigma }_v`$. Thus the final outcomes of our computations are the relative variations of the Euler characteristic. The computation of $`K_G`$ at any point $`x\in \mathrm{\Sigma }_v`$ proceeds by working out an orthogonal basis for the tangent space at $`x`$, orthogonal to $`𝝃=\mathrm{\nabla }V/\Vert \mathrm{\nabla }V\Vert `$, by means of a Gram-Schmidt orthogonalization procedure. Then Eq.(7) is used to compute $`K_G`$ at $`x`$. On each $`\mathrm{\Sigma }_v`$ we sampled from $`1\times 10^6`$ to $`3.5\times 10^6`$ points where we computed $`K_G`$. This number of points was varied, and several initial conditions were also considered in order to check the stability of the results. The computations were performed for $`\mathrm{dim}(\mathrm{\Sigma }_v)=48,80`$ (i.e. $`N=7\times 7,\ 9\times 9`$) and with the choice $`\lambda =0.6,\mu ^2=2,J=1`$ for the parameters of the potential. In order to test the correctness of our numerical “protocol” to compute $`\chi (\mathrm{\Sigma }_v)`$, and to assess its degree of reliability, we checked the method against a simplified form of the potential $`V`$ in Eq. (8), i.e. with $`\lambda =J=0,\mu ^2=-1`$. In this case the $`\mathrm{\Sigma }_v`$ are hyperspheres and therefore $`\chi (𝕊_v^n)=2`$ for any even $`n`$.
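The projection idea described above can be sketched in a few lines. The following Python fragment is an illustrative reconstruction under our own conventions, not the authors' code; it omits the Metropolis accept/reject step with the $`\sqrt{\mathrm{det}(g)}`$ weight and simply takes a random tangent step on $`\mathrm{\Sigma }_v`$, then projects back onto the level surface by Newton iterations along the gradient, here for the $`1d`$ $`\phi ^4`$ chain of Eq. (8) with periodic boundaries.

```python
import numpy as np

MU2, LAM, J = 2.0, 0.6, 1.0   # the parameter choice quoted in the text

def V(q):
    # 1d phi^4 chain of Eq. (8) with periodic boundaries
    return np.sum(-0.5 * MU2 * q**2 + 0.25 * LAM * q**4
                  + 0.5 * J * (q - np.roll(q, 1))**2)

def grad_V(q):
    return (-MU2 * q + LAM * q**3
            + J * (2.0 * q - np.roll(q, 1) - np.roll(q, -1)))

def constrained_step(q, v, eps, rng):
    g = grad_V(q)
    step = eps * rng.normal(size=q.size)
    step -= g * (step @ g) / (g @ g)        # keep only the tangent part
    q_new = q + step
    for _ in range(50):                     # project back onto V(q) = v
        g = grad_V(q_new)
        q_new = q_new - g * (V(q_new) - v) / (g @ g)
        if abs(V(q_new) - v) < 1e-10:
            break
    return q_new

rng = np.random.default_rng(0)
q = rng.normal(size=16)
v = V(q)                                    # walk on the surface through q
for _ in range(1000):
    q = constrained_step(q, v, 0.05, rng)
print(V(q) - v)                             # ~ 0: the walk stays on Sigma_v
```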
$`Area(𝕊_v^n)`$ is analytically known as a function of the radius $`\sqrt{v}`$, therefore the starting value $`Area(\mathrm{\Sigma }_{v_0})`$ is known and in this case we can compute the actual values of $`\chi (\mathrm{\Sigma }_v)`$ instead of their relative variations only. In Fig.1 we report $`\chi (\mathrm{\Sigma }_v=𝕊_v^n)`$ vs $`v/N`$ for $`N=5\times 5`$; the results agree with the theoretical value to within an error of a few percent, a very good precision in view of the large variations of $`\chi (\mathrm{\Sigma }_v)`$ that are found with the full expression (8) of $`V`$. In Fig.2 we report the results for the $`1d`$ lattice, which is known not to undergo any phase transition. Apart from some numerical noise - here enhanced by the more complicated topology of the $`\mathrm{\Sigma }_v`$ when $`\lambda ,J\ne 0`$ - a monotonically (on average) decreasing pattern of $`\chi (v/N)`$ is found. Since the variation of $`\chi (v/N)`$ signals a topology change of the $`\{\mathrm{\Sigma }_v\}`$, Fig. 2 tells us that a “smoothly” varying topology is not sufficient for the appearance of a phase transition. In fact, when the $`2d`$ lattice is considered, the pattern of $`\chi (v/N)`$ is very different: it displays a rather abrupt change of the topology variation rate with $`v/N`$ at some $`v_c/N`$. This result is reported in Fig.3 for a lattice of $`N=7\times 7`$ sites, and in Fig. 4 for a larger lattice of $`N=9\times 9`$ sites. The question is now whether the value $`v_c/N`$, at which $`\chi (v/N)`$ displays a cusp, has anything to do with the thermodynamic phase transition, i.e. we wonder if the effective support of the canonical measure shrinks close to $`\mathrm{\Sigma }_{v\simeq v_c}`$ just at $`\beta \simeq 1/T_c`$, the (inverse) critical temperature of the phase transition. The answer is in the affirmative. In fact, the numerical analysis in Refs. shows that – with $`\lambda =0.6,\mu ^2=2,J=1`$ – the function $`\frac{1}{N}\langle V\rangle (T)`$ and its derivative signal the phase transition at $`\frac{1}{N}\langle V\rangle \simeq 3.75`$, a value in very good agreement – within the numerical precision – with $`v_c/N`$ where the cusp of $`\chi (v/N)`$ shows up. Through the computation of the $`v`$-dependence of a topological invariant, the hypothesis of a deep connection between topology changes of the $`\{\mathrm{\Sigma }_v\}`$ and phase transitions has been given a direct confirmation. Moreover, we found that a sudden “second order variation” of the topology of these hypersurfaces is the “suitable” topology change - mentioned at the beginning of the present Letter - that underlies the phase transition of second kind in the lattice $`\phi ^4`$ model. There is no reason why the results presented here should be peculiar only to the chosen model, and therefore they point to a general validity of the relationship between topology and phase transitions, opening a wide field of future investigations and applications. We warmly thank A. Abbondandolo, L. Casetti, C. Chiuderi and G. Vezzosi for helpful discussions and comments. One of us (MP) wishes to thank E.G.D. Cohen and D. Ruelle for an encouraging and helpful discussion held at I.H.E.S. (Bures-sur-Yvette).
# Radio Astronomical Polarimetry and the Lorentz Group ## 1 Introduction In radio astronomy, transformations occur during the propagation and reception of radio waves that act to change the state of polarized radiation. Some of these transformations, such as Faraday rotation in the ionosphere or the interstellar medium, arise from propagation effects that may themselves be of astrophysical interest. Others originate from instrumental effects such as differential amplification or receiver cross-talk, and have an adverse effect on polarimetric observations. Realistically, many such effects may be present, each having its own time and frequency dependence, and collectively acting to distort measurements of the polarized radiation. The interpretation and calibration of these observations may be quite complex, and it is useful to have a general context in which to describe these transformations. Linear transformations of fully polarized radiation were first investigated by Jones (1941), who represented transformations of the two-component transverse electric field in terms of 2x2 complex matrices now called Jones matrices. This analysis was extended to partially polarized radiation by Parrent & Roman (1960), who used Jones matrices to describe the transformation properties of the coherency matrix. Alternatively, both fully and partially polarized radiation may be described by the Stokes parameters, and their linear transformations may be represented in terms of 4x4 real matrices called Mueller matrices (Mueller 1948). In fact these transformations are intimately related to the Lorentz group. This relationship arises from the fact that for a plane propagating wave Maxwell’s equations admit two independent solutions, representing the two orthogonal senses of polarization. Thus polarized radiation constitutes a two-state system, and the linear transformations of all such systems are described by the Lorentz group. This relationship has long been known in optics, and different aspects have been discussed by many authors (e.g. Barakat 1963, Whitney 1971, Cloude 1986, Pellat-Finet & Bausset 1992). Recent work has focused on representations of the Jones and Mueller matrices (Opatrny & Perina 1993, Brown & Bak 1995, Han, Kim & Noz 1997). Up to a multiplicative constant the set of Jones matrices constitutes the group SL(2,C), which forms the spin 1/2 representation of the Lorentz group. The corresponding set of Mueller matrices constitutes the group SO(3,1) and forms the spin 1 representation of the Lorentz group. Finally, the relationship between Jones and Mueller matrices is represented through the 2-1 mapping between these two groups. The formulation of the transformation properties of polarized radiation in terms of Lorentz transformations affords considerable insight, including the interpretation of the Stokes parameters as a Lorentz 4-vector, the classification of transformations of this 4-vector as rotations or boosts, and the existence of a polarimetric analogue to the invariant interval. In this paper these concepts are reviewed in the context of radio astronomical polarimetry. Since both linear and circular bases are widely used in radio astronomy, a basis-independent formulation is emphasized. This formalism is then used to construct a model for the propagation and reception of radio waves. ## 2 Representations of Polarized Radiation Consider the representation of a transverse electromagnetic wave.
Such a wave may be described through its two-component transverse electric field vector $`ℰ(t)`$. This vector may also represent the electric field in a waveguide, or the voltage in a pair of cables. The vector $`ℰ(t)`$ is commonly written in terms of the two-component complex analytic signal $`𝐄(t)`$ as $`ℰ(t)=\mathrm{Re}\left[𝐄(t)\mathrm{exp}\{-i\omega t\}\right]`$. This construction is familiar from both optics (Born & Wolf 1980) and signal processing (Bracewell 1986). For fully polarized radiation the analytic signal is independent of time. In this case the relative amplitudes and phases of the two components of $`𝐄`$ specify the state of elliptical polarization of the plane wave. For partially polarized radiation $`𝐄(t)`$ is time dependent, and measurable properties of the wave may instead be defined through time-averaging. Such an averaging procedure is conveniently treated through the coherency matrix (Wiener 1930, Wolf 1959). This is a 2x2 Hermitian matrix formed from the direct product of the analytic signal, and may be written as $`\rho =\langle 𝐄(t)𝐄^{\dagger }(t)\rangle `$. Here the angular brackets denote time-averaging. As with any Hermitian matrix, the coherency matrix may be written in terms of 4 real quantities $`(S_o,𝐒)`$ as $$\rho =(S_o\sigma _o+𝐒\cdot \sigma )/2$$ (1) where $`\sigma _𝐨`$ is the 2x2 identity matrix and $`\sigma `$ is a 3-vector whose components are the Pauli spin matrices. These three 2x2 matrices are traceless and Hermitian with determinant $`-1`$. The 4 parameters $`(S_o,𝐒)`$ are simply the mean Stokes parameters of the plane wave (Fano 1954), with $`S_o`$ representing the total intensity. Let us now introduce a particular basis. The electric field vector may be represented in the Cartesian basis $`(\widehat{x},\widehat{y},\widehat{z})`$, in which $`ℰ(t)=(ℰ_x(t),ℰ_y(t))`$ is resolved into mutually orthogonal components, each orthogonal to the direction of propagation $`z`$ of the plane wave. The corresponding analytic signal is $`𝐄(t)=(E_x(t),E_y(t))`$. For the 3-dimensional space of $`𝐒`$ the Cartesian basis $`(\widehat{q},\widehat{u},\widehat{v})`$ is used, along with the customary representation of the Pauli matrices $`\sigma _{\widehat{q}}=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)`$ $`\sigma _{\widehat{u}}=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)`$ $`\sigma _{\widehat{v}}=\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right)`$ (8) To associate the coherency matrix with the Stokes parameters, we write $`\rho `$ in this basis as $$\rho =\langle 𝐄(t)𝐄^{\dagger }(t)\rangle =\left(\begin{array}{cc}\langle E_x(t)E_x^{\ast }(t)\rangle & \langle E_x(t)E_y^{\ast }(t)\rangle \\ \langle E_y(t)E_x^{\ast }(t)\rangle & \langle E_y(t)E_y^{\ast }(t)\rangle \end{array}\right)$$ (9) From equation 1, the Stokes parameters in this basis become $$\begin{array}{cc}\begin{array}{c}S_o=\langle E_x^{\ast }(t)E_x(t)\rangle +\langle E_y^{\ast }(t)E_y(t)\rangle \hfill \\ S_q=\langle E_x^{\ast }(t)E_x(t)\rangle -\langle E_y^{\ast }(t)E_y(t)\rangle \hfill \end{array}\begin{array}{c}S_u=2\mathrm{Re}\langle E_x^{\ast }(t)E_y(t)\rangle \hfill \\ S_v=2\mathrm{Im}\langle E_x^{\ast }(t)E_y(t)\rangle \hfill \end{array}& \end{array}$$ (10) This is simply the usual definition of the Stokes parameters in a linear basis (Born & Wolf 1980). ## 3 Transformation Properties of Polarized Radiation Let us now consider the transformation properties of polarized radiation. Attention is restricted to linear, invertible transformations. This excludes the class of projective transformations, which are important in representing perfect polarizing filters. Similarly, such transformations cannot describe multipath propagation of coherent radiation, such as the focusing or defocusing of radiation by lenses or mirrors.
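The definitions in equation 10 are easy to exercise numerically. The sketch below is our own illustration with invented values: it builds a partially polarized two-component analytic signal from a common random phasor plus independent noise, and reads off the Stokes parameters by time-averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200000

# fully polarized elliptical component (a common random phasor) plus
# independent unpolarized noise in each of the two field components
s = np.exp(2j * np.pi * rng.random(n))
Ex = s + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
Ey = 0.6j * s + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# equation 10: time averages of products of the analytic signal components
S_o = np.mean(Ex.conj() * Ex + Ey.conj() * Ey).real
S_q = np.mean(Ex.conj() * Ex - Ey.conj() * Ey).real
S_u = 2.0 * np.mean(Ex.conj() * Ey).real
S_v = 2.0 * np.mean(Ex.conj() * Ey).imag

p = np.sqrt(S_q**2 + S_u**2 + S_v**2) / S_o
print(S_o, S_q, S_u, S_v, p)   # p < 1: partially polarized radiation
```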
Despite this restriction, the set of linear, invertible transformations encompasses a broad class of physical processes, including single-particle scattering, propagation through anisotropic media, and many transformations arising from instrumental devices. This set may also describe the linear transformations of the voltage signal in two cables, which are known in linear network theory as two-port networks (Ruston & Bordogna 1966). This equivalence is particularly useful in radio astronomy, where the two components of the electric field are converted to voltages by a receiver and then passed through an electronics downconversion chain. The most general linear transformation of the analytic signal may be written as $`𝐄^{\prime }(t)=𝐭𝐄(t)`$, where the Jones matrix $`𝐭`$ is a 2x2 complex matrix. As the direct product of the analytic signal, the coherency matrix must transform as $`\rho ^{\prime }=𝐭\rho 𝐭^{\dagger }`$. For invertible transformations a Jones matrix may be written as $`𝐭=\sqrt{\mathrm{det}𝐭}\,𝐭_N`$, where $`\mathrm{det}𝐭`$ is the determinant of $`𝐭`$ and $`𝐭_N`$ is a matrix with unit determinant. The set of 2x2 complex invertible matrices with unit determinant forms the group SL(2,C), which constitutes the spin 1/2 representation of the Lorentz group. To investigate the transformation properties of the Stokes parameters, note that the determinant of equation 1 is simply $`\mathrm{det}\rho =(S_o^2-|𝐒|^2)/4`$, which defines the invariant $`S_{\mathrm{inv}}\equiv S_o^2-|𝐒|^2`$ (Barakat 1963). This is just the form of the Lorentz invariant. Under transformation by the Jones matrix $`𝐭`$, $`\mathrm{det}\rho ^{\prime }=|\mathrm{det}𝐭|^2\mathrm{det}\rho `$, so that this interval is preserved up to a multiplicative constant. The set of transformations that preserve this interval forms the group SO(3,1), which constitutes the spin 1 representation of the Lorentz group. That is, the Stokes parameters transform as a Lorentz 4-vector, with the total intensity acting as the timelike component and the remaining Stokes parameters acting as the spacelike components. The condition that the total intensity $`S_o>0`$ restricts this 4-vector to lie within or on the surface of the forward light cone. These two cases correspond to partially polarized ($`S_{\mathrm{inv}}>0`$) or fully polarized ($`S_{\mathrm{inv}}=0`$) radiation, respectively. The representations of the groups SL(2,C) and SO(3,1) are well known in physics, but are not widely used in astronomy. Basis-independent representations of these groups are now reviewed, and are interpreted in the context of polarimetry. This will serve both as an introduction and to establish the notation used in the next section. For similar reviews, see Brown & Bak (1995) or Tung (1996). The group SL(2,C) contains as a subgroup the set of 2x2 unitary transformations SU(2). Any such unitary transformation may be parameterized as $$𝐫_{\widehat{𝐧}}(\varphi )=e^{\left(i\sigma \cdot \widehat{𝐧}\varphi \right)}=\sigma _0\mathrm{cos}\varphi +i\sigma \cdot \widehat{𝐧}\mathrm{sin}\varphi $$ (11) where $`\widehat{𝐧}`$ is a unit 3-vector. This is called the axis-angle parameterization of SU(2). The angle $`\varphi `$ differs from the definition customary in classical and quantum mechanics by a factor of $`1/2`$, but is in agreement with the conventions of optics.
Under the transformation of equation 11, $`\rho \rightarrow 𝐫_{\widehat{𝐧}}(\varphi )\rho 𝐫_{\widehat{𝐧}}^{\dagger }(\varphi )=(\sigma _oS_o+\sigma \cdot 𝐒^{\prime })/2`$, where $$𝐒^{\prime }=𝐒\mathrm{cos}2\varphi +𝐒\times \widehat{𝐧}\mathrm{sin}2\varphi +(\widehat{𝐧}\cdot 𝐒)\widehat{𝐧}\left(1-\mathrm{cos}2\varphi \right)$$ (12) and we have used the relationship $`(\sigma \cdot 𝐚)(\sigma \cdot 𝐛)=𝐚\cdot 𝐛+i\sigma \cdot (𝐚\times 𝐛)`$. But $`𝐒^{\prime }`$ is simply the vector resulting from a rotation of $`𝐒`$ about an axis $`\widehat{𝐧}`$ by an angle $`2\varphi `$ (cf. Goldstein 1980). This reflects the well-known mapping between SU(2) and the group SO(3), whose elements form a representation of the rotations of a 3-dimensional vector. The mapping is 2-1, since the two matrices $`𝐫_{\widehat{𝐧}}(\varphi )`$ and $`𝐫_{\widehat{𝐧}}(\varphi +\pi )=-𝐫_{\widehat{𝐧}}(\varphi )`$ result in the same vector $`𝐒^{\prime }`$. Such rotations preserve the degree of polarization $`|𝐒|/S_o`$ of the plane wave, and are readily interpreted geometrically in the space of the Poincare sphere in terms of the axis $`\widehat{𝐧}`$ and angle $`2\varphi `$ of rotation. Equation 11 may be represented in the basis of equation 8 as $$𝐫_{\widehat{𝐧}}(\varphi )=\left(\begin{array}{cc}\mathrm{cos}\varphi +in_q\mathrm{sin}\varphi & (in_u+n_v)\mathrm{sin}\varphi \\ (in_u-n_v)\mathrm{sin}\varphi & \mathrm{cos}\varphi -in_q\mathrm{sin}\varphi \end{array}\right)$$ (13) where $`\widehat{𝐧}=(n_{\widehat{q}},n_{\widehat{u}},n_{\widehat{v}})`$. From equation 12 the corresponding rotation of the Stokes parameters $`(S_o,S_q,S_u,S_v)`$ in this basis is $$𝐑_{\widehat{𝐧}}(2\varphi )=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& n_q^2+(1-n_q^2)\mathrm{cos}2\varphi & 2n_qn_u\mathrm{sin}^2\varphi +n_v\mathrm{sin}2\varphi & 2n_qn_v\mathrm{sin}^2\varphi -n_u\mathrm{sin}2\varphi \\ 0& 2n_qn_u\mathrm{sin}^2\varphi -n_v\mathrm{sin}2\varphi & n_u^2+(1-n_u^2)\mathrm{cos}2\varphi & 2n_un_v\mathrm{sin}^2\varphi +n_q\mathrm{sin}2\varphi \\ 0& 2n_qn_v\mathrm{sin}^2\varphi +n_u\mathrm{sin}2\varphi & 2n_un_v\mathrm{sin}^2\varphi -n_q\mathrm{sin}2\varphi & n_v^2+(1-n_v^2)\mathrm{cos}2\varphi \end{array}\right)$$ (14) For example, for $`(n_{\widehat{q}},n_{\widehat{u}},n_{\widehat{v}})=(1,0,0)`$ equation 14 constitutes the rotation about the $`\widehat{q}`$ axis $`𝐑_{\widehat{𝐪}}(2\varphi )`$, which may be interpreted from equation 13 as generating a phase delay between the two components of the electric field. Next let us consider the group SL(2,C), which has a parameterization similar to its subgroup SU(2). Any element of the group may be written as $`\mathrm{exp}\left(i\sigma \cdot \widehat{𝐧}\varphi +\sigma \cdot \widehat{𝐦}\beta \right)`$, where $`\widehat{𝐧}`$ and $`\widehat{𝐦}`$ are unit vectors. In analogy with equation 11, let us consider transformations for which $`\varphi =0`$. Such transformations may be written in terms of the Hermitian matrix $$𝐛_{\widehat{𝐦}}(\beta )=\mathrm{exp}\left(\sigma \cdot \widehat{𝐦}\beta \right)=\sigma _0\mathrm{cosh}\beta +\sigma \cdot \widehat{𝐦}\mathrm{sinh}\beta $$ (15) Under this transformation the coherency matrix becomes $`\rho \rightarrow 𝐛_{\widehat{𝐦}}(\beta )\rho 𝐛_{\widehat{𝐦}}(\beta )=(\sigma _oS_o^{\prime }+\sigma \cdot 𝐒^{\prime })/2`$, where $$\begin{array}{c}S_o^{\prime }=S_o\mathrm{cosh}2\beta +𝐒\cdot \widehat{𝐦}\mathrm{sinh}2\beta \\ 𝐒^{\prime }=𝐒+\left(S_o\mathrm{sinh}2\beta +2𝐒\cdot \widehat{𝐦}\mathrm{sinh}^2\beta \right)\widehat{𝐦}\end{array}$$ (16) This is simply the result of performing a Lorentz boost on the 4-vector $`S=(S_o,𝐒)`$ along the axis $`\widehat{𝐦}`$ by a velocity parameter $`2\beta `$ (cf. Jackson 1975).
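The 2-1 correspondence between equations 11 and 12 can be verified directly. The following sketch is our own numerical check with arbitrary test values: it conjugates the coherency matrix by the Jones rotation and compares the resulting Stokes 3-vector with the Rodrigues form of equation 12.

```python
import numpy as np

# Pauli matrices in the (q, u, v) basis of equation 8
sig = np.array([[[1, 0], [0, -1]],
                [[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]]])
I2 = np.eye(2)

S0, S = 2.0, np.array([0.3, -0.5, 0.4])
rho = 0.5 * (S0 * I2 + np.einsum('i,ijk->jk', S, sig))

n = np.array([1.0, 2.0, 2.0]) / 3.0       # unit rotation axis
phi = 0.7
r = np.cos(phi) * I2 + 1j * np.sin(phi) * np.einsum('i,ijk->jk', n, sig)

rho2 = r @ rho @ r.conj().T               # equation 11 acting on rho
S2 = np.array([np.trace(rho2 @ s).real for s in sig])

# equation 12, a rotation by 2*phi about n
c, s_ = np.cos(2 * phi), np.sin(2 * phi)
S_rot = S * c + np.cross(S, n) * s_ + n * (n @ S) * (1 - c)

print(np.allclose(S2, S_rot))             # True
```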
As in the general case, such a transformation preserves the invariant interval $`S_{\mathrm{inv}}`$. Unlike rotations, it does not preserve the degree of polarization $`|𝐒|/S_o`$ of the plane wave. For example, there exists some transformation that will completely depolarize partially polarized radiation. This is just the analogue of the statement in special relativity that there always exists a reference frame in which two events separated by a timelike interval occur at the same position in space. Equation 15 may be represented in the basis of equation 8 as $$𝐛_{\widehat{𝐦}}(\beta )=\left(\begin{array}{cc}\mathrm{cosh}\beta +m_q\mathrm{sinh}\beta & (m_u-im_v)\mathrm{sinh}\beta \\ (m_u+im_v)\mathrm{sinh}\beta & \mathrm{cosh}\beta -m_q\mathrm{sinh}\beta \end{array}\right)$$ (17) where $`\widehat{𝐦}=(m_{\widehat{q}},m_{\widehat{u}},m_{\widehat{v}})`$. The boost of equation 16 in this basis becomes $$𝐁_{\widehat{𝐦}}(2\beta )=\left(\begin{array}{cccc}\mathrm{cosh}2\beta & m_q\mathrm{sinh}2\beta & m_u\mathrm{sinh}2\beta & m_v\mathrm{sinh}2\beta \\ m_q\mathrm{sinh}2\beta & 1+2m_q^2\mathrm{sinh}^2\beta & 2m_qm_u\mathrm{sinh}^2\beta & 2m_qm_v\mathrm{sinh}^2\beta \\ m_u\mathrm{sinh}2\beta & 2m_qm_u\mathrm{sinh}^2\beta & 1+2m_u^2\mathrm{sinh}^2\beta & 2m_um_v\mathrm{sinh}^2\beta \\ m_v\mathrm{sinh}2\beta & 2m_qm_v\mathrm{sinh}^2\beta & 2m_um_v\mathrm{sinh}^2\beta & 1+2m_v^2\mathrm{sinh}^2\beta \end{array}\right)$$ (18) The rotations of equations 13 and 14 and boosts of equations 17 and 18 constitute a subset of the Lorentz transformations that suffices for the radio astronomical applications presented in the next section. Finally, systems consisting of multiple physical processes are readily modelled through successive application of the above transformations. Of particular use in the analysis of such composite systems are the commutation relations $$\left[𝐑_{\widehat{𝐧}}(\alpha ),𝐑_{\widehat{𝐧}}(\beta )\right]=\left[𝐑_{\widehat{𝐧}}(\alpha ),𝐁_{\widehat{𝐧}}(\beta )\right]=\left[𝐁_{\widehat{𝐧}}(\alpha ),𝐁_{\widehat{𝐧}}(\beta )\right]=0$$ (19) As we shall see in the next section, these relationships are useful in determining whether the transformations that arise from different physical processes or instrumental elements commute with one another. ## 4 The Propagation and Reception of Radio Waves Let us examine some practical situations in which linear, invertible transformations of polarized radiation arise. One such example is a rotation of the electric field vector about the direction of propagation. This transformation may represent a rotation of a physical device with respect to the plane wave. Another example arises from Faraday rotation, which occurs when radio waves propagate through a magnetized plasma. Such a transformation may be written as $`𝐫_{\widehat{v}}(\varphi )`$. Equivalently, the transformation of the Stokes parameters is $`𝐑_{\widehat{v}}(2\varphi )`$. A similar transformation generates a phase delay between the two components of $`𝐄`$, and may be written as the rotation $`𝐫_{\widehat{q}}(\psi )`$. In optics a physical device that induces such a phase delay is called a compensator. This transformation arises in electronics when two signals traverse different cable lengths. Both $`𝐫_{\widehat{v}}(\varphi )`$ and $`𝐫_{\widehat{q}}(\psi )`$ are unitary, and preserve the degree of polarization $`|𝐒|/S_o`$ and the invariant interval $`S_{\mathrm{inv}}`$. Differential amplification or attenuation of the components of $`𝐄`$ provide examples of a non-unitary transformation.
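Similarly, one can check numerically that a boost alters the degree of polarization while preserving the interval. A short sketch, again with assumed test values of our own choosing:

```python
import numpy as np

sig = np.array([[[1, 0], [0, -1]],
                [[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]]])
I2 = np.eye(2)

def stokes(rho):
    return np.trace(rho).real, np.array([np.trace(rho @ s).real for s in sig])

S0, S = 1.0, np.array([0.2, 0.1, 0.3])
rho = 0.5 * (S0 * I2 + np.einsum('i,ijk->jk', S, sig))

m = np.array([0.0, 1.0, 0.0])             # boost axis u-hat
beta = 0.4
b = np.cosh(beta) * I2 + np.sinh(beta) * np.einsum('i,ijk->jk', m, sig)

S0p, Sp = stokes(b @ rho @ b.conj().T)    # equation 15 acting on rho
print(S0**2 - S @ S, S0p**2 - Sp @ Sp)    # equal: S_inv preserved (det b = 1)
print(np.linalg.norm(S) / S0, np.linalg.norm(Sp) / S0p)  # |S|/S_o changes
```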
Consider the transformation $$𝐠=\left(\begin{array}{cc}g_a& 0\\ 0& g_b\end{array}\right)$$ (20) We may write this as $`𝐠=\sqrt{g_ag_b}𝐛_{\widehat{q}}(\beta )`$, where $`\beta =\frac{1}{2}\mathrm{ln}(g_a/g_b)`$. The Stokes parameters transform as $`g_ag_b𝐁_{\widehat{q}}(2\beta )`$. Note that this transformation does not preserve $`|𝐒|/S_o`$, and preserves $`S_{\mathrm{inv}}`$ only up to the factor $`(g_ag_b)^2`$. Next consider two orthogonal elliptically polarized waves with axial ratio $`\mathrm{tan}\chi `$ and orientation $`\theta `$ (Chandrasekhar 1960). $$\begin{array}{cc}e_a=\left(\begin{array}{c}\mathrm{cos}\theta \mathrm{cos}\chi -i\mathrm{sin}\theta \mathrm{sin}\chi \\ \mathrm{sin}\theta \mathrm{cos}\chi -i\mathrm{cos}\theta \mathrm{sin}\chi \end{array}\right)& e_b=\left(\begin{array}{c}-\mathrm{sin}\theta \mathrm{cos}\chi -i\mathrm{cos}\theta \mathrm{sin}\chi \\ \mathrm{cos}\theta \mathrm{cos}\chi +i\mathrm{sin}\theta \mathrm{sin}\chi \end{array}\right)\end{array}$$ (21) We may perform a change of basis by forming a matrix $`𝐬`$ with rows $`e_a^{\dagger }`$ and $`e_b^{\dagger }`$. This matrix may be factored as $`𝐬(\theta ,\chi )=𝐫_{\widehat{u}}(\chi )𝐫_{\widehat{v}}(\theta )`$. A transformation from a linear to a circular basis occurs for $`\chi =\pi /4`$, and in the case where $`\theta =\pi /4`$ results simply in a cyclic permutation of the indices $`(\widehat{q},\widehat{u},\widehat{v})\rightarrow (\widehat{v},\widehat{q},\widehat{u})`$ in equation 8. The transformation $`𝐬(\theta ,\chi )`$ may also be regarded as representing a receiver with two receptors sensitive to orthogonal forms of elliptical radiation. The process of reception constitutes a projection of $`𝐄`$ onto the receptors of the receiver, which is represented by matrix multiplication of $`𝐄`$ by $`𝐬(\theta ,\chi )`$. Clearly the choice of a rotation about the $`\widehat{u}`$ axis followed by one about the $`\widehat{v}`$ axis is not unique. One alternative specification of elliptically polarized radiation is given in terms of an orientation and phase delay as $`𝐫_{\widehat{v}}(\theta )𝐫_{\widehat{q}}(\varphi )`$. In optics this transformation is accomplished through a device known as a Babinet compensator (Born & Wolf 1980). Now let us consider a receiver sensitive to two forms of elliptical polarization that are not necessarily orthogonal. This situation may arise in practice from imperfections in the construction of a receiver (Conway & Kronberg 1969, Stinebring et al. 1984). The transformation may be written as $$𝐜=\left(\begin{array}{cc}\mathrm{cos}\theta _a\mathrm{cos}\chi _a+i\mathrm{sin}\theta _a\mathrm{sin}\chi _a& \mathrm{sin}\theta _a\mathrm{cos}\chi _a+i\mathrm{cos}\theta _a\mathrm{sin}\chi _a\\ -\mathrm{sin}\theta _b\mathrm{cos}\chi _b+i\mathrm{cos}\theta _b\mathrm{sin}\chi _b& \mathrm{cos}\theta _b\mathrm{cos}\chi _b-i\mathrm{sin}\theta _b\mathrm{sin}\chi _b\end{array}\right)$$ (22) where the two probes of the receiver are sensitive to elliptical radiation with axial ratios $`\chi _a,\chi _b`$ and orientations $`\theta _a,\theta _b`$. For the case $`\theta _a=\theta _b`$ and $`\chi _a=\chi _b`$ equation 22 simplifies to the unitary transformation $`𝐬(\theta ,\chi )`$. In the general case such a transformation does not conserve energy.
With the definitions $$\begin{array}{cc}\begin{array}{c}\sigma _\theta =\theta _a+\theta _b\hfill \\ \delta _\theta =\theta _a-\theta _b\hfill \end{array}\begin{array}{c}\sigma _\chi =\chi _a+\chi _b\hfill \\ \delta _\chi =\chi _a-\chi _b\hfill \end{array}& \end{array}$$ (23) we may write $`𝐜=𝐜^{\prime }𝐫_{\widehat{v}}(\sigma _\theta /2)`$, where $$𝐜^{\prime }=\left(\begin{array}{cc}\mathrm{cos}(\sigma _\chi /2+\delta _\chi /2)\mathrm{cos}(\delta _\theta /2)+i\mathrm{sin}(\sigma _\chi /2+\delta _\chi /2)\mathrm{sin}(\delta _\theta /2)& \mathrm{cos}(\sigma _\chi /2+\delta _\chi /2)\mathrm{sin}(\delta _\theta /2)+i\mathrm{sin}(\sigma _\chi /2+\delta _\chi /2)\mathrm{cos}(\delta _\theta /2)\\ -\mathrm{cos}(\sigma _\chi /2-\delta _\chi /2)\mathrm{sin}(\delta _\theta /2)+i\mathrm{sin}(\sigma _\chi /2-\delta _\chi /2)\mathrm{cos}(\delta _\theta /2)& \mathrm{cos}(\sigma _\chi /2-\delta _\chi /2)\mathrm{cos}(\delta _\theta /2)-i\mathrm{sin}(\sigma _\chi /2-\delta _\chi /2)\mathrm{sin}(\delta _\theta /2)\end{array}\right)$$ (24) For a nearly orthogonal receiver with receptors sensitive to linearly polarized radiation, this matrix may be written to first order in these parameters as $`𝐜^{\prime }=𝐛_{\widehat{u}}^{(1)}(\delta _\theta /2)𝐛_{\widehat{v}}^{(1)}(\delta _\chi /2)𝐫_{\widehat{u}}^{(1)}(\sigma _\chi /2)`$. Here the superscript $`(1)`$ indicates that these transformations are first order in their arguments. These examples may be combined to form a model for the propagation and reception of radio waves. $$𝐭=\sqrt{g_ag_b}𝐛_{\widehat{q}}(\beta )𝐫_{\widehat{q}}(\mathrm{\Phi }_I)𝐛_{\widehat{u}}^{(1)}(\delta _\theta /2)𝐛_{\widehat{v}}^{(1)}(\delta _\chi /2)𝐫_{\widehat{u}}^{(1)}(\sigma _\chi /2)𝐫_{\widehat{v}}(\sigma _\theta /2)𝐫_{\widehat{v}}(\zeta )𝐫_{\widehat{v}}(\mathrm{\Phi }_{\mathrm{iono}})𝐫_{\widehat{v}}(\mathrm{\Phi }_{\mathrm{ISM}})$$ (25) where $`\mathrm{\Phi }_{\mathrm{iono}}`$ and $`\mathrm{\Phi }_{\mathrm{ISM}}`$ are the angles arising from Faraday rotation in the ionosphere and the interstellar medium, respectively, $`\zeta `$ is the angle between the frame of the receiver and that of the sky, and $`\mathrm{\Phi }_I`$ is the instrumental phase delay arising from differing electronic pathlengths. The equivalent transformation law for the Stokes parameters is simply obtained from equation 25. $$S^{\prime }=g_ag_b𝐁_{\widehat{q}}(2\beta )𝐑_{\widehat{q}}(2\mathrm{\Phi }_I)𝐁_{\widehat{u}}^{(1)}(\delta _\theta )𝐁_{\widehat{v}}^{(1)}(\delta _\chi )𝐑_{\widehat{u}}^{(1)}(\sigma _\chi )𝐑_{\widehat{v}}(\sigma _\theta )𝐑_{\widehat{v}}(2\zeta )𝐑_{\widehat{v}}(2\mathrm{\Phi }_{\mathrm{iono}})𝐑_{\widehat{v}}(2\mathrm{\Phi }_{\mathrm{ISM}})S$$ (26) The analysis of this model is simplified through the commutation relations in equation 19. Amplifications and phase delays in the downconversion chain represent boosts and rotations with respect to the $`\widehat{q}`$ axis, so it is easy to see that the order of amplifiers and relative electronics delays does not matter. Terms from individual components may simply be collected into the overall parameters $`\beta `$ and $`\mathrm{\Phi }_I`$. Similarly, rotations about the same axis commute, and rotations about the $`\widehat{v}`$ axis by the angles $`\zeta `$, $`\mathrm{\Phi }_{\mathrm{iono}}`$, and $`\mathrm{\Phi }_{\mathrm{ISM}}`$ all have the same signature. Note that each of the parameters in this model may have its own time and frequency dependence.
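To illustrate how equation 26 is used in practice, here is a sketch of our own: it composes elementary Mueller rotations and boosts into a total transformation, applies it to a source Stokes vector, and calibrates by inverting the product, anticipating the inversion described in the next paragraph. All parameter values are invented, and the rotation sense follows our reading of equation 12.

```python
import numpy as np

def rot4(n, ang):
    # Mueller rotation of (S_q, S_u, S_v) about unit axis n by ang (= 2*phi),
    # built from the Rodrigues form of equation 12
    n = np.asarray(n, float) / np.linalg.norm(n)
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    M = np.eye(4)
    M[1:, 1:] = np.eye(3) - np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)
    return M

def boost4(m, ang):
    # Mueller boost along unit axis m with velocity parameter ang (= 2*beta)
    m = np.asarray(m, float) / np.linalg.norm(m)
    M = np.eye(4)
    M[0, 0] = np.cosh(ang)
    M[0, 1:] = M[1:, 0] = np.sinh(ang) * m
    M[1:, 1:] = np.eye(3) + (np.cosh(ang) - 1.0) * np.outer(m, m)
    return M

q, u, v = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# invented parameter values for the chain of equation 26
M = (boost4(q, 0.10)      # differential gain, 2*beta
     @ rot4(q, 0.30)      # instrumental phase, 2*Phi_I
     @ boost4(u, 0.02)    # receiver non-orthogonality, delta_theta
     @ boost4(v, 0.01)    # receiver non-orthogonality, delta_chi
     @ rot4(u, 0.05)      # sigma_chi
     @ rot4(v, 0.70))     # sigma_theta, 2*zeta and Faraday terms, combined

S_true = np.array([1.0, 0.10, 0.05, 0.02])
S_meas = M @ S_true
S_cal = np.linalg.solve(M, S_meas)   # calibration: invert the model
print(np.allclose(S_cal, S_true))    # True
```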
For example, $`\mathrm{\Phi }_{\mathrm{iono}}`$ fluctuates in time as the ionospheric column density changes, and scales as $`\nu ^{-2}`$ from the cold plasma dispersion relation. Finally, calibration of a polarimetric observation is accomplished through the inversion of equation 25 or 26. Naturally such an inversion requires a knowledge of the parameters in this model.

## 5 Discussion

The representation of polarimetric transformations in terms of the Lorentz group provides a simple context in which to analyze polarized radiation. This formalism is particularly relevant for the calibration of polarimetric data, and greatly simplifies the discussion from a qualitative standpoint. Propagation or instrumental effects that give rise to the rotations of equation 14 change the polarized component of the radiation $`𝐒`$, thus obscuring the properties of the true, emitted light. Those effects that take the form of a boost transformation mix the total intensity $`S_o`$ and the polarized component of the radiation $`𝐒`$. Such transformations can have a particularly detrimental effect on polarimetric observations. In many astrophysical applications $`S_o`$ is much larger than $`|𝐒|`$, so that even boosts nearly equivalent to the identity matrix may completely corrupt the polarized flux. Observations that aim to detect very small polarized fractions, such as the polarized component of the Cosmic Microwave Background radiation or the circularly polarized radio emission from Active Galactic Nuclei, are particularly vulnerable. For applications that require extremely high precision, the mixing of $`𝐒`$ into $`S_o`$ can likewise corrupt an observation. One such example has been seen in high-precision pulsar timing, where differential amplification and receiver cross-talk induce time-dependent mixing of the pulse profiles, thereby modifying the total intensity profile and systematically shifting the times of arrival (Britton et al., in preparation). For pulsar observations the invariant $`S_{\mathrm{inv}}`$ proves particularly useful, since an invariant pulse profile may be formed that is independent of propagation and instrumental effects. This invariant profile may then be used for timing or to investigate pulse variability that may be intrinsic to the pulsar.

I thank René Grognard, Richard Manchester, Geoffrey Opat, and Matthew Bailes for useful conversations. I thank the referee for pointing out recent literature on the application of the Lorentz group to optics.
no-problem/9911/hep-lat9911003.html
ar5iv
text
# The glueball spectrum from novel improved actions

## 1 INTRODUCTION

The QCD glueball spectrum has been investigated in low-cost simulations using anisotropic lattices . To reduce the computational overhead, the spatial lattice was kept rather coarse (0.2-0.4 fm) while the temporal spacing was made much finer. The fine temporal grid allows adequate resolution of the Euclidean-time decay of appropriate correlation functions which, for gluonic states, are rather noisy and fall too rapidly on coarse lattices. In these simulations, the scalar glueball suffered from large finite cut-off effects. The mass in units of $`r_0`$ fell sharply until the spatial lattice spacing, $`a_s`$, was about 0.25 fm, when the mass rose again; the “scalar dip”.

At the conference last year, we presented results from simulations with an anisotropic Wilson “two-plaquette” action which included a term constructed from the product of two parallel plaquettes on adjacent time-slices . This was found to reduce the scalar dip significantly. Here, we report on the status of simulations in progress using a Symanzik-improved action including a similar two-plaquette term. In this study, we tune the anisotropy parameter in the lattice action to recover Euclidean invariance in the “sideways” potential. With these parameters fixed, we investigate the inter-quark potential for this action as an initial test that the benefits of the Symanzik program are preserved by the addition of the extra term. We are currently computing the glueball spectrum for this action.

## 2 THE ACTION

Following Ref. , we begin with the plaquette operator, $$P_{\mu \nu }(x)=\frac{1}{N}\text{ReTr }U_\mu (x)U_\nu (x+\widehat{\mu })U_\mu ^{\dagger }(x+\widehat{\nu })U_\nu ^{\dagger }(x).$$ The Wilson (unimproved) discretisation of the magnetic field strength is then constructed from the spatial plaquette, $`\mathrm{\Omega }_s`$ $`=`$ $`{\displaystyle \underset{x,i>j}{}}\left\{1-P_{ij}(x)\right\}`$ (1) $`=`$ $`{\displaystyle \frac{\xi _0}{\beta }}{\displaystyle \int d^4x\text{ Tr }B^2}+𝒪(a_s^2),`$ where $`i,j`$ are spatial indices and $`\xi _0`$ is the anisotropy, $`a_s/a_t`$, at tree-level in perturbation theory. We introduce a term which correlates pairs of spatial plaquettes separated by one site temporally, $$\mathrm{\Omega }_s^{(2t)}=\frac{1}{2}\underset{x,i>j}{}\left\{1-P_{ij}(x)P_{ij}(x+\widehat{t})\right\}.$$ (2) The separation of the two plaquettes allows the standard Cabibbo-Marinari and over-relaxation gauge-field update methods to be applied. Including two-plaquette terms adds a computational overhead of only $`10\%`$ to our improved action workstation codes.

It can be shown that for all $`\omega `$, the operator combination $$\stackrel{~}{\mathrm{\Omega }}_s=(1+\omega )\mathrm{\Omega }_s-\omega \mathrm{\Omega }_s^{(2t)}$$ (3) has an identical expansion in powers of $`a_{s,t}`$ (at tree-level) to $`\mathrm{\Omega }_s`$ up to $`𝒪(a_s^4)`$. Thus, starting from the improved action $`S_{II}`$ used in Refs. , it is straightforward to construct a Symanzik-improved, two-plaquette action by simply replacing the spatial plaquette term in $`S_{II}`$ with the linear combination $`\stackrel{~}{\mathrm{\Omega }}_s`$ of Eqn. 3.
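The operators above are simple to write down explicitly. The toy Python sketch below (an illustration only: a tiny lattice with random SU(3) links rather than an equilibrated gauge ensemble) evaluates the spatial plaquette sum of Eqn. 1; the two-plaquette operator of Eqn. 2 would simply correlate $`P_{ij}`$ on adjacent time-slices in the same loop:

```python
import numpy as np

def random_su3():
    # pseudo-random SU(3) matrix: QR of a complex Gaussian, det fixed to 1
    z = (np.random.randn(3, 3) + 1j * np.random.randn(3, 3)) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1 / 3)

L = 4                                                # 4^4 toy lattice
U = np.empty((L, L, L, L, 4, 3, 3), dtype=complex)   # U[x, y, z, t, mu]
for idx in np.ndindex(L, L, L, L, 4):
    U[idx] = random_su3()

def shift(x, mu):
    y = list(x); y[mu] = (y[mu] + 1) % L             # periodic boundaries
    return tuple(y)

def plaquette(x, mu, nu):
    # P_{mu nu}(x) = (1/N) Re Tr U_mu(x) U_nu(x+mu) U_mu(x+nu)^+ U_nu(x)^+
    p = (U[x + (mu,)] @ U[shift(x, mu) + (nu,)]
         @ U[shift(x, nu) + (mu,)].conj().T @ U[x + (nu,)].conj().T)
    return np.trace(p).real / 3

# Omega_s of Eqn. 1: sum over spatial plaquettes (directions 0, 1, 2)
Omega_s = sum(1 - plaquette(x, i, j)
              for x in np.ndindex(L, L, L, L)
              for i in range(3) for j in range(i))
print(Omega_s / (3 * L ** 4))   # ~1 per plaquette for disordered links
```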
Written out in full, the resulting action is $`S_\omega =`$ $`{\displaystyle \frac{\beta }{\xi _0}}\left\{{\displaystyle \frac{5(1+\omega )}{3u_s^4}}\mathrm{\Omega }_s-{\displaystyle \frac{5\omega }{3u_s^8}}\mathrm{\Omega }_s^{(2t)}-{\displaystyle \frac{1}{12u_s^6}}\mathrm{\Omega }_s^{(R)}\right\}`$ $`+\beta \xi _0\left\{{\displaystyle \frac{4}{3u_s^2u_t^2}}\mathrm{\Omega }_t-{\displaystyle \frac{1}{12u_s^4u_t^2}}\mathrm{\Omega }_t^{(R)}\right\},`$ (4) with $`\mathrm{\Omega }_t`$ the temporal plaquette and $`\mathrm{\Omega }_s^{(R)}`$, $`\mathrm{\Omega }_t^{(R)}`$ the $`2\times 1`$ rectangles in the $`(i,j)`$ and $`(i,t)`$ planes respectively. This action has leading $`𝒪(a_s^4,a_t^2,\alpha _sa_s^2)`$ discretisation errors and only connects sites on adjacent time-slices, ensuring the free gluon propagator has only one real mode. The free parameter $`\omega `$ is chosen such that the approach to the QCD continuum is made on a trajectory far away from the critical point in the plane of fundamental-adjoint couplings. Close to the QCD fixed point, physical quantities should depend only weakly on $`\omega `$. This provides us with a consistency check; however, the data presented here are for one value only, $`\omega =3`$.

## 3 TUNING THE ANISOTROPY

At finite coupling, the anisotropy measured using a physical probe differs from the parameter in the action at $`𝒪(\alpha _s)`$ . In previous calculations, we relied upon the smallness of these renormalisations for the (plaquette mean-link improved) action $`S_{II}`$. For the action of Eqn. 4, these renormalisations are larger and thus we chose to tune the input parameter in the action to ensure that the potentials measured along anisotropic axes matched. We follow a similar procedure to Ref. . The potentials between two static sources propagating along the z-axis for separations on both fine and coarse axes, $`V_t`$ and $`V_s`$ respectively, are measured using smeared Wilson loops. Since the UV divergences due to the static sources are the same, tuning $`\xi _0`$ such that the ratio $$\rho _n=\frac{a_sV_s(na_s)}{a_sV_t(mna_t)}\rightarrow 1,$$ (5) implies the anisotropy $`\xi _V=m`$ (with $`m`$ an integer). A consistency check is provided by studying different coarse source separations, $`na_s`$. Fig. 1 shows this tuning for $`n=3,4,5`$, where the desired anisotropy is 6. Consistency is observed for $`n=4`$ and 5, and the appropriate $`\xi _0`$ is found to better than $`1\%`$.

## 4 SIMULATION RESULTS

### 4.1 The inter-quark potential

The replacement of the spatial plaquette in $`S_{II}`$ with $`\stackrel{~}{\mathrm{\Omega }}_s`$ of Eqn. 3 should lead only to changes in the irrelevant operators responsible for $`𝒪(a_s^4,\alpha _sa_s^2)`$ errors. To test that this replacement still generates an improved action with the good rotational invariance of $`S_{II}`$, the inter-quark potential was computed for a variety of different inter-quark lattice orientations. The potential, shown in Fig. 2, exhibits excellent rotational invariance. We conclude that the benefits of the Symanzik improvement programme are preserved by including the two-plaquette term for a typical value of $`\omega `$ useful for glueball simulation.

### 4.2 The glueball spectrum

At present, we are computing the glueball spectrum on the $`\xi _V=6`$ tuned lattices. Preliminary data are presented in Figs. 3 and 4. In Fig. 3, the finite-lattice-spacing artefacts in the scalar glueball mass for the new action are compared to those of $`S_{II}`$.
The lattice cut-off dependence is seen to be significantly reduced, and for the range of lattice spacings studied here the mass rises monotonically with lattice spacing rather than first falling to a minimum. Fig. 4 shows the lattice spacing dependence of the tensor and pseudoscalar glueball masses. Their lattice spacing dependence is similar in form to that for $`S_{II}`$ and consistent with leading $`𝒪(a_s^4)`$ behaviour.

## 5 CONCLUSIONS

Preliminary data from our simulations of the Symanzik-improved action of Eqn. 4 suggest the scalar dip is removed by inclusion of a two-plaquette term with negative coefficient, consistent with the argument that the poor scaling of the scalar glueball, even after Symanzik improvement, is caused by the presence of a nearby critical point. The inter-quark potential for this new action exhibits rotational symmetry as good as that of the improved actions of Refs. .
no-problem/9911/astro-ph9911061.html
ar5iv
text
# High-Energy Spectral Signatures in Gamma-Ray Bursts

## Introduction

High energy gamma-rays have been observed for six gamma-ray bursts by the EGRET experiment on CGRO. Most conspicuous among these observations is the emission of an 18 GeV photon by the GRB940217 burst hurl94 . Taking into account the instrumental field of view, these detections indicate that emission in the 1 MeV–10 GeV range is probably common among bursts, if not universal. One implication of GRB observability at energies around or above 1 MeV is that, at these energies, spectral attenuation by two-photon pair production ($`\gamma \gamma \rightarrow e^+e^-`$) is absent in the source. From this fact, early on Schmidt Schmidt78 concluded that if a typical burst produced quasi-isotropic radiation, it had to be less distant than a few kpc, since the optical depth $`\tau _{\gamma \gamma }`$ scales as the square of the distance to the burst. This result conflicted with BATSE’s determination of the spatial isotropy and inhomogeneity of bursts Meeg96 , which suggested that they are either in an extended halo or at cosmological distances (where $`\tau _{\gamma \gamma }\sim 10^{12}`$ for isotropic photons). Hence Fenimore et al. feh92 proposed that GRB photon angular distributions are highly beamed and produced by a relativistically moving plasma, a suggestion that has become very popular. This can dramatically reduce $`\tau _{\gamma \gamma }`$ and blueshift spectral attenuation turnovers out of the observed spectral range.

Various determinations of the bulk Lorentz factor $`\mathrm{\Gamma }`$ of the GRB medium have been made in recent years, mostly concentrating kp91 ; baring93 on cases where the angular extent of the source was of the order of $`1/\mathrm{\Gamma }`$. These calculations generally assume an infinite power-law burst spectrum, and deduce bh97b that gamma-ray transparency up to the maximum energy detected by EGRET requires $`\mathrm{\Gamma }\sim 100`$–$`10^3`$ for cosmological bursts. While the power-law source-spectrum assumption is expedient, the spectral curvature seen in most GRBs by BATSE band93 is expected to play an important role in reducing the opacity for potential TeV emission from these sources (Baring & Harding bh97a ). Such curvature is patently evident in 200 keV–2 MeV spectra of some EGRET-detected bursts, and its prevalence in bursts is indicated by the generally steep EGRET spectra for bursts hurl94 ; Schneid92 ; Sommer94 . In this paper, the principal effects introduced into pair production opacity calculations by spectral breaks in the BATSE energy range are considered, extending the work of bh97a to identify the properties of cosmological bursts in the 1 GeV–1 TeV range. These signatures are clearly distinguishable from absorption by background radiation fields, thereby defining diagnostics that future ground-based initiatives such as Veritas, MILAGRO, HESS and MAGIC, and space missions such as GLAST, can provide for GRB studies.

## SPECTRAL CURVATURE AND $`\gamma `$-$`\gamma `$ ATTENUATION

The simplest picture kp91 ; baring93 of relativistic beaming has “blobs” of material moving with a bulk Lorentz factor $`\mathrm{\Gamma }`$ more-or-less toward the observer, and having an angular “extent” $`1/\mathrm{\Gamma }`$.
For an infinite power-law spectrum $`n(\epsilon )=n_\gamma \epsilon ^{-\alpha }`$, where $`\epsilon `$ is the photon energy in units of $`m_ec^2`$ (a dimensionless convention used throughout), the optical depth to pair creation assumes the form $`\tau _{\gamma \gamma }(\epsilon )\propto \epsilon ^{\alpha -1}\mathrm{\Gamma }^{-(1+2\alpha )}`$ for $`\mathrm{\Gamma }\gg 1`$. As noted above, the input source spectrum needs to be modified, to explore the effects of a relative depletion of low energy photons in the BATSE range. The simplest approximation to spectral curvature is a power-law broken at a dimensionless energy $`\epsilon _\text{B}`$ ($`=E_\text{B}/0.511`$ MeV): $$n(\epsilon )=n_\gamma \epsilon _\text{B}^{-\alpha _h}\{\begin{array}{cc}\epsilon _\text{B}^{\alpha _l}\epsilon ^{-\alpha _l},\hfill & \text{if }\epsilon \le \epsilon _\text{B}\text{,}\hfill \\ \epsilon _\text{B}^{\alpha _h}\epsilon ^{-\alpha _h},\hfill & \epsilon >\epsilon _\text{B}\text{.}\hfill \end{array}$$ (1) The optical depth determination for such a distribution utilizes results obtained in gs67 for truncated power-laws. The resulting forms are presented in bh97a , and the optical depth $`\tau _{\gamma \gamma }(\epsilon )`$ for attenuation of a broken power-law photon distribution has the basic form $$\frac{\tau _{\gamma \gamma }\left(\epsilon \right)}{n_\gamma \sigma _\text{T}R}\propto \{\begin{array}{cc}\frac{\epsilon ^{\alpha _h-1}}{\mathrm{\Gamma }^{2\alpha _h}},\hfill & \text{if }\epsilon \lesssim \mathrm{\Gamma }^2/\epsilon _\text{B}\text{,}\hfill \\ \frac{\epsilon ^{\alpha _l-1}}{\mathrm{\Gamma }^{2\alpha _l}},\hfill & \text{if }\epsilon \gtrsim \mathrm{\Gamma }^2/\epsilon _\text{B}\text{,}\hfill \end{array}$$ (2) that implies breaks in the absorbed portion of the hard gamma-ray spectrum that “image” the BATSE-band break in the seed photons. More gradual spectral curvature can be treated by fitting the GRB continuum with piecewise continuous power-laws. A variability “size” $`R_v=3\times 10^7`$cm ($`=R/\mathrm{\Gamma }`$) is chosen here following bh97b ; bh97a , and the observed flux at 1 MeV normalizes the source density coefficient $`n_\gamma `$.

The results of the attenuation of the spectra in Eq. (1) are depicted in Fig. 1, using an attenuation factor $`1/(1+\tau _{\gamma \gamma })`$ that is appropriate for opacity skin-depth effects. The source spectrum parameters are chosen to approximate the observed values for the “Superbowl burst” GRB930131, for two different extragalactic distance scales: the nearer one, 30 Mpc, is appropriate to scenarios where GRBs generate ultra-high energy cosmic rays. Clearly the attenuation is marked in the GeV–TeV band for the Lorentz factors $`\mathrm{\Gamma }`$ chosen, and could be reduced by increasing $`\mathrm{\Gamma }`$. The onset of attenuation couples to $`\mathrm{\Gamma }`$ and the EGRET-band spectral index $`\alpha _h`$, and above this turnover the immediate spectral index is $`1-2\alpha _h`$. Precise knowledge of the GRB distance, such as through redshifts of accompanying optical afterglows, would facilitate the determination of tight constraints on $`\mathrm{\Gamma }`$. For either distance scale in Fig. 1, there is a flattening (to index $`1-\alpha _h-\alpha _l`$) in the TeV/sub-TeV band that is a consequence of the spectral break in the BATSE band: it arises at $`\epsilon \sim \mathrm{\Gamma }^2/\epsilon _\text{B}`$. The potential for observational diagnostics is immediately apparent.
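These spectral signatures are straightforward to reproduce numerically. The sketch below (Python; all parameter values are illustrative placeholders, not a fit to GRB930131, and the normalization $`\tau _0`$ stands in for $`n_\gamma \sigma _\text{T}R`$ together with the matching constants omitted from Eq. 2) applies the $`1/(1+\tau _{\gamma \gamma })`$ attenuation to a power-law source and recovers the asymptotic indices $`1-2\alpha _h`$ and $`1-\alpha _h-\alpha _l`$ quoted above:

```python
import numpy as np

def tau_gg(eps, Gamma, alpha_l, alpha_h, eps_B, tau0=1.0):
    """Optical depth of Eq. (2); the two branches are joined continuously
    at eps = Gamma^2/eps_B, where the BATSE break is 'imaged'."""
    eps = np.asarray(eps, dtype=float)
    eb = Gamma ** 2 / eps_B
    low = eps ** (alpha_h - 1) / Gamma ** (2 * alpha_h)
    high = eps ** (alpha_l - 1) / Gamma ** (2 * alpha_l)
    high *= eb ** (alpha_h - alpha_l) * Gamma ** (2 * (alpha_l - alpha_h))
    return tau0 * np.where(eps <= eb, low, high)

# illustrative placeholders only
alpha_l, alpha_h, eps_B, Gamma = 0.8, 2.1, 1.0, 300.0
eps = np.logspace(1, 8, 400)              # photon energy in m_e c^2 units
flux = eps ** (-alpha_h) / (1 + tau_gg(eps, Gamma, alpha_l, alpha_h, eps_B, 1e8))

slope = np.gradient(np.log(flux), np.log(eps))
print(slope[150], 1 - 2 * alpha_h)        # strongly absorbed, below Gamma^2/eps_B
print(slope[-1], 1 - alpha_h - alpha_l)   # flattened TeV/sub-TeV regime
```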
First, the extant EGRET data already provide a lower bound to $`\mathrm{\Gamma }`$: the dot on Fig. 1 represents the highest energy photon from GRB930131, and clearly suggests that $`\mathrm{\Gamma }\gtrsim 250`$ for $`d=30`$ Mpc or $`\mathrm{\Gamma }\gtrsim 800`$ for $`d=1`$ Gpc. Second, the sensitivity of ACTs is easily sufficient to detect bursts even with significant attenuation, so that they could well probe the spectral issues raised here. While the Whipple rapid search Conn97 postdated the EGRET detections, and produced merely upper limits as indicated in Fig. 1, an intriguing possible detection of a BATSE burst by the MILAGRITO forerunner to MILAGRO was announced at this meeting by McEnery et al. (these proceedings), foreshadowing advances to come.

Perhaps the greatest strides in understanding will be precipitated by the broad-band spectral coverage afforded by simultaneous detection of bursts by GLAST and TeV experiments like MILAGRO. Fig. 2 displays a time-evolutionary sequence of GRB spectra, including the effects of $`\gamma `$-$`\gamma `$ attenuation, and compares this with the potentially-constraining current Whipple integral sensitivity threshold (deduced from the results of Conn97 ) and the projected GLAST steady-source differential sensitivity. The GLAST sensitivity is obtained from simulations (Digel, private communication) of the spectral capability for high-latitude, steady sources in a one-year survey, i.e. roughly 8 weeks on source. The real GLAST sensitivity for transient GRBs of duration $`t_{\mathrm{dur}}`$ can be estimated to be roughly $`[(8\mathrm{weeks})/t_{\mathrm{dur}}]^{1/2}`$ times that depicted. Note that the differential sensitivity is the most appropriate measure for spectral diagnostic capabilities. Evidently, ACTs and GLAST working in concert will be able to determine the spectral shape and evolution of bright, flat-spectrum bursts like GRB930131 if the attenuation is no more dramatic than $`1/(1+\tau _{\gamma \gamma })`$. The particular evolutionary scenario depicted in Fig. 2 is an adiabatic one for blast-wave deceleration during the sweep-up phase, where the dependences on time $`t`$ are $`\mathrm{\Gamma }\propto t^{-3/8}`$, $`\epsilon _\text{B}\propto \mathrm{\Gamma }^4\propto t^{-3/2}`$, and $`\epsilon _\text{B}^2f(\epsilon _\text{B})\propto \mathrm{\Gamma }^{8/3}\propto t^{-1}`$ for the flux at the peak Derm99 . Shifts in the turnover energy and sub-TeV break energy, and correlations with BATSE flux and break energy, should be discernible in bright sources.

It must be emphasized that these internal absorption characteristics are easily distinguishable from those of external absorption due to the cosmological infra-red background along the line of sight SdeJ96 ; mhf96 . Attenuation by such background fields couples to the redshift, not to parameters internal to the source nor to the shape of the spectrum in the BATSE and EGRET bands. Furthermore, it is always exponential in nature (i.e. of severity equivalent to the dashed curve in Fig. 2), since the emission region is distinct from the location of the soft target photons, and it is patently independent of time. The possibility of confusing such external absorption with the internal attenuation that forms the focus of this paper therefore seems minimal. Hence, the prospects for powerful spectral diagnostics in bright bursts with atmospheric Čerenkov telescopes and the GLAST mission promise an exciting time ahead for the field of high energy gamma-ray astronomy.
Acknowledgments: I thank Alice Harding and Brenda Dingus for helpful discussions, and Seth Digel for simulating GLAST spectral sensitivities.
no-problem/9911/astro-ph9911299.html
ar5iv
text
# The BeppoSAX X-ray spectrum of the remnant of SN 1006

## 1 Introduction

The historical supernova SN 1006 was probably a Type Ia supernova (Clark & Stephenson Clark (1977), Schaefer Schaefer (1996)). Its remnant now has a diameter of 30′ and in recent years three interesting findings were reported:

* The ASCA X-ray spectrum indicates that the emission above ∼1 keV is dominated by synchrotron radiation from electrons with energies up to ∼100 TeV, which are accelerated at the shock front (Koyama et al. Koyama95 (1995), Reynolds Reynolds98 (1998)). The detection of TeV gamma ray emission confirms the presence of extremely relativistic electrons (Tanimori et al. Tanimori (1998)).
* Modeling far ultra-violet spectra of a Northwestern filament has revealed that the post-shock plasma has not reached ion-electron temperature equilibration. The electron temperature was found to be only ∼10% of the ion temperature (Laming et al. Laming96 (1996)).
* Ultraviolet absorption features in the spectrum of the Schweizer-Middleditch (Schweizer (1980)) star, a blue subdwarf lying behind the remnant, have revealed the presence of blue- and red-shifted unshocked silicon and iron (Hamilton et al. Hamilton97 (1997)). Remarkably, no evidence was found for the presence of ∼0.5 M⊙ of iron, which is likely to have been synthesized during the carbon deflagration of the white dwarf progenitor (Nomoto et al. Nomoto (1984)). Carbon deflagration is the currently favored model for Type Ia supernovae.

The presence of a synchrotron component contaminates the thermal X-ray emission, and the non-equilibration of ion and electron temperatures obscures the underlying shock hydrodynamics, which might otherwise have been inferred from the temperature structure. On the other hand, in X-rays other ion species can be studied than in the UV, and this may help in verifying the models for nucleosynthesis of Type Ia supernovae. In this paper we present BeppoSAX spectra of the remnant of SN 1006. We will show that the thermal emission from the central region of the remnant is best described by a two component non-equilibrium ionization model with a very low ionization parameter.

## 2 The data and method

The Italian-Dutch satellite BeppoSAX (Boella et al. 1997a ) observed SN 1006 in April 1997. The observation consists of three different pointings, which together cover the whole remnant by the imaging instruments LECS (Parmar et al. Parmar (1997)) and MECS (Boella et al. 1997b ). The total observation time is 91 ks for the MECS and 37 ks for the LECS, which has a lower observation efficiency owing to UV leakage. Both the LECS (0.1–10 keV) and the MECS (2–10 keV) are gas scintillation counters with a spectral resolution of ∼8% at 6 keV, a spatial resolution at 6 keV of ∼2′ FWHM, and a half power width of ∼2.5′. (The half power width gives the width of the area in which 50% of the photons of a point source are expected to end up; this measure is sensitive to the broad wings of the point spread function, whereas the FWHM is determined more by the core of the point spread function.) The spatial resolution is strongly energy dependent: with increasing energy the core of the point spread function decreases, whereas the wings of the point spread function (determined mostly by the mirror properties) become more dominant (Parmar et al. Parmar (1997), Boella et al. 1997b , cf. Vink et al. Vink99 (1999)).
The extended wings of the point spread function mean that, in the case of SN 1006, the spectra of the low-brightness central region, which emits mostly thermal emission (Koyama et al. Koyama95 (1995), Willingale et al. Willingale (1996), the latter based on ROSAT PSPC data), are contaminated by the radiation from the rims of the remnant, which is predominantly synchrotron radiation. We circumvented this problem by using a new version (v2.0) of the X-ray spectral fitting program SPEX (Kaastra et al. Kaastra (1996), cf. Kaastra et al. Kaastra99 (1999)), with which one can fit several spectra from the same object simultaneously. In this version the concept of the spectral redistribution matrix is extended to include also a spatial redistribution part. The spectral model is calculated on a spatial/spectral input grid, and a model of the instrument properties describes what fraction of the incoming photons with a certain energy, coming from a certain region of the sky, will end up in a given energy bin (channel) and spatial bin (spectral extraction region). This means in practice that two sets of spatial regions have to be provided (in the program these are specified using images compliant with the FITS format). One set specifies the sectors for which the model is calculated; the other set specifies the spectral extraction regions. For stable calculations, the extraction regions should roughly correspond to the spatial model sectors, but a detailed correspondence is not necessary. For example, some regions of SN 1006 were only partially covered by some observations, but this is accounted for in the spectral/spatial redistribution matrix. The X-ray emission models used for this paper are the same as in the standard SPEX program.

We divided the remnant into six regions. Apart from a central region and two X-ray bright rims, we divided these regions further into a Northern and a Southern half, in order to see if the thermal emission from the brighter Southern half is different from the emission from the Northern half. (With North and South we mean here, and throughout the rest of the text, North and South with respect to the principal axis of the remnant, which is tilted with respect to the North-South direction in equatorial coordinates.) The spatial model grid for which the models were calculated is shown in Fig. 1. We chose spectral extraction regions roughly corresponding to the model regions, but the extraction regions corresponding to the bright rims were larger, in order to reduce the contamination of the central regions by emission coming from the bright rims. We used archival ROSAT HRI images (Fig. 1, see also Winkler & Long Winkler (1997)) to define the spectral extraction regions and the model sectors. For the spectral extraction regions we also took into account the extent of the synchrotron rims in the BeppoSAX images, as they seem broader than in the HRI images owing to the point spread function.

The extraction of the spectra and the generation of the spectral-spatial response matrix was done with a program specifically made to extract BeppoSAX spectra. It incorporates the current knowledge of the detectors, such as the point spread function, vignetting, and absorption by the strong-backs. We used the response matrices of September 1997 and the standard background data (November 1998). However, there are still some uncertainties in the detector calibration, especially for off-axis positions. These uncertainties are larger for the LECS, which has a more complicated design than the MECS.
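The extended redistribution-matrix concept can be sketched in a few lines. In the toy Python below (shapes and values are placeholders, not the actual SPEX or BeppoSAX data structures), a response tensor maps model photon spectra defined on (sky sector, energy) onto predicted counts in (extraction region, channel), so that partial spatial overlaps between sectors and regions are handled automatically:

```python
import numpy as np

n_sec, n_E = 6, 40     # model sky sectors x model energy bins
n_reg, n_ch = 6, 30    # spectral extraction regions x detector channels

# toy response: fraction of photons emitted in sector s at energy E that
# end up detected in extraction region r, channel c (random placeholders)
R = np.random.rand(n_reg, n_ch, n_sec, n_E) * 1e-2
model = np.random.rand(n_sec, n_E)     # model photon spectrum per sector

counts = np.einsum('rcse,se->rc', R, model)   # predicted counts per region/channel
```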
Because of these calibration uncertainties, we left the normalization of the LECS spectra and some off-axis MECS spectra free with respect to the on-axis MECS spectra. The relative LECS normalizations turned out to be between 0.6 and 0.9, consistent with other BeppoSAX results (e.g., Favata et al. Favata (1997)). The spectra were binned to a bin size of roughly one third of the spectral resolution, and some further rebinning was done for channels with low count rates. In order to circumvent statistical problems with bins with very few counts, we used a method proposed by Wheaton et al. (Wheaton (1995)). This means that after obtaining a good fit, we used the best-fit model, instead of the observed counts, to calculate the expected error per bin. Using this method with two or three extra iterations gives in general a stable and in principle more reliable $`\chi ^2`$ value.

## 3 Spectral fitting

Fitting six spectral regions simultaneously has the obvious disadvantage that the spectral model can become very complex. We therefore took care to constrain the spectral model as far as possible without losing too much of its heuristic qualities. For each sector we chose to have three to four spectral components. A typical configuration consisted of the following components for each sky region: a power law component, one or two non-equilibrium ionization (NEI) thermal components, and an absorption component (Morrison & McCammon Morrison (1983)). We only looked for spectral differences in the thermal emission between the Northern and Southern halves of the remnant. We assumed uniform abundances for SN 1006.

Based on their deprojection of the ROSAT PSPC image of SN 1006, Willingale et al. (Willingale (1996)) reported that the synchrotron emission does not seem to originate from all around the remnant, but only from incomplete shells at the Northwest and Southeast of the remnant. Consequently, in our simplest model we assume that no synchrotron emission originates from the central region of the remnant. This model produces a reasonable fit to the LECS spectra, but it does not fit the MECS spectra of the central regions above ∼4 keV, where an excess in the observed spectra with respect to the model exists (Fig. 2). The temperature of the thermal components was $`T_\mathrm{e}`$ ∼ 1.5 keV. So clearly an additional emission component is needed in order to fit the emission from the central regions. This can be either a thermal component with a higher temperature, or it could mean that the non-thermal emission observed to come from the rims has in reality cylindrical symmetry, in which case the apparent structure of the remnant in X-rays may be due to an extreme case of limb brightening. The latter possibility would be in disagreement with the above-mentioned ROSAT PSPC findings, but it is conceivable that the synchrotron emission comes from such a thin layer that the deprojection scheme of Willingale et al. (Willingale (1996)) may not have worked adequately. We investigated both possibilities. In the case of an additional power law component we fixed the power law index of the central regions to a value of 2.8, similar to the values found for the rims (see Table 2). We found, however, that an additional thermal component offers a better explanation. Only a hot thermal component adequately fits the Fe K emission seen in Fig. 3 (most clearly in the bottom right panel). This emission implies that there is some plasma present with a temperature in excess of ∼2 keV.
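The iterated variance scheme of Wheaton et al. (1995) described above is simple to implement. A minimal sketch in Python (the model function and starting parameters are whatever the user supplies; nothing here is specific to SPEX):

```python
import numpy as np
from scipy.optimize import minimize

def fit_wheaton(E, counts, model, p0, n_iter=3):
    """Least-squares fit in which the per-bin variance is taken from the
    current best-fit model rather than from the observed counts
    (Wheaton et al. 1995), iterated a few times until stable."""
    p = np.asarray(p0, dtype=float)
    var = np.maximum(counts, 1.0)            # first pass: data-based variance
    for _ in range(n_iter):
        chi2 = lambda q: np.sum((counts - model(E, q)) ** 2 / var)
        p = minimize(chi2, p, method='Nelder-Mead').x
        var = np.maximum(model(E, p), 1e-9)  # next pass: model-based variance
    return p
```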
Moreover, an additional power law component has the disadvantage that, on closer inspection, the result is inconsistent: it yields very high abundances for the thermal component (e.g., 148 times solar for silicon and 356 times solar for iron), whereas the emission measure indicates a total mass of ∼5 M⊙ for a distance of 1.8 kpc (Laming et al. Laming96 (1996)). This implies that most of the shock-heated plasma is swept-up material, if SN 1006 was a Type Ia with a mass of ∼1.4 M⊙, but then it is not clear why the abundances are so high. High abundances are also hard to reconcile with the evolved dynamical status of the remnant, which implies that the remnant consists mostly of swept-up interstellar matter (Moffet et al. Moffet (1993), Long et al. Long88 (1988)). However, X-ray synchrotron models for supernova remnants (Reynolds Reynolds98 (1998)) show that the synchrotron component is likely to have a more curved spectrum, which predicts less flux at lower energies. It may well be that such more advanced models do not have the problems mentioned above, although an additional thermal component will still be needed to produce the Fe K emission. From a statistical point of view the two models fit the data equally well, with slightly better fits for the thermal model (the model with an additional power law has $`\chi ^2/\nu =1689/1287`$).

The best fit model with an additional thermal component (Tables 1 and 2) is shown in Fig. 3. The total emission measure of $`n_\mathrm{e}n_\mathrm{H}V/d^2`$$`=(7.4\pm 1.4)\times 10^{62}`$ m<sup>-3</sup>kpc<sup>-2</sup> implies a mass of $`M=8.3\pm 0.8f_{0.4}^{3/2}d_{1.8}^{3/2}`$ M⊙, where $`f_{0.4}`$ is the volume filling factor divided by 0.4 (cf. Willingale et al. Willingale (1996)) and $`d_{1.8}`$ is the distance in units of 1.8 kpc (Laming et al. Laming96 (1996)). The volume estimate assumes a spherical remnant with a diameter of 30′. For this model the implied silicon mass is 0.05 M⊙ and the iron mass is 0.06 M⊙, which is much lower than the 0.5 M⊙ of iron expected in remnants of Type Ia supernovae. The iron abundance reported here is, however, higher than that reported by Koyama et al. (Koyama95 (1995)). We do not find significant variations in temperature or ionization between the Northern and Southern halves of the remnant, but, as Fig. 3 shows, the hottest component seems more dominant in the Northern region.

An interesting feature in the spectra of the central regions is the iron K-shell emission around 6.4 keV. This is especially apparent in the lower right panel of Fig. 3. The centroid of this emission ($`6.3\pm 0.2`$ keV) is consistent with emission from iron in low ionization stages (caused by inner shell ionizations and excitations). In the next section we will discuss the very low value of the ionization parameter ($`n_\mathrm{e}t`$).

Our best fit absorption column is a factor of two lower than the value found by Koyama et al. (Koyama95 (1995)), but higher than the value reported by Willingale et al. (Willingale (1996)). The higher ASCA value is not very surprising, because ASCA was less reliably calibrated below ∼1 keV and is not sensitive below 0.4 keV. The fact that the ROSAT PSPC and BeppoSAX LECS show that there is emission at 0.2 keV is, however, a clear indication that the absorption column is not as high as indicated by the ASCA spectrum.
We think that our absorption estimate is consistent with the ROSAT PSPC spectra if one takes into account that we use non-equilibrium ionization models; such models produce more line radiation below 1 keV than an equilibrium model, which implies a higher absorption column to explain the observed flux levels. The power law indices of the non-thermal spectra are somewhat lower than indicated by the ASCA spectra and higher than found for the ROSAT PSPC data (see Table 2). We checked for gradual steepening of the spectrum, which is expected on theoretical grounds (Reynolds Reynolds98 (1998)). Indeed, we found some evidence that on average the power law index changes from $`2.65\pm 0.21`$ below 2 keV to $`2.81\pm 0.05`$ above 2 keV. However, the scatter in the four points is rather large and, furthermore, we cannot exclude that the difference in slope is due to calibration uncertainties, as there are still problems with the intercalibration of the LECS and MECS instruments.

## 4 Interpretation

Our fits indicate that the spectra of the central region are best fitted with two NEI components with a very low value for the ionization parameter. The very low ionization is not surprising, as previous studies indicate that the density of the ISM surrounding the remnant is very low: estimates of the pre-shock hydrogen number density vary from 0.04 cm<sup>-3</sup> (Laming et al. Laming96 (1996)) to 1 cm<sup>-3</sup> (Winkler & Long Winkler (1997)). Our mass estimate implies a pre-shock density of 0.1 cm<sup>-3</sup>. For the Sedov model the average ionization parameter ($`n_\mathrm{e}t`$) is approximately $`n_0t`$, where $`n_0`$ is the pre-shock hydrogen density and $`t`$ the age of the remnant. So for the shocked interstellar gas we estimate $`n_\mathrm{e}t`$ ∼ 3×10<sup>15</sup> m<sup>-3</sup>s. For the ionization parameter of the shocked ejecta we can find an upper limit by assuming that 1.4 M⊙ of ejecta is completely ionized and that it has a volume filling factor of 0.4. This gives an electron density of 0.04 cm<sup>-3</sup> and implies $`n_\mathrm{e}t`$ ≲ 1.2×10<sup>15</sup> m<sup>-3</sup>s. The $`n_\mathrm{e}t`$ value found by fitting the X-ray spectra thus agrees with our current understanding of SN 1006.

The low ionization has, however, some intriguing aspects. For example, our models indicate that most of the silicon is in the form of Si V to Si XII. So with the silicon lines around 1.8 keV we are only observing the tip of the iceberg. However, the silicon L-shell emission contributes significantly to the flux between 0.1 keV and 0.4 keV. Also L-shell emission from magnesium, sulphur and argon, and K-shell emission from carbon, contribute to the flux below 0.5 keV. Unfortunately, L-shell emission is notoriously complicated; only the iron L-shell emission has been investigated in substantial detail. The LECS spectral resolution below 0.5 keV is not good enough to see possible discrepancies in the atomic data, but the resulting uncertainties in the flux below 0.5 keV may have affected the spectral modeling. This demonstrates the significance of the broad spectral range covered by the two BeppoSAX instruments. The observation of emission from iron around 6.4 keV illustrates that even ions in low ionization stages emit some X-ray line emission, because of inner shell ionizations and excitations (in this respect SN 1006 seems to be similar to RCW 86, see Vink et al. Vink97 (1997)).
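The numbers quoted in this section are easy to verify. The short Python check below recomputes the mass from the fitted emission measure and the two ionization-parameter estimates; the electron-to-hydrogen ratio of 1.2 and the mean mass of 1.4 $`m_p`$ per hydrogen nucleus are assumed solar-abundance values, so small differences from the quoted numbers are expected:

```python
import numpy as np

kpc, m_p, Msun, yr = 3.086e19, 1.673e-27, 1.989e30, 3.156e7   # SI units

# mass from the emission measure EM = n_e n_H V / d^2
EM, d, f = 7.4e62, 1.8, 0.4            # m^-3 kpc^-2, kpc, filling factor
R = d * kpc * np.radians(15.0 / 60.0)  # 30' diameter -> 15' radius, in m
V = f * 4 / 3 * np.pi * R ** 3
n_H = np.sqrt(EM * d ** 2 / V / 1.2)   # assumes n_e ~ 1.2 n_H
print(n_H * 1e-6, 1.4 * m_p * n_H * V / Msun)   # ~0.3 cm^-3, ~8 Msun

# ionization parameters: swept-up gas and fully ionized ejecta
age = (1997 - 1006) * yr               # remnant age at the observation epoch
print(0.1e6 * age)                     # n_0 t ~ 3e15 m^-3 s for n_0 = 0.1 cm^-3
n_e = 1.4 * Msun / (2 * m_p) / V       # ~1 electron per 2 nucleons in metals
print(n_e * age)                       # ~1e15 m^-3 s, near the quoted upper limit
```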
Analogous inner-shell emission features from Ar and Ca would be interesting to search for with future X-ray detectors.

The apparent two-temperature structure is not surprising. SN 1006 is a large remnant, and it would be more surprising if the whole remnant could be characterized by a single temperature. Furthermore, there are several possible explanations for the two-temperature structure. Hydrodynamical shock models, such as the Sedov model (Sedov Sedov (1959)), predict temperature gradients. Models including a reverse shock (e.g., Chevalier Chev82 (1982)) even predict a two-temperature regime: the lowest temperature is associated with the ejecta heated by the reverse shock, whereas the highest temperatures are attained by the shocked interstellar medium. However, hydrodynamical models predict the ion temperature, whereas the shape of the thermal X-ray continuum is determined by the electron temperature, which, in the absence of electron-ion equilibration, can in principle be as much as a factor of 1836 (the proton/electron mass ratio) lower than the ion temperature. It can be even more if the plasma consists of highly enriched supernova ejecta. In these extreme cases, however, we would not observe thermal X-ray emission at all. The equilibration process is still poorly understood. In the case of SN 1006 there is strong evidence for poor equilibration of the electron and ion temperatures (Laming et al. Laming96 (1996)). The evidence comes from UV spectroscopy of the Northern shock front, but the equilibration may very well vary over the remnant. Another possible cause for temperature variations over the remnant is density differences. There is some evidence that SN 1006 has a very strong density gradient from the front to the back of the remnant (Hamilton et al. Hamilton97 (1997)). Yet another interesting possibility is that the shock heating process does not produce a Maxwellian electron distribution, since the low density in SN 1006 may prevent a rapid thermalization of the electron distribution (Laming Laming98 (1998), see also Itoh Itoh (1984), Hamilton & Sarazin Hamilton84 (1984)). The hot component could then be associated with bremsstrahlung from the tail of the non-thermal electron distribution.

For practical reasons our fitting model has been largely a phenomenological model. It is important to realize this when interpreting the best fit parameters. For instance, our use of single/double NEI components does not account for the fact that there are temperature and ionization gradients; to be more precise, we suppose that these gradients can be approximated by two thermal plasma components. On the other hand, models incorporating temperature and ionization gradients, such as spectral models based on the Sedov model (e.g., Kaastra & Jansen Kaastra93 (1993)), suppose that the gradients are well defined, whereas in reality the gradients may very well be more complicated, because of the pre-supernova density structure of the interstellar medium or because of the non-equilibration of ion and electron temperatures, as discussed above. In our opinion a more severe problem with our modeling is that we use uniform abundances for the whole remnant, which is probably not correct, as there is very likely freshly shocked ejecta present consisting entirely of metals, on the one hand, and shocked interstellar gas with solar abundances, on the other hand. This might, for example, affect our total mass estimate, which may be lower because a pure metal plasma is an efficient bremsstrahlung emitter.
However, for more realistic models we need either data from more sensitive X-ray instruments, or a clear understanding of the thermal and abundance structure of the remnant. For instance, with future X-ray missions we may be able to identify spatially or spectrally an ejecta emission component and observe the (projected) temperature structure of the remnant directly. The overall structure of SN 1006, as emerging from other observations, appears complicated and is not very well understood. Some observations seem even contradictory. For example, UV absorption measurements indicate that a lot of the ejecta are still in free expansion (Hamilton et al. Hamilton97 (1997)), but the kinematics of the remnant, on the other hand, show that SN 1006 is dynamically evolved, implying that most of the ejecta has been shocked by now (Moffet et al. Moffet (1993), Long et al. Long88 (1988)). Finally, it is peculiar that a remnant that has such a high degree of symmetry with respect to the Southeast/Northwest axis (Roger et al. Roger (1988)) seems to have a strong front-back asymmetry (Hamilton et al. Hamilton97 (1997)). A strong deviation from cylindrical symmetry has also been reported by Willingale et al. (Willingale (1996)).

## 5 Conclusion

Our analysis indicates that a one temperature model does not fit the thermal spectrum of SN 1006 adequately. An additional thermal component with a temperature in excess of 3 keV is needed. The alternative, an additional power law component, cannot be excluded, but in this case the abundances of the low temperature thermal component become exceptionally high, which is inconsistent with the idea that SN 1006 must have swept up a considerable amount of interstellar material. The values for the ionization parameters for the thermal components appear to be very low ($`n_\mathrm{e}t`$ ≲ 3×10<sup>15</sup> m<sup>-3</sup>s), but are consistent with the electron densities that we derive from the emission measure. Note, however, that in reality a range of ionization values will be present. For instance, the Sedov model (see Kaastra & Jansen Kaastra93 (1993)) predicts a superposition of plasma ionization values ranging from roughly 0.2 times the average value in the center of the remnant up to 2 times the average value at a radius of 0.9 times the shock radius. Inhomogeneities in the pre-shock interstellar medium may extend this range upward and downward. For such a low ionization parameter the L-shell emission of Mg, Si and S contributes substantially to the flux below 0.5 keV. This explains why our best fit hydrogen absorption column ($`N_\mathrm{H}`$ $`=(8.8\pm 0.5)\times 10^{20}`$ cm<sup>-2</sup>) turns out to be higher than found by Willingale et al. (Willingale (1996)). Noteworthy is an iron K shell emission feature in the spectrum. The centroid of the line emission is $`6.3\pm 0.2`$ keV, which indicates that the emission is the result of inner shell ionizations and excitations. The fitted abundances clearly indicate non-solar abundances with, in particular, a very high silicon abundance. This is in agreement with the analysis of ASCA data (Koyama et al. Koyama95 (1995)). We did not find significant differences in the temperatures and ionization parameters for the spectra of the Northwestern and Southeastern halves of the remnant, but the hottest component seems more dominant in the Northern region.
Observations in the near future with Chandra (AXAF), XMM and Astro-E will be able to reveal new information on the remnant, which is needed in order to obtain a consistent model for this remnant. In particular, a spatial or a spectral separation of emission originating from shocked ejecta and shocked interstellar gas will be very useful for understanding more about the structure and abundances of this interesting but rather enigmatic supernova remnant.

###### Acknowledgements.

We thank Martin Laming for stimulating discussions on SN 1006. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work was financially supported by NWO, the Netherlands Organization for Scientific Research.
no-problem/9911/astro-ph9911169.html
ar5iv
text
# Weak Lensing Observations of High-Redshift Clusters of Galaxies

## 1. Introduction

High-redshift clusters of galaxies are very powerful tools for testing cosmological and structure formation models. Both the expected number density of massive clusters at $`z>0.5`$ and the amount of substructure contained within the clusters depend very strongly on the mass density of the universe (Bahcall, Fan, & Cen 1997). X-ray observations have proven to be a very efficient means to detect these clusters. Unlike optical surveys, in which one can find an overdensity which is a superposition of unrelated galaxies, clusters are detected in X-rays emitted from gas heated during infall into a large potential well. While masses derived from the X-ray observations have been used to apply constraints to cosmological models (Henry 1997), they are subject to an uncertainty in that they, like dynamical mass estimates from cluster galaxy redshifts, depend on an assumed dynamical state of the cluster (Evrard, Metzler, & Navarro 1996). Weak gravitational lensing, however, has no such dependence, and thus can, in theory, provide mass estimates independent of any assumptions regarding the cluster lens.

We, therefore, have undertaken an optical survey of high-redshift, X-ray selected clusters of galaxies to perform a weak lensing analysis on the clusters. Our primary goals are to measure the masses of the clusters without assumptions regarding the degree of virialization of the clusters, and to detect any substructure in the clusters which would be indicative of the clusters still undergoing initial formation. We have, to date, observed six $`z>0.5`$ clusters, five from the Einstein Medium Sensitivity Survey (Gioia & Luppino 1994) and one from the ROSAT North Ecliptic Pole survey (Henry et al. 1998). All six clusters have deep ($`>1.5`$ hours) exposures in $`R`$-band from the Keck Observatory as well as shallower $`I`$\- and $`B`$-band images from the UH88″ telescope.

## 2. Weak Lensing Analysis

A weak lensing signal is detected in a field by measuring the ellipticities of background galaxies and looking for a statistical deviation from an isotropic ellipticity distribution. We used a hierarchical peak finding algorithm on the $`R`$-band images to detect faint galaxies and measure their magnitudes and second moments of the flux distribution. The second moments were used to calculate ellipticities for each object, which were then corrected for psf anisotropies and reduction of the ellipticity due to psf smearing using techniques originally developed in Kaiser, Squires, & Broadhurst (1995). Details of the data reduction process can be found in Clowe et al. (1999), and a review of these and other techniques can be found in Mellier (1999).

Once the ellipticities are corrected for psf smearing, they can be used to measure the shear caused by the gravitational lensing exerted by the cluster. The shear field is quite noisy because of the intrinsic ellipticity distribution of the background galaxies, which is the dominant source of random error in weak lensing analysis. The shear can then be converted to the convergence, $`\kappa `$, which is the surface mass density of the lens divided by $`\mathrm{\Sigma }_{crit}`$, where $$\mathrm{\Sigma }_{crit}=\frac{c^2}{4\pi G}\frac{D_{os}}{D_{ol}D_{ls}}.$$ (1) The $`D`$’s are angular diameter distances between the observer, the lens (cluster), and the source (background galaxy).
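Evaluating Eq. (1) requires a cosmology for the distances. A minimal Python sketch, assuming a flat ΛCDM model with placeholder parameters (the paper does not specify its adopted cosmology, so these values are illustrative only):

```python
import numpy as np
from scipy.integrate import quad

H0, Om = 70.0, 0.3            # assumed flat LCDM parameters
DH = 299792.458 / H0          # Hubble distance in Mpc

def Dc(z):                    # comoving distance, flat universe [Mpc]
    return DH * quad(lambda x: 1.0 / np.sqrt(Om * (1 + x)**3 + 1 - Om), 0, z)[0]

def Dang(z1, z2):             # angular diameter distance between z1 < z2 [Mpc]
    return (Dc(z2) - Dc(z1)) / (1 + z2)

def sigma_crit(zl, zs):       # eq. (1), returned in kg m^-2
    G, c, Mpc = 6.674e-11, 2.998e8, 3.086e22
    Dos, Dol, Dls = Dang(0, zs) * Mpc, Dang(0, zl) * Mpc, Dang(zl, zs) * Mpc
    return c**2 / (4 * np.pi * G) * Dos / (Dol * Dls)

print(sigma_crit(0.55, 1.0))  # e.g. a z=0.55 lens with z~1 background galaxies
```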
For low redshift clusters ($`z<0.3`$), $`\mathrm{\Sigma }_{crit}`$ is effectively constant for $`z_{bg}>0.8`$, where most background galaxies are located. For high redshift clusters, however, one must know the redshift distribution of the background galaxies to convert the convergence to a surface density. As the redshift distribution of the galaxies used in our sample ($`23<R<26.5`$ and $`R-I<0.9`$) is currently poorly known, we cannot give a definite measure of the mass for these clusters. If the mass of the clusters can be measured by some other means, however, then one can measure the mean redshift, and possibly the redshift distribution, of the background galaxies. We have done this (Clowe et al. 2000) and find good agreement between the mean redshift we measure and the mean redshift calculated from the Fontana et al. (1999) HDF-S photometric redshift estimates, using the same magnitude and color selection criteria.

We have used two different methods to convert shears to convergences. The first method is the KS93 inversion algorithm (Kaiser & Squires 1993), which uses the fact that both the shear and the convergence are combinations of second derivatives of the surface potential to transform, in Fourier space, between the two. This results in a two-dimensional image of the convergence, and thus the surface density, of the clusters, but is limited in that it can only determine the convergence to an unknown additive constant. The second method, aperture densitometry, measures the circularly-averaged radial profile of the convergence around an arbitrarily chosen center minus the average convergence of a chosen annular region, usually set at the edges of the image. One can then either assume the convergence in this outer region is zero and measure a minimum mass at a given radius, or fit the profile with an assumed mass model and determine the average convergence in the annular region for the model.

## 3. Discussion

The maps of the convergence are given in Figure 1, along with contour overlays of the ROSAT X-ray emission and maps of the number density of galaxies with colors similar to the brightest cluster galaxy. As can be seen, there is in general a very good agreement between the features present in the galaxy number density distribution and the mass reconstructions. Most of these features are, however, at moderate statistical significance in the mass reconstructions, and their shapes have probably been altered to a large degree by the noise in the reconstructions. In particular, three of the clusters have a secondary peak in both the mass and galaxy surface densities. For two of these clusters, MS1054−03 and RXJ1716+67, spectra of the galaxies in the peaks have shown that they are at the same redshift as the main cluster peak (Tran et al. 1999; Gioia et al. 1999), while insufficient numbers of spectra exist for MS2053−04 to determine if the second peak is physically associated with the cluster.

Using the aperture densitometry profiles, assuming the center of each cluster is the position of the brightest cluster galaxy, we have calculated the best fit singular isothermal sphere and NFW profiles (Navarro, Frenk, & White 1996). This was done using a $`\chi ^2`$ fitting algorithm and using the mean redshift of the HDF-S photometric redshift catalog for galaxies with the same magnitude and color range as the selected background galaxies. We find that the isothermal sphere model is a good fit to the lower mass clusters, but a poor fit to the higher mass clusters.
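The aperture densitometry step that feeds these profile fits can be sketched compactly. The Python below is a minimal sketch, assuming a catalog of galaxy positions and ellipticity components, approximating the reduced shear by the shear (valid in the weak limit), and assuming every radial bin is populated; it implements the ζ statistic, i.e. the mean convergence inside $`r_1`$ minus that in the annulus out to $`r_2`$:

```python
import numpy as np

def zeta_statistic(x, y, e1, e2, xc, yc, r1, r2, nbin=15):
    """Aperture densitometry: mean convergence inside r1 minus the mean
    in the annulus r1 < r < r2, estimated from the azimuthally averaged
    tangential ellipticity about the chosen center (xc, yc)."""
    dx, dy = x - xc, y - yc
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))  # tangential component

    edges = np.geomspace(r1, r2, nbin + 1)
    dlnr = np.diff(np.log(edges))
    g_t = np.array([e_t[(r >= a) & (r < b)].mean()
                    for a, b in zip(edges[:-1], edges[1:])])
    return 2.0 / (1.0 - (r1 / r2) ** 2) * np.sum(g_t * dlnr)
```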
By contrast, the NFW profiles provide a good fit to all the clusters, although the best fit profiles do not follow the relationship between total mass and concentration given by the zero-redshift relaxed N-body clusters in Navarro, Frenk, & White (1996). Using an F-test, we have determined that the difference in the reduced $`\chi ^2`$ for the two fits in the most massive clusters is significant at the 2-3 $`\sigma `$ level. Masses for the clusters measured at a 500 $`h^{-1}`$ kpc radius from the BCG are given in Table 1, along with the mass-to-light ratio at the same radius, using the luminosity of galaxies with colors similar to the BCG to calculate the cluster luminosity.

We are currently investigating statistics which can be used to quantify the amount of substructure present in the mass reconstructions. One statistic which we have tried is measuring the ellipticity of the mass peaks from their second moments of the surface density. To do this, however, we first must assume a value for the additive constant for each reconstruction. We have chosen this value so that the mean convergence in an annular region located 2/3 of the distance from the center to the nearest edge in the image is the same as the mean convergence in the best fit NFW profile for each cluster. We then measure the second moments using a Gaussian weighting function with a FWHM of 100 $`h^{-1}`$ kpc. Ellipticities measured in this manner are extremely sensitive to the value of the additive constant, and changing the constant from the maximum to the minimum value allowed within the 1$`\sigma `$ NFW profiles from the aperture densitometry fits would often change the measured ellipticity by a factor of two.

To determine the significance of these ellipticities we performed Monte-Carlo simulations in which background galaxies were randomly positioned in a field with the same number density as seen in the images. These galaxies were then sheared and displaced appropriately for an isothermal sphere of mass similar to that measured in the clusters. A mass reconstruction was then performed on the galaxy distribution and the ellipticity of the central peak was measured. As the isothermal spheres have a zero ellipticity, any ellipticity measured is induced by the noise in the reconstruction from the intrinsic ellipticity distribution of the galaxies. From this distribution, we determined that the minimum ellipticities measured above for MS1054, RXJ1716, and MS0451 are greater than 72%, 88%, and 92% of the simulations. The other clusters have ellipticities consistent with those induced by noise.

A second statistic we have investigated is that of the separation between the centroid of the mass distribution in the reconstruction, measured by minimizing the first moment of the mass distribution, and the position of the brightest cluster galaxy. To determine the significance of the separations we used the same Monte-Carlo simulations as above and measured the separation between the centroid of the reconstructed mass distribution and that used when shearing the galaxies. From this we find that the separation seen in RXJ1716 is larger than 98% of the simulations, but that none of the other clusters has a significant separation.

### Acknowledgments.

This work was supported by NSF Grant AST-9500515 and the “Sonderforschungsbereich 375-95 für Astro–Teilchenphysik” der Deutschen Forschungsgemeinschaft.

## References
Bahcall, N. A., Fan, X., & Cen, R. 1997, ApJ, 485, L53
Clowe, D., Luppino, G., Kaiser, N., & Gioia, I. 1999, ApJ, in press
Clowe, D., Luppino, G., & Kaiser, N. 2000, in prep
Evrard, A. E., Metzler, C. A., & Navarro, J. F. 1996, ApJ, 437, 56
Fontana, A., D’Odorico, S., Fosbury, R., Giallongo, E., Hook, R., Polli, F., Renzini, A., Rosati, P., & Viezzer, R. 1999, A&A, 343, L19
Gioia, I. M. & Luppino, G. A. 1994, ApJS, 94, 583
Gioia, I. M., Henry, J. P., Mullis, C. R., Ebeling, H., & Wolter, A. 1999, AJ, 117, 2608
Henry, J. P. 1997, ApJ, 489, L1
Henry, J. P., Gioia, I. M., Mullis, C. R., Clowe, D. I., Luppino, G. A., Boehringer, H., Briel, U. G., Voges, W., & Huchra, J. P. 1997, AJ, 114, 1293
Kaiser, N., Squires, G., & Broadhurst, T. 1995, ApJ, 449, 460
Kaiser, N. & Squires, G. 1993, ApJ, 404, 441
Mellier, Y. 1999, ARA&A, in press
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, ApJ, 462, 563
Tran, K., Kelson, D., Van Dokkum, P., Franx, M., Illingworth, G., & Magee, D. 1999, ApJ, 522, 39
no-problem/9911/astro-ph9911078.html
ar5iv
text
# Analytical properties of the R1/m law ## 1 Introduction After its introduction as a generalization of the $`R^{1/4}`$ law (de Vaucouleurs 1948), the so–called Sersic law (Sersic 1968) has found a variety of applications. On the observational side, it has been used as a tool to quantify the non-homology of elliptical galaxies (see, e.g., Davies et al. 1988; Capaccioli 1989, hereafter C89; Caon, Capaccioli & D’Onofrio 1993; Young & Currie 1994; D’Onofrio, Capaccioli & Caon 1994; Prugniel & Simien 1997, hereafter PS97; Wadadekar, Robbason & Kembhavi 1999). In addition, it has been applied to the description of the the surface brightness profiles of galaxy bulges (see Andredakis, Peletier & Balcells 1995; Courteau, De Jong & Broeils 1996). One research area where the usefulness of the Sersic law as a statistically convenient description has been exploited is that of the Fundamental Plane of elliptical galaxies (Graham et al. 1996; Ciotti, Lanzoni & Renzini 1996; Graham & Colless 1997; Ciotti & Lanzoni 1997; Graham 1998). On the theoretical side, it has been the focus of several general investigations (see, e.g., Makino, Akiyama & Sugimoto 1990; Ciotti 1991, hereafter C91; Gerbal et al. 1997; Andredakis 1998). According to such law, the surface brightness profile is given by $$I(R)=I_0e^{b\eta ^{1/m}},$$ (1) where $`\eta =R/R_\mathrm{e}`$, $`m`$ is a positive real number, and $`b`$ a dimensionless constant such that $`R_\mathrm{e}`$ is the effective radius, i.e., the projected radius encircling half of the total luminosity associated with $`I(R)`$. In a broad statistical sense, it is found that bright ellipticals are well fitted by the Sersic law with $`m`$ around 4, dwarf ellipticals and galaxy disks with $`m`$ around 1, and finally bulges and intermediate luminosity ellipticals with $`1m4`$. For some galaxies, a value of $`m`$ even higher than 10 has been found (e.g., see NGC 4552, Caon et al. 1993). The projected luminosity inside the projected radius $`R`$ is given by $$L(R)=2\pi _0^RI(R^{})R^{}𝑑R^{}=I_0R_\mathrm{e}^2\frac{2\pi m}{b^{2m}}\gamma (2m,b\eta ^{1/m}),$$ (2) where (for $`\alpha >0`$) $$\gamma (\alpha ,x)=_0^xe^tt^{\alpha 1}𝑑t$$ (3) is the *(left) incomplete gamma function*. The total luminosity is then given by $$L=I_0R_\mathrm{e}^2\frac{2\pi m}{b^{2m}}\mathrm{\Gamma }(2m),$$ (4) where $`\mathrm{\Gamma }(\alpha )=\gamma (\alpha ,\mathrm{})`$ is the *complete gamma function*. From the definition of $`R_\mathrm{e}`$ it follows that $`b(m)`$ is the solution of the following equation: $$\gamma (2m,b)=\frac{\mathrm{\Gamma }(2m)}{2}.$$ (5) ## 2 Asymptotic expansion Unfortunately, Eq. (5) cannot be solved in explicit, closed form, and so it is usually solved numerically<sup>1</sup><sup>1</sup>1For $`m=1`$, i.e., the exponential profile, the solution can be formally expressed using the *Lambert* $`W`$ function, as $`b(1)=1W(1,1/2e)=1.678346990\mathrm{}`$. This is inconvenient for a number of observational and theoretical applications. The exact values of $`b(m)`$ are recorded in Table 1 for $`1m10`$. For the de Vaucouleurs law, $`m=4`$ and $`b(4)7.66924944`$. Interpolation formulae for $`b(m)`$ have been given in the literature, namely $`b1.9992m0.3271`$ by C89 (as reported by Graham & Colless 1997), $`b2m0.324`$ by C91, $`b2m1/3`$ (for $`m`$ integer) by Moriondo, Giovanardi & Hunt (1998), and the “numerical solution” $`b(m)2m1/3+0.009876/m`$ by PS97. 
These expressions provide an accurate fit in the range $`0.5m10`$; curiously, their leading term is *linear* in $`m`$, with a slope very close to 2. In the following we show that this behavior results from a general property of the gamma function. Prompted by Eq. (5) we now address the following: Problem Solve for $`x`$ $$\gamma (\alpha ,x)=\frac{\mathrm{\Gamma }(\alpha )}{2},$$ (6) for given $`\alpha >0`$. Because no explicit solution in closed form is available, we will focus on the asymptotic expansion of $`x(\alpha )`$ for $`\alpha >>1`$. In fact, it is well known that in many cases asymptotic expansions turn out to give excellent approximations of the true function even for relatively small values of the expansion parameter. The starting point of our study is the asymptotic relation (see Abramowitz & Stegun 1965) $$\mathrm{\Gamma }(\alpha )e^\alpha \alpha ^\alpha \sqrt{\frac{2\pi }{\alpha }}\left[1+\frac{1}{12\alpha }+\frac{1}{288\alpha ^2}\frac{139}{51840\alpha ^3}\frac{571}{2488320\alpha ^4}+\frac{163879}{209018880\alpha ^5}+\mathrm{O}(\alpha ^6)\right].$$ (7) This is the *Stirling formula*<sup>2</sup><sup>2</sup>2The derivation of this formula can be found in standard textbooks. The coefficients appearing in the asymptotic expansion of $`\mathrm{ln}\mathrm{\Gamma }(\alpha )`$ for $`\alpha \mathrm{}`$ can be expressed in terms of the so–called *Bernoulli numbers*; see, e.g., Arfken & Weber 1995, Chapts. 5 and 10., which is known to be associated with a relative error smaller than $`3\times 10^6`$ already for $`\alpha =2`$. Let us now introduce the sequence $$x_n=\alpha +\underset{k=0}{\overset{n1}{}}\frac{c_k}{\alpha ^k}$$ (8) with $`x_0=\alpha `$, so that $`x_{n+1}=x_n+c_n/\alpha ^n`$. Here $`c_k`$ are coefficients (to be determined at a later stage), independent of $`\alpha `$. Then we start by proving the following asymptotic results, applicable for $`\alpha >>1`$. Lemma 1 The following asymptotic relation $$\gamma (\alpha ,x_0)\frac{\mathrm{\Gamma }(\alpha )}{2}+e^\alpha \alpha ^\alpha \underset{k=0}{\overset{\mathrm{}}{}}\frac{P_k^{(0)}}{\alpha ^{k+1}},$$ (9) holds, where $`P_k^{(0)}`$ are rational numbers. The validity of Eq. (9) can be established by means of a standard asymptotic expansion (e.g., see Bender & Orszag 1978, Bleinstein & Handelsman 1986) of the integral $$\gamma (\alpha ,\alpha )=\alpha ^\alpha e^\alpha _1^0\frac{\mathrm{exp}[\alpha s+\alpha \mathrm{ln}(1+s)]}{1+s}𝑑s.$$ (10) In fact, the argument of the integral is the same as that of the integral representation of $`\mathrm{\Gamma }(\alpha )`$. In both cases the stationary point for the exponent occurs at $`s=0`$, but for $`\mathrm{\Gamma }(\alpha )`$ the stationary point is in the middle of the domain of integration, because the integral extends to $`\mathrm{}`$ (instead, for the integral in Eq. the upper limit is precisely $`s=0`$). Thus, when we consider the power series expansion (in $`s`$) of the argument of the integral around the stationary point, for $`\gamma (\alpha ,\alpha )`$ the *even* powers of $`s`$ contribute exactly *one half* of their contribution to $`\mathrm{\Gamma }(\alpha )`$, while the *odd* powers determine the terms in Eq. (9) associated with the coefficients $`P_k^{(0)}`$ (in contrast, the odd powers do not contribute to $`\mathrm{\Gamma }(\alpha )`$, by symmetry). The calculation of $`P_k^{(0)}`$ is tedious, but straightforward. Note that there is a “shift” of powers, by $`\alpha ^{1/2}`$, between the two terms on the right hand side of Eq. (9). 
In particular, the second term is smaller by a factor $`\mathrm{O}(\alpha ^{1/2})`$. This already shows that $`x_0=\alpha `$ is a first approximate solution to the problem set by Eq. (6). Lemma 2 The following asymptotic relation $$\gamma (\alpha ,x_{n+1})\gamma (\alpha ,x_n)+e^\alpha \alpha ^\alpha f(\alpha )$$ (11) holds, with $`f(\alpha )=\mathrm{O}(\alpha ^{n1})`$. To leading order, $`f(\alpha )c_n/\alpha ^{n+1}`$. This result easily follows from the definitions of the quantities involved (Eqs. and ), which give $$\gamma (\alpha ,x_{n+1})=\gamma (\alpha ,x_n)+e^{x_n}_0^{c_n/\alpha ^n}e^t(t+x_n)^{\alpha 1}𝑑t.$$ (12) At this point we can proceed to prove the following theorem: Theorem For large (real) values of $`\alpha `$, the full asymptotic expansion of the solution to the problem posed by Eq. (6) can be expressed as $$x(\alpha )=\alpha +\underset{n=0}{\overset{\mathrm{}}{}}\frac{c_n}{\alpha ^n},$$ (13) where $$c_n=P_n^{(n)},$$ (14) and the coefficients $`P_k^{(n)}`$ can be calculated by iteration on the relation $$\gamma (\alpha ,x_n)\mathrm{\Gamma }(\alpha )/2+e^\alpha \alpha ^\alpha \underset{k=n}{\overset{\mathrm{}}{}}\frac{P_k^{(n)}}{\alpha ^{k+1}}.$$ (15) The proof is obtained by induction. In fact, Eq. (9) shows that the statement is true for $`n=0`$, with the coefficients $`P_k^{(0)}`$ available from the asymptotic analysis outlined in the proof of Lemma 1. If we now refer to the result of Lemma 2, with the leading order expression for $`f(\alpha )`$, and assume the statement (related to Eq.) to hold true for $`x_n`$, we find $$\gamma (\alpha ,x_{n+1})\frac{\mathrm{\Gamma }(\alpha )}{2}+e^\alpha \alpha ^\alpha \left[\underset{k=n}{\overset{\mathrm{}}{}}\frac{P_k^{(n)}}{\alpha ^{k+1}}+\frac{c_n}{\alpha ^{n+1}}+\mathrm{}\right].$$ (16) In other words, the statement is found to hold true also for $`x_{n+1}`$, provided $`c_n=P_n^{(n)}`$, as required by Eq. (14). The method thus provides a way to systematically improve the approximation to $`x(\alpha )`$ by means of $`n`$ steps, leading to an estimate $`x_n`$; step by step the possible presence of undesidered “shifted” (odd) terms is eliminated and the process leads to the complete determination of the coefficients defining the asymptotic series (13). At a given level $`n`$ of desidered accuracy, the coefficients $`P_k^{(n)}`$ depend on the values $`P_k^{(i)}`$ for $`i=0,\mathrm{},n1`$. The explicit computation yields: $$x(\alpha )\alpha \frac{1}{3}+\frac{8}{405\alpha }+\frac{184}{25515\alpha ^2}+\frac{1048}{1148175\alpha ^3}\frac{17557576}{15345358875\alpha ^4}+\mathrm{O}(\alpha ^5).$$ (17) The first two terms can be easily checked using standard general formulae for the leading terms of the relevant steepest descent asymptotic expansion. ## 3 Analytical properties of the Sersic law Therefore, the first terms of the asymptotic expansion of $`b(m)`$ (for real $`m`$) are $$b(m)2m\frac{1}{3}+\frac{4}{405m}+\frac{46}{25515m^2}+\frac{131}{1148175m^3}\frac{2194697}{30690717750m^4}+\mathrm{O}(m^5).$$ (18) Equation (18) now clearly explains the value of the interpolation formulae found earlier (C89, C91, PS97); note that $`4/405=0.0098765\mathrm{}`$. How many terms in the asymptotic expansion are required to obtain a better representation of $`b(m)`$ when compared to the previously introduced interpolations? We have computed the relative errors of the various expressions with respect to the true value of $`b(m)`$ (obtained by solving numerically Eq. 
with a precision of 20 significant digits) for integer values of $`m`$ in the range $`1m10`$, and the results are reported in Table 1. The first result is that using the first four terms of the expansion the true value of $`b(m)`$ is recovered with a relative error of $`6\times 10^7`$ for $`m=1`$, and $`4\times 10^9`$ for $`m=10`$, i.e., the asymptotic expansion so truncated performs much better than the formulae cited previously. Obviously, for larger values of $`m`$ the error becomes correspondingly smaller. The second somewhat surprising result is the fact that Eq. (18) is already very accurate for $`m`$ as small as 1. This allows us to include, within the reach of the present analysis, the case of exponential profiles. A third point that we have noted is that, for *fixed* $`m`$, there is an *optimal truncation* of the asymptotic expansion, beyond which, as is well known in the general context of asymptotic analysis, increasing the number of terms in the expansion does not improve the accuracy of the estimate. For example, for $`m=1`$, the optimal truncation occurs at the fourth term, for which the attained relative error is $`6\times 10^7`$. For simplicity, in the following part of this Section we will record a number of interesting analytical expressions restricted to their leading order. Of course, the asymptotic analysis provided here would allow us to give explicitly any higher order term, not shown below, if so desired. ### 3.1 Total luminosity and central potential for a spherical system with an R<sup>1/m</sup> projected luminosity profile From Eqs. (4) and (18) the total luminosity is found to be $$L=I_0R_\mathrm{e}^2\frac{2\pi m}{b^{2m}}\mathrm{\Gamma }(2m)I_0R_\mathrm{e}^22\pi ^{3/2}e^{1/3}e^{2m}\sqrt{m}[1+\mathrm{O}(m^1)],$$ (19) where we have used the fact that $`b^{2m}e^{1/3}(2m)^{2m}[1+\mathrm{O}(m^1)]`$. Following C91, the central potential of the spherically symmetric density distribution associated with the Sersic law is given by: $$\mathrm{\Phi }_0=G\frac{M}{L}I_0R_\mathrm{e}\frac{4\mathrm{\Gamma }(1+m)}{b^m}.$$ (20) Here $`M`$ is the total mass of the system, and the mass–to–light ratio is taken to be constant. Thus, an asymptotic estimate at given $`I_0`$, $`R_\mathrm{e}`$ is $$\mathrm{\Phi }_0\frac{GM}{L}I_0R_\mathrm{e}2^{5/2}\pi ^{1/2}e^{1/6}\frac{\sqrt{m}}{(2e)^m}[1+\mathrm{O}(m^1)].$$ (21) We note that, for $`m=1`$ and $`m=4`$, a truncation of Eq. (19) to the leading term is characterized by a relative error of 5.7 per cent and of 1.5 per cent, respectively. The corresponding truncation on Eq. (21) is associated with a relative error of 8.6 per cent and 2.3 per cent. The proper normalization of the R<sup>1/m</sup> profile, to be considered for a case with given scales $`L`$ and $`R_\mathrm{e}`$, is $$I(R)=\frac{L}{R_\mathrm{e}^2}\frac{b^{2m}e^{b\eta ^{1/m}}}{2\pi m\mathrm{\Gamma }(2m)},$$ (22) which thus provides the useful quantity $$I_\mathrm{e}=I(\eta =1)\frac{L}{R_\mathrm{e}^2}\frac{1}{2\pi ^{3/2}\sqrt{m}}[1+\mathrm{O}(m^1)],$$ (23) so that $$\mathrm{\Phi }_0\frac{GM}{R_\mathrm{e}}\frac{2^{3/2}}{\pi e^{1/6}}\left(\frac{e}{2}\right)^m[1+\mathrm{O}(m^1)].$$ (24) For $`m=1`$ and $`m=4`$, a truncation of Eq. (23) to the leading term is characterized by a relative error of 7.3 per cent and of 1.8 per cent, respectively. The corresponding truncation on Eq. (24) is associated with a relative error of 3.1 per cent and 0.8 per cent. 
## 4 Conclusions In this paper, the full asymptotic expansion for the dimensionless scale factor $`b(m)`$ appearing in the Sersic profile has been constructed. It is shown that this expansion, even when truncated to the first four terms as $$b(m)=2m\frac{1}{3}+\frac{4}{405m}+\frac{46}{25515m^2}$$ (25) performs much better than the formulae given by C89, C91 and PS97, even for $`m`$ values as low as unity, with relative errors smaller than $`10^6`$. The use of this simple formula is thus recommended both in theoretical and observational investigations based on the Sersic law. With the aid of this formula, we have been able to clarify a number of interesting properties associated with the Sersic profile. The additional material presented in Appendix A can be compared to the simple power law $`R^2`$, often used in the past to fit the photometric profiles of elliptical galaxies. ###### Acknowledgements. We would like to thank F. Simien for several useful suggestions. This work was supported by MURST, contract CoFin98. L.C. was also partially supported by ASI, contract ASI-ARS-96-70. ## Appendix A Remarks on a power law approximation The surface brightness profile given in Eq. (1) is sometimes expressed as (see Eqs. -) $$I(R)=I_\mathrm{e}e^{b(1\eta ^{1/m})}.$$ (26) In this case, for a *fixed* location $`\eta >0`$, we may consider an asymptotic expansion of the surface brightness profile at constant $`I_\mathrm{e}`$ for $`m>>1`$. This is obtained by introducing a stretched radial coordinate $`\xi =\mathrm{ln}\eta `$ and by noting that $$b(1\eta ^{1/m})=b(1e^{\xi /m})=2\xi \left(1\frac{1}{6m}+\frac{2}{405m^2}\mathrm{}\right)\left(1+\frac{\xi }{2m}+\frac{\xi ^2}{6m^2}\mathrm{}\right).$$ (27) Thus, we find $$I(R)\frac{I_\mathrm{e}}{\eta ^2}[1+\mathrm{O}(m^1)].$$ (28) We may recall here that the photometric profiles of elliptical galaxies have often been described in the past in terms of power laws (see, e.g., Hubble 1930). A naive inspection of the first term omitted in the expansion (A2) suggests that Eq. (A3) is adequate provided $`|\xi (\xi 1/3)/m|<<1`$. Note that the term involved vanishes at $`\xi =0`$ and at $`\xi =1/3`$. This is an indication that the quality of the power law approximation is asymmetric, with a modest bias to the outer region. Consider the function $`f(\xi )=b(1e^{\xi /m})+2\xi `$. The quantity $`\mathrm{exp}(f)`$ gives the ratio between the $`R^{1/m}`$ profile and its $`R^2`$ approximation. The function $`f`$ diverges to $`\mathrm{}`$ both for $`\xi \mathrm{}`$ (i.e., $`\eta 0`$) and for $`\xi +\mathrm{}`$ (i.e., $`\eta +\mathrm{}`$), and has a single maximum $`f_M`$ at $`\xi _M`$, defined by the relation $`e^{\xi _M/m}=2m/b`$, where $$f_M=b2m+2m\mathrm{ln}\left(\frac{2m}{b}\right)\frac{1}{36m}+\mathrm{O}(m^2).$$ (29) Note that $`2m/b`$ is close to unity (in fact, $`\xi _M1/6`$), i.e., that $`\xi _M/m`$ is small. Therefore, the power law profile is slightly underluminous with respect to the $`R^{1/m}`$ profile in the radial range between the effective radius and an outer radius $`\xi _r2\xi _M1/3`$, where $`f(\xi _r)=0`$, which coincides with the outer location identified by a previous naive inspection (see comment after Eq. \[A3\]). Outside such radial range the power law profile is brighter than the $`R^{1/m}`$ profile. In such region, where $`f<0`$, we may ask how far (in radial range) Eq. (A3) applies, by studying the condition $`1e^fϵ`$, i.e., $`|f|ϵ`$. We expand $`f`$ for negative values of $`\xi `$ around $`\xi =0`$, and around $`\xi _r`$ for $`\xi >\xi _r`$. 
The range of applicability of Eq. (A3) is thus constrained by the condition $`(13ϵ)^m\begin{array}{c}<\hfill \\ \hfill \end{array}\eta \begin{array}{c}<\hfill \\ \hfill \end{array}(1+3ϵ)^me^{1/3}`$. This situation is illustrated in Fig. 1.
no-problem/9911/hep-ph9911428.html
ar5iv
text
# Non-Equilibrium Steady States and Transport in the Classical Lattice ϕ⁴ Theory ## Abstract We study the classical non-equilibrium statistical mechanics of scalar field theory on the lattice. Steady states are analyzed near and far from equilibrium. The bulk thermal conductivity is computed, including its temperature dependence. We examine the validity of linear response predictions, as well as properties of the non-equilibrium steady state. We find that the linear response theory applies to visibly curved temperature profiles as long as the thermal gradients are not too strong. We also examine the transition from local equilibrium to local non-equilibrium. The understanding of the dynamics of non–equilibrium field theories is important to many areas in physical sciences, from processes in inflation or baryogenesis in the early universe, transport processes in condensed matter, to the possible states of hadronic matter in heavy ion collisions, such as quark–gluon plasma, disoriented chiral condensates and color superconducting states. However, many problems linger even in the basic understanding of non-equilibrium statistical mechanics and transport. It is clearly desirable to address these problems regarding the non-equilibrium dynamics of field theories while making as few assumptions about the dynamics of the theory as possible. In this work, we study the steady state dynamics of classical massless $`\varphi ^4`$ lattice field theory in $`(d+1)`$ dimensions ($`d=1,3`$), under weak and strong thermal gradients. We study the classical field theory on a lattice since the problems are well defined and techniques are available to construct non-equilibrium states from first principles. It is not clear to us how to address the questions we pose in the full quantum theory without making some drastic assumptions. Our computations are non–perturbative. Furthermore, not only is the temperature within the system dynamical, but even the question of whether local equilibrium is achieved is not an assumption but is determined dynamically by the system. The equilibrium properties of classical $`\varphi ^4`$ theory have been studied in the past, including its ergodic properties, Hamiltonian dynamics and phase transitions. However, the kind of questions we address here have not been answered in the previous literature. Near equilibrium, linear response theory supposedly holds; yet there is no means to address its regime of validity within the theory itself since the computation is performed in equilibrium. By explicitly constructing non-equilibrium steady states near equilibrium, we examine the validity of the linear response theory. We then construct steady states of the system under stronger thermal gradients and study their physical properties. Here, fundamental questions arise, such as under which conditions local equilibrium is achieved. Local equilibrium, an assumption that equilibrium concepts can be applied locally to a problem which might be globally non-equilibrium, is widely used. In fact, thermalization is a concept which is assumed, often tacitly, in many applications of non-equilibrium physics. However, as a system moves away from equilibrium, what precisely constitutes ‘local equilibrium’ becomes unclear and it is of interest to understand what kind of deviations develop from it and why. 
These problems are also non-trivial from the statistical mechanical point of view; even small departures from equilibrium into non-equilibrium steady states (such as the those we study) already lead to peculiar behaviors including a divergent Gibbs entropy $`S_G`$ ($`S_G\mathrm{}`$) and a multi-fractal steady state measure. The steady-state distributions far from equilibrium are not well understood classically and very little is known concerning their quantum counterparts. We would like to see to what extent these peculiar statistical measures influence the steady state thermal profiles, $`T(x)`$. Classical field theories are relevant to the high temperature dynamics of quantum field theories and have proved effective, for instance, in computing finite temperature properties of the standard model . Furthermore, non-equilibrium dynamics of classical field theories is of interest in its own right, an understanding of which is essential to understanding the dynamics of the quantum theory. The approach we adopt here can be applied to classical lattice theories quite generally. Transport properties have been previously studied in scalar quantum field theory using linear response theory and the theory is known to have a classical, finite temperature limit for correlation functions. Yet the question of how to relate our results to those results is far from trivial and will not be pursued here. We start with the Lagrangian $$=\frac{1}{2}\left(\frac{\stackrel{~}{\varphi }(\stackrel{~}{x})}{\stackrel{~}{x}_\mu }\right)^2+\frac{\stackrel{~}{g}^2}{4}\stackrel{~}{\varphi }(\stackrel{~}{x})^4.$$ (1) This model, when discretized, reduces to a model of lattice vibrations with quartic anharmonicity, with the following dimensionless Hamiltonian $$H(\pi ,\varphi )=\frac{1}{2}\underset{𝐫}{}\left[\pi _𝐫^2+\left(\varphi _𝐫\right)^2+\frac{1}{2}\varphi _𝐫^4\right].$$ (2) Here $`\pi =\varphi /t`$, $`𝐫`$ runs over all sites in the lattice, and the lattice derivative has components $`_k\varphi _𝐫\varphi _{𝐫+𝐞_𝐤}\varphi _𝐫`$ ($`𝐞_𝐤`$ is the unit lattice vector in the $`k`$-th direction). The two theories are related by discretization and the rescalings, $`\varphi _𝐫(t)=a\stackrel{~}{g}\stackrel{~}{\varphi }(\stackrel{~}{𝐫},\stackrel{~}{t}),t=\stackrel{~}{t}/a,𝐫=\stackrel{~}{𝐫}/a`$, where $`a`$ is the lattice spacing. The equations of motion, $`\mathrm{}\varphi =\varphi ^3`$, are solved on a spatial grid, using two methods: fifth and sixth order Runge-Kutta, and leap-frog algorithms. In order to generate a stationary non-equilibrium statistical ensemble, thermal boundary conditions are imposed on the equations of motion. Specifically, at $`x=0`$ and $`x=L`$, we add two time-reversal invariant fields which act to dynamically thermalize these boundaries at given temperatures $`T_1`$ and $`T_2`$ (a more detailed account will be given elsewhere). For $`d>1`$ we impose periodic boundary conditions on the other directions. Apart from the thermal boundary conditions, the system evolves according to the dynamics dictated by the Hamiltonian (2). We used from $`10^6`$ to $`10^9`$ time steps of $`dt`$ from $`0.1`$ to $`0.001`$, with observables being sampled every $`\mathrm{\Delta }t=20100dt`$. In $`d=1`$, the lattice size was varied from $`L=20`$ to 8000, while in $`d=3`$ it ranged from $`50\times N\times N`$ ($`N`$ ranging from 3–20) to $`1000\times 3\times 3`$. 
We have verified that when $`T_1=T_2`$, these boundary conditions dynamically set all the temperatures inside the system to be equal to the boundary temperatures and reproduce the equilibrium canonical measure $`\rho _{eq}(\pi ,\varphi )\mathrm{exp}[H(\pi ,\varphi )/T_1]`$ at all points. By controlling $`T_1`$ and $`T_2`$ we can begin to explore the non-equilibrium steady state. One question we would like to address is how the temperature profile $`T(x)`$ behaves. Near equilibrium one would expect a linear profile, but beyond that the shape is unknown. In our near equilibrium simulations, $`T_1<T_2`$, we find a linear temperature profile and recover transport given by Fourier’s law, as shown in Fig. 1 (a). However, far from thermal equilibrium ($`T_1T_2`$), the temperature profile develops significant curvature, seen in 1 (b),(c) for the ratio $`T_2/T_1=10,20`$. It is of interest to understand the physics behind these temperature profiles. We would further like to understand until what point linear response and local equilibrium provide reasonable descriptions. Equilibrium: $`T_1=T_2`$. A standard approach to thermal conductivity utilizes equilibrium correlation functions to compute near-equilibrium transport. This linear response approach uses the Green–Kubo formula, $$\kappa (T)=\frac{1}{T^2}_0^{\mathrm{}}𝑑t𝑑𝐫𝒯^{0x}(𝐫,t)𝒯^{0x}(𝐫_0,0)_{eq},$$ (3) where the autocorrelation function is evaluated in the canonical ensemble, $`T_1=T_2`$. For our lattice calculation, we replace $`𝑑𝐫`$ with a lattice sum. It is interesting to note that the integrand in (3) has been argued to develop a long time tail behavior of $`t^{d/2}`$, leading to the divergence of (3) in $`d=1`$ . In Fig. 2 (top), a typical autocorrelation function for (1+1) dimensions is plotted to several hundred times the mean free time, which is well into the regime where long-time tails would be evident. The time integral is given in Fig. 2 (bottom), showing that the integral (3) is finite, which we attribute to the ‘on-site’ nature of the $`\varphi ^4`$ interaction, in contrast to some of the other models . Interestingly enough, we do find that the transient behavior of the Green–Kubo integrand is quite close to $`t^{1/2}`$ up to a few ten times the mean free time, after which it decays much faster. Consistent results are found in $`d=3`$ as well. The computed $`\kappa (T)`$ is shown in Fig. 3 (top). Near Equilibrium: $`T_1<T_2`$. Near equilibrium, we find that a linear temperature profile emerges dynamically. The thermal conductivity $`\kappa `$ is then obtained through Fourier’s law: $$\kappa (T)=\frac{𝒯^{0x}_{NE}}{T},𝒯^{0x}=_t\varphi \varphi ,$$ (4) where $`𝒯^{0x}_{NE}`$ is the heat flux averaged over the non-equilibrium steady state. Attention is paid to verifying the linear response properties by varying the temperature difference $`|T_2T_1|`$ around the same average temperature. $`T(x)`$ is the local temperature defined through an ideal gas thermometer, by $`T(x)=\pi ^2(x)_{NE}`$, where $`\pi (x)`$ is the momentum density. This will serve as a convenient definition as long as local equilibrium is achieved and the momentum distributions are gaussian. Here $`\mathrm{}_{NE}`$ indicates the ensemble average over the non-equilibrium steady state. To obtain the transport properties, each simulation is run long enough for observables such as $`𝒯^{0x}`$, the energy density as well as distribution functions to converge. In Fig. 3 (top), we compile the Green-Kubo and direct measurements of $`\kappa `$, plotted as a function of $`T`$. 
We find that these independent computations are quite consistent with each other. $`\kappa `$ is found to have a temperature dependence $$\kappa =AT^\gamma ,\{\begin{array}{cc}\gamma =1.38(2),A=2.72(4)\hfill & \text{(1+1) dimensions}\hfill \\ \gamma =1.64(4),A=9.1(2)\hfill & \text{(3+1) dimensions}\hfill \end{array}.$$ (5) This behavior is similar to that of lattice phonons at high temperature. We have also verified that a sensible bulk behavior exists, as shown in Fig. 3 (bottom); the thermal conductivity is independent of $`L`$ when it is larger than the mean free path, which, on the lattice, is of order of the conductivity. In trying to understand near equilibrium physics, one might be tempted to assign a statistical measure, such as $`\rho _{NE}(\pi ,\varphi )\mathrm{exp}[H(\pi ,\varphi )/T(x)]`$ to the non-equilibrium stationary state, or similar measures which assume some form for $`T(x)`$. Strictly speaking, this is not correct; the phase space measures which describe steady state non-equilibrium systems are multi-fractal (whether one studies shearing, heat flow and so forth), converging to smooth distributions (Boltzmann) only in the equilibrium limit. An important consequence is that the dynamical space is necessarily of lesser dimension than the equilibrium phase space. This in turn results in additional correlations of $`\pi (x)`$ and $`\varphi (x)`$. One can also see that the non-equilibrium measure is not locally Boltzmann since quantities such as $`\pi (x)\varphi (x^{})0`$ for $`xx^{}`$ which is also reflected in the non-zero energy flow, while $`\pi (x)=0`$ near and far from equilibrium. Far From Equilibrium: $`T_1T_2`$. When the temperature gradients are larger and we are no longer in the linear regime, the temperature profile becomes visibly curved. An example of such a non–linear temperature profile are given in Fig. 1. It should be noted that inside the boundaries, the dynamics is that of only the $`\varphi ^4`$ theory and the temperature profile is determined by it. When the temperature varies substantially in the system, one cause for the non–linearity is the temperature dependence of the thermal conductivity. If this were the only cause of non–linearity, the temperature profile can be determined within the region by integrating Eq. (4) (when $`\gamma 1`$) $$T(x)=T_1\left[1\left(1\left(\frac{T_2}{T_1}\right)^{1\gamma }\right)\frac{x}{L}\right]^{\frac{1}{1\gamma }}.$$ (6) $`T(x)`$ is a function only of $`x/L`$ so that it is consistent with a smooth continuum limit. This expression for $`T(x)`$ provides very good descriptions of the measured profiles with significant curvature, as can be seen in Fig. 1 where Eq. (6) (dashes) is almost indistinguishable from the measured steady state profiles (solid). This indicates that linear response extends well beyond the regimes of small temperature differences, when applied locally. À priori, this was not clear. By continuing to increase the ratio $`T_2/T_1`$ in the simulation, we do eventually reach steady state situations where the linear response formula no longer works. At this point, we begin to see indications that the concept of local equilibrium also becomes more tenuous. To analyze these questions more concretely, we need a dimensionless measure of how strong the thermal gradient is. A natural choice we adopt is $`\lambda T/T`$, where $`\lambda `$ is the mean free path. 
In our model, the heat capacity per unit volume, $`C_V`$, and the sound speed, $`c_s`$, are of order unity, so that elementary kinetic theory suggests that the mean free path is $`\lambda d\kappa `$, where $`d`$ is the spatial dimension. To quantify the departures from the linear response formula, we plot in Fig. 4 (for one spatial dimension) the deviation of the measured heat flux to that obtained from linear response prediction using Eqs. (4)—(6) (denoted $`𝒯^{0x}_{NE}`$ and $`𝒯^{0x}_{LR}`$, respectively), as a function of the quantity $`\kappa (T)T/T`$. The figure includes different lattice sizes and temperatures. We see that for $$\kappa (T)\frac{T}{T}1$$ (7) linear response theory holds quite well. This includes systems with significant curvature in $`T(x)`$ such as Fig. 1. Eventually, for sufficiently strong gradients, the measured heat flow begins to deviate from the linear response results; the system does not conduct heat as well as its linear response theory prediction. At this point, we observe simultaneously the departure of other quantities from a ‘local equilibrium’ characterized by gaussian momentum distributions. To see the departure from local equilibrium, we follow the behavior of various observables, which include the momentum cumulants, the steady-state momentum distributions $`f(\pi _k)`$ as well as heat flux and correlation functions. For instance, in equilibrium, $`\pi ^4(x)/\pi ^2(x)^2=3`$, $`\pi ^6(x)/\pi ^2(x)^3=15`$ and so forth. In the regime where the linear response theory breaks down, the momentum distributions become more sharply peaked and are no longer gaussian even in the steady state, and the deviations in the cumulants become apparent. This indicates that at least in our theory, higher order corrections to linear response are not that well founded since the concept of temperature becomes tenuous at this point. Since the heat flow is a constant, we also note that the local equilibrium condition (7) is more likely to be satisfied at the higher temperature end. Similar behavior is seen in (3+1) dimensions. We have further examined the behavior of the (coarse grained) Boltzmann entropy $`S_B`$. While Gibbs entropy is known to be singular, any similar divergence in the coarse grained $`S_B`$ would only be evident if one extrapolated the measured values to the continuum limit. Instead, we consider if the notion of local entropy changes significantly as the system moves away from equilibrium. To this end, we compute the Boltzmann entropy $`S_B`$ from the 1- and 2-body densities $`f^{(1)}(\pi (x),\varphi (x))`$ and $`f^{(2)}(\pi (x),\varphi (x),\pi (x^{}),\varphi (x^{}))`$ in the non-equilibrium steady states, from $`S_B^{(k)}=𝑑\mu ^{(k)}f^{(k)}\mathrm{log}f^{(k)}`$. We find that $`S^{(1)}`$ does not shift noticeably from its equilibrium value regardless of how far the system is from equilibrium. Further, $`S^{(2)}`$ $`(2S^{(1)})`$ is only slightly less than its upper limit $`2S^{(1)}`$ and remains so even far from equilibrium. So unlike $`S_G`$ $`\left(VS_B^{(1)}\right)`$, $`S_B`$ is found to be rather insensitive to the non-equilibrium nature of the system. While this could be a manifestation of coarse graining, it does suggest that some local thermodynamic concepts might still be useful in making connections with non-equilibrium thermodynamics. The behavior of scalar lattice field theory near and far from equilibrium has been explored, and the present study allows us to establish a number of features regarding the non-equilibrium stationary state. 
The bulk thermal conductivity and its dependence on the temperature over few decades was found. To our knowledge, such a computation from first principles, without assuming linear response, has not been performed previously. This was done using both linear response (equilibrium) and direct (near equilibrium) approaches. The $`d=1`$ Green-Kubo integrals in our theory are non-divergent and agree with the direct computation, in contrast to some of the other models. We find that linear response theory is quite robust and works well even when the steady state thermal profile has significant curvature, and as a consequence we derive an analytic description for the profile. By driving the system farther from equilibrium, we are able to see when linear response breaks down. Surprisingly, this is found to be near the same point where local equilibrium becomes noticeably violated. A sufficient condition for noticeable departures from linear response and local equilibrium in $`d=1,3`$ seems to be $`\lambda T/T1/10`$. It would be interesting to examine the dynamics of non-equilibrium phase transitions or explore dynamical cooling of boundary temperatures to provide a means to access more complex non-equilibrium environments, and in particular, the dynamics of ultrarelativistic heavy–ion collisions. We acknowledge support through grants at Keio University and DOE grant DE-FG02-91ER40608.
no-problem/9911/cond-mat9911462.html
ar5iv
text
# The dynamical structure factor in disordered systems ## Abstract We study the spectral width as a function of the external momentum for the dynamical structure factor of a disordered harmonic solid, considered as a toy model for supercooled liquids and glasses. Both in the context of single-link coherent potential approximation and of a single-defect approximation, two different regimes are clearly identified: if the density of states at zero energy is zero, the Rayleigh $`p^4`$ law is recovered for small momentum. On the contrary, if the disorder induces a non vanishing density of states at zero energy, a linear behaviour is obtained. The dynamical structure factor is numerically calculated in lattices as large as $`96^3`$, and satisfactorily agrees with the analytical computations. PACS numbers: 61.43.Fs,63.50.+x The spectrum of vibrational excitations of supercooled liquids and glasses is attracting a great deal of attention from the experimental side , the numerical simulations and also from the analytical point of view . Vibrational excitations on the GHz range develop in the supercooled liquid, by the same temperatures where anomalous behaviour of the specific-heat is found. Some rather substance-independent features in this vibrational spectrum are found. For example, the vibrational density of states presents an excess respect to the usual Debye behaviour ($`g(\omega )\omega ^2`$) known as the Boson Peak. The dynamical structure-factor, $`S(p,\omega )`$, reveals well-defined sound-like peaks for wave-lengths not much greater than the inter-particle distance. Moreover, the speed of this high-frequency sound is close to the one of the low-frequency typical one. For fixed external momentum, $`p`$, the width of the spectral peak grows as $`p^2`$, which has recently been recovered in the mode coupling approximation (see ). However, in Ref. , a simpler model of a disordered three-dimensional harmonic solid was studied. There it was claimed that if some spring-constants are allowed to have a small negative value, (but not so much that negative-energy modes appear) a boson peak develops. In the context of the CPA approximation , the usual $`p^4`$ width of $`S(p,\omega )`$ arisen from Rayleigh-scattering is reported. It was, however, noticed that a rather different scaling appeared at the characteristic frequencies of the Boson-Peak. Furthermore, in Ref. , it was shown a $`p^2`$ broadening for a one-dimensional disordered harmonic solid model. In this paper, we want to investigate the problem, both by numerical and analytical means. Our main finding will be that the Rayleigh-scattering $`p^4`$ broadening holds, unless the solid becomes unstable. That is, if the dynamical-matrix, to be defined later, has an extensive number of negative eigenvalues, the system do present sound-like peaks in its $`S(p,\omega )`$, but with a width proportional to $`p`$, at least for very small values of $`p`$. Otherwise, the standard $`p^4`$ behaviour is to be expected. 
To be more specific, our dynamical matrix in $`D`$ dimensions is given by $`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{xy}{}}\varphi _x_{xy}\varphi _y,`$ (1) $`_{xy}`$ $`=`$ $`{\displaystyle \underset{\mu =1}{\overset{D}{}}}{\displaystyle \frac{1+\alpha _{y,\mu }}{2}}\left(\delta _{xy}\delta _{x,y+\widehat{\mu }}\right)`$ (2) $`+`$ $`{\displaystyle \frac{1+\alpha _{y\widehat{\mu },\mu }}{2}}\left(\delta _{xy}\delta _{x,y\widehat{\mu }}\right),`$ (3) where $`\widehat{\mu }`$ is the lattice unit-vector in the $`\mu `$ direction. Notice that we cannot separate the longitudinal and transverse modes since, for the sake of simplicity, our matrix has no internal indices as in Ref. . The $`\alpha _{y,\mu }`$ is the random part of the spring constant that joins the sites $`y`$ and $`y+\widehat{\mu }`$. In a finite lattice, periodic boundary conditions are applied. The dynamical matrix, verifies the constraint related with traslational symmetry: the vector of all equal components is eigenvector with zero eigenvalue. For a harmonic solid, the object that we will study is the dynamical structure function, $`S(p,E)`$, in the energy rather than in the frequency-domain: $$S(p,E)=\frac{1}{\pi }\mathrm{Im}\underset{ϵ0}{lim}\overline{p|\frac{1}{E+iϵ}|p}.$$ (4) In the above expression, the bra-ket notation has been used and $`|p`$ stands for a normalized plane-wave of momentum $`p`$. As usual, the overline represents the average over the random variables $`\alpha `$. We choose to work in the energy, rather than in the frequency domain, since we mainly want to consider the case where a significant fraction of the spectrum is negative, and so the transformation $`E=\omega ^2`$ is no longer well defined. We study the case where the $`\alpha _{y,\mu }`$ are uncorrelated, random variables whose probability distribution is $$p(\alpha )=(1\rho )\delta (\alpha )+\rho f(\alpha ),0\rho 1.$$ (5) In the above equation, $`\rho `$ is the probability of finding one defect, while $`f`$ is a continuous probability function, that we take flat between $`\lambda `$ and $`0`$ ($`\lambda <0`$). Notice that the disorder of Ref. is recovered, by taking $`\rho =1`$ and $`f`$ Gaussian. The rationale for choosing this kind of disorder is that a spring of an unusually large negative spring-constant induces a negative energy eigenvalue of the dynamical matrix $``$. In fact, if one keeps in Eq.(3) only the spring connecting sites $`1`$ and $`2`$, corresponding to the large negative defect, the eigenvectors are easily shown to be $`\varphi _x=(\delta _{x,1}+\delta _{x,2})/\sqrt{2}`$ and $`(\delta _{x,1}\delta _{x,2})/\sqrt{2}`$. However, assuming that the surrounding springs have their ordered value, one finds that the Hamiltonian (1) is negative for the displacement configuration $`(\delta _{x,1}\delta _{x,2})/\sqrt{2}`$ if the spring-constant verifies $`\alpha <(D+\frac{1}{2})`$. Therefore, one has a contribution of order $`\rho `$ to the density of states over the negative spectrum. Moreover, if we assume that some of the surrounding spring have very small positive values, a contribution of higher-order in $`\rho `$ to the density of states is generated, no matter how small is the negative value of the spring constant, $`\alpha +1`$. Therefore, if the probability of $`\alpha <1.0`$ is non-vanishing, the hybridization of the localized field-configuration described above, with the plane-waves eigenstates will non-trivially modify the eigenvectors of $``$. 
To study this, we shall perform a single-defect calculation of the resolvent. In this way, we learn that the order $`\rho `$ threshold for the presence of negative eigenvalues is not $`\alpha =D1/2`$, as roughly shown in the introduction, but $`D`$. This threshold separate two well defined scaling limits for the width of the $`S(q,E)`$ at small $`q`$: * If the density of states is null at zero energy the imaginary part of the self-energy $`\mathrm{\Sigma }`$ is proportional to $`p^{D+2}`$ (Rayleigh scattering, that yields a spectral width $`p^{D+1}`$ in the frequency domain). * When an extensive number of negative eigenvalues is present, $`\mathrm{\Sigma }p^2`$ (or $`p`$ for the width in the frequency domain). Of course, the $`p^{D+2}`$ contribution will be still present but sub-dominant at low momentum. A crossover might be visible, depending on the strength of the disorder. The dynamical matrix in the presence of an unique defect of amplitude $`\alpha `$ can be split in two terms: $$_{xy}=_{xy}^0+_{xy}$$ (6) where the pure crystal matrix $`^0`$ has plane waves eigenvectors with eigenvalues $$E_0(p)=\underset{\nu }{}(1\mathrm{cos}p_\nu ).$$ (7) The perturbation, hence, connects the two sites $`y^0`$ and $`y^0+\nu `$: $`_{xy}`$ $``$ $`\alpha ^{y^0;\nu }|y^0;\mu y^0;\mu |,`$ (8) $`|y^0;\mu `$ $``$ $`{\displaystyle \frac{\delta _{x,y^0}\delta _{x,y^0+\nu }}{\sqrt{2}}}.`$ (9) The propagator can be written as $$\frac{1}{z}=\frac{1}{z^0}+\frac{1}{z^0}𝒯\frac{1}{z^0},$$ (10) where the resummation of the harmonic series gives $$𝒯|y^0;\nu \frac{\alpha }{1\alpha a(z)}y^0;\nu |,$$ (11) with $$a(z)\frac{1}{D}\frac{d^Dq}{(2\pi )^D}\frac{E_0(q)}{zE_0(q)}.$$ (12) Notice that the correction term in Eq.(11), has an isolated singularity for the value of $`z`$ satisfying $`1\alpha a(z)=0`$. This value decrease monotonically from $`z=0`$, for $`\alpha =D`$, and will be called $`z_\alpha `$. This isolated singularity, correspond to the ground state of the dynamical matrix, the residue being $`\mathrm{\Psi }_F(x)\mathrm{\Psi }_F^{}(y)`$. Therefore, one obtain the wave-function for this eigenvalue $$\mathrm{\Psi }_F(x)\frac{d^Dq}{(2\pi )^D}\frac{1e^{iq_\nu }}{z_\alpha E_0(q)}e^{iqx},$$ (13) that has a localization length of order $`|z_\alpha |^{1/2}`$. We thus see, that unless $`\alpha `$ was exceedingly close to the critical value $`D`$, the eigenvector is strongly localized around the defect. The single-defect approximation to the self-energy, Eq.(14), amounts to consider that each defect only contributes to its own localized eigenvector, and that no other defect is within its localization length. Let us turn back to the original problem, with an extensive number of defects, and repeat the above calculation, neglecting all terms which contain two different defects. The matrix $`𝒯`$ is now a sum of terms like the one in Eq.(11). If we now perform the average over the $`\alpha `$’s and apply Dyson resummation, the self-energy part of the propagator turns out to be $$\mathrm{\Sigma }(z,p)=\rho E_0(p)𝑑\alpha f(\alpha )\frac{\alpha }{1\alpha a(z)},$$ (14) at first order in $`\rho `$ (the defect interactions will generate the order $`\rho ^2`$ and higher order corrections to the single defect result). The width of the $`S(q,E)`$ is simply given by the value of the imaginary part of the self-energy at the peak, whose position can be obtained from the real part of the self-energy, $`E^{\mathrm{max}}(p)E_0(p)+\mathrm{Re}\mathrm{\Sigma }(E_0(p),p)`$. 
Notice that our self energy is proportional to $`E_0(p)`$ so that we are basically getting a finite renormalization of the speed of sound. In order to estimate the imaginary part of the self energy, it is useful to realize that for small positive values of the energy, one has $$a(E+iϵ)\frac{1}{D}i\frac{E^{D/2}}{2^{D/2}\pi ^{D/21}\mathrm{\Gamma }(D/2)}.$$ (15) Therefore, if the probability density $`f(\alpha )`$ do not allow $`\alpha `$ to be smaller than $`D`$, the only imaginary term in Eq.(14) comes from $`a(E+iϵ)`$, and it is of order $`p^2E^{3/2}`$, yielding a value of order $`p^5`$ at the peak. On the other hand, if $`\alpha `$ can be smaller than $`D`$, an imaginary part of order $`p^2f(D)`$ arises from the pole. Led by the functional form of the self-energy in Eq.(14), one can consider self-energies of the form $`f(E)E_0(p)`$, also when $`\rho `$ is not small, and the single defect approximation no longer holds. This is the idea lying under the well-known CPA approximation , where one sets $`\mathrm{\Sigma }(z,p)=(\mathrm{\Gamma }(z)1)E_0(p)`$. A self consistency equation can be readily written: $$\overline{\frac{1+\alpha \mathrm{\Gamma }(z)}{\mathrm{\Gamma }(z)a\left(z/\mathrm{\Gamma }(z)\right)\left(1+\alpha \mathrm{\Gamma }(z)\right)}}=0.$$ (16) It is clear that the width of the $`S(q,E)`$ critically depends on the value of $`\mathrm{\Gamma }(0+iϵ)`$. For the flat distribution of $`\alpha `$ introduced in Eq.(5), one can solve Eq.(16) in the limit of small $`\rho `$, as $`\mathrm{\Gamma }(0)=1+b\rho +𝒪(\rho ^2)`$ obtaining $$b=D\left(1D\mathrm{log}\frac{D}{|\lambda +D|}+i\pi \frac{D}{\lambda }\theta (D+\lambda )\right),$$ (17) where $`\theta (x)=1`$ for $`x<0`$ and zero otherwise. It is easy to check that the single-defect result is exactly the same as in Eq.(17). Fixing from now on $`D=3`$, Eq.(16) in the general case, can be numerically solved. We choose to write it as a fixed-point equation, and solve it recursively. The only tricky part, is the evaluation of $`a(z)`$ defined in Eq.(12), that we make by calculating the unperturbed density of states by a Monte Carlo simulation. In this way, we are able to obtain estimates of $`\mathrm{\Gamma }(z)`$ with an accuracy of $`10^4`$. We find that in the CPA approximation, there are the same two regimes as in the single-defect computation, separated by a critical line, whose coordinates are shown in table I. The appearance of the $`p^5`$ regime coincides, and it is due to, the vanishing of the negative-energy spectrum. However, as explained in the introduction, one expects rather that what vanish is the order $`\rho `$ contribution to the density of states in this region. In fact, we expect a non-vanishing density of states all the way down to $`\lambda =1`$, where no negative spring exists. It is remarkable that when $`\rho =1`$, the CPA approximation keeps a non-vanishing fraction of negative eigenvalues up to $`\lambda =1.2`$, quite close to the correct value. The agreement between the CPA approximation and the results from numerical simulations for the $`S(E,p)`$ turns out to be better than $`5\%`$ in the two extreme cases (see figs. 2,3). That is the one where there is no negative springs ($`\lambda =1`$) and the one where the spring constant can be very negative ($`\lambda =10`$). Close to the CPA critical line (in the unstable crystal side) the agreement is still quite good. On the (CPA) stable border side, a scaling between $`p^6`$ and $`p^5`$ is found, depending on the density of defects, $`\rho `$. 
For example, with the value of the density of defects $`\rho =0.1`$, in fig. 1) the transition between the two regimes found by the CPA approximation at $`\lambda =2.15`$ is very evident. We numerically computed the structure factor $`S(E,q)`$ for our model utilizing the method of moments , which allows to study the statistical properties of the eigenvalues and the eigenvectors of large dynamical matrices, avoiding their diagonalization. This method is a clever modification of the Lanczos method and it shares the same weakness, namely the lack of orthogonality when a too large number of moments is computed. Another limitation is the necessity of setting a finite value of $`ϵ`$ in Eq.(4). A reasonable value for $`ϵ`$ is the mean distance, between eigenstates, which is roughly given by $`(2D\lambda )/V`$, where $`V`$ is the lattice volume (that is $`10^5`$ in our $`96^3`$ lattice). If the width of the peak is comparable with $`ϵ`$, the results will be definitely affected by finite-size effects. The limitation related with the number of moments, is not serious for the central part of the $`S(q,E)`$ curve, but can be rather strong if one wants to calculate the tails of the distribution. In practice, we have used $`30`$ moments for $`10`$ different disorder realizations, finding very satisfying results, unless the peak height approaches values of order $`10^5`$, when finite size effects turns out to be important. We fit our results to the Breit-Wigner form: $$S(E,q)=\frac{𝒩}{(EE_0)^2+\mathrm{\Sigma }^2},$$ (18) which satisfactorily describes the peak in all cases, although usually overestimates the tails of the distribution. The position of the peak $`E_0`$ is linear in $`p^2`$ as expected. More importantly, the results from numerical simulations (fig. 4) show very clearly that the critical line previously discussed is an artifact of the CPA approximation and a real system actually becomes unstable when there is an extensive number of negative springs, no matter how small is the fraction of them, as expected from the simple analytic considerations sketched previously. For larger values of $`p`$ the $`p^5`$ contribution is no longer negligible and a complicated intermediate scaling appears. In this work, we used the fact that both the single-defect approximation and the CPA approximation allow to write the self-energy of a disordered harmonic solid, for small $`E`$ and $`p`$, as $`f(E)E_0(p)`$. An unavoidable consequence of this functional form is that violations of the Rayleigh $`p^5`$ scaling of the width of the peaks of the $`S(p,E)`$ are intrinsically tied with the appearance of a negative energy spectrum (i. e. solid instability). We have argued that an extensive number of negative eigenvalues should appear, as soon as the spring-constants are allowed to be negative. Our numerical calculations on $`96^3`$ disordered lattices confirm the above picture. For wave-lengths in the range $`[a,10a]`$ ($`a`$ being the lattice constant), a crossover regime may come out, depending on the strength of the disorder, from the competition between the $`p^2`$ term and $`p^5`$ term in the self-energy. Therefore, the vibrational excitations of the disordered solid can look qualitatively similar to the finite temperature Instantaneous Normal Modes of supercooled liquids and glasses, where negative eigenvalues are always present. We gratefully acknowledge interesting discussions with A. Gonzalez, M. Mézard, G. Ruocco and G. Viliani. V.M.M. is a M.E.C. 
fellow and has been partially suported by CYCyT(AEN97-1708 and AEN99-1693). Our numerical computations have been carried out on the Kalix2 pentium cluster of the University of Cagliari.
no-problem/9911/hep-ph9911251.html
ar5iv
text
# Dynamical Supersymmetry Breaking with Gauged 𝑈⁢(1)_𝑅 Symmetry preprint: TMUP-HEL-9912 TIT/HEP-435 KEK-TH-659 Abstract We propose a simple model of dynamical supersymmetry breaking in the context of minimal supergravity with gauged $`U(1)_R`$ symmetry. The model is based on the gauge group $`SU(2)\times U(1)_R`$ with three matters. Since the $`U(1)_R`$ symmetry is gauged, the Fayet-Iliopoulos D-term appears due to the symmetry of supergravity. On the other hand, the superpotential generated dynamically by the $`SU(2)`$ gauge dynamics leads to run away potential. Since the supersymmetric vacuum condition required by the D-term potential contradicts the one required by the superpotential, supersymmetry is broken. The supersymmetry breaking scale is controlled by the dynamical scale of the $`SU(2)`$ gauge interaction. We can choose the parameters in our model for vanishing cosmological constant. Our model is phenomenologically viable with the gravitino mass of order 1 TeV or 10 TeV. The supersymmetric extension is one of the most promising way to provide a solution to the gauge hierarchy problem beyond the standard model . However, since none of the superpartners has been observed yet, supersymmetry should be broken at low energies. The origin of the supersymmetry breaking still remains as the one of the biggest mysteries in supersymmetric theories. The models of the spontaneous supersymmetry breaking at the tree level were proposed many years ago . However, since these models had dimensionful parameters given by hand, there was no explanation for the hierarchy between the scale of the supersymmetry breaking and Planck scale. More complete model may be the model in which the origin of the scale of the supersymmetry breaking can be explained by the model itself. An example of such model is the dynamical supersymmetry breaking model . While this model have no dimensionful parameter from the beginning, the dimensionful parameter is induced by the non-perturbative gauge dynamics. It seems to be possible to extend such a model into the supergravity model, if the four dimensional space-time is flat. In this paper, we propose a simple model of dynamical supersymmetry breaking in the context of the minimal supergravity with gauged $`U(1)_R`$ symmetry. Our model is based on the gauge group $`SU(2)\times U(1)_R`$. Since the $`U(1)_R`$ symmetry is gauged, the Fayet-Iliopoulos D-term appears due to the symmetry of supergravity. On the other hand, the non-perturbative effect of the $`SU(2)`$ gauge dynamics generates the superpotential dynamically, which leads to the run away potential. Since the supersymmetric vacuum condition required by the D-term potential contradicts the one required by the superpotential, supersymmetry is broken. The supersymmetry breaking scale is controlled by the scale of the $`SU(2)`$ gauge dynamics. Analyzing the potential minimum, we find that the cosmological constant can vanish, if the parameters in our model are appropriately chosen. The mass spectrum of the model is also discussed. The scalars with non-zero $`U(1)_R`$ charges get soft supersymmetry breaking masses at the tree level by the vacuum expectation value of the D-term. These masses are the same order of the magnitude of the gravitino mass. On the other hand, for the gauginos in the minimal supersymmetric standard model we can consider two possibilities. One is to introduce the higher dimensional term in the gauge kinetic function. 
The other is to consider the anomaly mediation scenario without a non-trivial gauge kinetic function. The gaugino masses are found to be of the same order as the gravitino mass in the former case, or a few orders of magnitude smaller than the gravitino mass in the latter case. Our model is based on the gauge group $`SU(2)\times U(1)_R`$ with the following matter contents. <sup>*</sup><sup>*</sup>*In the following, we do not discuss the cancellation of the gauge anomaly $`[U(1)_R]^3`$ and the mixed gravitational anomaly of $`U(1)_R`$. The discussion depends on the full particle contents of the theory, and it is outside the main subject of this paper . Here, we simply assume that these anomalies are canceled, if all particle contents are considered with the appropriate $`U(1)_R`$ charge assignment.

| | $`SU(2)`$ | $`U(1)_R`$ |
| --- | --- | --- |
| $`Q_1`$ | 2 | $`-1`$ |
| $`Q_2`$ | 2 | $`-1`$ |
| $`S`$ | 1 | $`+4`$ |

The general renormalizable superpotential at the tree level is $`W=\lambda S\left[Q_1Q_2\right],`$ (1) where square brackets denote the contraction of the $`SU(2)`$ index by the $`ϵ`$-tensor, and $`\lambda `$ is a dimensionless coupling constant. We assume that $`\lambda `$ is real and positive in the following. It is known that a superpotential is generated dynamically by the non-perturbative (instanton) effect of the $`SU(2)`$ gauge dynamics . The total effective superpotential is found to be $`W_{eff}=\lambda S\left[Q_1Q_2\right]+{\displaystyle \frac{\mathrm{\Lambda }^5}{\left[Q_1Q_2\right]}},`$ (2) where the second term is the dynamically generated superpotential, and $`\mathrm{\Lambda }`$ is the dynamical scale of the $`SU(2)`$ gauge interaction. Note that the supersymmetric vacuum lies at $`S\to \mathrm{}`$ and $`Q_1,Q_2\to 0`$, if only the F-term potential is considered. Next, let us consider the D-term potential. The gauged $`U(1)_R`$ symmetry is impossible in the globally supersymmetric theory, since the generators of the $`U(1)_R`$ symmetry and supersymmetry do not commute with each other. On the other hand, in the supergravity theory the $`U(1)_R`$ symmetry can be gauged as if it were a usual global symmetry . However, there is a crucial difference: the Fayet-Iliopoulos D-term of the gauged $`U(1)_R`$ symmetry appears due to the symmetry of supergravity. This fact is easily understood by the standard formula for supergravity theories . Using the generalized Kähler potential $`G=K+\mathrm{ln}|W|^2`$, the D-term is given by $`D=\sum _iq_i(\partial G/\partial z_i)z_i`$, where $`q_i`$ is the $`U(1)_R`$ charge of the field $`z_i`$. Note that the contribution from the superpotential leads to the constant term, since the superpotential has $`U(1)_R`$ charge 2. With the above particle contents, the D-term potential is found to be $`V_D={\displaystyle \frac{g_R^2}{2}}\left(4S^{\dagger }S-Q_1^{\dagger }Q_1-Q_2^{\dagger }Q_2+2M_P^2\right)^2,`$ (3) where $`M_P=M_{pl}/\sqrt{8\pi }`$ is the reduced Planck mass, $`g_R`$ is the $`U(1)_R`$ gauge coupling, and the minimal Kähler potential, $`K=S^{\dagger }S+Q_1^{\dagger }Q_1+Q_2^{\dagger }Q_2`$, is assumed. This assumption is justified by our final result with $`\mathrm{\Lambda }\ll M_P`$, which means that the $`SU(2)`$ gauge interaction is weak at the Planck scale. Note that the supersymmetric vacuum condition required by the D-term potential contradicts the one required by the effective superpotential of eq.(2). Therefore, supersymmetry is broken. This conclusion remains correct as long as there are no other superfields which have negative $`U(1)_R`$ charges. We give some comments on this point in the final part of this paper.
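The origin of the constant piece in eq. (3) can be made explicit. The following two-line check (ours, using only the charge assignments of the table and the definition $`G=K+\mathrm{ln}|W|^2`$) shows why any superpotential of R-charge 2 shifts the D-term by a field-independent constant:

```latex
% Every monomial of W has total U(1)_R charge 2 (here q_S + q_{Q_1} + q_{Q_2} = 4 - 1 - 1 = 2),
% so the Euler identity for the R-charges gives
\sum_i q_i\, z_i \frac{\partial W}{\partial z_i} = 2W
\;\Longrightarrow\;
\sum_i q_i\, z_i \frac{\partial \ln |W|^2}{\partial z_i} = 2 ,
% which, restoring units, is the constant 2 M_P^2 in eq. (3); the Kaehler part of G
% supplies the field-dependent terms 4 S^\dagger S - Q_1^\dagger Q_1 - Q_2^\dagger Q_2.
```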
Let us analyze the total potential in our model. Here, note that the cosmological constant should vanish. This requirement comes not only from the observations of the present universe but also from the consistency of our discussion. Since it is not clear whether the superpotential discussed above can be dynamically generated even in curved space, the space-time should be flat for our discussion to be correct. Note that we cannot take the usual strategy, namely, adding a constant term to the superpotential, since such a term is forbidden by the $`U(1)_R`$ gauge symmetry. It is a non-trivial problem whether we can obtain the vanishing cosmological constant in our model. Assuming that the potential minimum lies on the D-flat direction of the SU(2) gauge interaction, we take the vacuum expectation values such that $`\langle S\rangle =s`$ and $`\langle Q_i^\alpha \rangle =v\delta _i^\alpha `$, where $`i`$ and $`\alpha `$ denote the flavor and $`SU(2)`$ indices, respectively. Here, we can always make $`s`$ and $`v`$ real and positive by symmetry transformations. The total potential is given by $`V(v,s)`$ $`=`$ $`e^K\left[\left(\lambda v^2+sW\right)^2+2v^2\left(\lambda s-{\displaystyle \frac{\mathrm{\Lambda }^5}{v^4}}+W\right)^2-3W^2\right]`$ (4) $`+`$ $`{\displaystyle \frac{g_R^2}{2}}\left(4s^2-2v^2+2\right)^2,`$ (5) where $`K`$ and $`W`$ are the Kähler potential and superpotential, respectively, which are given by $`K`$ $`=`$ $`s^2+2v^2,`$ (6) $`W`$ $`=`$ $`\lambda sv^2+{\displaystyle \frac{\mathrm{\Lambda }^5}{v^2}}.`$ (7) Here, all dimensionful parameters are taken to be dimensionless with the normalization $`M_P=1`$. The first line in eq.(4) comes from the F-term (except for the $`-3W^2`$ term) and the remainder is the D-term potential. Since the potential is very complicated, it is convenient to make some assumptions for the values of the parameters. First, assume that $`g_R\gg \lambda ,\mathrm{\Lambda }^5`$. Since the D-term potential is proportional to $`g_R^2`$ and positive definite, the potential minimum is expected where $`V_D`$ is as small as possible. If we assume $`s\ll 1`$ and $`v\approx 1`$, the potential can be rewritten as $`V\approx e^2\left(\lambda ^2-3\mathrm{\Lambda }^{10}\right).`$ (8) It is found that $`\lambda \approx \sqrt{3}\mathrm{\Lambda }^5`$ is required in order to get the vanishing cosmological constant. Let us consider the stationary conditions of the potential. Using the assumptions $`s\ll 1`$ and $`v=1+y`$ ($`|y|\ll 1`$), the stationary conditions can be expanded with respect to $`s`$ and $`y`$. Considering the relations $`g_R\gg \lambda \approx \sqrt{3}\mathrm{\Lambda }^5`$, the condition $`\partial V/\partial y=0`$ leads to $`y\approx s^2-{\displaystyle \frac{e^2\lambda ^2}{2g_R^2}}.`$ (9) Using this result, the expansion of the condition $`\partial V/\partial s=0`$ leads to $`s\approx {\displaystyle \frac{\lambda \mathrm{\Lambda }^5}{8\lambda ^2-\mathrm{\Lambda }^{10}}}.`$ (10) By the numerical analysis, the above rough estimation is found to be a good approximation. The result of numerical calculations is the following. $`y\approx 4.7\times 10^{-3},`$ (11) $`s\approx 6.8\times 10^{-2}.`$ (12) Here, we used the values $`\mathrm{\Lambda }=10^{-3}`$, $`\lambda \approx 1.8\mathrm{\Lambda }^5`$ and $`g_R=10^{-12}`$. For these values of the parameters, we can obtain the vanishing cosmological constant. Note that the numerical values of eqs. (11) and (12) are almost independent of the actual value of $`\mathrm{\Lambda }`$, if the condition $`g_R\gg \mathrm{\Lambda }^5`$ is satisfied and the ratio $`\lambda /\mathrm{\Lambda }^5`$ is fixed. This can be seen in the approximate formulae of eqs.(9) and (10).
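The approximate formulae can be checked against the quoted numbers by simple arithmetic. The following sketch (ours; it merely evaluates eqs. (9)-(10) with the parameter values quoted below eq. (12)) reproduces eqs. (11)-(12) to within roughly ten percent, consistent with the remark that the estimation is rough:

```python
import math

# Parameter values quoted in the text (reduced Planck units, M_P = 1).
Lam = 1e-3            # dynamical scale of the SU(2) interaction
lam = 1.8 * Lam**5    # Yukawa coupling, close to sqrt(3)*Lam^5
gR  = 1e-12           # U(1)_R gauge coupling

# Approximate stationary conditions, eqs. (10) and (9).
s = lam * Lam**5 / (8 * lam**2 - Lam**10)
y = s**2 - math.exp(2) * lam**2 / (2 * gR**2)

print(f"s ~ {s:.2e}   (eq.(12) quotes 6.8e-02)")   # -> s ~ 7.2e-02
print(f"y ~ {y:.2e}   (eq.(11) quotes 4.7e-03)")   # -> y ~ 5.2e-03
```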
We can choose the value of $`\mathrm{\Lambda }`$ in order to get a phenomenologically acceptable mass spectrum. Now we discuss the mass spectrum in our model. Using the above values of the parameters, the gravitino mass is estimated as $`m_{3/2}=e^{K/2}W\approx 3.0\times {\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}.`$ (13) The gravitino mass contributes to the masses of scalar partners via the tree level interactions of supergravity. Note that there is another contribution, if scalar partners have non-zero $`U(1)_R`$ charges. In this case, they also get a mass from the vacuum expectation value of the D-term, and it is estimated as $`m_{D-term}^2=qg_R^2\langle D\rangle \approx \left(7.3\times {\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}\right)^2q,`$ (14) where $`q`$ is the $`U(1)_R`$ charge. This mass squared is always positive for the scalar partners with positive $`U(1)_R`$ charges. The mass is of the same order of magnitude as the gravitino mass. This is because $`g_R`$ is canceled out in the above estimation (see eq.(9)). For gaugino masses, we can consider two cases. One is to introduce a gauge invariant higher dimensional term $`S([Q_1Q_2])^2/M_P^5`$ in the gauge kinetic function. In this case, gaugino masses are found to be of the same order as the gravitino mass. The other is to consider the anomaly mediation of supersymmetry breaking without the non-trivial gauge kinetic function. The higher dimensional term $`S([Q_1Q_2])^2/M_P^5`$ in the gauge kinetic function can be forbidden to all orders by a discrete symmetry. In this case, gaugino masses are given by the gravitino mass times beta functions, which are a few orders of magnitude smaller than the gravitino mass. Considering the experimental bound on gaugino masses in the minimal supersymmetric standard model , the gravitino mass is taken to be of the order of 1 TeV or 10 TeV in the former case or the latter case, respectively. From this phenomenological constraint, the dynamical scale of the $`SU(2)`$ gauge interaction is found to be of the order of $`10^{15}`$ GeV for both cases. This means that we have to fine-tune $`\lambda \sim 10^{-15}`$ in order to have the vanishing cosmological constant at tree level. <sup>§</sup><sup>§</sup>§This small Yukawa coupling is consistent with our discussion in the following sense. Since $`S`$ has the vacuum expectation value, a mass for $`Q_i`$ is generated through the Yukawa coupling in eq.(2). The relation $`\lambda \langle S\rangle \ll \mathrm{\Lambda }`$ is needed in order not to change our result from the $`SU(2)`$ gauge dynamics. This fine-tuning is also necessary in order to get soft supersymmetry breaking masses of the same order as the gravitino mass.
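For orientation, the scales involved can be made explicit numerically. The following back-of-envelope sketch (ours) evaluates the prefactors of eqs. (13)-(14) for $`\mathrm{\Lambda }=10^{-3}M_P\approx 2.4\times 10^{15}`$ GeV; it is an order-of-magnitude illustration only:

```python
# Order-of-magnitude check of the mass scales quoted in the text.
M_P = 2.4e18                       # reduced Planck mass in GeV
Lam = 1e-3 * M_P                   # dynamical SU(2) scale, ~2.4e15 GeV

m32     = 3.0 * Lam**5 / M_P**4    # gravitino mass, eq. (13)
m_Dterm = 7.3 * Lam**5 / M_P**4    # D-term soft mass for unit R-charge, eq. (14)

print(f"m_3/2    ~ {m32/1e3:.1f} TeV")      # ~ 7 TeV
print(f"m_D-term ~ {m_Dterm/1e3:.1f} TeV")  # ~ 18 TeV, same order as m_3/2
```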
The easiest way to keep our discussion correct is to give up the cancellation of all the anomalies, and to consider the Green-Schwarz mechanism by introducing the dilaton field, as is done in the model with the anomalous $`U(1)`$ gauge symmetry . In this case, one can construct a full model combined with our hidden sector . Although a new Fayet-Iliopoulos D-term appears due to the $`U(1)_R`$ gauge anomaly, its magnitude is suppressed compared with that of the gauged $`U(1)_R`$ symmetry. Hence, our results obtained above are little changed. Unfortunately, the introduction of the dilaton field in the model causes new difficult problems, such as the stabilization of the dilaton potential, the vacuum expectation value of the dilaton F-term, and so on . Therefore, it is desirable to construct a model with our supersymmetry breaking mechanism but without the Green-Schwarz mechanism. Indeed, we can construct an anomaly free model with some extensions of the model presented in this paper . ###### Acknowledgements. We would like to thank Hitoshi Murayama for helpful comments. This work was supported in part by the Grant-in-aid for Science and Culture Research from the Ministry of Education, Science and Culture of Japan (#11740156, #3400, #2997). N.M. and N.O. are supported by the Japan Society for the Promotion of Science for Young Scientists.
# 1 The main theorem. We recall in this section the definition of the Kauffman bracket skein module (KBSM) and formulate the main result of the paper. ###### Definition 1.1 (\[P-1, H-P-1\]) Let $`M`$ be an oriented 3-manifold, $`R`$ a commutative ring with identity and $`A`$ an invertible element of $`R`$. Let $`ℒ_{fr}`$ be the set of unoriented framed links in $`M`$, including the empty link $`\mathrm{}`$. Let $`S`$ be the submodule of $`Rℒ_{fr}`$ generated by the skein expressions $`L_+-AL_0-A^{-1}L_{\mathrm{}}`$, where the triple $`L_+,L_0,L_{\mathrm{}}`$ is shown in Fig.1.1, and $`L\sqcup T_1+(A^2+A^{-2})L`$, where $`T_1`$ denotes the trivial framed knot. We define the Kauffman bracket skein module (KBSM), denoted by<sup>1</sup><sup>1</sup>1 The standard notation for the KBSM is $`𝒮_{2,\mathrm{}}(M;R,A)`$, \[P-1, H-P-1\], but in this paper we do not discuss skein modules other than the KBSM, so for simplicity we drop $`(2,\mathrm{})`$ from the notation. $`𝒮(M;R,A)`$, as the quotient $`𝒮(M;R,A)=Rℒ_{fr}/S`$. Fig. 1.1. Notice that $`L^{(1)}=-A^3L`$ in $`𝒮(M;R,A)`$, where $`L^{(1)}`$ denotes a link obtained from $`L`$ by one positive twist of the framing of $`L`$. We call this the framing relation. For the sake of shortness of notation, we will often drop $`(R,A)`$ from $`𝒮(M;R,A)`$ and write simply $`𝒮(M)`$, as long as it is unambiguous. ###### Theorem 1.2 Assume that $`(A^k-1)`$ is invertible in $`R`$ for any $`k>0`$. Then $$𝒮(M_1\mathrm{\#}M_2)=𝒮(M_1)\otimes 𝒮(M_2),$$ where $`M_1\mathrm{\#}M_2`$ denotes the connected sum of compact 3-manifolds $`M_1`$ and $`M_2`$. In particular we have: ###### Corollary 1.3 If $`R`$ is the field of rational functions in the variable $`A`$, $`ℚ(A)`$, or $`R`$ is the field of complex numbers, $`ℂ`$, and $`A`$ is not a root of unity, then $`𝒮(M_1\mathrm{\#}M_2)=𝒮(M_1)\otimes 𝒮(M_2)`$. ## 2 Basic properties of skein modules Below we list some elementary properties of the KBSM (which also hold for other skein modules), \[P-1\] (compare also \[H-P-1, P-3\]). ###### Proposition 2.1 1. An orientation preserving embedding of 3-manifolds $`i:M\to N`$ yields a homomorphism of skein modules $`i_{*}:𝒮(M)\to 𝒮(N)`$. The above correspondence leads to a functor from the category of 3-manifolds and orientation preserving embeddings (up to ambient isotopy) to the category of $`R`$-modules (with a specified invertible element $`A\in R`$). 2. (i) If $`N`$ is obtained from $`M`$ by adding a 3-handle to it (i.e. capping off a hole, so that $`M=N\mathrm{\#}D^3`$), and $`i:M\to N`$ is the associated embedding, then $`i_{*}:𝒮(M)\to 𝒮(N)`$ is an isomorphism. (ii) If $`N`$ is obtained from $`M`$ by adding a 2-handle to it, and $`i:M\to N`$ is the associated embedding, then $`i_{*}:𝒮(M)\to 𝒮(N)`$ is an epimorphism. 3. If $`M_1\sqcup M_2`$ is the disjoint sum of 3-manifolds $`M_1`$ and $`M_2`$ then $$𝒮(M_1\sqcup M_2)=𝒮(M_1)\otimes 𝒮(M_2).$$ 4. (Universal Coefficient Property) Let $`r:R\to R^{}`$ be a homomorphism of rings (commutative with 1). We can think of $`R^{}`$ as an $`R`$ module. Then the identity map on $`ℒ_{fr}`$ induces the isomorphism of $`R^{}`$ (and $`R`$) modules: $$\overline{r}:𝒮(M;R,A)\otimes _RR^{}\to 𝒮(M;R^{},r(A)).$$ 5. If $`F`$ is a surface, then the KBSM $`𝒮(F\times I)`$ is a free $`R`$-module with basis $`B(F)`$ consisting of links on $`F`$, up to ambient isotopy of $`F`$, without contractible components (but including the empty link). Results in Proposition 2.1 are well known; compare \[P-1, H-P-1, P-S, P-3\]. We clarify some points of them below: 1. If $`i:M\to N`$ is an orientation reversing embedding then $`i_{*}`$ is a $`Z`$-homomorphism and $`i_{*}(Aw)=A^{-1}i_{*}(w)`$. 2. (i)
It holds because the co-core of a 3-handle is $`0`$-dimensional. (ii) It holds because the co-core of a 2-handle is $`1`$-dimensional. 3. This is a consequence of the well known property of short exact sequences, \[Bl\]: If $`0\to A^{}\to A\to A^{\prime \prime }\to 0`$ and $`0\to B^{}\to B\to B^{\prime \prime }\to 0`$ are short exact sequences of $`R`$-modules then $`0\to A^{}\otimes B+A\otimes B^{}\to A\otimes B\to A^{\prime \prime }\otimes B^{\prime \prime }\to 0`$ is a short exact sequence. 4. This important fact follows easily from the right exactness of the tensor functor (applied to a short exact sequence) and from the “five lemma” (see for example \[C-E\]). 5. This applies in particular to a handlebody, because $`H_n=P_n\times I`$, where $`H_n`$ is a handlebody of genus $`n`$ and $`P_n`$ is a disc with $`n`$ holes. ## 3 Outline of the proof of the main theorem. 1. Any compact 3-dimensional manifold can be obtained from a handlebody by adding 2- and 3-handles to it. The KBSM of a handlebody is a well understood free module (Prop. 2.1(5)), adding a 3-handle does not change the module (Prop. 2.1(2)(i)) and adding a 2-handle gives new relations to the skein module, but no new generators (Prop. 2.1(2)(ii)). 2. If a 2-handle is added, all new relations are obtained by sliding links along the 2-handle (Lemma 4.1). There is usually an infinite collection of relations, but in the case of a 2-handle added along a meridian curve (creating $`S^2`$), we prove that the relations form a “controllable” sequence, which, over the field $`ℚ(A)`$, allows us to reduce all curves cutting the sphere, but not more. 3. The embedding $`j:M_1\mathrm{\#}D^3\sqcup M_2\mathrm{\#}D^3\to M_1\mathrm{\#}M_2`$ yields an epimorphism of the KBSM. To see that every link in $`𝒮(M_1\mathrm{\#}M_2)`$ is in the image, it suffices to consider relations given by very simple slidings (Fig. 6.1), using the “second side” of $`S^2`$. We use also the fact that $`A^k-1`$ is invertible in $`R`$. $`\sqcup `$ denotes the disjoint sum. The connected sum $`M\mathrm{\#}D^3`$ is a manifold obtained from $`M`$ by cutting off a hole in $`M`$. In particular, by Proposition 2.1(2)(i), the skein modules of $`M`$ and $`M\mathrm{\#}D^3`$ coincide. 4. We will start the proof of Theorem 1.2 by considering the case of $`M_1`$ and $`M_2`$ being handlebodies; $`M_1=H_n,M_2=H_m`$. $`H_n\mathrm{\#}H_m`$ is equal to $`H_{n+m}`$ with a 2-handle added along the boundary of the meridian disc separating $`H_n`$ from $`H_m`$. We show that the embedding $`i:H_n\sqcup H_m\to H_n\mathrm{\#}H_m`$ yields an isomorphism of skein modules, assuming that $`A^k-1`$ is invertible in $`R`$. For this we show that all sliding relations are generated by the slidings of Fig. 5.1. 5. We generalize 4. by considering any $`M_1\mathrm{\#}M_2`$ and observing that $`M_1\mathrm{\#}M_2`$ is obtained from $`H_n\mathrm{\#}H_m`$ by adding 2-handles to $`H_n`$ or $`H_m`$. 6. In steps 4 and 5 we have to show that even a very complicated sliding (say, a curve in $`H_n`$ first being pushed to $`H_m`$ and back several times and only then slid) can be reduced to the slidings of Fig. 5.1 or slidings taking place totally in $`H_n`$ or $`H_m`$ (compare Lemma 6.1). ## 4 Handle sliding lemma. ###### Lemma 4.1 Let $`(M,\partial M)`$ be a 3-manifold with boundary $`\partial M`$, and let $`\gamma `$ be a simple closed curve on the boundary. Let $`N=M_\gamma `$ be the 3-manifold obtained from $`M`$ by adding a 2-handle along $`\gamma `$. Furthermore let $`ℒ_{fr}^{gen}`$ be a set of framed links in $`M`$ generating $`𝒮(M)`$.
Then $`𝒮(N)=𝒮(M)/J`$, where $`J`$ is the submodule of $`𝒮(M)`$ generated by the expressions $`L-sl_\gamma (L)`$, where $`L\in ℒ_{fr}^{gen}`$ and $`sl_\gamma (L)`$ is obtained from $`L`$ by sliding it along $`\gamma `$ (i.e. handle sliding). Proof: Let $`S^1\times [-1,1]`$ be a tubular neighborhood of $`\gamma `$ in $`\partial M`$ ($`\gamma =S^1\times \{0\}`$), and consider a 2-handle added along $`\gamma `$, that is, $`D^2\times D^1`$ and a homeomorphism $`\varphi :\partial D^2\times D^1\to S^1\times [-1,1]`$. Then $`N=M\cup _\varphi D^2\times D^1`$. Let $`f:M\to N`$ be the natural embedding; then $`f_{*}:𝒮(M)\to 𝒮(N)`$ is an epimorphism, Prop. 2.1(2)(ii) (any link in $`N`$ can be pushed (ambient isotoped) to $`M`$). Furthermore any skein relation can be performed in $`M`$. The only difference between the KBSM of $`M`$ and that of $`N`$ lies in the fact that some nonequivalent links in $`M`$ can be equivalent in $`N`$; the difference lies exactly in the possibility of sliding a link in $`M`$ along the added 2-handle (that is, $`L`$ is moving from one side of the co-core of the 2-handle to the other); compare Fig.4.1. The proof of Lemma 4.1 is completed. $`\mathrm{}`$ Fig. 4.1. Lemma 4.1 allows us to write an (infinite) presentation of the Kauffman bracket skein module of any compact 3-manifold, using a Heegaard decomposition and the knowledge of the module for handlebodies; Proposition 2.1(5). This general presentation is not satisfactory and in some cases we can write a simpler presentation. ## 5 Epimorphism We show in this section that for a connected sum $`M_1\mathrm{\#}M_2`$, and $`(A^k-1)`$ invertible in $`R`$, $`𝒮(M_1\mathrm{\#}M_2)`$ is generated by links whose components are in $`M_1`$ or $`M_2`$. The above fact follows from the slightly more general proposition which we prove below. ###### Proposition 5.1 Let $`M`$ be an oriented 3-manifold, $`D`$ a meridian disk in $`M`$, that is, a properly embedded $`2`$-disk in $`M`$, and $`\gamma =\partial D`$. If $`R`$ is a ring with $`(A^k-1)`$ invertible for any $`k`$ then the embedding $`j:(M-D)\to M_\gamma `$, where $`M_\gamma `$ is obtained from $`M`$ by adding a 2-handle along $`\gamma `$, induces an epimorphism of the KBSM, $`j_{*}:𝒮(M-D)\to 𝒮(M_\gamma )`$. Proof: The regular neighborhood, $`V_D=[-1,1]\times D`$, of $`D`$ in $`M`$ can be projected onto the 2-disk $`D_p=[-1,1]\times [0,1]`$ (then $`V_D=D_p\times [0,1]`$), and we use $`D_p`$ to present link diagrams, compare Fig.5.1. In $`𝒮(M_\gamma )`$ one has the sliding relations described in Fig. 5.1 (with blackboard framing). These relations can be written as $`p(z_k)=(-A^2-A^{-2})z_k`$, where $`z_k`$ is a link in $`M`$ in general position with $`D`$ and cutting it $`k`$ times; Fig.5.1. After simplifying the formula, using the Kauffman bracket skein relations, one gets: $`p(z_k)=(-A^{2k+2}-A^{-2k-2})z_k+\mathrm{\Sigma }_{i=0}^{k-2}w_i(A)z_i`$ and finally $$(A^{2k+4}-1)(A^{2k}-1)z_k=A^{2k+2}\mathrm{\Sigma }_{i=0}^{k-2}w_i(A)z_i,$$ that is, $`(A^{2k+4}-1)(A^{2k}-1)z_k`$ is a linear combination of links with a smaller than $`k`$ intersection number with the 2-sphere $`D_\gamma `$. For $`(A^{2k+4}-1)(A^{2k}-1)`$ invertible in $`R`$, one can eliminate $`z_k`$ from the set of generators. Using induction, one can eliminate all elements of $`𝒮(M_\gamma )`$ which cut the 2-sphere $`D_\gamma `$ non-trivially. Thus $`j_{*}`$ is an epimorphism. $`\mathrm{}`$ Fig. 5.1.
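The coefficient in the last displayed formula comes from an elementary factorization, which we spell out for the reader's convenience (this intermediate computation is ours; it is obtained by equating the two expressions for $`p(z_k)`$ above):

```latex
\left(A^{2k+2}-A^{2}+A^{-2k-2}-A^{-2}\right)z_k=\Sigma _{i=0}^{k-2}w_i(A)\,z_i ,
% and, after multiplying both sides by A^{2k+2},
A^{4k+4}-A^{2k+4}-A^{2k}+1=\left(A^{2k+4}-1\right)\left(A^{2k}-1\right),
% which gives  (A^{2k+4}-1)(A^{2k}-1)\, z_k = A^{2k+2}\, \Sigma_{i=0}^{k-2} w_i(A)\, z_i .
```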
###### Corollary 5.2 If $`R`$ is a ring with $`(A^k-1)`$ invertible for any $`k>0`$ then the embedding $`j:M_1\mathrm{\#}D^3\sqcup M_2\mathrm{\#}D^3\to M_1\mathrm{\#}M_2`$ induces an epimorphism of the KBSM: $`j_{*}:𝒮(M_1\mathrm{\#}D^3\sqcup M_2\mathrm{\#}D^3)\to 𝒮(M_1\mathrm{\#}M_2)`$. ## 6 Proof of the main theorem We start the proof by showing that the handle slidings described in Fig. 5.1 generate all handle sliding relations as long as $`(A^k-1)`$ is invertible for any $`k>0`$. This allows us to prove the main theorem for handlebodies, as we know the basis of the KBSM in this case, so we are able to choose only those handle slidings which reduce those links from the basis which cut $`S^2`$ (from the connected sum) non-trivially. Finally we prove the main theorem for any compact 3-manifold, using the fact that such a manifold is obtained from a handlebody by adding 2- and 3-handles. We say that a handle sliding $`sl_\gamma `$ of a link $`L`$ in $`M`$ along $`\gamma \subset \partial M`$ has support in a submanifold $`V`$ of $`M`$ if $`\gamma \subset V`$ and $`L`$ and $`sl_\gamma (L)`$ are identical outside $`V`$ <sup>2</sup><sup>2</sup>2Here we consider the concrete realization of links, not the ambient isotopy class. To avoid confusion, we will often write $`D_L`$ for a representative of the ambient isotopy class of a link $`L`$.. ###### Lemma 6.1 Let $`D`$ be a meridian disk in $`M`$, $`\gamma =\partial D`$, and let $`V_D=[-1,1]\times D`$ be a regular neighborhood of $`D`$ in $`M`$. (a) If a link $`L`$ is disjoint from the disks $`\{-1,1\}\times D`$ and the sliding $`sl_\gamma `$ has support in $`(-1,1)\times D`$, then the relation $`L=sl_\gamma (L)`$ holds in $`𝒮(M)`$. (b) Let $`M_0`$ be a component of $`M`$ which contains $`D`$ and assume that $`L\cap M_0`$ is a trivial knot; then the sliding relation $`L=sl_\gamma (L)`$ holds in $`𝒮(M)`$ for any sliding of $`L`$. (c) Let $`ℒ_{fr}^{gen}`$ be a set of links generating $`𝒮(M)`$ and for each $`L`$ in $`ℒ_{fr}^{gen}`$ choose a representative $`D_L`$ embedded in $`M`$ in such a way that $`D_L`$ cuts the meridian disk $`D`$ in the minimal number of points in its ambient isotopy class. Let $`I_0`$ be the submodule of $`𝒮(M)`$ generated by the sliding relations of Fig. 5.1, for $`z_k=D_L`$, $`L\in ℒ_{fr}^{gen}`$. In other words $`I_0`$ is generated by the expressions $`p(D_L)+(A^2+A^{-2})D_L`$. Then for any representative, $`\stackrel{~}{D}_L`$, of a link $`L`$ in $`M`$ a sliding of Fig. 5.1 preserves the element of $`𝒮(M)/I_0`$ (i.e. $`(p(L)+(A^2+A^{-2})L)\in I_0`$). (d) If $`R`$ is a ring with $`(A^k-1)`$ invertible for any $`k`$ and $`I_0`$ is defined as in (c), then any sliding relation holds in $`𝒮(M)/I_0`$. Proof: (a) Because $`V_D`$ is a 3-disk and $`(V_D)_\gamma `$, obtained from $`V_D`$ by adding a 2-handle along $`\gamma `$, is a 3-disk with a hole, adding this 2-handle does not change the KBSM (Prop. 2.1(2)(ii)). Therefore in $`𝒮(V_D)`$ and $`𝒮(M-\{-1,1\}\times D)`$ any relation of the form $`L=sl_\gamma (L)`$ holds. The embedding $`i:(M-\{-1,1\}\times D)\to M`$ induces the homomorphism of the KBSM $`i_{*}:𝒮(M-\{-1,1\}\times D)\to 𝒮(M)`$, thus $`i_{*}(L)=i_{*}(sl_\gamma (L))`$. Lemma 6.1 follows, as the assumptions of the lemma are chosen in such a way that any allowed sliding in $`M`$ is also a sliding in $`M-(\{-1,1\}\times D)`$. (b) $`L\cap M_0`$, being a trivial knot, can be isotoped into $`V_D`$ without changing the ambient isotopy type of the result of the sliding, which will also be a trivial knot by part (a) of the lemma. (c) Any link $`L`$ in $`M`$ can be written in $`𝒮(M)`$ as a linear combination of elements of $`ℒ_{fr}^{gen}`$.
A sliding described in Fig. 5.1 does not depend on the presentation of $`L`$ (or of the elements of $`ℒ_{fr}^{gen}`$), so the lemma follows. Notice that the sliding relation of Fig. 5.1 performed on a link $`D_L`$ disjoint from $`D`$ holds already in $`𝒮(M)`$. (d) Let $`D_L`$ be a realization of a link $`L`$ in $`M`$. As $`D_L`$ is arbitrary, one can assume that the sliding has support in $`V_D`$. Using relations from $`I_0`$ and the conclusion of part (c) of the lemma together with Proposition 5.1, we can see that our slidings are performed on links in $`M-\{-1,1\}\times D`$ and have support in $`V_D`$, so by (a) of the lemma they do not introduce any new relation. To visualize the assertion that modulo $`I_0`$ we need to slide only links $`D_L`$ in $`M-\{-1,1\}\times D`$ with sliding support in $`V_D`$, consider the disks $`D_1=\{-1\}\times D`$ and $`D_2=\{1\}\times D`$ with $`\gamma _i=\partial D_i`$. Let $`D_L`$ be an arbitrary link in $`M`$ and $`sl_\gamma `$ a sliding with support in $`int(V_D)`$. Slidings along $`\gamma _1`$ and $`\gamma _2`$ of the type described in Fig. 5.1 yield relations in $`𝒮(M)`$ satisfied in $`𝒮(M)/I_0`$ (as $`\gamma _i`$ is parallel to $`\gamma `$). These slidings allow us to reduce $`D_L`$ to a linear combination of links (curve systems) in $`M-D_1-D_2`$. Furthermore the support of the sliding $`sl_\gamma `$ is (unchanged) in $`int(V_D)`$. $`\mathrm{}`$ As a corollary we get the main theorem for handlebodies. ###### Corollary 6.2 (Main theorem for handlebodies) Let $`D`$ be a meridian disk of a handlebody $`H_n`$, $`\gamma =\partial D`$. If $`R`$ is a ring with $`(A^k-1)`$ invertible for any $`k`$ then the embedding $`j:(H_n-D)\to (H_n)_\gamma `$ induces an isomorphism $$j_{*}:𝒮(H_n-D)\to 𝒮((H_n)_\gamma ).$$ Proof: By the handle sliding lemma (Lemma 4.1) one has $`𝒮((H_n)_\gamma )=𝒮(H_n)/J`$ where $`J`$ is the submodule of $`𝒮(H_n)`$ generated by the slidings $`L-sl_\gamma (L)`$. By Lemma 6.1, $`J=I_0`$ for any generating set $`ℒ_{fr}^{gen}`$ of $`𝒮(H_n)`$. We can assume that $`H_n=P_n\times [0,1]`$ and $`D=C\times [0,1]`$ for an arc $`C`$ properly embedded in the disk with $`n`$ holes, $`P_n`$. Let $`B(P_n)`$ be the basis of $`𝒮(H_n)`$ as described in Proposition 2.1(5). Let $`B_i(P_n)`$ be the subset of $`B(P_n)`$ composed of links with geometric intersection number with $`C`$ equal to $`i`$. By Lemma 6.1, $`I_0`$ is generated by the sliding relations of Fig. 5.1, one relation for each element of $`B_i(P_n)`$ for $`i>0`$. Because $`B(P_n)`$ is a basis of $`𝒮(H_n)`$, $`B_0(P_n)`$ is a basis of $`𝒮(H_n)/I_0`$. On the other hand $`B_0(P_n)=B(P_n-C)`$ is also a basis of $`𝒮(H_n-D)`$, thus $`j_{*}`$ is an isomorphism. $`\mathrm{}`$ We are now ready to prove the main theorem. ###### Theorem 6.3 Let $`D`$ be a meridian disk of $`M`$ and $`\gamma =\partial D`$. If $`R`$ is a ring with $`(A^k-1)`$ invertible for any $`k`$ then the embedding $`j:(M-D)\to M_\gamma `$ induces an isomorphism $$j_{*}:𝒮(M-D)\to 𝒮(M_\gamma )$$ Proof: $`M_\gamma `$ can be obtained from $`(H_n)_\gamma `$ by adding to $`(H_n)_\gamma `$ some 2-handles (disjoint from the 2-handle added along $`\gamma `$) and some 3-handles. By Lemma 4.1 and Proposition 2.1(2), $`𝒮(M_\gamma )`$ is obtained from $`𝒮((H_n)_\gamma )`$ by sliding links generating $`𝒮((H_n)_\gamma )`$ along these 2-handles. Denote these slidings by $`sl_h`$. Consider any link $`L`$ in $`(H_n)_\gamma `$ and any sliding $`sl_h`$. We can choose a representative $`D_L`$ of $`L`$ so that $`D_L`$ and $`sl_h(D_L)`$ are identical in the neighborhood of $`S^2=D_\gamma `$.
By Lemma 6.1 we can present $`D_L`$ in $`𝒮((H_n)_\gamma )`$ as a linear combination of links which are disjoint from $`S^2`$ and differ from $`D_L`$ only in a small neighborhood of $`S^2`$. Thus the sliding relation $`D_L-sl_h(D_L)`$ is a linear combination of sliding relations in $`(H_n)_\gamma -S^2`$. Therefore $`j_{*}:𝒮(M-D)\to 𝒮(M_\gamma )`$ is an isomorphism. The proof of Theorem 6.3 is complete. $`\mathrm{}`$ ###### Corollary 6.4 Let $`R`$ be a ring with $`(A^k-1)`$ invertible for any $`k`$; then (i) If $`S^2`$ is a 2-sphere embedded in $`M`$ then the embedding $`j:M-S^2\to M`$ yields an isomorphism of the KBSM $`j_{*}:𝒮(M-S^2)\to 𝒮(M)`$. (ii) The embedding of $`M_1\mathrm{\#}D^3\sqcup M_2\mathrm{\#}D^3`$ into $`M_1\mathrm{\#}M_2`$ yields an isomorphism of the KBSM. (iii) $`𝒮(M_1\mathrm{\#}M_2)`$ is isomorphic to $`𝒮(M_1)\otimes 𝒮(M_2)`$. (iv) $`𝒮(S^1\times S^2)=R`$ Proof: The case (i) follows immediately from Theorem 6.3. The case (ii) corresponds to the case of (i) when $`S^2`$ separates $`M`$. $`M_1\mathrm{\#}M_2-S^2`$ and $`M_1\mathrm{\#}D^3\sqcup M_2\mathrm{\#}D^3`$ differ only by parts of their boundaries, so their KBSM are the same. The case (iii) follows from (ii) by Proposition 2.1(3). The case (iv) follows from (i), as $`S^1\times S^2-S^2`$ is a 3-disk with two holes. The case (iv) is also a special case of a general theorem in \[H-P-2\]. $`\mathrm{}`$ ###### Corollary 6.5 If $`S^2`$ is a 2-sphere embedded in $`M`$ and the KBSM $`𝒮(M-S^2;Z[A^{\pm 1}],A)`$ is free then the embedding $`j:M-S^2\to M`$ yields a monomorphism of the KBSM $`j_{*}:𝒮(M-S^2;Z[A^{\pm 1}],A)\to 𝒮(M;Z[A^{\pm 1}],A)`$. Proof: Let the set $`\{x_\alpha \}`$ be a basis of $`𝒮(M-S^2;Z[A^{\pm 1}],A)`$. By the Universal Coefficient Property it is also a basis of $`𝒮(M-S^2;ℚ(A),A)`$. By Theorem 6.3 the set $`\{j_{*}(x_\alpha )\}`$ is a basis of $`𝒮(M;ℚ(A),A)`$. Therefore it is a $`Z[A^{\pm 1}]`$ linearly independent set in $`𝒮(M;Z[A^{\pm 1}],A)`$, and therefore $`j_{*}`$ is a monomorphism. $`\mathrm{}`$ ## 7 Generalizations and Speculations. Theorem 1.2 does not hold for the ring $`R=Z[A^{\pm 1}]`$. As observed in \[P-2\], Theorem 4.4, $`𝒮(M_1\mathrm{\#}M_2;Z[A^{\pm 1}],A)`$ often contains a torsion part. A general description (generators and relators) of the KBSM is possible by Lemma 4.1, but to have a more meaningful description one should first analyze relative KBSM in the manifolds $`M_1`$ and $`M_2`$; the KBSM of the connected sum would then be the sum of tensor products of “reduced” relative skein modules. Finally one should be able to obtain for the KBSM a Van Kampen-Seifert type theorem for 3-manifolds (glued along surfaces) <sup>3</sup><sup>3</sup>3The recent paper by W. Lofaro is a step in this direction \[Lof\].. The theorem could be reminiscent of the Topological Quantum Field Theory formalism \[At\]. We plan to give a detailed description of the KBSM of connected and disc sums of 3-manifolds in \[P-4\]. Here we quote one relatively simple result (where there is no need to invoke the notion of the relative KBSM). ###### Theorem 7.1 $$𝒮(H_n\mathrm{\#}H_m)=𝒮(H_{n+m})/ℐ$$ where $`ℐ`$ is the ideal generated by the expressions $`z_k-A^6u(z_k)`$, for any even $`k\ge 2`$, and $`z_k\in B_k(P_{n+m})`$, where $`B_k(P_{n+m})`$ is the subset of the basis $`B(P_{n+m})`$ composed of links with geometric intersection number with a disk $`D`$ separating $`H_n`$ and $`H_m`$ equal to $`k`$. $`u(z_k)`$ is a modification of $`z_k`$ in the neighborhood of $`D`$, as shown in Fig. 7.1. Our relation $`z_k=A^6u(z_k)`$ is a result of the sliding relation $`z_k=sl_D(z_k)`$, as illustrated in Fig. 7.2. Fig. 7.1. Fig. 7.2.
Department of Mathematics, University of Maryland College Park, MD 20742 The author is on leave from: Department of Mathematics, The George Washington University 2201 G Str. Funger Hall Washington, D.C. 20052 przytyck@gwu.edu
# Entropy, Macroscopic Information, and Phase Transitions ## I Introduction The relationship between entropy and information has been the subject of a long controversy, almost as old as the Second Law of Thermodynamics. The history of this controversy, closely linked to the analysis of the Maxwell demon, has been presented in detail in the excellent book of reprints collected by Leff and Rex leff . Maxwell devised his demon to show the probabilistic nature of the Second Law of Thermodynamics: a being capable of measuring the position and velocity of the molecules of a gas could in principle violate the Second Law. Operating a door in an adiabatic wall between two gases at different temperatures, the demon could induce a flow of energy from the cold to the hot gas. The conclusion is that information about the microscopic details of a system allows one to beat the Second Law. One of the most relevant sequels of the Maxwell demon is the Szilard engine leff ; szil . It consists of a box with a single-particle gas, i.e., a particle that thermalizes in any collision with the walls. A piston can be introduced (or removed) either at the middle of the box or at two opposite walls (see Figure 1). The engine operates as follows. We insert the piston in the middle of the box and measure in which side the particle gets trapped. Then we let the gas expand quasistatically and remove the piston. In the expansion the gas performs work: $$W=\int _{V/2}^VP\,dV=kT\mathrm{ln}2.$$ (1) This work can be used, for instance, to lift a weight and store $`kT\mathrm{ln}2`$ as potential energy. The energy is taken from the thermal bath, since the internal energy of the gas is constant. Therefore, the Szilard engine extracts energy from a single thermal bath and performs work, in contradiction with the Second Law of Thermodynamics. Notice that, for the engine to work properly, it is absolutely necessary to know in which side the particle gets trapped. In this way, we can exert a pressure on the piston equal and opposite to the pressure of the gas and let it expand quasistatically. On the other hand, if the direction of the pressure were not correct, the gas would expand irreversibly and Eq. (1) would not hold. As in the original Maxwell demon, the Szilard engine can beat the Second Law of Thermodynamics only if some information about the state of the system is available. The literature on the Szilard engine, as well as on the Maxwell demon, has focused mainly on the heat dissipation accompanying the measurement, i.e., the acquisition of information, and/or accompanying the erasure of this information leff ; szil ; landauer ; bennet ; fahn . As an exception, Magnasco presented in Ref. magszil an interesting analysis of the topology of the phase space of the engine. Nevertheless, none of these papers has analyzed one of the obscure points of the Szilard engine, namely, that it consists of microscopic and macroscopic degrees of freedom interacting with each other. This mixture of micro (the particle) and macro (the piston) makes the Szilard engine a rather difficult and unclear problem for many physicists, even for those working on Statistical Mechanics. In this paper I address this question by giving a novel interpretation to one of the steps of the Szilard engine. The insertion of the piston in the middle of the box can be interpreted as a second order phase transition or spontaneous symmetry breaking. The Hamiltonian of the particle is symmetric under the permutation of the two sides of the box.
However, the particle gets trapped in only one of the sides. This is equivalent to what happens in an Ising model when it is driven from a paramagnetic to a ferromagnetic phase in the absence of an external magnetic field. We will see below that all the astounding facts of the Szilard engine are reproduced in the Ising model and in any system exhibiting second order phase transitions. The benefit of this new interpretation is twofold. On one side, it helps to understand better the Szilard engine and the relationship between entropy and information, since we will reach the same conclusions without the use of single-particle gases interacting with pistons. On the other side, it reveals that the consequences of this relationship and the intriguing aspects of the Szilard engine are not restricted to academic and artificial constructions, such as the Maxwell demon and the Szilard engine itself, but are present in any spontaneous symmetry breaking, that is to say, almost everywhere in Nature. The paper is organized as follows. In Section II, I analyze the energetics of two processes in the Szilard engine. Section III is a brief review of the concept of spontaneous symmetry breaking and the Ising model. In Section IV, I introduce two processes in the Ising model which are equivalent to the processes studied in Section II. Section V discusses the implications of the above results on the definition of entropy and on the general validity of the Second Law. Finally, in Section VI, I present some conclusions and a list of open problems. ## II Two processes in the Szilard engine Consider the Szilard gas and the processes $`A`$ and $`C`$ described in Fig. 2. In $`C`$, the piston is inserted in the middle of the box and the particle gets trapped in one of the sides. In $`A`$, the piston is introduced at the rightmost wall and moved slowly to the middle of the box. Then, $`C`$ is the first step of the Szilard cycle and $`A`$ is the inversion of the rest of the cycle (cf. Figs. 1 and 2). Let us investigate the energetics of these two processes, i.e., the energy transfer between the particle and its surroundings. The particle exchanges energy with two external systems: the thermal bath, and some external agent that handles the piston, exerting pressure when it is needed. As in Thermodynamics, I call heat, $`Q`$, the energy transferred from the thermal bath to the particle in a given process, and work, $`W`$, the energy transferred from the system to the external agent. Finally, if $`𝒰`$ is the internal energy of the particle, the First Law of Thermodynamics, $`\mathrm{\Delta }𝒰=Q-W`$, holds for any process. In our particular case, process $`C`$ does not require any work, or at least the work can be arbitrarily small. On the other hand, process $`A`$ involves a compression of the single-particle gas to half of its volume, and in this compression, if carried out quasistatically, a work $`kT\mathrm{ln}2`$ is done by the external agent. Therefore, as defined above, work is given by: $$W_A=-kT\mathrm{ln}2;W_C=0.$$ (2) The internal energy of the particle remains constant since the two processes are isothermal. Thus, the heat in each process is: $$Q_A=-kT\mathrm{ln}2;Q_C=0$$ (3) i.e., along $`A`$, energy is transferred from the system to the thermal bath. The difference in the energetics of $`A`$ and $`C`$ is the key point of the Szilard engine. The engine is nothing but the cycle $`CA^{-1}`$, where $`A^{-1}`$ is the inverse of process $`A`$.
The energetics of $`A^{-1}`$ is $`W_{A^{-1}}=-W_A`$ and $`Q_{A^{-1}}=-Q_A`$, only if $`A^{-1}`$ is the true inversion of $`A`$, i.e., if the external agent exerts a pressure equal to the pressure of the gas and thus the expansion is done quasistatically. In this case, we have $`W_{CA^{-1}}=kT\mathrm{ln}2`$. Notice that so far I have restricted the discussion to energy. The consequences of the above results on the definition of entropy will be explored in Section V. ## III Symmetry breaking transitions I have split the Szilard cycle into processes $`A`$ and $`C`$, and showed that the paradoxical nature of the engine lies in the energetics of these two processes. Moreover, as mentioned in the Introduction, process $`C`$ can be seen as a spontaneous symmetry breaking and process $`A`$ as a forced or non-spontaneous symmetry breaking. In fact, symmetry breaking is the only necessary ingredient to reproduce all the relevant features of the Szilard engine. Let us first recall what a spontaneous symmetry breaking is. If $`ℋ(x)`$ is the Hamiltonian of a system, $`x`$ being a point in the phase space $`\mathrm{\Gamma }`$, Statistical Mechanics prescribes that the probability density for the equilibrium state of the system at temperature $`T`$ is given by the Gibbs distribution: $$\rho _T(x)=\frac{e^{-\beta ℋ(x)}}{Z}$$ (4) where $`\beta =1/kT`$ and $`Z=\int _\mathrm{\Gamma }e^{-\beta ℋ}`$ is the partition function. From Eq. (4) we see that $`\rho _T(x)`$ has the same symmetries as $`ℋ(x)`$. Nevertheless, in some cases, a macroscopic system is not described by the Gibbs distribution. The phase space splits into $`n`$ pieces, $`\mathrm{\Gamma }_1,\mathrm{\Gamma }_2,\mathrm{},\mathrm{\Gamma }_n\subset \mathrm{\Gamma }`$, and the macroscopic system is confined within one of them. The distribution that describes the system is: $$\rho _i(x)=\frac{e^{-\beta ℋ(x)}}{Z_i}𝒳_{\mathrm{\Gamma }_i}(x)$$ (5) where $`𝒳_A(x)`$ is the indicator function of the set $`A\subset \mathrm{\Gamma }`$, i.e., $`𝒳_A(x)=1`$ if $`x\in A`$ and $`𝒳_A(x)=0`$ if $`x\notin A`$, and $`Z_i`$ is the partition function restricted to $`\mathrm{\Gamma }_i`$. The distributions $`\rho _i(x)`$, called macroscopic phases, have fewer symmetries than the Hamiltonian. The partition of the phase space, called the coexistence of macroscopic phases, occurs for some values of the temperature and the parameters of the Hamiltonian. Finally, which of the macroscopic phases is chosen depends on the past of the system and/or on thermal fluctuations. The globally coupled Ising model is one of the simplest systems exhibiting coexistence of macroscopic phases huang . Its Hamiltonian is: $$ℋ(\{s_i\};J,B)=-\frac{J}{N}\underset{i=1}{\overset{N-1}{}}\underset{j=i+1}{\overset{N}{}}s_is_j-B\underset{i=1}{\overset{N}{}}s_i$$ (6) where the spins take values $`s_i=\pm 1`$, with $`i=1,2,\mathrm{},N`$. It depends on two parameters: the coupling $`J`$ between spins and the external field $`B`$. It is called globally coupled because every spin interacts with all the others. The system exhibits coexistence of two macroscopic phases when $`B=0`$ and $`J/kT>1`$. One of the phases is restricted to $`\mathrm{\Gamma }_+`$, the set of configurations $`\{s_i\}`$ with positive global magnetization $`M\equiv \sum _is_i>0`$, and the other is restricted to $`\mathrm{\Gamma }_-`$, the set of configurations with negative magnetization. Each phase breaks the symmetry $`\{s_i\}\to \{-s_i\}`$ that the Hamiltonian possesses for $`B=0`$.
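The spontaneous choice of a macroscopic phase is easy to observe numerically. The following Metropolis Monte Carlo sketch (ours, purely illustrative, with arbitrary parameter values) ramps the coupling $`J`$ of the Hamiltonian (6) through the critical point at fixed temperature and $`B=0`$, the protocol used for the processes of Section IV below; the final magnetization lands near one of the two symmetric values, with the sign set by the thermal fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, kT, B = 200, 1.0, 0.0          # illustrative sizes; kT sets the energy unit
s = rng.choice([-1, 1], size=N)   # random initial configuration
M = s.sum()

# Ramp the coupling J slowly through the critical point J_c = kT at B = 0.
for J in np.linspace(0.0, 2.0 * kT, 400):
    for _ in range(N):            # one Metropolis sweep per value of J
        k = rng.integers(N)
        # Cost of flipping spin k in the globally coupled Hamiltonian (6):
        # dE = 2 s_k [ (J/N)(M - s_k) + B ]
        dE = 2.0 * s[k] * ((J / N) * (M - s[k]) + B)
        if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
            M -= 2 * s[k]
            s[k] = -s[k]

print(f"final m = M/N = {M / N:+.2f}")   # close to +m* or -m*, never 0
```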
When the temperature is lowered, keeping $`B=0`$, from an initial value above the critical temperature $`T_c\equiv J/k`$, a second order phase transition occurs at $`T=T_c`$. Below $`T_c`$ the system is in one of the two macroscopic phases. None of the phases is favored along the process, since $`B=0`$. Therefore, the system chooses the macroscopic phase at random or, more precisely, the choice is induced by thermal fluctuations. The globally coupled Ising model also exhibits first order phase transitions when the field crosses $`B=0`$ below $`T_c`$. The external field breaks the symmetry $`\{s_i\}\to \{-s_i\}`$ of the Hamiltonian and, if the coexistence region is reached by decreasing a positive field, the macroscopic phase is the one with positive magnetization. This can be called forced or non-spontaneous symmetry breaking. To reproduce in the Ising model the two processes, $`A`$ and $`C`$, discussed in Section II for the Szilard engine, we need to induce a spontaneous symmetry breaking at constant temperature (remember that processes $`A`$ and $`C`$ in the Szilard engine are isothermal). This can be achieved if we tune the coupling $`J`$ at constant temperature $`T`$. The spontaneous symmetry breaking then occurs at a critical coupling $`J_c=kT`$, and for $`B=0`$ and $`J>J_c`$ the system exhibits coexistence of phases. ## IV Two processes in the Ising model Consider the following two processes on the plane $`(J,B)`$ (see Figure 3): * Process $`A`$: starting at $`(0,0)`$, the field is increased up to $`B_f>0`$, then the coupling is increased up to $`J_f>J_c`$, then the field is decreased down to zero. * Process $`C`$: starting at $`(0,0)`$, the coupling is increased up to $`J_f>J_c`$, keeping $`B=0`$. The two processes are quasistatic in the following sense: they are slow enough for the system to relax within each possible macroscopic phase, but fast enough for the system to remain in one of the two phases. Applying to process $`A`$ the formalism described in the Appendix, one finds the following energetics, up to order $`kT`$ parr : $$W_A=-ℱ(T,J_f,0)+ℱ(T,0,0)-kT\mathrm{ln}2$$ (7) where $`ℱ(T,J,B)=-kT\mathrm{ln}Z(\beta ,J,B)`$ and $`Z(\beta ;J,B)=\sum e^{-\beta ℋ}`$ is the partition function of the system. $`Z(\beta ;J,B)`$ and $`ℱ(T,J,B)`$ must be considered here as mere mathematical definitions and we should refrain from attributing them any physical meaning at this stage of the discussion. For process $`C`$ one has: $$W_C=-ℱ(T,J_f,0)+ℱ(T,0,0).$$ (8) Therefore, $`W_A-W_C=-kT\mathrm{ln}2`$, i.e., the external agent has to do more work to complete process $`A`$ than to complete $`C`$, exactly as in the Szilard engine. The whole discussion on the Szilard engine in Sections I and II can be applied to the Ising model. For instance, one can design a cyclic engine as $`CA^{-1}`$. Let us first analyze the inverse processes $`A^{-1}`$ and $`C^{-1}`$ in detail. The inversion of $`C`$ does not present any difficulty. The energetics of $`C^{-1}`$ is simply $`W_{C^{-1}}=-W_C`$ and $`Q_{C^{-1}}=-Q_C`$. On the other hand, if we try to invert $`A`$, the sign of the field must be the same as the sign of the initial magnetization of the system. If we start to increase a positive field on a system with negative magnetization, the system becomes metastable, it runs along one of the branches of a hysteresis cycle and eventually relaxes irreversibly to the stable state at some value of the field $`B`$ (see Fig. 4). The most general case is when we have an ensemble of systems.
If initially a fraction $`\alpha `$ of them have negative magnetization, the energetics of $`A^{-1}`$ is given by: $$W_{A^{-1}}=-W_A-\alpha \frac{A_{hys}}{2}$$ (9) where $`A_{hys}`$ is the area of the hysteresis cycle at $`J=J_f`$, as shown in Fig. 4. The hysteresis phenomenon is not present in the Szilard engine. However, it has similar consequences as exerting the pressure in the wrong direction along the expansion, since in both cases the system evolves irreversibly doing less work. Consider now the equivalent of the Szilard engine, i.e., the cycle $`CA^{-1}`$ on an ensemble of Ising models. Its energetics (per system) is immediately obtained from Eqs. (7-9): $$W_{CA^{-1}}=W_C+W_{A^{-1}}=kT\mathrm{ln}2-\alpha \frac{A_{hys}}{2}.$$ (10) where $`\alpha `$ is the fraction of systems with magnetization opposite to the sign of the field applied in $`A^{-1}`$. There are two consequences of this expression. First, if instead of an ensemble we take a single system and measure its magnetization after $`C`$ to decide the sign of the field, then $`\alpha =0`$ and $`W_{CA^{-1}}=kT\mathrm{ln}2>0`$, i.e., the system is extracting energy from the thermal bath and converting it into work. We recover the same result as in the Szilard engine, but now with a genuine macroscopic system. Thus, we have a Maxwell demon with the important novelty that he needs to measure only a macroscopic quantity. Second, for an ensemble, $`\alpha =1/2`$, and we can still beat the Second Law unless: $$A_{hys}\ge 4kT\mathrm{ln}2.$$ (11) This inequality is a byproduct of this theory and clarifying its origin is one of the open problems of the present work. ## V Entropy and macroscopic uncertainty The above discussion has focused on energy. I will explore in this Section the consequences of the previous results on the definition of entropy. The change of entropy in the thermal bath along a process is given by $`\mathrm{\Delta }S_{bath}=-Q/T`$, whereas the entropy of the external agent is constant because its interaction with the system is purely mechanical. Then the change of the total entropy is: $$\mathrm{\Delta }S_{total}=-\frac{Q}{T}+\mathrm{\Delta }S_{syst}.$$ (12) The Second Law of Thermodynamics tells us that, if a process is reversible, $`\mathrm{\Delta }S_{total}=0`$, and, if it is irreversible, $`\mathrm{\Delta }S_{total}>0`$. In particular, for a cyclic process, $`\mathrm{\Delta }S_{syst}=0`$, hence $`Q\le 0`$. This is the Kelvin-Planck statement of the Second Law: it is not possible to extract energy from a single thermal bath in a cyclic process. However, Eq. (12) and the Second Law lead to contradictions when applied to processes $`A`$ and $`C`$. For instance, $`\mathrm{\Delta }S_{total}^{CC^{-1}}=\mathrm{\Delta }S_{total}^{AA^{-1}}=0`$. Therefore, $`AA^{-1}`$ and $`CC^{-1}`$ are reversible and so are their components, $`A`$, $`A^{-1}`$, $`C`$, and $`C^{-1}`$. On the other hand $`\mathrm{\Delta }S_{total}^{AC^{-1}}=k\mathrm{ln}2`$, hence $`AC^{-1}`$ is irreversible. Moreover, if $`A`$ and $`C`$ are reversible, we obtain $`\mathrm{\Delta }S_{syst}^C=\mathrm{\Delta }S_{syst}^A+k\mathrm{ln}2`$. These contradictions are usually explained with the following definition for the thermodynamic entropy of the system: $$S_{syst}^{(ens)}=-k\langle \mathrm{ln}\rho _{ens}\rangle $$ (13) where $`\rho _{ens}`$ is the probability distribution describing an ensemble of systems. After process $`C`$, $`\rho _{ens}=(\rho _++\rho _-)/2`$ where $`\rho _+`$ and $`\rho _-`$ are the probability distributions of the two macroscopic phases (see Section III). On the other hand, after $`A`$, $`\rho _{ens}=\rho _+`$.
Then, $`S_{syst}^{(ens)}`$ is $`k\mathrm{ln}2`$ bigger after $`C`$ than after $`A`$. This picture is, however, rather unsatisfactory if we deal with single systems instead of with ensembles, since $`\rho _{ens}`$ becomes a subjective quantity. For instance, the physical state of an Ising model after process $`A`$ is the same as after $`C`$ if the final magnetization is positive. The only difference between these two situations is that we ignore the magnetization after $`C`$. Then $`S_{syst}^{(ens)}`$, as defined in Eq. (13), is a subjective quantity for single systems. Mathematically, this can be expressed as: $$S_{syst}^{(ens)}=-k\langle \mathrm{ln}\rho _{single}\rangle +H.$$ (14) Here, $`\rho _{single}`$ is the invariant measure that gives the temporal average of any magnitude, and it is a fully objective distribution for a single system ergo . $`H`$ is the ignorance or uncertainty that we have about the macrostate of the system. It is measured using the Shannon formula: $`H=-k\sum _ip_i\mathrm{ln}p_i`$ where $`p_i`$ is the probability of having an instance $`i`$ (in the Szilard and Ising case, after $`C`$, $`H=1\text{bit}=k\mathrm{ln}2`$). Moreover, in this interpretation not only entropy but also the concept of reversibility is subjective. Consider $`C^{-1}`$ on a single system: it is reversible if we do not know the initial macroscopic magnetization and it is irreversible if we do know it. I propose a simpler interpretation of the above results. In this new interpretation, entropy is an objective magnitude for single systems, but we are forced to admit that it decreases along some processes, in contradiction with some formulations of the Second Law. However, the main limitation imposed by the Second Law, namely, the Kelvin-Planck statement, remains valid, since these processes cannot be used to construct cycles. Ishioka and Fuchikami, in these proceedings ishioka , have reached similar conclusions. The assumptions for this interpretation are the following: 1. The thermodynamic entropy of a system is given by: $$S_{syst}\equiv -k\langle \mathrm{ln}\rho _{single}\rangle .$$ (15) 2. If an external agent induces, in a quasistatic and isothermal way, a spontaneous symmetry breaking with $`n`$ phases, the total entropy (the sum of the entropies of the system, thermal bath, and external agent) decreases by $`k\mathrm{ln}n`$. I will call these processes anti-irreversible (in ishioka the term partitioning processes is used instead) and they correspond to the creation of macroscopic uncertainty. 3. Along the inverse of an anti-irreversible process, the total entropy increases by $`k\mathrm{ln}n`$. I will call these processes quasi-irreversible or simply irreversible. Process $`C`$ is anti-irreversible and $`C^{-1}`$ is quasi-irreversible. The reason for the names is the following: $`C^{-1}`$ cannot be truly reversed because, after $`C^{-1}C`$, the initial magnetization could be opposite to the final one. Processes $`A`$ and $`A^{-1}`$ are reversible in the standard sense, i.e., the total entropy does not change. The reader can check that every combination of the processes $`A`$, $`C`$ and their inversions is explained by the above three rules. Moreover, entropy and reversibility become fully objective concepts. The measurement process can also be explained with this new Thermodynamics.
Consider, as a model of a system and a measurement device, the Hamiltonian: $$ℋ(\{s_i^{(1)}\},\{s_i^{(2)}\};J_1,J_2,J_{12})=-\frac{J_1}{N}\underset{j>i}{\overset{N}{}}s_i^{(1)}s_j^{(1)}-\frac{J_2}{N}\underset{j>i}{\overset{N}{}}s_i^{(2)}s_j^{(2)}-\frac{J_{12}}{N}\underset{i,j=1}{\overset{N}{}}s_i^{(1)}s_j^{(2)}$$ which corresponds to two coupled Ising models, 1 (the system) and 2 (the measurement device or ‘pointer’). The following table shows the behavior of the total entropy, as defined by (12) and (15), and of the macroscopic uncertainty $`H`$, along two isothermal and quasistatic processes:

| | Step | $`S_{total}-S_{total}^0`$ | $`H`$ | | | Step | $`S_{total}-S_{total}^0`$ | $`H`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1) | $`J_1:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit | | | $`J_1:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit |
| 2) | $`J_{12}:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit | | | $`J_{12}:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit |
| 3) | $`J_2:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit | | | $`J_2:0\to J_f`$ | $`-k\mathrm{ln}2`$ | 1 bit |
| 4) | $`J_1:J_f\to 0`$ | $`-k\mathrm{ln}2`$ | 1 bit | | | $`J_{12}:J_f\to 0`$ | $`-k\mathrm{ln}2`$ | 1 bit |
| 5) | $`J_{12}:J_f\to 0`$ | $`-k\mathrm{ln}2`$ | 1 bit | | | $`J_1:J_f\to 0`$ | $`0`$ | 1 bit |
| 6) | $`J_2:J_f\to 0`$ | $`0`$ | $`0`$ | | | $`J_2:J_f\to 0`$ | $`+k\mathrm{ln}2`$ | $`0`$ |

Both processes involve a spontaneous symmetry breaking (step 1), copying the outcome (steps 2-3) and erasure of the copy and the original (steps 4-6). The first process (left column) can be interpreted as a reversible measurement. Measurement can be defined in a rather general way as any procedure which allows one to drive a system from the region of coexistence of phases to a region of non-coexistence along a reversible process, i.e., avoiding the critical point as well as the possibility of hysteresis. This is done in step 4 of the first column, where $`J_1`$ is decreased down to zero along a reversible process. As a result, the total entropy is lowered by $`k\mathrm{ln}2`$ in the first five steps. Notice also that, to drive the whole system 1+2 to its initial state, we have to reset the measurement device 2 by crossing again a critical point, i.e., along a quasi-irreversible process (step 6). We thus recover Bennett’s interpretation of the Szilard engine bennet . I have included the other process (right column in the table) to show how subtle the measurement and the erasure processes can be. If subsystem 1 is uncoupled before being driven to its initial state, then it crosses a critical point and the entropy increases. Step 5 in the right column is quasi-irreversible because initially the magnetizations of 1 and 2 have the same sign and, if step 5 were reversed, the final magnetizations would be uncorrelated. A similar effect of the correlation between the particle and the measurement device in the Szilard engine was pointed out by Fahn in Ref. fahn . ## VI Conclusions and open problems Two objections can be raised against the Thermodynamics proposed in the last Section. The first is that energy is an extensive property, i.e., of order $`NkT`$, and terms of order $`kT\mathrm{ln}2`$ are negligible and even much smaller than the energy fluctuations. This objection applies to any Maxwell demon, but it is not sufficient to exorcize it. The reason is that the demon can repeat the cycle as many times as he wants, converting a macroscopic amount of heat into work. The second objection is that the increase of entropy can be derived from non-equilibrium theories, such as the Fokker-Planck formalism.
If $`𝐪`$ are the (overdamped) degrees of freedom of a system, the probability distribution obeys the Fokker-Planck equation (FPE): $$\partial _t\rho (𝐪,t)=-\nabla \cdot 𝐉(𝐪,t)$$ (16) where the current is $`𝐉(𝐪,t)=-[\nabla \mu (𝐪,t)]\rho (𝐪,t)`$ and the chemical potential is defined as $`\mu (𝐪,t)\equiv V(𝐪,t)+kT\mathrm{ln}\rho (𝐪,t)`$. From these equations one can derive the following identity magszil : $$-k\partial _t\int 𝑑𝐪\,\rho (𝐪,t)\mathrm{ln}\rho (𝐪,t)=\frac{1}{T}\int 𝑑𝐪\,V(𝐪,t)\,\partial _t\rho (𝐪,t)+\frac{1}{T}\int 𝑑𝐪\,\frac{|𝐉(𝐪,t)|^2}{\rho (𝐪,t)}=\frac{\dot{Q}}{T}+\frac{1}{T}\int 𝑑𝐪\,\frac{|𝐉(𝐪,t)|^2}{\rho (𝐪,t)}$$ (17) If the left-hand side of this equation is interpreted as $`\dot{S}_{syst}`$, the change of the entropy of the system per unit of time, then the total change of entropy, $`\dot{S}_{total}=-\dot{Q}/T+\dot{S}_{syst}`$, is always positive. A similar result can be obtained for underdamped degrees of freedom shizume . How then have we obtained $`\dot{S}_{total}<0`$ for some processes involving phase transitions? The answer is that the distribution that appears in the FPE (16) is $`\rho _{ens}`$ and not $`\rho _{single}`$. Then, the FPE is not appropriate to describe single macroscopic systems. One of the open problems of the present work is to characterize $`\rho _{single}`$ and derive an evolution equation similar to the FPE. Other open problems are: (a) analyze the role of hysteresis and the origin of inequality (11); (b) extend the above discussion to the breaking of a continuous symmetry, where an infinite number of macroscopic phases coexist; (c) include the external agent in the Hamiltonian as a set of macroscopic degrees of freedom; and (d) explore the implications of the decrease of entropy along anti-irreversible processes, especially in cosmology.

## Appendix

Consider a system whose Hamiltonian $`\mathcal{H}(x;𝐑)`$ depends on a set of external parameters collected in a vector $`𝐑`$. We are interested in the energetics of a process along which the system is in contact with a thermal bath at temperature $`T`$ and the parameters are changed by an external agent as $`𝐑(t)`$ with $`t\in [0,𝒯]`$. The expressions for work and heat per unit of time along this process are shizume ; denbigh : $$\dot{Q}=\int _\mathrm{\Gamma }𝑑x\,\mathcal{H}(x;𝐑(t))\frac{\partial \rho (x;t)}{\partial t};\dot{W}=\int _\mathrm{\Gamma }𝑑x\,\rho (x;t)\frac{\partial \mathcal{H}(x;𝐑(t))}{\partial t}$$ (18) where $`\mathrm{\Gamma }`$ is the phase space of the system and $`\rho (x;t)`$ the probability density for the state $`x`$. Heat and work, as given by Eq. (18), obey the First Law of Thermodynamics: $`\dot{𝒰}=\dot{Q}+\dot{W}`$. If the process is quasistatic, $`𝒯\to \infty `$, the probability density at time $`t`$ depends only on the value of the external parameters at $`t`$, i.e., $`\rho (x;t)=\rho (x;𝐑(t))`$. In this case, the heat and the work in the whole process are given by: $$Q=\int _A\delta Q(𝐑);W=\int _A\delta W(𝐑)$$ (19) where $`A`$ is the path that $`𝐑(t)`$ describes along the process and the infinitesimal work and heat are given by: $$\delta Q(𝐑)=\int _\mathrm{\Gamma }𝑑x\,\mathcal{H}(x;𝐑)\frac{\partial \rho (x;𝐑)}{\partial 𝐑}\cdot 𝑑𝐑;\delta W(𝐑)=\int _\mathrm{\Gamma }𝑑x\,\rho (x;𝐑)\frac{\partial \mathcal{H}(x;𝐑)}{\partial 𝐑}\cdot 𝑑𝐑.$$ (20) The most familiar implementation of the above expressions is obtained when the state of the system is the Gibbs distribution, $`\rho _T(x;𝐑)\equiv e^{-\beta \mathcal{H}(x;𝐑)}/Z(\beta ,𝐑)`$. For this particular case, Eqs.
(20) reduce to $$\delta Q(𝐑)=T\frac{\partial S(T,𝐑)}{\partial 𝐑}\cdot d𝐑;\delta W(𝐑)=\frac{\partial \mathcal{F}(T,𝐑)}{\partial 𝐑}\cdot d𝐑$$ (21) where $`S(T,𝐑)=-k\int _\mathrm{\Gamma }𝑑x\,\rho _T(x;𝐑)\mathrm{ln}[\rho _T(x;𝐑)]`$ and $`\mathcal{F}(T,𝐑)=-kT\mathrm{ln}Z(\beta ,𝐑)`$ are, respectively, the entropy and the free energy of the system. Although processes $`A`$ and $`C`$ considered in the text are isothermal and quasistatic, the state $`\rho (x;𝐑)`$ is not equal to $`\rho _T(x;𝐑)`$ in the region of coexistence of macroscopic phases. Consequently, their energetics, up to terms of order $`kT`$, differs from the one prescribed by standard equilibrium Thermodynamics. To get Eqs. (2), I have used Eqs. (19) and (20) with the following prescription for $`\rho (\{s_i\};J,B)`$ along process $`C`$: $`\rho (\{s_i\};J,0)=\rho _T(\{s_i\};J,0)`$ if $`J<J_c`$ and $`\rho (\{s_i\};J,0)=\rho _+(\{s_i\};J,0)`$ if $`J\geq J_c`$, where $`\rho _+`$ is $`\rho _T`$ restricted to $`\mathrm{\Gamma }_+`$, the set of configurations with positive magnetization. The precise location of the replacement of $`\rho _T`$ by $`\rho _+`$ does not affect the results. In fact, the energetics is the same as if calculated using $`\rho _T`$, for symmetry reasons parr . Along process $`A`$, the state is given by: $`\rho _T(\{s_i\};J,B)`$ if $`J=0`$ or $`B=B_f`$ (first two steps of $`A`$) and by $`\rho _+(\{s_i\};J,B)`$ if $`J=J_f`$ (last step). Again the energetics, up to order $`kT`$, does not depend on where precisely the system changes from $`\rho _T`$ to $`\rho _+`$. The above prescription has been chosen for simplicity. The replacement of $`\rho _T`$ by $`\rho _+`$ is only significant at the end of the last step, i.e., when the system is close to the region of coexistence. $`W_A`$ can be calculated by using (21) along the first two steps and using the partition function restricted to $`\mathrm{\Gamma }_+`$ along the third step. It can be rigorously proven that the energetics is the one given by (2) plus terms of order $`kTe^{-\gamma N}`$, where $`\gamma `$ is positive and of order one if $`B_f`$ and $`J_f`$ are large enough. Details of the calculations will be given elsewhere parr .
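As a consistency check of Eqs. (19)-(21) (an illustration of mine, not part of the original appendix), the following sketch verifies numerically that, for a system which remains in the Gibbs state $`\rho _T`$, the quasistatic work accumulated from Eq. (20) equals the free-energy difference $`\mathrm{\Delta }\mathcal{F}`$. The Hamiltonian $`\mathcal{H}(x;R)=Rx^2/2`$, with the stiffness $`R`$ as external parameter, is an arbitrary choice made purely for the demonstration.

```python
import numpy as np

kT = 1.0
x = np.linspace(-20.0, 20.0, 4001)   # configurational "phase space" grid

def H(x, R):                 # Hamiltonian: quadratic potential V(x) = R x^2 / 2
    return 0.5 * R * x**2

def dH_dR(x, R):             # dH/dR, needed in Eq. (20)
    return 0.5 * x**2

def rho_T(x, R):             # normalized Gibbs distribution at temperature T
    w = np.exp(-H(x, R) / kT)
    return w / np.trapz(w, x)

def free_energy(R):          # F = -kT ln Z (configurational part)
    return -kT * np.log(np.trapz(np.exp(-H(x, R) / kT), x))

# Quasistatic path R: 1 -> 4; accumulate W = sum over dR of <dH/dR> (Eq. 20).
Rs = np.linspace(1.0, 4.0, 2000)
W = 0.0
for R0, R1 in zip(Rs[:-1], Rs[1:]):
    Rm = 0.5 * (R0 + R1)
    W += np.trapz(rho_T(x, Rm) * dH_dR(x, Rm), x) * (R1 - R0)

# Both numbers agree (analytically, (kT/2) ln 4 = 0.693...).
print(W, free_energy(4.0) - free_energy(1.0))
```

This is exactly the standard-Thermodynamics behaviour from which, as argued above, processes that traverse a region of phase coexistence deviate by terms of order $`kT`$.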
# Figure 4

Trailed spectrograms (top panels), Doppler maps (middle panels) and computed data (bottom panels) are plotted for $`\mathrm{H}\beta `$, He ii $`\lambda `$4686 Å, and He i $`\lambda `$4472 Å for both nights. The computed data were reconstructed from the Doppler maps. The letters A, B and C mark flares visible in both $`\mathrm{H}\beta `$ and He ii $`\lambda `$4686 Å; these lead to some artifacts (diagonal stripes) in the maps.
BUTP-99/25

# Spectrum and Decays of Hadronic Atoms

J. Gasser (Institute for Theoretical Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland), V. E. Lyubovitskij<sup>1</sup><sup>1</sup>1Present address: Institute of Theoretical Physics, University of Tübingen, Auf der Morgenstelle 14, D-72076 Tübingen, Germany (Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia, and Department of Physics, Tomsk State University, 634050 Tomsk, Russia) and A. Rusetsky (Institute for Theoretical Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland; Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia; and HEPI, Tbilisi State University, 380086 Tbilisi, Georgia)

## Abstract

Using non-relativistic effective Lagrangian techniques, we analyze the hadronic decay of the $`\pi ^+\pi ^-`$ atom and the strong energy-level shift of pionic hydrogen in the ground state. We derive general formulae for the width and level shift, valid at next-to-leading order in isospin breaking. The result is expressed in terms of hadronic threshold amplitudes that include isospin-breaking effects. In order to extract isospin symmetric scattering lengths from the data, we invoke chiral perturbation theory, which allows one to relate the scattering lengths to the threshold amplitudes. Recent years have seen a growing interest in the study of hadronic atoms. At CERN, the DIRAC collaboration aims to measure the $`\pi ^+\pi ^-`$ atom lifetime to $`10\%`$ accuracy. This would allow one to determine the difference $`a_0-a_2`$ of $`\pi \pi `$ scattering lengths with $`5\%`$ precision. This measurement provides a crucial test for the large/small condensate scenario in QCD: should it turn out that the quantity $`a_0-a_2`$ is different from the value predicted in standard ChPT , one has to conclude that spontaneous chiral symmetry breaking in QCD proceeds differently from the widely accepted picture . In the experiment performed at PSI , one has measured the strong energy-level shift and the total decay width of the $`1s`$ state of pionic hydrogen, as well as the $`1s`$ shift of pionic deuterium. Using the technique described in Ref. , these measurements yield isospin symmetric $`\pi N`$ scattering lengths to an accuracy which is unique for hadron physics: $`a_{0+}^+=(1.6\pm 1.3)\times 10^{-3}M_{\pi ^+}^{-1}`$ and $`a_{0+}^-=(86.8\pm 1.4)\times 10^{-3}M_{\pi ^+}^{-1}`$. The scattering length $`a_{0+}^-`$ may be used as an input in the Goldberger-Miyazawa-Oehme sum rule to determine the $`\pi NN`$ coupling constant . A new experiment on pionic hydrogen has recently been approved. It will allow one to measure the decay $`A_{\pi ^-p}\to \pi ^0n`$ to much higher accuracy and thus enable one, in principle, to determine the $`\pi N`$ scattering lengths from data on pionic hydrogen alone. This might vastly reduce the model-dependent uncertainties that come from the analysis of the three-body problem in $`A_{\pi ^-d}`$. Finally, the DEAR collaboration at the DA$`\mathrm{\Phi }`$NE facility (Frascati) plans to measure the energy level shift and lifetime of the $`1s`$ state in $`K^-p`$ and $`K^-d`$ atoms - with considerably higher precision than in the previous experiment carried out at KEK for $`K^-p`$ atoms. It is expected that this will result in a precise determination of the $`I=0,1`$ $`S`$-wave scattering lengths - although, of course, one will again be faced with the three-body problem already mentioned.
It will be a challenge for theorists to extract from this new information on the $`\overline{K}N`$ amplitude at threshold a more precise value of e.g. the isoscalar kaon-sigma term and of the strangeness content of the nucleon . We now turn to theoretical investigations of hadronic atoms. At leading order in isospin breaking, the energy-level shift and the decay width of these atoms can be expressed in terms of the strong hadronic scattering lengths through the well-known formulae by Deser et al. . More precisely, these formulae relate the ground state level shift - induced by the strong interaction - and its partial decay width into neutral hadrons (e.g., $`A_{\pi ^+\pi ^-}\to \pi ^0\pi ^0`$, $`A_{\pi ^-p}\to \pi ^0n`$) to the corresponding isospin combinations of strong scattering lengths, $$\mathrm{\Delta }E_{\mathrm{str}}\propto \mathrm{\Psi }_0^2\,\mathrm{Re}\,a_{cc},\mathrm{\Gamma }_{c0}\propto (\mathrm{phase\ space})\times \mathrm{\Psi }_0^2|a_{c0}|^2.$$ (1) Here, $`\mathrm{\Psi }_0`$ denotes the value of the Coulomb wave function at the origin, and $`a_{cc}`$, $`a_{c0}`$ stand for the relevant isospin combinations of strong scattering lengths. We have used the notation ”$`c`$” for ”charged” (e.g., $`\pi ^+\pi ^-`$, $`\pi ^-p`$) and ”$`0`$” for ”neutral” (e.g., $`\pi ^0\pi ^0`$, $`\pi ^0n`$) channels. The accuracy of these leading-order formulae is however not sufficient to fully exploit existing and forthcoming high-precision data on hadronic atoms. Indeed, for that purpose, one has to evaluate isospin-breaking corrections at next-to-leading order. The aim of the present talk is to show how this can be achieved. Recently, using a non-relativistic effective Lagrangian framework, a general expression for the decay width $`\mathrm{\Gamma }_{A_{2\pi }\to \pi ^0\pi ^0}`$ of the $`1s`$ state of the $`\pi ^+\pi ^-`$ atom was obtained at next-to-leading order in isospin breaking . We denote the fine-structure constant $`\alpha `$ and the quark mass difference squared $`(m_d-m_u)^2`$ by the common symbol $`\delta `$. Then, the decay width is written in the following form<sup>2</sup><sup>2</sup>2 We use throughout the Landau symbols $`O(x)`$ [$`o(x)`$] for quantities that vanish like $`x`$ [faster than $`x`$] when $`x`$ tends to zero. Furthermore, it is understood that this holds modulo logarithmic terms, i.e. we write also $`O(x)`$ for $`x\mathrm{ln}x`$., $$\mathrm{\Gamma }_{A_{2\pi }\to \pi ^0\pi ^0}=\frac{2}{9}\alpha ^3p^{*}𝒜_{\pi \pi }^2(1+K_{\pi \pi }),𝒜_{\pi \pi }=a_0-a_2+O(\delta ),$$ $$K_{\pi \pi }=\frac{\mathrm{\Delta }M_\pi ^2}{9M_{\pi ^+}^2}(a_0+2a_2)^2-\frac{2\alpha }{3}(\mathrm{ln}\alpha -1)(2a_0+a_2)+o(\delta ).$$ (2) Here $`p^{*}=(M_{\pi ^+}^2-M_{\pi ^0}^2-\frac{1}{4}M_{\pi ^+}^2\alpha ^2)^{1/2}`$, $`a_I`$ $`(I=0,2)`$ denote the strong $`\pi \pi `$ scattering lengths in the channel with total isospin $`I`$, and the quantity $`𝒜_{\pi \pi }`$ is calculated as follows . One calculates the relativistic amplitude for the process $`\pi ^+\pi ^-\to \pi ^0\pi ^0`$ at $`O(\delta )`$ in the normalization chosen so that at $`O(\delta ^0)`$ the amplitude at threshold coincides with the difference $`a_0-a_2`$ of (dimensionless) $`S`$-wave $`\pi \pi `$ scattering lengths. Due to the presence of virtual photons, the amplitude is multiplied by an overall Coulomb phase that is removed.
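As a numerical aside (an illustration of mine, not part of the original text; the inputs $`a_0=0.220`$ and $`a_2=-0.0444`$ are the standard ChPT values, used here only for orientation), Eq. (2) can be evaluated directly and yields a pionium lifetime of order $`3\times 10^{-15}`$ s - the scale that the DIRAC experiment is designed to resolve.

```python
import math

# Inputs: PDG pion masses in MeV; a0, a2 are the dimensionless S-wave
# pi-pi scattering lengths (standard ChPT values, illustrative only).
alpha = 1.0 / 137.036
M_pip, M_pi0 = 139.570, 134.977
a0, a2 = 0.220, -0.0444
hbar = 6.582e-22                      # MeV * s

# CM momentum of the neutral pion pair produced in the decay
p_star = math.sqrt(M_pip**2 - M_pi0**2 - 0.25 * M_pip**2 * alpha**2)

# Next-to-leading-order correction K_pipi of Eq. (2)
Delta_Mpi2 = M_pip**2 - M_pi0**2
K = (Delta_Mpi2 / (9 * M_pip**2)) * (a0 + 2 * a2)**2 \
    - (2 * alpha / 3) * (math.log(alpha) - 1) * (2 * a0 + a2)

A = a0 - a2                           # leading-order threshold amplitude
Gamma = (2.0 / 9.0) * alpha**3 * p_star * A**2 * (1 + K)   # width in MeV

print("p* = %.1f MeV, K = %.4f" % (p_star, K))
print("Gamma = %.2e MeV, lifetime = %.2e s" % (Gamma, hbar / Gamma))
```

With these inputs the correction $`K_{\pi \pi }`$ comes out at the percent level, which is why, as stated above, the width essentially measures $`𝒜_{\pi \pi }`$ directly.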
Returning to the construction of $`𝒜_{\pi \pi }`$: the real part of the remainder contains terms that diverge like $`|𝐩|^{-1}`$ and $`\mathrm{ln}(2|𝐩|/M_{\pi ^+})`$ at $`|𝐩|\to 0`$ ($`𝐩`$ denotes the relative 3-momentum of charged pion pairs). The quantity $`𝒜_{\pi \pi }`$ is obtained by subtracting these divergent pieces, and by then evaluating the remainder at $`𝐩=0`$. We shall refer to $`𝒜_{\pi \pi }`$ as the physical scattering amplitude at threshold. A few remarks are in order. As is seen explicitly from Eq. (2), one can directly extract the value of $`𝒜_{\pi \pi }`$ from the measurement of the decay width, because the correction $`K_{\pi \pi }`$ is very small and the error introduced by it is negligible. We emphasize that in the derivation of Eq. (2), chiral expansions have not been used. On the other hand, if one further aims to extract strong scattering lengths from data, one may invoke chiral perturbation theory (ChPT) and relate the quantities $`𝒜_{\pi \pi }`$ and $`a_0-a_2`$ order by order in the chiral expansion. This requires the evaluation of isospin-breaking corrections to the scattering amplitude. The corrections to the hadronic atom characteristics, evaluated in this manner, contain, in general, contributions which have not been taken into account so far within the potential scattering approach to the same problem . An obvious candidate for these contributions is the effect coming from the direct quark-photon coupling that is encoded in the so-called ”electromagnetic” low-energy constants (LEC’s) in ChPT. A second effect is related to the convention-dependent definition of the isospin-symmetric world against which the isospin-breaking corrections are calculated. We adopt the widely used convention that the masses of the isospin multiplets $`(\pi ^\pm ,\pi ^0)`$ and $`(p,n)`$ in this world coincide with the masses of the charged particles in the real world. This definition induces a contribution to the isospin-breaking corrections in the level shifts and decay widths. We shall display below both corrections explicitly in the case of the $`\pi ^-p`$ energy-level shift, where these effects emerge already at tree level. The investigation of the $`\pi ^-p`$ atom is very similar to the procedure used in the description of the $`\pi ^+\pi ^-`$ atom . In the following, we restrict ourselves to the case of the strong energy-level shift of the $`\pi ^-p`$ atom in the ground state. Because the proton-neutron mass difference contains terms linear in $`m_d-m_u`$, we count $`\alpha `$ and $`m_d-m_u`$ as quantities of the same order, and denote them by the common symbol $`\delta ^{\prime }`$. [Since this counting is merely a matter of convenience, our previous results on the $`\pi ^+\pi ^-`$ atom remain of course unaltered.] Further, for the energy shift of hadronic atoms, one can no longer neglect the electromagnetic contributions coming from transverse photons, as was done in the case of the width of the $`\pi ^+\pi ^-`$ atom. The reason for this can easily be seen from counting powers of $`\alpha `$ in the energy-level shift. The binding energy of the atom starts at $`O(\alpha ^2)`$ (nonrelativistic value $`E_{\mathrm{NR}}=-\frac{1}{2}\mu _c\alpha ^2`$, where $`\mu _c`$ denotes the reduced mass of the $`\pi ^-p`$ system), and the corresponding QED corrections start at $`O(\alpha ^4)`$. According to Eq.
(1), the leading-order strong energy-level shift is O($`\alpha ^3`$), while the next-to-leading order corrections start at $`O(\alpha ^4)`$ and should therefore be treated on the same footing as the QED corrections<sup>3</sup><sup>3</sup>3 There is one important exception to this rule. Though the vacuum polarization correction starts at $`O(\alpha ^5)`$, it is amplified by a large factor $`(\mu _c/m_e)^2`$, where $`m_e`$ denotes the electron mass. Since $`\alpha \mu _c/m_e\sim 1`$, this contribution is numerically as important as the leading-order strong contribution (see ). The graph responsible for this contribution can be, however, easily singled out and the contribution from it merely added to the final result.. QED corrections, however, are not considered here - we focus on the strong energy-level shift alone. For the latter, it is straightforward to obtain a general formula very similar to Eq. (2), which gives the strong energy-level shift including $`O(\delta ^{\prime })`$ corrections: $$\mathrm{\Delta }E_{\mathrm{str}}=-2\alpha ^3\mu _c^2𝒜_{\pi N}(1+K_{\pi N}),$$ (3) where $`K_{\pi N}`$ is a quantity of order $`\delta ^{\prime }`$ (modulo logarithms) and can be expressed in terms of the $`S`$-wave $`\pi N`$ scattering lengths $`a_{0+}^+`$ and $`a_{0+}^-`$. Since $`K_{\pi N}`$ is small, the error introduced by the uncertainty in the determination of $`a_{0+}^+,a_{0+}^-`$ is negligible. The major uncertainty in the energy-level shift comes from the quantity $`𝒜_{\pi N}`$ whose definition is very similar to that of $`𝒜_{\pi \pi }`$. To evaluate this quantity, one has to calculate the relativistic scattering amplitude for the process $`\pi ^-p\to \pi ^-p`$ at $`O(\delta ^{\prime })`$, subtract all diagrams that are made disconnected by cutting one virtual photon line and remove the Coulomb phase. The real part of the remainder, as for the $`\pi ^+\pi ^-`$ case, contains singular pieces that behave like $`|𝐩|^{-1}`$ and $`\mathrm{ln}(|𝐩|/\mu _c)`$ that should be again subtracted ($`𝐩`$ denotes the relative 3-momentum of the $`\pi ^-p`$ pair in the CM frame). The rest - evaluated at $`𝐩=0`$ - coincides, by definition, with $`𝒜_{\pi N}`$. [The normalization of the relativistic amplitude is chosen so that $`𝒜_{\pi N}=a_{0+}^++a_{0+}^-+O(\delta ^{\prime })`$.] Further, to analyze the isospin-breaking corrections to the energy-level shift, we relate the physical scattering amplitude at threshold $`𝒜_{\pi N}`$ to the scattering lengths $`a_{0+}^+,a_{0+}^-`$ in ChPT. At $`O(p^2)`$ in the chiral expansion, where the amplitude is determined by tree diagrams, this relation is remarkably simple. Constructed on the basis of the effective $`\pi N`$ Lagrangian , the amplitude contains the pseudovector Born term $`𝒜_{\pi N}^{\mathrm{pv}}`$ with physical masses, and a contribution that contains a linear combination of $`O(p^2)`$ LEC’s, $$𝒜_{\pi N}^{(2)}=a_{0+}^++a_{0+}^-+ϵ_{\pi N}^{(2)}=𝒜_{\pi N}^{\mathrm{pv}}+2\widehat{m}B\kappa _1c_1+M_{\pi ^+}^2(\kappa _2c_2+\kappa _3c_3)+e^2(\sigma _1f_1+\sigma _2f_2),$$ (4) where the quantity $`B`$ is related to the quark condensate, and where $`c_i`$ ($`f_i`$) are strong (electromagnetic) LEC’s from the $`O(p^2)`$ Lagrangian of ChPT. Furthermore, $`\kappa _i`$ and $`\sigma _i`$ denote isospin symmetric coefficients whose explicit expressions are not needed here. From Eq. (4), it is straightforward to visualize both mechanisms of isospin-breaking corrections to the hadronic atom observables, not included in potential approaches.
The direct quark-photon coupling is encoded in the coupling constants $`f_i`$, whereas the effect of the mass tuning in the hadronic amplitude (described above) is due to the term proportional to $`2\widehat{m}B`$. Indeed, at this order in the chiral expansion, one has $`2\widehat{m}B=M_{\pi ^0}^2`$. As we express the strong amplitude in terms of charged masses by convention, we write $$2\widehat{m}B=M_{\pi ^+}^2-\mathrm{\Delta }_\pi ;\mathrm{\Delta }_\pi =M_{\pi ^+}^2-M_{\pi ^0}^2,$$ (5) and obtain $$ϵ_{\pi N}^{(2)}=-\mathrm{\Delta }_\pi \kappa _1c_1+e^2(\sigma _1f_1+\sigma _2f_2)+O(\widehat{m}\delta ^{\prime })+o(\delta ^{\prime }).$$ (6) Estimates for the energy-level shift on the basis of the expression (6) will be presented elsewhere. Here we note that a simple order-of-magnitude estimate for $`f_1`$ shows that $`f_1`$ induces an uncertainty in the energy-level shift of roughly the same size as the total correction given in Ref. . To summarize, we have applied a non-relativistic effective Lagrangian approach to the study of $`\pi ^+\pi ^-`$ and $`\pi ^-p`$ atoms in the ground state. A general expression for the width $`\mathrm{\Gamma }_{A_{2\pi }\to \pi ^0\pi ^0}`$ and for the strong level shift of pionic hydrogen has been obtained at next-to-leading order in isospin breaking. The sources of the isospin-breaking corrections in these quantities, complementary to the ones already considered in the potential scattering theory approach, have been clearly identified. Acknowledgments. V. E. L. thanks the Organizing Committee of the MENU99 Symposium for financial support and the University of Bern, where this work was performed, for hospitality. This work was supported in part by the Swiss National Science Foundation, and by TMR, BBW-Contract No. 97.0131 and EC-Contract No. ERBFMRX-CT980169 (EURODA$`\mathrm{\Phi }`$NE).
# ROSAT PSPC observations of T Tauri stars in MBM12

Table 4 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html.

## 1 Introduction

The nearest molecular cloud complex to the sun (distance ∼ 65 pc) consists of clouds 11, 12, and 13 from the catalog of Magnani et al. (1985) and is located at (l,b) ∼ (159.4, −34.3). This complex of clouds (which we will refer to as MBM12) was first identified by Lynds (1962) and appears as objects L1453-L1454, L1457, L1458 in her catalog of dark nebulae. The mass of the entire complex is estimated to be ∼ 30–200 M<sub>⊙</sub> based on radio maps of the region in <sup>12</sup>CO, <sup>13</sup>CO and C<sup>18</sup>O (Pound et al. 1990; Zimmermann & Ungerechts 1990). Recently, there has been much interest in understanding the origin of many isolated T Tauri stars (TTS) and isolated regions of star-formation. For example, within ∼ 100 pc from the sun there are at least two additional regions of recent star-formation: the TW Hydrae association (distance ∼ 50 pc; e.g., Kastner et al. 1997; Webb et al. 1999) and the $`\eta `$ Chamaeleontis region (distance ∼ 97 pc; Mamajek et al. 1999). Both of these star-forming regions appear to be isolated in that they do not appear to be associated with any molecular gas. In addition, both are comprised mainly of “weak-line” TTS<sup>1</sup><sup>1</sup>1We define “weak-line” TTS (WTTS) to be TTS with H$`\alpha `$ equivalent widths, W(H$`\alpha `$) $`>`$ −10 Å and “classical” TTS (CTTS) to be TTS with W(H$`\alpha `$) $`<`$ −10 Å, where the negative sign denotes emission. In contrast, most of the TTS in MBM12 are CTTS which are still associated with their parent molecular cloud. In addition to the above isolated star-forming regions, TTS have been found outside of the central cloud core regions in many nearby star-forming cloud complexes (see references in Feigelson 1996). Several theories exist to explain how TTS can separate from their parent molecular clouds, either by dynamical interactions (Sterzik & Durisen 1995) or by high-velocity cloud impacts (Lépine & Duvert 1994). Feigelson (1996) also suggests that some of these TTS may form in small turbulent cloudlets that dissipate after forming a few TTS. Since the TTS in MBM12 appear to still be in the cloud in which they formed, we know they have not been ejected from some other more distant star-forming region. Therefore MBM12 may be an example of one of the cloudlets proposed by Feigelson (1996). Moriarty-Schieven et al. (1997) argue that MBM12 has recently been compressed by a shock associated with its interface with the Local Bubble. This shock may also have recently triggered the star-formation currently observed in MBM12 (cf. Elmegreen 1993). Alternatively, Ballesteros-Paredes et al. (1999) suggest that MBM12 may be an example of a star-forming molecular cloud that formed via large scale streams in the interstellar medium. MBM12 is different from most other high-latitude clouds at $`|b|`$ $`>`$ 30° in terms of its higher extinction and its star formation capability (e.g., Hearty et al. 1999). Based on CO observations and star counts, the peak extinction in the cloud is $`A_\mathrm{v}`$ ∼ 5 mag (Duerr & Craine 1982a; Magnani et al. 1985; Pound et al. 1990; Zimmermann & Ungerechts 1990).
However, molecular clouds are clumpy and it is possible that some small dense cores with $`A_\mathrm{v}`$ $`>`$ 5 mag were not resolved in previous molecular line and extinction surveys. For example, Zuckerman et al. (1992) estimate $`A_\mathrm{v}`$ ∼ 11.5 mag through the cloud, along the line of sight to the eclipsing cataclysmic variable H0253+193 located behind the cloud, and we estimate $`A_\mathrm{v}`$ ∼ 8.4–8.9 along the line of sight to a G9 star located on the far side of the cloud (Sect. 4). Although there is evidence for gravitationally bound cores in MBM12, the entire cloud does not seem to be bound by gravity or pressure (Pound et al. 1990; Zimmermann & Ungerechts 1990). Therefore, the cloud is likely a short-lived, transient, object similar to other high latitude clouds, which have estimated lifetimes of a few million years based on the sound crossing time of the clouds (Magnani et al. 1985). If this is the case, MBM12 will dissipate in a few million years and leave behind an association similar to the TW Hydrae association that does not appear to be associated with any molecular material. Previous searches for TTS in MBM12 have made use of H$`\alpha `$, infrared, and X-ray observations. The previously known TTS in MBM12 are listed in Table 1 with their coordinates, spectral types, apparent magnitudes, and selected references. We include the star S18 in the list even though Downes & Keyes (1988) point out that it could be an Me star rather than a T Tauri star, since our observations confirm that it is a CTTS. The previously known TTS and the new TTS identified in this study are plotted in Fig. 1 with an IRAS 100 $`\mu `$m contour that shows the extent of the cloud. Although the TTS LkH$`\alpha `$264 is a well studied object because of its extreme CTTS features (i.e., strong H$`\alpha `$ emission and infrared excess) and it is known to have strong Li I $`\lambda `$6708 Å absorption (Herbig 1977), there is no measurement for the equivalent width of the lithium line, W(Li), of this star in the literature. The only previously known T Tauri star in this cloud which has a published measurement of W(Li) is E02553+2018, which was first identified as a T Tauri star in the Einstein Extended Medium Sensitivity Survey (EEMSS) (Gioia et al. 1990; Stocke et al. 1991). More recently, Martín et al. (1994) presented a high resolution spectrum of this star for which they measured W(Li) = 475 mÅ. Caillault et al. (1995) reported an X-ray count rate for this source of 0.008 counts s<sup>-1</sup> in the Einstein band. None of the other TTS in the cloud have been previously detected in X-rays. We present the Hipparcos satellite observations in Sect. 2 that show the distance to MBM12 is not as well constrained as previously thought and our high resolution observations of two TTS in the cloud that indicate the TTS are probably at the same distance as the cloud. In Sect. 3 we describe the ROSAT All-Sky Survey (RASS) and ROSAT pointed observations investigated. In Sect. 4 we present the results of our low-resolution spectroscopic observations of the T Tauri star candidates. In Sect. 5 we discuss the X-ray variability of the TTS and in Sect. 6 we derive an X-ray luminosity function for the TTS in MBM12. In Sect. 7 we present our conclusions.

## 2 The distance to MBM12 and its T Tauri stars

### 2.1 The distance to the gas

The distance to MBM12 was first estimated by Duerr & Craine (1982a) using Wolf diagrams.
They found evidence for two clouds along this line of sight: one cloud at a distance of 200–300 pc with a typical visual extinction of ∼ 2–3 mag and another cloud at a distance of 500–800 pc with a typical visual extinction of ∼ 1.5 mag. However, more recent observations show that this is probably one complex of molecular gas at a much smaller distance. A more accurate estimate of the distance was reported by Hobbs et al. (1986) and Hobbs et al. (1988). The method used was to identify 10 bright stars in the direction of the cloud for which a spectroscopic parallax could be determined and look for interstellar Na I D lines in the spectra of these stars. They found that the star HD18404 (distance ∼ 60 pc) showed no interstellar absorption features and is therefore presumably in front of the cloud, and the star HD18519/20 (distance ∼ 70 pc) did show interstellar absorption features and is therefore behind the cloud. Since observations of the other 8 stars are consistent with these two stars, most investigators have assumed a distance of ∼ 65±5 pc for the cloud. Since the Hipparcos satellite measured the trigonometric parallax of most of these stars, it is no longer necessary to assume a spectral type or intrinsic luminosity (as is necessary for a spectroscopic parallax) to measure their distance. According to Hipparcos, the distance to HD18404 is ∼ 32±1 pc and the distance to HD18519/20 is ∼ 90±12 pc. The stars used by Hobbs et al. (1986) and Hobbs et al. (1988) to establish the distance to MBM12 are listed in Table 2 along with their apparent magnitude, spectral type, distance based on spectroscopic parallax, distance based on the Hipparcos parallax, and whether the spectrum presented in Hobbs et al. (1986) and Hobbs et al. (1988) showed interstellar Na I absorption lines. Although the Hipparcos results indicate the distance to MBM12 is not as well constrained as thought, it is consistent with the previous result (i.e., ∼ 65±35 pc). However, since the revised distance estimate has ∼ 50% uncertainty, it cannot be used to derive accurate parameters for the TTS in the cloud. Zimmermann & Ungerechts (1990) point out that, although the distance estimate based on the two bright stars from Hobbs et al. (1986) and Hobbs et al. (1988) is valid for the northern section of the cloud, radio observations of the molecular gas show there are at least 4 velocity components that may all be at different distances. However, the polarization map of MBM12 produced by Bhatt & Jain (1992) shows that the polarization of the stars in the upper region of the cloud (where the Hobbs et al. 1986 and Hobbs et al. 1988 stars are located) is the same as that of the lower region of the cloud. Thus, they argue that the entire complex is at the same distance. Zimmermann & Ungerechts (1990) also find that the ratio of the CO mass to the virial mass M(CO)/M<sub>vir</sub> = 0.03 for the whole cloud, indicating the entire cloud is not gravitationally bound. However, they point out that some of the cores in the larger clumps may be gravitationally bound. If the cloud is found to be at a somewhat greater distance, these results will support the hypothesis that a few of the large clumps are gravitationally bound.

### 2.2 The association of the stars with the gas

Although the TTS and the cloud are projected along the same line of sight, they may be at different locations and thus not associated with each other.
However, we can check whether the radial velocities of the stars are similar to that of the gas to find out if it is likely that the T Tauri stars are associated with the cloud. The distribution of cloud velocities is quite broad, with at least 4 components with average velocities −5.0 km s<sup>-1</sup> (I), −2.3 km s<sup>-1</sup> (II), 1.4 km s<sup>-1</sup> (III), and 5.0 km s<sup>-1</sup> (IV) (Zimmermann & Ungerechts 1990; Pound et al. 1990; Moriarty-Schieven et al. 1997). We obtained high resolution spectra of two of the T Tauri stars in MBM12 with FOCES at the Calar Alto 2.2-m telescope in August 1998. The spectra for these stars (RXJ0255.4+2005 and LkH$`\alpha `$264, see Fig. 3) allow us to estimate their radial velocities and confirm the W(Li) measurements of our low resolution spectra presented in Sect. 4. Determinations of radial velocity, RV, and projected rotational velocity, $`v\mathrm{sin}i`$, have been obtained by means of cross-correlation analysis of the stellar spectra with those of radial velocity and rotational standard stars, treated in an analogous way. Given the large spectral range covered by the FOCES spectra, the cross-correlation of the target and template stars was performed after rebinning the spectra to a logarithmic wavelength scale, in order to eliminate the dependence of the Doppler shift on the wavelength. Moreover, only parts of the spectra free of emission lines and/or not affected by telluric absorption lines have been used. Therefore, the Na I D and H$`\alpha `$ lines as well as wavelengths longer than about 7000 Å have been excluded from the cross-correlation analysis. The result of the cross-correlation is a correlation peak which can be fitted with a Gaussian curve. The parameters of the Gaussian, center position and full-width at half-maximum (FWHM), are directly related to RV and $`v\mathrm{sin}i`$, respectively. The method of the correlation has been fully described by Queloz (1994) and Soderblom et al. (1989). More details about the calibration procedure can be found in Appendix A of Covino et al. (1997). The radial velocities we measured for the two MBM12 TTS listed in Table 3 are similar to that of the molecular gas (Zimmermann & Ungerechts 1990; Pound et al. 1990). Radial velocity measurements have not yet been made for the fainter stars. Nevertheless, the superposition of the TTS on the cloud and the similar radial velocities of at least two of the stars with the gas are strong evidence that the TTS are associated with the cloud.

## 3 The ROSAT observations of MBM12

Since both CTTS and WTTS are typically ∼ 10<sup>3</sup>–10<sup>5</sup> times more luminous in the X-ray region of the spectrum than average (i.e., older) low-mass stars (Damiani et al. 1995), we made use of the ROSAT pointed and the RASS observations of MBM12 to identify previously unknown TTS in the cloud. The 25 ks ROSAT PSPC pointed observation (Sequence number 900138) was centered at (RA,Dec) = (2:57:04.8, +19:50:24). Although they were originally discovered by other means, all of the previously known TTS in the central region of MBM12 were also detected with ROSAT. Since the extent of the molecular gas is not known (in particular for the MBM13 region) and TTS can sometimes be displaced several parsecs from their parent clouds, we also searched in the RASS database in a ∼ 25 deg<sup>2</sup> region around MBM12. Details about ROSAT and its PSPC detector can be found in Trümper (1983) and Briel & Pfeffermann (1995), respectively.
The RASS broad-band image of the region investigated around MBM12 and the ROSAT pointed observation are displayed in Fig. 2. The X-ray source search was conducted in different ROSAT standard “bands”, defined as follows: “broad” = 0.08–2.0 keV; “soft” = 0.08–0.4 keV; “hard” = 0.5–2.0 keV; “hard1” = 0.5–0.9 keV; “hard2” = 0.9–2.0 keV. We identified all of the X-ray sources above a maximum likelihood<sup>2</sup><sup>2</sup>2The maximum likelihood can be converted into probability P through the equation P = 1 − exp(−ML). threshold of 7.4 in both the RASS and the ROSAT PSPC pointed observations of MBM12. In addition, we selected only those X-ray sources above a count rate threshold of ∼ 0.03 cts s<sup>-1</sup> in the RASS observation and a count rate threshold of 0.0013 cts s<sup>-1</sup> in the ROSAT PSPC pointed observation. The one previously known TTS candidate, S18, near the cloud MBM13 was detected in the RASS with ML = 6.4 (i.e., below our threshold); however, since our optical spectroscopic observations confirm that it is a TTS, we include it in our study. We identified 49 X-ray sources in the ROSAT PSPC pointed observation of MBM12 (including all of the previously known TTS in the central region of the cloud) and 28 X-ray sources detected in the RASS (including S18) in the regions displayed in Fig. 2. Three stars were detected both in the RASS and in the pointed observation. We list the sources detected in the ROSAT PSPC pointed observation and in the RASS in Table 4. We include the ROSAT source name, the X-ray source coordinates, the maximum likelihood for existence for each source, the broad-band count rates, the X-ray hardness ratios $`HR1`$ and $`HR2`$ (as defined in Neuhäuser et al. 1995), the apparent visual magnitude taken from the Guide Star Catalog (magnitudes for the fainter sources indicated with a “:” are estimated from the digitized sky survey images), and the broad-band X-ray to optical flux ratio. We also list the spectral type and the H$`\alpha `$ and lithium equivalent widths of the sources that have been observed spectroscopically and comments collected from our search through the SIMBAD<sup>3</sup><sup>3</sup>3This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. and NED<sup>4</sup><sup>4</sup>4The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. databases concerning the objects. Assuming a mean X-ray count-rate-to-flux conversion factor of 1.1 $`\times `$ 10<sup>-11</sup> erg cts<sup>-1</sup> cm<sup>-2</sup>, which we derive from X-ray spectral fits of the TTS in Sect. 6, if the cloud is at a distance of 65 pc, the limiting luminosities of the observations are $`1.7\times 10^{29}`$ erg s<sup>-1</sup> and $`7.2\times 10^{27}`$ erg s<sup>-1</sup> for the RASS and ROSAT pointed observations, respectively. Therefore, these observations are sufficient to detect most of the WTTS in the cloud since the threshold is below the X-ray faintest stars in the WTTS X-ray luminosity function (e.g., Neuhäuser et al. 1995). Although the RASS observation of MBM12 is not sensitive enough to detect all of the CTTS in the cloud, the objective prism survey by Stephenson (1986) identified all of the H$`\alpha `$ emission sources in this region down to a visual magnitude threshold of ∼ 13.5.
Since this limiting magnitude corresponds to the early M spectral types in MBM12, the current population of TTS in MBM12 presented in this paper should be complete for all earlier spectral types. Since MBM12 is at relatively high galactic latitude, many of the 81 X-ray sources we identified are extragalactic. Therefore, we used the X-ray to optical flux ratios (see Table 4) to remove extragalactic sources from our list of candidates (cf. Hearty et al. 1999). All sources which have log($`f_\mathrm{x}/f_\mathrm{v}`$) $`>`$ 0.0 are considered to be extragalactic and those with log($`f_\mathrm{x}/f_\mathrm{v}`$) $`<`$ 0.0 are considered to be stellar objects, some of which could be PMS. We also searched the literature to remove cataloged non-PMS stars from our list of candidates. Finally, we were left with a list of X-ray sources identified in the RASS and ROSAT pointed observations of the cloud which have stellar optical counterparts that may be PMS stars. However, many of these stars may be other types of X-ray active stars (e.g., RS CVn and dMe stars) and nearby main sequence stars (which may not be intrinsically X-ray bright, but are near enough so that their X-ray flux is large) that are more difficult to separate from PMS stars by X-ray observations alone. Therefore, follow-up spectral observations are necessary to identify which X-ray sources are T Tauri stars.

## 4 The optical spectroscopy

In order to complete the census of the TTS population of MBM12 we require follow-up observations. Since lithium is burned quickly in convective stars, a measurement of W(Li) along with a knowledge of the spectral type of a star can be a reliable indicator of youth. Therefore we obtained broad-band, low-resolution, optical spectra of the X-ray emitting TTS candidates to determine spectral types and measure the equivalent width of the H$`\alpha `$ emission and Li I 6708 Å absorption lines. The spectra were obtained from October 9–11, 1998 with the Calar Alto Faint Object Spectrograph (CAFOS) at the 2.2-m telescope at Calar Alto, Spain. The 24 $`\mu `$m pixels of the SITe-1d 2048$`\times `$2048 chip with the G-100 grism provided a reciprocal dispersion of ∼ 2.1 Å pixel<sup>-1</sup>. The resolving power, $`R`$ = $`\lambda /\delta \lambda `$ ∼ 1000, derived from the measurement of the FWHM (FWHM ∼ 6.4 Å) of several well isolated emission lines of the comparison spectra, is sufficient to resolve the lithium absorption line in T Tauri stars. The wavelength range of ∼ 4900 to 7800 Å was chosen to detect two indicators of possible youth (H$`\alpha `$ emission and Li I $`\lambda `$6708 Å absorption) and to determine spectral types. All spectra were given an initial inspection at the telescope. If a particular star showed signs of youth or the integration produced fewer than ∼ 1000 cts pixel<sup>-1</sup>, at least one additional integration was performed. The results of the spectroscopic observations of the TTS in MBM12 are summarized in Table 5.
We list the name of the star; the coordinates for the optical source; the spectral type; the log of the effective temperature, log$`T_{\mathrm{eff}}`$, assuming luminosity class V and using the spectral type-effective temperature relation of de Jager & Nieuwenhuijzen (1987); the apparent magnitude, $`V`$, taken from the Guide Star Catalog<sup>5</sup><sup>5</sup>5Magnitudes for the sources indicated with a “:” are estimated from the digitized sky survey images; the equivalent width of H$`\alpha `$, W(H$`\alpha `$); both the low and high resolution (when available) measurements of W(Li); the veiling corrected W(Li) (cf. Strom et al. 1989); and the derived lithium abundance based on the non-LTE curves of growth of Pavlenko & Magazzù (1996) assuming log$`g`$=4.5. The estimated error for the low-resolution W(Li) measurements is ∼ ±90 mÅ based on the correlation with the three stars for which we have high resolution measurements. The optical spectra of the TTS in MBM12 are displayed in Fig. 3. The spectra of the stars which show strong H$`\alpha `$ emission are also scaled by an appropriate factor to display the emission line. The spectra of the two stars we classify as young main sequence stars which still show lithium are displayed in Fig. 4. In addition to confirming that the star S18 is a CTTS with strong H$`\alpha `$ emission and Li I absorption, we identified 3 previously unknown TTS in MBM12. In order to estimate the relative age of the MBM12 stars with lithium, we plot them in a W(Li) vs. T<sub>eff</sub> diagram (Fig. 5) along with stars from Taurus (age ∼ a few Myr), the TW Hydrae Association (age ∼ 10 Myr), the $`\eta `$ Chamaeleontis Cluster (age ∼ 2–18 Myr), IC 2602 (age ∼ 30 Myr), and the Pleiades (age ∼ 100 Myr). In addition, we plot isoabundance lines for the non-LTE curves of growth of Pavlenko & Magazzù (1996) for log$`g`$=4.5 stars and the isochrones for the non-rotating lithium depletion model of Pinsonneault et al. (1990). The positions of the MBM12 stars in the diagram indicate they are young objects with ages much less than those of the Pleiades or IC 2602. Although the relative ages between the stars in MBM12, the TW Hydrae Association, and the $`\eta `$ Chamaeleontis Cluster cannot be discerned in Fig. 5, since most of the TTS in MBM12 are CTTS which are still associated with their parent molecular cloud, the TTS in MBM12 must be younger than those in the TW Hydrae Association or the $`\eta `$ Chamaeleontis Cluster, which are comprised mainly of WTTS not associated with any molecular cloud (i.e., the TTS in MBM12 have ages $`<`$ 10 Myr). Although the two F and G spectral type stars in which we detected lithium (HD 17332 and RXJ0255.3+1915) are located above the Pleiades in the W(Li) vs. T<sub>eff</sub> diagram, since they both show H$`\alpha `$ absorption stronger than any similar spectral type stars in IC 2602 (e.g., Randich et al. 1997), they are probably older than 30 Myr. Thus, we list them as main sequence stars in Table 5. Covino et al. (1997) have shown that low-resolution spectra tend to overestimate W(Li) in intermediate spectral types; therefore we probably overestimated the W(Li) for these two stars. The TTS in MBM12 are clearly lithium-rich relative to the stars in the Pleiades. However, current age dependent stellar population models predict that there should be a population of young stars with ages $`<`$ 150 Myr distributed across the sky.
Therefore we compare the density of young X-ray sources detected in MBM12 with the age dependent stellar population model of Guillout et al. (1996) to find out if we are really seeing an excess of young X-ray sources in the direction of MBM12. In the galactic latitude range of 40° $`>`$ $`|`$b$`|`$ $`>`$ 30°, Guillout et al. (1996) predict there should be 0.6–1.0 stars deg<sup>-2</sup> and 0.14–0.20 stars deg<sup>-2</sup> above an X-ray count rate threshold of 0.0013 cts s<sup>-1</sup> and 0.03 cts s<sup>-1</sup>, respectively, which have ages $`<`$ 150 Myr. Therefore, we expect to detect ∼ 1.9–3.1 young stars in this age group in the ROSAT pointed observation and 3.5–5 stars in this age group in the RASS observation. Since we observed several young stars which probably have ages $`<`$ 150 Myr but are not associated with MBM12 (i.e., the two intermediate spectral type stars which have not yet depleted their lithium and the 3 Me and 3 Ke stars listed in Table 4 which have depleted their lithium but show H$`\alpha `$ emission), there are a sufficient number of X-ray active stars in this region to account for the numbers predicted by Guillout et al. (1996). Therefore, the TTS we observe represent an excess of X-ray active young stars associated with MBM12. In addition to the X-ray selected T Tauri star candidates, we also observed the reddest star from a list of stars compiled by Duerr & Craine (1982b) which are along the line of sight to MBM12 and have V-I colors redder than 2.5 mag. The optical spectrum of this star, which we will call DC48, indicates it is a G9 star. Since Duerr & Craine (1982b) measured $`V`$ = 18.7 and V-I = 5.6 mag, it corresponds to a main sequence star with $`A_\mathrm{v}`$ ∼ 8.9 mag at a distance of ∼ 63 pc or a giant star with $`A_\mathrm{v}`$ ∼ 8.4 mag at a distance of ∼ 950 pc. The spectrum of the highly reddened ($`A_\mathrm{v}`$ ∼ 8.4–8.9) G9 star, DC48, is displayed in Fig. 4.

## 5 X-ray variability of the TTS

We tested all of the TTS for X-ray variability using the methods described in Hambaryan et al. (1999). The only T Tauri star which showed X-ray variability is the newly identified star RXJ0255.5+2005, which was detected both in the RASS and in the ROSAT pointed observation and flared during the pointed observation (see the light curve displayed in Fig. 6). The peak X-ray count rate during the flare increased by more than a factor of 6 from the pre-flare count rate. Although we do not have a sufficient number of counts (∼ 1000 counts for the non-flare phase and ∼ 500 counts for the flare phase) for a detailed study of the evolution of the coronal temperature during the flare, we performed a rough spectral fit using a 2 temperature Raymond-Smith model (Raymond & Smith 1977) including a photoelectric absorption term using the Morrison & McCammon (1983) cross sections. We fit the data for 3 time intervals: the pre-flare phase, the flare, and the post-flare phase (Fig. 7). Both temperature components increased during the flare and remained high throughout the post-flare phase. The parameters derived from the X-ray spectral fits are listed in Table 6 (see Sect. 6 for a description of the Table columns). Since the two temperature components are not well constrained by the spectral fit during the flare, these estimates should be viewed as lower limits. The results of the spectral fits are consistent with the type of coronal heating seen in high signal-to-noise X-ray spectra of other flaring WTTS (e.g., Tsuboi et al. 1998).
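For readers who wish to reproduce a simple variability check, the sketch below applies a standard Kolmogorov-Smirnov test of the photon arrival times against the constant-count-rate expectation. This is a generic approach, not necessarily the specific method of Hambaryan et al. (1999) used above, and the event list is simulated purely for illustration.

```python
import numpy as np
from scipy.stats import kstest

def variability_ks(arrival_times, t_start, t_stop):
    """KS test of photon arrival times against a constant count rate.

    For a steady source the arrival times are uniformly distributed over
    the observation window; a small p-value flags variability (e.g. a flare).
    """
    t = (np.asarray(arrival_times) - t_start) / (t_stop - t_start)
    return kstest(t, "uniform")

# Simulated example: a steady source plus a flare late in the observation.
rng = np.random.default_rng(1)
quiet = rng.uniform(0.0, 25000.0, 1000)      # ~1000 quiescent counts
flare = rng.uniform(20000.0, 21000.0, 500)   # ~500 extra counts in a flare
stat, p = variability_ks(np.concatenate([quiet, flare]), 0.0, 25000.0)
print("KS statistic = %.3f, p = %.2e" % (stat, p))  # tiny p => variable
```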
## 6 The X-ray luminosity function

Although our X-ray spectra do not have sufficiently high signal-to-noise for a detailed comparison of X-ray spectral models, we performed a spectral fit using a 2 temperature Raymond-Smith model including a photoelectric absorption term as described in Sect. 5 for the sources with at least 100 counts. For the sources with fewer than 100 counts we calculate the X-ray flux using an X-ray count rate to flux conversion factor of 1.1 $`\times `$ 10<sup>-11</sup> erg cts<sup>-1</sup> cm<sup>-2</sup>, which is the mean conversion factor derived from the spectra for which we performed spectral fits. We list the total ROSAT broad band (0.08–2.0 keV) counts and count rates for the TTS in MBM12 and the derived interstellar+circumstellar absorption cross sections and plasma temperatures for the spectra in which we performed spectral fits in Table 6. The X-ray luminosities assume a distance of 65 pc. Since the binary LkH$`\alpha `$262/263 was not spatially resolved with the PSPC, we fit the combined X-ray spectrum to estimate the combined X-ray luminosity but we divide that value in half to generate the X-ray luminosity function. In order to compare the derived X-ray luminosity function for the TTS in MBM12 with other flux limited X-ray luminosity functions, we used the ASURV Rev. 1.2 package (Isobe & Feigelson 1990; LaValley et al. 1992), which implements the methods presented in Feigelson & Nelson (1985). Although the currently known TTS in MBM12 are all X-ray detected, the luminosity functions of other, more distant, star forming regions include upper limits. The derived X-ray luminosity function is displayed in Fig. 8 with the X-ray luminosity function for the TTS in the L1495E cloud in Taurus which (like MBM12) was observed in a deep (33 ks) ROSAT PSPC pointed observation (Strom & Strom 1994). The ROSAT pointed observation of L1495E is ∼ 20 times more sensitive than previous observations with the Einstein satellite. Strom & Strom (1994) used this observation to show that the X-ray luminosity of TTS extends to fainter luminosities than were observed with Einstein. We have re-reduced the pointed observation of L1495E in a way analogous to that of MBM12. The X-ray luminosity function we derive for L1495E (1) includes only the K and M spectral type TTS, (2) includes 6 upper limits, (3) assumes an X-ray count rate to flux conversion factor of 1.1 $`\times `$ 10<sup>-11</sup> erg cts<sup>-1</sup> cm<sup>-2</sup>, and (4) assumes a distance of 140 pc. The X-ray luminosity functions in MBM12 and L1495E agree well: in MBM12 the log$`L_{\mathrm{x}\mathrm{mean}}`$ = 29.0±0.1 erg s<sup>-1</sup> and log$`L_{\mathrm{x}\mathrm{median}}`$ = 28.7 erg s<sup>-1</sup>; in L1495E log$`L_{\mathrm{x}\mathrm{mean}}`$ = 28.9±0.2 erg s<sup>-1</sup> and log$`L_{\mathrm{x}\mathrm{median}}`$ = 28.9 erg s<sup>-1</sup>. However, we note that the MBM12 X-ray luminosity function has a lower high-luminosity limit and a higher low-luminosity limit than the L1495E X-ray luminosity function. Therefore, although the pointed observation of MBM12 is more sensitive than the pointed observation of L1495E (because MBM12 is much closer), our follow-up observations of the TTS in MBM12 may be incomplete for sources fainter than $`V`$ ∼ 15.5 mag.
In addition, since we know that one of our X-ray sources, S18, is detected but below our threshold for follow-up observations, there may be other, fainter, X-ray emitting TTS in MBM12 with spectral types later than ∼ M2 (i.e., the spectral type of S18) that will be discovered in more sensitive follow-up observations. The discrepancy at the high luminosity end of the X-ray luminosity function may also be explained if the distance to the TTS in MBM12 is larger than 65 pc. Although an increased distance is allowed by the recent Hipparcos results, it should be confirmed with further observations.

## 7 Conclusions

Although MBM12 is not a prolific star-forming cloud when compared to nearby giant molecular clouds, it is the nearest star-forming cloud to the sun and offers a unique opportunity to study the star-formation process within a molecular cloud at high sensitivity. We have presented follow-up observations of X-ray stars identified in the region of the MBM12 complex. These observations have doubled the number of confirmed TTS in this region. Since the ROSAT PSPC pointed observation of the central region of the cloud was sensitive enough to detect all of the previously known TTS in the cloud, we believe the list of 5 CTTS and 3 WTTS presented in Table 5 to be a nearly complete census of the TTS in MBM12 for spectral types earlier than ∼ M2. Assuming a mean mass of ∼ 0.6 M<sub>⊙</sub> for the 8 currently known TTS in MBM12 and a cloud mass of 30–200 M<sub>⊙</sub> (Pound et al. 1990; Zimmermann & Ungerechts 1990), the star-formation efficiency of MBM12 is ∼ 2–24%. Since the currently known TTS population in MBM12 is incomplete only for the lower mass objects, unless there are a huge number of these objects yet to be discovered in the cloud, this estimate of the star-formation efficiency will not change significantly. Although there is still a large uncertainty in the mass of the cloud, the estimated star-formation efficiencies are consistent with that expected from clouds with masses on the order of 100 M<sub>⊙</sub> (Elmegreen & Efremov 1997). By comparing the strengths of the H$`\alpha `$ emission and Li I $`\lambda `$6708 Å absorption lines of the TTS in MBM12 with those found in other young clusters, we place an upper limit of ∼ 10 Myr on the age of the stars in MBM12. By comparing the X-ray luminosity function of the TTS in MBM12 with that of the TTS in L1495E, we predict that there are more young, low-mass, stars to be discovered in MBM12 and the assumed distance to the cloud may have to be increased. Although this prediction agrees with the recently revised distance estimate to the cloud (∼ 65±35 pc) based on results of the Hipparcos satellite, it should be confirmed with future observations. We have also identified a reddened G9 star behind the cloud with $`A_\mathrm{v}`$ ∼ 8.4–8.9 mag. Therefore, there are at least two lines of sight through the cloud that show larger extinctions ($`A_\mathrm{v}`$ $`>`$ 5 mag) than previously thought for this cloud. This higher extinction explains why MBM12 is capable of star-formation while most other high-latitude clouds are not.

###### Acknowledgements.

We wish to thank Patrick Guillout for helpful discussions about the expected population of young X-ray active stars located at high galactic latitude and Loris Magnani for insightful comments concerning this paper. We also thank an anonymous referee for suggestions which enabled us to put firmer constraints on the age of the TTS in MBM12.
The ROSAT project is supported by the Max-Planck-Gesellschaft and Germany’s federal government (BMBF/DLR). TH is grateful for a stipendium from the Max-Planck-Gesellschaft for support of this research. RN acknowledges a grant from the Deutsche Forschungsgemeinschaft (DFG Schwerpunktprogramm “Physics of star formation”).
# Artificiality of multifractal phase transitions

## Abstract

A multifractal phase transition is associated to a nonanalyticity in the generalised dimensions. We show that its occurrence is an artifact of the asymptotic scaling behaviour of integral moments and that it is not observed in an analysis based on differential $`n`$-point correlation densities.

PACS: 02.50.Sk, 47.53.+n, 47.27.Eq, 05.45.+b

KEYWORDS: multifractals, phase transition, multiplicative branching processes, fully developed turbulence, multivariate correlation densities.

CORRESPONDING AUTHOR: Martin Greiner, Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D–01187 Dresden, Germany; tel.: 49-351-871-1218; fax: 49-351-871-1199; email: greiner@mpipks-dresden.mpg.de

Multifractal measures appear in a number of nonlinear physical phenomena like turbulence , chaotic dynamical systems and high-energetic multiparticle dynamics , to name but a few. Due to the close analogy between the multifractal formalism and statistical thermodynamics , any nonanalyticity in the generalised dimensions is interpreted as a multifractal phase transition . This behaviour has been discussed in the context of the above mentioned phenomena and is also denoted as the occurrence of strong intermittency. Note that conventionally generalised dimensions are extracted from the asymptotic scaling behaviour of moments, where the latter represent integrals over the fundamental correlation densities. We will now demonstrate that a multifractal phase transition is an artifact of the integral moment analysis and is not observed in an analysis based on differential correlation densities, where the true generalised dimensions of any order, characterising the underlying multiscale process, are revealed. As the textbook example of a multifractal process we consider a one-dimensional discrete random multiplicative cascade. It is associated with a binary tree structure, obtained by hierarchically partitioning the original interval of length $`l_0=1`$ into subintervals of size $`l_j=2^{-j}`$. The density $`\epsilon _\kappa ^{(j+1)}`$, associated with a $`(j+1)`$-generation interval characterised by the binary index $`\kappa =(k_1\mathrm{\dots }k_{j+1})`$ with each $`k`$ taking on possible values $`0`$ or $`1`$, is multiplicatively linked to the density at the larger scale by $$\epsilon _{k_1\mathrm{\dots }k_jk_{j+1}}^{(j+1)}=q_{k_1\mathrm{\dots }k_jk_{j+1}}^{(j+1)}\epsilon _{k_1\mathrm{\dots }k_j}^{(j)}.$$ (1) Independently for each branching, the left and right weights, $`q_L=q_{k_1\mathrm{\dots }k_j0}^{(j+1)}`$ and $`q_R=q_{k_1\mathrm{\dots }k_j1}^{(j+1)}`$, are drawn from a probabilistic splitting function $`p(q_L,q_R)`$ with support $`0\leq q_L,q_R<\infty `$. Without loss of generality we assume $`p(q_L,q_R)=p(q_R,q_L)`$ and set $`\langle q_L\rangle =1`$ as well as $`\epsilon ^{(0)}=1`$. Nature does not allow an infinitely long multifractal scaling range; in fully developed turbulence, for example, it is restricted to $`\eta \leq l\leq L`$, where $`\eta `$ and $`L`$ represent the Kolmogorov and the integral length scales, respectively. Consequently, we restrict the random multiplicative cascade to a finite number $`J`$ of cascade steps.
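As a concrete illustration (my sketch, not part of the original derivation), the following code generates one realisation of such a finite cascade by $`J`$ successive multiplications according to Eq. (1). The splitting function is an assumed factorised, two-valued example with $`\langle q\rangle =1`$, chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade(J, draw_q):
    """One realisation of a binary multiplicative cascade after J steps.

    draw_q(size) must return independent weights q >= 0 with <q> = 1;
    left and right weights are drawn independently, which corresponds
    to a factorised (non-conservative) splitting function.
    """
    eps = np.ones(1)                  # epsilon^(0) = 1 on the unit interval
    for _ in range(J):
        q = draw_q(2 * eps.size)      # one weight per new sub-interval
        eps = np.repeat(eps, 2) * q   # eps^(j+1) = q * eps^(j), Eq. (1)
    return eps

# Illustrative splitting function: q = 0.5 or 1.5 with equal probability.
draw_q = lambda n: rng.choice([0.5, 1.5], size=n)

eps = cascade(J=14, draw_q=draw_q)    # 2**14 bins covering [0, 1]
print(eps.mean(), eps.max())          # mean ~ 1, strongly intermittent field
```

The box moments introduced below are then obtained by resumming such a field over blocks of $`2^{J-j}`$ bins and averaging over many independent realisations.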
The complete statistical information of the ensemble of generated cascade fields is then contained in the multivariate characteristic function $$Z[\lambda ^{(J)}]=\mathrm{exp}\left(\underset{k_1,\mathrm{},k_J=0}{\overset{1}{}}\lambda _{k_1\mathrm{}k_J}^{(J)}\epsilon _{k_1\mathrm{}k_J}^{(J)}\right),$$ (2) from which the $`n`$-point correlation densities are derived $$\rho _{\kappa _1\mathrm{}\kappa _n}^{[n]}=\epsilon _{\kappa _1}^{(J)}\mathrm{}\epsilon _{\kappa _n}^{(J)}=\frac{^nZ[\lambda ^{(J)}]}{\lambda _{\kappa _1}^{(J)}\mathrm{}\lambda _{\kappa _n}^{(J)}}|_{\lambda ^{(J)}=0}$$ (3) by taking appropriate derivatives with respect to the conjugate field variables $`\lambda _\kappa ^{(J)}`$. The multivariate characteristic function and the resulting $`n`$-point correlation densities have been calculated analytically in Refs. ; see also Refs. . For the extraction of generalised dimensions exponents so-called (box-) moments are considered, defined as $$M_n(J,j)=\frac{1}{2^j}\underset{k_1,\mathrm{},k_j=0}{\overset{1}{}}\left(\overline{\epsilon }_{k_1\mathrm{}k_j}^{(J,j)}\right)^n,$$ (4) where the backward density $$\overline{\epsilon }_{k_1\mathrm{}k_j}^{(J,j)}=\frac{1}{2^{Jj}}\underset{k_{j+1},\mathrm{},k_J=0}{\overset{1}{}}\epsilon _{k_1\mathrm{}k_J}^{(J)}$$ (5) has been resummed over the smallest scales from $`J`$ to $`j`$. With (5) the (box-) moment (4) can be understood as a (box-) integration over the $`n`$-point correlation density (3): $$\left(\overline{\epsilon }_{k_1\mathrm{}k_j}^{(J,j)}\right)^n=\frac{1}{2^{n(Jj)}}\underset{k_{j+1}^{(1)},\mathrm{},k_J^{(1)}=0}{\overset{1}{}}\mathrm{}\underset{k_{j+1}^{(n)},\mathrm{},k_J^{(n)}=0}{\overset{1}{}}\epsilon _{k_1\mathrm{}k_jk_{j+1}^{(1)}\mathrm{}k_J^{(1)}}^{(J)}\mathrm{}\epsilon _{k_1\mathrm{}k_jk_{j+1}^{(n)}\mathrm{}k_J^{(n)}}^{(J)},$$ (6) yielding the explicit expressions: $`M_1(J,j)`$ $`=`$ $`1,`$ (7) $`M_2(J,j)`$ $`=`$ $`q^2^j\left[{\displaystyle \frac{q_Lq_R}{2q^2}}+{\displaystyle \frac{2q^2q_Lq_R}{2q^2}}\left({\displaystyle \frac{q^2}{2}}\right)^{Jj}\right],`$ (8) and, for arbitrary order, $$M_n(J,j)=q^n^j\underset{\{p\}}{}a_{\{p\}}^{(n)}\underset{\mathrm{\Sigma }n_i=n}{}\left(\frac{q^{n_i}}{2^{n_i1}}\right)^{Jj},$$ (9) where $`\{p\}`$ stands for all possible partitions of $`_in_i=n`$ with $`n_i\{1,2,\mathrm{},n\}`$; the coefficients $`a_{\{p\}}^{(n)}`$ are simple scale-independent functionals of the splitting-function moments $`q_L^{m_1}q_R^{m_2}`$ with $`0m_1+m_2n`$. – While we will present a full technical understanding of the structure reflected in the expression (9) at a later stage of this presentation, some intuitive understanding can already be derived: the backward density (5) can be rewritten as $`\overline{\epsilon }_{k_1\mathrm{}k_j}^{(J,j)}`$ $`=`$ $`q_{k_1}^{(1)}\mathrm{}q_{k_1\mathrm{}k_j}^{(j)}{\displaystyle \frac{1}{2^{Jj}}}{\displaystyle \underset{k_{j+1},\mathrm{},k_J=0}{\overset{1}{}}}\epsilon _{k_{j+1}\mathrm{}k_J}^{(Jj)}`$ (10) $`=`$ $`\epsilon _{k_1\mathrm{}k_j}^{(j)}\left(1+\mathrm{\Delta }_{k_1\mathrm{}k_j}^{(Jj)}\right),`$ (11) where $`(1+\mathrm{\Delta }^{(Jj)})`$ represents the resummed density of the subcascade with length $`Jj`$ following the branching point $`(k_1\mathrm{}k_j)`$. This resummed density need not be strictly equal to $`1`$ and in general will fluctuate around $`1`$. Hence, the backward density (5) need not be identical to the forward density $`\epsilon _{k_1\mathrm{}k_j}^{(j)}`$. 
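As a numerical cross-check of the definitions (4)–(6), the box moments can be estimated from an ensemble of simulated cascades; this sketch reuses the `cascade` helper from above and is, again, only an illustration of the resummation, not part of the original analysis.

```python
def box_moment(fields, j, n):
    """Monte-Carlo estimate of the box moment M_n(J, j) of Eq. (4).

    Each entry of `fields` holds 2**J bin densities; the backward density
    of Eq. (5) is the mean over the 2**(J-j) finest bins inside each of
    the 2**j boxes of size l_j.
    """
    eps_bar = np.array([f.reshape(2**j, -1).mean(axis=1) for f in fields])
    return np.mean(eps_bar**n)

fields = [cascade(J=10) for _ in range(2000)]
print(box_moment(fields, j=4, n=2))       # to be compared with Eq. (8)
```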
In view of (10), the first factor $`\langle q^n\rangle ^j`$ of the expression (9) then originates from $`(\epsilon _{k_1\mathrm{}k_j}^{(j)})^n`$ while the remainder of the expression (9) is equal to $`\langle (1+\mathrm{\Delta }^{(J-j)})^n\rangle `$. For the very special case of a conservative cascade, where with $`p(q_L,q_R)=p(q_L)\delta (q_L+q_R-2)`$ the sum of the left and right weight is conserved at every branching, the two coefficients $`a_{\{p\}}^{(2)}`$ of the second-order moment (8) become $`a_{\{1,1\}}=\langle q_Lq_R\rangle /(2-\langle q^2\rangle )=1`$ and $`a_{\{2,0\}}=(2-\langle q^2\rangle -\langle q_Lq_R\rangle )/(2-\langle q^2\rangle )=0`$, since $`\langle q_Lq_R\rangle =\langle q(2-q)\rangle `$. Similarly, all but one of the coefficients $`a_{\{p\}}^{(n)}`$ of the expression (9) vanish and the moment of order $`n`$ becomes exactly $`M_n(J,j)=\langle q^n\rangle ^j`$. Also in view of (10) this outcome is intuitively clear since, due to the conservative nature of the splitting function, the resummed density $`1+\mathrm{\Delta }^{(J-j)}`$ is strictly equal to one. The moment $`M_n(J,j)`$ does not depend on the length $`J`$ of the cascade and exhibits perfect scaling. The multifractal scaling exponents $$\tau (n)=\frac{\mathrm{ln}\langle q^n\rangle }{\mathrm{ln}2}$$ (12) are deduced by setting $`M_n(J,j)=(l_0/l_j)^{\tau (n)}`$ and are related to the generalised dimensions $`D_n`$ by $`\tau (n)=(n-1)(1-D_n)`$. A factorised splitting function $`p(q_L,q_R)=p(q_L)p(q_R)`$ is a representative of non-conservative cascades, where the sum of the left and right weight is only globally conserved ($`\langle q_L+q_R\rangle =2`$), but not locally ($`q_L+q_R\ne 2`$). As a consequence, the mixed splitting-function moments $`\langle q_L^{n_1}q_R^{n_2}\rangle \ne \langle q^{n_1}(2-q)^{n_2}\rangle `$ do not show the anticorrelation typical of conservative cascades. For this case the coefficients $`a_{\{p\}}^{(n)}`$ are generally nonzero and the moments (9) do not show rigorous scaling. The resummed density $`1+\mathrm{\Delta }^{(J-j)}`$ of the subcascade with length $`J-j`$ now fluctuates around one and causes the deviations from rigorous scaling behaviour. For nonconservative cascades two subclasses of splitting functions have to be distinguished: in the case of so-called weak intermittency, the support of $`p(q_L,q_R)`$ is restricted to $`0\le q_L,q_R<2`$, so that the respective moments are restricted to $`\langle q^n\rangle <2^{n-1}`$, where $`n>1`$ and where the extra $`1/2`$ on the right hand side of this inequality comes from the requirement $`\langle q\rangle =1`$. This implies that, in the limit of an infinitely long cascade, $`J\to \mathrm{\infty }`$, only the true scaling term $`M_n(J,j)\to \langle q^n\rangle ^j`$ survives in the expression (9), so that asymptotically for $`j\ll J`$ the same multifractal scaling exponents (12) are extracted as from the corresponding conservative cascade. For the resummed density $`1+\mathrm{\Delta }^{(J-j)}`$ of the subcascade, this implies that in this asymptotic scaling range its probability distribution has converged to a scale-independent fixed point; see also Refs. . The second subclass of nonconservative splitting functions exhibits the phenomenon that has become known as multifractal phase transition or strong intermittency: once the support of $`p(q_L,q_R)`$ allows values $`q_L`$ and/or $`q_R`$ to exceed $`2`$, a critical order $`n_{\mathrm{crit}}`$ exists, so that $`\langle q^n\rangle /2^{n-1}<1`$ for $`n<n_{\mathrm{crit}}`$ and $`\langle q^n\rangle /2^{n-1}>1`$ for $`n>n_{\mathrm{crit}}`$. 
Then, again in the limit of a very long, but finite cascade and given that $`n>n_{\mathrm{crit}}`$, the term corresponding to the partition $`\{n,0,\mathrm{},0\}`$ dominates the moment (9) of order $`n`$, which hence for $`j\ll J`$ scales as $$M_{n>n_{\mathrm{crit}}}(J,j\ll J)\approx a_{\{n,0,\mathrm{},0\}}^{(n)}\left(\frac{\langle q^n\rangle }{2^{n-1}}\right)^J2^{j(n-1)}\propto \left(\frac{l_0}{l_j}\right)^{n-1}.$$ (13) For the multifractal scaling exponents this implies $$\tau (n)=\{\begin{array}{cc}\mathrm{ln}\langle q^n\rangle /\mathrm{ln}2\hfill & (n<n_{\mathrm{crit}})\hfill \\ n-1\hfill & (n>n_{\mathrm{crit}}),\hfill \end{array}$$ (14) so that there is a discontinuity in the first derivative of $`\tau (n)`$ with respect to the moment-order $`n`$ at $`n_{\mathrm{crit}}`$. For $`n<n_{\mathrm{crit}}`$ the multifractal scaling exponents $`\tau (n)=\mathrm{ln}\langle \mathrm{exp}(n\mathrm{ln}q)\rangle /\mathrm{ln}2`$ may be interpreted as a free-energy-like function with moment order $`n`$ as inverse temperature. – According to (13), note that in the limit $`J\to \mathrm{\infty }`$ of a non-physical, infinitely long cascade, moments with order larger than $`n_{\mathrm{crit}}`$ would diverge. In view of (10) this implies that the probability distribution of the subcascade-resummed density $`1+\mathrm{\Delta }^{(J-j)}`$ comes with an algebraic tail. For a nonconservative random multiplicative cascade with a factorised splitting function $`p(q_L,q_R)=p(q_L)p(q_R)`$, where, for example, $$p(q)=\frac{1}{\sqrt{2\pi }\sigma q}\mathrm{exp}\left(-\frac{1}{2\sigma ^2}\left(\mathrm{ln}q+\frac{\sigma ^2}{2}\right)^2\right)$$ (15) is of log-normal type, the multifractal scaling exponents (14) are found to be $`\tau (n<n_{\mathrm{crit}})=\sigma ^2n(n-1)/(2\mathrm{ln}2)`$ below the critical order $`n_{\mathrm{crit}}=2\mathrm{ln}2/\sigma ^2`$, which defines the multifractal phase transition at $`\tau (n_{\mathrm{crit}})=n_{\mathrm{crit}}-1`$. In fully developed turbulence a good qualitative description of observed multiplier distributions in the surrogate energy dissipation field and an acceptable intermittency exponent $`\tau (2)\approx 0.25`$ has been found for the parameter choice $`\sigma =0.42`$; for this value we find $`n_{\mathrm{crit}}=7.86`$. Note that for the log-normal weight distribution the so-called Novikov rules do not apply since weights may exceed a value of $`2`$ with nonzero probability. – The scale-dependence of the exact second-order moment expression (8) is illustrated in Fig. 1. As expected for $`j\ll J`$, $`M_2(J,j)`$ scales as $`\langle q^2\rangle ^j`$ for $`\sigma <\sigma _{\mathrm{crit}}`$ and as $`2^j`$ for $`\sigma >\sigma _{\mathrm{crit}}`$, where $`\sigma _{\mathrm{crit}}=\sqrt{\mathrm{ln}2}`$ corresponds to $`n_{\mathrm{crit}}=2`$. As $`j\to J`$ noticeable deviations from this scaling behaviour occur. Directly at $`\sigma =\sigma _{\mathrm{crit}}`$, where $`\langle q^2\rangle =2`$, these finite size effects become so large that it becomes difficult to extract a scaling exponent asymptotically. Analogous findings hold for higher order moments $`M_n(J,j)`$. In the presence of a multifractal phase transition the information on the true multifractal scaling exponents (12) with $`n>n_{\mathrm{crit}}`$ appears to get lost. But this is not the case! Note in this respect that, according to (4)-(6), the moments $`M_n(J,j)`$ are box-integrals over the $`n`$-point correlation density (3), the latter thus being more fundamental than the former. 
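The numbers quoted for the log-normal case can be reproduced directly from the formulas above; a small sketch (our notation):

```python
import numpy as np

def tau(n, sigma):
    """Multifractal exponents of Eq. (14) for the log-normal splitting function."""
    n = np.asarray(n, dtype=float)
    n_crit = 2.0 * np.log(2.0) / sigma**2
    return np.where(n < n_crit,
                    sigma**2 * n * (n - 1.0) / (2.0 * np.log(2.0)),
                    n - 1.0)

sigma = 0.42
print(2.0 * np.log(2.0) / sigma**2)       # n_crit = 7.86, as quoted in the text
print(tau(2, sigma))                      # intermittency exponent tau(2) ~ 0.25
```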
For demonstration, we pick the two-point correlation density $$\rho ^{[2]}(d)=\langle \epsilon _{\kappa _1}^{(J)}\epsilon _{\kappa _2}^{(J)}\rangle =\langle q^2\rangle ^{J-d}\left(\langle q_Lq_R\rangle +\left(1-\langle q_Lq_R\rangle \right)\delta _{d,0}\right),$$ (16) which is a function of the ultrametric distance $`d=J-j`$ between the two bins $`\kappa _1=(k_1\mathrm{}k_jk_{j+1}\mathrm{}k_J)`$ and $`\kappa _2=(k_1\mathrm{}k_jk_{j+1}^{}\mathrm{}k_J^{})`$, where $`k_{j+1}\ne k_{j+1}^{}`$. It is depicted in Fig. 2 for a random multiplicative cascade with a factorised splitting function of log-normal type and reveals perfect scaling with the true multifractal scaling exponents. For the two-point correlation density and, in general, for all $`n`$-point correlation densities no multifractal phase transition occurs. Then, why does a multifractal phase transition occur in an analysis based on integral moments, but not in one based on correlation densities? The answer to this question will be found in an inconspicuous property of the correlation densities. For demonstration we discuss this again only for second order. First, we realize that the second-order moment (4) can be cast in the form $$M_2(J,j)=\frac{1}{2^{J-j}}\left(\rho ^{[2]}(d=0)+\underset{d=1}{\overset{J-j}{\sum }}2^{d-1}\rho ^{[2]}(d)\right).$$ (17) Next, by adding and subtracting a suitable term at $`d=0`$, the expression (16) is rewritten as $`\rho ^{[2]}(d)`$ $`=`$ $`\langle q^2\rangle ^{J-d}\left[\langle q_Lq_R\rangle +\left({\displaystyle \frac{\langle q_Lq_R\rangle }{2-\langle q^2\rangle }}-\langle q_Lq_R\rangle \right)\delta _{d,0}\right]+{\displaystyle \frac{2-\langle q^2\rangle -\langle q_Lq_R\rangle }{2-\langle q^2\rangle }}\langle q^2\rangle ^J\delta _{d,0}`$ (18) $`\equiv `$ $`\stackrel{~}{\rho }^{[2]}(d)+{\displaystyle \frac{2-\langle q^2\rangle -\langle q_Lq_R\rangle }{2-\langle q^2\rangle }}\langle q^2\rangle ^J\delta _{d,0}.`$ (19) For $`d\ne 0`$ the modified two-point correlation density $`\stackrel{~}{\rho }^{[2]}(d)`$ is identical to the original two-point correlation density $`\rho ^{[2]}(d)`$. The difference comes for $`d=0`$, where $$\frac{\stackrel{~}{\rho }^{[2]}(d=0)}{\stackrel{~}{\rho }^{[2]}(d=1)}=\frac{\langle q^2\rangle }{2-\langle q^2\rangle }$$ (20) contrary to $`\rho ^{[2]}(d=0)/\rho ^{[2]}(d=1)=\langle q^2\rangle /\langle q_Lq_R\rangle `$. If we were only to substitute the modified two-point correlation density into (17), then this difference ensures perfect moment scaling with the true multifractal scaling exponents, as we arrive at the first term of the expression (8). Substitution of only the second term appearing in (18), which is proportional to $`\delta _{d,0}`$, produces the second term in (8), which is proportional to $`(\langle q^2\rangle /2)^J2^j`$ and reflects a trivial scaling with exponent $`\tau (2)|_{\mathrm{trivial}}=2-1`$. The appearance of the second term in (8) and (18), respectively, is a consequence of the missing anticorrelation between the weights $`q_L`$ and $`q_R`$ in the splitting function; it vanishes only for a conservative splitting function $`p(q_L,q_R)=p(q_L)\delta (q_L+q_R-2)`$ because then $`\langle q_Lq_R\rangle =2-\langle q^2\rangle `$. It is worthwhile to elaborate in more detail on the occurrence of a multifractal phase transition from the viewpoint of Eq. (18). For demonstration we pick again a factorised splitting function of log-normal type (15), where $`\langle q^2\rangle =\mathrm{exp}(\sigma ^2)`$ and $`\langle q_Lq_R\rangle =1`$. As $`\sigma `$ increases monotonically from $`0`$ to $`\sigma _{\mathrm{crit}}=\sqrt{\mathrm{ln}2}`$, the ratio $`\langle q^2\rangle /(2-\langle q^2\rangle )`$ appearing in (20) increases from $`1`$ to $`+\mathrm{\infty }`$; then, as $`\sigma `$ is further increased, it changes sign and increases from $`-\mathrm{\infty }`$ at $`\sigma =\sigma _{\mathrm{crit}}^+`$ monotonically to $`-1`$ as $`\sigma \to \mathrm{\infty }`$. 
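The scaling of the two-point density (16) can also be checked numerically. The sketch below reuses the ensemble `fields` generated above; the pair of bins is chosen such that their binary addresses first differ at level $`J-d`$, i.e. at ultrametric distance $`d`$.

```python
def rho2(fields, d):
    """Monte-Carlo estimate of rho^[2](d) of Eq. (16)."""
    i1 = 0
    i2 = 0 if d == 0 else 2**(d - 1)      # addresses branch at level J - d
    return np.mean([f[i1] * f[i2] for f in fields])

for d in range(5):                        # log rho2 versus d has slope ln<q^2>
    print(d, rho2(fields, d))
```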
Consequently, as the ($`d=0`$) and ($`d=1`$) elements of the modified two-point correlation density contribute as the sum $`\rho ^{[2]}(d=0)+\rho ^{[2]}(d=1)`$ to the second-order moment (17), they enhance each other for $`\sigma <\sigma _{\mathrm{crit}}`$, but more or less cancel each other for $`\sigma >\sigma _{\mathrm{crit}}`$. Hence, for the former case the modified two-point correlation density dominates the moment scaling whereas for the latter case the $`\delta `$-function-like correction in (18) becomes dominant. We conclude: when generalised dimensions are determined via integral moments, one is likely to encounter artificial multifractal phase transitions, which are not a property of the underlying strongly intermittent nonconservative cascade process, but artifacts of small-scale resummation. The fundamental $`n`$-point correlation densities and, should the underlying process only be resolvable at an intermediate scale, so-called density correlators $$𝒞_{\kappa _1\mathrm{}\kappa _n}^{[m_1,\mathrm{},m_n]}=\frac{\left(\overline{\epsilon }_{\kappa _1}^{(J,j)}\right)^{m_1}\mathrm{}\left(\overline{\epsilon }_{\kappa _n}^{(J,j)}\right)^{m_n}}{\left(\overline{\epsilon }_{\kappa _1}^{(J,j)}\right)^{m_1}\mathrm{}\left(\overline{\epsilon }_{\kappa _n}^{(J,j)}\right)^{m_n}},$$ (21) avoid such contributions and are therefore a better choice in estimating generalised dimensions. ###### Acknowledgements. The authors thank Bruno Jouault, Peter Lipa and Hans Eggers for some fruitful discussions. This work was supported in part by the Deutsche Forschungsgemeinschaft.
no-problem/9911/quant-ph9911086.html
ar5iv
text
# Deterministic Quantum State Transformations ## Acknowledgements The author gratefully acknowledges the EPSRC for the award of a research fellowship. Thanks also go to Christof Zalka for his useful comments.
no-problem/9911/astro-ph9911468.html
ar5iv
text
# Gamma-ray bursts and the history of star formation ## 1 Introduction Gamma-ray bursts (GRBs) are detectable out to the edges of the observable Universe, and so provide information about the processes occurring within their progenitors at all cosmic epochs (Piran 1999a,b). If GRBs arise either from binary mergers of massive stellar remnants (neutron stars and black holes; Paczynski 1986) or from failed supernovae collapsing to form black holes (Woosley 1993), or from the collapse of massive stellar cores (hypernovae; Paczynski 1998), then they will be associated with the formation of massive stars. See Hogg & Fruchter (1999) and Holland & Hjorth (1999) for some observational evidence that this is the case. Because high-mass stars have very short lifetimes, the rate of GRBs should trace the formation rate of massive stars in the Universe. Hence, if the distribution of the redshifts of GRBs is known, this should allow the evolution of the high-mass star-formation rate to be derived (Totani et al. 1997; Wijers et al. 1998; Hogg & Fruchter 1999; Mao & Mo 1999; Krumholz, Thorsett & Harrison 1999). Observations of faint galaxies in the optical and near-infrared wavebands have been used to estimate the history of star formation activity (Lilly et al. 1996; Madau et al. 1996; Steidel et al. 1999); however, absorption by interstellar dust in star-forming galaxies could significantly modify the results (Blain et al. 1999a,c). It is difficult but not impossible to correct these optically derived results to take account of this effect. By making observations in the near-infrared waveband, where the optical depth of the dust is less, some progress has been made (Pettini et al. 1998; Steidel et al. 1999). However, there are considerable uncertainties in the size of the corrections that should be applied. It is now possible to detect the energy that has been absorbed and re-emitted by dust in high-redshift galaxies directly in the form of thermal far-infrared radiation, which is redshifted into the submillimetre waveband. The sensitive SCUBA camera at the James Clerk Maxwell Telescope has revealed this emission directly (Smail, Ivison & Blain 1997; Barger et al. 1998; Hughes et al. 1998; Blain et al. 1999b, 2000; Barger, Cowie & Sanders 1999; Eales et al. 1999). It is possible to derive a history of high-mass star formation from the SCUBA observations (Blain et al. 1999a,c); at least as much energy is inferred to have been released from dust-enshrouded star formation activity as in the form of unobscured starlight. An independent test of the relative amount of obscured and unobscured star formation activity would be extremely valuable. The advantage of using GRBs for this purpose is that gamma-rays are not absorbed by interstellar dust, either within the host galaxy of the GRB or in the intergalactic medium along the line of sight to the host galaxy, and so dust extinction will not modify our view of the distant Universe observed using gamma-rays. However, there is a problem, as in order to determine a redshift for a GRB the spectrum of the associated burst of optical transient radiation must be detected. If the burst is heavily enshrouded in dust, then it would be difficult to detect such a transient and to obtain its spectrum. In Section 2 we briefly review the differences between the histories of star formation derived from optical/near-infrared and far-infrared/submillimetre observations. In Section 3 we discuss and predict the associated redshift distribution of GRBs. 
In Section 4 we discuss the consequences for determining the history of star formation using observations of GRBs. We assume an Einstein–de Sitter world model with Hubble’s constant $`H_0=50`$ km s$`^{-1}`$ Mpc$`^{-1}`$. ## 2 The history of star formation In Fig. 1 we compare five currently plausible star formation histories (Blain et al. 1999a,c), two of which are based on observations made in the optical and near-infrared wavebands and three of which are based on observations in the far-infrared and submillimetre wavebands. A wide variety of observational data that has been gathered in the optical and near-infrared wavebands is also plotted. The data are described in more detail in the caption of fig. 1 of Blain et al. (1999a) and by Steidel et al. (1999). In the first optically derived model (thin dashed line) it is assumed that no dust absorption takes place within the galaxies detected in deep optical images, and that there is no population of strongly obscured objects missing from these samples. This model closely follows the form of the history of star formation derived by Madau et al. (1996) from an analysis of the Hubble Deep Field at $`z>2`$, and from observations of the Canada–France Redshift Survey fields at $`z<1`$ by Lilly et al. (1996). In the second optically derived model (thick dashed line), dust extinction is assumed to be present. The model fits the data that has been corrected empirically to take account of the effects of dust, using radio and ISO satellite observations at $`z<1`$ (Flores et al. 1999) and using estimates of extinction in high-redshift galaxies derived from near-infrared spectroscopy (Pettini et al. 1998; Steidel et al. 1999). The three different far-infrared/submillimetre models of the history of star formation fit all of the available data describing the background radiation intensity and the counts of dusty galaxies in these wavebands. The ‘Gaussian model’ (thin solid line) was derived by Blain et al. (1999c), and the ‘Modified Gaussian model’ (medium solid line) was changed slightly in order to fit the median redshift of plausible counterparts to submillimetre-selected galaxies (Smail et al. 1998) determined by Barger et al. (1999b). In the ‘Hierarchical model’ (thick solid line) the population of submillimetre-luminous galaxies is described in terms of short-lived luminous bursts induced by galaxy mergers (Blain et al. 1999a). This model provides a reasonable fit to the Barger et al. (1999b) redshift distribution if the dust temperature is assumed to be 35 K. ## 3 The redshift distribution of GRBs A histogram showing the eight spectroscopic redshifts of optical transients associated with GRBs (Greiner 1999) is plotted in Fig. 2 (solid histogram; Metzger et al. 1997; Djorgovski et al. 1998a,b,c, 1999a,b; Kulkarni et al. 1998; Galama et al. 1999; Vreeswijk et al. 1999). There are indications of the redshifts for three more GRBs, which are included in the derivation of the dotted histogram shown in Fig. 2, while for about another twenty GRBs no optical transient has been detected, in some cases despite sensitive searches. These numbers were compiled on 1999 November 16. In Fig. 2 we also present the expected redshift distributions of GRBs for each of the five star formation histories shown in Fig. 1. These redshift distributions are derived by integrating the function that describes the evolution of the global star-formation rate along a radial section of the Universe (Totani 1997; Wijers et al. 1998) and have been normalised to unity. 
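Schematically, that integration can be sketched as follows for the Einstein–de Sitter model with $`H_0=50`$ km s$`^{-1}`$ Mpc$`^{-1}`$ assumed above: the GRB rate per unit redshift is taken proportional to the comoving star-formation rate density times the comoving volume element, time-dilated by $`(1+z)`$. The parametrised star-formation history below is a stand-in for illustration only, not one of the five models of Fig. 1.

```python
import numpy as np

C_OVER_H0 = 2.998e5 / 50.0                # Hubble length in Mpc for H0 = 50

def grb_dn_dz(z, sfr):
    """Relative GRB rate per unit redshift, Einstein-de Sitter geometry."""
    d_c = 2.0 * C_OVER_H0 * (1.0 - 1.0 / np.sqrt(1.0 + z))   # comoving distance
    dd_c_dz = C_OVER_H0 * (1.0 + z) ** -1.5                  # d(d_c)/dz
    return sfr(z) * d_c**2 * dd_c_dz / (1.0 + z)             # (1+z): time dilation

sfr = lambda z: (1.0 + z) ** 3.5 * np.exp(-z / 2.0)          # illustrative SFR(z)

z = np.linspace(0.01, 5.0, 500)
p = grb_dn_dz(z, sfr)
p /= np.trapz(p, z)                       # normalised to unity, as in Fig. 2
```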
In this calculation we assume that a typical GRB would be detectable at any redshift. If, in fact, there is a redshift-dependent selection function for GRBs, then the observed redshift distribution of bursts would be expected to be systematically lower as compared with the curves shown in Fig. 2. At present, there are too few redshifts with which to estimate the possible size of this effect; however, the detection of two GRBs at $`z>3`$ tends to argue against the strong anti-selection of high-redshift bursts. ## 4 Discussion It is clear from the histograms shown in Fig. 2 that too few redshifts are currently known to allow us to discriminate between the different models of the star formation history. Nor is the efficiency of generating GRBs from massive star formation activity known in sufficient detail to allow us to discriminate between the different models on the grounds of the absolute number of bursts observed. As an additional caveat, it seems likely that about 20 per cent of the submillimetre-selected galaxies are powered by gravitational accretion onto active galactic nuclei (AGNs; Almaini, Lawrence & Boyle 1999), and as such should not be associated with GRBs derived from exploding high-mass stars, unless a significant amount of high-mass star formation activity is taking place coevally with AGN fueling. If much of the high-mass star formation activity in the Universe does indeed take place in dust-enshrouded galaxies, then the optical transients of GRBs that occur in these galaxies would be less likely to be detected than those in dust-free galaxies, because while the GRB gamma-ray signal can escape, the associated optical transient emission will be obscured. Therefore, the inferred GRB redshift distribution at present might in fact be biased to lower redshifts, given the large number of GRBs that do not have detected optical transient counterparts. This selection effect for the detection of optical transients will depend on the geometry and environment of the host galaxy, and so is likely to be difficult to investigate reliably until a much larger sample of high-quality follow-up observations of GRBs has been assembled. It is interesting to note that the predicted redshift distribution of GRBs that was derived from unobscured optical observations of the star formation history, as shown by the thin dashed curve in Fig. 2, appears to provide the best agreement with the observed redshift distribution of the optical transients that has been determined so far. This suggests that there could be a common extinction bias against the detection of both dust-enshrouded star formation activity in optical galaxy surveys and of the optical transients of GRBs. Since systematic, rapid, deep and reliable searches for optical transients began in 1998 March (Akerlof et al. 1999), nine gamma-ray bursts have been detected with optical transients, four without and a further thirteen unreported, on 1999 November 16 (Greiner 1999). If those GRBs without optical transients are in dusty regions, then this lends support to the idea that comparable amounts of high-mass star formation occurs in heavily and lightly dust-enshrouded regions. One GRB (GRB980329) has so far been detected using SCUBA at a submillimetre wavelength of 850 $`\mu `$m (Smith et al. 1999), although it is unclear whether there is any component of thermal dust emission involved, or if the emission is entirely attributable to synchrotron radiation from the shocked interstellar medium. See Taylor et al. 
(1998, 1999) for a discussion of the properties of GRBs in the radio waveband, where they are not subject to obscuration by dust. The number of spectroscopic redshifts for the optical transients of GRBs is growing steadily, with a new determination being reported every two to three months (Greiner 1999). The number of redshifts for GRBs already exceeds the number of redshifts that have been obtained for galaxies detected in submillimetre-wave surveys that have reliable identifications. Once a sample of about 100 GRBs have redshifts then it should be possible to discriminate between the different models. As the predicted redshift distributions shown in Fig. 2 are significantly different, the GRB redshift distribution may provide a very significant constraint to the history of star formation activity at all epochs. ### 4.1 Dust-enshrouded infrared transients It will be important to pay close attention to the GRBs without optical transients. If the optical and ultraviolet radiation released by the GRB is obscured by dust, then since these heated dust grains have a low heat capacity, the reprocessed thermal emission could be detected directly as a transient signal at submillimetre to near-infrared wavelengths. Intense shocking and heating of the gas in the interstellar medium of the host galaxy could also lead to detectable emission of far-infrared and submillimetre-wave fine-structure atomic line radiation, and molecular rotational line radiation (see also Perna, Raymond & Loeb 2000). Because the host galaxy of the GRB is likely to be optically thin at far-infrared and submillimetre wavelengths, the detection of this radiation could provide a redshift for a GRB, even in the absence of an optical transient source if the opacity at optical wavelengths is very high. In the future, the large collecting area, excellent sensitivity and subarcsecond angular resolution of the Atacama Large Millimeter Array (ALMA; Wootten 2000) will make it the ideal instrument to conduct observations to search for any transient line and continuum radiation from GRBs in the submillimetre waveband. Recently, Waxman & Draine (2000) discussed the effects of a GRB on the sublimation of dust in the surrounding interstellar medium. We are currently investigating the observability of infrared transients from dust-enshrouded GRBs (Venemans & Blain, in preparation). ## 5 Conclusions A large statistical sample of redshifts for gamma-ray bursts (GRBs) will allow the star formation history of the Universe to be probed in more detail than is currently possible. Once about a hundred examples are known, the fraction of dust-enshrouded star formation activity that takes place as a function of redshift can be addressed by an analysis of both the redshift distribution of the optical transients and the fraction of GRBs for which an optical transient is detected. The results will also allow the fraction of AGN in submillimetre-selected samples and the form of their evolution to be estimated in a new way. The commissioning of the ALMA interferometer array will hopefully allow the absorbed optical and ultraviolet light from dust-enshrouded GRBs to be detected in the form of a far-infrared transient signal, potentially revealing the redshift of the GRB without requiring observations in the optical waveband. ## Acknowledgements AWB, Raymond & Beverly Sackler Foundation Research Fellow, gratefully acknowledges support from the Foundation as part of the Deep Sky Initiative programme at the IoA. 
We thank Brad Hansen, Kate Quirk and an anonymous referee for helpful comments on the manuscript, and Andrew Fruchter, George Djorgovski and Shri Kulkarni for useful conversations.
no-problem/9911/astro-ph9911407.html
ar5iv
text
# The First Five Minutes of a Core Collapse Supernova: Multidimensional Hydrodynamic Models ## The First Five Minutes of a Core Collapse Supernova: Multidimensional Hydrodynamic Models K. Kifonidis$`^1`$, T. Plewa$`^{2,1}`$, H.-Th. Janka$`^1`$, E. Müller$`^1`$ $`^1`$ Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Strasse 1, D-85740 Garching, Germany $`^2`$ Nicolaus Copernicus Astronomical Center, Bartycka 18, 00716 Warsaw, Poland ## 1 Introduction Numerous observations of SN 1987 A suggest that extensive mixing has taken place in the exploding envelope of the progenitor star Sk -69 202. The early detection of X and $`\gamma `$-rays, the broad profiles of infrared Fe and Co lines, as well as the shape of the light curve cannot be explained without assuming that clumps of newly synthesized $`{}_{}{}^{56}\mathrm{Ni}`$, from layers close to the collapsed core, have penetrated into the hydrogen envelope (see the reviews of Arnett et al. 1989 and Müller 1998, and the references therein). That such mixing is probably generic in core collapse supernovae is indicated by spectroscopic studies of SN 1987 F, SN 1988 A, SN 1993 J (Spyromilio, 1994, and references therein) and SN 1995 V (Fassia et al., 1998). Furthermore, it might also explain the detection of fast moving clumps of metal-enriched material in the Vela (Aschenbach et al., 1995), Cas A (Anderson et al., 1994) and Puppis A (Winkler & Kirshner, 1985) supernova remnants as well as the isotopic composition of specific SiC grains with possible supernova origin found in primitive meteorites (Nittler et al. 1996; Amari this volume). But even more important from the point of view of supernova modellers is the fact that a detailed understanding of the problem of nucleosynthesis and mixing can give us invaluable information about the explosion mechanism itself. Thus, the observations have instigated theoretical work on multidimensional supernova models which focused either on the role of convection occurring within the first second of a delayed, neutrino-driven explosion (Mezzacappa et al., 1998; Janka & Müller, 1996; Burrows et al., 1995; Herant et al., 1994; Miller et al., 1993), or on the growth of Rayleigh-Taylor instabilities during the late evolutionary stages (Nagataki et al., 1998; Herant & Benz, 1992; Müller et al., 1991; Yamada & Sato, 1991; Hachisu et al., 1990). However, multidimensional simulations which follow the evolution of the stalled supernova shock from its revival due to neutrino heating until its emergence from the stellar surface have not yet been performed. Due to the presence of vastly different spatial and temporal scales and the range of physical processes involved during the early stages following core collapse, all studies of mixing in core collapse supernovae have hitherto neglected the influence of neutrino-driven convection in seeding the Rayleigh-Taylor instabilities. Instead, a shock wave was created artificially by depositing the explosion energy near the center of a pre-collapse progenitor model and following the propagation of the shock in one spatial dimension until it had reached one of the unstable composition interfaces. This was either chosen to be the He/H interface in case of Type II supernova models (e.g. Herant & Benz 1992; Müller et al. 1991) or the C+O/He interface in case of Type Ib models (Hachisu et al., 1994). Only then were the 1D models mapped to a 2D grid and the rest of the evolution followed with a multidimensional code. 
A somewhat different, two-dimensional approach was chosen in the recent calculations of Nagataki et al. (1997) who initiated the explosion using a parameterized, aspherical shock wave and computed the resulting nucleosynthesis using a marker particle approximation. These models have been subsequently used by Nagataki et al. (1998) for a study of Rayleigh-Taylor instabilities at the He/H interface. Still, however, these calculations do not address the complications introduced by the explosion mechanism and thus suffer from a number of assumptions. Furthermore, their numerical resolution appears to be hardly sufficient to resolve instabilities which occur within the first minutes of the evolution. In the present contribution a first step towards a more consistent multidimensional picture of core collapse supernovae is attempted by trying to answer the questions * What happens in the first minutes and hours of a core collapse supernova if neutrino heating indeed succeeds in reviving the supernova shock? * Can neutrino-driven convection in conjunction with the later Rayleigh-Taylor instabilities lead to the high iron velocities observed in SN 1987 A? For this purpose we have carried out high-resolution 2D supernova simulations which for the first time cover the neutrino-driven initiation of the explosion, the accompanying convection and nucleosynthesis as well as the Rayleigh-Taylor mixing. In the following, we present preliminary results from these calculations, focusing on the first $`300`$ seconds of evolution. A summary of our work can also be found in Kifonidis et al. (1999). ## 2 Numerical Method and Initial Data We split our simulation into two stages. The early evolution ($`t\lesssim 1`$ s) which encompasses shock revival by neutrino heating, neutrino-driven convection and explosive nucleosynthesis is followed with a version of the HERAKLES code (T. Plewa & E. Müller, in preparation). This hydrodynamics code solves the multidimensional hydrodynamic equations using the direct Eulerian version of the Piecewise Parabolic Method (Colella & Woodward, 1984) augmented by the Consistent Multifluid Advection (CMA) scheme of Plewa & Müller (1999) in order to guarantee exact conservation of nuclear species. We have added the input physics (neutrino source terms, equation of state, boundary conditions, gravitational solver) described in Janka & Müller (1996) (henceforth JM96) with the following modifications. General relativistic corrections are made to the gravitational potential following Van Riper (1979). A 14-isotope network is incorporated in order to compute the explosive nucleosynthesis. It includes the 13 $`\alpha `$-nuclei from $`{}_{}{}^{4}\mathrm{He}`$ to $`{}_{}{}^{56}\mathrm{Ni}`$ and a representative tracer nucleus which is used to monitor the distribution of the neutrino-heated, neutron-rich material and to replace the $`{}_{}{}^{56}\mathrm{Ni}`$ production when $`Y_\mathrm{e}`$ drops below $`0.49`$ (cf. Thielemann et al., 1996). We start our calculations 20 ms after core bounce from a model of Bruenn (1993) who has followed core-collapse and bounce in the 15 $`\mathrm{M}_{\odot }`$ progenitor model of Woosley et al. (1988). The model is mapped to a 2D grid consisting of 400 radial zones ($`3.17\times 10^6\mathrm{cm}\le r\le 1.7\times 10^9`$ cm), and 180 angular zones ($`0\le \theta \le \pi `$; cf. JM96 for details). A random initial seed perturbation is added to the velocity field with a modulus of $`10^{-3}`$ of the (radial) velocity of Bruenn’s post-collapse model. The calculations are carried up to 885 ms. 
At this time the explosion energy has saturated and essentially all nuclear reactions have frozen out. We will henceforth refer to this calculation as our “explosion model”. The subsequent shock propagation through the stellar envelope and the growth of Rayleigh-Taylor instabilities is followed with the AMRA Adaptive Mesh Refinement (AMR) code (T. Plewa & E. Müller, in preparation). This code uses a different variant of HERAKLES as its hydrodynamics solver which does not include the neutrino physics. Gravity is neglected in the AMR calculations since, as is the case for the neutrino source terms, it does not influence the propagation of the shock during late evolutionary stages. However, gravity is important for determining the amount of fallback, a problem which is outside the scope of the present study. The equation of state takes into account contributions from photons, non-degenerate electrons, $`\mathrm{e}^+\mathrm{e}^{}`$-pairs, $`{}_{}{}^{1}\mathrm{H}`$, and the nuclei included in the reaction network. The AMR calculations are started with the inner and outer boundaries located at $`r_{\mathrm{in}}=10^8`$ cm (i.e. inside the hot bubble containing the neutrino-driven wind) and $`r_{\mathrm{out}}=2\times 10^{10}`$ cm, respectively. No further seed perturbations are added. We use up to four levels of mesh refinement and refinement factors of four in each grid direction, yielding a maximum resolution equivalent to that of a uniform grid of $`3072\times 768`$ zones. In order to keep the radial resolution as high as possible during any given evolutionary time, we do not include the entire star but allow the code to expand the radial extent of the base grid by a factor of 2 to 4 whenever the supernova shock is approaching the outer grid boundary. The latter is moved from its initial value out to $`r_{\mathrm{out}}=1.1\times 10^{12}`$ cm at $`t=300`$ s. Reflecting boundary conditions are used at $`\theta =0`$ and $`\theta =\pi `$ and free outflow is allowed across the inner and outer radial boundaries. ## 3 Nucleosynthesis and Neutrino-Driven Convection The general features of our explosion model are comparable to the models of JM96. For the initial neutrino luminosities, which are prescribed at the inner boundary, somewhat below the neutrino sphere, and decay with time as described in JM96, we have adopted a value of $`L_{\nu _e}^0=2.8125\times 10^{52}\mathrm{erg}/\mathrm{s}`$ for the electron neutrinos and $`L_{\nu _x}^0=2.375\times 10^{52}\mathrm{erg}/\mathrm{s}`$ for the heavy lepton neutrinos (with $`\nu _x=\nu _\mu ,\overline{\nu }_\mu ,\nu _\tau ,\overline{\nu }_\tau `$). The parameters describing lepton and energy loss of the inner iron core were set to $`\mathrm{\Delta }Y_l=0.0875,\mathrm{\Delta }\epsilon =0.0625`$ (cf. JM96). The neutrino spectra are the same as in JM96. For the chosen neutrino luminosities the shock starts to move out of the iron core almost immediately. Convection between shock and gain radius sets in $`30`$ ms after the start of the simulation in form of rising blobs of neutrino-heated, deleptonized material (with $`Y_\mathrm{e}0.5`$) separated by narrow downflows with $`Y_\mathrm{e}0.49`$ (Fig. 1). The shock reaches the Fe/Si interface at $`r=1.4\times 10^8`$ cm after $`100`$ ms. Shortly thereafter, at $`t120`$ ms temperatures right behind the shock have dropped below $`7\times 10^9`$ K, and $`{}_{}{}^{56}\mathrm{Ni}`$ starts to form in a narrow shell (Fig. 2). 
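As a quick consistency check on the quoted maximum resolution (a sketch; the 48 × 12 base grid is our inference from the numbers above and is not stated explicitly):

```python
base_r, base_theta = 48, 12                 # inferred base grid (assumption)
levels, factor = 4, 4                       # four levels, refinement factor 4
print(base_r * factor ** (levels - 1),      # 3072 radial zones
      base_theta * factor ** (levels - 1))  # 768 angular zones, as quoted
```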
During the ongoing expansion and cooling, $`{}_{}{}^{56}\mathrm{Ni}`$ is also synthesized in the convective region. However, its synthesis proceeds exclusively in the narrow downflows which separate the rising bubbles and have a sufficiently high electron fraction $`Y_\mathrm{e}`$. This leads to a highly inhomogeneous nickel distribution (middle panel of Fig. 2) shortly before complete silicon burning freezes out at $`t250`$ ms. Moreover, convective motions are still present even after $`{}_{}{}^{56}\mathrm{Ni}`$ production ceases, and distort the nickel containing shell (upper panel of Fig. 2). Only when convection stops around $`t400`$ ms, the flow pattern becomes frozen in and the entire post-shock region expands nearly uniformly. Due to the asphericity of the shock caused by the rising bubbles, “bent” shells containing the products of incomplete silicon burning as well as oxygen burning form outside the nickel-enriched region. The post-shock temperature drops below $`2.8\times 10^9`$ K at $`t=495`$ ms, when the shock is about to cross the Si/O interface. Thus, our model shows only moderate oxygen burning (due to a non-vanishing oxygen abundance in the silicon shell), and negligible neon and carbon burning. This is caused by the specific structure of the progenitor model of Woosley et al. (1988) and may change when different (especially more massive) progenitors are used. In total, $`0.052\mathrm{M}_{}`$ of $`{}_{}{}^{56}\mathrm{Ni}`$ are produced, while $`0.10\mathrm{M}_{}`$ of material in the deleptonized bubbles are synthesized at conditions with $`Y_\mathrm{e}<0.49`$ and end up as neutron-rich nuclei. The explosion energy of our 2D model saturates at $`1.48\times 10^{51}`$ erg at $`t=885`$ ms. This value is still to be corrected for the binding energy of the outer envelope. ## 4 Growth of Rayleigh-Taylor Instabilities During the next seconds the shock detaches from the formerly convective shell that carries the products of explosive nucleosynthesis, looses its asphericity and crosses the C+O/He-interface. Along with the slow-down which the shock experiences after passing this interface due to the varying density gradient, the entire post-shock material is also rapidly decelerated. Twenty seconds after core-bounce, the metal-containing shell has thus been compressed to a thin, dense layer which is bounded inwards by a reverse shock and contains two regions which show crossed density and pressure gradients. The first of these is located at the Ni+Si/O-interface while the second coincides with the C+O/He-interface. Thus, Rayleigh-Taylor instabilities at the Ni+Si/O and C+O/He-interfaces grow rapidly. At $`t=100`$ s (Fig. 3) the instabilities are fully developed and have already interacted with each other. Nickel and silicon are dragged upward into the helium shell in rising mushrooms on angular scales from $`1^{}`$ to about $`5^{}`$, whereas helium is mixed inward in bubbles. Oxygen and carbon, located in intermediate layers of the progenitor, are swept outward as well as inward in rising and sinking flows. At $`t=300`$ s the densities between the dense mushrooms and the ambient medium differ by factors up to 5 while the fastest mushrooms have already propagated out to more than half the radius of the He core (Fig. 4). We observe that the nickel is born with very high velocities, of the order of 15,000 km/s at $`t=300`$ ms. These velocities drop significantly during the subsequent evolution, especially after the shock passes the C+O/He interface. 
At a time of $`t=50`$ s most of the $`{}_{}{}^{56}\mathrm{Ni}`$ is expanding with 3200 – 4500 km/s and maximum velocities $`v_{\mathrm{Ni}}^{\mathrm{max}}`$ are around 5800 km/s. During the following 50 seconds, $`v_{\mathrm{Ni}}^{\mathrm{max}}`$ drops to $`5000`$ km/s. However, after $`t=100`$ s, the clumps start to move essentially ballistically through the helium core and only a slight decrease of $`v_{\mathrm{Ni}}^{\mathrm{max}}`$ to $`4700`$ km/s occurs until $`300`$ s, when the bulk of $`{}_{}{}^{56}\mathrm{Ni}`$ has velocities below $`3000`$ km/s. We have recently accomplished to follow the subsequent evolution of our model up to 16 000 s after core-bounce. The unsteady propagation speed of the supernova shock which has formerly induced the instability at the Ni+Si/O and C+O/He-interfaces also leads to the formation of a dense (Rayleigh-Taylor unstable) shell at the He/H interface. As before, the inner boundary of this shell steepens into a reverse shock (Fig. 4), while in the process of this the entire shell is rapidly slowed down. Our high-resolution simulations reveal a potentially severe problem for the mixing of heavy elements into the hydrogen envelope of Type II supernovae like SN 1987A. We find that the fast nickel containing clumps, after having penetrated through this reverse shock, dissipate a large fraction of their kinetic energy in bow shocks created by their supersonic motion through the shell medium. This leads to their deceleration to $`2000`$ km/s in our calculations. Contrary to all previous studies, which tried to accelerate the nickel by the instabilities, our computations show, that the main problem for obtaining high nickel velocities is how to avoid decelerating the clumps once they reach the He/H interface. The growth time scale of the instability at this interface is too large to allow for a fast shredding of the dense shell and the formation of “holes” through which the clumps could penetrate more easily. The high iron velocities in SN 1987 A might therefore hint towards a smoother density profile exterior to the He-core, which would suppress dense shell formation, or towards a global asymmetry of the explosion. During the computations we became aware of oscillations with angle in parts of the postshock flow (Figs. 2 and 4). These are caused by the “odd-even decoupling” phenomenon associated with grid-aligned shocks (Quirk 1994; see also LeVeque 1998). As a consequence, the maximum nickel velocities, $`v_{\mathrm{Ni}}^{\mathrm{max}}`$, obtained in our AMR calculations have probably been overestimated by $`25\%`$ because the growth of some of the mushrooms was influenced by the perturbations induced by this numerical defect. The main results of our study, however, are not affected. ## 5 Conclusions We have studied the evolution of a core collapse supernova in a 15 $`\mathrm{M}_{}`$ blue supergiant progenitor focusing on the first 300 s after core bounce. As a result of the interplay between neutrino-matter interactions, hydrodynamic instabilities and nucleosynthesis in the framework of the neutrino-driven explosion mechanism, the products of explosive silicon and oxygen burning are mixed throughout the inner half of the helium core of the progenitor star. Ballistically moving, metal-enriched clumps with velocities up to more than $`4000`$ km/s are observed. Rayleigh-Taylor instabilities at the C+O/He and Ni+Si/O-interfaces transport helium deep inward and sweep nickel, silicon, oxygen and carbon outward in rising mushrooms. 
Our simulations suggest that high-velocity metal-rich clumps are ejected during the explosion of Type Ib (and Ic) supernovae. In case of Type II supernovae, however, the dense shell left behind by the shock passing the boundary between helium core and hydrogen envelope, causes a substantial deceleration of the clumps. Thus, we cannot confirm the results of Herant & Benz (1992) that “premixing” of the $`{}_{}{}^{56}\mathrm{Ni}`$ within the first minutes of the explosion, and its later interaction with the instability at the He/H-interface, can explain the high iron velocities observed in SN 1987 A. Moreover, our calculations strongly indicate, that all computations of Rayleigh-Taylor mixing in Type II supernovae carried out so far (including the case of SN 1987 A) have been started from overly simplified initial conditions since they have neglected clump formation during the first minutes of the explosion. In future work, it will be most interesting to study how the discussed effects depend on different progenitor models and to explore the implications of the strong outward mixing of $`{}_{}{}^{56}\mathrm{Ni}`$ for the light curves and spectra of Type Ib supernovae. ## Acknowledgements We are very grateful to Stanford Woosley for providing us with the progenitor model used in this study as well as for stimulating discussions about Type Ib supernovae. We thank S. Bruenn for making available the data of his post-bounce model and P. Cieciel ag and R. Walder for their help regarding visualization. The work of TP was partly supported by grant KBN 2.P03D.004.13 from the Polish Committee for Scientific Research. The simulations were performed on the NEC SX-4B and CRAY J916/16512 of the Rechenzentrum Garching.
no-problem/9911/math9911138.html
ar5iv
text
# 1 Introduction ## 1 Introduction The non-standard quantum deformation of $`sl(2,𝐑)\simeq so(2,1)`$ , here denoted $`U_z(sl(2,𝐑))`$, has been the starting point in the construction of non-standard quantum algebras in higher dimensions. In particular, by taking two copies of $`U_z(sl(2,𝐑))`$ and applying the same procedure as in the standard (Drinfel’d–Jimbo) case , a quantum $`so(2,2)`$ algebra has been obtained in , while the corresponding deformation for $`so(3,2)`$ has been found in . These quantum algebras have been realized as deformations of conformal algebras for the Minkowskian spacetime. Furthermore, by following either a contraction approach or a deformation embedding method , non-standard quantum deformations for other Lie algebras have been deduced; particularly remarkable among them is the appearance of a non-standard quantum Poincaré algebra, which can be considered as a conformal quantum algebra for the Carroll spacetime, or alternatively as a null-plane quantum Poincaré algebra . All these results are summarized in the following diagram where the vertical arrows indicate the corresponding contractions leading to Poincaré algebras: $$\begin{array}{ccccc}U_z(sl(2,𝐑))& & U_z(sl(2,𝐑))\otimes U_z(sl(2,𝐑))\to U_z(so(2,2))& & U_z(so(3,2))\\ \epsilon \to 0& & \epsilon \to 0& & \epsilon \to 0\\ U_z(iso(1,1))& & \text{Null-plane Poincaré algebra}\equiv U_z(iso(2,1))& & U_z(iso(3,1))\end{array}$$ The aim of this contribution is to provide, starting again from $`U_z(sl(2,𝐑))`$, a new way of obtaining non-standard quantum algebras. The first step is to construct a new non-standard quantum $`so(2,2)`$ algebra which could be the cornerstone of further constructions in higher dimensions. The essential idea is to require that $`U_z(sl(2,𝐑))`$ remains as a Hopf subalgebra so that this approach can be seen as a kind of complete deformation embedding method. Next, a contraction limit gives rise to a new $`(2+1)`$ quantum Poincaré algebra which contains a $`(1+1)`$ quantum Poincaré Hopf subalgebra: $$U_z(sl(2,𝐑))\subset U_z(so(2,2))\stackrel{\epsilon \to 0}{\longrightarrow }U_z(iso(1,1))\subset U_z(iso(2,1))$$ It is interesting to stress that such a new quantum $`so(2,2)`$ algebra is the symmetry algebra of a time discretization of the wave equation. Thus we recall in the next section the basic facts of the Lie algebra $`so(2,2)`$ in a conformal basis as well as its relationship with the $`(1+1)`$ wave equation. The Hopf algebra structure deforming $`so(2,2)`$, its role as a discrete symmetry algebra and its contraction to Poincaré are presented in section 3. ## 2 Lie algebra so(2,2) Let us consider the real Lie algebra $`so(2,2)`$ generated by $`H`$ (time translations), $`P`$ (space translations), $`K`$ (boosts), $`D`$ (dilations) and $`C_1`$, $`C_2`$ (special conformal transformations). In this basis $`so(2,2)`$ is the Lie algebra of the group of conformal transformations of the $`(1+1)`$ Minkowskian spacetime. The Lie brackets of $`so(2,2)`$ read $$\begin{array}{ccc}[K,H]=P\hfill & [K,P]=H\hfill & [H,P]=0\hfill \\ [D,H]=H\hfill & [D,C_1]=C_1\hfill & [H,C_1]=2D\hfill \\ [D,P]=P\hfill & [D,C_2]=C_2\hfill & [P,C_2]=2D\hfill \\ [K,C_1]=C_2\hfill & [K,C_2]=C_1\hfill & [C_1,C_2]=0\hfill \\ [H,C_2]=2K\hfill & [P,C_1]=2K\hfill & [K,D]=0.\hfill \end{array}$$ (1) Three subalgebras of $`so(2,2)`$ are relevant for our purposes: $`\bullet `$ $`\{H,P,K\}`$ which span the $`(1+1)`$ Poincaré algebra (first row in (1)). $`\bullet `$ $`\{D,H,C_1\}`$ which give rise to $`so(2,1)\simeq sl(2,𝐑)`$ (second row in (1)). $`\bullet `$ $`\{D,P,C_2\}`$ which also generate $`so(2,1)\simeq sl(2,𝐑)`$ (third row in (1)). 
A vector field representation of $`so(2,2)`$ in terms of the space and time coordinates $`(x,t)`$ is given by $$\begin{array}{c}H=_tP=_xK=t_xx_tD=x_xt_t\hfill \\ C_1=(x^2+t^2)_t+2xt_xC_2=(x^2+t^2)_x2xt_t.\hfill \end{array}$$ (2) The Casimir of the above Poincaré subalgebra is $`E=P^2H^2`$. The action of $`E`$ on a function $`\mathrm{\Phi }(x,t)`$ through the representation (2) (choosing for $`E`$ the value zero) leads to the $`(1+1)`$ wave equation: $$E\mathrm{\Phi }(x,t)=0\left(\frac{^2}{x^2}\frac{^2}{t^2}\right)\mathrm{\Phi }(x,t)=0.$$ (3) We shall say that an operator $`𝒪`$ is a symmetry of the equation $`E\mathrm{\Phi }(x,t)=0`$ if $`𝒪`$ transforms solutions into solutions, that is, $`E𝒪=\mathrm{\Lambda }E`$ where $`\mathrm{\Lambda }`$ is another operator. The Lie algebra $`so(2,2)`$ is the symmetry algebra of the wave equation: $`E`$ commutes with $`\{H,P,K\}`$ and in the realization (2) the remaining generators verify $$[E,D]=2E[E,C_1]=4tE[E,C_2]=4xE.$$ (4) ## 3 Non-standard quantum so(2,2) algebra We choose the $`sl(2,𝐑)`$ subalgebra of $`so(2,2)`$ spanned by $`\{D,H,C_1\}`$. Then we write in terms of these generators the non-standard quantum deformation of $`sl(2,𝐑)`$ in the form introduced in and denote $`\tau `$ the deformation parameter. This means that the classical $`r`$-matrix we are considering for $`so(2,2)`$ is $`r=\tau DH`$ (which is a solution of the classical Yang–Baxter equation). Now we look for a quantum $`so(2,2)`$ algebra that keeps the quantum $`sl(2,𝐑)`$ algebra as a Hopf subalgebra: $`U_\tau (sl(2,𝐑))U_\tau (so(2,2))`$. The resulting coproduct and commutation rules for $`U_\tau (so(2,2))`$ are given by: $$\begin{array}{c}\mathrm{\Delta }(H)=1H+H1\mathrm{\Delta }(P)=1P+Pe^{\tau H}\hfill \\ \mathrm{\Delta }(D)=1D+De^{\tau H}\mathrm{\Delta }(C_1)=1C_1+C_1e^{\tau H}\hfill \\ \mathrm{\Delta }(K)=1K+K1\tau De^{\tau H}P\hfill \\ \mathrm{\Delta }(C_2)=1C_2+C_2e^{\tau H}+2\tau De^{\tau H}K\tau ^2D(D+1)e^{2\tau H}P\hfill \end{array}$$ (5) $$\begin{array}{c}[K,H]=e^{\tau H}P[K,P]=(e^{\tau H}1)/\tau [H,P]=0\hfill \\ [D,H]=(1e^{\tau H})/\tau [D,C_1]=C_1+\tau D^2[H,C_1]=2D\hfill \\ [D,P]=P[D,C_2]=C_2[P,C_2]=2D\hfill \\ [K,C_1]=C_2[K,C_2]=C_1\tau D^2[C_1,C_2]=\tau (DC_2+C_2D)\hfill \\ [H,C_2]=e^{\tau H}K+Ke^{\tau H}[P,C_1]=2K\tau (DP+PD)[K,D]=0.\hfill \end{array}$$ (6) It can be checked that the universal quantum $`R`$-matrix for $`U_\tau (sl(2,𝐑))`$ also holds for $`U_\tau (so(2,2))`$. In our basis this element reads $$=\mathrm{exp}\left\{\tau HD\right\}\mathrm{exp}\left\{\tau DH\right\}.$$ (7) The relationship between $`U_\tau (so(2,2))`$ and a discretization of the wave equation can be established by means of the following differential-difference realization which under the limit $`\tau 0`$ gives the classical realization (2): $`H=_tP=_x`$ $`K=x\left({\displaystyle \frac{e^{\tau _t}1}{\tau }}\right)te^{\tau _t}_xD=x_xt\left({\displaystyle \frac{1e^{\tau _t}}{\tau }}\right)`$ (8) $`C_1=(x^2+t^2e^{\tau _t})\left({\displaystyle \frac{e^{\tau _t}1}{\tau }}\right)+2xt_x+\tau x_x+\tau x^2_x^2`$ (9) $`C_2=(x^2+t^2e^{2\tau _t})_x2xt\left({\displaystyle \frac{1e^{\tau _t}}{\tau }}\right)+\tau te^{2\tau _t}_x.`$ (10) The generators $`\{H,P,K\}`$ close a deformed Poincaré subalgebra (although not a Hopf subalgebra) whose Casimir is now $`E_\tau =P^2\left(\frac{e^{\tau H}1}{\tau }\right)^2`$. 
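The identification of $`\tau `$ with a time lattice constant can be checked symbolically. Reading the (partially garbled) Casimir above as $`E_\tau =P^2-\left(\frac{e^{\tau H}-1}{\tau }\right)^2`$, the deformed piece is a discrete time derivative that reduces to $`H`$, and hence $`E_\tau `$ to the classical $`E=P^2-H^2`$, as $`\tau \to 0`$; a small sympy sketch:

```python
import sympy as sp

tau, H = sp.symbols('tau H')

# discrete derivative appearing in the deformed Casimir E_tau
disc = (sp.exp(tau * H) - 1) / tau
print(sp.series(disc, tau, 0, 3))   # H + H**2*tau/2 + H**3*tau**2/6 + O(tau**3)
```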
If we introduce the realization (10) then we find a time discretization of the wave equation on a uniform lattice with $`x`$ as a continuous variable: $$E_\tau \mathrm{\Phi }(x,t)=0\left\{\frac{^2}{x^2}\left(\frac{e^{\tau _t}1}{\tau }\right)^2\right\}\mathrm{\Phi }(x,t)=0.$$ (11) Therefore the deformation parameter $`\tau `$ appearing within the discrete derivative in (11) can be identified with the time lattice constant. Furthermore the generators (10) are symmetry operators of (11) since they fullfil $`[E_\tau ,X]=0\text{for}X\{H,P,K\}[E_\tau ,D]=2E_\tau `$ (12) $`[E_\tau ,C_1]=4(t+\tau +\tau x_x)E_\tau [E_\tau ,C_2]=4xE_\tau .`$ (13) Hence we conclude that $`U_\tau (so(2,2))`$ is the symmetry algebra of the discrete wave equation (11). In this respect we recall that the symmetries of a discretization of the wave equation in both coordinates $`(x,t)`$ on a uniform lattice were computed in , showing that they are difference operators which preserve the Lie algebra $`so(2,2)`$ as in the continuous case. Therefore some kind of connection between the results of and our quantum $`so(2,2)`$ algebra should exist as it was already established for discrete Shrödinger equations and quantum algebras . To end with, we work out the contraction from $`U_\tau (so(2,2))`$ to a new quantum Poincaré algebra: $`U_\tau (so(2,2))U_\tau (iso(2,1))`$. We apply to the Hopf algebra $`U_\tau (so(2,2))`$ the Inönü–Wigner contraction defined by the map $$H\epsilon HPPK\epsilon KC_1\epsilon C_1C_2C_2DD$$ (14) together with a transformation of the deformation parameter: $`\tau \tau /\epsilon `$. The limit $`\epsilon 0`$ leads to the coproduct and commutators of $`U_\tau (iso(2,1))`$: $$\begin{array}{c}\mathrm{\Delta }(H)=1H+H1\mathrm{\Delta }(P)=1P+Pe^{\tau H}\hfill \\ \mathrm{\Delta }(D)=1D+De^{\tau H}\mathrm{\Delta }(C_1)=1C_1+C_1e^{\tau H}\hfill \\ \mathrm{\Delta }(K)=1K+K1\mathrm{\Delta }(C_2)=1C_2+C_2e^{\tau H}+2\tau De^{\tau H}K\hfill \end{array}$$ (15) $$\begin{array}{ccc}[K,H]=0\hfill & [K,P]=(e^{\tau H}1)/\tau \hfill & [H,P]=0\hfill \\ [D,H]=(1e^{\tau H})/\tau \hfill & [D,C_1]=C_1\hfill & [H,C_1]=0\hfill \\ [D,P]=P\hfill & [D,C_2]=C_2\hfill & [P,C_2]=2D\hfill \\ [K,C_1]=0\hfill & [K,C_2]=C_1\hfill & [C_1,C_2]=0\hfill \\ [H,C_2]=2e^{\tau H}K\hfill & [P,C_1]=2K\hfill & [K,D]=0.\hfill \end{array}$$ (16) The universal quantum $`R`$-matrix for $`U_\tau (iso(2,1))`$ is also (7) so that it is formally preserved under contraction. Note also that the generators $`\{D,H,C_1\}`$ give rise to a $`(1+1)`$ quantum Poincaré subalgebra such that: $`U_\tau (iso(1,1))U_\tau (iso(2,1))`$. Finally we remark that if we would have chosen the $`sl(2,𝐑)`$ subalgebra spanned by $`\{D,P,C_2\}`$ instead of the one generated by $`\{D,H,C_1\}`$, then we would have obtained a quantum $`so(2,2)`$ algebra with $`P`$ as primitive generator (instead of $`H`$). This second choice would lead to a space discretization of the wave equation. Both quantum $`so(2,2)`$ algebras would be algebraically equivalent by the interchanges $`HP`$ and $`C_1C_2`$, however their contraction would lead to inequivalent quantum Poincaré algebras. A complete analysis of all these possibilities will be presented elsewhere. ## Acknowledgment This work was partially supported by Junta de Castilla y León, Spain (Project CO2/399).
no-problem/9911/astro-ph9911283.html
ar5iv
text
# The GRB/SN Connection: An Improved Spectral Flux Distribution for the SN-Like Component to the Afterglow of GRB 970228, the Non-Detection of a SN-Like Component to the Afterglow of GRB 990510, and GRBs as Beacons to Locate SNe at Redshifts z ≈ 4 – 5 ## An Improved Spectral Flux Distribution for the SN-like Component to the Afterglow of GRB 970228 The discovery of what appear to be SNe dominating the light curves and spectral flux distributions (SFDs) of the afterglows of GRB 980326 (Bloom et al. 1999) and GRB 970228 (Reichart 1999; Galama et al. 1999) at late times after these bursts strongly suggests that at least some, and perhaps all, of the long bursts are related to the deaths of massive stars. Here, we build upon the results of Reichart (1999) by modeling the SFD of the host galaxy of GRB 970228, fitting this model to measurements of the host galaxy, and using the fitted model to better subtract out the contribution of the host galaxy to measurements of the afterglow of this burst. In Figure 1, we plot the observed SFD of the host galaxy of GRB 970228, as measured with HST/WFPC2, HST/NICMOS2, and Keck I (Castander & Lamb 1999a; Fruchter et al. 1999). To these measurements and a broadband measurement made with HST/STIS (Castander & Lamb 1999a; Fruchter et al. 1999), which is not plotted, we fit a two-parameter, spectral synthesis model (see Castander & Lamb 1999a for details). The two parameters are the normalization of the SFD, and the age of the galaxy, defined to be the length of time that star formation has been occurring at a constant rate. Taking $`A_V=1.09`$ mag for the Galactic extinction along the line of sight (Castander & Lamb 1999b), we find a fitted age of $`270_{180}^{+460}`$ Myr; different values of $`A_V`$ affect primarily the fitted age, and not the fitted SFD. Furthermore, models in which star formation slows considerably, or ceases, are generally too red to account for the measurements. Finally, we note that the fitted J- and R-band spectral fluxes are perfectly consistent with what one finds simply from linear interpolation between adjacent photometric bands. In Figure 2 (left panel), we plot the SFD of the afterglow minus the observed SFD of the host galaxy from Figure 1. For the SFD of the afterglow, we use the revised K-, J-, and R-band measurements of Galama et al. (1999) and the I- and V-band measurements of Castander & Lamb (1999a; see also Fruchter et al. 1999); all of these measurements were taken between 30 and 38 days after the burst. We have scaled these measurements to a common time of 35 days after the burst, and have corrected these measurements for Galactic extinction along the line of sight (see Reichart 1999 for details). The K-band measurement of the afterglow is consistent with that of the host galaxy (Galama et al. 1999), resulting in an upper limit in Figure 2; J- and R-band measurements of the host galaxy are not available, again resulting in upper limits in Figure 2. As originally concluded by Reichart (1999), this SFD is consistent with that of SN 1998bw, after transforming it to the redshift of the burst, $`z=0.695`$ (Djorgovski et al. 1999), and correcting it for Galactic extinction along its line of sight (see Reichart 1999 for details). In Figure 2 (right panel), we plot the same distribution, but minus the modeled SFD of the host galaxy from Figure 1. The SN-like component to the afterglow is detected in the J band, and possibly in the R band. 
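The rescaling of the measurements to the common time of 35 days can be sketched schematically; this is only an illustration of the procedure, and both the power-law decay index `alpha` and the sample flux below are hypothetical placeholders, not the values actually used for Figure 2.

```python
# A hedged sketch of the rescaling step: measurements taken between 30 and
# 38 days after the burst are moved to the common epoch of 35 days, assuming
# a power-law afterglow decay.  `alpha` and the flux are placeholders.
def scale_to_common_time(flux, t_obs_days, t_ref_days=35.0, alpha=1.1):
    """Scale a flux measurement to t_ref assuming F(t) ~ t**(-alpha)."""
    return flux * (t_obs_days / t_ref_days) ** alpha

# Example: a (hypothetical) measurement taken 31 days after the burst
print(scale_to_common_time(0.40, t_obs_days=31.0))  # flux expected at day 35
```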
The J-band measurement suggests that the SN-like component is $`\sim 1/2`$ mag fainter, and $`\sim 1/2`$ of a photometric band bluer, than SN 1998bw; however, this difference in J-band spectral fluxes is significant only at the $`\sim 2.5`$ $`\sigma `$ level. When possible photometric zero point errors and uncertainties in our spectral synthesis model of the SFD of the host galaxy are included, this difference is significant only at the $`\sim 2`$ $`\sigma `$ level. However, it is suggestive of what is generally expected: the Type Ic SNe that are theorized to be associated with bursts (e.g., Woosley 1993) are not expected to be standard candles. ## The Non-Detection of a SN-like Component to the Afterglow of GRB 990510 Given the rapid rate at which the afterglow of GRB 990510 faded, $`t^{-2.4}`$ (Stanek et al. 1999) or $`t^{-2.2}`$ (Harrison et al. 1999), at late times after the burst, a SN 1998bw-like component to the afterglow, if present, could have dominated the light curve after about a month at red and NIR wavelengths (Figure 3; Lamb & Reichart 1999). However, Fruchter et al. (1999b) find no evidence of such a component, to within a factor of 3 – 7 in brightness, from two broadband HST/STIS observations. Caution is in order before one concludes that a SN was not associated with GRB 990510, or one uses the non-detection of SN 1998bw-like components to afterglows to place lower limits on the redshifts of bursts. In the case of GRB 990510, the above calculation requires that assumptions be made about (1) the form of the SFD of the afterglow at the times of the observations, since the observations spanned a wavelength range of 300 – 900 nm; (2) how to extrapolate the non-power-law light curve of the early afterglow to the times of the observations; e.g., Stanek et al. (1999) and Harrison et al. (1999) do this differently; (3) the range of the luminosity distribution of SNe associated with bursts, relative to the luminosity of SN 1998bw, since these SNe are not expected to be standard candles; (4) the SFD of SN 1998bw at ultraviolet wavelengths, since the redshift of this burst is $`z=1.619`$ (Vreeswijk et al. 1999); (5) whether differences in host galaxy extinction along the SN 1998bw and GRB 990510 lines of sight can be ignored; and (6) the underlying cosmological model. ## GRBs as Beacons to Locate Supernovae at Very High Redshifts If bursts are indeed associated with SNe, then the first bursts should have occurred shortly after the first stars formed, at redshifts of $`z\approx 15`$–$`20`$ (Ostriker & Gnedin 1996; Gnedin & Ostriker 1997; Valageas & Silk 1999). Lamb & Reichart (1999) show that bursts and their afterglows should be detectable out to these very high redshifts. One way, of many (see Lamb & Reichart 1999 for details), in which bursts can be used to probe the early universe is to use them as beacons to locate SNe at very high redshifts. In Figure 4, we plot the V-band light curve of SN 1998bw added to the best-fit source-frame V-band light curve of the early afterglow of GRB 970228 from Reichart (1999), transformed to various redshifts and corrected for Galactic extinction (see Lamb & Reichart 1999 for details). We use the V band because SN 1998bw peaked in this band. Clearly, the peak of the SN component of the light curve can be detected, at least from space, out to the K band, which corresponds to a redshift of $`z\approx 3`$. The peak cannot be detected at longer wavelengths since such faint magnitudes cannot be reached in the L and M bands.
However, from the K band, one should be able to detect SNe like SN 1998bw blueward of the peak out to redshifts of $`z\approx 4`$–$`5`$.
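The structure of the composite light curves in Figure 4 can be mimicked with a toy model; all normalizations and the Gaussian bump shape below are hypothetical placeholders, chosen only to show why a SN 1998bw-like bump emerges above a rapidly fading afterglow within weeks, slightly later at higher redshift.

```python
# Toy composite light curve in the spirit of Figure 4 (all normalizations
# and shapes are hypothetical placeholders): a power-law afterglow plus a
# time-dilated SN 1998bw-like bump.
import numpy as np

def crossover_day(z, alpha=2.0):
    t = np.linspace(5, 200, 2000)                  # observer-frame days
    f_ag = 1e3 * t ** (-alpha)                     # fading afterglow
    t_pk = 17.0 * (1 + z)                          # time-dilated SN rise time
    f_sn = 5.0 * np.exp(-0.5 * ((t - t_pk) / (10 * (1 + z))) ** 2)
    return t[np.argmax(f_sn > f_ag)]               # first day SN > afterglow

for z in (0.7, 1.6, 3.0):
    print(f"z = {z}: SN-like bump dominates after ~{crossover_day(z):.0f} d")
```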
no-problem/9911/hep-ex9911038.html
ar5iv
text
# References In an analysis of the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ final state the WA102 collaboration observed three peaks in the mass spectrum . A spin analysis showed that the peak at 1.28 GeV was due to the $`f_1(1285)`$, the peak at 1.45 GeV could be interpreted as being due to interference between the $`f_0(1370)`$ and $`f_0(1500)`$, and the peak at 1.9 GeV, called the $`f_2(1950)`$, was found to have $`I^GJ^{PC}=0^+2^{++}`$ and to decay to $`f_2(1270)\pi \pi `$ and $`a_2(1320)\pi `$ . However, it was not possible to determine whether the $`f_2(1950)`$ was one resonance with two decay modes, or two resonances, or if one of the decay modes was spurious. One of the major problems of studying the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ final state is the number of possible isobar decay modes that are present. Therefore in this paper, in order to study the $`a_2(1320)\pi `$ final state, an analysis is presented of the $`\eta \pi ^+\pi ^{}`$ channel. In addition, the spin analysis of the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ channel showed evidence in the $`J^{PC}`$ = $`2^{-+}`$ $`a_2(1320)\pi `$ wave for the $`\eta _2(1645)`$ and $`\eta _2(1870)`$. There has been previous evidence that the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ may decay to $`a_0(980)\pi `$ and $`f_2(1270)\eta `$. These two decay modes will be searched for in the present analysis. The data come from the WA102 experiment, which has been performed using the CERN Omega Spectrometer, the layout of which is described in ref. . The selection of the reaction $$pp\to p_f(\eta \pi ^+\pi ^{})p_s$$ (1) where the subscripts $`f`$ and $`s`$ indicate the fastest and slowest particles in the laboratory respectively, has been described in ref. . The $`\eta `$ has been observed decaying to $`\gamma \gamma `$ and $`\pi ^+\pi ^{}\pi ^0`$. Fig. 1a) and 1b) show the $`\eta \pi ^+\pi ^{}`$ mass spectra for the decays $`\eta \to \gamma \gamma `$ and $`\eta \to \pi ^+\pi ^{}\pi ^0`$ respectively. The mass spectra are dominated by the $`\eta ^{\prime }`$ and $`f_1(1285)`$. In the current paper a spin-parity analysis of the $`\eta \pi ^+\pi ^{}`$ channel is presented for the mass interval 1.0 to 2.0 GeV using an isobar model . Assuming that only angular momenta up to 2 contribute, the intermediate states considered are $`a_0(980)\pi `$, $`\sigma \eta `$, $`f_0(980)\pi `$, $`a_2(1320)\pi `$, and $`f_2(1270)\eta `$. $`\sigma `$ stands for the low mass $`\pi \pi `$ S-wave amplitude squared . The amplitudes have been calculated in the spin-orbit (LS) scheme using spherical harmonics. In order to perform a spin-parity analysis the Log Likelihood function, $`\mathcal{L}_j=\sum _i\mathrm{log}P_j(i)`$, is defined by combining the probabilities of all events in 40 MeV $`\eta \pi ^+\pi ^{}`$ mass bins from 1.0 to 2.0 GeV. The incoherent sum of various event fractions $`a_j`$ is calculated so as to include more than one wave in the fit, using the form: $$\mathcal{L}=\sum _i\mathrm{log}\left(\sum _ja_jP_j(i)+(1-\sum _ja_j)\right)$$ (2) where the term $`(1-\sum _ja_j)`$ represents the phase space background. This background term is used to account for the background below the $`\eta `$ (which is 10 % for the $`\gamma \gamma `$ decay mode and 15 % for the $`\pi ^+\pi ^{}\pi ^0`$ decay mode), $`\eta \pi ^+\pi ^{}`$ three-body decays and decay modes not parameterised in the fit. The negative Log Likelihood function ($`-\mathcal{L}`$) is then minimised using MINUIT . Different combinations of waves and isobars have been tried and insignificant contributions have been removed from the final fit.
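The minimisation of the negative Log Likelihood built from eq. (2) can be sketched as follows; scipy's minimiser stands in for MINUIT, and the per-event wave probabilities $`P_j(i)`$ are random placeholders for the actual isobar-model amplitude calculations.

```python
# Schematic sketch of the incoherent-fraction fit of eq. (2).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_events, n_waves = 5000, 3
P = rng.random((n_waves, n_events))      # P_j(i), normalized in practice

def neg_log_likelihood(a):
    dens = a @ P + (1.0 - a.sum())       # sum_j a_j P_j(i) + background term
    if np.any(dens <= 0):                # keep the event density physical
        return np.inf
    return -np.log(dens).sum()

res = minimize(neg_log_likelihood, x0=np.full(n_waves, 0.2),
               method='Nelder-Mead')
print("fitted wave fractions:", res.x, " background:", 1 - res.x.sum())
```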
The spin analysis has been performed independently for the two $`\eta `$ decay modes. As was shown in the previous analysis , for both decay modes and for M($`\eta \pi ^+\pi ^{}`$ $``$ 1.5 GeV the only wave required in the fit is the $`J^{PC}`$ = $`1^{++}`$ $`a_0(980)\pi `$ wave with spin projection $`|J_Z|`$ = 1. No $`J^{PC}`$ = $`0^+`$ $`a_0(980)\pi `$ or any $`\sigma \eta `$ waves are required in the fit. Fig. 2a) and 3a) show the $`J^{PC}`$ = $`1^{++}`$ $`a_0(980)\pi `$ wave where the $`f_1(1285)`$ and a shoulder at 1.4 GeV can be seen. Superimposed on the waves is the result of the fit used in ref. which uses a K matrix formalism including poles to describe the interference between the $`f_1(1285)`$ and the $`f_1(1420)`$. As can be seen from fig. 2a) and 3a) the parameterisation describes well the $`J^{PC}`$ = $`1^{++}`$ $`a_0(980)\pi `$ wave. For M($`\eta \pi ^+\pi ^{}`$ $``$ 1.5 GeV only waves with $`J^{PC}`$ = $`2^+`$ and $`|J_Z|`$ = 1 are required in the fit. In contrast to what was found in the analysis of the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ final state there is no evidence for any $`J^{PC}`$ = $`2^{++}`$ $`a_2(1320)\pi `$ wave. The largest change in Log Likelihood comes from the addition of the $`J^{PC}`$ = $`2^+`$ $`a_2(1320)\pi `$ wave with $`|J_Z|`$ = 1 which yields a Likelihood difference $`\mathrm{\Delta }`$ = 562 and 203 for the $`\eta \gamma \gamma `$ and $`\eta \pi ^+\pi ^{}\pi ^0`$ decays respectively and are shown in fig. 2b) and 3b). As was observed in the case of the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ channel , the $`J^{PC}`$ = $`2^+`$ $`a_2(1320)\pi `$ wave is consistent with being due to two resonances, the $`\eta _2(1645)`$ and the $`\eta _2(1870)`$. Superimposed on figs. 2b) and 3b) is the result of a fit using a single channel K matrix formalism with two resonances to describe the $`\eta _2(1645)`$ and $`\eta _2(1870)`$. The masses and widths determined for each resonance and each $`\eta `$ decay mode are given in table 1. The parameters found are consistent for the two decay modes and with the PDG values for these resonances. An alternative fit has been performed using two interfering Breit-Wigners. The parameters presented include not only the statistical error but also the systematic error, added in quadrature, representing the difference in the two fits. The addition of the $`J^{PC}`$ = $`2^+`$ $`a_0(980)\pi `$ wave with $`|J_Z|`$ = 1 yields a Likelihood difference $`\mathrm{\Delta }`$ = 66 and 23 for the two $`\eta `$ decay modes respectively and the waves are shown in figs. 2c) and 3c). Superimposed is the result of a fit using the parameters for the $`\eta _2(1645)`$ and the $`\eta _2(1870)`$ determined from the fit to the $`a_2(1320)\pi `$ final state. A good description of the data is found. The branching ratio of the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ to $`a_2(1320)\pi /a_0(980)\pi `$ in the $`\eta \pi \pi `$ final state can be determined. Neglecting unseen decay modes the branching ratio of $`\eta _2(1645)`$ to $`a_2(1320)\pi `$/$`a_0(980)\pi `$ = 2.3 $`\pm `$ 0.4 and 2.1 $`\pm `$ 0.5 for the decays $`\eta \gamma \gamma `$ and $`\eta \pi ^+\pi ^{}\pi ^0`$ respectively. The branching ratio of $`\eta _2(1870)`$ to $`a_2(1320)\pi `$/$`a_0(980)\pi `$ = 5.0 $`\pm `$ 1.6 and 6.0 $`\pm `$ 1.9 respectively. 
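The "alternative fit ... using two interfering Breit-Wigners" mentioned above can be sketched as follows; the masses, widths and the relative magnitude and phase below are placeholders of roughly the right size, not the fitted values of Table 1.

```python
# Minimal sketch of a two-interfering-Breit-Wigner intensity,
# |c1*BW1 + c2*exp(i*phi)*BW2|^2, as used for the alternative fit.
import numpy as np

def bw(m, m0, g0):
    """Simple relativistic Breit-Wigner amplitude."""
    return m0 * g0 / (m0**2 - m**2 - 1j * m0 * g0)

m = np.linspace(1.4, 2.1, 200)          # eta pi+ pi- mass [GeV]
amp = 1.0 * bw(m, 1.62, 0.18) + 0.8 * np.exp(1j * np.pi / 3) * bw(m, 1.84, 0.23)
intensity = np.abs(amp) ** 2            # this shape is fitted to the wave
print(m[np.argmax(intensity)])          # position of the main peak
```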
Correcting for the unseen $`a_2(1320)`$ decay modes using the PDG branching ratio and using the branching ratio for the $`a_0(980)`$ to $`\eta \pi `$ determined by this experiment of 0.86 $`\pm `$ 0.10 and taking the average of the two $`\eta `$ decay modes the branching ratio to $`a_2(1320)\pi `$/$`a_0(980)\pi `$ for the $`\eta _2(1645)`$ is 13.0 $`\pm `$ 2.7 and for the $`\eta _2(1870)`$ is 32.6 $`\pm `$ 12.6. The addition of the $`J^{PC}`$ = $`2^+`$ $`f_2(1270)\eta `$ wave with $`|J_Z|`$ = 1 yields a Likelihood difference $`\mathrm{\Delta }`$ = 42 and 12 and the waves are shown in figs. 2d) and 3d). The $`f_2(1270)\eta `$ wave shows little evidence for the $`\eta _2(1645)`$. Superimposed is the result of a fit using the parameters for the $`\eta _2(1870)`$ determined from the fit to the $`a_2(1320)\pi `$ final state. A satisfactory description of the data is found. Correcting for the unseen $`a_2(1320)`$ and $`f_2(1270)`$ decay modes the branching ratio of the $`\eta _2(1870)`$ to $`a_2(1320)\pi /f_2(1270)\eta `$ has been determined to be 19.2 $`\pm `$ 7.2 and 27.6 $`\pm `$ 16.4 for the two $`\eta `$ decay modes. The addition of the $`J^{PC}`$ = $`2^+`$ $`f_0(980)\eta `$ or $`J^{PC}`$ = $`2^+`$ $`\sigma \eta `$ waves produces no significant change in the Likelihood and hence they have been excluded from the fit. The resulting background term is found to be smooth and structureless and corresponds to $``$ 40 % of the channel. In previous analyses it has been observed that when the centrally produced system has been analysed as a function of the parameter $`dP_T`$, which is the difference in the transverse momentum vectors of the two exchange particles , all the undisputed $`q\overline{q}`$ states (i.e. $`\eta `$, $`\eta ^{}`$, $`f_1(1285)`$ etc.) are suppressed as $`dP_T`$ goes to zero, whereas the glueball candidates $`f_0(1500)`$, $`f_0(1710)`$ and $`f_2(1950)`$ are prominent . In order to calculate the contribution of each resonance as a function of $`dP_T`$, the waves have been fitted in three $`dP_T`$ intervals with the parameters of the resonances fixed to those obtained from the fits to the total data. Table 2 gives the percentage of each resonance in three $`dP_T`$ intervals together with the ratio of the number of events for $`dP_T`$ $`<`$ 0.2 GeV to the number of events for $`dP_T`$ $`>`$ 0.5 GeV for each resonance considered. These distributions are similar to what have been observed for other $`q\overline{q}`$ states . In addition, an interesting effect has been observed in the azimuthal angle $`\varphi `$ which is defined as the angle between the $`p_T`$ vectors of the two outgoing protons. For the resonances studied to date which are compatible with being produced by DPE, the data . are consistent with the Pomeron transforming like a non-conserved vector current . In order to determine the $`\varphi `$ dependence for the resonances observed, a spin analysis has been performed in the $`\eta \pi ^+\pi ^{}`$ channel in four different $`\varphi `$ intervals each of 45 degrees. The results are shown in fig. 4a) and b) for the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ respectively. In order to determine the four momentum transfer dependence ($`|t|`$) of the resonances observed in the $`\eta \pi ^+\pi ^{}`$ channel the waves have been fitted in 0.1 GeV<sup>2</sup> bins of $`|t|`$ with the parameters of the resonances fixed to those obtained from the fits to the total data. Fig. 
4c) and d) show the four momentum transfer from one of the proton vertices for the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ respectively. The distributions cannot be fitted with a single exponential. Instead they have been fitted to the form $$\frac{d\sigma }{dt}=\alpha e^{-b_1t}+\beta te^{-b_2t}$$ The parameters resulting from the fit are given in table 3. After correcting for geometrical acceptances, detector efficiencies, losses due to cuts, and unseen decay modes of the $`a_2(1320)`$, the cross-section for the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ decaying to $`a_2(1320)\pi `$ at $`\sqrt{s}`$ = 29.1 GeV in the $`x_F`$ interval $`|x_F|\le 0.2`$ has been determined to be $`\sigma (\eta _2(1645))`$ = 1664 $`\pm `$ 149 nb and $`\sigma (\eta _2(1870))`$ = 1845 $`\pm `$ 183 nb. A Monte Carlo simulation has been performed for the production of a $`J^{PC}`$ = $`2^{-+}`$ state with spin projection $`|J_Z|`$ = 1 via the exchange of two non-conserved vector currents using the model of Close and Schuler discussed in ref. . The prediction of this model is found to be in qualitative agreement with the observed $`\varphi `$, $`dP_T`$ and $`t`$ distributions of the $`\eta _2(1645)`$ and $`\eta _2(1870)`$. This will be presented in a later publication. In summary, there is evidence for an $`a_2(1320)\pi `$ decay mode of the $`\eta _2(1645)`$ and $`\eta _2(1870)`$ in the $`\eta \pi ^+\pi ^{}`$ final state. In addition, there is evidence for an $`a_0(980)\pi `$ decay mode of both resonances and possibly an $`f_2(1270)\eta `$ decay mode of the $`\eta _2(1870)`$. There is no evidence for any $`J^{PC}`$ = $`2^{++}`$ $`a_2(1320)\pi `$ wave, in particular no evidence for the decay $`f_2(1950)\to a_2(1320)\pi `$. Acknowledgements This work is supported, in part, by grants from the British Particle Physics and Astronomy Research Council, the British Royal Society, the Ministry of Education, Science, Sports and Culture of Japan (grants no. 07044098 and 1004100), the French Programme International de Cooperation Scientifique (grant no. 576) and the Russian Foundation for Basic Research (grants 96-15-96633 and 98-02-22032).
no-problem/9911/astro-ph9911403.html
ar5iv
text
# Radiating Regions in Pulsar Magnetospheres: From Theory to Observations and Back ## 1. Introduction Over 30 years since the discovery of pulsars, the mechanism of their radio emission is still poorly understood. Moreover, the location of the radio emitting regions in the pulsar magnetospheres is unknown. Strong non-thermal high-frequency (optical, X-ray and $`\gamma `$-ray) radiation is observed from about ten pulsars. The pulses of high-frequency radiation generally bear little resemblance to the radio pulses (with the exception of the Crab pulsar). Besides, at different frequencies the pulses of the high-frequency radiation are different. These facts imply that there are many regions in the pulsar magnetospheres where strong non-thermal radiation is generated. ## 2. Radiating Regions in Pulsar Magnetospheres A common point of all acceptable models of pulsars is that a strong electric field $`𝐄_{\parallel }=(𝐄\cdot 𝐁)𝐁/|𝐁|^2`$ along the magnetic field $`𝐁`$ is generated in the pulsar magnetosphere. Primary particles are accelerated by this electric field to ultrarelativistic energies and generate $`\gamma `$-rays. Some of these $`\gamma `$-rays are absorbed by creating secondary electron-positron pairs. The created pairs screen the electric field $`𝐄_{\parallel }`$ in the pulsar magnetosphere everywhere except for compact regions. The compact regions where $`𝐄_{\parallel }`$ is unscreened are called gaps. These gaps are the "engines" responsible for the non-thermal radiation of pulsars. Most probably, the gaps are located either near the magnetic poles of pulsars or near their light cylinders (for a review, see Michel 1991). Some part of the radiation generated in the gaps and their vicinities escapes from the pulsar magnetospheres (see below) and may be observed. Plasma instabilities may develop in the secondary electron-positron plasma. The regions of their development are also plausible sources of the non-thermal radiation of pulsars, in addition to the gaps. ### 2.1. Radiation from Polar Gaps Gaps that form near the magnetic poles of pulsars are called polar gaps. Physical processes (acceleration of particles, generation of radiation and its propagation, creation of electron-positron pairs, etc.) in the polar gaps and their vicinities have been discussed in (e.g., Usov 1996; Zhang & Harding 1999). The bulk of the radiation from the polar gaps is in the range of hard X-rays and $`\gamma `$-rays. In all conventional polar-gap models (Ruderman & Sutherland 1975; Arons 1981; Zhang & Harding 1999 and references therein), where the created electron-positron pairs are free, the total power carried away by both relativistic particles and radiation from the polar gap into the pulsar magnetosphere is the same within a factor of 2-3 or so. This power is about ten times less than the $`\gamma `$-ray luminosity, at least for most of the pulsars detected in $`\gamma `$-rays. It was shown (Shabad & Usov 1986) that in a strong magnetic field, $`B>0.1B_{\mathrm{cr}}`$, $`\gamma `$-rays emitted nearly along curved magnetic field lines adiabatically convert into bound pairs (positronium atoms) rather than decaying into free pairs, where $`B_{\mathrm{cr}}=4.4\times 10^{13}`$ G. This partially prevents the screening of the vacuum electric field near the pulsar surface, and the total power carried away by both relativistic particles and radiation from the polar gap into the pulsar magnetosphere can increase significantly.
For pulsars with a very strong magnetic field at their surface, $`B_\mathrm{S}>0.1B_{\mathrm{cr}}`$, a modified polar gap model was developed (Usov & Melrose 1995, 1996). In this model, the non-thermal luminosity of pulsars may be comparable to the rate of rotational energy losses, which is enough to explain the observed luminosities of all known $`\gamma `$-ray pulsars. A maser version of linear acceleration emission was suggested as a mechanism of radio radiation from the polar gaps of pulsars (e.g., Melrose 1978; Rowe 1992). This mechanism requires an oscillating electric field $`𝐄_{\parallel }`$ in the polar gaps. The characteristic frequency of the radio emission is $`\omega _0\mathrm{\Gamma }^2`$, where $`\omega _0`$ is the oscillation frequency and $`\mathrm{\Gamma }`$ is the Lorentz factor of the radiating particles. In this model, the radio beam coincides with the beam of high-frequency (X-ray and $`\gamma `$-ray) emission from the polar gaps. ### 2.2. Plasma Instabilities in Pulsar Magnetospheres and Non-thermal Radiation from the Regions of their Development Two-stream instability. It was argued that the process of pair creation is strongly nonstationary, and the pair plasma that flows away from the pulsar surface is nonhomogeneous and gathers into separate clouds (e.g., Usov 1987). Since the Lorentz factors of the pair plasma lie within a wide range (from $`\mathrm{\Gamma }_{\mathrm{min}}\sim 10`$ to $`\mathrm{\Gamma }_{\mathrm{max}}\sim 10^4`$–$`10^5`$), the high-energy particles ($`\mathrm{\Gamma }\sim \mathrm{\Gamma }_{\mathrm{max}}`$) go ahead, and the plasma clouds disperse as they go further from the pulsar. At a distance of $`\sim 2l\mathrm{\Gamma }_{\mathrm{min}}^2`$ from the pulsar surface the high-energy particles of a given cloud catch up with the low-energy particles ($`\mathrm{\Gamma }\sim \mathrm{\Gamma }_{\mathrm{min}}`$) that belong to the cloud going ahead of it, where $`l`$ is the characteristic length between the clouds at the moment of their creation near the pulsar surface. In the cloud overlapping region the energy distribution of particles is two-humped, i.e., there are particles only with both $`\mathrm{\Gamma }\sim \mathrm{\Gamma }_{\mathrm{min}}`$ and $`\mathrm{\Gamma }\sim \mathrm{\Gamma }_{\mathrm{max}}`$, whereas particles with intermediate Lorentz factors are absent. The plasma with such a distribution is unstable with respect to the two-stream instability (Usov 1987; Ursov & Usov 1988; Asseo & Melikidze 1998). Longitudinal (nonescaping) Langmuir waves that are generated in the process of development of this instability may be converted by means of different non-linear effects into electromagnetic waves that can escape from the pulsar magnetosphere (e.g., Lesch, Gil, & Shukla 1994; Lyubarskii 1996; Mahajan, Machabeli, & Rogova 1997; Melikidze, Gil, & Pataraya 2000). This non-linear conversion of waves is a "bottle-neck" that impedes the comparison of the model of pulsar radio emission with observational data. The two-stream instability of strongly nonhomogeneous plasma develops very fast, and if the process of pair creation near the pulsar surface is indeed strongly nonstationary, the development of this instability in the pulsar magnetospheres is almost inevitable. It is worth noting that the outflowing plasma may have several characteristic lengths of its modulation. For instance, in the Ruderman-Sutherland model they are $`l_1\sim 0.3R`$ and $`l_2\sim 2R_c`$, where $`R`$ is the neutron star radius and $`R_c`$ is the curvature radius of magnetic field lines at the pulsar surface.
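A one-line numerical check of the catch-up distance $`\sim 2l\mathrm{\Gamma }_{\mathrm{min}}^2`$ for these two modulation lengths is given below; the inputs follow the estimate quoted next ($`R=R_c=10^6`$ cm, $`\mathrm{\Gamma }_{\mathrm{min}}\sim 10`$), and at this order-of-magnitude level the result agrees with the distances quoted there only to within a factor of $`\sim 2`$.

```python
# Order-of-magnitude check of the catch-up distance r ~ 2*l*Gamma_min**2.
# Agreement with the ~30R and ~200R quoted below is only within a factor ~2,
# as expected for estimates of this kind.
R = 1e6                                   # neutron star radius [cm]
gamma_min = 10.0
for label, l in [("l1 ~ 0.3 R ", 0.3 * R), ("l2 ~ 2 R_c ", 2.0 * R)]:
    r = 2.0 * l * gamma_min**2
    print(f"{label}: r ~ {r:.0e} cm ~ {r / R:.0f} R")
```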
For $`R_c\sim R\sim 10^6`$ cm and $`\mathrm{\Gamma }_{\mathrm{min}}\sim 10`$, the distances to the radio emitting regions are $`\sim 30R`$ and $`\sim 200R`$. These distances are consistent with observational data on pulsars (Rankin 1993; Kijak & Gil 1997 and references therein). In this model, high-frequency radiation from the radio emitting regions is very weak. Cyclotron instability. Near the light cylinders of pulsars, $`r\sim c/\mathrm{\Omega }`$, the outflowing plasma may be unstable with respect to excitation of cyclotron waves (Machabeli & Usov 1979), where $`c`$ is the speed of light, $`\mathrm{\Omega }=2\pi /P`$ and $`P`$ is the pulsar period. For typical pulsars, $`B_\mathrm{S}\sim 10^{12}`$ G and $`P\sim 1`$ s, the frequency of these waves is in the radio range, from $`10^2`$ MHz to a few $`\times 10^3`$ MHz (Machabeli & Usov 1989). The model based on the cyclotron instability can also explain much of the observational data on the radio emission of pulsars (Lyutikov, Blandford, & Machabeli 2000). The interaction between cyclotron waves and outflowing particles leads to diffusion of particles in the momentum space across the magnetic field. As a result, outflowing particles acquire non-zero pitch angles and generate high-frequency radiation via the synchrotron mechanism. The high-frequency luminosity of the region where the cyclotron instability develops may be as high as the power carried away by particles from the polar gap into the pulsar magnetosphere. ### 2.3. Radiation from Outer Gaps Charge deficient regions (outer gaps) with a strong electric field $`E_{\parallel }`$ may exist near the pulsar light cylinder (e.g., Michel 1991). The outer gap model describes the high-frequency radiation of the Crab and Vela pulsars fairly well (e.g., Cheng, Ho, & Ruderman 1986a,b; Romani 1996). Outer gaps may act as a generator of radiation in the pulsar magnetosphere only if the period of the pulsar rotation is small enough, $`P<P_{\mathrm{cr}}\sim `$ a few $`\times 0.1`$ s. Optical observations at the positions of $`\gamma `$-ray pulsars that are near the death line, $`P\sim P_{\mathrm{cr}}`$, may test the outer gap model (e.g., Usov 1994; Lundqvist et al. 1999). ## 3. Discussion For fast rotating pulsars, $`P<P_{\mathrm{cr}}`$, both polar and outer gaps can act in the pulsar magnetospheres, and therefore the pair plasma properties are very uncertain. For typical pulsars with $`P>P_{\mathrm{cr}}`$, the polar gap model has no alternative. If for such pulsars it is confirmed that the distance $`r`$ from the pulsar to the radio emitting region is in the range $`R\ll r\ll c/\mathrm{\Omega }`$ (Rankin 1993; Kijak & Gil 1997 and references therein), then most probably the process of pair creation at the pulsar surface is strongly nonstationary, and the two-stream instability of the outflowing strongly nonhomogeneous plasma is the reason for the pulsar radio emission (Usov 1987; Ursov & Usov 1988; Asseo & Melikidze 1998). However, if $`r`$ is either $`\sim R`$ or $`\sim c/\mathrm{\Omega }`$, the most plausible mechanism of the pulsar radio emission is either the linear acceleration emission (Melrose 1972) or the cyclotron instability (Machabeli & Usov 1979, 1989; Lyutikov et al. 2000). #### Acknowledgments. This research was supported by MINERVA Foundation, Munich / Germany. ## References Arons, J. 1981, ApJ, 248, 1099 Asseo, E., & Melikidze, G. I. 1998, MNRAS, 301, 59 Cheng, K. S., Ho, C. & Ruderman, M. A. 1986a,b, ApJ, 300, 500&522 Kijak, J., & Gil, J. 1997, MNRAS, 288, 631 Lesch, H., Gil, J. A., & Shukla, P. K.
1994, Space Sci.Rev., 68, 349 Lundqvist, P., Sollerman, J., Ray, A., Leibundgut, B., & Sutaria, F. 1999, A&A, 343, L15 Lyubarskii, Yu. E. 1996, A&A, 308, 809 Lyutikov, M., Blandford, R. D., & Machabeli, G. Z. 2000, this volume Machabeli, G. Z., & Usov, V. V. 1979, Soviet Astron. Lett., 5, 238 Machabeli, G. Z., & Usov, V. V. 1989, Soviet Astron. Lett., 15, 393 Mahajan, S. M., Machabeli, G. Z., & Rogova, A. D. 1997, ApJ, 478, L129 Melikidze, G. I., Gil, J. A., & Pataraya, A. D. 2000, this volume Melrose, D. B. 1978, ApJ, 225, 557 Michel, F. C. 1991, Theory of Neutron Star Magnetospheres, Univ. of Chicago Press Rankin, J.M. 1993, ApJ, 405, 285 Romani, R. W. 1996, ApJ, 470, 469 Rowe, E. T. 1992, Australian J. Phys., 45, 1 Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51 Shabad, A. E., & Usov, V. V. 1986, Ap&SS, 128, 377 Usov, V. V. 1987, ApJ, 320, 333 Usov, V. V. 1994, ApJ, 427, 394 Usov, V. V. 1996, in ASP Conf. Proc., 105, Pulsars : Problems & Progress, ed. S. Johnston, M. A. Walker & M. Bailes (San Francisco: ASP), 323 Usov, V. V., & Melrose, D. B. 1995, Australian J. Phys., 48, 571 Usov, V. V., & Melrose, D. B. 1996, ApJ, 464, 306 Ursov, V. N., & Usov, V. V. 1988, Ap&SS, 140, 32 Zhang, B., & Harding, A. K. 1999, astro-ph/9911028
no-problem/9911/cond-mat9911468.html
ar5iv
text
# Super lattice formation of an array of volatile wetting droplets ## Abstract For an ordered array of critical volatile wetting droplets the formation of a super lattice by an Ostwald-ripening-like competition process is considered. The underlying diffusion problem is treated within a quasistatic approximation and to first order in the inverse droplet distance. The approach is rather general, but a square lattice and a triangular lattice are studied explicitly. Dispersion relations for the super-lattice growth of these arrays are calculated. In a recent experiment C. Schäfle et al. nucleated diethylene glycol droplets out of a supersaturated vapor phase on a hexagonal array of hydrophilic (alkanethiol) patches on a hydrophobic substrate. The pressure of the system was reduced afterwards so that the droplets started to shrink by evaporation. During this process the hexagonal lattice split into two triangular super lattices in the sense that the droplets on one super lattice shrank faster than those on the other super lattice. The super lattice of faster shrinking droplets finally disappeared. The remaining triangular droplet lattice appeared to be stable against further development of super structures. The appearance of the triangular super lattice from the hexagonal lattice is interpreted as an Ostwald-ripening type process. The system reduces its free energy by having few large droplets instead of many small ones . Similar observations were made by the same authors as well as by Lacasta et al. simulating Cahn-Hilliard dynamics of droplet arrays in a concentration field. The study of super lattice growth on an array of wetting droplets is not only of fundamental interest as an example of pattern formation and the dynamics of wetting on structured surfaces, but it is also important for learning about the features of volatile droplet arrays, since they can be used to build e.g. chemical sensors (see ). The scope of this paper is the analytic discussion of the diffusional growth properties of a set of wetting droplets sitting on hydrophilic patches on a hydrophobic substrate. We consider circular patches of equal radius $`a`$. Square and triangular lattice configurations of the patches will be studied explicitly. It is assumed that there is a wetting droplet sitting in the center of each patch. The different possible equilibrium configurations and phase transitions in such a system were discussed in great detail by Lenz and Lipowsky only recently . To discuss the diffusional growth of a wetting droplet out of the surrounding gas phase, one has to specify the boundary conditions of the diffusion equation on the substrate and on the droplet's surface. Since there is no diffusion flux into the substrate, the normal derivative of the concentration field vanishes there. The concentration $`c_s`$ in the gas phase on top of the droplet is given by the Gibbs-Thomson relation. It is a constant along the droplet's surface and is just a function of the radius of curvature $`R`$ $$c_s(R)=c_0\left(1+\frac{\mathrm{\Lambda }}{R}\right),$$ (1) where $`c_0`$ is the concentration above a flat condensate and $`\mathrm{\Lambda }`$ is a capillary length. We assume that the droplet is growing slowly due to its high density compared to the surrounding gas phase. The diffusion equation, which describes the concentration around the droplet, can therefore be approximated by the Laplace equation $`\mathrm{\Delta }c=0`$.
The Neumann condition of vanishing normal derivative on the substrate suggests to mirror the system on the substrate to fulfill the condition implicitly. This way the semi-infinite system becomes an infinite system formally and the spherical-cap shaped wetting droplet becomes a symmetric lens that provides a uniform (Dirichlet) boundary condition to the outside concentration field. Since this field is described by a Laplace equation, one can use an electrostatic analogy where the electrical potential corresponds to the concentration field. The mirrored droplet, i.e. the symmetric lens, is a conductor in this picture . The volume growth $`\dot{\mathrm{\Omega }}`$ of an individual droplet corresponds to the charge of the analogous conductor because the diffusion flux density on the droplets surface is given by the normal derivative of the concentration field and the charge density of the conductor is given by the electric field on top of the conductor which is the derivative of the potential. This argument leads to the growth equation $$\dot{\mathrm{\Omega }}=C(R,\theta )\left(c_{\mathrm{}}c_s(R)\right)$$ (2) eventually , where $`C(R,\theta )`$ is the electric capacity of the symmetric lens and $`c_{\mathrm{}}`$ is the systems concentration far away from the droplet. The capacity $`C(R,\theta )`$ depends linearly on the radius of curvature $`R`$ and is a function of the wetting droplets contact angle $`\theta `$. It can be found in the literature . Since the surface tension between the droplet and the hydrophilic patch is smaller than the one between the droplet and the hydrophobic substrate, the contact (Youngs) angles are different in these regions. A small droplet, which completely sits on a hydrophilic patch, has a small contact angle $`\theta _1`$. If the droplet is too big to fit onto the patch it reaches out on the hydrophobic region where its contact angle $`\theta _2`$ is bigger. Consequently one can distinguish three different volume regimes for the droplets. Regime (1), where the contact line is on the hydrophilic patch completely, regime (2), where the contact line is fixed on the border between the hydrophilic and the hydrophobic material and regime (3), where the contact line is completely in the hydrophobic region (see fig. 1). In regime (2) droplet growth takes place with a fixed contact line but a changing contact angle $`\theta `$. If $`\theta <\pi /2`$, this leads to the unusual case that the radius of curvature $`R`$ decreases with an increasing droplet volume. This implies, that in this case a droplet in an environment $`c_{\mathrm{}}=c_s(R)`$ is *not* critical but stable because its Gibbs-Thomson boundary concentration increases with increasing volume unlike the usual case where the Gibbs-Thomson boundary condition decreases with increasing droplet size. This stability allows to grow droplets of identical size on an array of equally sized patches in the regime (2), which is important for experimental purposes. Now there is not only one droplet in the system but the collective behavior of a system of droplets shall be studied. The boundary condition on top of every single droplet is given by (1), so that we can use the full electrostatic analogy again. We have a system of conductors where the relation between the charges and the potentials is given by a capacitance matrix which now yields $$\dot{\mathrm{\Omega }}_i=\underset{j}{}C_{ij}\left(c_{\mathrm{}}c_s(R_j)\right)$$ (3) for the growth rates of the individual droplets generalizing eq. (2). 
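The unusual stability in regime (2) follows from spherical-cap geometry alone and is easy to check numerically; the minimal sketch below uses only the pinned contact-line radius $`r_c`$ and the standard cap-volume formula.

```python
# Pinned-contact-line regime (2): for a spherical cap with fixed contact
# radius r_c one has R = r_c/sin(theta) and
# Omega = (pi/3) R^3 (2 + cos(theta)) (1 - cos(theta))^2,
# so the volume grows while the curvature radius shrinks for theta < pi/2.
import numpy as np

r_c = 1.0                                   # pinned contact-line radius
theta = np.linspace(0.1, np.pi / 2, 50)     # contact angle in regime (2)
R = r_c / np.sin(theta)                     # radius of curvature
omega = (np.pi / 3) * R**3 * (2 + np.cos(theta)) * (1 - np.cos(theta))**2

assert np.all(np.diff(omega) > 0) and np.all(np.diff(R) < 0)
print("regime (2), theta < pi/2: volume up  =>  curvature radius down")
```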
Here the indexes $`i,j`$ count the individual droplets, $`\mathrm{\Omega }_i`$ and $`R_i`$ are the volume and the radius of curvature of droplet $`i`$ and the matrix $`C_{ij}`$ is the abovementioned capacitance matrix of the system. In the following we will assume that the lattice constant $`d`$ of the droplet array is large compared to the droplet sizes ($`R_id`$). Then, the inverse of the capacitance matrix can be written $$C_{ij}^1=\{\begin{array}{cc}C_i^1(R,\theta )+𝒪\left(1/d^2\right)\hfill & \text{for}i=j\hfill \\ 1/R_{ij}+𝒪\left(1/d^2\right)\hfill & \text{for}ij\hfill \end{array}$$ (4) where $`C_i(R,\theta )`$ is the capacity of the droplet $`i`$ alone. $`R_{ij}`$ is the distance between the centers of the droplets $`i`$ and $`j`$. Eqs. (1), (3) and (4) lead to $$c_0\left(1+\frac{\mathrm{\Lambda }}{R_i}\right)=c_{\mathrm{}}+\frac{1}{C_i(R_i,\theta _i)}\dot{\mathrm{\Omega }}_i+\underset{ji}{}\frac{1}{R_{ij}}\dot{\mathrm{\Omega }}_j$$ (5) to linear order in inverse droplet distances. If we now consider a system where each droplet satisfies $`R_i=R_c:=(\mathrm{\Lambda }c_0)/(c_{\mathrm{}}c_0)`$ (i.e. it is critical if its radius increases with increasing volume or stable if its radius decreases with increasing volume), $`\dot{R}_i=0`$ is a (in most cases unstable) solution of the growth equation. To study the onset of the spontaneous development of a super lattice on the regular lattice of wetting droplets, we introduce $`\delta R_ie^{\omega t}:=R_iR_c`$ where the $`\delta R_i`$ describe an eigenmode of the linearized version of eq. (5). The growth exponent reads $$\omega =\frac{c_0\frac{\mathrm{\Lambda }}{R_c^3}}{2\mathrm{\Omega }^{}\left[C_c+_{ji}\frac{1}{R_{ij}}\frac{\delta R_j}{\delta R_i}\right]}.$$ (6) Here $`C_c`$ is the capacity of an individual critical droplet alone and $`\mathrm{\Omega }^{}:=d\mathrm{\Omega }(R,\theta (R))/dR|_{R=R_c}`$ describes the variation of the droplet volume when its radius $`R`$ is changed obeying the physical boundary conditions on the contact angle, i.e. holding the contact angle constant if we are in the regime 1 or 3 of fig. 1 and pinning the contact line (varying the contact angle) if we are in the regime 2. As mentioned earlier, $`\mathrm{\Omega }^{}`$ is negative in regime 2 if $`\theta <\pi /2`$. The sum $`\mathrm{\Sigma }:=d_{ji}(1/R_{ij})(\delta R_j/\delta R_i)`$ is independent of the choice of $`i`$ since we look at eigenmodes of eq. (5). It depends on the type of droplet lattice and the mode under consideration. One finds a divergence at $`\mathrm{\Sigma }/d=C_c`$. This divergence is an artifact of the approximation for the capacitance matrix to first order in $`1/R_{ij}`$, i.e. neglecting $`𝒪(1/d^2)`$. The fastest growing mode is found at the smallest $`\mathrm{\Sigma }`$. In the following the growth exponent $`\omega `$ will be evaluated for the different modes of an infinite square lattice and an infinite triangular lattice. Due to the symmetry of the square lattice the eigenmodes of the growth equation are just Fourier modes. In fig. 2 the sum $`\mathrm{\Sigma }`$ is plotted as a function of $`m`$ and $`n`$, where $`m`$, $`n`$ are the wavenumbers along the two axis of the square lattice. The minimum is found at $`\mathrm{\Sigma }(\pi ,\pi )`$ which means that the preferred growth mode leads to a checkerboard structure. The two corresponding square super lattices on the original square lattice carry uniformly shrinking or uniformly growing droplets, respectively. 
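The lattice sum $`\mathrm{\Sigma }`$ for a given Fourier mode can be estimated by direct truncation, since the alternating signs of the non-uniform modes make the truncated sums converge acceptably. A minimal numerical sketch for the square lattice, reproducing the ordering behind fig. 2 (the checkerboard mode $`(\pi ,\pi )`$ gives the smallest $`\mathrm{\Sigma }`$):

```python
# Truncated estimate of Sigma = d * sum_{j != i} (1/R_ij)(dR_j/dR_i)
# for plane-wave modes dR_j = cos(q . r_j) on a square lattice.
import numpy as np

def sigma(qx, qy, N=200):
    m, n = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    mask = (m != 0) | (n != 0)                 # exclude the droplet itself
    r = np.sqrt(m**2 + n**2)
    return np.sum(np.cos(qx * m[mask] + qy * n[mask]) / r[mask])

print("stripe mode  (pi, 0):", sigma(np.pi, 0.0))    # larger (less negative)
print("checkerboard (pi,pi):", sigma(np.pi, np.pi))  # the minimum, ~ -1.6
```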
This behavior is optimal in the sense that every growing droplet is surrounded only by shrinking droplets and vice versa, leading to a fast exchange of matter. In the case of a triangular lattice things are not as obvious, since the geometry does not allow every droplet to be surrounded by droplets of a different growth sign without frustration. If we introduce two non-rectangular axes along two of the primary axes of the triangular lattice, we have again translational symmetry along these axes and can Fourier analyze the system. We get a sum $`\mathrm{\Sigma }`$ for the corresponding wavenumbers as shown in fig. 3. The dispersion relation has a threefold degenerate minimum at $`(\pi ,\pi )`$, $`(0,\pi )`$ and $`(\pi ,0)`$, reflecting the special symmetry of the triangular lattice as shown in fig. 4. These fastest eigenmodes can be characterized by the property that each droplet has four nearest neighbors with opposite growth signs and two nearest neighbors with the same growth sign. In this respect the growth conditions in the triangular lattice are inferior to those in the square lattice. This is reflected by the fact that the minimum $`\mathrm{\Sigma }`$ of the square lattice is lower than the one of the triangular lattice, i.e. the super lattice formation is faster in the square lattice case. C. Schäfle et al. found in their experiment that the triangular droplet lattice that occurred as a super lattice of the initial hexagonal lattice did not show any further instability, whereas the calculations provided in this paper display an instability as sketched in fig. 4. Unlike the subject of this paper, the experiment does not analyze critical droplets, so that in addition to $`\omega `$ there is a timescale related to the overall evaporation of droplets. It seems reasonable to expect that the instability of the triangular droplet lattice could be seen experimentally by stretching the timescale provided by the overall evaporation, i.e. by going closer to the critical concentration. I would like to thank C. Schäfle *et al.* for sharing their experimental results with me prior to publication. I'm especially grateful to B. Schmittmann and R. K. P. Zia for their kind hospitality and helpful discussions. This work is supported by the Deutsche Forschungsgemeinschaft via BU 1172/1-1 and "Benetzung und Strukturbildung an Grenzflächen" as well as the EU via FMRX-CT 98-0171 "Foam Stability and Wetting Transitions".
no-problem/9911/hep-ex9911007.html
ar5iv
text
# 1 Introduction ## 1 Introduction The ATLAS project is a general-purpose detector designed to investigate pp collisions at energies up to 14 TeV. The barrel part of its calorimetry system consists of an electromagnetic liquid argon (LArg) calorimeter using the accordion geometry and a large scintillating-tile hadronic barrel calorimeter based on a sampling technique with steel absorber material and scintillating plates read out by wavelength-shifting fibers. A detailed description of them can be found in ,. In this work we report results on the study of the e/$`\pi `$ and e/h ratios for the Combined calorimeter composed of the LArg and Tile prototypes. The e/h ratio is a characteristic number of any calorimeter system and describes the non-compensation of the calorimeter response to hadrons relative to electrons. Since the electromagnetic calorimeter is thin for hadrons, a large part of a hadron shower leaks out of the LArg calorimeter; therefore e/$`\pi `$ for the Combined calorimeter is of special interest. The investigation was performed on the basis of data from the exposure of the ATLAS Barrel Combined Calorimeter Prototype to beams of pions and electrons with energies of 20 – 300 GeV in April 1996. Results on the study of e/h for the Tile calorimeter alone can be found in . ## 2 The Electromagnetic Calorimeter The electromagnetic LArg calorimeter prototype consists of a stack of three azimuthal modules, each one spanning $`9^{\circ }`$ in azimuth and extending over 2 m along the z direction. The calorimeter structure is defined by 2.2 mm thick steel-plated lead absorbers, folded to an accordion shape and separated by 3.8 mm gaps, filled with liquid argon; the signals are collected by Kapton electrodes located in the gaps. The calorimeter extends from an inner radius of 131.5 cm to an outer radius of 182.6 cm, representing (at $`\eta `$ = 0) a total of 25 radiation lengths ($`X_0`$), or 1.22 interaction lengths ($`\lambda `$) for protons. The calorimeter is longitudinally segmented into three compartments of $`9X_0`$, $`9X_0`$ and $`7X_0`$, respectively. The $`\eta \times \varphi `$ segmentation is $`0.018\times 0.02`$ for the first two longitudinal compartments and $`0.036\times 0.02`$ for the last compartment, where $`\eta =-\mathrm{log}(\mathrm{tan}\frac{\theta }{2})`$. Each read-out cell has full projective geometry in $`\eta `$ and in $`\varphi `$. The calorimeter was located inside a large cylindrical cryostat with 2 m internal diameter, filled with liquid argon. ## 3 The Hadronic Calorimeter The hadron calorimeter prototype consists of an azimuthal stack of five modules. Each module covers $`2\pi /64`$ in azimuth and extends 1 m along the z direction, such that the front face covers $`100\times 20`$ cm$`^2`$. The radial depth, from an inner radius of 200 cm to an outer radius of 380 cm, accounts for 8.9 $`\lambda `$ at $`\eta `$ = 0 (80.5 $`X_0`$) for protons. Read-out cells are defined by grouping together a bundle of fibers into one photo-multiplier (PMT). Each of the 100 cells is read out by two PMTs and is fully projective in azimuth (with $`\mathrm{\Delta }\varphi =2\pi /64\approx 0.1`$), while the segmentation along the z axis is made by grouping fibers into read-out cells spanning $`\mathrm{\Delta }z=20`$ cm ($`\mathrm{\Delta }\eta \approx 0.1`$) and is therefore not projective. Each module is read out in four longitudinal segments (corresponding to about 1.5, 2, 2.5 and 3 $`\lambda `$ at $`\eta `$ = 0). More details of this prototype can be found in ,.
The beam incident angle was, as in the previous combined run, about $`11^{\circ }`$, but now the impact point was 8 cm left of the center to avoid side leakage. ## 4 Experimental Setup To simulate the ATLAS setup, the Tile calorimeter was placed on a fixed table, just behind the LArg Accordion cryostat, as shown in figure 1. To optimize the containment of hadronic showers, the electromagnetic calorimeter was located as close as possible to the back of the cryostat, as in the previous combined run. Early showers in the liquid argon were kept to a minimum by placing light foam material in the cryostat upstream of the calorimeter. With respect to the previous combined test beam setup , a new element is present. In order to understand the energy loss in the dead material between the active part of the LArg detector and the Tilecal, a layer of scintillator called the mid-sampler was installed . The mid-sampler consisted of five scintillators, 20 cm $`\times `$ 100 cm each, fastened directly to the front face of the Tilecal modules. The scintillator was 1 cm thick and was read out using ten 1 mm WLS fibers on each of the long sides. Beam quality and geometry were monitored with a set of beam chambers and trigger hodoscopes placed upstream of the LArg cryostat. The momentum bite of the beam was always less than 0.5%. Single-track pion events were selected offline by requiring the pulse height of the beam scintillation counters (S3-4 in the figure) and the energy released in the presampler of the electromagnetic calorimeter to be compatible with that of a single particle. Beam halo events were removed with appropriate cuts on the horizontal and vertical positions of the incoming track impact point as measured with the two beam chambers (BC in the figure). For this layout the effective distance between the two active parts of the detector is of the order of 50 cm, instead of the 25 cm foreseen in the ATLAS setup. The amount of material has been quantified to be about $`2X_0`$ in between the two calorimeters. This value is similar to the ATLAS design value, but the material type is different: steel instead of aluminum for the cryostat. The total depth corresponds to about 10.1 $`\lambda `$, to be compared with the 9.6 foreseen in the ATLAS setup . A large scintillator wall ("muon wall") covering about 1 $`m^2`$ of surface has been placed on the back of the calorimeter to quantify leakage. ## 5 Reconstruction of Electron Energy To separate electrons from muons, hadrons and events with interactions in the dead material before the combined calorimeter, the following cuts were applied: * events with only the physical trigger were selected; * cuts on beam geometry (signals from the wire chambers BC1, BC2, BC3 were used); * cuts on the signals from the scintillator counters S1, S2, S3, S4 and the presampler to reject events with interactions before the combined calorimeter; * cuts on the responses of the Tile calorimeter samplings to separate electrons from hadrons; * cuts on the total energy deposition to reject muons. The characteristic feature of an electromagnetic shower is its small transverse radius in comparison with a hadronic one. For the reconstruction of the electron energy, a cluster of $`3\times 7`$ cells of the electromagnetic calorimeter was used, to avoid including noise in the total response. However, the direct response of LArg to electrons is not Gaussian-like, as shown in Fig. 2. The explanation of this behavior appears to be the dependence of the response on the impact point of the electron, which is shown in Fig.
3, where the $`\eta `$ dependence (top picture) can be considered negligible within the uncertainties; the $`\eta `$-coordinate is expressed in cell numbers of the electromagnetic calorimeter along the $`\eta `$ direction. For the dependence along $`\varphi `$ (bottom picture) one can observe a strong influence of the LArg internal structure on the total electron response, where $`\varphi `$ is also given in cell numbers. Therefore the total response of LArg to electrons represents a sum of normal distributions with different mean values, weighted according to their statistical weights, and should not be Gaussian-like. Since there is no strong dependence of the calorimeter response on the electron impact point along $`\eta `$, the $`\varphi `$-dependence was fitted with a straight line in the same $`\varphi `$ region for all energies in order to obtain the mean value of the energy spectrum. To set the mean value of the LArg response to electrons to the known beam energy, a scale factor $`\alpha =1.166\pm 0.003`$ was applied. The obtained mean values of the responses and the reconstructed energies of electrons for the 20 – 287.5 GeV beams are gathered in Table 1. The direct error of the fitting parameter was taken as the statistical error. A systematic error of 0.4% was introduced to achieve a total $`\chi ^2=1`$ for the whole set of energy points. An additional error of 0.3% was introduced due to the uncertainty of the scale factor. The achieved linearity of the LArg response to electrons is within $`\pm 1.5\%`$, which is comparable with the results from Ref. . ### 5.1 The e/$`\pi `$ Ratio To extract the $`e/\pi `$ ratio for the combined calorimeter system, one should go to the absolute energy scale for each calorimeter. In our case the signal from the LArg calorimeter is sufficient for the energy reconstruction of electrons, so to obtain the response of the Combined calorimeter to pions and extract the $`e/\pi `$ ratio the following formula was used: $$\frac{e}{\pi }=\frac{e_{L(3\times 7)}E_{em}^e}{e_{L(11\times 11)}E_{em}^\pi +e_TE_h^\pi +e_{cr}E_{cryo}^\pi },$$ (1) where $`E_{em}^e`$ and $`E_{em}^\pi `$ are the responses of the LArg calorimeter to electrons and pions, $`e_{L(11\times 11)}=1.1`$ is the calibration constant of the LArg calorimeter for the window $`(11\times 11)`$, $`e_{L(3\times 7)}=1.166`$ is the scale factor for the electron response in the LArg window $`(3\times 7)`$, $`E_h^\pi `$ is the response of the Tile calorimeter to pions, $`e_T=k/(e/\pi )_T=0.145`$ is the calibration constant of the Tile calorimeter with $`k=300(GeV)/E_h^\pi =0.156`$, $`E_{cryo}^\pi `$ is the energy loss in the cryostat, and $`e_{cr}=e_Tc/a`$ is the calibration constant of the cryostat. For the case of a stand-alone calorimeter this formula reduces to the obvious expression $`e/\pi =E^e/E^\pi `$. The calculated $`e/\pi `$ ratios for beam energies of 20 – 300 GeV are gathered in Table 2. For the 300 GeV point, an offset of 12.5 GeV was added to the reconstructed mean value of the response to 287.5 GeV electrons. To achieve a good $`\chi ^2`$ of the fit, an additional error of 2% was introduced for the 20 GeV point due to the large uncertainty in the determination of the response to electrons (beam spot in the region $`6.5<\varphi <7.2`$). The $`e/h`$ ratio was extracted from the obtained data by fitting them with the expression : $$\frac{e}{\pi }=\frac{e/h}{1+(e/h-1)\cdot 0.11\mathrm{ln}E}.$$ (2) Fig. 4 shows the $`e/\pi `$ ratios for the 1996 combined run (black circles) and the 1994 combined run (open circles). The solid curve is the fit of our data by function (2). As the result of the fit we found $`e/h=1.37\pm 0.01\pm 0.02`$, which is in good agreement with the data obtained in 1994 ($`e/h=1.35\pm 0.04`$) but more precise.
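The extraction of $`e/h`$ via eq. (2) amounts to a one-parameter fit; the sketch below reproduces the procedure on illustrative pseudo-data generated around $`e/h=1.37`$, since the individual values of Table 2 are not listed here.

```python
# A hedged re-derivation of the e/h fit of eq. (2); the data points are
# illustrative placeholders generated from e/h = 1.37 with small scatter,
# not the measured Table 2 values.
import numpy as np
from scipy.optimize import curve_fit

def e_over_pi(E, e_h):
    return e_h / (1.0 + (e_h - 1.0) * 0.11 * np.log(E))

E = np.array([20., 50., 100., 150., 200., 300.])      # beam energies [GeV]
data = e_over_pi(E, 1.37) + np.random.default_rng(0).normal(0, 0.005, E.size)

(e_h_fit,), cov = curve_fit(e_over_pi, E, data, p0=[1.3])
print(f"e/h = {e_h_fit:.3f} +- {np.sqrt(cov[0, 0]):.3f}")
```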
Compared with the value for the stand-alone prototype of the Tile calorimeter, $`e/h=1.23\pm 0.02`$ at an incident particle angle of $`10^{\circ }`$, we obtained a larger value of $`e/h`$. This can be explained if the LArg calorimeter is more non-compensating than the Tile calorimeter, since $`e/h`$ for the Combined setup represents an average of $`e/h`$ over both calorimeters. ## 6 Conclusion To measure the e/$`\pi `$ and $`e/h`$ ratios for the prototype of the ATLAS Barrel Combined Calorimeter, the responses to pions and electrons were studied. The result, $`e/h=1.37\pm 0.01\pm 0.02`$, is in good agreement with the results of but more precise. ## 7 Acknowledgments This work is the result of the efforts of many people from the ATLAS Collaboration. The authors are greatly indebted to the whole Collaboration for the test beam setup and data taking. The authors are grateful to M. Nessi and J. Budagov for their attention and support of this work. We are indebted to M. Cobal, D. Costanzo, B. Lund-Jensen and V.V. Vinogradov for valuable discussions and constructive advice.
no-problem/9911/gr-qc9911080.html
ar5iv
text
# Wormholes and Flux Tubes in the 7D Gravity on the Principal Bundle with SU(2) Gauge Group as the Extra Dimensions ## I Introduction The presence of the off-diagonal components of the multidimensional (MD) metric can essentially change the properties of the corresponding MD gravitational equations. The physical reason for this is evident: these components of the MD metric are connected with the physical gauge fields (electromagnetic or Yang-Mills gauge fields), and excluding the physical fields can lead to this essential change. Let us repeat the following theorem, which is useful for understanding how many degrees of freedom we have , : Let $`G`$ be a structural group of the principal bundle. Then there is a one-to-one correspondence between the $`G`$-invariant metrics $$ds^2=\phi (x^\alpha )\sum _{a=5}^{\mathrm{dim}G}\left[\sigma ^a+A_\mu ^a(x^\alpha )dx^\mu \right]^2+g_{\mu \nu }(x^\alpha )dx^\mu dx^\nu $$ (1) on the total space $`𝒳`$ and the triples $`(g_{\mu \nu },A_\mu ^a,\phi )`$. Here $`g_{\mu \nu }`$ is the 4D Einstein's pseudo - Riemannian metric on the base; $`A_\mu ^a`$ are the gauge fields of the group $`G`$ (the nondiagonal components of the multidimensional metric); $`\phi \gamma _{ab}`$ is the symmetric metric on the fibre ($`\sigma _a=\gamma _{ab}\sigma ^b,\gamma _{ab}=\delta _{ab}`$, $`a=5,\mathrm{\dots },\mathrm{dim}G`$ is the index on the fibre and $`\mu =0,1,2,3`$ is the index on the base). According to this theorem we have the following independent degrees of freedom: the scalar field $`\phi (x^\alpha )`$, the gauge fields $`A_\mu ^a(x^\alpha )`$ and the 4D metric $`g_{\mu \nu }(x^\alpha )`$. Note that all fields in this MD gravity can depend only on the spacetime points (points on the base), as the total space is $`G`$-invariant. Such a kind of MD gravity can easily solve the following problem: why the physical degrees of freedom do not depend on the extra dimensions (ED). In addition to this, a topological structure of the ED is given, which leads to a decrease in the number of equations connected with the ED in comparison with an ordinary MD gravity with non-fixed ED. Usually, the number of these equations is too large: this results in essential problems with the compactification. Therefore, it is necessary to introduce some external fields in order to have compactified ED. In our case the vacuum MD gravitational equations are sufficient to obtain the solutions with the compactified ED. In MD gravity on the principal bundle the preferable choice of the coordinate transformations, as in the initial interpretation of Kaluza-Klein gravity, is $`y^{\prime a}`$ $`=`$ $`y^{\prime a}(y^a)+f^a\left(x^\mu \right),`$ (2) $`x^{\prime \mu }`$ $`=`$ $`x^{\prime \mu }\left(x^\mu \right).`$ (3) here $`y^a`$ are the coordinates on the fibre and $`x^\mu `$ are the coordinates on the base. The first term in (2) means that the choice of coordinate system on the fibre is arbitrary. The second term indicates that in addition we can move the origin of the coordinate system on each fibre by the value $`f^a(x^\mu )`$. It is well known that such a transformation law (2) leads to a local gauge transformation for the appropriate non-Abelian field (see for an overview ). This means that the coordinate transformations (2) and (3) are the most natural transformations for the MD gravity on the principal bundle.
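As a minimal worked example of this statement in the Abelian (5D) case: shifting the fibre coordinate, $`\chi \to \chi +f(x^\mu )`$, and demanding that the metric keep the form (6) below with respect to the new coordinate forces $$A_\mu \to A_\mu -\partial _\mu f,$$ which is precisely a U(1) gauge transformation; for a non-Abelian fibre the same bookkeeping yields the full inhomogeneous gauge transformation of $`A_\mu ^a`$.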
Certainly we can perform much more general coordinate transformations: $`y^{\prime a}`$ $`=`$ $`y^{\prime a}(y^a,x^\mu ),`$ (4) $`x^{\prime \mu }`$ $`=`$ $`x^{\prime \mu }(y^a,x^\mu ).`$ (5) But in this case we will mix the points of the fibre <sup>§</sup><sup>§</sup>§which are the elements of some group. with the points of the base, which are ordinary spacetime points. Then the new coordinates $`y^{\prime a}`$ and $`x^{\prime \mu }`$ are not coordinates along the fibre and the base of the given bundle. We can introduce new coordinates and eliminate the $`A_\mu ^a`$, but in this case the initial 4D metric can change very radically. For example, an initially static, spherically symmetric 4D metric can become nonstatic and nondiagonal. This situation is very clear in the initial Kaluza interpretation of 5D gravity with a constant, nonvariable $`G_{55}`$ component of the metric. In this case we have ordinary vacuum electrogravity, which is equivalent to 5D gravity with a nondynamical $`G_{55}`$ component of the MD metric. We can then choose new coordinates $`x^B`$ so that the initial 5D metric $$ds^2=\left(d\chi +A_\mu dx^\mu \right)^2+g_{tt}dt^2+g_{rr}dr^2+r^2\left(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2\right)$$ (6) becomes $$ds^2=G_{55}d\chi _{}^{}{}_{}{}^{2}+g_{\mu \nu }dx^\mu dx^\nu $$ (7) Evidently metric (7) will be nonstatic, nondiagonal and dependent on the 5<sup>th</sup> coordinate. Of course, the new coordinate system $`x^{\prime A}`$ ($`A=0,1,2,3,5`$) is worse than the initial $`x^A`$. This remark allows us to say that an MD metric with off-diagonal components $`G_\mu ^a`$ can have unusual properties in comparison with the solutions where $`G_\mu ^a=0`$. ## II 7D ansatz and equations Here we will consider the MD gravity on the principal bundle with the SU(2) structural group (this is the gauge group of the weak interaction). In this case the extra dimensions form the SU(2) group <sup>\**</sup><sup>\**</sup>\**topologically this means that the ED are the $`S^3`$ sphere. and the 4D physical spacetime is the base of this bundle. We will search for a solution for the following 7D metric $$ds^2=\frac{\mathrm{\Sigma }^2(r)}{u^3(r)}dt^2-dr^2-a(r)\left(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2\right)-r_0^2u(r)\left(\sigma ^a+A_\mu ^adx^\mu \right)^2$$ (8) Here $`r_0`$ is some constant and $`\sigma ^a`$ ($`a=5,6,7`$) are the Maurer-Cartan forms satisfying the relation $`d\sigma ^a=ϵ_{bc}^a\sigma ^b\sigma ^c`$: $`\sigma ^1`$ $`=`$ $`{\displaystyle \frac{1}{2}}(\mathrm{sin}\alpha d\beta -\mathrm{sin}\beta \mathrm{cos}\alpha d\gamma ),`$ (9) $`\sigma ^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}(\mathrm{cos}\alpha d\beta +\mathrm{sin}\beta \mathrm{sin}\alpha d\gamma ),`$ (10) $`\sigma ^3`$ $`=`$ $`{\displaystyle \frac{1}{2}}(d\alpha +\mathrm{cos}\beta d\gamma ),`$ (11) where $`0\le \beta \le \pi ,0\le \gamma \le 2\pi ,0\le \alpha \le 4\pi `$ are the Euler angles.
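As a quick consistency check on the forms (9)-(11), one can verify symbolically that $`d\sigma ^3`$ is proportional to $`\sigma ^1\sigma ^2`$. The following sketch is our own illustration (assuming sympy; the sign convention for $`ϵ_{bc}^a`$ is left open). One-forms are represented by their coefficient triples on the basis $`(d\alpha ,d\beta ,d\gamma )`$, and two-forms on $`(d\alpha d\beta ,d\alpha d\gamma ,d\beta d\gamma )`$.

```python
import sympy as sp

al, be, ga = sp.symbols('alpha beta gamma')

# 1-forms as coefficient triples on the basis (d alpha, d beta, d gamma)
s1 = (0, sp.sin(al)/2, -sp.sin(be)*sp.cos(al)/2)
s2 = (0, sp.cos(al)/2,  sp.sin(be)*sp.sin(al)/2)
s3 = (sp.Rational(1, 2), 0, sp.cos(be)/2)

def ext_d(w):
    """Exterior derivative of a 1-form; components on (da^db, da^dg, db^dg)."""
    f, g, h = w
    return (sp.simplify(sp.diff(g, al) - sp.diff(f, be)),
            sp.simplify(sp.diff(h, al) - sp.diff(f, ga)),
            sp.simplify(sp.diff(h, be) - sp.diff(g, ga)))

def wedge(u, v):
    """Wedge product of two 1-forms, on the same 2-form basis."""
    return (sp.simplify(u[0]*v[1] - u[1]*v[0]),
            sp.simplify(u[0]*v[2] - u[2]*v[0]),
            sp.simplify(u[1]*v[2] - u[2]*v[1]))

print(ext_d(s3))      # (0, 0, -sin(beta)/2)
print(wedge(s1, s2))  # (0, 0, sin(beta)/4)
# so d(sigma^3) = -2 sigma^1 ^ sigma^2: proportional, confirming the
# Maurer-Cartan structure up to the sign/normalization convention.
```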
We choose the potential $`A_\mu ^a`$ in the ordinary monopole-like form $`A_\theta ^a`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1-f(r))\{\mathrm{sin}\varphi ;\mathrm{cos}\varphi ;0\},`$ (12) $`A_\varphi ^a`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1-f(r))\mathrm{sin}\theta \{\mathrm{cos}\varphi \mathrm{cos}\theta ;\mathrm{sin}\varphi \mathrm{cos}\theta ;\mathrm{sin}\theta \},`$ (13) $`A_t^a`$ $`=`$ $`v(r)\{\mathrm{sin}\theta \mathrm{cos}\varphi ;\mathrm{sin}\theta \mathrm{sin}\varphi ;\mathrm{cos}\theta \},`$ (14) Let us introduce the color electric $`E_i^a`$ and magnetic $`H_i^a`$ fields $`E_i^a`$ $`=`$ $`F_{ti}^a,`$ (15) $`H_i^a`$ $`=`$ $`\sqrt{\gamma }ϵ_{ijk}\sqrt{g_{tt}}F^{ajk}`$ (16) Here the field strength components are defined via $`F_{\mu \nu }^a=A_{\nu ,\mu }^a-A_{\mu ,\nu }^a+ϵ_{bc}^aA_\mu ^bA_\nu ^c`$, $`\gamma `$ is the determinant of the 3D spatial metric, and $`i,j=1,2,3`$ are the spatial indices. In our case we have $`E_r\propto v^{\prime },E_{\theta ,\varphi }\propto vf,`$ (17) $`H_r\propto {\displaystyle \frac{\mathrm{\Sigma }}{u^{3/2}}}{\displaystyle \frac{1-f^2}{a}},H_{\theta ,\varphi }\propto f^{\prime }`$ (18) In order to have a wormhole-like (WH) solution we must demand that the functions $`\mathrm{\Sigma }(r),u(r),a(r),f(r)`$ are even functions and $`v(r)`$ is an odd function. This means that at the origin ($`r=0`$) we have only the radial $`E_r`$ and $`H_r`$ fields, which indicates the presence of a flux tube of color electric and magnetic fields across the throat of this WH-like solution. Substitution into the 7D gravitational equations <sup>††</sup><sup>††</sup>††The deduction of these gravitational equations for any group $`G`$ is given in the Appendix. $`R_\mu ^{\overline{A}}`$ $`=`$ $`0`$ (19) $`R_{\overline{5}}^{\overline{5}}+R_{\overline{6}}^{\overline{6}}+R_{\overline{7}}^{\overline{7}}`$ $`=`$ $`0`$ (20) leads to the following system of equations $`R_{\overline{2}\overline{2}}-2R_{\overline{3}\overline{3}}+R\propto {\displaystyle \frac{\mathrm{\Sigma }^{\prime \prime }}{\mathrm{\Sigma }}}+{\displaystyle \frac{a^{\prime }\mathrm{\Sigma }^{\prime }}{a\mathrm{\Sigma }}}-{\displaystyle \frac{4}{r_0^2u}}-{\displaystyle \frac{r_0^2u}{4a}}f^2-{\displaystyle \frac{r_0^2u}{8a^2}}\left(f^2-1\right)^2`$ $`=`$ $`0`$ (21) $`R_{\overline{2}\overline{2}}-{\displaystyle \frac{1}{2}}R\propto 24{\displaystyle \frac{\mathrm{\Sigma }^{\prime }u^{\prime }}{\mathrm{\Sigma }u}}-24{\displaystyle \frac{u^{\prime 2}}{u^2}}+16{\displaystyle \frac{a^{\prime }\mathrm{\Sigma }^{\prime }}{a\mathrm{\Sigma }}}+4{\displaystyle \frac{a^{\prime 2}}{2a^2}}-{\displaystyle \frac{16}{a}}+`$ (22) $`4{\displaystyle \frac{r_0^2u^4}{\mathrm{\Sigma }^2}}v^2-2{\displaystyle \frac{r_0^2u}{a}}f^2-8{\displaystyle \frac{r_0^2u^4}{a\mathrm{\Sigma }^2}}f^2v^2+{\displaystyle \frac{r_0^2u}{a^2}}\left(f^2-1\right)^2-{\displaystyle \frac{48}{ur_0^2}}`$ $`=`$ $`0`$ (23) $`R_{\overline{3}\overline{3}}=R_{\overline{4}\overline{4}}\propto {\displaystyle \frac{a^{\prime \prime }}{a}}+{\displaystyle \frac{a^{\prime }\mathrm{\Sigma }^{\prime }}{a\mathrm{\Sigma }}}-{\displaystyle \frac{2}{a}}+{\displaystyle \frac{r_0^2u}{4a}}f^2-{\displaystyle \frac{r_0^2u^4}{a\mathrm{\Sigma }^2}}f^2v^2+{\displaystyle \frac{r_0^2u}{4a^2}}\left(f^2-1\right)^2`$ $`=`$ $`0`$ (24) $`R_{\overline{a}}^{\overline{a}}\propto {\displaystyle \frac{u^{\prime \prime }}{u}}+{\displaystyle \frac{u^{\prime }\mathrm{\Sigma }^{\prime }}{u\mathrm{\Sigma }}}-{\displaystyle \frac{u^{\prime 2}}{u^2}}+{\displaystyle \frac{u^{\prime }a^{\prime }}{ua}}-{\displaystyle \frac{4}{r_0^2u}}+`$ (25) $`{\displaystyle \frac{r_0^2u^4}{3\mathrm{\Sigma }^2}}v^2-{\displaystyle \frac{r_0^2u}{6a}}f^2+{\displaystyle \frac{2r_0^2u^4}{3a\mathrm{\Sigma }^2}}f^2v^2-{\displaystyle \frac{r_0^2u}{12a^2}}\left(f^2-1\right)^2`$ $`=`$ $`0`$ (26) $`R_{\overline{1}\overline{5}}\propto v^{\prime \prime }+v^{\prime }\left({\displaystyle \frac{\mathrm{\Sigma }^{\prime }}{\mathrm{\Sigma }}}+4{\displaystyle \frac{u^{\prime }}{u}}+{\displaystyle \frac{a^{\prime }}{a}}\right)-{\displaystyle \frac{2}{a}}vf^2`$ $`=`$ $`0,`$ (27) $`R_{\overline{3}\overline{5}}\propto f^{\prime \prime }+f^{\prime }\left({\displaystyle \frac{\mathrm{\Sigma }^{\prime }}{\mathrm{\Sigma }}}+4{\displaystyle \frac{u^{\prime }}{u}}\right)+4{\displaystyle \frac{u^3}{\mathrm{\Sigma }^2}}fv^2-{\displaystyle \frac{f}{a}}\left(f^2-1\right)`$ $`=`$ $`0`$ (28) Note that equations (27) and (28) are the “Yang-Mills” equations. This system of ordinary differential equations is extremely difficult for analytical investigation <sup>‡‡</sup><sup>‡‡</sup>‡‡although there exists a closed-form solution for the simplest case when $`a(r)=const`$ (the flux tube solution ). Therefore we will search for a numerical solution of this system. ## III Numerical Investigation Now we must specify the initial conditions. At the origin ($`r=0`$) we can expand all functions in the following manner: $`a(x)`$ $`=`$ $`1+{\displaystyle \frac{a_2}{2}}x^2+\mathrm{\dots },`$ (29) $`\mathrm{\Sigma }(x)`$ $`=`$ $`\mathrm{\Sigma }_0+{\displaystyle \frac{\mathrm{\Sigma }_2}{2}}x^2+\mathrm{\dots },`$ (30) $`u(x)`$ $`=`$ $`u_0+{\displaystyle \frac{u_2}{2}}x^2+\mathrm{\dots },`$ (31) $`v(x)`$ $`=`$ $`v_1x+{\displaystyle \frac{v_3}{6}}x^3+\mathrm{\dots },`$ (32) $`f(x)`$ $`=`$ $`f_0+{\displaystyle \frac{f_2}{2}}x^2+\mathrm{\dots },`$ (33) Here we introduce the dimensionless coordinate $`x=r/\sqrt{a_0}`$ ($`a_0=a(0)`$) and redefine $`a(x)/a_0\to a(x)`$, $`\sqrt{a_0}v(x)\to v(x)`$, $`r_0^2/a_0\to r_0^2`$. Then we can rescale the time and the constant $`r_0`$ so that $`\mathrm{\Sigma }_0=u_0=1`$. Thus we have only the following initial conditions for the numerical calculations $`a_0=1,u_0=1,\mathrm{\Sigma }_0=1,v_0=0,f_0=f_0,`$ (34) $`a_0^{\prime }=0,u_0^{\prime }=0,\mathrm{\Sigma }_0^{\prime }=0,v_0^{\prime }=v_1,f_0^{\prime }=0,`$ (35) We see that our system depends on only two parameters: $`f_0`$ and $`v_1`$ <sup>\**</sup><sup>\**</sup>\**of course the 7D metric depends on the three parameters: $`a_0`$, $`f_0`$ and $`v_1`$. The constraint equation (23) applied to the initial data gives us $$r_0^2=\frac{1+\sqrt{1+3\left[v_1^2+\frac{1}{4}\left(f_0^2-1\right)^2\right]}}{\left[v_1^2+\frac{1}{4}\left(f_0^2-1\right)^2\right]}$$ (36) This equation is written in dimensionless variables. The numerical calculations of Eqs. (21)-(28) are presented in Figs. 1-5. The initial data for these calculations are the following: $`f_0=0.2`$ and $`v_1=0.3,\mathrm{\hspace{0.33em}0.5},\mathrm{\hspace{0.33em}0.6},\mathrm{\hspace{0.33em}0.61},\mathrm{\hspace{0.33em}0.615},\mathrm{\hspace{0.33em}1.0},\mathrm{\hspace{0.33em}2.0}`$. ## IV Physical discussion Figs. 1-5 show that, depending on the relation between $`f_0`$ and $`v_1`$, there are two types of solutions: 1. There exists some value of the radial coordinate $`r_1`$ such that $`a(\pm r_1)=0`$, $`u(\pm r_1)=s(\pm r_1)=\mathrm{\infty }`$. Probably this means that at the points $`r=\pm r_1`$ we have a singularity. We can call this 4D part of the MD metric a gravitational flux tube, as we have a flux of color electric/magnetic field between the points $`r=-r_1`$ and $`r=+r_1`$ where the color electric/magnetic charges are located. 2. There exists some value of the radial coordinate $`r_2`$ such that $`a(\pm r_2),s(\pm r_2)<\mathrm{\infty }`$, $`u(\pm r_2)=0`$. The value $`u(\pm r_2)=0`$ means that the interval $`ds^2=0`$ on the hypersurface $`r=\pm r_2,t,\theta ,\varphi =const`$.
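(As an aside on the numerics of Sec. III: the constraint (36), fixing $`r_0`$ from the two free parameters, is easy to evaluate. A minimal Python sketch of our own, not part of the original analysis, for the initial data used in Figs. 1-5:)

```python
import math

def r0_squared(f0, v1):
    """Dimensionless r_0^2 from the constraint equation (36)."""
    X = v1 ** 2 + 0.25 * (f0 ** 2 - 1.0) ** 2
    return (1.0 + math.sqrt(1.0 + 3.0 * X)) / X

f0 = 0.2
for v1 in (0.3, 0.5, 0.6, 0.61, 0.615, 1.0, 2.0):
    print(f"v1 = {v1:5.3f}  ->  r0^2 = {r0_squared(f0, v1):6.3f}")
```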
Since the value of $`a(\pm r_2)`$ is finite, we can call this type of solution WH-like. This type of solution is close to the 5D case investigated in . In this Ref. it was shown that the 5D vacuum Kaluza-Klein theory for the metric ($`r_0`$ is the radius of the U(1) gauge group, $`Q=nr_0`$ is the magnetic charge, $`\chi `$ is the 5<sup>th</sup> coordinate) $`ds^2`$ $`=`$ $`e^{2\nu (r)}dt^2-r_0^2e^{2\psi (r)-2\nu (r)}\left[d\chi +\omega (r)dt+n\mathrm{cos}\theta d\varphi \right]^2`$ (37) $``$ $`-dr^2-a(r)(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2),`$ (38) has the following solutions: 1. E $`>`$ H <sup>\*†</sup><sup>\*†</sup>\*†$`E=q/a`$ and $`H=Q/a`$ are the electric and magnetic charges. A WH-like solution located between two $`ds^2=0`$ hypersurfaces. 2. E = H. An infinite flux tube with constant electric and magnetic fields. 3. E $`<`$ H. A finite flux tube located between two singularities at the points $`r=\pm r_1`$, where the electric and magnetic charges are located. In the first 5D case the solution exists for $`|r|>r_0`$ where $`e^{2\nu }<0`$ and the metric is asymptotically flat. The whole construction can be interpreted as a Euclidean WH with a Lorentzian throat <sup>\*‡</sup><sup>\*‡</sup>\*‡The words Euclidean and Lorentzian can be exchanged. The question is whether this situation persists in the 7D case for the second type of solution. In Refs. , a similar idea was investigated concerning the change of the signature of the 4D metric in some MD metric on the regular $`T`$-hole horizon. In our case, too, the numerical calculations show that on the $`(f_0,v_1)`$ plane there is a curve that separates the regions with different solution types. Evidently, in a first rough approximation the equation for this curve is $$\frac{a_0^{\prime \prime }}{a_0}=\frac{2}{a_0}-\frac{r_0^2u_0}{a_0^2}\left(f_0^2-1\right)^2=0$$ (39) In Fig. 6 these regions with the different types of solutions are shown. In the $`(f_0,v_1)`$ plane we can single out a few cases allowing a more detailed analysis. ### A $`f_0=\pm 1,v_1=0`$ Immediately from the “Yang-Mills” equations (27), (28) we see that $$v(r)=0,f(r)=\pm 1$$ (40) All terms with gauge fields in (21)-(26) vanish, and at the origin $`(r=0)`$ the equation for the initial data gives us $$\frac{1}{a_0}+\frac{3}{u_0r_0}=0$$ (41) As $`a(r),u(r)>0`$ this relation cannot be satisfied. This allows us to say that in the absence of the gauge fields (off-diagonal components of the MD metric) the gravitational flux tubes and WH-like solutions do not exist. ### B $`f=0`$ In this case equation (27) is easy to integrate: $$v^{\prime }=\frac{q\mathrm{\Sigma }}{r_0^2au^4}$$ (42) From (17), (18) we see that there are only the radial color components of the electric and magnetic fields. It is very interesting to note that there is a very simple analytical solution in the case $`a(r)=const`$ . The solution is $`a(r)`$ $`=`$ $`{\displaystyle \frac{2q^2}{7}}={\displaystyle \frac{r_0^2}{8}}=const`$ (43) $`\mathrm{\Sigma }(r)`$ $`=`$ $`\mathrm{cosh}\left({\displaystyle \frac{7r}{2\sqrt{2}q}}\right)`$ (44) $`v(r)`$ $`=`$ $`{\displaystyle \frac{\sqrt{2}}{r_0}}\mathrm{sinh}\left({\displaystyle \frac{7r}{2\sqrt{2}q}}\right)`$ (45) $`u(r)`$ $`=`$ $`1`$ (46) $`q`$ $`=`$ $`\sqrt{{\displaystyle \frac{7a}{2}}}`$ (47) We can apply the following gauge transformation to the potentials of Eqs.
(12)-(14) with $`f(r)=0`$ $$A_\mu ^{\prime }=S^{-1}A_\mu S-i\left(\partial _\mu S^{-1}\right)S$$ (48) where $$S=\left(\begin{array}{cc}\mathrm{cos}\frac{\theta }{2}& e^{i\varphi }\mathrm{sin}\frac{\theta }{2}\\ -e^{-i\varphi }\mathrm{sin}\frac{\theta }{2}& \mathrm{cos}\frac{\theta }{2}\end{array}\right)$$ (49) With this gauge transformation we find that the gauge potentials become $`A_\theta ^a`$ $`=`$ $`(0;0;0)`$ (50) $`A_\phi ^a`$ $`=`$ $`(\mathrm{cos}\theta -1)(0;0;1)`$ (51) $`A_t^a`$ $`=`$ $`v(r)(0;0;1)`$ (52) In fact this means that the potential (50)-(52) is an Abelian one. This solution corresponds to the points $`x_1`$ and $`x_2`$ in Fig. 6, and we can call it an infinite gravitational flux tube. It is very close to the Levi-Civita-Robinson-Bertotti solution , , in 4D electrogravity. This remark poses a very interesting problem: what are the other solutions on the curves $`C_1`$ and $`C_2`$ (Fig. 6)? Are these solutions infinite flux tubes or something else? Unfortunately, here the numerical investigation is more difficult because of a sensitive dependence on the initial data. This fact also becomes clear from the following consideration: these lines are the borderlines between two different domains of attraction, and there such sensitive behaviour is quite typical. Concerning the question of which other Lie groups can lead to similar results, we can say that for any $`N>2`$ there is an inclusion SU$`(2)\subset \mathrm{SU}(N)`$. This means that for any such SU$`(N)`$ we can use the ansatz (12)-(14). As is well known, all other compact semi-simple Lie groups (besides U(1), of course) also have SU(2) as one of their subgroups. Therefore, for all these Lie groups similar statements are also valid. For other Lie groups it is complicated to find a spherically symmetric ansatz for the off-diagonal components of the MD metric. We would like to point out the following dimensional reduction expression for the Ricci scalar $`R(E)`$ on the total space of the principal bundle $`{\displaystyle \int d^4xd^dy\sqrt{|detG_{AB}|}R(E)}=`$ (53) $`V_G{\displaystyle \int d^4x\sqrt{|g|}\phi ^{d/2}\left[R(M)+R(G)-\frac{1}{4}\phi F_{\mu \nu }^aF_a^{\mu \nu }+\frac{1}{4}d(d-1)\partial _\mu \phi \partial ^\mu \phi \right]}`$ (54) Here $`R(M),R(G)`$ are the Ricci scalars of the base and the structural group of the principal bundle, respectively; $`V_G`$ is the volume of the group $`G`$, cf. , eq. (8.12). We see immediately that for an arbitrary Lie group $`G`$ we face the problem of how to obtain the appropriate ansatz for the gauge field $`A_\mu ^a`$. In this context it should be noted that in the models discussed here a generalized Birkhoff theorem (cf. ) is also valid. This means that a spherically symmetric solution possesses a further isometry without additional assumptions. Of course, this result rests on the symmetries chosen for our model. ## V The possible physical applications Probably the most interesting of the above-mentioned solutions in 7D gravity on the principal bundle with the SU(2) structural group is the WH-like solution. We recall that if the $`G_{55},G_{66},G_{77}`$ components of the MD metric are not dynamical variables, then this MD gravity is equivalent to pure 4D gravity + SU(2) Yang-Mills theory. It can be supposed that in the Universe there exist regions where the $`G_{55},G_{66},G_{77}`$ metric components are nondynamical variables, and other regions where these components are dynamical variables. The composite WH can be of such a kind.
This means that we have the above-mentioned WH-like solution as the throat of a composite WH, with two 4D Yang-Mills black holes attached to this throat (see Fig. 7). Such a construction can polarize the space-time foam in the following way . Without the presence of the 7D (5D) throat, the handles of the space-time foam are located in a disordered manner in the space-time (Fig. 8a). But after the appearance of the throat of the composite WH, the location of these handles becomes ordered (Fig. 8b). Such a model can be a geometrical model of the renormalization of the color/electric charge (the case of the electric charge is described in ). This composite WH with the polarization of the space-time foam is a continuation of Wheeler’s idea about “charge without charge” and “mass without mass” in vacuum gravity. In this connection we can recall Wheeler’s quote in Ref. : “We are therefore led to consider the view that the electron is nothing but a collective state of excitation of the foam-like medium $`\mathrm{\dots }`$ In other words the electron is not a natural starting point for the description of nature, according to the present reinterpretation of the views of Lorentz. Instead it is a first order correction to vacuum physics. That vacuum, that zero order state of affairs, with its enormous concentrations of electromagnetic energy and multiply-connected topologies, has to be described properly before one has the starting point for a proper perturbation theoretic development.” Maybe all these words can be applied to a neutrino with mass, i.e. the neutrino is a wormhole in the polarized space-time foam. In these cases the problem of the geometric description of spin remains to be solved. A similar 6-dimensional model, called $`M`$-fluxbranes, has been discussed recently in . It is interesting to note that the above-mentioned gravitational flux tube solutions cannot be the throat of the composite WH, as they have a singularity in place of the $`ds^2=0`$ hypersurface of the WH-like solution. This imposes some restriction on the possible relation between color electric and magnetic fields in the composite WH. ## Acknowledgements We would like to thank A. Kirillov and B. Schmidt for useful comments. VD is grateful for financial support by a Georg Forster Research Fellowship from the Alexander von Humboldt Foundation. HJS thanks the DFG for financial support. ## A Gravitational equations for the gravity on the principal bundle We consider the MD gravity on the principal bundle with the structural group $`G`$ . In this case the extra dimensions are the group $`G`$, and the 4D physical spacetime is the base of this bundle.
According to the above-mentioned theorem, the MD metric on the total space is $$ds^2=\mathrm{\Sigma }_{\overline{A}}\mathrm{\Sigma }^{\overline{A}}$$ (A1) where $`\mathrm{\Sigma }^{\overline{A}}`$ $`=`$ $`h_B^{\overline{A}}dx^B,`$ (A2) $`\mathrm{\Sigma }^{\overline{a}}`$ $`=`$ $`\phi (x^\alpha )\sigma ^{\overline{a}}+h_\mu ^{\overline{a}}(x^\alpha )dx^\mu ,`$ (A3) $`\sigma ^{\overline{a}}`$ $`=`$ $`h_b^{\overline{a}}dx^b,`$ (A4) $`\mathrm{\Sigma }^{\overline{\mu }}`$ $`=`$ $`h_\nu ^{\overline{\mu }}(x^\alpha )dx^\nu `$ (A5) Here $`x^B`$ are the coordinates on the total space ($`B=0,1,2,3,5,\mathrm{\dots },N`$ is the MD index, dim $`(G)=N`$), $`x^a`$ are the coordinates on the group $`G`$ ($`a=5,\mathrm{\dots },N`$), $`x^\mu `$ ($`\mu =0,1,2,3`$) are the coordinates on the base of the bundle, $`\overline{A}`$ is the $`N`$-bein index, $`h_B^{\overline{A}}`$ is the $`N`$-bein, and $`\sigma ^{\overline{a}}`$ are the 1-forms on the group $`G`$ satisfying $`d\sigma ^{\overline{a}}=f_{bc}^a\sigma ^{\overline{b}}\sigma ^{\overline{c}}`$, where $`f_{bc}^a`$ are the structural constants of the group $`G`$. We must note that the functions $`\phi ,h_\mu ^{\overline{a}},h_\nu ^{\overline{\mu }}`$ can depend only on the points $`x^\mu `$ on the base, as the fibres of our bundle are locally homogeneous spaces. The matrix $`h_B^{\overline{A}}`$ has the following form $$h_B^{\overline{A}}=\left(\begin{array}{cc}\phi h_b^{\overline{a}}& h_\mu ^{\overline{a}}\\ 0& h_\mu ^{\overline{\nu }}\end{array}\right)$$ (A6) The inverse matrix $`h_{\overline{A}}^B`$ is $$h_{\overline{A}}^B=\left(\begin{array}{cc}\phi ^{-1}h_{\overline{a}}^b& h_{\overline{\mu }}^b\\ 0& h_{\overline{\mu }}^\nu \end{array}\right)$$ (A7) here $`h_{\overline{\mu }}^b=-\phi ^{-1}h_{\overline{a}}^bh_\nu ^{\overline{a}}h_{\overline{\mu }}^\nu `$. We also see that we have only the following degrees of freedom: $`\phi (x^\alpha ),h_{\overline{\mu }}^b(x^\alpha )`$ and $`h_{\overline{\mu }}^\nu (x^\alpha )`$; $`h_{\overline{b}}^a`$ is given and is not varied. Varying with respect to $`h_{\overline{\mu }}^A=(h_{\overline{\mu }}^a,h_{\overline{\mu }}^\nu )`$ leads to the equations $$R_A^{\overline{\mu }}-\frac{1}{2}h_A^{\overline{\mu }}R=0$$ (A8) here $`\overline{A}=\overline{a}`$, $`\overline{\nu }`$, and $`R_B^{\overline{A}}`$ is the MD Ricci tensor. Let $`x^a`$ be the coordinates on the group $`G`$; then $$\phi \sigma ^{\overline{a}}=\phi h_b^{\overline{a}}dx^b$$ (A9) Varying with respect to $`\phi (x^\mu )`$ leads to the following result $$\frac{\delta }{\delta \phi }\left(hR\right)=\frac{\delta \left(h_{\overline{a}}^b/\phi \right)}{\delta \phi }\frac{\delta \left(hR\right)}{\delta \left(h_{\overline{a}}^b/\phi \right)}=-\frac{1}{\phi ^2}h_{\overline{a}}^b\left(R_b^{\overline{a}}-\frac{1}{2}h_b^{\overline{a}}R\right)=0$$ (A10) Here $`h=\mathrm{det}h_B^{\overline{A}}`$ and $`R`$ is the MD Ricci scalar for the metric on the total space.
As $`h_{\overline{a}}^\nu =0`$ we can write $$h_{\overline{a}}^b\left(R_b^{\overline{a}}-\frac{1}{2}h_b^{\overline{a}}R\right)+h_{\overline{a}}^\nu \left(R_\nu ^{\overline{a}}-\frac{1}{2}h_\nu ^{\overline{a}}R\right)=h_{\overline{a}}^A\left(R_A^{\overline{a}}-\frac{1}{2}h_A^{\overline{a}}R\right)=0$$ (A11) From (A8) and (A11) we see that $$h_{\overline{a}}^A\left(R_A^{\overline{a}}-\frac{1}{2}h_A^{\overline{a}}R\right)+h_{\overline{\mu }}^A\left(R_A^{\overline{\mu }}-\frac{1}{2}h_A^{\overline{\mu }}R\right)=h_{\overline{B}}^A\left(R_A^{\overline{B}}-\frac{1}{2}h_A^{\overline{B}}R\right)=0.$$ (A12) This means that $$R=0.$$ (A13) Hence from (A11) we can write $$h_{\overline{a}}^AR_A^{\overline{a}}=R_{\overline{a}}^{\overline{a}}$$ (A14) Finally we have the following system of equations for the MD gravity on the principal bundle $`R_A^{\overline{\mu }}`$ $`=`$ $`0`$ (A15) $`R_{\overline{a}}^{\overline{a}}=R_{\overline{5}}^{\overline{5}}+\mathrm{\dots }+R_{\overline{N}}^{\overline{N}}`$ $`=`$ $`0.`$ (A16) We note that Eq. (A16) is an analog of the Brans-Dicke scalar equation. In addition we see that (A15) can be written as $$R_{\overline{\mu }A}=0\quad \mathrm{or}\quad h_{\overline{A}}^BR_{\overline{\mu }B}=R_{\overline{\mu }\overline{A}}=0$$ (A17) Figure captions Fig. 1. Function $`a(x)`$. Initial data: $`f_0=0.2`$; for curve 1: $`v_1=0.3`$, for curve 2: $`v_1=0.5`$, for curve 3: $`v_1=0.6`$, for curve 4: $`v_1=0.61`$, for curve 5: $`v_1=0.615`$, for curve 6: $`v_1=1.0`$, and for curve 7: $`v_1=2.0`$. Fig. 2. Function $`u(x)`$. Initial data as for Fig. 1. Fig. 3. Function $`\mathrm{\Sigma }(x)`$. Initial data as for Fig. 1. Fig. 4. Function $`f(x)`$. Initial data as for Fig. 1. Fig. 5. Function $`v(x)`$. Initial data as for Fig. 1. Fig. 6. The curves C<sub>1</sub> and C<sub>2</sub> separate the regions with different types of solutions. In the regions labelled 2, with $`a_0^{\prime \prime }<0`$, we have the flux tube solutions; in the regions labelled 1, with $`a_0^{\prime \prime }>0`$, the wormhole-like solutions. Fig. 7. The composite wormhole: the throat (1) is the MD wormhole-like solution, with two black holes (2) attached to its ends (3). Fig. 8. The polarization of the space-time foam in the presence of an MD insertion. Fig. 8a presents the unpolarized space-time foam; Fig. 8b presents the polarized space-time foam. 1 are the virtual wormholes, 2 is the multidimensional insertion.
# Discovery of circularly polarised radio emission from SS 433 ## 1. Introduction High-velocity synchrotron-emitting jets are commonly observed from Active Galactic Nuclei (AGN; e.g. Ostrowski et al. 1997; Shields 1999), and X-ray binary systems (XRBs) containing both black holes and neutron stars (e.g. Hjellming & Han 1995; Fender 1999 and references therein). The composition of the jet plasma, whether electron-proton (e<sup>-</sup>p<sup>+</sup>) or electron-positron (e<sup>-</sup>e<sup>+</sup>) remains a fundamental yet unanswered question in nearly all cases. SS 433 is one of the most celebrated of Galactic objects. The source is an XRB consisting, most probably, of a mass-losing star in a 13-day orbit with a stellar-mass black hole or neutron star. The system produces bright quasi-continuous radio jets which precess with a period of $`162.5`$ days (Vermeulen 1989; Vermeulen 1992; Brinkmann 1998). Moving optical emission lines (Margon 1984 and references therein) indicate a jet velocity of $`\beta =v/c=0.26`$, confirmed by both VLA and VLBI radio observations (Vermeulen 1992 and references therein). These optical lines, and their X-ray counterparts (Kotani et al. 1996 and references therein), are the only direct evidence for the existence of baryonic material (i.e. e<sup>-</sup>p<sup>+</sup>) in a jet from any X-ray binary. Recent progress towards determining the composition of the plasma in jets from AGN has been made by the detection and modelling of a circularly polarised radio component from the quasar 3C 279 (Wardle et al. 1998). Wilson & Weiler (1997) argue that radio circular polarization upper limits for the Crab supernova remnant come close to determining the positron content of the nebula. In addition Bower, Falcke & Backer (1999) and Sault & Macquart (1999) have recently detected circularly polarised radio emission from Sgr A\* at the Galactic Centre. In this paper we report the detection of circularly polarised radio emission from SS 433, the first from any XRB, at four radio frequencies. This observation has the potential to be the benchmark against which other jets from other XRBs may be compared in an effort to determine whether they produce e<sup>-</sup>p<sup>+</sup> or e<sup>-</sup>e<sup>+</sup> jets. ## 2. Observations and results The Australia Telescope Compact Array (ATCA) consists of six 22 m alt-az antennas near Narrabri, New South Wales (Frater, Brooks, & Whiteoak 1992). Each ATCA antenna is equipped with two wide-band feed horns, and each feed horn is equipped with two pairs of orthogonal-linear probes. This allows both orthogonal polarizations at two separate frequencies to be observed with each feed horn at the same time. Observations were made using the ATCA ‘continuum-mode’, which gives bandwidths of 128 MHz simultaneously at each of two frequencies, and four correlation products (XX, YY, XY, YX). The observations on 1999 May 10 were made centered on 4.80 GHz and 8.64 GHz. On 1999 May 20, observations at 1.38 GHz and 2.50 GHz were alternated with those at 4.80 GHz and 8.64 GHz, with a cycle time of $`25`$ minutes. Calibration sources, selected from the ATCA Calibrator Source Catalogue (Reynolds 1997), were observed every cycle. The ATCA primary calibrator PKS 1934–638 was used to calibrate the bandpass and to set the absolute flux scale (Reynolds 1994). Data reduction was performed with the Miriad package (Sault, Teuben, & Wright 1995). 
Prior to gain calibration, the xy-phase correction measured by a noise-diode system was applied, and the data were corrected for a small (otherwise unmodelled) field rotation due to the antenna’s pointing model (Kesteven 1997). The main calibration step involves simultaneously solving for time-dependent complex gains, time-independent residual xy-phase, and time-independent polarization leakages for each feed, as well as the linear and circular polarization of the calibration source. Good parallactic-angle coverage is required in order that the leakages and calibrator linear polarization can be decoupled in the solution process through the relative rotation of the feeds and parallactic angle (Conway & Kronberg 1969). Circular polarization data require calibration using the ‘strongly-polarized’ equations (Sault, Killeen, & Kesteven 1991), which include terms in leakage $`\times `$ (Stokes $`Q,U`$). To obtain such a solution requires a calibration source which has a few percent linear polarization (for this experiment PKS 1908-202), so that there is sufficient signal in the second-order terms. The accuracy of the leakage solution was estimated by repeating the entire calibration procedure for the 1999 May 20 observation, for all four frequencies, with a different calibrator (either PKS 1947+079 or PKS 2029+121). For SS433, the differences in the resultant circular polarization are of the same order as the errors expected from system noise alone; these differences have been incorporated into our circular polarization error estimates. All calibrators were imaged in circular polarization as a consistency check, and the results for 20 May, 1999, are shown in Fig 1. The leakage calibration of these calibrators was constrained so that the results gave zero circular polarization for PKS 1934-638 (an absolute circular polarization calibrator is needed for the interferometer – see Sault, Hamaker & Bregman 1996). A bias in the observed circular polarization of compact steep spectrum sources observed at the ATCA (Rayner et al. 1999) suggests that PKS 1934–638 is in fact circularly-polarized with $`V/I=+2.5\times 10^{-4}\pm 0.5\times 10^{-4}`$ at 4.8 GHz. An error of this order in the absolute circular polarization flux-scale is much smaller than the observed circular polarization of SS433, and does not affect the conclusions of this paper. The sign convention of V follows the IAU convention (Transactions of the IAU, vol 15B, (1973) 166), which conforms to the IEEE definition (1969, Standard Definitions of Terms for Radio Wave Propagation, IEEE Trans AP-17, 270). SS 433 was slightly resolved (i.e. $`\sim 1`$ arcsec) in the E-W direction in both total intensity and linear polarization, consistent with the observations of Hjellming & Johnston (1981). However the source is unresolved in Stokes V, consistent with all the circularly polarised flux arising in an unresolved point source. On 1999 May 10 we also observed the radio-jet X-ray binary GRS 1915+105; at the time the source had a flat spectrum between 4.80 – 8.64 GHz, at a flux density of $`\sim 20`$ mJy. We can place $`3\sigma `$ limits of 1.2% on both the fractional linear and circular polarization of the emission from this source. In addition to our ATCA observations, we also utilise data from the Green Bank Interferometer (GBI) variable source monitoring program. These data provide a long-term view of the state of activity of the radio source with daily flux density measurements at 2.3 and 8.3 GHz (see e.g. Waltman et al. 1991). ## 3.
Evolution of the propagating ejecta Both the GBI monitoring data and our ATCA observations are presented in Figure 2. Note that the lower two panels show the flux density measured in linear and circular polarization respectively (i.e. not the fractional spectra). A best fit to the observed flux density spectrum of the circular polarization has a spectral index of $`\alpha _V=-0.9\pm 0.1`$ (where $`V\propto \nu ^{\alpha _V}`$). Between the two epochs it is clear that the total radio flux dropped by $`\sim 40`$% and was dominated by the decaying stages of the two major flare events which peaked, at 2 GHz, on $`\sim `$ MJD 51304 and $`\sim `$ MJD 51312 respectively. Over the same period the linearly polarised flux may have increased slightly, and the circularly polarised flux appears to have remained constant. Our current understanding of the evolution of external and internal opacity effects in the outflow from SS 433 is summarised in Fig 3 (based upon Hjellming & Johnston 1981, Vermeulen 1992 and Paragi et al. 1999). Also indicated are our best estimates of the physical locations of the components corresponding to the two flare events, at the times of our observations on 1999 May 10 and May 20. We can presume that in our first epoch we only observed linear polarization from the first ejection, which contributed $`\sim 1/3`$ of the total flux density at this time. By our second epoch of observations both components contributed to the linearly polarised flux. Qualitatively this can explain the increase in the linearly polarised flux density between the two epochs, even though the total flux density decreased. Qualitatively similar behaviour is seen in the evolution of linear polarization in ejections from GRS 1915+105 (Fender et al. 1999), where the core also remains persistently (linearly) depolarised. ## 4. The origin of the circular polarization Circular polarization may be produced in a synchrotron-emitting plasma either directly as a result of the synchrotron process or via conversion of linear to circular polarization (Kennett & Melrose 1998 and references therein). Below we briefly discuss the possible interpretations of the circularly polarised radio emission from SS 433 in the context of these models. ### 4.1. Intrinsically circularly polarised synchrotron emission An electron of Lorentz factor $`\gamma `$ can be considered to radiate synchrotron emission primarily at a frequency $`\nu =4.2B_{\perp }\gamma ^2`$ MHz, where $`B_{\perp }`$ is the component of the magnetic field perpendicular to the line of sight, measured in Gauss. The fractional circular polarization $`m_c`$ (= Stokes $`|V|/I`$) produced intrinsically by synchrotron radiation is of order $`1/\gamma `$ (Legg & Westfold 1968), and hence the observed circular polarization spectrum should follow the relation $`m_c\propto \nu ^{-1/2}`$. For an observed $`m_c\sim 0.005`$ at 8640 MHz, we can estimate a magnetic field strength of $`\sim 50`$ mG (corresponding to $`\gamma \sim 230`$). This estimate is in order-of-magnitude agreement with magnetic field estimates for major ejections from SS 433 and other X-ray binaries (e.g. Hjellming & Han 1995, Fender et al. 1999). Note in addition that ‘Faraday depolarization’, which can severely reduce the observed linear polarization, will have no effect on circular polarization. It is therefore possible to have an optically thin synchrotron source which displays a large ratio of circular to linear polarization, as is observed in this case.
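This order-of-magnitude estimate is easily reproduced; the following few lines of Python are our own illustration and simply invert the two relations quoted above, $`m_c\sim 1/\gamma `$ and $`\nu =4.2B_{\perp }\gamma ^2`$ MHz:

```python
# Order-of-magnitude check of the intrinsic-synchrotron estimate.
# Assumes m_c ~ 1/gamma and nu[MHz] = 4.2 * B_perp[G] * gamma**2.
nu_mhz = 8640.0      # observing frequency
m_c = 0.005          # observed fractional circular polarization

gamma = 1.0 / m_c                    # of order 2e2, cf. gamma ~ 230 in the text
b_perp = nu_mhz / (4.2 * gamma**2)   # Gauss
print(f"gamma ~ {gamma:.0f}, B_perp ~ {1e3 * b_perp:.0f} mG")
# -> gamma ~ 200, B_perp ~ 51 mG, consistent with the ~50 mG quoted above
```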
However, reversals in the line-of-sight component of the magnetic field which are likely to occur within the source will require significantly higher magnetic fields. Similarly, a significant e<sup>+</sup> population within the source will also cause a reduction in the observed $`V`$, and require a significantly higher magnetic field. In addition, the $`\sim `$ constant circularly polarised flux density at 8640 MHz during a drop by $`\sim 40`$% in the total flux density does not seem compatible with an origin for the circular polarization in the optically thin ejecta which correspond to the two major flares. ### 4.2. Propagation-induced circular polarization Linearly polarised radiation can be converted to circularly polarised radiation during propagation through a plasma with elliptical (or linear) propagation modes (Pacholczyk 1973; Kennett & Melrose 1998 and references therein). In the event of the admixture of a small amount of relativistic plasma to a thermal plasma, propagation modes through the plasma will become slightly elliptical, and a spectrum of the form $`m_c\propto \nu ^{-1}`$ is predicted (Pacholczyk 1973). In the event of plasma which is dominated by highly relativistic particles, the propagation modes may approach linear, and a much steeper spectrum of the form $`m_c\propto \nu ^{-3}`$ is predicted (Kennett & Melrose 1998). Wardle et al. (1998) argued that the circular polarization observed from 3C 279 arose because of such propagation-induced ‘repolarization’. They concluded that the low-energy spectrum of the relativistic particles must extend to $`\gamma \ll 100`$, and therefore the jet must be composed of an e<sup>+</sup>e<sup>-</sup> plasma (if there were protons accompanying each emitting electron the kinetic energy of the jet would be several orders of magnitude greater than that which is seen to be dissipated at the head of the jet). We note that analogous considerations may also be applicable to SS 433, where X-ray and radio hotspots are observed within the W 50 radio nebula, presumably at the site of the jet : ISM interaction. Indeed, the kinetic energy in the jets of SS 433, if they are composed of an e<sup>-</sup>p<sup>+</sup> plasma, is $`\sim 10^{40}`$ erg s<sup>-1</sup> (Brinkmann et al. 1988), which is much greater than that which is directly observed to be dissipated on larger scales within W50 (this is one of the arguments against an e<sup>-</sup>p<sup>+</sup> plasma which is put forward by Kundt 1998). ### 4.3. Alternative mechanisms? Given the complexity of the SS 433 system and the high densities and magnetic field strengths likely to be present near to the base of the jet, alternative origins for the circularly polarised emission cannot be ruled out. These include gyrosynchrotron emission from low-energy electrons and cyclotron maser emission (Dulk 1985). We note that if the circularly polarised emission is associated with a region on the scale of one of the binary components (e.g. $`<10^{12}`$ cm) then the brightness temperature is $`\gtrsim 10^{10}`$ K at 8.64 GHz, and $`\gtrsim 10^{12}`$ K at 1.4 GHz. ## 5. Discussion It is of great importance to determine in which of the various emitting regions the circularly polarised flux density originated, in order to determine both the spectrum and relative strength of the emission. Our ATCA observations rule out an extension with the optically thin jets on scales of $`\sim 1`$ arcsec, and the lack of correlated variability in Stokes I and V argues against an association with the two major ejection events.
The underlying radio spectrum of SS 433 is typically around 250 mJy at 8 GHz, with an optically thin spectral index of about $`-0.7`$, corresponding to a quasi-continuous flow of matter into the jets. If associated with this component then the spectrum of the relative circular polarization could be as flat as the $`m_c\propto \nu ^{-1/2}`$ predicted for intrinsic synchrotron emission. However, if associated with this component, then why not with the two flares, which are presumably just enhancements of the same flow? Alternatively, Paragi et al. (1999) show that the innermost regions ($`\sim 50`$ mas) of the jets have a flat/inverted spectrum between 1 – 15 GHz. The core region has a peak flux density typically of $`\sim 100`$ mJy on VLBI scales. If associated with this region the fractional circular polarization may be as high as 10%, and the spectrum may steepen to the $`m_c\propto \nu ^{-1}`$ predicted for a mildly relativistic plasma (it seems unlikely that the $`m_c\propto \nu ^{-3}`$ spectrum can be recovered, unless the emission arises right in the binary core of the system, which has the most inverted radio spectrum). If associated with the inner regions, this implies a very large ratio of circular to linear polarization, as found for Sgr A\* by Bower et al. (1999). Further precessional-phase-resolved monitoring and spatial resolution of the regions responsible for the circular polarization are essential to further investigate this discovery. If, as seems likely, the circular polarization is associated with the synchrotron emitting ejecta, comparison with circular polarization measurements of other X-ray binaries has the potential to reveal, finally, the composition of the relativistic plasmas. RPF would like to acknowledge useful discussions with Ralph Spencer and Al Stirling, and to thank Mark Walker for the original suggestion to look for circular polarisation from X-ray binaries. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. RPF was funded during the period of this research by an EC Marie Curie Fellowship, ERBFMBICT 972436.
# An improved Rosenbluth Monte Carlo scheme for cluster counting and lattice animal enumeration ## 1 Introduction The enumeration of lattice animals is an important problem in a variety of physical problems including nucleation , percolation and branched polymers . A lattice animal is a cluster of $`N`$ connected sites on a lattice with given symmetry and dimensionality, and we seek to enumerate all distinct animals with a given number of sites. Exact enumeration has been carried out for small lattice animals using a variety of methods , but the methods become computationally prohibitive for large animals. Many techniques have been used to enumerate larger lattice animals including various Monte Carlo growth schemes , a constant fugacity Monte Carlo method , an incomplete enumeration method and reaction limited cluster-cluster aggregation . In the following paper we describe an improvement of a method proposed by one of the authors which was based on an extension of the scheme proposed by Rosenbluth and Rosenbluth for enumerating self-avoiding polymer chains. The central problem in using the Rosenbluth scheme for lattice animal enumeration is calculating the degeneracy of the clusters which are generated. In the method proposed by Care , the cluster growth was modified in a way which forced the degeneracy to be $`N!`$, where $`N`$ is the number of sites occupied by the lattice animal. However, the resulting algorithm was fairly complicated to implement. An alternative method of correcting for the degeneracy had been proposed by Pratt . In this latter scheme the correcting weight is more complicated to determine and must be recalculated at each stage of the cluster growth if results are sought at each cluster size. However, the Pratt scheme does not require any restriction on the growth of the cluster. In this paper we show that there is a class of Rosenbluth-like algorithms which yield a degeneracy of $`N`$ and which are straightforward to implement. The method provides an estimate of the number of lattice animals and can also yield estimates of any other desired properties of the animals, such as their radius of gyration or perimeter multiplicities . We describe and justify the algorithm in Section 2 and present results to illustrate the use of the method in Section 3. Conclusions are given in Section 4. ## 2 Algorithm Any algorithm suitable for the purpose of the enumeration of lattice animals using the Rosenbluth Monte Carlo approach must satisfy two important criteria. First of all, it has to be ergodic. That is to say, the algorithm should have a non-zero probability of sampling any given cluster shape. The second criterion relates to the degeneracy that is associated with each cluster and requires this to be determinable. This degeneracy arises from the number of different ways that the same cluster shape can be constructed by the algorithm. While it is easy to devise methods of growing clusters that meet the first requirement, the second condition is more difficult to satisfy. For many simple algorithms the calculation of the degeneracy, for every cluster, can be a more complex problem than the original task of enumerating the number of lattice animals. In the original Rosenbluth Monte Carlo approach of Care , this difficulty was overcome by ensuring that the degeneracy for all clusters of size $`N`$ was the same and equal to $`N!`$. However, to achieve this result the algorithm had to employ a somewhat elaborate procedure.
This made the implementation of the method rather complicated, as well as limiting its possible extension to the enumeration of other types of clusters. Here we shall consider an alternative algorithm which, while satisfying both of the above criteria, is considerably simpler than the algorithm proposed by Care. In Section 2.1 we describe the algorithm in its most basic form, before proving in Section 2.2 that the ergodicity and the degeneracy requirements are both met. In Section 2.3 we demonstrate how the basic algorithm can be further refined to improve its efficiency. ### 2.1 Basic Algorithm Having chosen a suitable lattice on which the clusters are to be grown (square and simple cubic lattices were used in this study for 2D and 3D systems, respectively), a probability $`p`$ of accepting and $`q=(1-p)`$ of rejecting sites is specified. Although in principle any value of $`p`$ between 0 and 1 can be selected, the efficiency of the sampling process is largely dependent on a careful choice of this value, as will be discussed later. In addition, an ordered list of all neighbours of a site on the lattice is made. For example, for a 2D square lattice this might read (right, down, left, up). While the order initially chosen is arbitrary, it is essential that this remains the same throughout a given run. In the basic algorithm, once chosen, the probability $`p`$ remains fixed during the Monte Carlo sampling procedure. However, in Section 2.3 the effect of relaxing this requirement is discussed. We construct an ensemble of $`N_E`$ clusters and for each of these calculate a weight factor which we subsequently use to calculate weighted averages of various cluster properties. For a property $`O`$ of the clusters, the weighted average is defined as $$<O>_W=\frac{1}{N_E}\sum _{\alpha =1}^{N_E}W_\alpha O_\alpha $$ (1) The weight associated with cluster $`\alpha `$ with $`N`$ sites is defined to be $`W_\alpha =1/(d_NP_\alpha )`$, where $`P_\alpha `$ is the normalised probability of growing the cluster and $`d_N`$ is a degeneracy equal to the number of ways of growing a particular cluster shape. It can be shown that the weighted average can be used to estimate the number, $`c_N`$, of lattice animals of size $`N`$ and other properties such as the average radius of gyration $`\overline{R_N^2}`$: $`E[<1>_W]`$ $`=`$ $`c_N`$ (2) $`E[<R_\nu ^2>_W]`$ $`=`$ $`{\displaystyle \sum _{\nu =1}^{c_N}}R_{N\nu }^2=c_N\overline{R_N^2}`$ (3) During the growth of each cluster we maintain a record of the sites which have been occupied, the sites which have been rejected, and a ‘last-in-first-out stack’ of sites which is maintained according to the rules described below. Each cluster is grown as follows: 1. Starting from an initial position, the neighbours of this site are examined one at a time according to the list specified above. An adjacent site is accepted with a probability $`p`$ or else is rejected. 2. If the adjacent site is rejected, a note of this is made and the next neighbour in the list is considered. 3. If on the other hand it is accepted, then this becomes the current site and its position is added to the top of a stack, as well as to a list of accepted sites. The examination of the sites is now resumed for the neighbours of this newly accepted site. 4. Sites that have already been accepted or rejected are no longer available for examination.
Thus, if such a site is encountered, it is ignored and the examination is moved on to the next eligible neighbour in the list. 5. If at any stage the current site has no more neighbours left, that is, all its adjacent sites are already accepted or rejected, then the current position is moved back by one to the previous location. This will be the position below the current one in the stack. The current position is removed from the top of the stack, though not from the list of accepted sites. 6. The algorithm stops for one of the following two reasons. If ever the number of accepted sites reaches $`N`$, then the algorithm is immediately terminated. In this case a cluster of size $`N`$ is successfully produced. Note that unlike some of the other common cluster growth algorithms , it is not necessary here for every neighbour of the generated cluster to be rejected. Some of these might still be unexamined when the algorithm terminates. The second way in which the algorithm stops is when it fails to produce a cluster of size $`N`$. In this case, the number of accepted sites will be $`M<N`$, with all the neighbours of these $`M`$ sites already having been rejected, leaving no eligible sites left for further examination. From step (v), it is clear that in cases such as this, the current position would have returned to the starting location. 7. The probability of producing a cluster of size $`N`$, in a manner involving $`r`$ rejections, is simply $`p^{(N-1)}q^r`$. Hence the weight, $`W_\alpha `$, associated with the growth of the cluster is given by $$W_\alpha =1/\left(d_Np^{(N-1)}(1-p)^r\right)$$ (4) where the degeneracy, $`d_N`$, is shown below to be exactly $`N`$. Failed attempts have a zero weight associated with them. However, they must be included in the weighted average of equation (1). 8. During the growth of a cluster of size $`N`$, we may also collect data for all the clusters of size $`M`$ where $`M\le N`$. It must be remembered that the weights for these smaller clusters must be calculated with a degeneracy of $`M`$. A specific example is helpful in demonstrating the algorithm. Figure 1 displays a successful attempt in forming a cluster of size $`N=4`$ on a square lattice. The order in which the neighbours were examined was chosen to be right, down, left and up. Let us now consider the various steps involved in the construction of this cluster in detail. Beginning from the initial position labelled cell one, the adjacent site to the right of this position is examined. In this case the site is rejected and the current position remains on cell one. Such rejected cells are indicated by the letter X. The next neighbour in the list is the one below, labelled cell two. As it happens this is accepted. Thus, the current position moves to this site and its position is added to the top of the stack, ahead of the position of cell one. The process of examining the neighbours is resumed for sites adjacent to cell two. Once again, following the strict order in the list, the site labelled three to the right of the current position is considered first. This is also accepted and as before is placed at the top of the stack. At this stage the stack contains the positions of cells three, two and one, in that order. The current position is now cell three. The site to the right of this, followed by the one below, are tested and both rejected in succession. Since both the neighbour to the left (i.e. cell one) and the one above have already been considered, the current position has no more eligible neighbours left to test.
Therefore, following step (v) above, site three is removed from the stack. This leaves the position of cell two at the top of the stack, making this the current position again. Cell two has two neighbours, the adjacent sites below and to the left, which are still unexamined. Of these, according to our agreed list, the site below takes precedence, but as shown in Figure 1 this is rejected. The current position remains on cell two and the neighbouring site (the cell labelled four) to the left of this position is tested. As it happens this is accepted. A cluster of the desired size $`N=4`$ is achieved, bringing this particular attempt to a successful end. For the subsequent discussion, it is useful to represent a sequence of acceptances and rejections by a series of 1s and 0s. Thus, for the case shown in Figure 1 we have {0,1,1,0,0,0,1}. Note that at any stage throughout a series, the position of the current site and that of the neighbour to be examined, relative to the starting cell, are entirely specified by the decisions that have been made so far. In other words, given a sequence of ones and zeros we can determine precisely the shape of the cluster that was constructed. This is only possible because of the manner in which the neighbours of the current position are always tested in a strict pre-defined order. For an algorithm that considers the neighbouring sites at random, the same will clearly not be true. The procedure described above needs to be repeated a large number of times, to obtain the weights for the ensemble average defined in equation (1). In particular, using equation (2), the number of lattice animals of size $`N`$ can now be determined. ### 2.2 Ergodicity and degeneracy of the algorithm Let us now discuss the issue of the ergodicity of the algorithm. We wish to see whether, starting from any particular site on a given cluster, a series of acceptances and rejections (1 and 0) can always be determined which leads to that cluster shape. We stress that we are not concerned about how probable such a sequence is, but merely that it exists. We can attempt to construct such a sequence by following the same rules as our algorithm described above, with one exception: we accept and reject each examined site according to whether it forms part of the target cluster shape or not. Obviously, in the original algorithm, each such move has a non-zero chance of occurring, provided $`p`$ is not set to zero or one. Since we only accept sites that belong to the cluster in question, it follows that if the sequence is successful then we achieve the desired cluster shape. However, one might argue that for some choice of target cluster and starting position, a series started in this manner will always terminate prematurely. That is to say, it will inevitably lead to a failure, with only part of the required cluster having been constructed. Now, it is easy to see that this cannot be true. If the series fails, it implies that all the neighbouring sites of the sub-cluster formed so far have been rejected. However, the rest of the cluster must be connected to this sub-cluster at some point. Hence, at the very least, one neighbouring site of the sub-cluster must be part of the full cluster and could not have been rejected. Starting from any of the sites belonging to a cluster, then, it is always possible to write down a sequence of ones and zeros that will result in the formation of that cluster.
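Before turning to the degeneracy count, it may help to see the basic algorithm of Section 2.1 in code. The following is a minimal Python sketch of our own (the function and variable names are illustrative, not from the original work) for the 2D square lattice; it returns the weight of equation (4) for a successful attempt and zero for a failed one:

```python
import random

NEIGHBOURS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # fixed order: right, down, left, up

def grow_cluster(N, p, rng=random):
    """One growth attempt; returns W = 1/(N p^(N-1) q^r) on success, 0 on failure."""
    q = 1.0 - p
    accepted = {(0, 0)}            # list of accepted sites (the seed is free)
    rejected = set()
    stack = [(0, 0)]               # last-in-first-out stack of step (iii)
    r = 0                          # number of rejections so far
    while stack and len(accepted) < N:
        x, y = stack[-1]           # current position
        moved = False
        for dx, dy in NEIGHBOURS:  # strict pre-defined order
            site = (x + dx, y + dy)
            if site in accepted or site in rejected:
                continue           # step (iv): no longer available for examination
            if rng.random() < p:   # accept with probability p
                accepted.add(site)
                stack.append(site)
                moved = True
                break              # examination resumes at the new current site
            rejected.add(site)     # step (ii)
            r += 1
        if not moved:
            stack.pop()            # step (v): no eligible neighbours left
    if len(accepted) == N:
        return p ** (1 - N) * q ** (-r) / N   # Eq. (4) with d_N = N
    return 0.0                     # failed attempt, zero weight

def estimate_c_N(N, p, n_samples, seed=1):
    """Monte Carlo estimate of c_N via Eq. (2): the mean weight over the ensemble."""
    rng = random.Random(seed)
    return sum(grow_cluster(N, p, rng) for _ in range(n_samples)) / n_samples

# e.g. estimate_c_N(5, 0.6, 500000) should fluctuate around c_5 = 63,
# the exact number of fixed five-site animals on the square lattice.
```

Because rejected sites are never re-examined, each growth sequence is unique up to the choice of the seed site, which is exactly the degeneracy-$`N`$ property proved below.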
Similarly, considering every starting point on a cluster of size $`N`$, another implication of the above result is that the corresponding cluster shape can be generated in at least $`N`$ distinct ways. Next, we shall show that the degeneracy of a cluster of size $`N`$ in our algorithm is in fact exactly $`N`$ (unlike the original algorithm of Care , which has a degeneracy of $`N!`$). Let us suppose that, starting from a particular site on a given target cluster shape, our algorithm has two distinct ways of forming this cluster. Associated with each of these, a series of ones and zeros can be written down, in the same manner as that indicated above. The two ways of constructing the cluster must necessarily begin to differ from each other at some stage along the sequence, where we will have a 1 in one case and a 0 in the other. Now, since up to this point the two series are identical, the site being examined at this stage will be the same for both cases. This is rejected in one sequence (hence 0) whereas it is accepted in the other (hence 1). It immediately follows that these two differing ways of constructing the cluster cannot result in the same shape. Using this result, together with the previous one regarding the ergodicity of the algorithm, we are led to conclude that, starting from a given site on a cluster, the algorithm has one and only one way of constructing the cluster. Hence, for a cluster of size $`N`$, the degeneracy is simply $`N`$. ### 2.3 Refined algorithms #### 2.3.1 Adjacent site stack During the growth of the cluster a stack can be constructed of all the sites which are adjacent to the cluster and still available for growth. When a new site is added to the cluster, its neighbours are inspected in the predetermined sequence and any available ones are added to the top of this stack. (Note that this stack differs from that discussed in Section 2.1.) The choice of site to be occupied can be made from all the adjacent sites in a single Monte Carlo decision. Thus, if we consider the underlying process in the method described above, at each step there is a probability $`p`$ of the site being accepted and a probability $`q=1-p`$ of the site being rejected. We therefore need to generate a random number with the same distribution as the number of attempts needed to obtain an acceptance. The probability of making $`k`$ attempts, of which only the last is successful, is $$p_k=q^{k-1}p$$ (5) where $`1\le k<\mathrm{\infty }`$ and $`\sum _{k=1}^{\mathrm{\infty }}p_k=1`$. In order to sample from this distribution we note that the associated cumulative distribution, $`C_m`$, is given by $$C_m=\sum _{k=1}^{m}q^{k-1}(1-q)=1-q^m$$ (6) Hence if we generate a random number, $`\eta `$, uniformly distributed in the range $`0<\eta <1`$, then a number $`m`$ given by $$m=\mathrm{Int}\left[\frac{\mathrm{ln}(\eta )}{\mathrm{ln}(q)}+1\right]$$ (7) will have been drawn from the required distribution. Thus we generate the number $`m`$ according to equation (7) and use this to determine which site on the stack is selected, with $`m=1`$ corresponding to the site at the top of the stack. If $`m>N_{adj}`$, where $`N_{adj}`$ is the number of available adjacent sites, the cluster growth is terminated as explained in step (vi) of Section 2.1. All the adjacent sites lying above the chosen site in the stack are transferred into the list of rejected sites. The list of adjacent sites is then adjusted to include the new available sites adjacent to the recently accepted site.
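In code, the single Monte Carlo decision of equation (7) is essentially one line; a small sketch of our own (illustrative names, not from the original):

```python
import math, random

def sample_m(q, rng=random):
    """Draw m with P(m=k) = q^(k-1)(1-q) (Eq. 5) by inverting Eq. (6)."""
    eta = 1.0 - rng.random()                      # uniform on (0, 1], avoids log(0)
    return int(math.log(eta) / math.log(q)) + 1   # Eq. (7)

# Usage within the adjacent-site-stack scheme: with N_adj available sites,
# m = sample_m(q); if m > N_adj the growth attempt is terminated (step vi),
# otherwise stack[m - 1] (counting from the top) is the site to occupy.
```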
#### 2.3.2 Variable probability

An apparent disadvantage of the methods described so far is that, with a fixed choice of the probability $`p`$, occasions arise when a cluster growth terminates before reaching a cluster of size $`N`$, simply because the Monte Carlo choice rejected all the neighbouring sites. This problem can be overcome if the value of $`p`$ is allowed to vary as the cluster grows. The simplest method is to determine the number, $`N_{adj}`$, of available adjacent sites at each point in the cluster growth and select one of these sites with uniform probability. This effectively makes $`p=1/N_{adj}`$ and thereby increases the chances of growing a cluster of size $`N`$. Note that it is still possible for a cluster growth to become blocked. This happens when the chosen site is the one at the bottom of the current eligible neighbours list, thus causing all the other neighbouring sites in the list to be rejected in one step. If the newly accepted site has itself no unexamined neighbours to add to the list, the algorithm terminates prematurely. Modified in the manner described above, the weight associated with a cluster is now

$$W_\alpha =\frac{\mathrm{\Pi }_{i=1}^NN_{adj}^i}{N}$$ (8)

rather than the expression given in equation (4). However, when this variable probability method was tested it was found that, although it reduced the number of rejected clusters, it was inefficient at sampling the space of possible clusters when compared with the method described in Section 2.3.1. This inefficiency was measured by comparing the standard deviation in the estimated cluster number for any given number of clusters in the sampling ensemble. It is thought that the inefficiency of the variable probability method arises because it gives too much weight to sites lower in the stack, yielding many non-representative clusters. It is possible that this problem could be overcome by using a non-uniform sampling distribution (cf ), but this was not tested in this work, and the method described in Section 2.3.1 was used to obtain the results described in Section 3.
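A hedged sketch of this variable-probability variant follows, accumulating the weight of equation (8); the stack bookkeeping (rejecting the sites above the chosen one) follows the description above, while the neighbour order and the treatment of the product over growth steps are illustrative assumptions.

```python
import random

NEIGHBOUR_ORDER = [(0, -1), (-1, 0), (0, 1), (1, 0)]  # illustrative choice

# Variable-probability growth: one of the N_adj available adjacent sites is
# selected uniformly (effectively p = 1/N_adj); sites above it in the stack
# are rejected.  The weight follows equation (8), here accumulated over the
# N-1 growth steps and divided by N for the starting-site degeneracy.
def grow_variable_p(N):
    start = (0, 0)
    cluster, rejected = {start}, set()
    stack = []                                   # available sites, top = end
    weight = 1.0

    def push_neighbours(x, y):
        for dx, dy in NEIGHBOUR_ORDER:           # strict predefined order
            s = (x + dx, y + dy)
            if s not in cluster and s not in rejected and s not in stack:
                stack.append(s)

    push_neighbours(*start)
    while len(cluster) < N:
        if not stack:
            return None, 0.0                     # growth blocked
        n_adj = len(stack)
        weight *= n_adj
        m = random.randrange(n_adj)              # uniform choice among N_adj
        site = stack[m]
        rejected.update(stack[m + 1:])           # sites above the chosen one
        del stack[m:]
        cluster.add(site)
        push_neighbours(*site)
    return cluster, weight / N
```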
## 3 Results

In order to test the algorithm described in Section 2 it was used to estimate the number of lattice animals on a square 2D lattice and a simple cubic 3D lattice, for which exact results are known up to certain sizes . Before collecting data it was necessary to determine the optimum value of the probability $`p`$ with which an adjacent site is accepted during the cluster growth. The effect of changing $`p`$ on the estimated error in the number of clusters of size 50 on the 2D and 3D lattices can be seen in Figure 2. There is a fairly broad range of values of $`p`$ for which the error is a minimum, and a value of $`p=0.6`$ was used to obtain the results described below for the 2D lattice and $`p=0.72`$ for the 3D lattice. The distribution of weights is log-normal and becomes highly skewed for large cluster sizes; this is a standard problem with Rosenbluth methods . The minimum in the error achieved by the choice of the value of the probability $`p`$ has the effect of minimising the variance of the distribution of the weights, $`W_\alpha `$. In Table 1 we present results obtained using the algorithm defined in Section 2, with the adjacent site stack method of Section 2.3.1, to enumerate clusters on a simple cubic 3D lattice for clusters up to size 50. The results were obtained from an ensemble of $`2.5\times 10^7`$ clusters. The data took 3.3 hours to collect on an R5000 Silicon Graphics workstation, using code written in C with no attempt to optimise the code. Only $`30\%`$ of the clusters achieved a size of 50. The results are quoted together with a standard error, $`e^{est}`$, calculated by breaking the data into 50 blocks and determining the variance of the block means for each cluster size. If the number of samples in each block is sufficient, it follows from the central limit theorem that the sampling distribution of the means should become reasonably symmetrical. We therefore also quote a skewness, $`\xi `$, defined by

$$\xi =m_3/m_2^{3/2}$$ (9)

where $`m_i`$ is the $`i^{th}`$ moment about the mean of the sampling distribution. It is expected that $`\xi \stackrel{<}{}0.5`$ for a reasonably symmetrical distribution and $`\xi >1`$ for a highly skewed distribution. The statistic $`\xi `$ should be treated with some caution since it is likely to be subject to considerable error, because it involves the calculation of a third moment from a limited number of data points. Exact results are known for clusters up to size 13, and in the table we quote the values of the quantity $`\chi _M`$ defined by

$$\chi _M=\left|\frac{c_M^{exact}-c_M^{est}}{c_M^{exact}e_M^{est}}\right|$$ (10)

where $`c_M`$ is the number of clusters of size $`M`$; it can be seen that all the values of $`\chi _M`$ are $`O(1)`$. Hence we assume that $`e^{est}`$ is an acceptable method of estimating the error in the method. However, it is likely that $`e^{est}`$ will underestimate the true error if the distribution becomes more skewed. We also quote in Table 1 the values of $`c_N`$ calculated by Lam using a Monte Carlo incomplete enumeration method, together with the error estimates reported for this method. In Table 2 we quote data collected from a square two-dimensional lattice by collecting data from $`2.5\times 10^7`$ clusters up to size 50. These data took only 1.45 hours to collect, but only $`2\%`$ of the clusters achieved a size of 50. Comparison is given with exact results up to clusters of size 19. The rate of growth of the errors for the two- and three-dimensional data is shown in Figure 3, and it can be seen that the errors associated with the method begin to diverge quite rapidly above clusters of size 50. This behaviour is to be expected with a technique which is based on sampling from a log-normal distribution. In the previous paper equivalent results were obtained for clusters up to size 30 with approximately the same sample size. The improvement up to clusters of size 50 obtained by the new method arises because the weight associated with clusters of a certain size is generated from roughly half as many random numbers. This effectively halves the standard deviation of the log-normal distribution of the weights and allows larger clusters to be sampled before the method becomes unusable.
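The error analysis just described can be sketched as follows; the block count of 50 matches the text, while the function names are illustrative. The second function evaluates $`\chi _M`$ exactly as printed in equation (10).

```python
import numpy as np

# Block error analysis: break the ensemble of per-cluster estimates into
# 50 blocks; e_est is the standard error from the block means and xi the
# skewness of equation (9), computed from the same sampling distribution.
def block_statistics(values, n_blocks=50):
    blocks = np.array_split(np.asarray(values, dtype=float), n_blocks)
    means = np.array([b.mean() for b in blocks])
    e_est = means.std(ddof=1) / np.sqrt(n_blocks)
    m2 = np.mean((means - means.mean()) ** 2)   # central moments of the
    m3 = np.mean((means - means.mean()) ** 3)   # sampling distribution
    xi = m3 / m2**1.5                           # equation (9)
    return means.mean(), e_est, xi

# chi_M of equation (10), comparing with an exactly known cluster count.
def chi(c_exact, c_est, e_est):
    return abs((c_exact - c_est) / (c_exact * e_est))
```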
## 4 Conclusions

We have described a simple Rosenbluth algorithm for the Monte Carlo enumeration of lattice animals and clusters which can be applied to any lattice topology. A merit of the scheme is that for thermal systems it may easily be adapted to include Boltzmann weightings following, for example, the arguments used by Siepmann et al. in the development of the configurational bias technique. Similarly, the method can be applied to the calculation of the averaged properties of a cluster of a given size in the site percolation problem. In this case we have

$$<O>=\frac{<(1-P)^tO>_W}{<(1-P)^t>_W}=\frac{\sum _{\alpha =1}^{N_E}W_\alpha (1-P)^{t_\alpha }O_\alpha }{\sum _{\alpha =1}^{N_E}W_\alpha (1-P)^{t_\alpha }}$$ (11)

where $`P`$ is the probability of site occupation in the percolation problem of interest and $`t_\alpha `$ is the number of perimeter sites of the cluster $`\alpha `$. Preliminary results also indicate that the method may be useful in the study of the adsorption of clusters onto solid surfaces. A possible numerical limitation of the method arises from the highly skewed probability distribution of Rosenbluth weights which occurs for large cluster sizes. However, the method presented in this work is able to reach considerably larger cluster sizes than the one described in before this becomes a problem.
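For completeness, a one-function sketch of the reweighted average of equation (11):

```python
import numpy as np

# Each sampled cluster alpha contributes its Rosenbluth weight W_alpha
# times the perimeter factor (1-P)^t_alpha, with t_alpha the number of
# perimeter sites; O holds the observable values O_alpha.
def percolation_average(O, W, t, P):
    O, W, t = map(np.asarray, (O, W, t))
    w = W * (1.0 - P) ** t
    return np.sum(w * O) / np.sum(w)
```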
# MLLA Parton Spectra Compared to ARIADNE

## 1 Introduction

The perturbative QCD approach to describing the inclusive energy spectra, via the modified leading log approximation (MLLA) in conjunction with local parton hadron duality (LPHD), has been very successful in both $`e^+e^{-}`$ annihilation and deep inelastic scattering experiments . Using LPHD, the non-perturbative effects of such distributions are reduced to a simple normalisation factor that relates the hadronic distributions to the partonic ones. Perturbative features of these distributions are calculated by MLLA, which accounts for both the double and single logarithmic effects. The MLLA approach has two free parameters: a running strong coupling, governed by a QCD scale $`\mathrm{\Lambda },`$ and an energy cut-off, $`Q_0,`$ below which the parton evolution is truncated. The MLLA evolution equations allow the parton spectra in the logarithmic scaled energy variable, $`\xi ,`$ to be calculated . The variable $`\xi `$ is defined as $`\mathrm{ln}(E_0/E)=\mathrm{ln}(1/x_p),`$ where $`E_0`$ is the original energy of the jet and $`E`$ is the parton’s energy. The cut-off, $`Q_0,`$ bounds the parton energy, $`E\ge k_T\ge Q_0,`$ where $`k_T`$ is the transverse energy of the decay products in the jet evolution. In order to reconstruct the $`\xi `$ distributions one has to perform the inverse Mellin transformation:

$$\overline{D}(\xi ,Y,\lambda )=\int _{ϵ-ı\mathrm{\infty }}^{ϵ+ı\mathrm{\infty }}\frac{d\omega }{2\pi ı}x_p^{-\omega }D(\omega ,Y,\lambda )$$ (1)

where the integral runs parallel to the imaginary axis to the right of all singularities in the complex $`\omega `$-plane, $`Y=\mathrm{ln}(E_0/Q_0)`$ and $`\lambda =\mathrm{ln}(Q_0/\mathrm{\Lambda }).`$ The Mellin-transformed distributions, $`D(\omega ,Y,\lambda ),`$ can be expressed in terms of confluent hypergeometric functions, $`\mathrm{\Phi }`$:

$$\begin{array}{cc}D(\omega ,Y,\lambda )\hfill & =\frac{t_1A}{B(B+1)}\mathrm{\Phi }(A+B+1,B+2;t_1)\mathrm{\Phi }(A-B,1-B;t_2)\\ & +\left(\frac{t_2}{t_1}\right)^B\mathrm{\Phi }(A,B;t_1)\mathrm{\Phi }(A,B+1;t_2),\end{array}$$ (2)

where

$$\begin{array}{cc}t_1=\omega (Y+\lambda ),\hfill & \hfill t_2=\omega \lambda .\end{array}$$ (3)

In addition $`A`$ and $`B`$ are defined as:

$$\begin{array}{cc}A=4N_c/b\omega ,\hfill & \hfill B=a/b,\end{array}$$ (4)

where $`N_c`$ is the number of colours, $`a=11N_c/3+2n_f/3N_c^2`$ , $`n_f`$ is the number of flavours and $`b=11N_c/3-2n_f/3.`$ Equation (1) is then evaluated by numerical integration in the complex $`\omega `$-plane; a sketch of such an integration is given below. The current region of the $`ep`$ Breit frame is analogous to a single hemisphere of $`e^+e^{-}`$ annihilation. In $`e^+e^{-}\to q\overline{q}`$ annihilation the two quarks are produced with equal and opposite momenta, $`\pm \sqrt{s}/2,`$ where $`\sqrt{s}`$ is the positron-electron centre of mass energy. The fragmentation of these quarks can be compared to that of the quark struck from the proton; this quark has an outgoing momentum $`Q/2`$ in the Breit frame, where $`Q^2`$ is the negative square of the four-momentum of the virtual exchanged boson in DIS. In the direction of this struck quark the scaled momentum spectra of the particles are expected by MLLA to have a dependence on $`Q`$ similar to that observed in $`e^+e^{-}`$ annihilation at energy $`\sqrt{s}=Q,`$ with no Bjorken-$`x`$ dependence.
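As an illustration of the numerical inversion mentioned above, the following sketch evaluates equations (1)-(4) along the contour $`\omega =ϵ+it`$ using the mpmath implementation of the confluent hypergeometric function. The values of $`ϵ`$ and of the integration cutoff are arbitrary illustrative choices, and the moment-space expression is taken from equation (2) as printed here.

```python
import mpmath as mp

# Parameters of equations (3)-(4); Nc = nf = 3 for illustration.
Nc, nf = 3, 3
b = 11 * Nc / 3 - 2 * nf / 3
a = 11 * Nc / 3 + 2 * nf / (3 * Nc**2)
B = a / b

def D_moment(w, Y, lam):
    # Moment-space distribution, equation (2); Phi is Kummer's 1F1.
    A = 4 * Nc / (b * w)
    t1, t2 = w * (Y + lam), w * lam
    return (t1 * A / (B * (B + 1))
            * mp.hyp1f1(A + B + 1, B + 2, t1) * mp.hyp1f1(A - B, 1 - B, t2)
            + (t2 / t1) ** B
            * mp.hyp1f1(A, B, t1) * mp.hyp1f1(A, B + 1, t2))

def D_xi(xi, Y, lam, eps=0.2, tmax=40):
    # Inverse Mellin transform, equation (1), on the contour w = eps + i t;
    # x_p^(-w) = exp(xi * w).  eps must sit to the right of all singularities.
    f = lambda t: (mp.exp(xi * (eps + 1j * t))
                   * D_moment(eps + 1j * t, Y, lam)).real
    return mp.quad(f, [-tmax, 0, tmax]) / (2 * mp.pi)
```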
The ARIADNE Monte Carlo generator is based on the colour dipole model, CDM . In the CDM, all gluon emissions constituting the QCD cascade start as radiation from the colour dipole formed between the quark and the anti-quark in the case of $`e^+e^{-}`$ annihilation, or between the struck quark and the proton remnant in the case of DIS. All subsequent radiation arises from independent colour dipoles formed either from $`q\overline{q}`$ pairs or from softer gluons radiated by the previously produced gluons. In the DIS scenario, the proton remnant is treated as an extended object, which results in a suppression of radiation , generally in the proton direction. In addition the struck quark is treated as extended, as the photon only probes it to a distance inversely proportional to the transferred momentum. Treating the remnant and the struck quark as extended objects, rather than point-like, results in a reduction in the available phase space for gluon radiation in DIS. The QCD cascade in ARIADNE is governed by a number of parameters in the Monte Carlo models. Two of the most important are the QCD scale, $`\mathrm{\Lambda }`$ (PARA(1)), and the parameter that determines the $`k_T`$ cut-off for the shower (PARA(3)). An additional parameter, PARA(28), also allows the user to bound the lower energy of the emitted parton as well. For this study ARIADNE version 4.10 has been used.

## 2 Comparisons with MLLA

Before investigating the evolution of the shower in DIS, the evolution in the simpler case of $`q\overline{q}`$ pair production in $`e^+e^{-}`$ annihilation was studied. The spectra for both MLLA and ARIADNE were generated with $`\mathrm{\Lambda }=150`$ MeV and a cut-off $`Q_0=2\mathrm{\Lambda }=\mathrm{PARA}(3)=\mathrm{PARA}(28).`$ Below $`\xi \approx 1`$ there are instabilities in the numerical integration of equation (1), so all subsequent comparisons are for $`\xi >1.`$ Except for an overall normalisation discrepancy (a factor of 1.4 greater parton multiplicity in ARIADNE) the $`\xi `$ spectra are in very good agreement, as illustrated in Figure 1. This normalisation discrepancy is constant, independent of the $`\sqrt{s}`$ at which the events were generated. There is a slight tendency for the MLLA calculation to fall off more quickly at large values of $`\xi `$ than the ARIADNE predictions. Using LEPTO to generate the electroweak cross section and colour flow configuration for DIS, ARIADNE was then used to generate the subsequent QCD cascade. The event was boosted to the Breit frame and those partons in the current fragmentation region were selected. The DIS events were generated with fixed kinematics that are accessible in the HERA regime. The corresponding values of Bjorken-$`x`$ and $`Q`$ are shown in Table 1. Using the same values of $`\mathrm{\Lambda }`$ and $`Q_0`$ that were used for the $`e^+e^{-}`$ annihilation study, the MLLA prediction is again compared to the ARIADNE generated spectra. Figure 2 shows the default version of ARIADNE for DIS compared to the MLLA predictions. As $`Q^2`$ increases the discrepancy between ARIADNE and the MLLA calculations becomes more pronounced. The $`\xi `$ distribution of ARIADNE peaks at higher values than the MLLA calculation. In addition, the MLLA calculations are narrower than the ARIADNE predictions. Again the parton multiplicities of the two distributions are different. Unlike the $`e^+e^{-}`$ situation, this normalisation factor seems to exhibit a $`Q`$ dependence. At low $`Q`$ the height of the peak for ARIADNE compared to MLLA is a factor of 1.1 higher, whilst in the highest $`Q`$ bin it is a factor of 1.1 lower.
In the default ARIADNE, the mechanism for soft suppression of radiation due to the extended source of the proton remnant results in a suppression of radiation in the current region of the Breit frame at high $`Q^2.`$ Figure 3 shows the high-$`Q^2`$ modified version of ARIADNE for DIS, where this suppression in the current region is removed, compared to the MLLA predictions. As expected, this modification to ARIADNE leads to a much better agreement between the MLLA calculations and ARIADNE. The situation with the parton multiplicity is similar. The $`Q`$-dependence of the ratio of the peak heights is smaller than for the default ARIADNE, with ARIADNE being a factor of 1.2-1.3 higher. In both options of the ARIADNE program there are discrepancies evident in the lower $`(x,Q)`$ bins compared to the MLLA predictions. One possible explanation of this discrepancy is given in Ref. , where it is shown that high-$`p_T`$ emissions in DIS can lead to a situation where the current region of the Breit frame is depopulated.

## 3 Conclusions

The QCD cascade as implemented in the ARIADNE Monte Carlo program is in good agreement with the shape of the MLLA prediction in the simple scenario of $`q\overline{q}`$ production in $`e^+e^{-}`$ annihilation. In the more complex situation of DIS the agreement is not as good, unless account is taken of the additional suppression introduced into the model in the current fragmentation region, caused by the reduction of phase space due to the extended nature of the proton remnant.
# Rotational levels in quantum dots<sup>∗</sup>

## Abstract

Low energy spectra of isotropic quantum dots are calculated in the regime of low electron densities where Coulomb interaction causes strong correlations. The earlier developed pocket state method is generalized to allow for continuous rotations. Detailed predictions are made for dots of shallow confinements and small particle numbers, including the occurrence of spin blockades in transport.

<sup>∗</sup> Dedicated to Alfred Hüller on the occasion of his 60th birthday.

Much of our present understanding of small quantum dots, with observable discrete level structure , concentrates on the regime of relatively high carrier densities where the interaction and charging energy is comparable to the kinetic (Fermi) energy in magnitude . Similar to real atoms, effective single particle orbitals establish a reasonable approximation to the electronic states. The spins follow from Hund’s rule, which is a perturbative result, though it accords well with experimental findings in small quantum dots at high particle densities . At lower densities Coulomb interaction is expected to destroy this single particle picture, leaving strongly correlated or even crystallized electrons with collective low energy excitations. While in the homogeneous two-dimensional case $`r_\mathrm{s}`$ should exceed $`r_\mathrm{c}=37`$ to reach this regime ($`r_\mathrm{s}=(\pi n_\mathrm{s})^{-1/2}`$ measures the ratio between Coulomb and kinetic energy and is regulated by the two-dimensional carrier density $`n_\mathrm{s}`$), disorder is predicted to reduce this value considerably to $`r_\mathrm{c}=7.5`$ . An even more pronounced reduction of $`r_\mathrm{c}`$ in comparison with the homogeneous value is found for the transition into the ‘Wigner regime’ in quantum dots . Careful quantum Monte Carlo (QMC) studies based on the spin sensitivity of the density–density correlation function yielded $`r_\mathrm{c}=4`$ for parabolic quantum dots . Experimentally, this regime has been addressed using capacitance spectroscopy , which only probes ground state energies. Non-linear transport behaviour has not yet been investigated to detect the interesting correlation effects for the low energy excitations. Numerical investigations of the low density regime, emphasizing the spin states of rotating three electron Wigner molecules, have been carried out for shallow parabolic dots . Investigations for larger particle numbers have focussed on dots of low symmetry, where corners in the confining potential or impurities suppress the zero modes that would delocalize the charges in the Wigner regime , so that ‘pocket states’ can be introduced , which are well suited to describe localized charges. The ‘pocket states’ served as a basis to map the spin sensitive low energy physics onto lattice models of the Hubbard form that account for quantum correlations by hopping between nearest places. Applicability of this archetype for correlation phenomena has been demonstrated e.g. in quantum dots of polygonal geometry . This mapping to a lattice model cannot be carried out straightforwardly if zero modes cause charge delocalization, which, by symmetry, actually happens in most experimental quantum dots.
They are fairly well described by an isotropic, and in fact parabolic, model

$$H=\sum _{i=1}^{N}\frac{𝒑_i^2}{2m^{*}}+V$$ (1)

where

$$V=\frac{m^{*}}{2}\omega _0^2\sum _{i=1}^{N}𝒙_i^2+\sum _{i<j}\frac{e^2}{\kappa |𝒙_i-𝒙_j|}.$$ (2)

Here, the effective mass $`m^{*}`$ and the dielectric constant $`\kappa `$ are material parameters, and $`𝒙_j(𝒑_j)`$ are electron positions (momenta) in two dimensions. This model does not explicitly involve spin (as opposed to real atoms, spin-orbit coupling is negligible in quantum dots) so that all of its eigenstates are simultaneously eigenstates of the square of the total spin $`\widehat{S}^2`$ with eigenvalues $`S(S+1)`$. The present work extends the pocket state method (PSM) to allow for rotational symmetry and compares with results obtained by QMC studies . Being based on a recently developed multilevel blocking algorithm to circumvent the infamous Fermion sign problem , this QMC allows for high accuracy to resolve reliably even the low energy spin structure at particle numbers significantly larger than those treatable by diagonalizations. At low densities the charge carriers form a finite piece of an electron crystal , a Wigner molecule (WM), that might, classically , be arbitrarily oriented. Superposition of all of the azimuthal degeneracies leads to an isotropic charge density distribution, as required by the symmetry of (2) . For analytical progress it is tempting to separate out the normal coordinate related to the overall rotation, with total angular momentum quantum numbers $`\mathrm{}`$ (in strictly harmonic confinements $`\mathrm{}`$ refers to the relative part of the Hamiltonian, since the center of mass motion just adds integer multiples of $`\omega _0`$ to all of the eigenvalues and does not affect the spin of any of the states ). However, the remaining normal coordinates would then in general no longer describe identical quantum particles obeying Pauli’s principle and Fermi (or Bose) statistics, but would correspond to linear combinations of such particles. Within the PSM it is crucial to know the result of particle permutations in order eventually to assign the correct total spins $`S`$ to the eigenstates and eigenenergies . Therefore, we treat all of the possible particle exchanges on an equal footing, including discrete overall rotations of the WM if they correspond to particle permutations. It depends on the geometry of the WM whether rotations by $`2\pi /p`$ with $`p>1`$ leave electron places invariant so that the Pauli principle relates $`\mathrm{}`$ with $`S`$. Such a relationship is well known, for instance from the example of solid hydrogen H<sub>2</sub>, where the even $`\mathrm{}`$ are necessarily $`S=0`$ singlet states while the odd $`\mathrm{}`$ are $`S=1`$ triplets (in this example the spins refer to the protons), the reason being the equivalence of rotations by 180 degrees with the exchange of two identical spin-half Fermions. Other examples are discussed in . Validity of the PSM requires that the spin sensitive excitation energies $`\mathrm{\Delta }`$, to be calculated by this method, should be smaller than charge (plasmon) excitations . In the absence of continuous symmetries this condition is easily fulfilled at small densities due to the almost exponential decay $`\mathrm{\Delta }\propto \mathrm{exp}(-\sqrt{r_\mathrm{s}})`$. Plasmon energies decrease only according to a power law, $`\propto r_\mathrm{s}^{-3/2}`$, for Coulomb repulsions.
With their faster decay, $`1/2I=(2\pi m^{*}\int _0^{\mathrm{\infty }}\mathrm{d}r\,r^3n(r))^{-1}\propto r_\mathrm{s}^{-2}`$ (depending on the radial charge density distribution $`n(r)`$; $`I`$ is the moment of inertia), the total angular momentum excitations, however, still decay faster than the plasmons, so that eventually the low energy levels will follow only from electron interchanges among the places defining the WM , including overall rotations by $`2\pi /p`$, i.e. by processes permuting identical quantum particles . From classical as well as from quantum Monte Carlo studies it is known that up to $`N=8`$ the Wigner molecules in parabolic quantum dots are very symmetric : the electrons form one spatial shell ($`N\le 5`$) so that $`p=N`$, or one electron occupies the center (i.e. $`p=N-1`$). Here we focus on $`N\le 6`$. The method can be generalized straightforwardly to larger $`N`$ and more complicated geometries of the WM. The transition amplitudes for all possible particle permutations constitute the entries $`t`$ of the pocket state matrix . In the classically forbidden cases $`t`$ can be estimated within the WKB approximation, as discussed in . The complete potential (2), including the interaction, goes into this estimate. Often the most important entries involve only two or three adjacent particles, as in quantum dots of polygonal shapes , which then determine the hopping terms in the equivalent Hubbard model. This is different for the zero modes : there, a much larger number of particles can be involved in a certain permutational transition, such as a rotation by $`2\pi /p`$ in isotropic quantum dots. The corresponding entries $`t_\mathrm{R}`$ of the pocket state matrix are not of tunneling type and therefore not exponentially small. In those cases $`t_\mathrm{R}=p^2/(8\pi ^2I)`$ is fixed by the energy constant $`1/2I`$ for rotational excitations ($`I`$ follows from $`n(r)`$). In this way all of the relevant entries of the pocket state matrix can be estimated. Its diagonalization eventually yields the complete set of low energy eigenvalues. Advantage can be taken of the fact that pocket states constitute a faithful representation of the symmetric group $`S_N`$, so that the diagonalization can be carried out analytically for small systems, $`N\le 4`$; otherwise numerical help is required. Only irreducible representations $`[N/2+S,N/2-S]`$ are compatible with Pauli’s principle for spin-half Fermions . This fixes the spin $`S`$ for each eigenvalue. The entries $`|t|\propto \mathrm{e}^{-\sqrt{r_\mathrm{s}}}`$ and $`|t_\mathrm{R}|\propto r_\mathrm{s}^{-2}`$ vary differently with the strength of the Coulomb interaction, so that the ratio $`t/t_\mathrm{R}`$ is a measure of the interaction strength. We use $`y:={\displaystyle \frac{1}{1+t/t_\mathrm{R}}}>0`$, ranging from $`1/(1+(\pi ^2/4)p)`$, since $`|2t|`$ cannot exceed the Fermi energy in the non-interacting limit, up to unity at strong interactions, $`y\to 1`$.
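The shell structure quoted above can be checked with a short classical calculation: minimizing the potential $`V`$ of equation (2) in dimensionless units. This is only a sketch of the classical limit (no quantum effects), with illustrative numerical choices.

```python
import numpy as np
from scipy.optimize import minimize

# Classical equilibrium configurations of the Wigner molecule, minimizing
# V of equation (2) in units with m* omega_0^2 = e^2/kappa = 1.  For N <= 5
# all radii should be equal (a single ring); for N = 6 one radius should be
# close to zero (one electron at the center of a 5-fold ring).
def potential(flat, N):
    r = flat.reshape(N, 2)
    v = 0.5 * np.sum(r**2)                       # confinement term
    for i in range(N):
        for j in range(i + 1, N):                # Coulomb repulsion
            v += 1.0 / np.linalg.norm(r[i] - r[j])
    return v

rng = np.random.default_rng(0)
for N in (3, 4, 5, 6):
    best = min((minimize(potential, rng.normal(size=2 * N), args=(N,))
                for _ in range(20)), key=lambda res: res.fun)
    radii = np.sort(np.linalg.norm(best.x.reshape(N, 2), axis=1))
    print(N, np.round(radii, 2))
```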
Figure 1 shows the low energy spectrum versus $`y`$ for $`N=3`$. Our description is designed for evaluating excitation energies, i.e. the differences between the energies of different spin states. As expected for weak interactions ($`y<0.5`$), the ground state is unpolarized . A transition into the spin polarized ground state $`S=3/2`$, not found in earlier diagonalization studies, is seen above a certain interaction strength, which for Coulomb interactions and GaAs parameters can be estimated to happen when $`\omega _0<0.5`$meV . This result complies with the QMC studies and can also be seen when carefully examining Figure 1 of the study of a large quantum dot. We would like to emphasize that this spin polarization is an exact consequence of correlations and not the result of a mean field approximation or a magnetic field. In transport experiments, when contacting quantum dots with electron reservoirs, it should show up as a ‘spin blockade’ , since the ground states of $`N=2`$ and $`N=3`$ in sufficiently large quantum dots then differ in spin by more than $`\mathrm{\Delta }S=1/2`$ (the amount by which entering or escaping single electrons can change the spin), given that the $`N=2`$ ground state (with time reversal symmetry) is always a singlet . For $`N=4`$ (not shown here) we confirm the Hund’s rule result of an $`S=1`$ ground state, as obtained already in density functional calculations . What is new is its persistence up to strong interactions. The lowest singlet level $`S=0`$ approaches this ground level $`\propto \mathrm{exp}(-\omega _0^{-1/3})`$ as $`\omega _0`$ decreases. The rotationally first excited state, $`\mathrm{}=1`$, consists only of triplet $`S=1`$ levels, while the spin polarized level $`S=2`$ belongs to the doubly excited rotational state, $`\mathrm{}=2`$, together with another singlet $`S=0`$ level. For $`N=5`$ (Figure 2), on the other hand, the polarized state $`S=5/2`$ joins the unpolarized ground state $`S=1/2`$ in the lowest rotational level at strong interactions. This low energy high spin state makes negative differential conductances in the non-linear transport likely, due to the spin blockade . Rotationally excited levels consist of $`S=1/2`$ as well as of $`S=3/2`$ spin states. The sixth electron is predicted , also classically , to occupy the center of a 5-fold ring. This complicates the pocket state analysis, since new types of pair exchanges appear (exchange with the central electron) and also the triple exchange $`t_3`$ (cyclic permutations of three adjacent electrons, including the central one) turns out to be important, in accordance with WKB estimates .
This again suggests possible occurance of negative differential conductances for the transition to $`N=5`$. In conclusion, generalizing the pocket state method we have developed a description for the low density regime in isotropic such as parabolic quantum dots. Low energy levels, including spin quantum numbers were determined for $`N6`$. Detailed predictions are made for spin blockades as they should be detectable in linear and non-linear transport through shallow quantum dots of confinement energies below $`\omega _0\stackrel{<}{}0.4`$meV (GaAs). . Acknowledgement I am particularly indebt to my teacher Alfred Hüller for longstanding support and encouragement. Numerous very fruitful discussions with Charles Creffield, Reinhold Egger, Hermann Grabert, and John Jefferson are acknowledged. This work has been carried out during stays at the University of Freiburg, the University of Jyväskylä, and the King’s College London. Support has been received from the DFG (through SFB 276) and the EPSRC (U.K.).
# Cosmic Flows 99: Conference Summary

## 1 Introduction

This is not a comprehensive review of the conference, but rather a collection of concluding remarks on some of the central issues which I wish to highlight. The distinctive feature of this conference was the exposure of several new observational surveys of peculiar velocities, listed in Table 1. These data enable dynamical studies in three zones: our $`30h^{-1}\mathrm{Mpc}`$ local neighborhood at high resolution, within $`60h^{-1}\mathrm{Mpc}`$ with $`10h^{-1}\mathrm{Mpc}`$ smoothing, and out to $`120h^{-1}\mathrm{Mpc}`$ at low resolution. I use some of the analysis tools developed by my colleagues and myself, including POTENT, Wiener Filter (WF), decomposition and likelihood analysis, to illustrate some of the potential of these data. I then address the implications for cosmology and galaxy formation, and my views of how further progress ought to be made. The outline is as follows: §2 addresses bulk velocities. §3 discusses high-resolution analysis in the local neighborhood. §4 reviews the robust analysis in the Great Attractor vicinity. §5 proposes a decomposition of the velocity field into divergent and tidal components. §6 demonstrates the potential of dynamical analysis on very large scales. §7 addresses the constraints on cosmological parameters. §8 evaluates the effect of nontrivial biasing on the range of estimates for $`\beta `$. §9 stresses the importance of error analysis via mock catalogs. §10 summarizes my main points.

Table 1: Peculiar Velocity Data

| Catalog | Dist. Ind. | Err % | Objects | Num. ga/cl | Rad. $`h^{-1}`$Mpc | Reference |
| --- | --- | --- | --- | --- | --- | --- |
| SBF | SBF | 8 | E/ga | 300 | 30(40) | Blakeslee et al., tv |
| PT | TF | 18 | S/ga | 500 | 30(40) | Pierce, Tully, tv |
| ENEAR | FP | 20 | E/ga,cl | 1900 | 40(70) | Wegner et al., tv |
| M3 | TF,FP | 18 | S,E/ga,cl | 3400 | 50(80) | Willick et al. 97a |
| SFI | TF | 18 | S/ga | 1650 | 50(70) | Haynes et al. 98a,b |
| Shellflow | TF | 18 | S/ga | 300 | 40-75 | Courteau et al., tv |
| SNIa | SN | 8 | S | 44 | 50(200) | Riess, tv |
| SCI+II | TF | 18 | S/cl | 1300/76 | 95(200) | Dale & Giovanelli, tv |
| SMAC | FP | 20 | E/cl | 700/56 | 65(140) | Smith et al., tv |
| LP10K | TF | 18 | S/cl | 170/15 | 90-135 | Willick, tv |
| BCG | $`L_\mathrm{m}`$-$`\alpha `$ | 18 | E/cl | 120 | 85(150) | Lauer & Postman 94 |
| EFAR | FP | 20 | E/cl | 450/85 | 60-150 | Colless et al., tv |

## 2 Bulk Velocity

The simplest quantity extracted from a peculiar velocity sample is the bulk velocity $`V`$, in a sphere (or a shell) about the Local Group (LG). The measurements are sometimes referred to as either proving “convergence” to the cosmic frame within a given radius, or as posing a challenge to the large-scale isotropy of the universe. I would like to stress that the interpretation of a bulk velocity is meaningful only in the context of a specific theoretical model, and is a quantitative issue. In fact, large-scale isotropy does not require “convergence” on any finite scale. Our models predict, quite robustly, a relatively weak descent of amplitude with scale, and the large cosmic variance due to the finite, sparse and nonuniform sampling can accommodate a large range of results. I therefore point first, in Fig. 1, to the theoretical prediction of a $`\mathrm{\Lambda }`$CDM model for the simplest statistic: the bulk-flow amplitude in a top-hat sphere. The solid line is the rms value, obtained by integrating over the power spectrum times the square of the Fourier Transform of the top-hat window. The dashed lines represent 90% cosmic scatter in the Maxwellian distribution of $`V`$, when only one random sphere is sampled. This model (flat, with $`\mathrm{\Omega }_\mathrm{m}=0.35`$, $`n=1`$, $`h=0.65`$) has $`\sigma _8\mathrm{\Omega }_\mathrm{m}^{0.5}=0.51`$, consistent with the constraints from cluster abundance (Eke, Cole & Frenk 1996). In fact, any model from the CDM family that is normalized in a similar way predicts bulk velocities in the same ball-park, so the theoretical curves should be regarded as representative of our “standard” models. Note the gradual descent and the large scatter: the velocity could almost vanish inside $`50h^{-1}\mathrm{Mpc}`$, or be as high as $`400\mathrm{km}\mathrm{s}^{-1}`$ near $`100h^{-1}\mathrm{Mpc}`$, without violating standard cosmology.
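For illustration, here is a sketch of the rms bulk-velocity integral described above, using linear theory and a top-hat window; the power spectrum below is a toy stand-in (not the $`\mathrm{\Lambda }`$CDM spectrum of Fig. 1), so only the qualitative descent with radius is meaningful.

```python
import numpy as np
from scipy.integrate import quad

# V_rms^2(R) = (H0^2 f^2 / 2 pi^2) Int dk P(k) W^2(kR), with W the Fourier
# transform of a top-hat sphere.  Units: k in h/Mpc, P(k) in (Mpc/h)^3.
H0 = 100.0                      # km/s per (h^-1 Mpc)
f = 0.35**0.6                   # linear growth rate, f ~ Omega_m^0.6

def W_tophat(x):
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def P(k, kpeak=0.05, A=2.0e4):  # toy spectrum with a peak, for illustration
    return A * (k / kpeak) / (1.0 + (k / kpeak) ** 4)

def V_rms(R):
    I, _ = quad(lambda k: P(k) * W_tophat(k * R) ** 2, 1e-4, 10.0, limit=200)
    return H0 * f * np.sqrt(I / (2.0 * np.pi**2))

for R in (10, 30, 50, 100):
    print(R, round(V_rms(R), 1), "km/s")
```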
A bulk velocity can be computed by fitting a 3D model of constant velocity to the observed radial peculiar velocities. Each datum contributes to the fit, usually weighted by the inverse square of the relative distance error, added in quadrature to a constant velocity dispersion. Thus, the result corresponds to a nonuniform window in space, which is typically biased towards small radii and is very specific to the sample. A proper comparison with theory should take into account the sampling window and the associated cosmic scatter (Kaiser 1988; Watkins & Feldman 1995). However, a semi-quantitative impression can be obtained by a crude comparison in the “theory plane”, for which one can approximate a top-hat window by equal-volume weighting at the expense of larger random errors. A full POTENT analysis is a more accurate way of mimicking uniform weighting. The results are put together in a crude way in Fig. 1, displaying the amplitudes of the bulk velocities in the CMB frame, as if they all represent top-hat bulk velocities. The amplitudes can be compared because the directions of all the nonzero vectors are remarkably similar: with the exception of BCG, they all lie in the $`30^{\circ }`$ vicinity of $`(l,b)=(280^{\circ },0^{\circ })`$. Some of the error-bars are based on a careful error analysis using mock catalogs, while others are crude estimates. In most cases they represent random errors only and underestimate the systematic biases. The bulk velocities were de-biased by subtracting in quadrature the errors in each component. Also shown in Fig. 1 is the velocity of the LG as deduced from the CMB dipole, and the velocities predicted from the IRAS PSCz redshift survey using linear theory, with $`\beta _{\mathrm{IRAS}}=0.7`$, the best fit to the CMB dipole (Saunders et al., this volume, hereafter tv). M3, SFI and Shellflow are dominated by Tully-Fisher (TF) spirals inside $`R\stackrel{<}{}50h^{-1}\mathrm{Mpc}`$. The M3 result refers to the VM2 calibration (Dekel et al. 1999) and is a bit lower than the original M3 result. The M3 and SFI results were obtained via a uniform POTENT reconstruction and error analysis. Inside $`R\stackrel{<}{}50h^{-1}\mathrm{Mpc}`$ they generally agree, while at larger radii the bulk velocity in SFI drops faster than in M3. This difference may be related to a difference in matching the zero points of the TF relations between North and South in the two catalogs, but one should admit that these two samples are not large enough for a reliable estimate of $`V`$ beyond $`50h^{-1}\mathrm{Mpc}`$, where, for one thing, the Malmquist-bias corrections are quite uncertain. The new Shellflow result seems to favor a low value, but it is for a shell outside the main body of M3 and SFI, and the large error due to the relatively small number of galaxies makes it consistent with the model, and with both M3 and SFI, at the $`1\sigma `$ level. However, the Shellflow data will enable a revised calibration of M3 and SFI, which can significantly reduce the uncertainties.
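A minimal sketch of the bulk-flow fit described at the beginning of this discussion: a constant 3D velocity is fitted by weighted linear least squares to the radial peculiar velocities; the names and the default dispersion are illustrative.

```python
import numpy as np

# Fit a constant 3D velocity V to radial peculiar velocities u_i measured
# along unit vectors rhat_i, with weights 1/(sigma_i^2 + sigma_v^2), where
# sigma_i is the distance error in km/s and sigma_v a constant dispersion.
def bulk_flow(rhat, u, sigma, sigma_v=300.0):
    w = 1.0 / (sigma**2 + sigma_v**2)            # inverse-variance weights
    A = np.einsum('i,ij,ik->jk', w, rhat, rhat)  # 3x3 normal matrix
    b = np.einsum('i,ij,i->j', w, rhat, u)
    V = np.linalg.solve(A, b)                    # best-fit bulk velocity
    cov = np.linalg.inv(A)                       # its covariance matrix
    return V, cov
```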
The preliminary report from the ENEAR survey of Fundamental-Plane (FP) velocities (Wegner et al., tv) agrees well with M3 and SFI. In our local $`30h^{-1}\mathrm{Mpc}`$ vicinity, we have computed the bulk flows via a minimum-$`\chi ^2`$ fit with volume weights for the two independent new surveys: the accurate SBF measurements of 300 ellipticals (Tonry et al., tv), and TF measurements of 500 spirals (Pierce, Tully, tv). A dispersion of $`300\mathrm{km}\mathrm{s}^{-1}`$ is assumed in the fit, to make $`\chi ^2\approx \mathrm{d}.\mathrm{o}.\mathrm{f}.`$ One can see that all the results within $`50h^{-1}\mathrm{Mpc}`$ are remarkably consistent with our theoretical expectations and with each other. On larger scales we have several new results based on clusters of galaxies: SMAC and EFAR of FP ellipticals, and LP10K and SCI+II based on TF spirals. The EFAR sample is an exception because it covers limited areas of sky, largely perpendicular to the direction of the bulk flow. The fact that all these results are consistent with the same bulk-flow direction is very comforting in view of the worries raised earlier by the BCG result. The amplitudes, on the other hand, show large scatter. The results are as reported by the observing teams, with an effective top-hat radius crudely assigned. The main point is that no single measurement is more than $`2\sigma `$ away from the model prediction, even in the simplified presentation of Fig. 1. This is confirmed by a more accurate analysis which takes into account the systematic errors due to sampling, together with the random errors and cosmic variance (Hudson, tv; Hoffman, tv). A model with a steeper drop in the power spectrum on the “blue” side of the peak, like CHDM, gives a somewhat higher amplitude and therefore a better fit to SMAC and LP10K. Hudson demonstrates further that the bulk velocity vectors as measured in all of these large-scale surveys (except BCG) are in fact consistent with each other at the 95% CL. Take for example the “high” LP10K value compared to the “low” SCII value. We note that the individual peculiar velocities of the 7 clusters common to these samples are consistent within the errors, and that the 15 SCII clusters that lie within the LP10K shell have a nominal bulk velocity of $`400\mathrm{km}\mathrm{s}^{-1}`$, closer to the LP10K result. I therefore do not see the need or justification for Willick (tv) to discard his own result; it is high, but consistent with the model and the other data, given the expected (big) errors. As pointed out by Strauss (tv), there is no clear understanding yet of the source of the discrepant BCG result, and we are therefore eagerly awaiting the upcoming, larger follow-up BCG survey for a possible resolution of this mystery. The bulk velocity of SNIa is computed by us, volume weighted and de-biased, from the 16 SNe inside $`60h^{-1}\mathrm{Mpc}`$ (out of the sample of 44 inside $`300h^{-1}\mathrm{Mpc}`$; Riess, tv). Even slightly higher values are obtained (with no volume weights) inside $`100h^{-1}\mathrm{Mpc}`$. The SN result still carries a large error because of the small number of objects in the current sample, but the accurate distances and the unlimited sampling potential promise that this distance indicator will eventually become very valuable in reducing the uncertainties on large scales.
Despite the apparent scatter on large scales, and the disputes over the bulk velocity being small or large, we see no significant discrepancies between the bulk velocity data and the models, and thus the bulk flow measurements do not introduce a problem for homogeneous cosmology. Even though there seems to be a slight preference for CHDM-like models, the bulk velocity is clearly not the tool for distinguishing between the variants of our standard picture. On the other hand, the fact that the model predictions for the bulk velocity are robust (especially once forced to roughly obey the normalization constraints from other data) allows us to use the observed amplitude of $`300\mathrm{km}\mathrm{s}^{-1}`$ bulk velocity on scales $`\stackrel{<}{}100h^{-1}\mathrm{Mpc}`$, in comparison with the observed $`\delta T/T\sim 10^{-5}`$ in the CMB, as a unique probe of the fluctuation growth rate. This provides the most convincing confirmation of our basic hypothesis that structure has evolved by gravitational instability (see Bertschinger, Gorski & Dekel 1989; Zaroubi et al. 1997a).

## 3 Local Neighborhood

The new SBF peculiar velocities (Tonry, Dressler, tv; Blakeslee et al., tv), in which the distance of each galaxy is estimated with unprecedented accuracy and Malmquist biases are small, allow a high-resolution study of the dynamics in our local cosmological neighborhood, within $`30h^{-1}\mathrm{Mpc}`$ of the LG. Fig. 2 demonstrates the potential of these data via a high-resolution map of the mass-density field as recovered by a Wiener Filter. This method (Zaroubi, Hoffman & Dekel 1999; originally Kaiser & Stebbins 1991) provides the most likely mean density field, given the noisy data and an assumed model for the power spectrum (in this case a tilted $`\mathrm{\Omega }=1`$ CDM model which best fits the M3 data). The method assumes that both the density fluctuations and the errors are Gaussian, and it uses linear gravity. Note that the WF induces variable smoothing as a function of the local noise; it allows a high-resolution analysis nearby, where the data are of high quality, with an effective smoothing of nearly G4 (a Gaussian window of $`4h^{-1}\mathrm{Mpc}`$), compared to $`\sim `$G10 with the M3 and SFI data on larger scales. While showing (on the left) the near side of the known Great Attractor (GA), the map reveals for the first time fine dynamical entities nearby. The counterparts of these structures in the galaxy distribution are clearly seen in the corresponding maps from the Nearby Galaxies Atlas (Tully & Fisher 1987, plates 15 and 19). For example, the Virgo and Ursa Major clusters, branching out from the GA along Y $`\sim 15h^{-1}\mathrm{Mpc}`$ all the way to X $`\sim 30`$, and the Fornax complex, stretching in the southern Galactic hemisphere (Y $`<0`$) out to X $`\sim 20`$. The general similarity between the galaxy clusters and the underlying mass attractors is encouraging. A quantitative comparison would allow a study of the non-trivial biasing relation between galaxies and dark matter in the local vicinity, on scales smaller than addressed so far. A sample of $`500`$ TF peculiar velocities within $`30h^{-1}\mathrm{Mpc}`$ is being completed by Pierce, Tully and coworkers, and the ENEAR survey will add ellipticals in this region. Together with the accurate SBF data, they present a new opportunity for high-resolution dynamical analysis of the local neighborhood. For example, these new data call for a revisited VELMOD analysis comparing peculiar velocities with a whole-sky redshift survey (Willick et al. 1997b; Willick & Strauss 1999). It should be borne in mind that a proper high-resolution analysis must treat nonlinear effects in a reliable way, which must be tested using proper mock catalogs (§9).
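The WF construction can be illustrated with a minimal one-dimensional sketch: given data $`d=s+n`$, a prior signal covariance $`S`$ (from an assumed power spectrum) and a noise covariance $`N`$, the mean field is $`S(S+N)^{-1}d`$. The covariances below are toy choices; note how the effective smoothing adapts to the local noise, as described above.

```python
import numpy as np

def wiener_filter(d, S, N):
    # Most likely mean field given Gaussian signal and noise priors.
    return S @ np.linalg.solve(S + N, d)

# Toy 1D example: correlated signal, heteroscedastic noise.
n = 200
x = np.arange(n)
S = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 10.0**2)  # prior covariance
noise_var = np.where(x < n // 2, 0.05, 1.0)   # "good data" in the first half
N = np.diag(noise_var)
rng = np.random.default_rng(1)
s = rng.multivariate_normal(np.zeros(n), S)                  # true signal
d = s + rng.normal(0.0, np.sqrt(noise_var))                  # noisy data
s_wf = wiener_filter(d, S, N)   # high resolution where noise is low
```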
## 4 Great Attractor and Perseus Pisces

The M3 and SFI datasets, soon to be cross-calibrated with the whole-sky Shellflow, and then to be complemented by ENEAR, provide a rich body of peculiar velocity data for a quantitative analysis of the dynamical fields on intermediate-to-large scales. By applying methods like POTENT to these data we obtain reliable reconstructions with uniform G12 smoothing out to $`60h^{-1}\mathrm{Mpc}`$ (at least in several directions). Fig. 3 shows the G12 density field in the Supergalactic plane as extracted by POTENT from the M3 peculiar velocities. The dominant structures are the Great Attractor (GA, left), Perseus-Pisces (PP, right), and Coma (back), with the big void stretching in between. Fig. 4 shows Supergalactic density maps as reconstructed from different datasets and by different methods. The VM2 calibration of M3, which has been tailored to maximize the agreement of M3 with the IRAS 1.2Jy redshift survey, hardly makes a difference to the density map (while it does reduce the bulk flow somewhat). The appearance of the GA is quite similar in M3 and SFI, while PP in SFI is lower and located further away, with the big void between the LG and PP deeper and more extended (and thus pointing to a larger value of $`\mathrm{\Omega }_\mathrm{m}`$, §7). There is a general similarity between the dynamical mass-density maps (for M3 more than SFI) and the IRAS 1.2Jy galaxy-density map, allowing a reconstruction of the local biasing field (§8). The WF mean-field density contrast at a given location is, by construction, correlated with the quality of the data there. The WF maps thus demonstrate that the M3 and SFI densities are similar in the regions of high-quality data, such as the GA region, and they highlight the robust large-scale dynamical features in our neighborhood. The M3 and SFI results differ mostly in their bulk velocities in shells near $`50`$-$`60h^{-1}\mathrm{Mpc}`$, a problem that Shellflow may help resolve.

## 5 Decomposition: Local and Tidal Components

There has been a lot of discussion over the years about which object is responsible for what velocity. In general, this discussion is conceptually confused, because the acceleration at a point is the integral of the density fluctuations over all of space and it cannot be uniquely assigned to any specific source. Nevertheless, given a specific volume, one can uniquely decompose the velocity at any point into two well-defined components: a “divergent” and a “tidal” component, due to the density fluctuations within the volume and outside it, respectively. A demonstration of such a decomposition is shown in Fig. 5, for the mean velocity field recovered using a WF from the M3 peculiar velocities, with respect to a sphere of radius $`60h^{-1}\mathrm{Mpc}`$ about the LG. The WF velocity field is first translated into a density field via linear theory, $`\delta \propto -\nabla \cdot v`$, and then the divergent velocity field is reconstructed by integrating the inverse of this Poisson equation inside the sphere of $`60h^{-1}\mathrm{Mpc}`$. The tidal field is obtained by subtracting the divergent component from the total velocity. The divergent component shows the main features of convergence and divergence within the volume, associated with the GA, PP, and the voids in between.
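A sketch of this divergent/tidal split, assuming the linear-theory relation between velocity and density: the divergent component is the gravity-like integral of $`\delta `$ over the interior of the sphere, and the tidal component is the residual. Grid handling and prefactors are illustrative.

```python
import numpy as np

# Linear theory: v_div(x) = (H0 f / 4 pi) Int d^3x' delta(x') (x'-x)/|x'-x|^3,
# with the integral restricted to the cells inside the chosen sphere.
def divergent_velocity(points, delta, cell_vol, H0f=65.0):
    v = np.zeros_like(points, dtype=float)
    for k, x in enumerate(points):
        dx = points - x                    # vectors to all source cells
        r3 = np.sum(dx**2, axis=1) ** 1.5
        r3[k] = np.inf                     # skip the self-term
        v[k] = np.sum((delta * cell_vol / r3)[:, None] * dx, axis=0)
    return H0f / (4.0 * np.pi) * v

# Tidal component on the same grid: v_tidal = v_total - v_div.
```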
The CMB velocity of the LG is about half divergent, namely due to GA, PP and such, and half tidal, due to mass fluctuations external to the $`60h^{-1}\mathrm{Mpc}`$ sphere. There is no significant bulk velocity in the divergent component inside that sphere (although there could be one, e.g., if there were only a single dominant off-center attractor in this sphere); the bulk velocity inside the sphere of $`60h^{-1}\mathrm{Mpc}`$ is all tidal, due to external fluctuations. When this bulk velocity is subtracted from the tidal component, one recovers the shear field, dominated by the quadrupole and higher moments. The major eigenvector of the shear tensor lies roughly along the line connecting the LG with the Shapley supercluster. A fit of a simplified toy model made of a single point-mass attractor to the tidal component yields a mass excess of $`4\times 10^{17}h^{-1}\mathrm{\Omega }^{0.4}M_{\odot }`$ at a distance of $`175h^{-1}\mathrm{Mpc}`$ in the direction of Shapley. This analysis thus allows us to extract information from the velocities in a given volume about the mass distribution outside this volume, and it can be applied to different datasets and different volumes. For example, when applied to the WF (or POTENT) velocities from the SFI data inside $`60h^{-1}\mathrm{Mpc}`$, the tidal bulk velocity turns out smaller than in the M3 case, but the residual shear field is very similar, indicating a similar quadrupole and external sources. When applied to the SBF data within $`30h^{-1}\mathrm{Mpc}`$, the decomposition yields similarly that the bulk velocity is dominated by the tidal field, and the major axis of the shear tensor lies roughly along the line connecting PP, LG and GA.

## 6 Very Large Scales

The new data of peculiar velocities for clusters of galaxies on large scales allow dynamical reconstruction beyond just the bulk velocity. As a demonstration, Fig. 6 shows G20 POTENT maps extracted from a combination of the SMAC, LP10K, and SN data out to $`120h^{-1}\mathrm{Mpc}`$. Beyond the familiar structures of GA and PP that dominate the inner $`60h^{-1}\mathrm{Mpc}`$, one can see the Coma structure at Y $`\sim 50`$-$`100h^{-1}\mathrm{Mpc}`$, and the near sides of the Shapley (Y $`>0`$) and Horologium (Y $`<0`$) overdensities behind the GA, at X $`\sim 100h^{-1}\mathrm{Mpc}`$ and beyond. The earlier hints from the tidal component of the velocities at smaller distances (Fig. 5) are now beginning to be confirmed by the local derivatives of the peculiar velocities directly measured at large distances. Another example of a large-scale study is the monopole analysis, which could constrain a local Hubble Bubble (Zehavi et al. 1997; Dale & Giovanelli, tv; Fruchter, tv) and thus modify the local estimates of $`h`$ and $`\mathrm{\Omega }_\mathrm{m}`$ (§7). Peculiar velocities of many objects, both inside the void and far outside it, are necessary for a reliable result. With more and more data at large distances, the monopole deviations from the universal Hubble flow could be determined with increasing accuracy, because the error $`\delta H=\delta v/r`$ is independent of distance (as $`\delta v\propto r`$). I think that the greatest potential for future studies of local cosmic flows lies in big surveys of SNe Ia (Riess, tv). They provide a distance indicator with only 5-10% error which can be observed out to hundreds of megaparsecs and is, in principle, of unlimited sampling density, limited in practice only by the patience and dedication of the observers.
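Returning to the toy model of §5, the point-mass attractor fit can be sketched as a nonlinear least-squares problem; the conversion of the fitted amplitude into a mass excess (the $`\mathrm{\Omega }^{0.4}`$ scaling quoted above) follows linear theory and is omitted here, and all names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Infall pattern of a single point-mass attractor at position xa with
# linear-theory amplitude K (absorbing G, M and the Omega dependence):
# v(x) = K (xa - x) / |xa - x|^3, i.e. a 1/r^2 flow toward the attractor.
def model(params, x):
    K, xa = params[0], params[1:4]
    d = xa - x
    r3 = np.sum(d**2, axis=1) ** 1.5
    return K * d / r3[:, None]

def fit_attractor(x, v_tidal, p0):
    # p0 = [K0, xa0, ya0, za0]; fit to the tidal velocity field on a grid.
    resid = lambda p: (model(p, x) - v_tidal).ravel()
    return least_squares(resid, p0)
```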
## 7 Cosmological Parameters

I share the discontent expressed by Strauss (tv) and others with the fact that the results from cosmic flows have been unjustifiably underrated by many in the community. This has a lot to do with bad PR on our side, where in many cases we tend to stress marginal apparent discrepancies between different results and take for granted the robust, valuable findings. An important feature of peculiar velocity data is that they allow us to address directly the dynamics of the total (cold) mass distribution, and thus bypass the difficulties introduced by the density biasing of luminous galaxies, which are unavoidable in the analysis of redshift surveys. For example, the spatial velocity variations provide direct constraints on the value of the cosmological density parameter $`\mathrm{\Omega }_\mathrm{m}`$. This makes them valuable even when the errors are not yet as small and under control as they could be, given that the available complementary data all involve additional parameters such as $`\sigma _8`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, or biasing parameters, and they all have their own appreciable errors. The results obtained on intermediate scales directly from M3 and SFI constrain $`\mathrm{\Omega }_\mathrm{m}`$ at the $`\pm 2\sigma `$ level to the range 0.3-1.0 (Primack, tv; based on Nusser & Dekel 1993; Dekel & Rees 1993; Bernardeau et al. 1995; and yet unpublished results from the newer data). Allowing the power spectrum to be of the CDM type, the maximum-likelihood estimate of $`\mathrm{\Omega }_\mathrm{m}`$ is $`0.5\pm 0.1`$ (Zaroubi et al. 1997b; Freudling et al. 1999). A similar analysis with more free parameters can constrain additional parameters that affect the power spectrum, such as the large-scale power index $`n`$, or the normalization parameter $`\sigma _8`$. The results from cosmic flows provide valuable orthogonal constraints to complementary data. For example, combined with the constraints from the global geometry of space-time based on high-redshift supernovae type Ia, which are roughly $`0.8\mathrm{\Omega }_\mathrm{m}-0.6\mathrm{\Omega }_\mathrm{\Lambda }=-0.2\pm 0.1`$ (Riess et al. 1998; Perlmutter et al. 1999), the velocity constraints on $`\mathrm{\Omega }_\mathrm{m}`$ confine the value of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ to $`0.8\pm 0.3`$ (Zehavi & Dekel 1999). Jointly with the available CMB constraints as well, one can obtain simultaneous constraints on three parameters, such as $`\mathrm{\Omega }_\mathrm{m}`$, $`\sigma _8`$ and $`h`$ (Zehavi & Dekel, tv), still without appealing to biasing parameters. The addition of constraints from the abundance of clusters (Eke, Cole & Frenk 1996), or from gravitational lensing, should allow us to confine these dynamical parameters with even higher accuracy. We heard evidence for the “coldness” of the local flow (Lake, tv; Van de Weygaert & Hoffman, tv; Klypin, tv), which left us still wondering whether it is really in conflict with standard models (Strauss, tv). I dare to report on a very preliminary result of a likelihood analysis based on M3 and SFI (extending Zehavi & Dekel, tv), which seems to favor a power spectrum that drops sharply at $`k\stackrel{>}{}k_{\mathrm{peak}}`$. Such a power spectrum could be obtained, for example, with a high fraction of baryonic or hot dark matter. A similar, independent hint comes from the SMAC data (Hudson et al., tv). This would add to the uncertainty of the results obtained under the assumption of $`\mathrm{\Lambda }`$CDM.
When a redshift survey is involved, the unknown biasing relation between galaxies and mass introduces another source of uncertainty, which should not be ignored. One should treat biasing properly before $`\mathrm{\Omega }_\mathrm{m}`$ can be extracted from the range of estimates of parameters like $`\beta `$ (§8). Many different clever ideas of how to estimate cosmological parameters from peculiar velocities can be thought of. Some turn out to be more discriminatory and less biased than others. A given idea can turn into a viable method, whose results should be considered seriously, only after the method has been tested and calibrated using proper mock catalogs, and a detailed error analysis is provided (§9). If this attitude is adopted by all practitioners, the field will regain the respectability it deserves in evaluating the cosmological parameters.

## 8 Biasing

Understanding the biasing relation between galaxies and mass is crucial for the purpose of translating measurements of bias-contaminated quantities such as $`\beta `$ (vaguely defined as $`\mathrm{\Omega }^{0.6}/b`$) into accurate estimates of $`\mathrm{\Omega }`$. On the other hand, the biasing can provide hints about the complicated physical processes involved in galaxy formation. The linear deterministic relation between the density fluctuations of galaxies and mass, $`\delta _\mathrm{g}(x)=b\delta (x)`$, has no theoretical basis and is not self-consistent. Indeed, the analytic analysis of halo biasing (Mo & White 1996) predicts that the biasing is non-linear, $`b=b(\delta )`$. Then, the biasing at any other smoothing scale must obey a different $`b(\delta )`$ and be non-deterministic, i.e., it involves scale dependence and scatter. In addition to shot noise, an inevitable source of scatter are the hidden variables affecting the efficiency of galaxy formation beyond its dependence on $`\delta `$, which are yet to be studied in detail (e.g., Blanton et al. 1998). Fig. 7 demonstrates some of the nontrivial qualitative features of halo biasing in $`N`$-body simulations. The nonlinear shape of $`b(\delta )`$ at $`\delta <0`$ is robust, while at $`\delta >0`$ it varies with mass, time and scale (see also Frenk, tv; Sheth, tv). In order to properly incorporate the biasing in the analysis of cosmic flows, one needs an appropriate formalism that quantifies non-trivial biasing. For example, in the formalism of Dekel & Lahav (1999), the linear and deterministic relation at a given scale and time is replaced by the conditional distribution $`P(\delta _\mathrm{g}|\delta )`$. The mean nonlinear biasing is characterized by the conditional mean $`\langle \delta _\mathrm{g}|\delta \rangle \equiv b(\delta )\delta `$, and the scatter by the conditional variance $`\sigma _\mathrm{b}^2(\delta )`$. To second order, the biasing is then defined by 3 parameters: the slope $`\widehat{b}`$ of the regression of $`\delta _\mathrm{g}`$ on $`\delta `$ (replacing $`b`$), a non-linearity parameter $`\stackrel{~}{b}/\widehat{b}`$, and a scatter parameter $`\sigma _\mathrm{b}/\widehat{b}`$. The ratio of variances $`b_{\mathrm{var}}^2`$ and the correlation coefficient $`r`$ mix these fundamental parameters. In the case shown in Fig. 7 at $`z=0`$, the overall non-linearity is $`\stackrel{~}{b}^2/\widehat{b}^2=1.08`$, and the scatter is $`\sigma _\mathrm{b}^2/\widehat{b}^2=0.15`$. These effects lead to differences of order 20-30% among the various measures of $`\beta `$.
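The following is a sketch of how the three biasing parameters can be estimated from paired $`(\delta ,\delta _\mathrm{g})`$ values, e.g. smoothed grid densities from a simulation as in Fig. 7; the binned estimators are illustrative stand-ins for the exact moment definitions of Dekel & Lahav (1999).

```python
import numpy as np

# Estimate b_hat, the nonlinearity b_tilde^2/b_hat^2 and the scatter
# sigma_b^2/b_hat^2 from samples of (delta, delta_g).
def biasing_parameters(delta, delta_g, nbins=20):
    s2 = np.mean(delta**2)
    b_hat = np.mean(delta_g * delta) / s2          # regression slope
    bins = np.quantile(delta, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.digitize(delta, bins) - 1, 0, nbins - 1)
    # binned conditional mean <delta_g | delta> = b(delta) * delta:
    cond_mean = np.array([delta_g[idx == i].mean() for i in range(nbins)])
    b_tilde2 = np.mean(cond_mean[idx] ** 2) / s2   # moment of b(delta)*delta
    sigma_b2 = np.mean((delta_g - cond_mean[idx]) ** 2) / s2  # scatter
    return b_hat, b_tilde2 / b_hat**2, sigma_b2 / b_hat**2
```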
An additional contribution to the span of $`\beta `$ estimates may have to do with the dependence of biasing on scale and on galaxy properties. These are deduced from simulations such as the one shown in Fig. 8 (e.g., Somerville et al. 1999), and can be measured from large redshift surveys with type identification, such as SDSS. Together, these nontrivial biasing features could explain much of the observational range, $`\beta _{\mathrm{IRAS}}=0.4-1.0`$ (Strauss, tv, Table 2). Any outliers should be suspected of underestimated errors, and re-examined using proper mock catalogs (§9). ## 9 Error Analysis The research field of cosmic flows, which started in the eighties with semi-qualitative analyses, has developed into a mature, quantitative phase in which the errors ought to be evaluated in detail. This will allow us to understand the range of estimates for parameters like $`\beta `$ and $`\mathrm{\Omega }_\mathrm{m}`$. Measurements based on new methods or data which are not accompanied by a detailed error analysis are not very useful at this point (though such results are still being presented at times). An appropriate tool for error analysis is an ensemble of Monte Carlo mock catalogs, in which both the nonlinear gravitational dynamics and the galaxy formation process are simulated properly, and then galaxies are sampled and measured in a way that mimics the observational procedure. Such mock catalogs allow an evaluation of both random and systematic errors. The development of the POTENT method (Dekel et al. 1999; Kolatt, tv) is an example. The recovery algorithm and the associated methods for measuring cosmological parameters have been calibrated based on mock catalogs by Kolatt et al. (1996). A key feature of these simulations was the effort to mimic the actual structure in our local neighborhood, generating the initial conditions using constrained realizations based on the density of galaxies in the IRAS 1.2Jy redshift survey. Such simulations allow for correlations between the errors and the underlying density field. The main limitations of these mock catalogs were the unsatisfactory treatment of nonlinear effects due to limited resolution in the simulations, the simplified way of identifying galaxies, and the fact that the simulations were initially restricted to an $`\mathrm{\Omega }_\mathrm{m}=1`$ standard CDM cosmology. It is now time for a new generation of mock catalogs that will overcome these limitations. New mock catalogs of this sort are becoming available, based on the GIF simulations (Eldar et al. 1999). Constrained realizations (based on IRAS 1.2Jy) serve as initial conditions that were evolved forward in time using a high-resolution parallel tree code, assuming either $`\tau `$CDM ($`\mathrm{\Omega }_\mathrm{m}=1`$) or $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_\mathrm{m}=0.3`$), both with power spectra that allow a simultaneous fit to COBE normalization and the observed cluster abundance. Particle positions and velocities were stored at 50 logarithmically spaced time-steps in order for different recipes of galaxy formation to be implemented post hoc in considerable detail. The physical processes include shock heating (and possibly radiative heating) of the pre-galactic gas, radiative cooling, star-formation, hydrodynamic (and possibly radiative) feedback from supernovae, and enrichment with heavy elements. Fig. 8 shows a slice from the $`\mathrm{\Lambda }`$CDM constrained simulation, comparing the dark-matter distribution with the galaxy distribution. 
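Schematically, the calibration loop that such an ensemble of mock catalogs enables looks as follows. Everything here is a stand-in: a fictitious one-parameter estimator with an invented, Malmquist-like distance error, meant only to show how ensemble statistics separate systematic bias from random error.

```python
import numpy as np

rng = np.random.default_rng(3)
beta_true = 0.6
n_mocks, n_gal = 200, 1000

offsets = []
for _ in range(n_mocks):
    g = rng.normal(0.0, 1.0, n_gal)                  # "predicted" velocities (known field)
    u = beta_true * g + rng.normal(0.0, 0.3, n_gal)  # "true" observed velocities plus noise
    u_obs = u * np.exp(rng.normal(0.0, 0.2, n_gal))  # toy distance-indicator error
    beta_hat = np.sum(u_obs * g) / np.sum(g * g)     # naive regression estimator
    offsets.append(beta_hat - beta_true)

offsets = np.array(offsets)
print(f"systematic bias = {offsets.mean():+.3f}, random error = {offsets.std():.3f}")
```

The ensemble mean exposes the systematic bias of the estimator and the scatter gives its random error; with real mocks the same loop is run with the full observational selection in place, as described next.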
Given the relevant properties for each of the galaxies, such as magnitude and internal velocity, one can “observe” a set of Monte-Carlo mock catalogs following the selection criteria and specifications of each of the observed catalogs (e.g., Diaferio et al. 1999). By applying one’s algorithm to these mock catalogs, for which the “true” underlying dynamics is known, one can quantify the random and systematic errors in detail. These simulations and mock catalogs will soon be made available as standard benchmarks for reconstruction methods. Designer mock catalogs for specific new datasets can be made to order. ## 10 Conclusion My main points are as follows:
* The observed amplitudes of bulk velocity out to $`100h^{-1}\mathrm{Mpc}`$ (with the marginal exception of the current BCG result) are consistent with our standard family of cosmological models. Full “convergence” is not really required on any scale. The main lesson from the bulk velocity is its general consistency with the gravitational growth rate of perturbations starting from the fluctuations at recombination as measured in the CMB.
* The SBF and other data provide an opportunity for high-resolution dynamical analysis of the local neighborhood out to $`30h^{-1}\mathrm{Mpc}`$. Virgo, Ursa Major and Fornax show up as local attractors, and can help us model the biasing relation on small scales, provided that nonlinear effects are treated properly.
* The dynamical structure of the GA is robust in the M3 and SFI datasets. The Shellflow data should improve the cross-calibration of North and South in the M3 and SFI data, which, together with other data, ought to allow an accurate evaluation of the bulk velocity out to $`70h^{-1}\mathrm{Mpc}`$.
* A decomposition of the velocity field into divergent and tidal components allows us to tell that a significant part of the bulk velocity inside $`60h^{-1}\mathrm{Mpc}`$ is due to external density fluctuations, and that the shear field points at the Shapley concentration as a massive attractor.
* The extended cluster and SN velocities enable dynamical analysis beyond just bulk flow out to $`120h^{-1}\mathrm{Mpc}`$, confirming the mass enhancements associated with Coma, Shapley and Horologium. The available data provide marginal evidence for a local Hubble Bubble, which should become less ambiguous with more SN data.
* Supernovae type Ia seem to be the most promising tool for cosmic flow analysis. The SN hunters are thus encouraged to pursue large surveys at low redshifts.
* Peculiar velocities do provide interesting constraints on cosmological parameters. For example, they confine $`\mathrm{\Omega }_\mathrm{m}`$ to the range 0.3-1.0 at 95% confidence, independent of biasing or other data, solely based on the assumption of Gaussian initial fluctuations. Combined with other data, this constraint is translated to constraints on other parameters, such as $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\sigma _8`$, $`h`$, etc.
* Galaxy biasing is an obstacle for translating a measured value of $`\beta `$ into an estimate of $`\mathrm{\Omega }_\mathrm{m}`$. Nontrivial features of the biasing scheme, including nonlinearity, stochasticity, scale dependence and type dependence, as predicted by models and simulations, can explain much of the span of estimates for $`\beta `$.
* Quantitative error analysis is essential in order to complete the transition of large-scale dynamics into a mature field.
Every method has to be calibrated with appropriate mock catalogs, in which nonlinear dynamics and galaxy formation are simulated properly. Such mock catalogs are being produced and offered as benchmarks. Where next? The field of cosmic flows enjoyed several influential conferences, starting in Hawaii, Rio and the Vatican in 1985-1987, then Paris in 1993, and now Victoria in 1999. Projecting ahead, we should look forward to meeting again with exciting new results around 2005. This is provided that somebody energetic like Stephane Courteau takes charge of organizing such a conference. ###### Acknowledgements. I am especially indebted to Ami Eldar, the current guardian of POTENT, for the computations and maps. I am grateful to all my collaborators, many of whom have participated in this conference. Our work has been partially supported by the US-Israel Binational Science Foundation (95-00330, 98-00217), by the Israel Science Foundation (950/95, 546/98), and by NASA (ATP NAG 5-301). I believe I represent all the conference participants in thanking Stephane Courteau for organizing this very successful conference. To appear in “Cosmic Flows: Towards an Understanding of Large-Scale Structure”, eds S. Courteau, M.A. Strauss, & J.A. Willick, ASP Conf. Series
# Quarkonia and Hybrids from the Lattice ## 1 Lattice QCD and heavy quarks The simplest way to treat a heavy quark on the lattice is to approximate it as static. The heavy quark propagators are then trivial to evaluate since they are products of time-directed gauge links. This enables potentials between such static colour sources at separation $`R`$ to be defined and the resulting spectrum of quarkonia can then be evaluated exactly from these potentials using the Schrödinger equation in the Born-Oppenheimer or adiabatic approximation. One advantage of this approach is that the continuum limit (lattice spacing $`a\to 0`$) can be readily taken. Moreover, lattice results at small separation in terms of the lattice spacing (small $`R/a`$) can be corrected by hand for the lattice artifacts which arise since the lattice spatial symmetry is cubic rather than the continuum case which has the full rotation group. In practice, however, the $`b`$ and $`c`$ quarks are not sufficiently heavy that this static approach is exact. Corrections can be arranged in powers of $`1/m_Q`$ and can be evaluated in principle using the heavy quark effective theory (HQET). Examples of such calculations are the determination of the spin-orbit and spin-spin potentials between static quarks as well as velocity-dependent terms in the static potential itself. To explore retardation effects, one needs a formalism in which the heavy quarks are moving. One promising approach is to expand the full theory as an effective lagrangian in powers of $`v/c`$ of the heavy quarks. This is the NRQCD scheme and the leading retardation effect in NRQCD comes from the $`\mathbf{p}\cdot \mathbf{A}`$ coupling between a quark colour charge in motion and the gluon field. Heavy quark propagators are relatively easy to evaluate in NRQCD since the heavy quarks do not propagate backwards in time. Because there are contributions in the effective lagrangian approach of the form $`1/m_Qa`$, the continuum limit as $`a\to 0`$ is not to be taken: instead extra terms providing matching with the continuum to higher powers in $`a`$ and involving higher powers of $`v/c`$ in the effective lagrangian are needed to increase accuracy. The coefficients of these terms should ideally be determined non-perturbatively but in practice the lowest order perturbative expressions (tadpole improved) are usually used. This makes it difficult to estimate the systematic errors in the NRQCD approach. Note that corrections to the lattice cubic symmetry to restore rotational invariance will come from such higher order terms. Without any approximation, the lattice formalism for relativistic quarks (Wilson-Dirac or staggered) can be applied directly to heavy quarks. Provided that $`m_Qa\ll 1`$, this approach is quite straightforward. Thus only $`c`$ quarks are tractable this way with present lattice spacings. This is a useful complement to the other two approaches which are less reliable for the case of the lighter $`c`$ quarks. ## 2 Quarkonia In the static approximation, a potential $`V(R)`$ between static quarks in the fundamental representation of colour and at separation $`R`$ can be extracted from the lattice. In the quenched approximation, this evaluation goes back to the early 1980’s. The salient features are a behaviour like $`-e/R`$ at small $`R`$ and like $`\sigma R`$ at large $`R`$. Here $`\sigma `$ is the string tension and $`e`$ is related to the running coupling $`\alpha _s`$ (indeed this is one way to determine $`\alpha _s`$ from the lattice). 
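Given such a Cornell-type parametrisation, $`V(R)=-e/R+\sigma R`$, the Born-Oppenheimer spectrum follows by solving the radial Schrödinger equation numerically. A minimal sketch follows; the quark mass and potential parameters are illustrative choices, not lattice fits.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc = 0.1973                 # GeV fm
mq = 4.7                       # GeV, illustrative b-quark mass
mu = mq / 2.0                  # reduced mass of the Q-Qbar pair
e = 0.3                        # illustrative Coulomb coefficient
sigma = 0.18 / hbarc           # string tension: 0.18 GeV^2 -> GeV/fm

r = np.linspace(1e-4, 2.5, 2500)   # fm; radial wavefunction vanishes at both ends
h = r[1] - r[0]
kin = hbarc**2 / (2.0 * mu * h**2)

def levels(l, n):
    """Lowest n eigenvalues (GeV) of the radial equation for angular momentum l."""
    V = -e * hbarc / r + sigma * r + hbarc**2 * l * (l + 1) / (2.0 * mu * r**2)
    E, _ = eigh_tridiagonal(2.0 * kin + V, -kin * np.ones(r.size - 1),
                            select='i', select_range=(0, n - 1))
    return E

E1S, E2S = levels(0, 2)
E1P = levels(1, 1)[0]
print("(1P-1S)/(2S-1S) =", (E1P - E1S) / (E2S - E1S))
```

The printed ratio can be compared directly with the quenched and experimental values of this level ratio quoted below.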
The shape of the lattice potential (labelled $`\mathrm{\Sigma }_g^+`$) and the wavefunctions and energy levels are illustrated in fig. 1. The $`b\overline{b}`$ spectrum evaluated from this lattice potential in the Born-Oppenheimer approximation was found not to agree precisely with experiment. One way to quantify this is that the energy level ratio $`\frac{1P-1S}{2S-1S}`$ is around 0.71 from the quenched lattice while it is 0.78 from experiment (for the spin averaged S and P-wave $`b\overline{b}`$ states). For a more thorough discussion of this, including the effect of velocity dependent terms in the potential, see ref. This can be understood as a consequence of the quenched approximation. The value of the Coulomb coefficient $`e`$ is expected in lowest order of perturbation theory to contain a factor $`(33-2N_f)^{-1}`$ and so will increase as sea quarks (with $`N_f=2`$, say) are included. This effect has been confirmed by explicit lattice calculation including such sea quarks. An illustration is shown in fig. 2. Indeed effects of including sea quarks seem to be comparable to those expected from perturbation theory although much work still needs to be done to include even lighter sea quarks in the vacuum so that the extrapolation to light sea quarks is under better control. The consequence of this increase in the depth of the potential at small $`R`$ is that the 1S level will be moved down in energy, resulting in an increase of the ratio $`\frac{1P-1S}{2S-1S}`$ to bring it more into line with experiment. For this reason we will use differences with the 2S energy to estimate the hybrid energy levels subsequently from quenched calculations. Note that, experimentally, the $`1P-1S`$ energy splitting is very similar for $`c\overline{c}`$ and $`b\overline{b}`$. This coincidence has been used as evidence that this quantity is insensitive to quark masses and hence a good point of comparison between lattice calculations and experiment. While this may be true for valence quarks, it is not likely to be valid for sea quarks, since, as we have discussed above, the $`1S`$ level is especially sensitive to the sea quark effects. The other régime in which sea quarks will make a definite impact is in the large $`R`$ region. It will become energetically favourable to create two heavy-light mesons ($`Q\overline{q}`$) of energy $`2m_{Q\overline{q}}`$ when this energy is less than $`V(R)`$. This phenomenon is known as string breaking since, as $`R`$ is increased, the colour flux between the static sources breaks with the formation of a light quark-antiquark pair. From the lattice mass values of the heavy-light mesons, this string breaking can be predicted to occur at around 1.2 fm - an illustration is shown in fig. 3. Lattice studies of the potential $`V(R)`$ using generalised Wilson loops have not reached sufficient precision to observe this directly. It is known from studies of the adjoint potential that a variational approach involving both the string states and the meson-antimeson states will be needed to obtain accurate energy estimates at these large separations — this is under way. NRQCD calculations of quarkonia show very similar results to the potential approach described above. A comparison for spin averaged masses of the NRQCD result with the potential approach shows no significant evidence for retardation effects. 
Indeed differences among NRQCD results arising from different lattice spacings and different treatment of higher order corrections are of the same magnitude as their difference from the potential result. This is illustrated in fig. 4 for the quarkonium $`1P`$ and $`2S`$ excitations. A quenched lattice study of quarkonia using relativistic quarks also shows similar results to those found by the other methods described above. ### 2.1 Quarkonia decays A very approximate study of some quarkonium decays can be made in the Born-Oppenheimer approximation using essentially the methods of atomic physics: overlaps of wave functions. In particular the decay to lepton pairs will be governed by the wave function at the origin. In practice the corrections to the non-relativistic approach for decays are much larger than for the energy values, so this approach is rather imprecise. Hadronic decays (such as $`\mathrm{{\rm Y}}(4S)\to B\overline{B}`$) are of interest because they proceed by string breaking: a light quark pair is created which then results in a pair of heavy-light mesons being produced. This process is accessible in principle from lattice calculations. For example, from the splitting of the energy levels caused by string breaking, one can estimate the decay rate. ## 3 Hybrid Mesons The static quark approach gives a very straightforward way to explore hybrid quarkonia. These will be $`Q\overline{Q}`$ states in which the gluonic contribution is excited. The ground state of the gluonic degrees of freedom has been explored on the lattice, and, as expected, corresponds to a symmetric cigar-like distribution of colour flux between the two heavy quarks. One can then construct less symmetric colour distributions which would correspond to gluonic excitations. The way to organise this is to classify the gluonic fields according to the symmetries of the system. This discussion is very similar to the description of electron wave functions in diatomic molecules. The symmetries are (i) rotation around the separation axis $`z`$ with representations labelled by $`J_z`$, (ii) CP with representations labelled by $`g`$ and $`u`$, and (iii) C$`R`$. Here C interchanges $`Q`$ and $`\overline{Q}`$, P is parity and $`R`$ is a rotation of $`180^{\circ}`$ about the mid-point around the $`y`$ axis. The C$`R`$ operation is only relevant to classify states with $`J_z=0`$. The convention is to label states of $`J_z=0,1,2`$ by $`\mathrm{\Sigma },\mathrm{\Pi },\mathrm{\Delta }`$ respectively. In lattice studies the rotation around the separation axis is replaced by a four-fold discrete symmetry and states are labelled by representations of the discrete group $`D_{4h}`$. The ground state configuration of the colour flux is then $`\mathrm{\Sigma }_g^+`$ ($`A_{1g}`$ on the lattice). The exploration of the energy levels of other representations has a long history in lattice studies. The first excited state is found to be the $`\mathrm{\Pi }_u`$ ($`E_u`$ on a lattice) - see fig. 5 for an illustration. This can be visualised as the symmetry of a string bowed out in the $`+x`$ direction minus the same deflection in the $`-x`$ direction (plus another component of the two-dimensional representation with the transverse direction $`x`$ replaced by $`y`$), corresponding to flux states from a lattice operator which is the difference of U-shaped paths from quark to antiquark of the form $`\sqcap -\sqcup `$. Recent lattice studies have used an asymmetric space/time spacing which enables excited states to be determined in a well controlled way. Results are shown in fig. 
6 for a large variety of gluonic excitations. These results confirm the finding that the $`\mathrm{\Pi }_u`$ excitation is the lowest lying and hence of most relevance to spectroscopy. From the potential corresponding to these excited gluonic states, one can determine the spectrum of hybrid quarkonia using the Schrödinger equation in the Born-Oppenheimer approximation. This approximation will be good if the heavy quarks move very little in the time it takes for the potential between them to become established. More quantitatively, we require that the potential energy of gluonic excitation is much larger than the typical energy of orbital or radial excitation. This is indeed the case, especially for $`b`$ quarks. Another nice feature of this approach is that the self energy of the static sources cancels in the energy difference between this hybrid state and the $`Q\overline{Q}`$ states. Thus the lattice approach gives directly the excitation energy of each gluonic excitation. The $`\mathrm{\Pi }_u`$ symmetry state corresponds to excitations of the gluonic field in quarkonium called magnetic (with $`L^{PC}=1^{+-}`$) and pseudo-electric (with $`1^{-+}`$) in contrast to the usual P-wave orbital excitation which has $`L^{PC}=1^{--}`$. Thus we expect different quantum number assignments from those of the gluonic ground state. Indeed combining with the heavy quark spins, we get a degenerate set of 8 states with $`J^{PC}=1^{--}`$, $`0^{-+}`$, $`1^{-+}`$, $`2^{-+}`$ and $`1^{++},0^{+-},1^{+-},2^{+-}`$ respectively. Note that of these, $`J^{PC}=1^{-+},0^{+-}`$ and $`2^{+-}`$ are spin-exotic and hence will not mix with $`Q\overline{Q}`$ states. They thus form a very attractive goal for experimental searches for hybrid mesons. Illustrations of the spectrum of such spin-exotic hybrid mesons are given in figs. 1 and 5. The eightfold degeneracy of the static approach will be broken by various corrections. As an example, one of the eight degenerate hybrid states is a pseudoscalar with the heavy quarks in a spin triplet. This has the same overall quantum numbers as the S-wave $`Q\overline{Q}`$ state ($`\eta _b`$) which, however, has the heavy quarks in a spin singlet. So any mixing between these states must be mediated by spin dependent interactions. These spin dependent interactions will be smaller for heavier quarks. It is of interest to establish the strength of these effects for $`b`$ and $`c`$ quarks. Another topic of interest is the splitting between the spin exotic hybrids which will come from the different energies of the magnetic and pseudo-electric gluonic excitations. One way to study this is using the NRQCD approach which enables the $`L^{PC}=1^{+-}`$ and $`1^{-+}`$ excitations to be separated in a spin averaged approach. Lattice results indicate no statistically significant splitting (see fig. 4) although the $`1^{+-}`$ excitation does lie a little lighter. This would imply, after adding in heavy quark spin, that the $`J^{PC}=1^{-+}`$ hybrid was the lightest spin exotic. In principle the NRQCD approach, by adding spin-dependent terms in the Lagrangian, can address the full splitting of the 8 levels. However, as we shall also discuss in connection with propagating quark approaches, the mixing of non spin exotic states with $`Q\overline{Q}`$ may confuse this situation. Including spin-dependent terms in a NRQCD study of hybrids does give a relatively large spin splitting among the triplet states. Unfortunately this study has only considered magnetic gluonic excitations so cannot address the splitting between spin exotic hybrids. 
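The quantum-number counting above is mechanical enough to script. The sketch below combines a gluonic excitation $`L^{PC}`$ with the heavy-quark spin, taking the $`Q\overline{Q}`$ pair in a relative S-wave (so its intrinsic parity is $`-1`$ and its charge parity $`(-1)^S`$), and flags the $`J^{PC}`$ values unreachable by ordinary fermion-antifermion states.

```python
def qqbar_jpc(jmax=4):
    """All J^{PC} available to an ordinary quark-antiquark pair."""
    allowed = set()
    for L in range(jmax + 2):
        for S in (0, 1):
            for J in range(abs(L - S), L + S + 1):
                allowed.add((J, (-1) ** (L + 1), (-1) ** (L + S)))
    return allowed

def hybrid_multiplet(Lg, Pg, Cg):
    """S-wave Q-Qbar (P = -1, C = (-1)^S) combined with a gluonic excitation L^{PC}."""
    return [(J, -Pg, Cg * (-1) ** S)
            for S in (0, 1) for J in range(abs(Lg - S), Lg + S + 1)]

allowed = qqbar_jpc()
sgn = lambda x: '+' if x > 0 else '-'
for name, lpc in [("magnetic 1^{+-}", (1, +1, -1)),
                  ("pseudo-electric 1^{-+}", (1, -1, +1))]:
    for J, P, C in hybrid_multiplet(*lpc):
        tag = "" if (J, P, C) in allowed else "   <-- spin-exotic"
        print(f"{name}: {J}^{sgn(P)}{sgn(C)}{tag}")
```

Running this reproduces the eight states listed above and singles out $`1^{-+}`$, $`0^{+-}`$ and $`2^{+-}`$ as the spin-exotic members.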
Confirmation of the ordering of the spin exotic states also comes from lattice studies with propagating quarks which are able to measure masses for all 8 states. We discuss this evidence in more detail below - see also fig. 7. Within the quenched approximation, the lattice evidence for $`b\overline{b}`$ quarks points to a lightest hybrid spin exotic with $`J^{PC}=1^{-+}`$ at an energy given by $`(m_H-m_{2S})r_0`$ = 1.8 (static potential); 1.9 (static potential, NRQCD); 2.0 (NRQCD). These results can be summarised as $$(m_H-m_{2S})\,r_0=1.9\pm 0.1$$ Here $`r_0`$ is defined implicitly by the static force as $`r^2F(r)=1.65`$ at $`r=r_0`$ and is a well measured quantity on a lattice derived from the static potential $`V(R)`$ at $`r\approx 0.5`$ fm. Within the quenched approximation, where different experimental observables differ by of order 10%, the overall scale is uncertain but we choose $`r_0^{-1}=390`$ MeV $`\pm `$ 10%. Using the experimental mass of the $`\mathrm{{\rm Y}}(2S)`$, this implies that the lightest spin exotic hybrid is at $`m_H=10.76(7)`$ GeV. Above this energy there will be many more hybrid states, many of which will be spin exotic. Some preliminary lattice studies have been made including sea quarks. As yet only sea quark masses down to the strange quark mass have been explored and hence the extrapolation to realistic sea quark masses is not yet well established. Indeed, pushing to lower sea quark masses is the main remaining challenge in lattice gauge theory. The light propagating quark case has been explored but no significant differences are found from using quenched vacua. The excited gluonic static potential has also been determined including sea quarks ($`N_f=2`$ flavours) and no significant difference is seen. Thus the quenched estimates given above are not superseded. Note, however, that hybrid states can mix with $`Q\overline{Q}q\overline{q}`$ states - for instance with their decay products as we shall discuss in the next section. This mixing is, in principle, enabled in a lattice study with sea quarks. ### 3.1 Light quark hybrid mesons Here we focus on lattice results for hybrid mesons made from light quarks using fully relativistic propagating quarks. There will be no mixing with $`q\overline{q}`$ mesons for spin-exotic hybrid mesons and these are of special interest. The first study of this area was by the UKQCD Collaboration who used operators motivated by the heavy quark studies referred to above. Using non-local operators, they studied all 8 $`J^{PC}`$ values coming from $`L^{PC}=1^{+-}`$ and $`1^{-+}`$ excitations. The resulting mass spectrum is shown in fig. 7 where the $`J^{PC}=1^{-+}`$ state is seen to be the lightest spin-exotic state with a statistical significance of 1 standard deviation. The statistical error on the mass of this lightest spin-exotic meson is 7% but, to take account of systematic errors from the lattice determination, a mass of 2000(200) MeV is quoted for this hybrid meson with $`s\overline{s}`$ light quarks. Although not directly measured, the corresponding light quark hybrid meson would be expected to be around 120 MeV lighter. One feature clearly seen in fig. 7 is that non spin-exotic mesons created by hybrid meson operators have masses which are very similar to those found when the states are created by $`q\overline{q}`$ operators. This suggests that there is quite strong coupling between hybrid and $`q\overline{q}`$ mesons even in the quenched approximation. This would imply that the $`\pi (1800)`$ is unlikely to be a pure hybrid, for example. 
A second lattice group has also evaluated hybrid meson spectra with propagating quarks from quenched lattices. They obtain masses of the $`1^{-+}`$ state with statistical and various systematic errors of 1970(90)(300) MeV, 2170(80)(100)(100) MeV and 4390(80)(200) MeV for $`n\overline{n}`$, $`s\overline{s}`$ and $`c\overline{c}`$ quarks respectively. For the $`0^{+-}`$ spin-exotic state they have a noisier signal but evidence that it is heavier. They also explore mixing matrix elements between spin-exotic hybrid states and 4 quark operators. Recently a first attempt has been made to determine the hybrid meson spectrum using full QCD. The sea quarks used have several different masses and an extrapolation is made to the limit of physical sea quark masses, yielding a mass of 1.9(2) GeV for the lightest spin-exotic hybrid meson, which they again find to be the $`1^{-+}`$. In principle this calculation should take account of sea quark effects such as the mixing between such a hybrid meson and $`q\overline{q}q\overline{q}`$ states such as $`\eta \pi `$. As illustrated in fig. 8, the calculations are performed for quite heavy sea quarks (the lightest being approximately the strange quark mass) and then a linear extrapolation is made. It is quite possible, however, that such mixing effects turn on non-linearly as the sea quark masses are reduced. The systematic error from this possibility is difficult to quantify. The three independent lattice calculations of the light hybrid spectrum are in good agreement with each other. They imply that the natural energy range for spin-exotic hybrid mesons is around 1.9 GeV. The $`J^{PC}=1^{-+}`$ state is found to be lightest. It is not easy to reconcile these lattice results with experimental indications for resonances at 1.4 GeV and 1.6 GeV, especially the lower mass value. Mixing with $`q\overline{q}q\overline{q}`$ states such as $`\eta \pi `$ is not included for realistic quark masses in the lattice calculations. This can be interpreted, dependent on one’s viewpoint, as either that the lattice calculations are incomplete or as an indication that the experimental states may have an important meson-meson component in them. ### 3.2 Hybrid meson decays One clear feature of heavy quark hybrid mesons is that they have very extended wavefunctions since the potential that binds them is relatively flat. This has implications for their production and decay. For instance, any vector state will only be weakly produced in $`e^+e^{-}`$ collisions because the wave function at the origin will be small. Given our mass estimates above, the open channels for decay of a $`J^{PC}=1^{-+}`$ hybrid include $`B\overline{B},B\overline{B}^{*},\eta _b\eta ,\eta _b\eta ^{\prime },\mathrm{{\rm Y}}(1S)\omega `$ and $`\mathrm{{\rm Y}}(1S)\varphi `$. Selection rules have been proposed for hybrid decays, for example that $`H\not\to X+Y`$ if $`X`$ and $`Y`$ have the same non-relativistic structure and each has $`L=0`$. This would rule out $`B\overline{B}`$ and $`B\overline{B}^{*}`$ and the analogous cases for charm quarks. This selection rule can be addressed directly from the static quark approach. The symmetries in this case of rotations about the separation axis, etc. have to be preserved in the strong decay. From the initial state with the gluonic field in a given symmetry representation, the $`q\overline{q}`$ pair must be produced in the decay in such a way that the combined symmetry of the quark pair and the final gluonic distribution matches the initial representation. 
For the ground state of the gluonic excitation (non-hybrid) we have $`J_z=0`$ and even $`CP`$. Thus, for this state to decay to $`(Q\overline{q})(\overline{Q}q)`$ with each heavy-light meson having $`L=0`$, the final gluonic distribution is also symmetric (actually it is essentially two spherical blobs around each static source binding the heavy light mesons). Then any $`q\overline{q}`$ pair production has to respect this symmetry and have $`J_z=0`$ and even $`CP`$. Since there is no orbital angular momentum, the $`CP`$ condition then requires $`S_{q\overline{q}}=1`$, a triplet state. This is just a derivation of what is called the $`{}^{3}P_{0}`$ model of decays: the light quark-antiquark is produced in a triplet state. This spin assignment can be tested by the ratio of $`B\overline{B}`$, $`B\overline{B}^{*}`$ and $`B^{*}\overline{B}^{*}`$ decays. For the $`J^{PC}=1^{-+}`$ hybrid we have a gluonic field with $`J_z=1`$ and odd $`CP`$. For the case of decay to a $`(Q\overline{q})(\overline{Q}q)`$ with each heavy-light meson having $`L=0`$, this would imply that the $`q\overline{q}`$ would have to be produced with $`J_z=1`$ and odd $`CP`$. This is not possible since the triplet state would have even $`CP`$ while the singlet state cannot have $`J_z=1`$. This is then equivalent to the selection rule described above. There will presumably be small corrections to this selection rule coming from retardation effects. Decay to $`(Q\overline{q})(\overline{Q}q)`$ with one heavy-light meson having a non-zero orbital excitation is allowed from symmetry but is not allowed energetically with conventional mass assignments for the P-wave excited B meson multiplet. Decays to $`(Q\overline{Q})(q\overline{q})`$ are also possible since there is enough excitation energy to create a light quark meson. This meson must be created in a flavour singlet state and the lightest candidates are $`\eta `$ and $`\omega `$. In a lattice context, this production is via a disconnected quark loop with $`s`$, $`u`$ and $`d`$ quark contributions of similar strength. The flavour singlet mixture of $`\eta `$ and $`\eta ^{\prime }`$ (mainly $`\eta ^{\prime }`$) and the singlet mixture of the vector mesons (which includes a substantial $`\omega `$ component) are expected to be coupled most strongly. So allowed decays are $`\eta _b\eta `$, $`\eta _b\eta ^{\prime }`$, $`\mathrm{{\rm Y}}(1S)\omega `$ and $`\mathrm{{\rm Y}}(1S)\varphi `$. Here the light meson must have $`J_z=1`$ and, together with the $`CP`$ constraint, this implies that the light meson must be in a $`P`$-wave with respect to the heavy quark meson. $`S`$-wave decays to $`\eta _bf_1`$ and $`\mathrm{{\rm Y}}(1S)h`$ are also allowed although there may be insufficient phase space. (Here the $`f_1`$ and $`h`$ are $`J^{PC}=1^{++}`$ and $`1^{+-}`$ flavour singlet mesons). As for the case of quarkonium decays, it is possible in principle to explore on the lattice some aspects of these decays. One can study matrix elements between ground states which are degenerate in energy such as the $`1^{-+}`$ hybrid and the $`\eta _b\eta `$ final state where the light quark mass is adjusted so that there is equal energy in both systems. This and similar lattice studies will enable some further guidance to be given for experimental searches for hybrid mesons. ## 4 Summary and Outlook One of the advantages of lattice studies is that, by varying the parameters such as quark masses, they can serve as very useful data to develop phenomenological models. 
One example is that the excitation spectrum of the potential between static quarks can be used to test QCD string excitation models. This has been much discussed - for a review see ref. At present, lattice studies are restricted to sea quark masses no lighter than strange quarks. The results with such sea quarks show rather modest changes from the quenched results as the sea quarks are included but this may change non-linearly as the sea quark masses are further reduced. Thus the systematic error associated with the extrapolation in sea quark mass is very hard to estimate. The only way to circumscribe this systematic error is by evaluating explicitly with lighter sea quarks and this requirement is the remaining big computational challenge in the lattice approach. Present lattice results for quarkonia are in quantitative agreement with experiment, taking into account the uncertainty in the extrapolation in sea quark mass. For hybrid mesons, the lattice gives a very natural way to define and study them. For light quark hybrids one can explore the spectrum for all $`J^{PC}`$ values, finding a lightest spin-exotic hybrid with $`J^{PC}=1^{-+}`$ and mass 1.9(2) GeV. This mass is significantly higher than mass values found experimentally (1.4 and 1.6 GeV). The situation for $`c\overline{c}`$ hybrid states is that neither the heavy quark lattice methods (potentials, NRQCD) nor the light quark methods (propagating quarks) are at their best in this quark mass region. Estimates for the lightest $`c\overline{c}`$ hybrid have been given from all three lattice methods and lie around $`\frac{H-1S}{1P-1S}\approx 3.0`$ but the systematic errors are quite large. The situation is much better controlled for $`b\overline{b}`$ hybrids. We expect the lightest $`b\overline{b}`$ hybrid to have $`J^{PC}=1^{-+}`$ and mass $`10.75\pm 0.10`$ GeV. It will be difficult to isolate such states experimentally - but well worth the effort. The most likely decay modes of this hybrid meson are to a $`b\overline{b}`$ ground state meson ($`\eta _b`$ or $`\mathrm{{\rm Y}}(1S)`$) with the emission of a flavour singlet light quark meson ($`\eta `$, $`\eta ^{\prime }`$, $`\omega `$ or $`\varphi `$). Future lattice calculations should be able to study these and other decays.
# Improving Detectors Using Entangling Quantum Copiers ## Abstract We present a detection scheme which, using imperfect detectors and imperfect quantum copying machines (which entangle the copies), allows one to extract more information from an incoming signal than with the imperfect detectors alone. Copying machines in general follow one of two approaches. One of the extreme cases is a classical copying machine, where measurements (destructive or non-destructive) are made on the original state, the results of which are then fed as parameters into some state preparation scheme which attempts to construct a copy of the original. This approach obviously allows one to generate an arbitrary number of copies, possibly all identical to each other. The opposite extreme is a fully quantum copying machine which, by some process unseen by external observers, creates a fixed number of copies, usually destroying the original in the process. Naturally in a realistic situation, noise will additionally degrade the quality of the copies, and copiers which utilise both of the processes above are obviously also possible. Ignoring for now the matter of the inevitable noise, the exact state of the original can only be determined with certainty by some measurement if all the possible states of the original are mutually orthogonal. In all other situations, any classical copying machine must have a finite probability of producing imperfect copies. In fact, by the well-known no-cloning theorem the same can be said of quantum copying machines. If the possible states of the original are not mutually orthogonal, there is no quantum copier which will always make perfect copies. So one might ask what good *are* quantum copiers, then? Well, the obvious answer is that for the situation where the possible originals are not orthogonal, often quantum copiers can create better copies than classical ones. Some examples are the UQCM for unknown qubits, or other copiers for two non-orthogonal qubits. While this promises the possibility of many applications of quantum copying in the future, few specific examples of uses for a quantum copier have been considered so far. When discussing practical applications, quantum copiers have mainly been put forward as something to be defended against by quantum cryptography schemes. This article presents an analysis of a possible application of quantum copiers: using them to improve detection efficiencies. We firstly note that in practice one always has restricted detector resources. In particular, this article treats the situation where the best available detectors have some efficiency less than one. As an example system, consider the case where one of a set of possible input states is to be distinguished by a measurement scheme, using (some number of identical) imperfect detectors. One also has some (identical) quantum copiers which can act on the possible input states. At first, let us suppose that the possible input states are mutually orthogonal, and that one has somehow acquired perfect quantum copiers for this set of states. Assume the copiers destroy the original, and produce two copies for simplicity. Then, an obvious way to take advantage of the copiers is to send the original through a quantum copier before trying to detect both copies separately (depicted in figure 1). This basically gives one a second chance to distinguish the input state, if the detection at the first copy fails. Consider a very simplified model of photodetection using this measurement scheme. 
Suppose one has perfect copiers, and noiseless photodetectors of efficiency $`\eta `$. That is, the probability of a count on the detector is $`\eta `$ if a photon is incident, and $`0`$ otherwise. With the copier set up as in figure 1, if any of the detectors register a count, one can with certainty conclude that a photon was incident. So, if a photon *is* incident, the probability of finding it is $$P_{\text{count}|\text{photon}}^{(1)}=\eta +(1-\eta )\eta $$ (1) as opposed to just $`\eta `$ with no copier, because one gets a “second chance” at detection. On the other hand, if no count is registered, then the probability that no photon was incident is $$P_{\text{nophoton}|\text{nocount}}^{(1)}=\frac{1-p}{1-\eta p(2-\eta )}$$ (2) where $`p`$ is the probability that a photon is incident on average, irrespective of the measurement result. The expression of equation (2) is always greater than $`\frac{1-p}{1-\eta p}`$, which is the probability if no copier is used. This increase reflects the added confidence that comes from both detectors failing to register the photon. We note that using quantum copiers, and not classical ones, is vital. A classical copier would have to rely on the same imperfect photodetectors, and would actually *reduce* the detection efficiency, since for a photon to be detected at one of the two copy detectors, it must first have been detected at the copier. This gives $`P_{\text{count}|\text{photon}}^{(1)}=\eta ^2(2-\eta )`$ which is always less than or equal to $`\eta `$, a result achieved without any copiers at all. Detection with the help of perfect quantum copiers, as briefly discussed above, is all very well, but what happens when the equipment used is noisy, and not 100% efficient? Consider the following, more realistic, model of photodetection. The possible states that are to be distinguished are the vacuum $`|0\rangle `$ and single photon $`|1\rangle `$ states. The *a priori* probability that the input state is a photon is $`p`$. A generalised measurement on some state $`\widehat{\rho }`$ can be modeled by a positive operator-valued measure (POVM) $`\{\widehat{A}_i\}`$ described by a set of $`n`$ positive operators $`\widehat{A}_i`$, such that $`\sum _{i=1}^n\widehat{A}_i=\widehat{I}`$, where $`\widehat{I}`$ is the identity matrix in the Hilbert space of $`\widehat{\rho }`$ (and of the $`\widehat{A}_i`$). The probability of obtaining the $`i`$th result by measuring on a state $`\widehat{\rho }`$ is then $$P_i=\text{Tr}\left[\widehat{\rho }\widehat{A}_i\right]$$ (3) Now suppose the photodetectors at one’s disposal are noisy and have quantum efficiency $`\eta `$. The effect of these can be modeled by the POVM $`\widehat{A}_+=\eta |1\rangle \langle 1|+\eta \xi |0\rangle \langle 0|`$ (5) $`\widehat{A}_{-}=(1-\eta )|1\rangle \langle 1|+(1-\eta \xi )|0\rangle \langle 0|`$ (6) where the operator $`\widehat{A}_+`$ represents a count, and the operator $`\widehat{A}_{-}`$ the lack of one. The parameter $`\xi \in [0,1)`$ controls the amount of noise. That is, $`\xi \eta `$ is the probability that the photodetector registers a spurious (“dark”) count when no photon is incident. We will model the quantum copier as one which has a probability $`\epsilon `$ of working correctly and producing perfect copies. Otherwise, the parameter $`\mu \in [-1,1]`$ determines (in a somewhat arbitrary way) what is produced. 
This can be written $`\widehat{\rho }_1=|1\rangle |d\rangle \langle 1|\langle d|\to \epsilon |1\rangle |1\rangle \langle 1|\langle 1|+(1-\epsilon )\widehat{\rho }_N\equiv \widehat{\rho }_1^{(1)}`$ (8) $`\widehat{\rho }_0=|0\rangle |d\rangle \langle 0|\langle d|\to \epsilon |0\rangle |0\rangle \langle 0|\langle 0|+(1-\epsilon )\widehat{\rho }_N\equiv \widehat{\rho }_0^{(1)}`$ (9) where $`|d\rangle `$ is a dummy state, which is fed into the copier, and becomes the second copy. It is included here to preserve unitarity in the perfect copying case $`\epsilon =1`$. The state produced upon failure of the copier, $`\widehat{\rho }_N`$, is independent of the original, and is given by $$\widehat{\rho }_N=(1-|\mu |)\frac{\widehat{I}}{4}+\{\begin{array}{ll}\mu \,|1\rangle |1\rangle \langle 1|\langle 1|\hfill & \text{ if }\mu >0\hfill \\ |\mu |\,|0\rangle |0\rangle \langle 0|\langle 0|\hfill & \text{ if }\mu \le 0\hfill \end{array}$$ (10) Here, $`\frac{1}{4}\widehat{I}`$ is the totally random mixed state. So, for $`\mu =0`$ a totally random noise state is produced upon failure to copy, for $`\mu =-1`$ vacuum, for $`\mu =1`$ photons in both copies, and for intermediate values of $`\mu `$ a linear combination of the three cases mentioned. This model (equations (8)-(10)) of the copier is an extension (to allow for inefficiencies) of the Wootters-Zurek copier, which has been extensively studied. In the ideal case ($`\epsilon =1`$), with the dummy input state in the vacuum ($`|d\rangle =|0\rangle `$), the transformation is: $$|0\rangle |0\rangle \to |0\rangle |0\rangle ,\qquad |1\rangle |0\rangle \to |1\rangle |1\rangle $$ (11) This transformation can be implemented by the simplest of all quantum logic circuits, the single controlled-not gate. These have recently begun to be implemented for some systems (although admittedly not for single-photon systems), and are the subject of intense ongoing research, because of their application to quantum computing. This means that similar schemes to the one considered here may become experimentally realisable in the foreseeable future. We also point out that the transformation (11) can also be considered an “entangler” rather than a copier. Consider its effect on the photon-vacuum superposition state $$\frac{1}{\sqrt{2}}(|0\rangle +|1\rangle )\to \frac{1}{\sqrt{2}}(|0\rangle |0\rangle +|1\rangle |1\rangle )$$ (12) This correlation between the copies is an essential property for the detection scheme presented here to be useful — otherwise one could not combine the results of the different detector measurements to better infer properties of the original. We will now examine how to determine whether the copying scheme we are proposing is more efficient. Consider the total amount of information about the input state that is contained in the measurement results. This is the (Shannon) mutual information $`I_m`$ per input state between some observer $`A`$ who knows with certainty what the original states are (perhaps because they were prepared by that observer), and another observer $`B`$ who has access to the measurement results of the detection scheme. This can be readily evaluated from the expression $$I_m=\underset{i,j}{\sum }P_{j|i}P_i\mathrm{log}_2\frac{P_{j|i}}{P_j}$$ (13) where $`i`$ ranges over the number of possible input states, and $`j`$ over the number of possible detection results. $`P_i`$ are the *a priori* probabilities that the $`i`$th input state entered the detection scheme, $`P_{j|i}`$ is the probability that the $`j`$th detection result was obtained given that the $`i`$th state was input, and $`P_j`$ is the marginal probability that the $`j`$th detection result was obtained overall. This mutual information has very concrete meaning even though in general, $`B`$ can never be actually certain what any particular input state was. 
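Equation (13) is straightforward to evaluate numerically. Here is a minimal sketch; the detector parameters in the demonstration at the end are arbitrary illustrative values.

```python
import numpy as np

def mutual_information(P_cond, priors):
    """Eq. (13): P_cond[i, j] = P(result j | input i); priors[i] = P_i."""
    joint = priors[:, None] * P_cond
    Pj = joint.sum(axis=0)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = joint * np.log2(P_cond / Pj)
    return np.nansum(terms)   # 0 * log 0 terms contribute zero

# Single noisy detector of eqs (5)-(6): rows = (vacuum, photon), cols = (no count, count)
eta, xi, p = 0.6, 0.01, 0.5
P = np.array([[1 - eta * xi, eta * xi],
              [1 - eta,      eta     ]])
print(mutual_information(P, np.array([1 - p, p])))
```

The same function applies unchanged to the copier-enhanced scheme once the (larger) table of outcome probabilities is assembled from the POVM elements.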
It is known that by using appropriate block-coding and error-correction schemes, $`A`$ can transmit to $`B`$ an amount of *certain* information that can come arbitrarily close to the upper limit $`I_m`$ imposed by the detection probabilities. In other words, $`I_m`$ is the maximum amount of information that $`A`$ and $`B`$ can share using a given detection scheme, if they are cunning enough. It follows, then, that the detection scheme which gives a greater information content $`I_m`$ about the initial state will be the potentially more useful one. The authors have actually shown that the Wootters-Zurek copier is the optimal quantum broadcaster of information when the information is decoded one symbol at a time, and this will be discussed in a future paper. From expression (13) it can be seen that $`I_m`$ depends on the *a priori* input probabilities (the parameter $`p`$ in the cases considered here). This leads one to surmise that (at least in general) various detection schemes may do relatively better or worse depending on how frequently the input is a photon. This is in fact found to be the case. However, in what follows, we will concentrate mainly on the $`p=\frac{1}{2}`$ case of equiprobable photons and vacuum, since this is the situation which allows the maximum amount of information to be encoded in the original message, and so is in some ways the most basic case. If the new detection scheme gives mutual information content $`I_m(\epsilon ,\eta ,\mu ,\xi ,N,p)`$ per input state, then $`\eta ^e(I_m(\epsilon ,\eta ,\mu ,\xi ,N,p))`$ is defined as the efficiency of a noiseless detector that would give the same mutual information content if it was used by itself in the basic scheme with no copiers. i.e. $$I_m(\cdot ,\eta ^e,\cdot ,0,0,p)=I_m(\epsilon ,\eta ,\mu ,\xi ,N,p)$$ (14) $`\eta ^e`$ is a one-to-one, monotonically increasing function of $`I_m`$, and so if (and only if) some detection scheme increases $`\eta ^e`$, it also increases the mutual information; thus $`\eta ^e`$ and $`I_m`$ are equivalent for ranking detection schemes in terms of effectiveness. $`\eta ^e`$ also has the advantage that for some cases of the new copier-enhanced detection scheme it is independent of the photon input probability $`p`$. Now it is time to ask the question: For what parameter values does the copier-enhanced detection scheme provide more information about the initial states than using a single detector? Consider firstly the simplest case of interest, where there are no spurious (dark) counts in the photodetectors ($`\xi =0`$), and one has a copier of efficiency $`\epsilon `$ which produces vacuum upon failure ($`\mu =-1`$). This will give some idea about the relationship between the detector and copier efficiencies required, leaving the effects of noise for later consideration. As mentioned previously, in this situation the effective efficiency is independent of $`p`$, and with one layer of copiers ($`N=1`$), it is found to be given by the simple expression: $$\eta _{(1)}^e=\epsilon \left[1-(1-\eta )^2\right]$$ (15) Since this is independent of $`p`$, introducing a second lot of copiers is equivalent to replacing $`\eta `$ in the above expression by $`\eta _{(1)}^e`$ i.e. $`\eta _{(n+1)}^e=\epsilon \left[1-(1-\eta _{(n)}^e)^2\right]`$. 
In fact, in the limit of arbitrarily many layers of copiers, the effective efficiency approaches $$\underset{N\to \mathrm{\infty }}{lim}\eta ^e=2-\frac{1}{\epsilon }$$ (16) One finds that effective efficiency is improved (over $`\eta ^e=\eta `$) by the copier scheme whenever $$\epsilon >\frac{1}{2-\eta }$$ (17) Since no random noise is introduced by either copier or detector, improvement is achieved whenever more copiers are added, to arbitrary order $`N`$. A few things of interest to note:
* The copier efficiency required is always above $`\eta `$ and above $`\frac{1}{2}`$.
* A gain in efficiency can be achieved even with quite poor copiers — for relatively small detector efficiencies $`\eta `$ (which occur for photodetection in practice), the copier efficiency required is only slightly above half!
* For very good detectors, to get improvement, the copier efficiency $`\epsilon `$ has to be slightly larger than the detector efficiency $`\eta `$.
* For low efficiencies, the relative gain in efficiency can be very high, and can reach approximately $`2^N`$ for very poor detectors and very good copiers.
To examine how much improvement can be achieved in more detail, consider the case where the efficiency of the detectors is $`\eta =0.6`$. This is a typical efficiency for a pretty good single-photon detector at present. This is shown by the solid lines in figure 2. Note how quite large efficiency gains are achievable even when the copier efficiency is slightly over the threshold useful value of $`\epsilon =0.714`$ (from equation (17)), and how adding more copiers easily introduces more gains at first, but after three levels of copiers, adding more becomes a lot of effort for not much gain. To conclude, it can be seen that when one is restricted to using imperfect detectors (as is always the case), more detection efficiency can be gained by employing entangling quantum copiers such as a controlled-not gate. In fact if the efficiency of the detectors is far from 100% (such as in single-photon detection) the copier does not have to be very efficient itself, and significant gains in detection can still be made. We note that although a detailed analysis was carried out for the case of single-photon detection, the basic scheme can be readily generalised to other types of detectors. From (17), it can be seen that to be useful, the quantum copiers must be successful with an efficiency $`\epsilon `$ over 50% and somewhat greater than the detector efficiency $`\eta `$. It is not generally clear how feasible this is for various physical systems, or measurement schemes that one might wish to employ. With current technology it is often still easier to make measurements on a system, rather than entangling it with other known systems; however, this varies from measurement to measurement and from system to system. The physical processes involved in measurement and quantum copying are often quite different: the former requires creating a correlation between a quantum system and a macroscopic pointer, whereas the latter involves creating quantum entanglement between two similar microscopic states. Efficient detection depends on correlating the system with its environment in a strong, yet controlled way, whereas quantum copying depends on isolating the system from its environment. One thus supposes that the usefulness of a scheme such as the one outlined here will depend on the system and measurements in question, due to the relative ease of implementing detection and controlled quantum evolution in those systems. 
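As a closing numerical check, the recursion behind equations (15)-(17) can be iterated in a few lines, together with a direct POVM evaluation of the one-copier case via equation (3); the parameter values are illustrative only.

```python
import numpy as np

eta, eps = 0.6, 0.75      # illustrative detector and copier efficiencies (xi = 0)

def eta_eff(eta, eps, N):
    """Effective efficiency after N layers of copiers, iterating eq. (15)."""
    x = eta
    for _ in range(N):
        x = eps * (1.0 - (1.0 - x) ** 2)
    return x

for N in range(6):
    print(N, round(eta_eff(eta, eps, N), 4))
print("N -> infinity limit, eq. (16):", 2.0 - 1.0 / eps)
print("improvement requires eps >", 1.0 / (2.0 - eta), " (eq. (17))")

# Cross-check of eq. (15) for a perfect copier (eps = 1) with the POVM of eqs (5)-(6):
Ap = np.diag([0.0, eta])                                 # basis {|0>, |1>}, no dark counts
Am = np.eye(2) - Ap
rho = np.kron(np.diag([0.0, 1.0]), np.diag([0.0, 1.0]))  # |1>|1><1|<1| after copying
print("P(count | photon):", 1.0 - np.trace(np.kron(Am, Am) @ rho).real,
      " vs 1 - (1-eta)^2:", 1.0 - (1.0 - eta) ** 2)
```

For $`\eta =0.6`$, $`\epsilon =0.75`$ the iterates climb from 0.6 towards the limit $`2-1/\epsilon =0.667`$, illustrating the diminishing returns beyond a few copier levels noted above.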
WJM would like to acknowledge the support of the Australian Research Council.
# Evolution of iron core white dwarfs ## 1 Introduction Although the theory of electron degeneracy is widely accepted amongst the astronomical community, until very recently its observational basis was not solid enough. Obviously, the most useful objects for testing such a theory are white dwarf (WD) stars. It is known that the structure of WDs is almost completely dominated by electron degeneracy. The theory of WD stars predicts a mass - radius relation (Chandrasekhar 1939) that should be subject to observational test. Because of the great importance of the theory of electron degeneracy in several astrophysical circumstances, great effort has been devoted to improving our knowledge of the mass - radius relation for WD stars. In this regard, recent observations carried out by the astrometric satellite Hipparcos have allowed Provencal et al. (1998) to substantially improve the mass and radius determination for 20 WDs, either single or members of binary systems. From very accurate parallaxes, these authors determined precise mass and radius values, without invoking mass - radius relations, thus making these WDs excellent targets for testing stellar degeneracy directly. On the basis of these observations, Provencal et al. suggest in particular that at least three objects of their WD sample appear to have an interior chemical composition consistent with iron. Indeed, GD 140, EG50 and Procyon B have stellar radii that, for their observed masses, are significantly smaller than those corresponding to a carbon - oxygen (CO) interior. Needless to say, such results, if correct, are clearly at odds with the standard theory of stellar evolution, which predicts a CO interior for intermediate mass WDs. It is nevertheless worth noticing that the only proposal for a physical process able to account for the formation of iron WDs is, to our knowledge, by means of explosive ignition of electron - degenerate ONeMg cores (Isern, Canal & Labay 1991). In their calculations, Isern et al. find that, depending critically upon the ignition density and the velocity of the burning front, such an explosive ignition may lead to the formation of neutron stars, thermonuclear supernovae or iron WDs. Interestingly enough, the implications of the Provencal et al. results about the possible existence of a population of iron WDs, coupled with the lack of modern theoretical studies of iron - core WDs in the literature, make it worthwhile to perform a detailed exploration of the structure and evolution of such objects. As a matter of fact, to our knowledge the only study of the evolution of iron WDs was performed long ago by Savedoff, Van Horn & Vila (1969); however it was based on very simplified assumptions such as the neglect of convection, crystallization and electrostatic corrections to the equation of state. At first glance, one may think that the evolution of these objects could be markedly different from that of their CO counterparts. To place this suspicion on a more quantitative basis, we have carried out a comprehensive study of the properties of iron - core WDs with the emphasis placed on their evolution. The present paper is organized as follows: In Section 2 we briefly describe our evolutionary code. In Section 3 we summarise our main results. Finally, Section 4 is devoted to discussing the implications of our results and to making some concluding remarks. 
## 2 The Evolutionary Code The calculations we present below were performed with the same evolutionary code that we employed in our previous works on WDs, and we refer the reader to Althaus & Benvenuto (1997, 1998) for details about both the physical ingredients we incorporated and the procedure we followed to generate the initial models. In particular, the equation of state for the low - density regime is that of Saumon, Chabrier & Van Horn (1995) for hydrogen and helium plasmas, while the treatment for the high - density regime (solid and liquid phases) includes ionic contributions, Coulomb interactions, partially degenerate electrons, and electron exchange and Thomas - Fermi contributions at finite temperature. The harmonic phonon contribution is that of Chabrier (1993). High - density conductive opacities and the various mechanisms of neutrino emission for different chemical compositions (<sup>4</sup>He, <sup>12</sup>C, <sup>16</sup>O, <sup>20</sup>Ne, <sup>24</sup>Mg, <sup>28</sup>Si, <sup>32</sup>S, <sup>40</sup>Ca and <sup>56</sup>Fe) are taken from the works of Itoh and collaborators (see Althaus & Benvenuto 1997 for details). In addition to this, we include conductive opacities and Bremsstrahlung neutrinos for the crystalline lattice phase following Itoh et al. (1984a) and Itoh et al. (1984b; see also erratum), respectively. The latter becomes relevant for WD models with iron cores since, as will become clear later, these models begin to develop a crystalline core at high stellar luminosities. In Fig. 1 we show the conductive opacity at some selected temperature values for iron, and for 50% carbon - 50% oxygen plasmas. The downward steps are due to the crystallization of the plasmas. Indeed, thermal conductivity in the crystalline phase becomes a factor 2-4 smaller near the melting temperature (see Itoh et al. 1984a and Itoh, Hayashi & Kohyama 1993 for details), which, as we shall see, will affect the rate of cooling of iron WD models. With regard to neutrino emission rates, we have considered photo, pair, plasma and Bremsstrahlung neutrino contributions. The total emission rate is shown in Fig. 2 for the same cases as considered in Fig. 1. For a given temperature, the dominant neutrino emission process at low densities is photo neutrinos. At higher densities there is a bump in the emission rate due to plasma neutrinos, whereas at still higher densities Bremsstrahlung neutrinos take over. Clearly, at high densities, neutrino energy losses for iron plasmas become much more pronounced than for CO plasmas. With respect to the energy transport by convection, we adopt the mixing length prescription usually employed in most WD studies. Finally, we consider the release of latent heat during crystallization in the same way as in Benvenuto & Althaus (1997). ## 3 Evolutionary Results In this section we shall describe the main results we have found on the evolution of iron WDs. We have considered models with masses of $`M/\mathrm{M}_{\odot }`$ = 0.50, 0.60, 0.70, 0.80, 0.90 and 1.0. In view of the lack of a detailed theory about the formation of iron WDs, we have taken into account for each stellar mass different chemical stratifications. Specifically, we have adopted pure iron cores comprising 99 (hereafter referred to as pure iron models), 75, 50 and 25 per cent of the total stellar mass plus (in the last three cases) a CO envelope. We have also examined the evolution of models with a homogeneous composition of iron and CO, by adopting a mass fraction for iron of 0.25, 0.50 and 0.75. 
In the interests of comparison, we have also computed the evolution of standard CO WD models. Since standard WDs of different stellar masses are expected to have different internal compositions of CO, we have adopted the chemical profiles (kindly provided to us by I. Domínguez) resulting from recent evolutionary calculations of WD progenitors (Salaris et al. 1997). These profiles are also adopted in the CO envelope of our iron models. In all the cases just described we have taken into account the presence of an outer helium envelope with mass $`M/M_{\odot }`$ = 0.01 and adopted a metallicity $`Z`$ of 0.001. In the case of pure iron models, the transition from iron to helium layers is assumed to be almost discontinuous. We have also analysed the effect of a hydrogen envelope on iron models by including a pure hydrogen envelope with $`M/M_{\odot }`$ = $`10^{-5}`$ on top of the helium envelope (in this case we considered $`Z=0`$). Models with and without a hydrogen envelope will be hereinafter referred to as DA and non-DA, respectively. The main results of the present work are summarised in Figs. 3 to 21. For completeness, we have extended our calculations to the case of a stellar mass of 0.4 $`M_{\odot }`$, objects which would most probably have their origin in binary systems. We begin by examining Fig. 3, in which we show the neutrino luminosity in terms of the photon luminosity for pure iron and CO models. As expected, in high-mass models neutrino emission fades away at higher luminosities than in low-mass models. Such behaviour is found for both types of interior composition considered here. However, because an iron plasma is a more efficient neutrino emitter than a CO one (see Fig. 2), for a given mass and photon luminosity iron WDs have an appreciably higher neutrino luminosity. Consequently, neutrino emission is the dominant energy-release channel for iron WDs down to luminosities markedly lower than those corresponding to standard CO WDs. The relation between central temperature and density for iron-rich and CO models is shown in Fig. 4. As we have considered models only in the WD regime, these curves are almost vertical straight lines, i.e. the density of the objects remains almost constant during the computed stages. As is well known, the higher the mass the higher the central density of a WD, but note that for a fixed mass value, iron-rich WDs have central densities appreciably higher than those corresponding to CO WDs. This is true even for models containing an iron core of only 25 per cent of the total mass. The higher densities are the result of basically two effects. One is the higher mean molecular weight per electron of an iron plasma ($`\mu _e=2.151344`$ for iron, whereas $`\mu _e=`$ 2.000000 and 1.999364 for carbon and oxygen, respectively). The other effect is that, because of its high atomic number ($`Z=26`$), an iron plasma is subject to much stronger interactions than a CO one. Thus, for a given particle number density, since the corrective terms to the pressure are negative, an iron plasma exerts less pressure, forcing a higher internal density than in CO WDs. It is worth mentioning that as the objects cool down, their internal temperature goes to zero. Thus, the global structure of the objects asymptotically tends to that predicted by Hamada & Salpeter (1961), as expected (for the sake of clarity, we do not include the results corresponding to the Hamada & Salpeter zero-temperature objects).
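As a quick cross-check of the quoted $`\mu _e`$ values, which are simply $`A/Z`$ with $`A`$ the isotopic mass in atomic mass units, a one-line computation with the standard isotopic masses:

```python
# mu_e = (isotopic mass in amu) / Z; the masses below are the standard values
# for 56Fe, 12C and 16O.
isotopes = {"56Fe": (55.9349, 26), "12C": (12.0000, 6), "16O": (15.9949, 8)}
for name, (mass, z) in isotopes.items():
    print(name, mass / z)   # 2.15134, 2.00000, 1.99936 -- cf. the quoted values
```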
Other important characteristics of the models are their radii and surface gravities, shown in Figs. 5 to 9 as a function of effective temperature. Hot WD models have larger radii, due mainly to the inflation of their non- and partially-degenerate outer layers. It is quite noticeable that iron WDs have much smaller radii (and correspondingly higher gravitational accelerations) than their CO counterparts. Note also that, for the same stellar mass and iron content, models with pure iron cores are clearly less compact than homogeneous iron models. These results can be compared with the observational data of Provencal et al. (1998) to infer the core chemical composition of their WD sample. In particular, we add in Figs. 7 and 8 the observational data for the enigmatic case of EG 50, for which Provencal et al. (1998) quoted a surface gravity of $`\mathrm{log}g=8.10\pm 0.05`$, an effective temperature of 21700 K $`\pm `$ 300 K and a stellar mass (derived without relying on a mass-radius relation) of 0.50 $`\pm `$ 0.02 $`M_{\odot }`$. Having the stellar mass, we are in a position to estimate the core composition of this star. In this context, we find that these values are compatible with a pure iron model of $`M\approx 0.52M_{\odot }`$. They are also consistent with homogeneous models with an iron abundance by mass greater than $`\sim `$ 0.75. In Fig. 9 we show the effect of a thick hydrogen envelope on our pure iron models. The presence of such an envelope gives rise to somewhat smaller gravities, an opposite trend to what is needed to fit the current observations of EG 50. For completeness, we show in Fig. 10 the central temperature versus photon luminosity relation for some selected models. Note the change of slope at high luminosities, reflecting the end of the neutrino-dominated era. As stated above, we have also considered crystallization in our models. We assumed that crystallization sets in when the plasma coupling constant $`\mathrm{\Gamma }`$ reaches the value $`\mathrm{\Gamma }_m=180`$, where $`\mathrm{\Gamma }=2.275\times 10^5{\displaystyle \frac{\rho ^{1/3}}{T}}({\displaystyle \frac{Z}{A}})^{1/3}Z^{5/3},`$ (1) and where $`A`$ and $`Z`$ denote, respectively, the averages over abundances by number of the atomic mass and charge of the different species of ions (see Segretain et al. 1994). Our choice of the $`\mathrm{\Gamma }_m`$ value is in accordance with studies carried out by Ogata & Ichimaru (1987) and Stringfellow, De Witt & Slattery (1990), and it has been used in recent WD evolutionary calculations such as those of Segretain et al. (1994) and Salaris et al. (1997). The growth of the crystal phase in our models is shown in Figs. 11 and 12 as a function of photon luminosity. Very large differences are found between the crystallization of iron WDs and that of standard CO WDs. If we assume a pure iron composition, then $`Z_{Fe}^2/A_{Fe}^{1/3}=176.69`$, whereas for pure carbon and oxygen $`Z_C^2/A_C^{1/3}`$= 15.72 and $`Z_O^2/A_O^{1/3}`$= 25.39, respectively. Thus, an iron plasma reaches crystallization conditions much earlier during the evolution. In addition, the interior of iron WDs is much denser than in the standard case, as noted earlier. As a result, crystallization of iron WDs sets in at luminosities so high that neutrino emission is still the main agent of energy release. This is in clear contrast with the case of CO WDs, which undergo crystallization at luminosities low enough that neutrino emission has already faded away.
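A minimal numerical sketch of Eq. (1), assuming (as in the text) $`\rho `$ in g cm<sup>-3</sup> and $`T`$ in K, with an illustrative density, makes the $`Z^2/A^{1/3}`$ scaling of the crystallization temperature explicit:

```python
def coupling(rho, T, Z, A):
    """Plasma coupling constant Gamma of Eq. (1); rho in g/cm^3, T in K."""
    return 2.275e5 * rho**(1.0 / 3.0) / T * (Z / A)**(1.0 / 3.0) * Z**(5.0 / 3.0)

def t_crystallization(rho, Z, A, gamma_m=180.0):
    """Temperature at which Gamma reaches gamma_m for a one-component plasma."""
    return 2.275e5 * rho**(1.0 / 3.0) * (Z / A)**(1.0 / 3.0) * Z**(5.0 / 3.0) / gamma_m

rho = 1.0e7                                 # g/cm^3, an illustrative density
t_fe = t_crystallization(rho, 26.0, 56.0)   # pure iron
t_c = t_crystallization(rho, 6.0, 12.0)     # pure carbon
print(coupling(rho, t_fe, 26.0, 56.0))      # = 180 by construction
print(t_fe / t_c)                           # = 176.69 / 15.72, roughly 11
```

At fixed density, iron thus reaches $`\mathrm{\Gamma }_m`$ at a temperature about an order of magnitude higher than carbon, which is why its crystallization sets in so much earlier along the cooling track.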
It should be noted that for the 1.0 $`M_{\odot }`$ and 0.40 $`M_{\odot }`$ pure iron models, the onset of crystallization occurs at luminosities 2000 and 50 times higher, respectively, than those corresponding to CO WDs of the same mass. In the case of models with a pure iron core plus a CO envelope, Fig. 11 depicts an interesting feature worthy of comment. Indeed, the weaker electrostatic coupling of a CO plasma as compared to an iron plasma leads to a halt in the growth of the crystalline phase once the iron core has completely crystallized. It is only at much later evolutionary stages that the crystallization process sets in again. It is worth mentioning that in the case of iron WDs, the analytic treatment of the growth of the crystal phase presented in Benvenuto & Althaus (1995) is no longer valid, simply because in that work an isothermal interior was assumed. As is well known, the crystallization of the interior of WDs has two effects on the evolution. One is the release of latent heat, which acts as an energy source that delays the evolution. The other is the change of the specific heat at constant volume $`C_v`$. In fact, the ions are no longer free but undergo small oscillations around their corresponding lattice equilibrium positions. In such a case, $`C_v=3kD(\theta _D/T)`$, where $`D`$ is the Debye function and $`\theta _D=1.74\times 10^3(2Z/A)\rho ^{1/2}`$ is the Debye temperature. Eventually, at low enough temperatures, $`C_v\propto (T/\theta _D)^3`$. Thus, the crystallized interior has a lower ability to store heat, giving rise to an acceleration of the cooling process. In order to get a deeper insight into the role of these effects in the evolution of iron WDs, we present in Figs. 13 and 14 the profile of the interior luminosity relative to its surface value, at selected stages of the evolution of iron and CO WDs with 0.6 $`M_{\odot }`$ and 1.0 $`M_{\odot }`$, as a function of the fractional mass. Let us first discuss the results corresponding to the 0.6 $`M_{\odot }`$ models (Fig. 13). In both the iron and CO cases, the curves labelled 1 correspond to an evolutionary stage at which the neutrino luminosity dominates, giving rise, as is well known, to negative luminosity values. From then on, in the case of CO WDs (lower panel of Fig. 13), neutrinos fade away, producing a linear, smooth profile (corresponding to the evolutionary stages limited by curves 2 and 3). Afterwards, the interior crystallizes, as is reflected by the change of slope in curves 3, 4 and 5. Such a change of slope is due to the release of latent heat. Note also the outward direction of the propagation of the crystallization front as cooling proceeds. Let us now discuss the results corresponding to the case of the 0.6 $`M_{\odot }`$ iron WD (upper panel of Fig. 13). As crystallization occurs very early, some of the evolutionary stages we have selected correspond to high luminosities. The curve labelled 1 reaches negative $`L`$ values because of neutrino emission, and curve 2 corresponds to a stage soon after the onset of crystallization. From then on, the crystal front moves outwards during the early stages of evolution. Note the large differences found in the luminosity profile in this case as compared to the standard one, largely due to the fact that much of the released latent heat is lost by neutrino emission. In the case of the 1.0 $`M_{\odot }`$ models the differences are dramatically enhanced (see Fig. 14).
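The Debye suppression of $`C_v`$ invoked above can be made concrete with a short numerical sketch; the density is an illustrative iron-core value, and $`\theta _D`$ follows the expression quoted in the text:

```python
import numpy as np
from scipy.integrate import quad

def debye_function(x):
    """D(x) = (3/x^3) * int_0^x t^4 e^t / (e^t - 1)^2 dt."""
    integrand = lambda t: t**4 * np.exp(t) / np.expm1(t)**2
    return 3.0 * quad(integrand, 0.0, x)[0] / x**3

rho = 1.0e8                                    # g/cm^3, illustrative density
theta_d = 1.74e3 * (2 * 26 / 56) * rho**0.5    # Debye temperature for iron, K
for T in (1.0e7, 1.0e6, theta_d / 40.0):       # theta_D/T ~ 40 is reached late
    print(T, debye_function(theta_d / T))      # C_v/3k, strongly suppressed at low T
```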
The CO model (lower panel) shows profiles very similar to those of the previous CO case, with the exception of the last stage of evolution included, for which, as a result of the decrease in the specific heat $`C_v`$, the profile is no longer linear. In the case of the iron model, curve 1 corresponds to a stage prior to the onset of crystallization. Curves 2, 3, 4 and 5 represent stages at which the crystal front moves outwards. Over these evolutionary stages, the latent heat is lost through neutrino emission. Finally, curve 7 corresponds to a stage so advanced that most of the luminosity is provided by compression of the outermost layers. Indeed, most of the iron core has a very low luminosity because of its very low specific heat ($`\theta _D/T\sim 40`$). Let us remind the reader that $`C_v`$ is proportional to the number of particles. Thus, per gram of material, $`C_v`$ is inversely proportional to the atomic weight of the constituent isotopes. In consequence, for a given stellar mass value, iron WDs have a total heat-storing capacity lower than that corresponding to the standard case by a factor of about $`56/(0.5\times 12+0.5\times 16)=4`$ (assuming a mixture of 50% carbon and 50% oxygen). Thus, it is clear that iron WDs should cool faster than CO ones, as is shown in Figs. 15-17. In these figures we show the time spent by the objects in cooling down to a given luminosity (at the luminosity stages shown in the figures, our choice of the zero-age point, $`\mathrm{log}L/\mathrm{L}_{\odot }`$ = 0, is immaterial). We find that in reaching a given low luminosity value, pure iron models need only about a fifth of the time a CO WD needs! The abrupt change in the rate of cooling of pure iron models (as reflected by the change of slope in the age-luminosity relationship) at the high-luminosity end of these figures is worthy of comment. In fact, it occurs when the crystal front has just reached the outer layers of the iron core. Because of the discontinuity of the iron conductive opacity at the melting temperature (see Fig. 1), these layers suddenly become much more transparent. The opacity of these layers plays a significant role in regulating the heat flow from the interior to outer space; thus such a discontinuity in the opacity is expected to affect the rate of cooling. The situation is more clearly illustrated by Fig. 18, in which the behaviour of the opacity (conductive plus radiative) is shown in terms of the outer mass fraction for a pure iron model with $`M/\mathrm{M}_{\odot }`$ = 0.6 at different evolutionary stages. Note that very deep in the star, conduction is very efficient, so the discontinuity in the opacity plays a minor role there. It is only when crystallization reaches the very outer layers of the iron core that the cooling rate is affected. This explains why the induced effect on the cooling times is negligible for models having only half of their stellar mass composed of iron. The effect of a hydrogen envelope on the cooling of pure iron models is depicted in Fig. 17. As expected, the presence of a hydrogen envelope increases the evolutionary times at very low luminosities. In part, this is due to the excess thermal energy the star has to get rid of when degeneracy reaches the base of the convection zone, thus producing a bump in the cooling curves (see D'Antona & Mazzitelli 1989 for a discussion in the context of CO WD models).
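The factor of about 4 derived above is just the ratio of the number of ions per gram in the two compositions; as a one-line check:

```python
# The high-temperature ionic heat capacity per gram scales with the number of
# ions per gram, i.e. as 1/A.  For a 50% C - 50% O mixture versus pure 56Fe:
n_ions_co = 0.5 / 12.0 + 0.5 / 16.0
n_ions_fe = 1.0 / 56.0
print(n_ions_co / n_ions_fe)   # ~4.1, cf. 56/(0.5*12 + 0.5*16) = 4 in the text
```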
We should mention that in the present calculations we have not investigated the effect of the separation of carbon and oxygen during crystallization on the age of our CO models (see Salaris et al. 1997 and references therein); nor did we take into account the effect of iron-carbon phase separation analysed by Xu & Van Horn (1992). In view of the fact that iron-rich WDs crystallize at high luminosities, we judge that the induced delay in the cooling times of our iron-rich models brought about by chemical redistribution at solidification would be of minor importance, although more detailed calculations would be required to place this assertion on a more quantitative basis. Assuming a constant birthrate of WDs, we have computed single luminosity functions (LF) as $`dt/d\mathrm{log}(L/\mathrm{L}_{\odot })`$ for our set of models. The results are displayed in Figs. 19 to 21. For clarity, we arbitrarily fixed the value of $`dt/d\mathrm{log}(L/\mathrm{L}_{\odot })`$ to -5, -6 and -7 at $`\mathrm{log}L/\mathrm{L}_{\odot }`$ = 0 for the two sets of iron and CO compositions, respectively. As in the previous figures, the differences between iron and CO WDs are large. For iron objects at stages for which the crystalline phase is still growing, the slope of the LF is rather larger than in the CO case, especially for high-mass objects. A striking feature shown by these figures is the spikes characterizing the LFs of the pure iron sequences, which are directly understood in terms of the discussion presented in the foregoing paragraph. As explained, when the crystal front reaches the outer layers of the iron core, they suddenly become much more transparent (as a result of the discontinuity of the conductive opacity at the melting temperature, see Fig. 18), giving rise to an abrupt change in the rate of cooling of the models, which translates into a discontinuity in the derivative of the evolutionary times. Thus, the LF shows a step downwards and then increases steadily again down to the lowest $`L`$ considered here. We want to mention that the behaviour of our theoretical luminosity functions at the lowest luminosity values computed here may be affected by the extrapolation of the available opacity data. ## 4 Discussion and conclusions Motivated by recent observational evidence that seems to indicate the existence of white dwarf (WD) stars with iron-rich cores (Provencal et al. 1998), we have studied the evolution of iron-core WDs. In this paper we have constructed detailed evolutionary sequences of WD stars with different chemical stratifications. Specifically, we have computed the evolution of models with masses of $`M/\mathrm{M}_{\odot }`$ = 0.40, 0.50, 0.60, 0.70, 0.80, 0.90 and 1.0 with pure iron cores embracing 99, 75, 50 and 25 per cent of the total stellar mass plus (in the last three cases) a CO envelope. We have also examined the evolution of models with a homogeneous composition of iron and CO, by adopting a mass fraction for iron of 0.25, 0.50 and 0.75. For comparison with standard results we have also computed the evolution of CO WD models having the same masses. All of the models were assumed to have an outer helium envelope of $`M/M_{\odot }`$ = 0.01, and in some cases we analysed the effects of the presence of a hydrogen envelope. In computing the structure of these objects we employed a detailed evolutionary code updated to account properly for the physics of iron plasmas.
In a set of figures we examined neutrino luminosities, central densities and temperatures, radii, surface gravities, crystallization, internal luminosity profiles, ages and the luminosity function (at constant birthrate). Our results indicate that iron WDs evolve in a very different way compared to standard CO WDs. These differences are due to the fact that the mean molecular weight per electron for iron is higher than for CO plasmas, and also to the stronger corrections to the ideal degenerate equation of state, which cause the pressure of iron plasmas to be below the values corresponding to the CO case. As a consequence of the denser interior, iron WDs have smaller radii, greater surface gravities, higher internal densities, etc., compared to standard CO WDs of the same mass. We have compared the predictions of our models with the current observational data for the WD EG 50, for which Provencal et al. (1998) have suggested an iron-rich composition. In particular, we found that this object is consistent with WD models having a pure iron composition. Likewise, very noticeable are the differences encountered in the crystallization process, which occurs at very high luminosities. For example, the onset of crystallization occurs, in the case of a 1 $`M_{\odot }`$ pure iron model, at a luminosity 2000 times higher than that corresponding to a CO object with the same stellar mass. Because iron particles are much heavier than carbon or oxygen ones, the specific heat per gram is much lower, indicating that the interior of iron WDs is able to store comparatively small amounts of heat. Thus, it is not surprising that the cooling process at very low luminosities proceeds much faster than in the standard case. We have also computed the single luminosity function for each computed sequence. It is nevertheless worth noticing that, due to the uncertainties present in the birth process, we have not constructed an integrated luminosity function for the computed iron WD sequences. In any case it should be noticed that if pure iron WDs, to which class EG 50 seems to belong, were very numerous, some of them would have had time enough to evolve to luminosities much lower than that corresponding to the observed fall-off of the WD luminosity function ($`\mathrm{log}L/\mathrm{L}_{\odot }\sim -4.5`$, see Leggett, Ruiz & Bergeron 1998 for details). Thus, from a statistical point of view, the lack of a tail in the observed luminosity function strongly indicates a low spatial density of pure iron WDs and may be employed to constrain it quantitatively. Detailed tabulations of the results presented in this paper are available upon request to the authors at their email addresses. ## Acknowledgments We are deeply indebted to our anonymous referee, whose suggestions and comments greatly improved the original version of this work. We are also grateful to I. Domínguez for sending us the chemical profiles of her pre-white dwarf models. O.G.B. wishes to thank Jan-Erik Solheim and the LOC of the 11th European Workshop on White Dwarfs held at Tromsø (Norway) for their generous support, which allowed him to attend that meeting, where he became aware of the observational results that motivated the present work.
## 1. Introduction The theory of reproducing kernels was developed in \[A\], \[S\]. A recent review of the theory is \[Sa1,Sa2\], where the reader can find many references. The basic result in \[A\] is the existence and uniqueness of a reproducing kernel Hilbert space (RKHS) corresponding to any self-adjoint nonnegative-definite kernel $`K(p,q)`$, $`p,q\in E`$, where $`E`$ is an abstract set. Let $`H`$ be a Hilbert space of functions defined on $`E`$, with $`H\subset L^2(E)`$. Assume that $`K(\cdot ,q)`$ and $`K(p,\cdot )`$ belong to $`H`$. Let us assume that the linear operator $`K:H\to H`$, with the kernel $`K(p,q)`$, is injective. It is defined on all of $`H`$ since $`K(p,\cdot )\in H`$ by assumption. Define the RKHS $`H_K`$ inner product by the formula $$(f,g)_{H_K}:=[f,g]:=(K^{-1}f,g),$$ $`1.1`$ where $`(f,g):=(f,g)_{L^2(E)}`$, $`K^{-1}`$ is the operator inverse to $`K:H\to H`$, and $$Kf:=\int _EK(p,q)f(q)\,dq.$$ $`1.2`$ The injectivity assumption can be dropped, but then one has to consider $`K`$ on the factor space $`H/N(K)`$, where $`N(K):=\{f:Kf=0\}`$ is the null-space of $`K`$. In the literature (e.g. see \[Sa1,2\]) the inner product in the RKHS was not defined explicitly by formula (1.1). The definition of the inner product in $`H_K`$ given in \[A\] (and presented in \[Sa1, p.36\]) is implicit and contains a limiting procedure which is not described explicitly. In particular, it is not clear over which sets of $`p`$ and $`q`$ the summation in formula (11) in \[Sa1, p.36\] is taken. In \[A\] such a summation is taken over a finite set of points $`p\in E`$ and $`q\in E`$. The finite sums $`\sum _pX_pK(\cdot ,p)`$ used in \[Sa1, p.36\] do not form a complete Hilbert space $`H_K`$, and the completion procedure is not discussed in sufficient detail in \[Sa1\]. Our definition (1.1) of the inner product in $`H_K`$ coincides with the definition in \[Sa1, p.36, formula (11)\] if one takes $`f`$ and $`g`$ in (1.1) to be finite linear combinations of functions of the type $`K(p,\cdot )`$ and $`K(\cdot ,q)`$. The reproducing property of the kernel $`K(p,q)`$ can be stated as follows: $$[f(\cdot ),K(\cdot ,q)]=f(q),$$ $`1.3`$ and this formula can be easily derived from the definition (1.1) of the inner product in $`H_K`$: $$[f(\cdot ),K(\cdot ,q)]:=(K^{-1}f,K(\cdot ,q))=(f,K^{-1}K(\cdot ,q))=(f,I(\cdot ,q))=f(q).$$ $`1.4`$ Here we have used the selfadjointness of the operator $`K^{-1}`$, and the fact that the distributional kernel $`I(p,q)`$ of the identity operator $`I`$ is $`\delta (p-q)`$, the delta function, which is well defined on the RKHS because the value $`f(q)`$ for any $`q\in E`$ is a bounded linear functional on $`H_K`$: $$|f(q)|\le \|f\|\,\|K(\cdot ,q)\|,$$ $`1.5`$ where $`\|f\|:=[f,f]^{\frac{1}{2}}`$ is the norm in $`H_K`$. The basic results of this paper are: 1) the representation of the inner product in $`H_K`$ by formula (1.1), and 2) a clarification of the conditions from \[Sa1,2\] under which the range of the general linear transform, defined by formula (2.1) below, can be characterized and inversion formulas for this transform obtained. ## 2. Linear transforms and RKHS Define $$f(p):=LF:=\int _T\overline{h(t,p)}F(t)\,dm(t),$$ $`2.1`$ where $`T`$ is some subset of $`\mathbb{R}^n`$, $`dm(t)`$ is a positive measure on $`T`$, and $`h(t,p)`$ is a function on $`H_0\times H`$, where $`H_0:=L^2(T,dm(t))`$. The linear operator $`L:H_0\to H`$ is injective if the set $`\{h(t,p)\}_{p\in E}`$ is total in $`H_0`$. This means that if for some $`F\in L^2(T,dm(t))`$ the following equation holds: $$0=\int _Th(t,p)F(t)\,dm(t)\quad \forall p\in E,$$ $`2.2`$ then $`F(t)=0`$. Let us assume that $`L`$ is injective.
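As an illustration of the definitions above (this example is ours, not from \[A\] or \[Sa1,2\]): discretizing $`E=[0,1]`$ turns $`K`$ into a symmetric positive-definite matrix, $`K^{-1}f`$ into a linear solve, and the reproducing property (1.3) into an algebraic identity that can be checked numerically.

```python
import numpy as np

# Discretized check of [f, K(., q)] = f(q) with [f, g] := (K^{-1} f, g).
# E = [0, 1] on an n-point grid; the exponential kernel is positive definite.
n, ell = 200, 0.2
p = np.linspace(0.0, 1.0, n)
dq = p[1] - p[0]
K = np.exp(-np.abs(p[:, None] - p[None, :]) / ell)

f = np.sin(3 * np.pi * p) * np.exp(-p)   # an arbitrary test function
x = np.linalg.solve(K * dq, f)           # x = K^{-1} f (operator inverse)

iq = 123                                 # grid index of the evaluation point q
print(dq * x @ K[:, iq], f[iq])          # [f, K(., q)] reproduces f(q)
```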
The operator $`L^{*}:H\to H_0`$ acts by the formula: $$(LF,g)_H=(F,L^{*}g)_{H_0},$$ thus $$L^{*}g=\int _Eh(t,p)g(p)\,dp.$$ $`2.3`$ Recall that we assume in this paper that $`K`$ and $`L`$ are injective, so that $`K^{-1}`$ and $`L^{-1}`$ exist. Let us state a simple lemma. ###### Lemma 2.1 One has $$[LF,LG]=(F,G)_{H_0},$$ $`2.4`$ provided that the RKHS $`H_K`$ is defined by the kernel $$K(p,q):=\int _T\overline{h(t,p)}h(t,q)\,dm(t).$$ $`2.5`$ ###### Proof One has $$[LF,LG]=(K^{-1}LF,LG)=(L^{*}K^{-1}LF,G)_{H_0}=(F,G)_{H_0},$$ $`2.6`$ where the operator $`L`$ in (2.6), after the first equality sign, is considered as an operator from $`H_0`$ into $`H`$. The last step in (2.6) is based on the relation: $$L^{*}K^{-1}L=I.$$ $`2.7`$ Let us assume that $`L^{-1}`$ is a closed, possibly unbounded, densely defined operator from $`R(L)\subset H`$ into $`H_0`$, where $`R(L)`$ is the range of $`L`$. Then formula (2.7) is equivalent to the relation: $$K=LL^{*}.$$ $`2.8`$ Indeed, in this case one has: $$K^{-1}=(LL^{*})^{-1}=(L^{*})^{-1}L^{-1},$$ $`2.9`$ so that (2.7) and (2.8) are equivalent. Note that under our assumptions about $`L^{-1}`$ the operator $`(L^{*})^{-1}`$ does exist and $`(L^{*})^{-1}=(L^{-1})^{*}`$. Let us prove that (2.5) is equivalent to (2.8) and, consequently, to (2.7). Using (2.1) and (2.3), one gets: $$LL^{*}g=\int _T\overline{h(t,p)}\int _Eh(t,q)g(q)\,dq\,dm(t)=\int _EK(p,q)g(q)\,dq,$$ $`2.10`$ where $`K(p,q)`$ is defined by (2.5). Since $`g`$ in (2.10) is arbitrary, this formula implies (2.8), as claimed. Therefore (2.5) implies (2.8) and, consequently, (2.7), and (2.7) implies (2.4) according to (2.6). Lemma 2.1 is proved. ∎ In \[Sa1\] it is proposed to characterize the range $`R(L)`$ of the linear map (2.1) as the RKHS with the reproducing kernel (2.5). It follows from Lemma 2.1 that if one puts the inner product (1.1) of $`H_K`$, with $`K(p,q)`$ defined in (2.5), on the set $`R(L)`$, then $`L:H_0\to H_K`$ is an isometry (see (2.6)). In general one cannot describe the norm in $`H_K`$ in terms of some standard norms, such as the Sobolev norm. Therefore the above observation (that $`R(L)=H_K`$ if one puts the norm of $`H_K`$ onto $`R(L)`$) does not solve the problem of the characterization of the range of the operator $`L:H_0\to H`$ in terms of such standard norms. This point was discussed in \[R2\]. On the other hand, some cases are known in which one can characterize the norm in $`H_K`$ in terms of Sobolev norms (positive or negative) \[R1\]. It is also claimed in \[Sa1,2\] that an inversion formula exists for the general linear transform (2.1) (\[Sa2, p.56, formula (31)\]). This inversion formula is derived under the assumption \[Sa2, p.58\] that $`H_K`$ is the space $`L^2(E,d\mu )`$, where $`d\mu `$ is some positive measure. This assumption means that the kernel $`A(p,q)`$ of the operator $`K^{-1}`$ is a distribution of the form $`\delta (p-q)w(p)`$, where $`w(p)`$ is the density of the measure $`d\mu (p)`$, that is, $`d\mu (p)=w(p)dp`$, and $`\delta (p-q)`$ is the delta function. This and the definition of the inverse operator, namely $`KK^{-1}=I`$, written in terms of kernels, imply: $$\delta (p-q)=\int _E\delta (p-s)K(s,q)\,d\mu (s)=K(p,q)w(p),$$ $`2.11`$ where we have assumed that $`w(p)>0`$ is a smooth function, with $`v(p):=\frac{1}{w(p)}>0`$. Thus (2.11) implies that the reproducing kernel $`K(p,q)`$ must be of the form: $$K(p,q)=v(p)\delta (p-q),$$ $`2.12`$ if one assumes that the inner product in $`H_K`$ is the same as in $`L^2(E,d\mu )`$, as indeed S. Saitoh assumes in \[Sa2, p.56\] and in \[Sa1\].
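In finite dimensions Lemma 2.1 reduces to a short computation; a sketch, with an arbitrary random matrix standing in for the operator $`L`$ of (2.1):

```python
import numpy as np

# Finite-dimensional check of Lemma 2.1: with K = L L^T (Eq. (2.8)),
# [LF, LG] := (K^{-1} L F, L G) reduces to (F, G)_{H_0}, Eq. (2.4).
rng = np.random.default_rng(0)
m, n = 8, 5                                   # dim H = 8, dim H_0 = 5
L = rng.standard_normal((m, n))               # injective: rank n almost surely
K = L @ L.T                                   # rank n, so K^{-1} acts on R(L)

F, G = rng.standard_normal(n), rng.standard_normal(n)
u = np.linalg.lstsq(K, L @ F, rcond=None)[0]  # u = K^{-1} L F on the range of L
print(u @ (L @ G), F @ G)                     # the two inner products coincide
```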
Assumption (2.12) is not satisfied in general, and is essentially equivalent to the formula $`L^{-1}=L^{*}`$, where $`L`$ is now an operator from $`H_0`$ into $`H_K`$. Let us prove the above claim. If $`L`$ is considered as an operator from $`H_0`$ into $`H_K`$, then formula (2.7) can be written as $$L^{*}L=I,\quad L:H_0\to H_K,$$ $`2.13`$ and formula (2.6) takes the form: $$\|LF\|_{H_K}=\|F\|_{H_0}.$$ $`2.14`$ Thus $`L:H_0\to H_K`$ is an isometry (see (2.14)) and $`L^{*}`$ is the left inverse of $`L`$ (see (2.13)). We assume that $`L`$ is injective, that is, the null-space of the operator $`L`$ is trivial: $`N(L)=\{0\}`$. Since, by definition, $`H_K`$ consists of the elements of $`R(L)`$, that is, $`R(L)=H_K`$, and $`L^{*}`$ is injective on $`R(L)`$ by (2.13), it follows that $$L^{*}=L^{-1},$$ $`2.15`$ where $`L^{-1}:H_K\to H_0`$ is a bounded linear operator. The claim is proved. Formula (2.15) is equivalent to the inversion formula (31) in \[Sa2, p.56\], while (2.14) is equivalent to formula (33) in \[Sa2, p.57\]. It is now clear that the assumptions in \[Sa1,2\] are equivalent to the assumption that $`L:H_0\to H_K`$ is a unitary operator, so that its inverse is $`L^{*}`$. This assumption makes the description of the range of $`L`$ and the inversion formula trivial. It is suggested in \[Sa1\] and in \[Sa2\] to use the norm $`\|f\|_{H_K}=\|L^{-1}f\|_{H_0}=\|F\|_{H_0}`$ on $`R(L)`$, where $`L`$ is an injective linear operator, and it was claimed in these works that one obtains in this way a characterization of the range of the operator $`L`$ defined by formula (2.1). In fact this suggestion does not give a nontrivial and practically useful characterization of the range $`R(L)`$ of this linear integral operator, because the norm $`\|L^{-1}f\|_{H_0}`$ cannot, in general, be described in terms of the usual norms, such as Sobolev or Hölder norms, for example. Likewise, the fact that the inverse of a unitary operator $`L`$ is $`L^{*}`$ does not give a nontrivial inversion formula, since the main difficulty is to characterize the space $`H_K`$ in terms of the usual norms (such as Sobolev norms, for example) and to check that $`L:H_0\to H_K`$ is a unitary operator. Finally, one can easily check that if the assumption in \[Sa1, p.7\] and \[Sa2, p.56\] holds (this assumption says that $`H_K`$ has the inner product of $`L^2(E,d\mu )`$): $$\int _E\int _EA(p,q)f(p)\overline{g(q)}\,dp\,dq=\int _Ef(p)\overline{g(p)}w(p)\,dp,\quad f,g\in H_K,$$ $`2.16`$ where $`A(p,q)`$ is the nonnegative-definite kernel of the operator $`K^{-1}`$ (see formula (1.1)), and $`w(p)`$ is a continuous weight function, $`0<c_0\le w(p)\le c_1`$, $`p\in E`$, then $$A(p,q)=w(p)\delta (p-q),$$ which is an equation similar to (2.12), with $`w(p)=v^{-1}(p)`$.
# Cerenkov generation of high-frequency confined acoustic phonons in quantum wells \[ ## Abstract We analyze the Cerenkov emission of high-frequency confined acoustic phonons by drifting electrons in a quantum well. We find that the electron drift can cause strong phonon amplification (generation). A general formula for the gain coefficient $`\alpha `$ is obtained as a function of the phonon frequency and the structure parameters. The gain coefficient increases sharply in the short-wave region. For the example of a $`Si/SiGe/Si`$ device it is shown that amplification coefficients of the order of hundreds of $`cm^{-1}`$ can be achieved in the sub-THz frequency range. \] High-frequency lattice vibrations with a high degree of spatial and temporal coherence have been observed for a number of semiconductor materials and heterostructures. These include Si, Ge and GaAs, as well as SiGe and AlGaAs superlattices. These studies provide information on excitation mechanisms for the coherent phonons, their dynamics, electron-phonon interaction, and other important phenomena, including phonon control of the ionic motion. Intense coherent phonon waves can be exploited for various applications: terahertz modulation of light, generation of high-frequency electric oscillations, nondestructive testing of microstructures, etc. Usually, both optical and acoustic high-frequency coherent phonons are excited optically by ultrafast laser pulses. The development of electrical methods of coherent phonon generation is an important problem. An electric current flowing through a semiconductor can produce high-frequency coherent acoustic phonons. Two distinct cases are possible. If the current results from transitions of carriers between bound electron states, coherent phonon generation can occur if there is a population inversion between these states. Hopping vertical transport in superlattices and three-barrier structures provides examples of mechanisms for the establishment of a population inversion and for stimulated generation of terahertz phonons and plasmons. If the current is due to free electron motion in an electric field, phonon amplification (generation) can be achieved via the Cerenkov effect if the electron drift velocity exceeds the velocity of sound. This effect is well known for bulk samples. High drift velocities and large densities of electrons are necessary for practical use of the Cerenkov effect. The advanced technology of semiconductor heterostructures opens up new possibilities for employing this effect for high-frequency phonon generation. Indeed, such phenomena as high electron mobility at large electron density and phonon confinement in a quantum well (QW) can greatly facilitate achieving phonon amplification and generation by electron drift. In this letter, we analyze the generation of high-frequency confined acoustic phonons under electron drift in a QW layer. Consider the symmetric heterostructure shown in Fig. 1 (a), with electrons confined in the layer $`A`$ of thickness $`2d`$. Assuming isotropic elastic properties for both semiconductors $`A`$ and $`B`$, one can introduce the longitudinal, $`V_{LA}`$ and $`V_{LB}`$, and transverse, $`V_{TA}`$ and $`V_{TB}`$, sound velocities. If $`V_{TA}<V_{TB},V_{LA},V_{LB}`$, then localization of acoustic waves near the embedded layer will occur. These localized waves propagate along the layer and decay outside it.
There are two classes of localized waves: the shear-horizontal (SH) waves, with the displacement vector $`\vec{u}=(0,u_y,0)`$, and the shear-vertical (SV) waves, with the displacement vector $`\vec{u}=(u_x,0,u_z)`$. The dispersion relations for each class of waves are represented by a set of branches $`\omega =\omega _\nu (q)`$, with $`\omega `$ and $`q`$ being the wave frequency and wave vector, and $`\nu `$ an integer. For a given $`\nu `$, the localization of the waves depends on $`q`$. Let $`\vec{u}_{\nu ,q}(x,z,t)=\vec{w}_{\nu ,q}(z)e^{i(qx-\omega t)}`$ be the solutions of the elastic equations describing the localized waves. Solutions with different "quantum numbers" $`\{\nu ,q\}`$ are orthogonal. We normalize the solutions by imposing the condition that the $`\{\nu ,q\}`$ wave has an elastic energy equal to $`\hbar \omega _{\nu ,q}`$. The set of such solutions (modes) allows one to quantize the lattice vibrations, introduce confined phonons and analyze the processes of absorption and emission of the phonons. Consider the interaction of a localized mode with electrons, assuming that (a) only the lowest two-dimensional electron subband is populated and (b) the presence of higher subbands can be ignored. Then, setting the area of the layer equal to 1, the electron wavefunctions have the form $`\mathrm{\Psi }_{\vec{k}}(\vec{r},z)=e^{i\vec{k}\cdot \vec{r}}\chi (z)`$, where $`\vec{k}`$ is the two-dimensional electron wavevector. We suppose that the electrons interact with phonons via the deformation potential (DP); thus, the energy of this interaction is $`H=b\,\mathrm{div}\,\vec{u}`$, where $`b`$ is the DP constant. Then the probability of a transition between electron states $`\vec{k}`$ and $`\vec{k}^{\prime }`$ due to the emission or absorption of a confined phonon $`\{\nu ,q\}`$ is $$P^{(\pm )}(k,k^{\prime }|\nu ,q)=\frac{2\pi }{\hbar }\left|M(q)\right|^2\left(N_{\nu ,q}+\frac{1}{2}\pm \frac{1}{2}\right)\delta _{k_x\mp q,k_x^{\prime }}$$ $$\times \delta _{k_y,k_y^{\prime }}\delta \left[E(\vec{k})-E(\vec{k}^{\prime })\mp \hbar \omega _{\nu ,q}\right]F(\vec{k})\left[1-F(\vec{k}^{\prime })\right],$$ (1) where $`M(q)`$ is the matrix element: $$M(q)=b\left(\int _{-\infty }^{\infty }\mathrm{div}(\vec{w}_{\nu ,q})\,\chi ^2(z)\,dz\right)/\kappa ^{el}(q),$$ (2) $`N_{\nu ,q}`$ is the phonon occupation number of the mode, and $`F(\vec{k})=F[k_x,k_y]`$ is the electron distribution function. In Eq. (1) the upper signs correspond to emission and the lower ones to absorption processes. We take into account the effect of electron screening of the DP by introducing the electron permittivity: $`\kappa ^{el}(q)=1+2\pi e^2d\,𝒜(q)\,\mathrm{\Phi }(qd)/\kappa `$. Here, $`𝒜(q)`$ is the polarization operator of the two-dimensional electrons: $$𝒜(q)=2\underset{\vec{k}}{\sum }\frac{F(\vec{k})-F(\vec{k}-\vec{q})}{E(\vec{k})-E(\vec{k}-\vec{q})},$$ (3) $`\kappa `$ is the dielectric constant, and the form factor $`\mathrm{\Phi }(s)`$ is $$\mathrm{\Phi }(s)=\frac{1}{s}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }d\zeta \,d\zeta ^{\prime }\,\chi ^2(\zeta d)\,\chi ^2(\zeta ^{\prime }d)\,e^{-s|\zeta -\zeta ^{\prime }|}.$$ Now, we introduce the kinetic equation for the phonon occupation number of the mode $`\{\nu ,q\}`$: $$\frac{dN_{\nu ,q}}{dt}=\gamma _{\nu ,q}^{(+)}(1+N_{\nu ,q})-\gamma _{\nu ,q}^{(-)}N_{\nu ,q}-\beta _{\nu ,q}N_{\nu ,q},$$ (4) where $`\gamma _{\nu ,q}^{(\pm )}`$ are parameters which determine the evolution of $`N_{\nu ,q}`$ in time due to the interaction with electrons.
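To see what Eq. (4) implies, a minimal forward-Euler integration with made-up rates (all values below are illustrative, not computed from Eqs. (1)-(3)) shows the exponential growth of $`N_{\nu ,q}`$ once the net stimulated gain exceeds the losses:

```python
import numpy as np

# Forward-Euler integration of the rate equation (4), with hypothetical rates
# in units of 1/ns: the net gain gamma_p - gamma_m exceeds the loss beta.
gamma_p, gamma_m, beta = 1.2, 1.0, 0.05
dt, steps = 1.0e-3, 40000                  # integrate to t = 40 ns
N = 0.0
for _ in range(steps):
    N += dt * (gamma_p * (1.0 + N) - gamma_m * N - beta * N)
a = gamma_p - gamma_m - beta               # net rate, 0.15 per ns
print(N, gamma_p / a * np.expm1(a * 40.0)) # Euler vs analytic (gamma_p/a)(e^{at}-1)
```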
Both parameters can be found easily by calculating the total rates of emission and absorption of phonons of a given mode, through the summation of Eq. (1) over all initial and final electron states. The parameter $`\beta _{\nu ,q}`$ describes phonon losses. These can include phonon scattering or phonon absorption due to non-electronic mechanisms, phonon decay due to the anharmonicity of the lattice, etc. In Eq. (4) the terms which correspond to stimulated processes can be represented by $`\left(\gamma _{\nu ,q}^{(+)}-\gamma _{\nu ,q}^{(-)}\right)N_{\nu ,q}\equiv \gamma _{\nu ,q}N_{\nu ,q}`$, with the phonon increment (decrement) equal to $$\gamma _{\nu ,q}=\frac{m^{*}}{\pi \hbar ^3q}\left|M(q)\right|^2\left(𝒥^{(+)}(q)-𝒥^{(-)}(q)\right),$$ (5) $$𝒥^{(\pm )}(q)=\int _{-\infty }^{\infty }dk_y\,F\left[\mathrm{sign}(q)\frac{m^{*}\omega _{\nu ,q}}{\hbar |q|}\pm \frac{1}{2}q,\,k_y\right].$$ (6) Here, $`m^{*}`$ is the effective mass. Depending on the shape of the electron distribution function $`F[k_x,k_y]`$, the value $`\gamma _{\nu ,q}`$ can be either positive or negative. If the phonon increment caused by the electron-phonon interaction is positive and, in addition, exceeds the phonon losses, $`\gamma _{\nu ,q}>\beta _{\nu ,q}`$, the population of the corresponding mode(s) will increase in time, i.e., we obtain the effect of phonon generation. One can introduce the amplification (absorption) coefficient for the confined acoustic modes, which describes the rate of increase in the acoustic wave intensity per unit length. We obtain the amplification coefficient via the phonon increment: $`\alpha _{\nu ,q}=\gamma _{\nu ,q}/V_g`$, where $`V_g=d\omega _{\nu ,q}/dq`$ is the group velocity of the wave. The signs of $`\gamma _{\nu ,q}`$ and $`\alpha _{\nu ,q}`$ are determined by the factor $`(𝒥^{(+)}-𝒥^{(-)})`$, which is to be calculated from the distribution function. This factor can be interpreted as the difference in the populations of the electron states which are involved in the processes of emission and absorption. If this factor is positive, one obtains a kind of "population inversion". We suppose that the electrons drift in an applied electric field along the QW layer. Under the realistic assumption of strong electron-electron scattering, the distribution function can be taken as the shifted Fermi distribution: $$F[k_x,k_y]=F_F\left[k_x-\frac{m^{*}}{\hbar }V_{dr},\,k_y\right],$$ (7) where $`F_F(\vec{k})`$ is the Fermi function, and $`V_{dr}`$ and $`T`$ are the electron drift velocity and temperature, respectively. From Eq. (5), for phonons propagating along the electron flux ($`q>0`$), we immediately find that $`\gamma _{\nu ,q},\alpha _{\nu ,q}>0`$ if the electron drift velocity exceeds the confined-phonon phase velocity: $`V_{dr}>\omega _{\nu ,q}/|q|`$. This criterion is, in fact, the well-known condition of the Cerenkov generation effect. If $`q<0`$ we always have $`\gamma _{\nu ,q},\alpha _{\nu ,q}<0`$. Typically, both velocities, $`V_{dr}`$ and $`\omega _{\nu ,q}/|q|`$, are much less than the average electron velocity. This implies that there is a relatively small disturbance of the Fermi function. Thus, to estimate $`\gamma _{\nu ,q}`$ and $`\alpha _{\nu ,q}`$ we will take into account the shift in $`F[k_x,k_y]`$. When calculating the screening effect \[see Eq. (3)\] we can neglect this shift and use just the Fermi function $`F_F(\vec{k})`$. The latter approximation completes the description of the amplification of the confined phonons by the drifting electrons. Now we shall apply these results to confined phonons of different symmetry.
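The sign change at the Cerenkov threshold can be checked directly from Eqs. (6) and (7). In the sketch below a sound-like dispersion $`\omega =V_sq`$ is assumed, wavevectors are measured in units of $`k_F`$ and velocities in units of $`\hbar k_F/m^{*}`$, so all parameter values are illustrative rather than material-specific:

```python
import numpy as np
from scipy.special import expit

# Sign of the population factor J(+) - J(-) of Eq. (6) for the shifted Fermi
# distribution of Eq. (7), with a sound-like dispersion omega = v_s * q.
kT = 0.05                                     # temperature in units of E_F

def fermi(kx, ky):
    return expit(-((kx**2 + ky**2) - 1.0) / kT)

def population_factor(q, v_drift, v_sound=1.0):
    ky = np.linspace(-3.0, 3.0, 2001)
    k0 = v_sound                              # m* omega/(hbar q) for omega = v_s*q
    plus = fermi(k0 - v_drift + 0.5 * q, ky)  # shifted distribution, Eq. (7)
    minus = fermi(k0 - v_drift - 0.5 * q, ky)
    return np.trapz(plus - minus, ky)

for vd in (0.5, 1.0, 2.5):                    # below, at and above the sound speed
    print(vd, population_factor(q=0.3, v_drift=vd))  # sign flips at V_dr = v_s
```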
It is easy to see that the deformation-potential interaction couples only SV phonons with the electrons. One can show that the functions $`w_x(z)`$ and $`w_z(z)`$ always have different symmetry. We define the symmetric shear-vertical (SSV) modes as those with $`w_x(z)=w_x(-z),w_z(z)=-w_z(-z)`$ and the antisymmetric ones as those with $`w_x(z)=-w_x(-z),w_z(z)=w_z(-z)`$. For a symmetric QW, the electrons are coupled with the SSV phonons. The displacement field distribution for one of the confined SSV modes is presented in Fig. 1 (b). We have performed numerical calculations of the amplification coefficient for different heterostructures. We have found that two effects contribute critically to the amplification: the phonon confinement effect, through the matrix element of Eq. (2), and the nonequilibrium population of electron states, through the factor $`(𝒥^{(+)}-𝒥^{(-)})`$. For III-V and SiGe heterostructures, the acoustic mismatch is typically small and the lowest mode is an antisymmetric SV mode. Consequently, all SSV modes have finite frequency onsets. This determines two important features: a low-frequency cut-off of the amplification and a nonmonotonic dependence of the matrix element $`M(q(\omega ))`$. The population factor of Eq. (6) limits the phonon amplification at high frequencies. As a result, the amplification band for each SSV phonon branch is relatively narrow. Two typical amplification bands are illustrated in Fig. 2. These results are obtained for a $`p`$-doped $`Si/Si_{.5}Ge_{.5}/Si`$ structure. The heavy-hole subband is the lowest one in the strained $`SiGe`$ layer. We set $`d=5`$ nm, the hole density is taken as $`10^{12}cm^{-2}`$ and the drift velocity is $`V_{dr}=2.5V_{TA}`$, with $`V_{TA}=3.4\times 10^5`$ cm/s for the $`SiGe`$ layer. One can see that amplification coefficients of the order of tens to hundreds of $`cm^{-1}`$ can be achieved for confined modes in the sub-THz frequency range. These values of $`\alpha `$ are well above the unavoidable phonon losses due to the effects of anharmonicity and scattering by isotopes. The condition of phonon generation in a single-passage device, $`\alpha L_x\gtrsim 1`$, can be realized for reasonable extensions of the structure, $`L_x`$. At the maximum of the amplification, the phonon wavelength equals $`160\AA `$ and the generated phonon flux is confined to a layer of thickness of about $`200\AA `$. Thus, a short-wavelength and highly collimated beam of coherent phonons can be amplified and generated in perfect QW heterostructures. In conclusion, we have found that the drift of two-dimensional electrons can result in a Cerenkov instability of the phonon subsystem: the phonon modes confined near the QW layer and propagating along the electron flux are amplified. The amplification coefficient for these modes has a sharp maximum in the sub-THz frequency range. The amplification coefficient can exceed hundreds of $`cm^{-1}`$ for modes almost confined within the QW layer. Our results suggest that a simple electrical method for the generation of high-frequency coherent phonons can be developed on the basis of the Cerenkov effect. This work was supported by the U.S. Army Research Office and the Ukrainian State Foundation for Fundamental Researches.
# The Optical/Near-IR Colours of Red Quasars ## 1 INTRODUCTION It was long believed that quasars are blue. The optical/near-IR colours of optically selected QSOs are indeed uniformly very blue (eg. Neugebauer et al. 1987, Francis 1996). It was therefore a surprise when substantial numbers of extremely red quasars were identified in radio-selected samples (eg. Rieke, Lebofsky & Wisniewski 1982, Ledden & O'Dell 1983, Webster et al. 1995, Stickel, Rieke & Kühr 1996). The biggest sample of these objects is that of Webster et al., who were studying a sample of radio-loud quasars with flat radio spectra: the Parkes Half-Jansky Flat-Spectrum survey, a complete sample of 323 sources with fluxes at 2.7 GHz ($`S_{2.7}`$) of greater than 0.5 Jy, and radio spectral indices $`\alpha `$ ($`S_\nu \propto \nu ^\alpha `$) with $`\alpha >-0.5`$, as measured between 2.7 and 5.0 GHz (Drinkwater et al. 1997). While some of these Parkes sources had $`B_J-K_n`$ colours as blue as any optically selected QSOs, most had redder $`B_J-K_n`$ colours, and some were amongst the reddest objects on the sky. Why should the Parkes sources be so red? A variety of theories were proposed: * The $`B_J`$ magnitudes of the Parkes sample were measured many years before the $`K_n`$ magnitudes. Quasars with flat radio spectra are known to be highly variable: this could thus introduce a scatter into the $`B_J-K_n`$ colours, though it is hard to see why it should introduce a systematic reddening. * Elliptical galaxies with redshifts $`z>0.1`$ have very red $`B_J-K_n`$ colours, due to the redshifted 400 nm break. If the host galaxies make a significant contribution to the integrated light from the Parkes sources, this could produce the red colours. Masci, Webster & Francis (1998), however, used spectra to show that this effect is only significant for $`\sim 10`$% of the sample. * The $`B_J`$ magnitudes were derived from COSMOS scans of UK Schmidt plates, and are subject to substantial systematic errors, which could introduce scatter into the $`B_J-K_n`$ colours (O'Brian, Webster & Francis, in preparation), though this too should not introduce a systematic reddening. * Parkes quasars could have the same intrinsic colours as optically selected QSOs, but be reddened by dust somewhere along the line of sight (Webster et al. 1995). * Flat-radio-spectrum quasars are thought to have relativistic jets: if the synchrotron emission from these jets has a very red spectrum and extends into the near-IR, it could account for the red colours (Serjeant & Rawlings 1996). In this paper, we test Webster et al.'s results by obtaining much better photometry of a large sub-set of the Parkes sources. To minimise the effects of variability, all our photometry for a given source was obtained within a period of at most six days. All the data were obtained from photometrically calibrated images, and rather than relying on only two bands ($`B_J`$ and $`K_n`$), we obtained photometry in every band from $`B`$ to $`K_n`$. In principle, multi-colour photometry should enable us to discriminate between the dust and synchrotron models. If quasars have intrinsically blue power-law continua (eg. $`F_\nu \propto \nu ^{-0.3}`$, Francis 1996), reddened by a foreground dust screen with an extinction $`E(B-V)`$ between the $`B`$ and $`V`$ bands (in magnitudes) and an optical depth inversely proportional to wavelength, then the observed continuum slope will be $$F_\lambda \propto e^{-2E(B-V)/\lambda }\lambda ^{-1.7},$$ (1) where $`\lambda `$ is the wavelength in $`\mu `$m. This is plotted in Fig 1.
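The shape produced by Eq. (1) is easiest to see from its logarithmic slope; a minimal sketch, with an assumed illustrative extinction of $`E(B-V)=0.5`$:

```python
import numpy as np

# Logarithmic slope of Eq. (1): dln(F_lambda)/dln(lambda) = -1.7 + 2E(B-V)/lambda,
# with lambda in microns.  E(B-V) = 0.5 is an illustrative value, not a fit.
ebv = 0.5
lam = np.array([0.44, 0.80, 2.2])   # roughly the B, I and K_n wavelengths
slope = -1.7 + 2.0 * ebv / lam
print(slope)   # +0.57, -0.45, -1.25: the slope changes sign across the band
```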
Note the very characteristic ‘n’ shape, as the dust absorption increases exponentially into the blue. If, alternatively, the redness is caused by the addition of some red synchrotron emission component to the underlying blue continuum, the continuum shapes will have a characteristic ‘u’ shape, dominated by the underlying blue flux at short wavelengths but by the new synchrotron component at longer wavelengths (Fig 2). If radio-quiet red quasars exist, they cannot be selected by conventional optical surveys. We show that by combining optical and near-IR data, it should be possible to select any radio-quiet sources with the colours of most of our radio-selected red quasars. This paper describes the observations, presents the data, includes some simple phenomenological analyses of the results, and discusses the colour selection of red quasars in the optical and near-IR. We defer the detailed modelling of the data to another paper: Whiting, Webster & Francis (2000). ## 2 OBSERVATIONS We obtained quasi-simultaneous $`B`$, $`V`$, $`R`$, $`I`$, $`J`$, $`H`$ and $`K_n`$ photometry of a subset of the Parkes sample. Observations were taken during 26 nights in 1997 (Table 1) at Siding Spring Observatory. Optical images were obtained with either the 1 m telescope or the imager on the 2.3 m telescope. Near-IR images were obtained with the CASPIR 256$`\times `$256 InSb array camera (McGregor et al. 1994) on the 2.3 m telescope. 157 Parkes sources were observed in some or all of the bands, as well as a small control sample of 12 optically selected QSOs randomly selected from the Large Bright QSO Survey (LBQS, Morris et al. 1991), an optical QSO survey well matched in size and redshift distribution to the Parkes sample. To minimise the effects of variability, all the observations of an individual source were made within, at most, a six-day period (Table 2). Flat-spectrum quasars typically vary by 10% or less on these timescales, though very occasional greater variations are seen, typically in BL Lac objects (eg. Wagner et al. 1990, Heidt & Wagner 1996). Only data taken in photometric conditions were used: seeing was typically 1-2". Bright objects were typically observed for $`\sim `$ five minutes in each band. Fainter objects were observed for up to two hours in our most sensitive bands ($`R`$, $`I`$ and $`H`$). If they were seen in these bands, we observed them in progressively bluer bands as time allowed. Four sources were not detected in any band: PKS 1535+004, PKS 1601-222, PKS 1649-062 and PKS 2047+098. About five standard stars, spanning a range of colours, were observed each night: in the optical, the Graham E regions (Graham 1982) were used, while in the near-IR, photometric calibration was obtained using the IRIS standard stars, which have magnitudes on the Carter SAAO system (Carter & Meadows 1995). Within individual nights, the scatter in photometric zero points (without using colour-correction terms) was $`<3`$% rms, so all the standards in a given band were simply averaged to give the final calibration. All 98 Parkes sources lying in the R.A. ranges 00:36 - 00:57, 01:53 - 02:40 and 14:50 - 22:52 (B1950) were observed in both the optical and the IR: these should thus form an unbiassed, complete sub-sample of the whole Parkes Half-Jansky sample. The remaining 59 sources were selected for observation mainly on the basis of prevailing weather conditions, and so should also form a reasonably unbiassed sub-sample.
No selection was made against radio galaxies: sources with resolved optical or near-IR images (as classified by the COSMOS plate-measuring machine from UK Schmidt plates, and checked by visual inspection of our images) are listed in Table 2. Where appropriate, they are excluded from the following analysis. Optical images were bias- and overscan-subtracted, and then flat-fielded using twilight sky flats. For the fainter sources, multiple dithered 300- or 600-second exposures were taken: these were combined using inverse-variance weighting. The infrared exposures were made up of multiple dithered 60-second images, each made up of two averaged 30 s exposures in $`J`$, six averaged 10 s exposures in $`H`$ and twelve averaged 5 s exposures in $`K_n`$. These were bias- and dark-subtracted, and then corrected for the non-linearity of the CASPIR detector using a simple quadratic correction term (derived from plots of median counts against exposure time obtained from dome flats). Known bad pixels were replaced by the interpolated flux from neighbouring pixels. Flat fields were obtained by taking exposures of the dome with the lamps on and off, and subtracting one from the other: this removes the contribution from telescope emission, and substantially improves the photometric accuracy attainable. Individual images were sky-subtracted, using a median of the 10 images taken nearest in time. The dithered images were then aligned and combined, using the median to remove residual errors. The radio sources were identified from the radio positions by using astrometry from nearby stars, bootstrapped from positions in the COSMOS/UKST and APM/POSS sky catalogues, maintained on-line at the Anglo-Australian Observatory. Magnitudes were then measured using circular apertures, with the sky level determined from the median flux in an annulus around the object aperture. For unresolved sources, the photometric apertures were set by the seeing: typical aperture radii were $`\sim 5`$". For resolved sources (mostly low-redshift radio galaxies) larger circular apertures were used, centred on the galactic nucleus. These larger aperture radii are listed in the footnotes to Table 2. Standard stars were measured with similar aperture sizes. Quoted errors are the sum (in quadrature) of random errors and an assumed 5% error in the photometric zero points. Random errors were determined by measuring the rms (root-mean-squared) pixel-to-pixel variation in sky regions, and scaling to the aperture size used. This will be accurate for fainter (sky- or read-noise-limited) sources, but will underestimate the random errors for the brightest few sources. The photometric zero-point errors were estimated from the scatter in zero points between different standard-star measurements in an individual night: typical rms scatters are $`<3`$%, so we adopted a conservative value of 5% as our zero-point error. For modelling and plotting purposes, we converted the magnitudes into fluxes. We assumed the fluxes for zero-magnitude objects listed in Table 3. In the optical, our filter set approximates the Johnson & Cousins system, and was calibrated using the Graham standards (also approximating Johnson & Cousins). The zero-magnitude star fluxes for this system were taken from Bessell, Castelli & Plez (1998). In the infrared, our observations used the CASPIR filter set calibrated with the IRIS standards. Zero-magnitude fluxes were calculated by P. McGregor, assuming that Vega is well represented in the near-IR by a black body of temperature 11200 K, with the normalisation $`F_\lambda (555\mathrm{n}\mathrm{m})=3.44\times 10^{-12}\mathrm{W}\mathrm{cm}^{-2}\mu \mathrm{m}^{-1}`$ (Bersanelli, Bouchet & Falomo 1991).
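The conversion itself is simply $`F=F_0\times 10^{-0.4m}`$, with $`F_0`$ the zero points of Table 3; a minimal sketch (the $`K`$-band zero point below is a rough placeholder, not the Table 3 value):

```python
import math

def mag_to_flux(m, m_err, f0):
    """Flux from a magnitude: F = f0 * 10^(-0.4 m); error propagated linearly."""
    f = f0 * 10.0**(-0.4 * m)
    return f, 0.4 * math.log(10.0) * f * m_err

f0_K = 4.0e-13   # W cm^-2 um^-1 for a zero-magnitude star: placeholder value
print(mag_to_flux(14.0, 0.07, f0_K))   # e.g. a K = 14.00 +/- 0.07 mag source
```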
These normalisations agree closely with those quoted for UKIRT near-IR standards (MacKenty et al. 1997). Our observations were made with the $`K_n`$ filter, but were calibrated using the quoted $`K`$ magnitudes of the IRIS standards without applying a colour-correction term, and should thus be normalised to a $`K`$-band zero point. ## 3 Results and Discussion ### 3.1 The Colour Distribution The results are listed in Table 4. Quoted errors are $`1\sigma `$; upper limits are $`3\sigma `$. Our data confirm the basic result of Webster et al.: the Parkes quasars have very different $`B-K`$ colours from optically selected QSOs (Fig 3). The difference is significant: a Kolmogorov-Smirnov test shows that the probability of getting two samples this different from the same parent population is only $`9.1\times 10^{-5}`$. The bluest Parkes sources have colours very similar to those of optically selected QSOs, but the distribution of colours extends much further into the red. ### 3.2 The ‘Main Sequence’ Are the Parkes sources uniformly red everywhere between $`B`$ and $`K_n`$? In Fig 4 we plot a measure of the optical colour ($`B-I`$) against a measure of the near-IR colour ($`J-K_n`$) for the complete sub-sample. Objects whose continuum shape approximates a featureless power-law all the way from $`B`$ to $`K_n`$ should lie close to the solid line in this plot. $`\sim 90`$% of all the Parkes sources do indeed lie close to the power-law line in Fig 4. These sources form a ‘main sequence’ of quasar colours, stretching from blue objects with $`F_\nu \propto \nu ^0`$ to red objects with $`F_\nu \propto \nu ^{-2}`$. Examples of quasars from both ends of this ‘main sequence’ are shown in Fig 5. Note that these quasars can lie on either side of the power-law line: ie. they can have both ‘n’- and ‘u’-shaped continuum spectra. The majority, however, lie above the line, consistent with slightly ‘u’-shaped spectra (redder in the near-IR than in the optical). This supports the synchrotron model for these sources. We defer discussion of this point to the detailed synchrotron modelling of the companion paper, Whiting et al. ### 3.3 Optically Selected QSOs As Fig 4 shows, the optically selected QSOs all have very similar colours, and lie at the blue end of the ‘main sequence’. They lie systematically below the power-law line, however, indicating that they have ‘n’-shaped spectra: ie. they are redder in the optical than in the near-IR. This can be seen in their spectral energy distributions, shown in Fig 6. This spectral curvature matches the predictions of the dust model. Wills, Netzer & Wills (1985), however, suggested that it may be partially due to blended Fe II and Balmer-line emission, though Francis et al. (1991) argued that this curvature is too large to be plausibly explained by emission-line contributions. The position of the optically selected QSOs at the blue end of the ‘main sequence’ would be expected if the cause of the redness in the Parkes quasars is the addition of a red synchrotron component to an underlying blue continuum which is identical to that in radio-quiet QSOs (Whiting et al.). ### 3.4 Galaxies and Extremely Red Objects The colours of the spatially extended sources in the Parkes sample are sharply peaked in the red, as would be expected from moderate-redshift galaxies (Fig 7).
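For reference, the two-sample Kolmogorov-Smirnov comparison used in Section 3.1 above can be reproduced with standard tools; the colour arrays below are random stand-ins for the measured colours, not the actual Table 4 data:

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample KS comparison of B - K colour distributions, as in Section 3.1.
rng = np.random.default_rng(1)
bk_parkes = rng.normal(4.0, 1.5, size=100)   # red, broad (radio-selected) stand-in
bk_lbqs = rng.normal(2.5, 0.5, size=12)      # blue, narrow (control) stand-in
stat, pvalue = ks_2samp(bk_parkes, bk_lbqs)
print(stat, pvalue)   # a small p-value rejects a common parent population
```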
## 3 Results and Discussion ### 3.1 The Colour Distribution The results are listed in Table 4. Quoted errors are $`1\sigma `$; upper limits are $`3\sigma `$. Our data confirm the basic result of Webster et al.: the Parkes quasars have very different $`BK`$ colours from optically selected QSOs (Fig 3). The difference is significant: a Kolmogorov-Smirnov test shows that the probability of drawing two samples this different from the same parent population is only $`9.1\times 10^{-5}`$. The bluest Parkes sources have colours very similar to those of optically selected QSOs, but the distribution of colours extends much further into the red.
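The two-sample Kolmogorov-Smirnov comparison quoted above can be sketched as follows; the colour arrays are random placeholders standing in for the measured $`BK`$ distributions, not real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bk_parkes = rng.normal(4.0, 1.5, 120)   # hypothetical Parkes B-K colours
bk_optical = rng.normal(2.5, 0.6, 80)   # hypothetical optically selected QSOs

# Probability that two samples this different are drawn from one parent.
statistic, p_value = stats.ks_2samp(bk_parkes, bk_optical)
print(f"KS statistic = {statistic:.3f}, probability = {p_value:.2e}")
```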
### 3.2 The ‘Main Sequence’ Are the Parkes sources uniformly red everywhere between $`B`$ and $`K_n`$? In Fig 4 we plot a measure of the optical colour ($`BI`$) against a measure of the near-IR colour ($`JK_n`$) for the complete sub-sample. Objects whose continuum shape approximates a featureless power-law all the way from $`B`$ to $`K_n`$ should lie close to the solid line in this plot. $`\sim 90`$% of all the Parkes sources do indeed lie close to the power-law line in Fig 4. These sources form a ‘main sequence’ of quasar colours, stretching from blue objects with $`F_\nu \propto \nu ^0`$ to red objects with $`F_\nu \propto \nu ^{-2}`$. Examples of quasars from both ends of this ‘main sequence’ are shown in Fig 5. Note that these quasars can lie on either side of the power-law line: ie. they can have both ‘n’- and ‘u’-shaped continuum spectra. The majority, however, lie above the line, consistent with slightly ‘u’-shaped spectra (redder in the near-IR than in the optical). This supports the synchrotron model for these sources. We defer discussion of this point to the detailed synchrotron modelling in the companion paper (Whiting et al.). ### 3.3 Optically Selected QSOs As Fig 4 shows, the optically selected QSOs all have very similar colours, and lie at the blue end of the ‘main sequence’. They lie systematically below the power-law line, however, indicating that they have ‘n’-shaped spectra: ie. they are redder in the optical than in the near-IR. This can be seen in their spectral energy distributions, shown in Fig 6. This spectral curvature matches the predictions of the dust model. Wills, Netzer & Wills (1985), however, suggested that it may be partially due to blended Fe II and Balmer-line emission, though Francis et al. (1991) argued that this curvature is too large to be plausibly explained by emission-line contributions. The position of the optically selected QSOs at the blue end of the ‘main sequence’ would be expected if the cause of redness in the Parkes quasars is the addition of a red synchrotron component to an underlying blue continuum which is identical to that in radio-quiet QSOs (Whiting et al.). ### 3.4 Galaxies and Extremely Red Objects The colours of the spatially extended sources in the Parkes sample are sharply peaked in the red, as would be expected for moderate redshift galaxies (Fig 7). They therefore lie far below the ‘main sequence’ in Fig 4, the one exception being PKS 1514$``$241, a galaxy at z=0.049 with a BL Lac nucleus which is presumably diluting the galaxy colours. Higher redshift galaxies lie further to the right on this plot, as would be expected due to the 400 nm break reducing the $`B`$-band flux. What are the other red, highly ‘n’-shaped objects lying far below the ‘main sequence’ which are not spatially resolved? A few are high redshift QSOs, in which the $`B`$-band flux has been reduced by Ly$`\alpha `$ forest absorption (Fig 8). The reddest objects, however, with $`BI>3`$ (Fig 9), do not lie at high redshifts. We have obtained spectra of four of these very red objects (Francis et al. 2000, in preparation). Three show hybrid spectra: they look like galaxies at short wavelengths, but at longer wavelengths a red power-law continuum component is seen, along with broad emission-lines. The ratios of H$`\alpha `$ to H$`\beta `$ are around 20: far above those seen in normal AGN ($`\sim 5`$) and evidence of substantial reddening (Fig 10). Note that these hybrid objects all have radio spectral indices near the steep spectrum cut-off of our sample, as do the galaxies in the sample. The reddest objects are thus a heterogeneous group: some are high redshift quasars, some are galaxies, and some are heavily dust-reddened quasars. ### 3.5 Unidentified Objects Four Parkes sources were not detected in any band. After correction for galactic foreground absorption (Schlegel et al. 1998), our non-detections impose 3$`\sigma `$ upper limits of $`H>19.61`$ for PKS 1532$`+`$004, $`H>19.76`$ & $`K>19.29`$ for PKS 1601$``$222, $`H>17.22`$ and $`K>16.61`$ for PKS 1649$``$062 (which is subject to substantial galactic reddening) and $`H>19.82`$ for PKS 2047$`+`$098. If unified schemes for radio-loud AGN are correct, the host galaxies of our flat-radio-spectrum sources should be very similar to those of steep-radio-spectrum radio galaxies. This enables us to place a lower limit on the redshift of these unidentified sources: even if their AGN light is completely obscured, we should still see the host galaxy, which should lie on the $`K`$-band Hubble diagram for radio galaxies (eg. McCarthy 1993). To be undetected at our magnitude limits, therefore, all these sources must lie above redshift 1, and apart from PKS 1649$``$062, probably lie above redshift 3. ### 3.6 Anomalous Objects Three sources have colours that do not fit any of these categories (Fig 11). We discuss these in turn. PKS 1648$`+`$015 shows a smooth optical power-law rising into the red until, at around $`1.4\mu `$m, the flux abruptly decreases. As all the IR data points were taken within minutes of each other in good weather conditions, we believe that this near-IR turn-over is real. We obtained a somewhat noisy optical spectrum of this source (Drinkwater et al.) which shows a featureless, very red power-law, in excellent agreement with the photometry. We cannot explain this source. PKS 1732$`+`$094 is blue longwards of around $`0.6\mu `$m, but drops dramatically at shorter wavelengths. Our spectrum of this source (Drinkwater et al.) is too poor to be of any use. We hypothesise that this may be a very high redshift ($`z>4`$) quasar, and that the drop in the blue is due to Ly$`\alpha `$ absorption. PKS 2002$``$185 has optical colours typical of the bluest Parkes sources, but in the near-IR is bluer still: far bluer than any other source at these wavelengths. An optical spectrum, covering a very restricted wavelength range (Wilkes et al. 1983), shows only a single broad emission-line: on the assumption that this is Mg II (279.8 nm), a redshift of 0.859 is determined. ## 4 Multicolour Selection of Red Quasars Could there be a population of radio-quiet QSOs with the same colours as our radio-loud red quasars? Webster et al. showed that it is virtually impossible to find such QSOs in any sample with a blue optical magnitude limit. In this section we ask whether red QSOs could be identified by colour selection in the red optical and near-IR. In Fig 12, we compare the optical and near-IR colours of the Parkes sources against the colours of high galactic latitude point sources drawn from the Two-Micron All Sky Survey (2MASS, $`K<15`$) and the ESO Imaging Survey (EIS, $`K<22`$). The ‘main sequence’ sources, both red and blue, are clearly separated from the foreground objects. This separation is due to their power-law spectral energy distributions: as compared to the convex spectral energy distributions of stars and galaxies, the quasars have excess flux in $`B`$ and/or $`K`$. This selection technique is similar to the ‘KX’ technique proposed by Warren, Hewett & Foltz (1999). Unfortunately, the very red sources lying below the ‘main sequence’ have colours within the stellar locus and will be hard to find. Can red quasars be identified purely on the basis of their near-IR colours? In Fig 13, we show that most of the Parkes quasars lie in regions of the near-IR colour-colour plot with substantial stellar contamination, but that the reddest move away from the stellar locus, and could be detectable in the IR alone. Fig 14 shows that purely optical colour selection is not likely to be effective. ## 5 Conclusions The Parkes quasars can, we conclude, be crudely divided into three populations: 1. The ‘Main Sequence’: $`\sim 90`$% of the Parkes sources have approximately power-law spectral energy distributions, with spectral indices $`\alpha `$ ($`F_\nu \propto \nu ^\alpha `$) in the range $`0>\alpha >-2`$. The nature of these sources is discussed by Whiting et al. 2. Very Red Sources: These sources, which comprise $`\sim 10`$% of the Parkes sample, are characterised by much redder continuum slopes in the optical than in the IR. They tend to have relatively steep radio spectra. Half these sources are radio galaxies, while most of the remainder are highly dust-reddened quasars. The undetected sources are probably high redshift members of this class. 3. Oddballs: Roughly 2% of the Parkes sample defy this categorisation. The ‘main sequence’ sources, both red and blue, should be easily detectable in combined near-IR and optical QSO surveys, due to their excess flux in the $`K`$ and/or $`B`$ bands. ## Acknowledgements We wish to thank Mike Bessell and Peter McGregor for their help with the details of the photometry, and Tori Ibbetson for her assistance with the observations. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation, and of catalogues from the ESO Imaging Survey, obtained from observations with the ESO New Technology Telescope at the La Silla observatory under program-ID Nos 59.A-9005(A) and 60.A-9005(A). ## References Bersanelli, M., Bouchet, P., & Falomo, R. 1991, A&A, 252, 854
Bessell, M. S., Castelli, F., & Plez, B. 1998, A&A, 333, 231 Carter, B. S., & Meadows, V. S. 1995, MNRAS, 276, 734 Drinkwater, M. J., Webster, R. L., Francis, P. J., Condon, J. J., Ellison, S. L., Jauncey, D. L., Lovell, J., Peterson, B. A., & Savage, A. 1997, MNRAS, 284, 85 Francis, P. J., Hewett, P. C., Foltz, C. B., & Chaffee, F. H. 1991, ApJ, 373, 465 Francis, P. J. 1996, Publ. Astron. Soc. Australia, 13, 212 Graham, J. A. 1982, PASP, 94, 265 Heidt, J., & Wagner, S. J. 1996, A&A, 305, 42 Ledden, S. E., & O’Dell, S. L. 1983, ApJ, 270, 434 MacKenty, J. W., et al. 1997, “NICMOS Instrument Handbook”, Version 2.0 (Baltimore: STScI) Masci, F. J., Webster, R. L., & Francis, P. J. 1998, MNRAS, 301, 975 McCarthy, P. J. 1993, ARA&A, 31, 639 McGregor, P., Hart, J., Downing, M., Hoadley, D., & Bloxham, G. 1994, in Infrared Astronomy with Arrays: The Next Generation, ed. I. S. McLean (Dordrecht: Kluwer), p. 299 Morris, S. L., Weymann, R. J., Anderson, S. F., Hewett, P. C., Foltz, C. B., Chaffee, F. H., & Francis, P. J. 1991, AJ, 102, 1627 Neugebauer, G., Green, R. F., Matthews, K., Schmidt, M., Soifer, B. T., & Bennet, J. 1987, ApJS, 63, 615 Rieke, G. H., Lebofsky, M. J., & Wisniewski, W. A. 1982, ApJ, 263, 73 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525 Serjeant, S., & Rawlings, S. 1997, Nature, 379, 304 Stickel, M., Rieke, G. H., Kühr, H., & Rieke, M. J. 1996, ApJ, 468, 556 Wagner, S. J., Sanchez-Pons, F., Quirrenbach, A., & Witzel, A. 1990, A&A, 235, L1 Warren, S. J., Hewett, P. C., & Foltz, C. B. 1999, MNRAS, in press (astro-ph/9911064) Webster, R. L., Francis, P. J., Peterson, B. A., Drinkwater, M. J., & Masci, F. J. 1995, Nature, 375, 469 Whiting, M. T., Webster, R. L., & Francis, P. J. 2000, MNRAS, submitted Wilkes, B. J., Wright, A. E., Jauncey, D. L., & Peterson, B. A. 1983, PASA, 5, 2 Wills, B., Netzer, H., & Wills, D. 1985, ApJ, 288, 94
no-problem/9911/astro-ph9911203.html
ar5iv
text
# Molecular Clouds in Cooling Flow Clusters of Galaxies ## 1. Introduction Cooling flows in clusters of galaxies deposit large quantities of cool gas around the central galaxy, which is still growing. The final evolution of the cool gas is not clear. It may just accumulate as cool dense clouds. The metallicity of a cluster seems to be correlated with the presence of a cooling flow. In this context molecules such as $`CO`$ can subsequently be formed in the gaseous medium. O’Dea, Baum, Maloney et al. (1994) searched for molecular gas by looking for $`CO`$ emission lines in a heterogeneous sample of cooling flow clusters. They came to the conclusion that, in order to have escaped detection, the gas has to be very cold, close to the temperature of the Cosmic Background Radiation. The aim of this contribution is to discuss the minimum temperature achievable by sub-clouds (resulting from the fragmentation of bigger clouds) in cooling flow regions by improving the analysis done by O’Dea, Baum, Maloney et al. (1994), in particular by considering clouds made of $`H_2`$, $`HD`$ and $`CO`$ molecules and by computing cooling functions which are more appropriate for temperatures below 20 K. ## 2. Thermal equilibrium In the standard Big Bang model primordial chemistry took place around the epoch of recombination. At this stage the chemical species were essentially hydrogen, deuterium, helium and lithium. Then, with the adiabatic cooling of the Universe due to the expansion, different routes led to molecular formation (Puy et al. 1993). Molecules play an important role in the cooling of the denser clouds via the excitation of rotational levels. In order to calculate the molecular cooling analytically, Puy, Grenacher and Jetzer (1999) consider only the transition between the ground state and the first rotational level for $`H_2`$, $`HD`$ and $`CO`$. As expected, it turns out that $`CO`$ is the main coolant at low temperatures below 20 K. The clouds are embedded in the hot intracluster gas, whose emission is dominated by thermal bremsstrahlung. We introduce an attenuating factor $`\tau `$ characterizing the column density surrounding the sub-clouds. The attenuated bremsstrahlung flux coming from the intracluster gas heats the clouds located in the cooling flow at a distance $`r`$ from the cluster center. Thermal balance between heating and cooling defines an equilibrium temperature of the sub-clouds at a distance $`r`$ inside the cooling region: $`r<r_{cool}`$ (where $`r_{cool}`$ is the cooling radius). As an example we choose PKS 0745-191, which is embedded in one of the largest known cooling flows. We adopt the following column densities for a typical small cloud: $`N_{CO}=10^{14}`$ cm⁻², $`N_{H_2}=2\times 10^{18}`$ cm⁻² with $`n_{H_2}=10^6`$ cm⁻³ (density of $`H_2`$). These values correspond to a cloud size of $`L\sim 10^{-6}`$ pc; moreover, we take the abundance $`\eta _{HD}=N_{HD}/N_{H_2}=7\times 10^{-5}`$. The first column of the following table gives the minimum temperature of the sub-clouds, $`T_{clump}`$, inside the cooling flow region for different values of $`\tau `$ with $`\eta _{CO}=5\times 10^{-5}`$, whereas the second column gives $`T_{clump}`$ for different values of $`\eta _{CO}`$ with $`\tau =2.5`$. The hypothesis of very cold molecular gas in cooling flows seems reasonable, given also the fact that with our approximations we get upper limits for the cloud temperatures.
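As a quick consistency check of the adopted cloud parameters, the clump size follows from the ratio of column density to number density, $`L=N_{H_2}/n_{H_2}`$; a minimal sketch of the arithmetic:

```python
CM_PER_PC = 3.086e18   # cm per parsec

N_H2 = 2e18   # column density, cm^-2
n_H2 = 1e6    # number density, cm^-3

L_cm = N_H2 / n_H2        # cloud size in cm
print(L_cm / CM_PER_PC)   # ~6.5e-7 pc, i.e. of order 10^-6 pc as quoted
```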
### Acknowledgments. We are grateful to Monique Signore for valuable discussions. This work has been supported by the Dr. Tomalla Foundation and by the Swiss National Science Foundation. ## References O’Dea C., Baum S., Maloney P., et al. 1994, ApJ, 422, 467 Puy D., Alecian G., Lebourlot J., et al. 1993, A&A, 267, 337 Puy D., Grenacher L., Jetzer Ph. 1999, A&A, 345, 723
no-problem/9911/hep-ph9911358.html
ar5iv
text
# A DOUBLE PARTON SCATTERING BACKGROUND TO HIGGS BOSON PRODUCTION AT THE LHC ## I Introduction The problem of identifying the most convenient signatures for detecting Higgs boson production at the LHC has been widely discussed in the literature. Most results are summarized in ref., where, in addition, various different backgrounds to the process are estimated. The $`b\overline{b}`$ channel is the favoured Higgs decay mode when the Higgs mass is below the $`W^+W^-`$ threshold. Confidence in the capability of identifying $`b`$ quark jets efficiently has therefore directed attention towards the detection of $`b\overline{b}`$ pairs to observe Higgs boson production at the LHC, if the Higgs mass is in the range $`80GeV<M_H<150GeV`$. To reduce the huge QCD background to $`b\overline{b}`$ pair production, the $`b\overline{b}`$ pair is detected in association with an isolated lepton from the decay of a $`W`$ boson. The process of interest for detecting Higgs boson production through the $`b\overline{b}`$ decay channel is therefore: $`p+p\to WH+X`$, with $`W\to l\nu _l`$, $`H\to b\overline{b}`$, where $`l=e,\mu `$. The purpose of the present note is to point out that the same $`l,b\overline{b}`$ final state can be produced also by a different mechanism, namely by a double parton collision process, which therefore represents a further background to be taken into account in addition to the other background processes previously considered. In fact, as a result of the present analysis, we find that double parton scatterings may represent a rather sizeable source of background. The possibility of hadronic interactions with double parton scattering collisions was foreseen on rather general grounds long ago. The process has been recently observed by CDF: in a hadronic interaction with a double parton scattering, two different pairs of partons interact independently at different points in transverse space, in the same inelastic hadronic event. The process is induced by unitarity and, as a consequence, it has been considered mostly in the regime where the partonic cross sections become comparable to the total inelastic hadronic cross section, namely large c.m. energy in the hadronic interaction and relatively low transverse momenta of the produced partons. Those are in fact also the conditions where the process was observed. In such a kinematical regime one does not expect strong initial state correlations in the fractional momenta of the partons undergoing the double collision process and, with this simplifying hypothesis, the double parton scattering cross section is proportional to the product of two single scattering cross sections. All the new non-perturbative information on the structure of the colliding hadrons provided by the process, in the specific case the information on the two-body parton correlation in transverse space, reduces to a scale factor with dimensions of a cross section (the ’effective cross section’). In the case of two identical parton interactions, as for producing four large $`p_t`$ jets, the double parton scattering cross section therefore assumes the simplest factorized form $$\sigma _D(Jets)=\frac{1}{2}\frac{\sigma _J^2}{\sigma _{eff}}$$ (1) $`\sigma _J`$ is the usually considered single parton scattering cross section: $$\sigma _J=\sum _{ff^{\prime }}\int _{p_t>p_t^{min}}dx\,dx^{\prime }\,d^2p_t\,G_f(x)G_{f^{\prime }}(x^{\prime })\frac{d\widehat{\sigma }_{ff^{\prime }}}{d^2p_t}$$ (2) where $`G_f(x)`$ is the parton distribution as a function of the momentum fraction $`x`$ and at the scale $`p_t`$.
The different species of interacting partons are indicated with the label $`f`$, and $`d\widehat{\sigma }_{ff^{\prime }}/d^2p_t`$ is the elementary partonic cross section. $`\sigma _{eff}`$ is the effective cross section and it enters as a simple proportionality factor in the integrated inclusive cross section for a double parton scattering $`\sigma _D`$. The value of $`\sigma _{eff}`$ represents therefore the whole output of the measure of the double parton scattering process in this simplest scheme, which on the other hand has been shown to be in agreement with the available experimental evidence. In the case of two distinguishable parton scatterings $`A`$ and $`B`$ the factor $`1/2`$ in Eq.1 is missing and one correspondingly writes $$\sigma _D(AB)=\frac{\sigma _A\sigma _B}{\sigma _{eff}}$$ (3) The effective cross section is a geometrical property of the hadronic interaction, related to the overlap of the matter distribution of the two interacting hadrons in transverse space. The expectation is that it is independent of the c.m. energy of the hadronic collision and of the cutoff $`p_t^{cut}`$. Moreover, although one may expect that different kinds of partons may be distributed in different ways in the transverse space, one does not expect a strong dependence of $`\sigma _{eff}`$ on the different possible partonic reactions. The simplest possibility to consider is therefore the one where the scale factors in Eq.1 and in Eq.3 are the same. In the intermediate Higgs mass range the partonic center of mass energy needed for producing the Higgs boson is relatively low, as compared to the overall energy involved in the hadronic collision at the LHC, and one may therefore expect that the factorization in Eq.3 may still be a good approximation when producing partonic states with values of the invariant mass of the order of the mass of the Higgs. We will therefore estimate the double parton scattering background to the process $`p+p\to WH+X`$, with $`W\to l\nu _l`$ and $`H\to b\overline{b}`$, by using the simplest expression in Eq.3. We will also take the attitude of considering the value of $`\sigma _{eff}`$ as a universal property of all double parton interactions, and we will use the actual value which was measured by CDF. In this respect one has to point out that in the experimental analysis of CDF the measurement of the double parton scattering cross section was performed by removing all triple parton collision events from the sample of inelastic events with double parton scatterings. The double parton scattering cross section measured in the experimental analysis does not therefore correspond to the inclusive cross section written here above and usually considered in the literature, which allows the simple inverse proportionality relation between $`\sigma _D`$ and $`\sigma _{eff}`$. The double parton scattering cross section measured by the CDF experiment is in fact smaller than the double parton scattering cross section discussed here. As a consequence the resulting value of the effective cross section, $`\sigma _{eff}|_{CDF}`$, is somewhat larger than the quantity suited to our actual purposes. By using $`\sigma _{eff}|_{CDF}`$ in the present note as a scale factor for the double parton scattering process, we will therefore underestimate the size of the background due to double parton scatterings to Higgs boson production.
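A quick numerical check of Eq. 3, using the input values quoted in the next section ($`\sigma (W)\times BR\simeq 40`$ nb, $`\sigma (b\overline{b})\simeq 5\times 10^2\mu b`$, $`\sigma _{eff}=14.5`$ mb); the sketch below only does the unit bookkeeping:

```python
# All cross sections converted to nb: 1 mb = 1e6 nb, 1 microbarn = 1e3 nb.
sigma_W_lnu = 40.0            # sigma(W) x BR(W -> l nu), nb
sigma_bb = 5e2 * 1e3          # sigma(bb), nb
sigma_eff = 14.5 * 1e6        # effective cross section, nb

sigma_D = sigma_W_lnu * sigma_bb / sigma_eff   # Eq. (3)
print(sigma_D)   # ~1.4 nb, as stated in the text
```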
## II Double Parton Scattering Background Process A background to the process $`p+p\to WH+X`$, with $`W\to l\nu _l`$, $`H\to b\overline{b}`$, is represented by the double parton scattering interaction where the intermediate vector boson $`W`$ and the $`b\overline{b}`$ pair are produced in two independent parton interactions. The corresponding integrated rate is easily evaluated by combining the expected cross sections for $`W`$ and $`b\overline{b}`$ production at LHC energy with $`\sigma _{eff}`$ as in Eq.3. If one uses $`\sigma (W)\times BR(W\to l\nu _l)\simeq 40nb`$, $`\sigma (b\overline{b})\simeq 5\times 10^2\mu b`$ and as a value for the scale factor $$\sigma _{eff}=14.5\,mb$$ (4) (the observed value is $`\sigma _{eff}|_{CDF}=14.5\pm 1.7_{-2.3}^{+1.7}mb`$) one obtains that the cross section for a double parton collision producing a $`W\to l\nu _l`$ and a $`b\overline{b}`$ pair is of the order of $`1.4nb`$. The Higgs production cross section, $`p+p\to WH+X`$, with $`W\to l\nu _l`$, $`H\to b\overline{b}`$, has instead been estimated to be of order $`1pb`$. By integrating the double parton scattering cross section over the whole possible configurations of the $`b\overline{b}`$ pair one then obtains a cross section three orders of magnitude larger than the expected signal from Higgs decay. Obviously, rather than the integrated cross sections, one is interested in comparing the two differential cross sections as a function of the invariant mass of the $`b\overline{b}`$ pair. In the calculations of the background and signal we used, for the matrix elements, the packages MadGraph and HELAS, and the integration was performed by VEGAS with the parton distributions MRS99. The cross section to produce $`WH`$, followed by $`W\to l\nu _l`$, $`H\to b\overline{b}`$, is plotted in fig.1 for three different possible values of the Higgs mass, and it is compared with the double parton scattering cross section $`d\sigma _D/dM_{b\overline{b}}`$ as a function of the invariant mass of the $`b\overline{b}`$ system. The estimated signal of Higgs boson production in the invariant mass of the $`b\overline{b}`$ pair corresponds to the three possible values for the mass of the Higgs boson, $`80`$, $`100`$ and $`120GeV`$. The curves refer to the background double parton scattering process. The dashed line is obtained by estimating the cross section for $`b\overline{b}`$ production at the lowest order in $`\alpha _S`$, using the transverse mass of the $`b`$ quark as the scale in $`\alpha _S`$. The continuous line is a rescaling of the lowest order result by a factor $`1.8`$; it corresponds to the expectations of the order $`\alpha _S^3`$ estimates of the $`b\overline{b}`$ cross section, in accordance with ref.. The estimated background from double parton scatterings is therefore a factor of four or five larger than the expected signal. In fig.2 we compare the signal and the background after applying all the typical cuts considered to select the Higgs signal:
- for the lepton: $`p_t^l>20`$ GeV, $`|\eta ^l|<2.5`$ and isolation from the $`b`$’s, $`\mathrm{\Delta }R_{l,b}>0.7`$
- for the two $`b`$ partons: $`p_t^b>15`$ GeV, $`|\eta ^b|<2`$ and $`\mathrm{\Delta }R_{b,\overline{b}}>0.7`$
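A minimal sketch implementing these cuts, assuming the standard definition $`\mathrm{\Delta }R=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}`$ (an assumption, since the definition is not spelled out above); particles are represented by simple $`(p_t,\eta ,\varphi )`$ tuples:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Separation in the eta-phi plane, with delta phi wrapped to [0, pi].
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def passes_cuts(lep, b1, b2):
    # lep, b1, b2 are (pt [GeV], eta, phi) tuples.
    pt_l, eta_l, phi_l = lep
    ok_lep = pt_l > 20.0 and abs(eta_l) < 2.5
    ok_b = all(pt > 15.0 and abs(eta) < 2.0 for pt, eta, _ in (b1, b2))
    ok_iso = all(delta_r(eta_l, phi_l, eta, phi) > 0.7 for _, eta, phi in (b1, b2))
    ok_bb = delta_r(b1[1], b1[2], b2[1], b2[2]) > 0.7
    return ok_lep and ok_b and ok_iso and ok_bb

print(passes_cuts((35.0, 0.4, 0.0), (40.0, -0.5, 1.2), (25.0, 1.1, -2.0)))
```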
As in the previous figure, the Higgs signal in the $`b\overline{b}`$ invariant mass corresponds to three possible values for the mass of the Higgs boson, $`80`$, $`100`$ and $`120GeV`$. The dotted line is the single parton scattering background, where the $`Wb\overline{b}`$ state is produced directly in a single partonic interaction. The dashed line is the expected background originating from the double parton scattering process, evaluated by estimating the $`b\overline{b}`$ production cross section at $`𝒪(\alpha _S^3)`$. The continuous line is the total expected background. Fig.2 summarizes our result: even after applying the more realistic cuts just described, the double parton scattering process remains a rather substantial component of the background. The difference with respect to the conventional estimate of the background is immediately evident when comparing the total background estimate (continuous curve) with the single scattering background (dotted curve). ## III Conclusions In the present note we have discussed the background induced by double parton collisions to the detection of the Higgs in the $`Wb\overline{b}`$ channel. The large rate of production of $`b\overline{b}`$ pairs expected at the LHC (the corresponding cross section is of order $`500\mu b`$) gives rise to a relatively large probability of production of a $`b\overline{b}`$ pair in the process underlying the $`W`$ production. As a consequence a very promising channel to detect the production of the Higgs boson in the intermediate Higgs mass range, namely the final state with a $`b\overline{b}`$ pair and with an isolated lepton, is affected by a sizeable background due to double parton collision processes. Although the double parton collision cross section is a decreasing function of the invariant mass of the $`b\overline{b}`$ pair, the relatively large value of the invariant mass required for the $`b\overline{b}`$ pair to be assigned to the Higgs decay is not large enough, at LHC energies, to allow one to neglect the double parton scattering background. It is rather obvious that the considerations above are not limited to the $`Wb\overline{b}`$ channel. Similar arguments can be repeated in several other cases. In addition to the obvious case of the $`Zb\overline{b}`$ channel, a few examples where we expect that multiple parton scattering processes might give a non-secondary effect are the following:
* $`W+\mathrm{jets}`$, $`Wb+\mathrm{jets}`$ and $`Wb\overline{b}+\mathrm{jets}`$,
* $`t\overline{t}\to llb\overline{b}`$,
* $`t\overline{b}\to b\overline{b}l\nu `$,
* $`b\overline{b}+\mathrm{jets}`$,
* production of many jets when $`p_t^{min}\sim 25GeV`$
Although the signal-to-background ratio after the cuts is still favorable, the case discussed here shows that an evaluation of the background which does not take into account the contribution of double parton collision processes may be rather unrealistic at the LHC, and that, in some cases, the cuts on the final state considered so far are likely to need reconsideration. ## Acknowledgments We thank Stefano Moretti for many useful discussions and Giuseppe Ridolfi for the code to evaluate the $`b\overline{b}`$ cross section at $`𝒪(\alpha _S^3)`$. This work was partially supported by the Italian Ministry of University and of Scientific and Technological Research by means of the Fondi per la Ricerca scientifica - Università di Trieste.
no-problem/9911/chao-dyn9911006.html
ar5iv
text
# Two-Particle Dispersion in Model Velocity Fields Since the seminal work of Sir L. F. Richardson on particles’ dispersion in atmospheric turbulence a large amount of work has been done in order to understand the fundamentals of this process (see and for reviews). Based on empirical evidence, Richardson found that the mean square distance $`R^2(t)=\langle r^2(t)\rangle `$ between two particles dispersed by a turbulent flow grows proportionally to $`t^3`$. The works of Obukhov and Batchelor have shown that Richardson’s law is closely related to the Kolmogorov-Obukhov scaling of the relative velocities in turbulent flows. Scaling arguments based on dimensional analysis then allow one to understand the overall type of the behavior of $`R^2(t)`$, but a full theoretical picture of the dispersion process is still lacking. The theoretical description of dispersion processes typically starts from models in which one fixes the spatial statistics of the well-developed turbulent flow (Kolmogorov-Obukhov energy spectrum), and discusses different types of temporal behavior for the flows. Three situations have been considered in detail so far. Here, the white-in-time flows represent a toy model which allows for deep analytical insights \[4-6\]. In connection with ”real” turbulence two other cases are widely discussed. One of them supposes that the temporal decorrelation of the particles’ relative motion happens because the pair as a whole is moving with a mean velocity relative to an essentially frozen flow structure (as proposed by the Taylor hypothesis). Another premise connects this decorrelation with the death and birth of flow structures (”eddies”), whose lifetime is governed by Kolmogorov’s universality assumption. Both these situations are extremely awkward for theoretical analysis. In the present letter we address the following question: What are the generic types of two-particle dispersion behavior in a velocity field whose statistical spatial structure is fixed (and similar to that of a turbulent flow), if its temporal correlation properties change? This question will be discussed in the framework of numerical simulations and scaling concepts. As we proceed to show, two generic types of behavior arise. Thus, the white-in-time flow and the Taylor-type situation belong to the classes of diffusive and ballistic behavior, respectively. Let us consider modes of particles’ separation in a velocity field whose two-time correlation function of relative velocities behaves as $`\langle 𝐯(𝐫,t_1)𝐯(𝐫,t_2)\rangle =v^2(r)g\left[(t_2-t_1)/\tau (r)\right]`$, where $`\tau (r)`$ is the distance-dependent correlation time. The $`g`$-function is defined so that $`g(0)=1`$ and $`\int _0^{\infty }g(s)ds=1`$. The mean square relative velocity and the correlation time scale as $$v^2(r)\simeq v_0^2\left(\frac{r}{r_0}\right)^\alpha $$ (1) and $$\tau (r)\simeq \tau _0\left(\frac{r}{r_0}\right)^\beta .$$ (2) One can visualize such a flow as being built up from several structures (plane waves, eddies, etc., see Ref.), each of which is characterized by its own spatial scale and its scale-dependent correlation time. In well-developed turbulent flows one has $`v^2(r)\simeq ϵ^{2/3}r^{2/3}`$, where $`ϵ`$ is the energy dissipation rate, so that $`\alpha =2/3`$. The white-in-time flow corresponds to $`\beta =0`$. Kolmogorov scaling implies $`\beta =2/3`$ and Taylor’s frozen-flow assumption leads to $`\beta =1`$.
In our simulations we model the two-particle relative motion using the quasi-Lagrangian approach of Ref.. Parallel to Ref. we confine ourselves to a two-dimensional case, which is also of high experimental interest. The relative velocity $`𝐯(𝐫,t)=\nabla \times \eta (𝐫,t)`$ is given by the quasi-Lagrangian stream function $`\eta `$. This function is built up from the contributions of radial octaves: $$\eta (𝐫,t)=\underset{i=1}{\overset{N}{\sum }}k_i^{-(1+\alpha /2)}\eta _i(k_i𝐫,t),$$ (3) where $`k_i=2^i`$, and the flow function for a one-octave contribution in polar coordinates $`(r,\theta )`$ is given by $`\eta _i(k_i𝐫,t)=F(k_ir)\left(A_i(t)+B_i(t)\mathrm{cos}(2\theta +\varphi _i)\right)`$. The radial part $`F(x)`$ obeys $`F(x)=x^2(1-x)`$ for $`0\le x\le 1`$ and $`F(x)=0`$ otherwise, and $`\varphi _i`$ are quenched random phases. Moreover, $`A_i(t)`$ and $`B_i(t)`$ are independent Gaussian random processes with dispersions $`\langle A^2\rangle =\langle B^2\rangle =v_0^2`$ and with correlation times $`\tau _i=2^{-i\beta }\tau _0`$. At each time step these processes are generated according to $`X_i(t+\mathrm{\Delta }t)=\sqrt{1-(\mathrm{\Delta }t/\tau _i)^2}X_i(t)+(\mathrm{\Delta }t/\tau _i)\sqrt{v_0^2}\zeta `$, where $`X`$ is $`A`$ or $`B`$, and $`\zeta `$ is a Gaussian random variable with zero mean and unit variance. The values of $`\tau _0`$ and the integration step $`\mathrm{\Delta }t`$ are to be chosen in such a way that $`\tau _N\gg \mathrm{\Delta }t`$. Typically, values of $`\mathrm{\Delta }t\sim 10^{-4}`$ are used. For the noncorrelated flow ($`\beta =0`$) the values of $`A_i(t)`$ and $`B_i(t)`$ are renewed at each integration step $`\mathrm{\Delta }t`$. In the present simulations $`N=16`$ was used. The value $`v_0=1`$ was employed in the majority of simulations reported here, so that only the use of a different $`v_0`$ value will be explicitly stated in the following.
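A minimal sketch of this stochastic update (the time step is chosen smaller here than the typical value in the text, so that $`\tau _N\gg \mathrm{\Delta }t`$ holds for all sixteen octaves; the update preserves the stationary variance $`v_0^2`$ exactly):

```python
import numpy as np

rng = np.random.default_rng(1)

def step(X, tau, dt, v0=1.0):
    # X_i(t + dt) = sqrt(1 - (dt/tau_i)^2) X_i(t) + (dt/tau_i) v0 zeta;
    # this keeps <X_i^2> = v0^2 for each octave.
    a = dt / tau
    return np.sqrt(1.0 - a**2) * X + a * v0 * rng.standard_normal(X.shape)

N, beta, tau0, dt = 16, 2.0 / 3.0, 0.15, 1e-5
tau = tau0 * 2.0 ** (-beta * np.arange(1, N + 1))   # tau_i = 2**(-i beta) tau0
A = rng.standard_normal(N)                          # initial amplitudes
for _ in range(1000):
    A = step(A, tau, dt)
print(A.var())   # remains of order v0^2 = 1
```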
The values of $`R^2(t)`$ obtained from 3000 realizations of the flow for several values of $`\beta \in [0,1]`$ are plotted on double logarithmic scales in Fig.1, where $`\tau _0=0.15`$ is used. One can clearly see that for all $`\beta `$ a scaling regime $`R^2(t)\propto t^\gamma `$ appears. We note moreover that the curves for $`\beta =0.67,0.8,0.9`$ and $`1.0`$ are almost indistinguishable within statistical errors. The values of $`\gamma `$ as a function of $`\beta `$ are presented in the insert, together with the theoretically predicted forms, vide infra. The regimes of dispersion found in the simulations can be explained within the framework put forward in Ref.. The discussion starts by considering $`l(r)=v(r)\tau (r),`$ the mean free path of motion at the distance $`r`$. If this mean free path always stays small compared to $`r`$, the relative motion exhibits a diffusive behavior with a position-dependent diffusion coefficient, $`K(r)\sim l^2(r)/\tau (r)\sim r^{\alpha +\beta }`$. Taking as a scaling assumption $`r\sim \langle r^2(t)\rangle ^{1/2}=R`$, one gets that the mean square separation $`R`$ grows as $`R^2\propto t^\gamma `$ with $$\gamma =\frac{2}{2-(\alpha +\beta )}.$$ (4) On the other hand, if $`l(r)`$ is of the order of $`r`$, the mean separation follows from the integration of the ballistic equation of motion $`\frac{d}{dt}R=v(R)\sim R^{\alpha /2}`$, see Ref.. Thus, in a flow where a considerable amount of flow lines of relative velocity are open, one gets $`R^2\propto t^\gamma ,`$ with $$\gamma =\frac{4}{2-\alpha }.$$ (5) The occurrence of either regime is governed by the value of the (local) persistence parameter of the flow, $$Ps(r)=l(r)/r=v(r)\tau (r)/r.$$ (6) Small values of $`Ps`$ correspond to erratic, diffusive motion, while large values of $`Ps`$ imply that the motion is strongly persistent. The value of the persistence parameter scales with $`r`$ as $`Ps(r)\propto r^{\alpha /2+\beta -1}`$. Since under the particles’ dispersion the mean interparticle distance grows continuously with time, the value of $`Ps`$ decreases continuously for $`\alpha /2+\beta <1`$, so that the diffusive approximation is asymptotically exact. For $`\alpha /2+\beta >1`$ the lifetimes of the structures grow so fast that the diffusive approximation does not hold. This situation is the one observed in our simulations for $`\beta >2/3`$. The strong ballistic component of motion implies that the velocities stay correlated over considerable time intervals. The results of Fig.1 confirm that $`\gamma (\beta )`$ behaves according to Eq.(4) for $`\beta <2/3`$ and Eq.(5) for $`\beta >2/3`$.
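The prediction for $`\gamma (\beta )`$ can be collected into a single piecewise expression; a short sketch of Eqs. (4) and (5):

```python
def gamma(beta, alpha=2.0 / 3.0):
    # Diffusive branch, Eq. (4), for alpha/2 + beta < 1; ballistic branch,
    # Eq. (5), otherwise. Both coincide at the Kolmogorov value beta = 2/3.
    if alpha / 2.0 + beta < 1.0:
        return 2.0 / (2.0 - (alpha + beta))
    return 4.0 / (2.0 - alpha)

for beta in (0.0, 0.33, 2.0 / 3.0, 0.8, 1.0):
    print(beta, gamma(beta))
# beta = 0 gives 3/2; beta = 2/3 gives Richardson's gamma = 3, and all
# beta > 2/3 give the same ballistic value 3, as seen in Fig. 1.
```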
We note here that the parameters of the simulations presented in Fig.1 ($`v_0=1,`$ $`\tau _0=0.15`$) were chosen in a way that allows all curves to be shown within the same time and distance intervals. This leads to a somewhat restricted scaling range and to a slight overestimate of the $`\gamma `$-values in the diffusive domain. Strong differences between the diffusive and the ballistic regimes can be readily inferred when looking at typical trajectories of the motion, such as are plotted in Fig.2 for the cases $`\beta =0.33`$ and $`\beta =0.67`$. The difference between the trajectories is evident both in the $`(x,y)`$-plots and in the $`r(t)`$-dependences. The curves for $`\beta =0.33`$ exhibit a random-walk-like, erratic behavior, while the curves for $`\beta =0.67`$ show long periods of laminar, directed motion. In order to quantitatively characterize the strength of the velocity correlations we calculate the backwards-in-time correlation function (BCF) of the radial velocities, as introduced in Ref.. This function is defined as $`C_r(\tau )=\langle v_r(t-\tau )v_r(t)\rangle /\langle v_r^2(t)\rangle `$ and shows what part of its history is remembered by a particle in motion. The function is plotted in Fig.3 against the dimensionless parameter $`\vartheta =\tau /t`$. The functions (obtained from $`10^4`$ realizations each) are plotted for 4 different sets of parameters. Here the dashed lines correspond to $`\beta =0.33`$, in the diffusive range, for $`t=10^{-2},`$ $`3\cdot 10^{-2},`$ $`10^{-1}`$ and $`3\cdot 10^{-1}`$. These BCFs do not scale and are rather sharply peaked close to zero, thus indicating the loss of memory. The two sets of full lines indicate $`C_r(\tau )`$ in Kolmogorov flows, for $`t=10^{-2},`$ $`3\cdot 10^{-2},`$ and $`10^{-1}`$. The lower set corresponds to the value $`\tau _0=0.05`$ and the upper set to the value $`\tau _0=0.15`$. In both cases the functions show scaling behavior. No considerable changes in the BCF’s form occur when further increasing the value of $`\tau _0`$ up to $`\tau _0=1`$, thus indicating that the data for $`\tau _0=0.15`$ already correspond to a strongly correlated regime. The form of these curves closely resembles the experimental findings of Ref.. The BCFs for $`\beta =1.0`$ show an overall behavior very similar to the one in Kolmogorov’s case. Note that as the time grows the curves for $`\beta =1.0`$ approach those for $`\beta =2/3`$ and probably tend to the same limit. The curves for $`\beta =1.0`$ and $`\tau _0=1`$ (not shown) fall together with those in Kolmogorov’s case with $`\tau _0=0.15`$. The similarity in the properties of dispersion processes in a Kolmogorov situation with larger $`Ps`$ (larger $`\tau _0`$) and in the ballistic regime can be explained based on the behavior of the effective persistence parameter. In the diffusive regime we supposed that the correlation time of the particles’ relative velocity scales in the same way as the Eulerian lifetime of the corresponding structures. On the other hand, in the ballistic regime, $`\beta >1-\alpha /2`$, the lifetimes of the structures grow so fast that no considerable decorrelation takes place during the time the particles sweep through the structure. The Lagrangian decorrelation process is then connected not to Eulerian decorrelation, but to sweeping along open flow lines. The effective correlation time then scales according to $`\tau _s(r)\sim r/v(r)\propto r^{1-\alpha /2}`$, and the effective value of $`\beta `$ stagnates at $`\beta =1-\alpha /2`$. Thus, all long-time correlated cases belong to the same universality class of strongly-correlated flows as the Kolmogorov flows with large $`Ps`$, for which Eqs.(4) and (5) coincide. For Kolmogorov flows the ballistic and the diffusive mechanisms lead to the same functional form of the $`R^2(t)`$-dependence. The functional form of the dependence of $`R^2(t)`$ on the parameters of the flow is $`R^2\sim \left(v_0^2\tau _0/r_0^{\alpha +\beta }\right)^\gamma t^\gamma `$ in the diffusive situation ($`Ps\ll 1`$) and $`R^2\sim \left(v_0/r_0^{\alpha /2}\right)^\gamma t^\gamma `$ in the ballistic case ($`Ps\gg 1`$). Assuming that $`Ps`$ is the single relevant parameter governing the dispersion, we are led to the form $`R^2(t)\sim f(Ps)\left(v_0/r_0^{\alpha /2}\right)^\gamma t^\gamma `$, where $`f(Ps)`$ is a universal function of $`Ps`$, which behaves as $`Ps^\gamma `$ for $`Ps\ll 1`$ and tends to a constant for $`Ps\gg 1`$. Thus, for a fixed spatial structure of the flow, the following scaling assumption is supposed to hold: $$\frac{R^2(t)}{\left(v_0t\right)^\gamma }=F(v_0\tau _0).$$ (7) This scaling can be checked in our case by plotting $`R^2(t)/(v_0t)^3`$ against $`v_0\tau _0`$. The corresponding plot is given in Fig. 4, where we fix $`t=0.1`$ and plot the results of three series of simulations. Each point corresponds to an average over $`5\cdot 10^4`$ runs. Here the squares correspond to $`v_0=1`$ and to values of $`\tau _0`$ ranging between 0.01 and 0.15, the triangles correspond to $`v_0=0.3`$ and to $`\tau _0`$ between 0.033 and 0.5, and the circles to $`\tau _0=0.1`$ and to values of $`v_0`$ between 0.1 and 1.5. The error bar indicates a typical statistical error as inferred from 5 similar series of $`5\cdot 10^4`$ runs each. The scaling proposed by Eq.(7) is well obeyed by the results. Some points outside of the range of Fig.4 were also checked. Thus, for larger values of $`v_0\tau _0`$ the values of $`R^2(t)/(v_0t)^3`$ seem to stagnate. On the other hand, increasing $`v_0\tau _0`$ to values larger than 0.3 (i.e. approaching the frozen flow regime) leads to a strong increase in fluctuations, making the results less reliable.
Let us summarize our findings. Thus, we considered two-particle dispersion in a velocity field scaling according to $`v^2(r)\propto r^{2/3}`$ and $`\tau (r)\propto r^\beta `$. We show that two generic types of behavior are possible: For $`\alpha /2+\beta <1`$ the diffusion approximation holds and the increase in the interparticle distances is governed by the distance-dependent diffusion coefficient $`K(r)\propto r^{\alpha +\beta }`$. In the opposite case $`\alpha /2+\beta >1`$ the relative velocities stay strongly correlated. The transition between the two regimes takes place exactly for the Kolmogorov flow, for which $`\alpha /2+\beta =1`$. In this case the properties of the dispersion process depend on the persistence parameter of the flow. The author is thankful to P. Tabeling, I. Procaccia, V. L’vov, J. Klafter and A. Blumen for enlightening discussions. Financial support by the Deutsche Forschungsgemeinschaft through the SFB 428 and by the Fonds der Chemischen Industrie is gratefully acknowledged.
no-problem/9911/cond-mat9911274.html
ar5iv
text
# Dynamics of fractal dimension during phase ordering of a geometrical multifractal ## Abstract A simple multifractal coarsening model is suggested that can explain the observed dynamical behavior of the fractal dimension in a wide range of coarsening fractal systems. It is assumed that the minority phase (an ensemble of droplets) at $`t=0`$ represents a non-uniform recursive fractal set, and that this set is a geometrical multifractal characterized by an $`f(\alpha )`$-curve. It is assumed that the droplets shrink according to their size while preserving their ordering. It is shown that at early times the Hausdorff dimension does not change with time, whereas at late times its dynamics follow the $`f(\alpha )`$ curve. This is illustrated by a special case of a two-scale Cantor dust. The results are then generalized to a wider range of coarsening mechanisms. Fractal growth phenomena have been under extensive investigation during the past two decades. The inverse process of fractal coarsening occurs in many physical systems. It has been discussed in the context of sintering of fractal matter. Coarsening of fractal clusters by surface tension in bulk-diffusion-controlled, interface-controlled and edge-diffusion-controlled systems has been investigated. Additional examples include thermal relaxation of rough grain boundaries and smoothing of fractal polymer structure in the process of polymer collapse. Two-dimensional fractal fingering, observed in a Hele-Shaw cell with radial symmetry (for a review see Ref.), exhibits coarsening at a late stage of the experiment. All these systems are quite different, as they involve non-conserved or conserved order parameters, different transport mechanisms, etc. A crucial issue related to any phase ordering process is the presence or absence of dynamical scale invariance (DSI). DSI assumes that there is a single dynamical length scale $`\lambda (t)`$ such that the coarsening system looks (statistically) invariant in time when lengths are scaled by $`\lambda (t)`$. Does a fractal cluster or a fractal interface exhibit DSI (on a shrinking interval of distances) in the process of coarsening? Early scenarios of fractal coarsening in systems with non-conserved and conserved order parameter did rely upon the hypothesis of DSI. However, numerical simulations showed that DSI breaks down during the coarsening of fractal clusters in edge- and bulk-diffusion-controlled systems. On the other hand, recent simulations of the smoothing of a fractal polymer during collapse, and of interface-controlled fractal coarsening under a global conservation law, do support DSI. Therefore, a question arises about the possible universality classes of fractal coarsening. Even if DSI holds, the fractal dimension may or may not change with time. Early fractal coarsening scenarios assumed that it remains constant (again, on a shrinking interval of distances). Experiments on the sintering of silica aerogels (a convenient way of investigating fractal coarsening) have been inconclusive. Some of them gave evidence in favor of constancy of the fractal dimension during coarsening, while others reported a significant change of the fractal dimension with time. Additional evidence for a significant decrease of the fractal dimension with time was found in experiments on thermal annealing of ferroelectric thin films of lead zirconate titanate. In this experiment, the fractal dimension remained constant at early times, and decreased to its final value at intermediate times.
Numerical simulations of a variety of coarsening systems with different growth laws showed that the fractal dimension does not change with time. These simulations include bulk-diffusion-controlled, edge-diffusion-controlled, and interface-controlled systems. It is remarkable that in so many systems with widely different coarsening mechanisms the fractal dimension remains constant during the dynamics. Therefore, one is tempted to look for a general scenario that would explain this fact and that would be insensitive to specific coarsening mechanisms. The simple multifractal coarsening model developed in this paper has this property. In addition, this model is the first attempt to address the multifractal properties of fractal coarsening. We shall consider a very simple model of a coarsening fractal system. In this model, the initial condition for the minority phase is an ensemble of droplets that represents a geometrical multifractal. We will then assume that the smaller droplets shrink and disappear independently, according to their sizes, and consider discrete time dynamics. Using a well-known theorem of multifractal geometry, we will establish the dynamical behavior of the Hausdorff dimension of this simple coarsening system. This result will be illustrated in a special case, when the droplets are distributed in the form of a two-scale Cantor dust. Employing the size distribution function of this fractal set, we will follow the dynamical behavior of the $`d`$-measure in two characteristic limiting cases and show that the Hausdorff dimension’s dynamics in this example are consistent with the general result. Then we will relax the discrete time assumption. Furthermore, we will show that the results are essentially independent of the details of the coarsening dynamics as long as the minority-phase droplets do not merge or break up. The minority phase of our model represents, at zero time, a large but finite ensemble of droplets that form a non-uniform recursive fractal with a constant density distribution in the $`E`$-dimensional space. Let us index the droplets in the $`m`$-th generation of the fractal according to their radii. Thus, all the droplets with index $`k`$ have radius $`R_m(k)`$ and form a subset of the whole fractal which we denote by $`S_m(k)`$. The smallest droplets have index $`k=0`$ and radius $`R_m(0)`$, which is the lower cutoff of the fractal. The largest droplets have index $`k=m`$ and radius $`R_m(m)`$, which is the upper cutoff. One can work with a size distribution function $`n_m(k)`$, which is simply the number of droplets with radius $`R_m(k)`$, and use it to compute the Hausdorff dimension of the fractal (see Ref., where this was done for a two-scale Cantor dust). Any non-uniform recursive fractal with a constant density distribution can be described as a multifractal in the geometrical sense (see Ref., p. 66). In this case one can introduce the measure of the subset $`S_m(k)`$ in the following way: $$\mu _m(k)=\frac{R_m^E(k)}{\mathrm{\Sigma }_{k^{\prime }=0}^mn_m(k^{\prime })R_m^E(k^{\prime })},$$ (1) where $`R_m(k)`$ are the radii of the droplets divided by the size of the system. The Hölder exponent of the elements of the subset $`S_m(k)`$ is defined by $$\alpha _m(k)=\frac{\mathrm{ln}\mu _m(k)}{\mathrm{ln}R_m(k)}.$$ (2) The $`f(\alpha )`$ curve for the fractal is constructed in the following way: $$f(\alpha )=-\frac{\mathrm{ln}n_m(k)}{\mathrm{ln}R_m(k)}\mathrm{\hspace{1em}}(1\le k\le m),$$ (3) where $`k`$ is supposed to be expressed through $`\alpha `$ with the help of the equation $`\alpha _m(k)=\alpha `$.
(We assume that this equation gives a one-to-one correspondence between $`\alpha `$ and $`k`$.) $`f(\alpha )`$ is assumed to have a single maximum which is attained for $`\alpha =\alpha _0`$, so that $`f(\alpha _0)`$ is the Hausdorff dimension of the whole fractal. We also assume that $`f(\alpha (k))`$ is the Hausdorff dimension of the subset $`S_m(k)`$. This assumption, widely used in the physical literature, was rigorously proved in the case of a two-scale Cantor dust, and also for a class of other multifractal measures. We now turn to describe the dynamics. We assume first that the droplets shrink and disappear independently, according to their radius only, and also simplify the governing dynamics by introducing a discrete time $`\tau `$ (later we will relax these two assumptions). In the first time step $`\tau =0`$ the smallest droplets with radius $`R_m(0)`$ disappear, while the sizes of the other droplets do not change. In the next time step $`\tau =1`$ the elements with radius $`R_m(1)`$ disappear, and so on. The set of droplets that survive after each step of these dynamics obviously remains self-similar (on a shrinking interval of distances). The main result of this paper is the following behavior of the Hausdorff dimension $`D`$ as a function of the discrete time $`\tau `$. For $`\tau \le k(\alpha _0)`$ $`D`$ does not change: $`D(\tau )=D_0`$, where $`D_0`$ is the Hausdorff dimension of the initial condition. For $`\tau >k(\alpha _0)`$ $`D(\tau )=f(\alpha (k_{min}))`$, where $`k_{min}(\tau )`$ is the $`k`$-value of the smallest droplets which have not yet disappeared by time $`\tau `$. This dynamical behavior is illustrated in Fig. 1. The proof of this result is based on the following theorem: the Hausdorff dimension of a union of two fractal sets $`S_1`$ and $`S_2`$ with fractal dimensions $`D_{S_1}`$ and $`D_{S_2}`$, respectively, is $`D=\text{max}(D_{S_1},D_{S_2})`$. (See, for example, Ref., p. 17.) In the last time step of the dynamics, $`\tau =m`$, the coarsening object consists of the subset $`S_m(m)`$ alone, and its Hausdorff dimension is $`f(\alpha (m))`$. In the previous time step $`\tau =m-1`$ the object consists of two subsets: $`S_m(m)`$ with Hausdorff dimension $`f(\alpha (m))`$, and $`S_m(m-1)`$ with Hausdorff dimension $`f(\alpha (m-1))`$. It follows from the shape of the $`f(\alpha )`$ curve of the initial fractal that $`f(\alpha (m-1))>f(\alpha (m))`$. Using the theorem, we get $`D(\tau =m-1)=f(\alpha (m-1))`$. More generally, consider time $`\tau =k_0+s`$, where $`s`$ is a positive integer and $`k_0\equiv k(\alpha _0)`$. At this time we can regard the object as a union of two fractal subsets $`S_m(k_0+s)`$ and $`S_m(m\ge k\ge k_0+s+1)`$. Here, $`S_m(m\ge k\ge k_0+s+1)`$ is the union of all subsets $`S_m(k)`$ with $`k=k_0+s+1,\mathrm{},m`$. It is also the whole coarsening object at time $`\tau =k_0+s+1`$. Assume by induction that $`D(\tau =k_0+s+1)=D(S_m(m\ge k\ge k_0+s+1))=f(\alpha (k_0+s+1))`$. It follows from the shape of the $`f(\alpha )`$ curve that $`f(\alpha (k_0+s))>f(\alpha (k_0+s+1))`$. Hence, using the theorem, we conclude that $`D(\tau =k_0+s)=f(\alpha (k_0+s))`$. Since $`k_0+s`$ is the index of the smallest droplets which have not yet disappeared we can write this result as $$D(\tau \ge k_0)=f(\alpha (k_{min})).$$ (4) The dynamical behavior of the Hausdorff dimension at times $`\tau \le k_0`$ can be found in a similar way. For $`\tau =k_0-1`$ the object can be considered as a union of two fractal subsets: $`S_m(m\ge k\ge k_0)`$ and $`S_m(k_0-1)`$. It follows from Eq. (4) that $`D(\tau =k_0)=D(S_m(m\ge k\ge k_0))=D_0`$.
From the shape of the $`f(\alpha )`$ curve we get $`D_0=f(\alpha (k_0))>f(\alpha (k_0-1))`$. Therefore, $`D(\tau =k_0-1)=D_0`$. More generally, for any time $`\tau =k_0-s`$ the coarsening object can be considered as a union of the two fractal subsets: $`S_m(m\ge k\ge k_0-s+1)`$ with Hausdorff dimension $`D_0`$ and $`S_m(k_0-s)`$ with Hausdorff dimension $`f(\alpha (k_0-s))`$. From the shape of the $`f(\alpha )`$ curve we deduce $`f(\alpha (k_0-s))<D_0`$. Hence, by using the above theorem we conclude that $`D(\tau =k_0-s)=D_0`$. More generally, we can write: $$D(\tau \le k_0)=f(\alpha (k_0))=D_0.$$ (5) Let us now turn to the particular case in which the ensemble of droplets at zero time represents a two-scale Cantor dust. Recall that the initiator of this fractal is an $`E`$-dimensional cube of unit side length. The generator consists of $`n_1`$ cubes of side $`l_1`$ and $`n_2`$ cubes of side $`l_2`$, where $`l_2>l_1`$. In each step of the fractal construction every full cube is replaced by the properly rescaled generator. After the last step of the construction, which is the $`m`$-th step, all the cubes are replaced by spherical droplets with the same size as the cubes. Now assume that this two-scale Cantor dust undergoes the simple coarsening dynamics described earlier. For convenience, we will compute the time-dependent $`d`$-measure of a two-scale Cantor dust which consists of cubes (the ones which were replaced by the spheres after the $`m`$-th generation of the construction). The only difference in the computed $`d`$-measure will be in a $`d`$-dependent prefactor. Since this prefactor is independent of $`k`$ and $`m`$, it will not affect the dynamical behavior of the $`d`$-measure and the Hausdorff dimension. The $`d`$-measure of the $`m`$-th generation of a two-scale Cantor dust can be written as $$M_d=\int _0^mn_m(k)R_m^d(k)dk$$ (6) $$=\int _0^m\left(\frac{m}{2\pi k(m-k)}\right)^{1/2}\mathrm{exp}\left[g(k)\right]dk,$$ (7) where $$g(k)=-k\mathrm{ln}\left(\frac{k}{mn_2l_2^d}\right)-(m-k)\mathrm{ln}\left(\frac{m-k}{mn_1l_1^d}\right),$$ (8) and $`R_m(k)=l_1^{m-k}l_2^k`$ is the size of the cubes in the subset $`S_m(k)`$. The function $`\mathrm{exp}[g(k)]`$ has a (sharp) maximum at $$\stackrel{~}{k_0}(d)=\frac{n_2l_2^dm}{n_1l_1^d+n_2l_2^d}.$$ (9) For $`d=D_0`$ one can show that $`\stackrel{~}{k_0}(D_0)=k(\alpha _0)\equiv k_0`$. At time $`\tau =k_{min}`$ the $`d`$-measure of the object is $$M_d(\tau )=\int _{k_{min}(\tau )}^mn_m(k)R_m^d(k)dk.$$ (10) As long as $`k_{min}(\tau )\le \stackrel{~}{k_0}(d)`$, one can apply the saddle point argument used in Ref. and conclude that $$M_d(\tau \le \stackrel{~}{k_0}(d))\simeq M_d(\tau =0)=(n_1l_1^d+n_2l_2^d)^m.$$ (11) This implies that during the early stages of the dynamics the $`d`$-measure remains, to exponential accuracy, constant. Correspondingly, the Hausdorff dimension, which is computed by solving the same equation $$n_1l_1^d+n_2l_2^d=1$$ (12) for $`d`$, does not change with time. On the other hand, when $`\stackrel{~}{k_0}(d)\ll \tau =k_{min}\le m`$, the behavior of $`M_d(\tau )`$ is quite different. Since for $`k>\stackrel{~}{k_0}(d)`$ $`g(k)`$ is a decreasing function of $`k`$, the main contribution to the integral in Eq. (10) comes from a close neighborhood of $`k=k_{min}(\tau )`$.
Therefore, in Eq. (10) we can expand $`g(k)`$ around $`k=k_{min}(\tau )`$ to first order and get $$M_d\simeq \frac{n_m(k_{min})R_m^d(k_{min})}{|g^{\prime }(k_{min})|}$$ (13) $$=\frac{h(\xi _{min},d)[y(\xi _{min},d)]^m}{m^{1/2}},$$ (14) where $`\xi _{min}=k_{min}/m`$, $$y(\xi _{min},d)=\left(\frac{1-\xi _{min}}{n_1l_1^d}\right)^{\xi _{min}-1}\left(\frac{\xi _{min}}{n_2l_2^d}\right)^{-\xi _{min}},$$ (15) and $$h^{-1}(\xi _{min},d)=[2\pi \xi _{min}(1-\xi _{min})]^{1/2}\mathrm{ln}\left[\frac{(1-\xi _{min})n_2l_2^d}{\xi _{min}n_1l_1^d}\right].$$ (16) The Hausdorff dimension of the subset labeled by $`\xi _{min}`$ is given by $$f(\alpha (\xi _{min}))=\frac{\xi _{min}\mathrm{ln}(\frac{\xi _{min}}{n_2})+(1-\xi _{min})\mathrm{ln}(\frac{1-\xi _{min}}{n_1})}{(1-\xi _{min})\mathrm{ln}l_1+\xi _{min}\mathrm{ln}l_2}.$$ (17) It follows that $$R_m(k_{min})^{-f(\alpha (k_{min}))}=\frac{n_m(k_{min})}{\left(\frac{m}{2\pi k_{min}(m-k_{min})}\right)^{1/2}}.$$ (18) Hence, we obtain the following expression for $`M_d`$ in the limit of $`\stackrel{~}{k_0}(d)\ll \tau \ll m`$: $$M_d\simeq \left[\frac{h(\xi _{min},d)}{m^{1/2}}\right]R_m(k_{min})^{d-f(\alpha (k_{min}))}.$$ (19) We see that, up to logarithmic corrections resulting from the factor $`h(\xi _{min},d)`$, the $`d`$-measure obeys a power law of $`R_m(k_{min})`$ with a time-dependent exponent. Eqs. (14)-(16) allow one to calculate the Hausdorff dimension of the ensemble of droplets in the limit of $`\stackrel{~}{k_0}(d)\ll \tau \ll m`$. Taking the logarithm of both sides of Eq. (14) and dividing by $`m`$, we get $$\frac{\mathrm{ln}M_d}{m}\simeq \frac{1}{m}\mathrm{ln}\left[\frac{h(\xi _{min},d)}{m^{1/2}}\right]+\mathrm{ln}[y(\xi _{min},d)].$$ (20) For $`m\gg 1`$, the first term on the right hand side of Eq. (20) can be neglected. Therefore, the Hausdorff dimension is determined by solving the equation $$y(\xi _{min},d)=1$$ (21) for $`d`$. The solution is just the Hausdorff dimension of the subset $`\xi _{min}`$ given by Eq. (17). Therefore, $`D(\tau )=f(\alpha (k_{min}))`$ for $`k_0\ll \tau \ll m`$, in agreement with the general result (4). We now show that the assumptions of a discrete time and of the independent shrinking of the droplets can be relaxed. It is sufficient to assume only that the dynamics of each droplet are determined by its radius (and possibly by a time-dependent “critical radius”, characterizing some mean-field interaction between droplets). We should also assume that the droplets do not merge or break up. Under these assumptions the number of droplets in each subset is constant (until the droplets disappear) and all the droplets belonging to the same subset have the same (time-dependent) radius. In addition, we forbid nucleation, which is a standard assumption for a coarsening stage. Let us denote the radii of the droplets belonging to the $`k`$-th subset at time $`t`$ by $`R_m(k,t)`$. The $`d`$-measure of the $`k`$-th subset at time $`t`$ is given by: $$M_d(m,k,t)=n_m(k)R_m^d(k,t).$$ (22) This can be rewritten as: $$M_d(m,k,t)=M_d(m,k,0)\left[\frac{R_m(k,t)}{R_m(k,0)}\right]^d,$$ (23) where $`R_m(k,0)`$ and $`M_d(m,k,0)`$ are the initial values of the radii and the $`d`$-measure. Since the initial condition is a geometrical multifractal, $`M_d(m,k,0)`$ can be expressed in the following manner: $$M_d(m,k,0)=\left[Y(\frac{k}{m},d,\{P_i\})\right]^m,$$ (24) where the function $`Y`$ and the parameters $`\{P_i\}`$ characterize the initial fractal condition considered.
(In our example of the two-scale Cantor dust the role of the function $`Y`$ was played by $`y`$, while the set of parameters $`\{P_i\}`$ included $`n_1`$, $`n_2`$, $`l_1`$ and $`l_2`$.) Substituting (24) into (23), taking the logarithm of both sides, and dividing by $`m`$, we get
$$\frac{\mathrm{ln}M_d(m,k,t)}{m}=\mathrm{ln}\left[Y(\frac{k}{m},d,\{P_i\})\right]+\frac{d}{m}\mathrm{ln}\left[\frac{R_m(k,t)}{R_m(k,0)}\right].$$ (25)
For typical coarsening mechanisms $`R_m(k,t)`$ grows with time more slowly than exponentially. For example, this is true for non-conserved dynamics (model A) and for the Lifshitz-Slyozov theory of conserved dynamics (model B). Therefore, when $`R_m(k,t)>R_m(k,0)`$, the second term on the right side of Eq. (25) is negligible for $`m\gg 1`$. Similarly, it is negligible when $`R_m(k,t)<R_m(k,0)`$, as long as $`R_m(k,t)`$ is not exponentially smaller than $`R_m(k,0)`$. Eq. (25) becomes inconvenient in the case of shrinking droplets at the moment of their disappearance, when $`R_m(k,t)\to 0`$ and the logarithm diverges. Eq. (23) shows, however, that the $`d`$-measure of such droplets vanishes. Hence, the $`d`$-measure of the $`k`$-th subset does not change during the coarsening dynamics until the droplets belonging to this subset disappear. Consequently, the Hausdorff dimension of this subset does not change until its disappearance. We have therefore shown that the results of our simple discrete-time coarsening model apply to a wide range of coarsening mechanisms.

It should be noticed that for a system with weak multifractal properties our model predicts that the fractal dimension remains approximately constant at all times. Therefore, this model provides a simple explanation for the observation that the fractal dimension does not change in a wide range of coarsening processes.

In summary, we have considered a simple model of coarsening disconnected droplets forming a geometrical multifractal. We have shown that at early times the Hausdorff dimension of the system does not change, whereas at late times its dynamics follow the $`f(\alpha )`$ curve of the initial multifractal distribution. These results are insensitive to the particular coarsening mechanism. We hope that they will motivate experimental investigation of multifractal aspects of fractal coarsening.

This work was supported in part by a grant from the Israel Science Foundation, administered by the Israel Academy of Sciences and Humanities.
no-problem/9911/astro-ph9911012.html
ar5iv
text
# Slow pulsars from the STScI/NAIC drift scan search

## The STScI/NAIC drift scan search

During the recent Gregorian upgrade of the Arecibo telescope, considerable effort was put into drift scan searches of the Arecibo sky ($`-1^{\circ }<\delta <+39^{\circ }`$) for new pulsars. The STScI/NAIC group was assigned declination strips centered at 1.5°, 6.5°, 11.5°, 16.5°, 21.5°, 26.5°, 31.5°, and 36.5°. A list of 20 candidates was compiled from a search in these areas between 1994 and 1998. We have so far confirmed eight new pulsars as a result of these observations. The nominal parameters based on the confirmation observations are summarized in Table 1. Barycentric periods have uncertainties of order one unit in the last digit quoted, while a conservative estimate of the uncertainty in the dispersion measures (DM) is $`\pm 10`$ cm<sup>-3</sup> pc. The positions are presently uncertain by about $`\pm 5`$ arcmin in right ascension and declination, equivalent to the half-power beam size of the telescope at 430 MHz.

Although we presently have no long-term estimates of the flux densities of the new pulsars, it is already clear that they are weak sources with typical flux densities of order 0.5 to 1 mJy. Some of the initial detections were probably significantly facilitated by flux amplifications due to interstellar scintillation. Inferred 430-MHz luminosities, based on their fluxes and dispersion measure estimates, range between 3 and 30 mJy kpc<sup>2</sup>. These pulsars, along with those discovered by other groups during the Arecibo upgrade, should greatly assist future statistical studies of the low end of the pulsar luminosity function. More accurate measurements of the flux densities, as well as the spin and astrometric parameters for each source, are presently underway at Arecibo as part of a regular timing program using the Penn State Pulsar Machine.
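As a rough illustration of how the luminosities quoted above follow from the measured quantities, a pseudo-luminosity is $`L=Sd^2`$, with the distance $`d`$ estimated from the dispersion measure. The Python sketch below is our own; it replaces a proper Galactic electron-density model (e.g., Taylor & Cordes 1993) with a deliberately crude uniform density, and all numbers are illustrative:

```python
MEAN_N_E = 0.03  # assumed mean Galactic electron density [cm^-3] (toy value)

def dm_distance_kpc(dm):
    """Distance [kpc] from DM [cm^-3 pc] for a uniform medium: d = DM / n_e."""
    return dm / MEAN_N_E / 1000.0  # pc -> kpc

def pseudo_luminosity(flux_mjy, dm):
    """Pseudo-luminosity L = S * d**2 in mJy kpc^2, the units quoted above."""
    return flux_mjy * dm_distance_kpc(dm) ** 2

# e.g., a 0.7 mJy detection at DM = 60 cm^-3 pc:
print(pseudo_luminosity(0.7, 60.0))  # ~2.8 mJy kpc^2
```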
no-problem/9911/astro-ph9911202.html
ar5iv
text
## 1 Introduction

Molecules such as $`H_2`$ and $`HD`$ are expected to be present in the post-recombination gas, and due to their cooling properties they can thermally influence the gravitational collapse of the first objects which formed in the Universe. At low temperatures, these molecules, with some traces of $`CO`$, could also be present in the intracluster gas, where they could act as important coolants in cooling flows. In this chemically simple gas the molecules are mainly excited collisionally; followed by radiative de-excitation in the optically thin medium, this leads to an energy loss for the gas clouds and thus to cooling. The aim of this communication is to discuss the minimum temperature achievable by clouds located in the region of the cooling flow of PKS 0745-191.

## 2 Equilibrium distance

We have computed the molecular cooling (including radiative transfer effects) due to $`H_2`$, $`HD`$ and $`CO`$ for small clouds which are the result of a fragmentation process of bigger clouds in cooling flows. In our calculation we also included an attenuation factor $`\tau `$ which characterizes the column density surrounding the sub-clouds. Thus the attenuated bremsstrahlung flux coming from the intracluster gas heats the clouds located in the cooling flow at a distance $`r`$ from the cluster center, and so thermal balance between heating and cooling defines an equilibrium temperature of the sub-clouds at a distance $`r=R_{eq}`$ inside the cooling flow region (i.e. $`R_{eq}<r_{cool}`$).

The following column densities are adopted for a typical small cloud (with $`n_{H_2}=10^6`$ cm<sup>-3</sup> and the ortho-para ratio equal to 1): $`N_{CO}=10^{14}\,\mathrm{cm}^{-2}`$ and $`N_{H_2}=2\times 10^{18}\,\mathrm{cm}^{-2}`$, which corresponds to a $`CO`$ abundance $`\eta _{CO}\simeq 5\times 10^{-5}`$. For $`HD`$, instead, we assume the primordial ratio $`\eta _{HD}\simeq 7\times 10^{-5}`$.

In Figure 1 we plot the equilibrium temperature of clumps at the equilibrium distance $`R_{eq}`$ for different values of the attenuation factor $`\tau `$. We see that low equilibrium temperatures are achieved at distances smaller than the cooling radius $`r_{cool}`$. In Figure 2 we plot the equilibrium distance as a function of the equilibrium temperature for different values of $`\eta _{CO}`$, with $`\tau =2.5`$ kept fixed. Indeed, the $`CO`$ abundance, and thus $`\eta _{CO}`$, is an important parameter which is, however, not well known. We thus find that a fraction of the gas in the cooling flow of PKS 0745-191 could be very cold and might form small clouds via fragmentation.

Acknowledgements. We would like to thank M. Plionis and I. Georgantopoulos for organizing this pleasant conference. This work has been supported by the Dr Tomalla Foundation and by the Swiss NSF.
no-problem/9911/astro-ph9911383.html
ar5iv
text
# Synthetic Spectra and Color-Temperature Relations of M Giants

## 1 Introduction

M giants are important contributors to the integrated light of many stellar aggregates, such as early-type galaxies and galactic bulges, even though they generally comprise a very small fraction of the stellar mass of these systems. In fact, recent integrated light models of the Galactic bulge of the Milky Way (Houdashelt (1995)) indicate that M giants contribute over half of the K-band flux there and thus also dictate the strength of spectral features such as the 2.3 $`\mu `$m CO band. However, the relative importance of these cool stars in integrated light is quite dependent upon the wavelength region under consideration and the other stellar populations present. For example, at optical wavelengths, where the Galactic-bulge M giants are much fainter than they are in the near-infrared, these stars produce only about 10–20% of the bulge’s continuous flux but are entirely responsible for the broad absorption bands of TiO seen in this part of its spectral energy distribution. Consequently, population models of galaxies should strive to include realistic representations of M giants.

Unfortunately, M stars have proven especially difficult to model accurately because they are so cool and thus have a wealth of molecules in their stellar atmospheres. This means that a variety of phenomena which can be ignored in the modelling of hotter stars are important in the atmospheric structure of M stars. The most critical of these are the molecular opacities, which depend not only on the accuracy of the (sometimes nonexistent) laboratory data but also on the way in which the opacities are calculated and represented in the models. Simple mean opacities, opacity distribution functions and opacity sampling have each been used in the modelling of cool star atmospheres. For example, plane-parallel models of M giants have been calculated by Brett (1990) using straight mean opacities and by Jørgensen (1994) using opacity sampling.

Other factors influencing cool star models include sphericity effects and variability. In spherical models, an additional parameter is introduced: the extension of the atmosphere, d. It is defined by the relation d = r/R − 1, where R is the stellar radius, typically defined to be the radius at which the Rosseland optical depth is equal to unity, and r is the radius at which the Rosseland optical depth has a value of $`10^{-5}`$. Static, spherical models of M giants have been constructed by Bessell et al. (1989a), Scholz & Tsuji (1984), Scholz (1985) and Plez et al. (1992). All but the latter used straight mean opacities in their models; Plez et al. (1992) incorporated more recent opacity data and used opacity sampling. Dynamic, spherical models, representing Mira variables, have been studied by Bessell et al. (1989b) and Alvarez & Plez (1998).

Extension and sphericity are generally important for two reasons. First, if log g varies significantly between the base of the stellar atmosphere and its outer layers, it may be necessary to include the radial dependence of gravity explicitly in the stellar atmosphere model. Second, extension results in a higher photon escape probability, and the corresponding dilution of the stellar flux produces a cooling of the outer layers of the atmosphere. Due to the temperature sensitivity of molecule formation, this cooling in turn enhances the formation of certain molecules and thus their partial pressures.
In fact, extension can be so important in cool stars that Scholz & Wehrse (1982) and Scholz (1985) have suggested a 3-dimensional classification scheme for M giants in which extension serves as the third parameter (in addition to T<sub>eff</sub> and log g); they propose that the extension of an M giant can be estimated observationally from the depths of specific TiO bands at a given effective temperature and surface gravity. However, Plez et al. (1992) have found that a good representation of the opacities is more important than sphericity effects for models having log g $`\ge `$ 0.0 and masses of order 1 M<sub>☉</sub> or more. Thus, sphericity and variability significantly affect only the coolest of M giant models.

We have recently begun an evolutionary synthesis program to produce synthetic spectra of early-type galaxies. The foundations of this work are stellar atmosphere models and synthetic spectra calculated with updated versions of the MARCS (Gustafsson et al. (1975), Bell et al. (1976)) and SSG (Bell & Gustafsson (1978); Gustafsson & Bell (1979); Bell & Gustafsson 1989; hereafter BG (89)) computer codes, respectively. However, because we employ versions of these codes which do not account for sphericity and other factors which affect the stellar atmospheres of M giants, we have exerted a considerable effort to fine-tune our models to compute more representative synthetic spectra of these cool stars and also to establish an effective temperature scale for M giants based upon recent angular diameter measurements. In a companion paper, Houdashelt et al. (2000; hereafter Paper I), we describe many of the recent improvements in these codes and present new color-temperature relations for stars as cool as spectral type K5. In the present paper, we discuss our improved models of M giants and compare the resulting synthetic spectra and colors to observational data. In Section 2, we briefly describe the MARCS/SSG models and calculations, examine the possible shortcomings of using these models to represent M giants, and present our strategy for testing and refining the synthetic spectrum calculations to compensate for these shortcomings. We compare three effective temperature scales for cool stars in Section 3 and determine which best represents the field M giants. Section 4 discusses our treatment of TiO absorption in the synthetic spectra and compares our results to observed spectra of field M giants. We also show the good agreement between the computed and observed CO band strengths and compare the broad-band colors measured from the synthetic spectra to photometry of field stars. Section 5 summarizes the major conclusions of this work.

## 2 Basic Details of the M Giant Models

The models of M giants presented in this paper have been constructed in exactly the same manner as those of the hotter stars described in Paper I. We provide here only a brief description of the MARCS model atmospheres and the SSG synthetic spectrum calculations, emphasizing those factors which are most relevant to calculating models of cool stars. We refer the reader to Paper I for further details.

### 2.1 Calculating the Model Atmospheres and Synthetic Spectra

The version of the MARCS stellar atmosphere code used to construct the model atmospheres of the M giants produces a flux-constant, chemically-homogeneous, plane-parallel model atmosphere calculated under the assumptions of hydrostatic equilibrium and LTE.
It incorporates opacity distribution functions (ODFs) to represent the opacity due to atomic and molecular lines as a function of wavelength. The SSG spectral synthesis code combines the MARCS model atmosphere and spectral line lists to compute a synthetic spectrum. The primary spectral line list which we use is the updated version of the Bell “N” list (Bell et al. (1994)) described in Paper I. In addition, optional line lists for TiO and H<sub>2</sub>O can be included in the calculations. We have incorporated the TiO line list in our M giant models, and we describe it more fully below. As described later in this paper (see Section 4.2.3), both of the H<sub>2</sub>O line lists which we tested were found to be unsatisfactory, so no water lines are included in the models presented here.

The TiO spectral line list includes lines from the $`\alpha `$ (C$`{}^{3}\Delta `$–X$`{}^{3}\Delta `$), $`\beta `$ (c$`{}^{1}\Phi `$–a$`{}^{1}\Delta `$), $`\gamma `$ (A$`{}^{3}\Phi `$–X$`{}^{3}\Delta `$), $`\gamma ^{\prime }`$ (B$`{}^{3}\Pi `$–X$`{}^{3}\Delta `$), $`\delta `$ (b$`{}^{1}\Pi `$–a$`{}^{1}\Delta `$), $`\varphi `$ (b$`{}^{1}\Pi `$–d$`{}^{1}\Sigma `$), and $`ϵ`$ (E$`{}^{3}\Pi `$–X$`{}^{3}\Delta `$) systems. In addition to the lines of <sup>48</sup>TiO, the spectral line lists include lines of <sup>46</sup>TiO, <sup>47</sup>TiO, <sup>49</sup>TiO and <sup>50</sup>TiO as well. Wavelengths and gf values for lines in the $`ϵ`$ system were kindly provided by Plez (1996). For lines of the other systems, wavelengths were calculated using molecular constants taken from Phillips (1973), and the Hönl-London factors were obtained from formulae in Kovacs (1969). The Franck-Condon factors were taken from Bell et al. (1979) for the $`\alpha `$, $`\gamma `$, $`\gamma ^{\prime }`$ and $`\varphi `$ systems and computed for the $`\beta `$ and $`\delta `$ systems using the code described by Bell et al. (1976). The initial f<sub>00</sub> values for the $`\alpha `$, $`\beta `$, $`\gamma `$ and $`\gamma ^{\prime }`$ systems came from Hedgecock et al. (1995), while those for the $`\delta `$ and $`\varphi `$ bands were assumed to be 0.0190 and 0.0210, respectively. Improved values were found empirically by comparing observed and synthetic spectra of field M giants, as described in Section 4.1.

All of the stellar atmosphere models and synthetic spectra discussed in this paper have been constructed using solar abundance ratios for all of the elements except carbon and nitrogen. Paper I discusses the evidence indicating that stars which are more evolved than the “bump” in the red-giant-branch luminosity function have had CNO-processed material mixed into their atmospheres. Consequently, we have used [C/Fe] = −0.2, [N/H] = +0.4 and <sup>12</sup>C/<sup>13</sup>C = 14 for our M giant models, in accordance with the abundance ratios measured in field M giants (Smith & Lambert (1990)). As in Paper I, the synthetic spectra were calculated at 0.1 Å resolution and in two pieces, optical and infrared (IR). The optical portion of the spectrum covers wavelengths from 3000–12000 Å, and the IR section extends from 1.0–5.1 $`\mu `$m (the overlap is required for calculating J-band magnitudes). In addition, the microturbulent velocity, $`\xi `$, used to calculate the synthetic spectrum of a given star was derived from its surface gravity using the field-star relation $`\xi `$ = 2.22 − 0.322 log g (Gratton et al. 1996).
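For reference, the microturbulence assignment is trivial to automate. A minimal Python helper (our own; the example gravity is an arbitrary illustrative value):

```python
def microturbulence(log_g):
    """Microturbulent velocity [km/s] from the field-star relation
    xi = 2.22 - 0.322 log g (Gratton et al. 1996) quoted above."""
    return 2.22 - 0.322 * log_g

print(microturbulence(0.71))  # ~1.99 km/s for a cool giant with log g = 0.71
```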
### 2.2 Possible Deficiencies in the Models

There are two possible drawbacks to using our version of MARCS model atmospheres to represent M giants. First, being plane-parallel models, they do not account for the effects of sphericity and extension. Second, the ODFs which we employ do not include the molecular opacities of TiO, VO and H<sub>2</sub>O, which are among the strongest opacity sources in M stars. In addition, the spectral line list used by SSG does not include lines of VO. Still, there are good reasons to believe that the models, especially those of the hotter M giants, will not be too greatly in error due to these factors.

Jørgensen (1994) has shown that the inclusion of TiO in plane-parallel model atmospheres mainly serves to heat the surface layers of the models, with the extent of the heating diminishing as T<sub>eff</sub> decreases. This heating inhibits the formation of H<sub>2</sub>O, which thus does not become an important source of opacity until mid-to-late-M types. The main effect of H<sub>2</sub>O opacity in his models is to cause an expansion of the atmosphere in cooler stars (T<sub>eff</sub> $`\lesssim `$ 3000 K). VO appears to have little influence on the atmospheric structure but does affect the spectra of M giants, producing a series of broad absorption bands between about 0.7 and 2.2 $`\mu `$m in the spectra of giants later than spectral type M5 (Brett 1990; Plez 1998).

From a physical standpoint, the basic differences between spherical model atmospheres and their plane-parallel counterparts appear to be 1) the spherical models extend to lower gas pressures and temperatures, and 2) as extension increases, the temperature at a given optical depth decreases, while the gas pressure at a given temperature increases (see e.g., Scholz & Tsuji 1984; Scholz 1985). Scholz (1985) and Bessell et al. (1989a) have found that extension is relatively unimportant in early-type M giants but dramatically increases with decreasing effective temperature for T<sub>eff</sub> $`\lesssim `$ 3500 K, mainly due to a substantial increase in the H<sub>2</sub>O opacity. In addition, they find that extension influences only the uppermost layers of the atmosphere, so that molecular species such as TiO, VO and H<sub>2</sub>O are affected, but CN and CO, which mostly form deeper in the atmosphere, are found to be relatively insensitive to extension. Plez et al. (1992) also find that extension affects TiO formation mainly because it forms in the upper layers of the atmosphere; extension is greatest for their models near 3200 K due to the saturation of H<sub>2</sub>O bands.

Thus, while these factors (plane-parallel model atmospheres, missing opacity in the ODFs) are potential hindrances to successful modelling of cool giants, we expect them to have relatively minor effects on our results. Since they affect only the surface layers of the models, the continuum in our models should accurately represent the effective temperature. On the other hand, we would be surprised to find that the absorption bands of the molecules formed in the outer parts of the atmosphere matched those observed in stars of the corresponding T<sub>eff</sub>.
The TiO bands, which are extremely temperature-sensitive, should be the most important in this regard, since VO and H<sub>2</sub>O bands are observed in the spectra of only the coolest M giants (spectral type M5 and later) and/or those of very low gravity; the omission of the latter two molecules in the ODFs probably has a minimal influence on the stellar atmosphere models of most M giants. Thus, we have good reason to believe that we can overcome the deficiencies inherent in our ODFs and plane-parallel models by modifying our treatment of TiO.

### 2.3 How Do We Test Our Models?

Quite often, stellar models are evaluated by comparing the colors of a grid of solar-metallicity models to observed color-color or color-temperature relations of field stars. If the field star relations fall within the domain of the grid colors at a given effective temperature or color, then the models are usually considered to be satisfactory. The fact that this approach is only truly appropriate for colors which are reasonable representations of the continuum slope and are relatively insensitive to gravity is often ignored. While this may be reasonable for many of the colors of hotter stars, such effects must not be neglected when modelling M giants, since their spectra are dominated by molecular absorption bands and gravity-sensitive features. It is possible that, through a fortuitous but incorrect combination of log g, [Fe/H] and perhaps microturbulent velocity, a synthetic spectrum can be calculated which has the colors of a field M giant of a given effective temperature but proves to be a poor match to the finer details of its spectral energy distribution. Unfortunately, without a priori knowledge of the temperatures and gravities of the field stars, a better way to test the model colors is not obvious.

Given the aforementioned uncertainties in our models and in dealing with cool stars in general, we do not expect to be able to produce perfect synthetic spectra of M giants. However, we need to be able to evaluate the “quality” of our models and refine them as necessary. We have chosen to do this by comparing our synthetic spectra to observed spectra of field M giants in as much detail as possible. Ideally, this would utilize a good set of empirical spectra of stars of known T<sub>eff</sub>, gravity and metallicity; to the best of our knowledge, such a set of data does not exist for M giants. Instead, we have chosen to use the “intrinsic” spectral sequence of field M giants presented by Fluks et al. (1994; hereafter FPTWWS) to test and improve our synthetic spectrum calculations. However, to use these data for such a purpose, we must first assign effective temperatures and surface gravities to the stars represented by their spectral sequence.

In the MK spectral classification system, temperature classes of M giants are based primarily upon the strengths of the TiO bands (see Keenan & McNeil (1976)). While this allows an MK spectral type to be assigned to any spectrum containing TiO bands, it also makes the classification of these cool stars metallicity-dependent, since the TiO band strengths depend upon the abundances of titanium and oxygen (we adopt logarithmic abundances of 4.78 and 8.87 dex for Ti and O, respectively, on a scale where H = 12.0 dex). In other words, an M2 giant with solar abundances will not have the same effective temperature as a metal-poor M2 giant or an M2 giant with non-solar Ti/Fe ratios, such as stars in the Galactic bulge (McWilliam & Rich (1994)).
Because the TiO bands are also sensitive to gravity (see Bessell et al. 1989a) and extension (for the coolest and lowest gravity M stars), there may be a more complex relation between spectral type and effective temperature for M giants than for hotter stars. Nevertheless, the primary factor affecting the TiO bands is T<sub>eff</sub>, and the crucial step in calculating realistic models of M giants is to reproduce the specific relationship between the depths of the TiO bands (i.e., spectral type) and effective temperature which is observed in field M giants. Thus, the first step in our modelling is to determine the spectral type–T<sub>eff</sub> relation (hereafter STT relation) which holds for the field M giants; we do this in the subsequent section of this paper. We then assign a surface gravity to each M giant using a relation between log g and T<sub>eff</sub> derived from the isochrones and stellar evolutionary tracks produced as part of our evolutionary synthesis program (see Paper I and Houdashelt et al. (2001) for details). As expected from the discussion of our cool star models, the strengths of some of the TiO bands in the initial synthetic spectra failed to match the observed spectra, and we were forced to adjust the f<sub>00</sub> values of the individual TiO bands to improve the overall agreement between the empirical and synthetic spectra.

## 3 The M Giant Temperature Scale

As discussed in Paper I, the most direct way to determine the effective temperature of a star is through measurement of its angular diameter, $`\varphi `$, and its apparent bolometric flux, f<sub>bol</sub>. These parameters can be related to effective temperature through the relation
$$\mathrm{T}_{\mathrm{eff}}\propto \left(\frac{\mathrm{f}_{\mathrm{bol}}}{\varphi ^2}\right)^{0.25}.$$ (1)
Of course, angular diameters can be determined for only the most nearby stars and are typically measured through either lunar occultations or interferometry (e.g., speckle, intensity, Michelson). Usually, the diameter measured by these methods is that for a uniform disk, denoted $`\varphi _{\mathrm{UD}}`$, which must then be adjusted to give the limb-darkened angular diameter, $`\varphi _{\mathrm{LD}}`$, before computing the effective temperature. The limb-darkening correction is generally derived from stellar atmosphere models and is currently one of the greatest uncertainties in estimating the effective temperature, since this correction is sensitive to stellar parameters (such as T<sub>eff</sub> itself) and the wavelength at which $`\varphi _{\mathrm{UD}}`$ is determined.

As our main purpose in modelling M giants is to use their synthetic spectra for evolutionary synthesis of galaxies, the temperature scale that we choose must be consistent with that used in Paper I for the hotter stars, the STT relation of BG (89). Since BG (89) only derived effective temperatures for G and K stars, we must find a complementary relation for cooler stars. Below, we examine three different temperature relations for M0–M7 giants, each based upon angular diameter measurements: the relation given by Dyck et al. (1996; hereafter DBBR), that of Di Benedetto & Rabbia (1987; hereafter DiBR), and a temperature scale which we have derived from angular diameters measured by Mozurkewich et al. (1991) and Mozurkewich (1997), which we will hereafter collectively refer to as M (97). Perrin et al. (1998) have recently estimated T<sub>eff</sub> for stars even later than spectral type M7, but we have not included their results because even the most metal-rich isochrones used in our evolutionary synthesis models do not extend to such cool effective temperatures.
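To make the bookkeeping behind equation (1) concrete, the sketch below (our own, not from Paper I) converts a uniform-disk diameter and an apparent bolometric flux into an effective temperature directly from the Stefan-Boltzmann law; the input values, and the use of DBBR's 2.2 $`\mu `$m limb-darkening factor of 1.022 quoted in Section 3.1 below, are illustrative assumptions:

```python
import math

SIGMA = 5.6704e-5                                  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)   # milliarcseconds -> radians

def effective_temperature(f_bol, phi_ld_mas):
    """T_eff [K] from f_bol [erg cm^-2 s^-1] and the limb-darkened angular
    diameter [mas], using f_bol = sigma * T_eff**4 * (phi/2)**2."""
    phi = phi_ld_mas * MAS_TO_RAD
    return (4.0 * f_bol / (SIGMA * phi**2)) ** 0.25

# Illustrative M giant: phi_UD = 10 mas at 2.2 um, f_bol = 5e-6 erg cm^-2 s^-1
phi_ld = 1.022 * 10.0  # apply a 2.2-um limb-darkening correction
print(round(effective_temperature(5.0e-6, phi_ld)))  # ~3460 K
```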
### 3.1 Angular Diameter Measurements

DBBR combined interferometric angular diameters of 34 stars measured with the Infrared Optical Telescope Array at 2.2 $`\mu `$m and occultation diameters from Ridgway et al. (1980) to derive a relation between effective temperature and spectral type for K and M giants. Their temperatures were determined using the limb-darkening correction $`\varphi _{\mathrm{LD}}`$ = 1.022 $`\varphi _{\mathrm{UD}}`$, which they derived from the stellar atmosphere models of Scholz & Takeda (1987). DBBR estimated the uncertainty in their effective temperature at a given spectral type to be approximately 95 K. They also concluded that M supergiants were systematically cooler than M giants of the same spectral type.

Di Benedetto (1993; hereafter DiB (93)) tabulated angular diameters of 21 stars, primarily taken from the work of DiBR and Di Benedetto & Ferluga (1990). These angular diameters were also measured at 2.2 $`\mu `$m using Michelson interferometry, but the limb-darkening corrections were derived from the models of Manduca (1979); they used 1.026 $`\le `$ $`\varphi _{\mathrm{LD}}`$/$`\varphi _{\mathrm{UD}}`$ $`\le `$ 1.036, with an average of 1.035 for stars later than spectral type K5. DBBR noted that their uniform-disk angular diameters agreed well with those of DiBR for $`\varphi _{\mathrm{UD}}`$ $`\lesssim `$ 10 mas but were systematically smaller (by about 10% on average) than the measurements of DiBR for larger stars.

M (97) has presented uniform-disk angular diameters measured with the Mark III Interferometer at 8000 Å. To convert these measurements to limb-darkened diameters, we used the giant-star data presented in Table 3 of Mozurkewich et al. (1991) to derive the relation $`\varphi _{\mathrm{LD}}`$/$`\varphi _{\mathrm{UD}}`$ = 1.078 + 0.002139 SP, where SP is the M spectral class of the star (e.g., SP = 0 for an M0 star, SP = −1 for a K5 star). Paper I gives further details of the derivation of this relation.

Since BG (89) were required to estimate apparent bolometric fluxes to compute effective temperatures using the infrared-flux method, they were also able to predict angular diameters of the stars they examined. Ideally, we would like to compare their predicted diameters to those measured by the other groups. Unfortunately, the samples of DBBR and DiB (93) have very little overlap with BG (89): two and four stars, respectively. Thus, to examine the compatibility of the four sets of angular diameters, we compare the BG (89), DBBR and DiB (93) data to the measurements of M (97).

The upper panels of Figure 1 show direct comparisons of these angular diameters, and the corresponding bottom panels illustrate the differences between the diameters plotted in the upper panels. The error bars shown for the BG (89) angular diameters have been calculated using 1.316 $`\times `$ 10<sup>7</sup> as the constant of proportionality in equation 1 (DBBR) and assuming a 4% uncertainty in f<sub>bol</sub> and an uncertainty of 150 K in T<sub>eff</sub> (see BG (89)). The other error bars have been taken directly from the respective references.
In each of the panels of Figure 1, the solid line represents equality of the two sets of measurements compared there; the dotted line shows the angular diameter above which DBBR noted that their diameters differed systematically from those of DiBR. Squares, triangles and circles have been used to represent subgiants, giants and bright giants, and supergiants, respectively; filled symbols show M stars, and open symbols are G and K stars. A quantitative comparison of the $`\varphi _{\mathrm{LD}}`$ measurements is given in the accompanying table.

Several conclusions can be drawn from Figure 1 and this table. First, the M (97) and BG (89) angular diameters are remarkably similar, a point already emphasized in Paper I. Second, while all three sets of measurements appear to be consistent for $`\varphi _{\mathrm{LD}}`$ $`\lesssim `$ 10.22 mas ($`\varphi _{\mathrm{UD}}`$ $`\lesssim `$ 10 mas), the M (97) angular diameters are systematically greater than the others for stars larger than this. Third, the differences between the four sets of angular diameters are dominated by the M star measurements, probably because the stars with $`\varphi _{\mathrm{UD}}`$ $`\gtrsim `$ 10 mas tend to be M stars; in fact, the $`\varphi _{\mathrm{LD}}`$ values for the G and K stars do not show systematic differences. From the comparisons displayed in Figure 1 and summarized in the table, we conclude that the M (97), BG (89), DBBR and DiB (93) angular diameters are in sufficient agreement for G and K stars to infer that effective temperatures based upon any of the other three groups’ measurements would be consistent with the BG (89) temperature scale for these stars, although there is certainly less scatter in the comparison with M (97). However, the same conclusion cannot be drawn for the M stars, and we must evaluate the resulting STT relations individually to determine which is the most suitable for producing accurate synthetic spectra of M giants.

### 3.2 Comparing Effective Temperature Scales

DBBR and DiBR have derived very similar STT relations, even though some systematic differences exist in their angular diameter measurements for K and M giants, evidently because systematic differences in their estimates of f<sub>bol</sub> largely offset the $`\varphi _{\mathrm{LD}}`$ differences. To derive analogous STT relations based upon M (97)’s angular diameters, we have simply adopted the bolometric fluxes of DBBR and DiBR to calculate “M (97)” effective temperatures for the stars that each group had in common with M (97).

The effects of using the various $`\varphi _{\mathrm{LD}}`$ and f<sub>bol</sub> measurements when calculating effective temperatures are shown in the four panels of Figure 2. The upper section of this figure shows the data reported by DBBR and DiBR in the left-hand and right-hand panels, respectively; the lower panels show the effective temperatures which result when the angular diameters of M (97) are substituted for those used in the corresponding upper panels for the stars in common. As in Figure 1, triangles represent giants and circles are supergiants, but the filled symbols here are stars having $`\varphi _{\mathrm{UD}}`$ $`>`$ 10 mas, while the open symbols are those with smaller angular diameters.
The temperature errors shown in the upper panels of the figure are those quoted by DBBR and DiBR; those in the lower panels have been derived from equation 1, using 1.316 $`\times `$ 10<sup>7</sup> as the constant of proportionality (DBBR) and adopting the $`\varphi _{\mathrm{UD}}`$ uncertainties of M (97) and the flux uncertainties of either DiBR or DBBR, as appropriate. The dotted line in each panel of Figure 2 is the STT relation quoted by Ridgway et al. (1980) and is based upon angular diameters measured from lunar occultations (not shown). The solid line is DBBR’s relation, which incorporates the Ridgway et al. (1980) measurements in addition to those plotted in the upper, left-hand panel of Figure 2. The dashed line is the STT relation of DiBR. The bold lines in the lower panels of Figure 2 are linear, least-squares fits to the data plotted in each.

From Figure 2, it is clear that the STT relations of DBBR and DiBR are very similar; the greatest difference occurs near spectral type M0. It is also true that the STT relations derived from the M (97) angular diameters (the bold, solid lines in the lower panels of Figure 2) are nearly identical, regardless of whether the DBBR or DiBR bolometric fluxes are used; in fact, the resulting relations differ by less than 30 K at all spectral types from K0 to M7. Thus, for the remainder of this paper, we will no longer discuss the STT relation of DiBR or the relation derived from the DiBR bolometric fluxes using the M (97) angular diameters; we will instead concentrate upon the DBBR data because it spans a broader range of spectral types.

As expected from the comparison of the M (97) and DBBR angular diameter measurements, the STT relations which result from these two data sets agree nicely for the K giants but differ systematically in the M giant regime. To quantify this, we tabulate these two T<sub>eff</sub> scales and our estimates for the surface gravities of field M giants in the accompanying table; the calculation of these log g values is described below. We will hereafter refer to these two STT relations as the DBBR and M (97) effective temperature scales.

Note that all of the spectral types plotted in Figure 2 are those adopted by DBBR and DiB (93) (which are identical to DiBR’s) and are presumably MK types, since they were generally taken from the work of Keenan and collaborators. We have confirmed this for the 29 stars observed by DBBR for which spectral types have also been measured using the 8-color photometric system of Wing (1971); Wing’s method (see Section 4.2.1) provides a quantitative way to measure spectral types of M giants on the MK system. The average difference in spectral types, in the sense Wing − DBBR, is 0.29 ($`\pm `$0.42) subtypes. If we use Wing’s spectral types, when available, to revise the M (97) STT relation in the bottom, left-hand panel of Figure 2, it only differs from that adopted in the table by +7 K at spectral type K1, −11 K at type M0 and −35 K at type M7. Thus, M (97)’s angular diameter measurements truly imply a different relation between T<sub>eff</sub> and MK spectral type than that derived by DBBR. Since it is not clear which temperature scale is the best to use for constructing synthetic spectra of M giants, we have experimented with each and discuss the results in the following sections.
### 3.3 Testing the Effective Temperature Scales

To determine which of these two STT relations is best suited for M giant models, we have calculated model atmospheres and synthetic spectra (omitting spectral lines of TiO) for M0 through M7 giants on each T<sub>eff</sub> scale given in the accompanying table. We assigned surface gravities to these models by consulting the solar-metallicity isochrones and evolutionary tracks used in our evolutionary synthesis program (Houdashelt et al. (2001)). Specifically, we took log g values at 100 K intervals between 3200 K and 4000 K from our 4 Gyr isochrone (3500–4000 K), 8 Gyr isochrone (3400 K), 16 Gyr isochrone (3300 K) and 0.7 M<sub>☉</sub> evolutionary track (3200 K) and fit a quadratic relation to these points; this relation was used to derive the surface gravities listed in the table.

We compare our synthetic spectra to the “intrinsic” MK spectra of field M giants presented by FPTWWS in Figures 3 and 4. In these figures, the synthetic spectra are shown as solid lines and the FPTWWS spectra as dotted lines. The left-hand panels of the figures show the synthetic spectra constructed using the DBBR STT relation, while the right-hand panels show the analogous results when M (97)’s T<sub>eff</sub> scale is adopted.

The major uncertainty in evaluating the synthetic spectra through comparisons such as those in Figures 3 and 4 is the reliability of the MK spectral types of the “intrinsic” spectra of FPTWWS. These authors obtained spectra of field M giants and assigned each a spectral type on the Case system using the criteria of Nassau & Velghe (1964). By scaling and averaging the spectra of all M giants within one-half subtype of an integral spectral type, FPTWWS derived what they called an “intrinsic” spectral sequence for M0–M10 giants on the Case system. To derive the analogous MK sequence, they assigned each Case “intrinsic” spectrum an MK type and then interpolated (and extrapolated) to get “intrinsic” spectra for M0–M10 giants on the MK system. The transformation between Case spectral types and MK types was derived from the relation tabulated by Blanco (1964) between Mt. Wilson and Case spectral types, assuming that Mt. Wilson types and MK types are identical (FitzGerald (1969); Mikami (1978)). However, the latter assumption is questionable at early-M types (Wing (1979)). In addition, Blanco’s transformation equates an M0 giant on the Case system to an M1.4 giant on the MK system, so extrapolation of FPTWWS’s observational data was required to produce their “intrinsic” MK spectra of M0 and M1 giants. For these reasons, we proceed cautiously when comparing our synthetic spectra to FPTWWS’s spectral sequence, especially at early-M spectral types, but we have nevertheless found their data to be extremely useful in guiding us toward improving our synthetic spectra of M giants and determining the effective temperatures of these stars. Unless otherwise specified, all further references to FPTWWS’s “intrinsic” spectra can be assumed to mean those on the MK system.

As discussed in Section 2.2, the continuum-forming regions of the stellar atmosphere are deep enough to be unaffected by sphericity, and we therefore expect the synthetic spectrum which has the same effective temperature as a star of a given spectral type to match the “continuum” (i.e., inter-TiO) regions of the observed spectrum of that star.
We also expect the main differences between the empirical spectra and the synthetic spectra to be due to missing spectral lines in the synthetic spectra. This then implies that the synthetic spectrum of the correct T<sub>eff</sub> should not have a lower flux than the corresponding “intrinsic” spectrum over any extended wavelength regime. At spectral type M0, where the M (97) and DBBR temperature scales only differ by 25 K, a 3880 K synthetic spectrum indeed proves to be a reasonable fit to these “continuum” regions in the FPTWWS spectrum of an average M0 giant. In fact, for spectral types M0–M3, the “continuum” region extending from 7300 to 7600 Å in the FPTWWS spectra is well-matched by the synthetic spectra calculated from the DBBR STT relation; for later types, this region is depressed in the FPTWWS spectra, with respect to the respective DBBR synthetic spectra, probably due to the appearance of TiO and/or VO absorption in the field stars. Figure 3 also suggests that the DBBR effective temperature for spectral type M1 is perhaps a bit too hot, but this could be due to errors in the aforementioned extrapolation of the FPTWWS spectra as well.

Alternatively, while the synthetic spectrum expected to represent a given spectral type on the M (97) scale often produces a good fit to the inter-TiO regions blueward of 7000 Å, the poorer agreement at redder wavelengths makes these fits less satisfactory than the DBBR results for two reasons. First, TiO absorption is expected to affect the flux in the bluer regions of the spectrum for spectral types as early as K5, so there may not be any actual continuum points at visual wavelengths. Second, even though the M (97) synthetic spectra fit the bluer pseudo-continuum, the fact that they are cooler than the corresponding DBBR models means that they have a flux deficit over much of the spectrum for $`\lambda >`$ 7000 Å. For these reasons, we conclude that the M (97) temperature scale does not describe field M giants.

Why doesn’t the M (97) STT relation produce models which agree with the field M giant observations, when the agreement between the M (97) and BG (89) angular diameters for hotter stars is so remarkable? One could conclude that something is wrong with the M (97) angular diameter measurements, implying that the BG (89) temperatures are also incorrect. However, we propose instead that the temperature errors are not caused by faulty $`\varphi _{\mathrm{UD}}`$ measurements but are due to incorrect estimates of the limb-darkening correction. Note that Mozurkewich et al. (1991) and Mozurkewich (1997) measure angular diameters at 8000 Å, where the emergent flux of M stars is affected by TiO absorption (see Figures 3 and 4). However, TiO was not included in the models of Manduca (1979) which Mozurkewich et al. (1991) used to calculate limb-darkening corrections. Since TiO forms much higher in the stellar atmosphere than the layers in which the continuum is produced, the radiation at 8000 Å comes from a part of the atmosphere much closer to the surface of the star than that seen at continuum wavelengths. This means that a smaller limb-darkening correction is called for at 8000 Å in M giants than is appropriate at continuum wavelengths. Consequently, the limb-darkened angular diameters of the Mozurkewich et al. (1991) M giants, which we subsequently used to derive the corrections for the M (97) data, are too large and result in effective temperatures for these stars which are too cool (see equation 1).
However, for hotter stars in which the TiO absorption at 8000 Å is negligible, the limb-darkening corrections are correct. This explains the good agreement between the M (97) and BG (89) angular diameters, since BG (89) observed only G and K giants.

Assuming that the STT relation of DBBR given in the accompanying table is accurate for solar-metallicity M giants, we can force the M (97) temperature scale to match DBBR’s by altering the limb-darkening corrections which we applied to M (97)’s data. In this way, the limb-darkening corrections which should be applied to uniform-disk angular diameters measured at 8000 Å can be estimated. For K giants, the equation derived previously, $`\varphi _{\mathrm{LD}}`$/$`\varphi _{\mathrm{UD}}`$ = 1.078 + 0.002139 SP, is appropriate; for M0–M4 giants, we suggest $`\varphi _{\mathrm{LD}}`$/$`\varphi _{\mathrm{UD}}`$ = 1.059 − 0.03317 SP; and for giants later than type M5, a constant value, $`\varphi _{\mathrm{LD}}`$/$`\varphi _{\mathrm{UD}}`$ = 0.906, can be used.

## 4 The M Giant Models

Based upon the comparison discussed in the previous section, we have chosen to use the STT relation of DBBR to model M giants. The second column of the accompanying table gives the effective temperature and surface gravity (in parentheses) which we assign to K and M giants of a given spectral subtype.

The next step in constructing representative M giant synthetic spectra is to include TiO in the calculations, adjusting the TiO band strengths as necessary to try to match the spectra of field M giants. As mentioned previously, we have used the “intrinsic” MK spectra of M giants published by FPTWWS to evaluate our synthetic spectra. In this process, we have concentrated upon the models of M2–M5 giants. We discount the stars cooler than this because they are probably variable, and their spectra are significantly affected by absorption bands of molecules not included in our ODFs and/or spectral line lists, notably VO and H<sub>2</sub>O. At spectral types M0 and M1, the FPTWWS spectra are more uncertain than those of later types because they are extrapolations of the observational data.

Our synthetic spectra of M2–M5 giants, calculated with the TiO line list and original TiO molecular data described in Section 2.1, are compared to the FPTWWS spectra in Figure 5, where the synthetic spectra are again represented by solid lines and the observational data by dotted lines. In this figure, we see that the strength of the $`\gamma `$-system TiO band near 7100 Å matches the observed depth quite well, but the agreement for most of the other TiO bands is much less satisfactory. This has led us to adjust the TiO data used in the calculations to attempt to produce better agreement with the observational data. These adjustments and the resulting spectra are discussed in the following section.

### 4.1 Treatment of TiO

When performing spectral synthesis, especially for abundance analyses, it is common to determine “astrophysical” oscillator strengths (i.e., gf values) for individual spectral lines. For example, if a line of an element of known abundance is stronger or weaker in the synthetic solar spectrum than it is in the observed spectrum of the Sun, the gf value for that line is often adjusted until the appropriate line strength is achieved. We have chosen to use a similar approach to model the TiO bands in our synthetic spectra of M giants.
The band absorption oscillator strength for the 0–0 transition of each system of TiO (f<sub>00</sub> in the notation of Larsson (1983)) is less well-known than the Franck-Condon factors and the Hönl-London factors of the various TiO lines. Consequently, we have revised the f<sub>00</sub> values of the TiO systems to reproduce the TiO band strengths seen in the FPTWWS spectra at a given effective temperature. For the redder bands of TiO, which lie at least partially outside the wavelength regime covered by the FPTWWS spectra, we have also used the spectra of Terndrup et al. (1990; hereafter TFW) and Terndrup et al. (1991) as guides in adjusting the TiO band strengths.

Because many of the absorption bands seen in the spectra of M giants are made up of overlapping systems of TiO, a set of synthetic spectra was first calculated, each containing spectral lines from only one system of TiO. By isolating spectral features (or parts of features) which were dominated by bands from a single system of TiO, we were able to unambiguously adjust the f<sub>00</sub> values of the individual systems. Simply for reference, the accompanying table compares the f<sub>00</sub> values which we eventually adopted to some others found in the literature.

The resulting synthetic spectra of M giants are shown in Figure 6, where we compare our final spectra to the FPTWWS spectra. The agreement here is a significant improvement over that seen in Figure 5, especially for $`\lambda `$ $`<`$ 7000 Å. However, some discrepancies remain and merit further discussion.

There are large differences between the FPTWWS spectra and our synthetic spectra in the wavelength region of 7600–8500 Å, especially for the earliest M types. We suspect that this discrepancy is due to an error in the FPTWWS data, perhaps from flux-calibration errors or miscorrections for telluric absorption, since the synthetic spectra calculated by FPTWWS showed similar systematic differences from their “intrinsic” field-star spectra. In the upper panel of Figure 7, we support this proposition by comparing our synthetic spectrum of an M2 giant (solid line) to FPTWWS’s analogous spectrum (dotted line), the spectrum of the M2 III star, HD 100783 (dashed line), observed by TFW, and the spectrum of HR 4517 (points), a field M1 giant observed by Kiehling (1987). The lower panel shows the telluric corrections applied to the observational data by FPTWWS (dotted line) and by Houdashelt (1995) to the TFW spectrum (solid line); the z band of telluric H<sub>2</sub>O centered near 8200 Å clearly influences the region of interest. It is also clear from this figure that the general shape of the TiO absorption in this region of the synthetic spectrum is quite similar to that seen in HD 100783 and HR 4517. At other wavelengths, the spectra of FPTWWS, TFW and Kiehling are in much better accord. Note also that the disagreement between the synthetic spectra and the FPTWWS spectra near 8000 Å in Figure 6 decreases for later spectral types, possibly because the TiO absorption begins to dominate the telluric contamination.

Nevertheless, Figure 6 shows that there are other spectral regions, mostly located between the synthetic TiO bands, in which the calculations are evidently missing some source of opacity, since the synthetic spectra are brighter than the field star spectra there; these occur near 5400, 6500, 7000 and 7500 Å.
To explore the possibility that some of the missing absorption could be supplied by higher-order rotational-vibrational lines of TiO than those included in our line list, we computed synthetic spectra using the TiO line lists of Jørgensen (1994); these include all lines up to J = 199 with $`\upsilon ^{\prime }`$ and $`\upsilon ^{\prime \prime }`$ values between 0 and 10. However, the use of Jørgensen’s line list did not produce a noticeable difference in the depths and morphology of the TiO bands in the optical region of the spectrum. The $`\alpha `$ and $`\beta `$ systems of the two line lists were indistinguishable in the synthetic spectra, and only minor differences were apparent for the $`\gamma `$ and $`\gamma ^{\prime }`$ systems; the reddest bands of the latter two systems had bandheads which were sharper and bluer but fit the observed spectra less well when Jørgensen’s data were used. For the $`\delta `$ system, on the other hand, the TiO bands computed from Jørgensen’s line list had a morphology more similar to that seen in the observational data, being very wedge-shaped, as opposed to the more U-shaped bands produced by our line list. While similar differences were apparent in the shapes of the synthetic $`\varphi `$-system bands, we could not unambiguously detect these bands in any of the empirical spectra, so no determination could be made regarding which line list was more appropriate to use for this system. Finally, the bandheads of Jørgensen’s $`ϵ`$ system fall about 150 Å bluer than predicted by the corresponding TiO line list of Plez (1996), which produces a good match to the observational data. Thus, the discrepancies between our synthetic spectra and the FPTWWS field star spectra do not appear to be due to missing high-order lines of the systems of TiO included in our spectral line list.

Plez (1998) has kindly provided us with plots of the a–f system of TiO and the spectrum of VO absorption in a 3300 K model. It appears that most of the (inter-TiO region) differences between our synthetic spectra and the field star spectra, as well as some of those removed by altering the f<sub>00</sub> values of the other TiO systems, could be rectified by inclusion of these two absorption systems in the SSG spectral line list. However, the missing opacity near 6500 Å does not appear to be due to either TiO or VO, and until we can test these possibilities further, we remain uncertain of the source of the missing opacity shortward of 7000 Å in our synthetic spectra. Keeping this caveat in mind, we will proceed to examine our models further through additional qualitative and quantitative comparisons of spectral-type estimates, equivalent width measurements and broad-band colors of the synthetic spectra and field M giants.

### 4.2 Molecular Bands in the Synthetic Spectra

The main molecular species which influence the spectra of all M giants are TiO and CO. Other molecules, such as CN, VO and H<sub>2</sub>O, are also present in the atmospheres of these stars, but their effects are important in a more limited subset of the M giants. CN is most prevalent in the earliest M types but even then is often contaminated by overlapping spectral features due to other molecules. Absorption bands of VO and H<sub>2</sub>O are seen only in spectral types M5 and later. In the following, we discuss the CO and TiO band strengths in our synthetic spectrum calculations and compare the results to observed trends and to empirical spectra of field M stars.
#### 4.2.1 TiO and Spectral Classification

As mentioned previously, the spectral types of M stars on the MK system are determined from the strengths of the TiO bands. For a given star, spectral classification involves comparing the observed spectrum of the star to similar spectra of standard stars which define the MK types, a somewhat qualitative method for determining spectral classes which is not unlike the comparisons we have made in Figure 6. While this might lead us to conclude that we have achieved our goal of reproducing the field-giant relation between spectral type and effective temperature, it would be reassuring to be able to verify this through something more robust than a fit-by-eye. Thus, we have also estimated spectral types from our synthetic spectra using three quantitative measures of the TiO band strengths.

Wing (1971) has designed an 8-color photometric system for determining spectral types of late-K and M giants; we illustrate the filter passbands of this system in the upper panel of Figure 8 along with our synthetic spectrum of an M3 giant. Wing’s system uses the bluest of his filters (filter 71 in Figure 8) to measure the depth of the band of the $`\gamma `$ system of TiO near 7100 Å and estimate a spectral type for the star, after correcting for the overlying CN absorption. Since this method relies on a single TiO band, it is obviously applicable only to stars for which this specific band is detectable and is not saturated; this turns out to be spectral types K4 through M6. We have measured synthetic colors on Wing’s system and calculated spectral types for our synthetic spectra using the methodology described by MacConnell et al. (1992). However, before determining these spectral types, we had to calibrate the synthetic Wing colors to put them onto the observational system. This was done in a manner similar to that used to calibrate the near-infrared, broad-band colors presented in Paper I. First, the zero-point corrections to be applied to the synthetic Wing magnitudes were determined from the differences between the observed magnitudes of Vega (MacConnell et al. 1992) and those measured from our synthetic spectrum of Vega (Paper I). After applying these zero-point corrections to the synthetic Wing magnitudes of 35 of the field stars modelled in Paper I, photometry of these stars was used to derive linear relations between the photometric and synthetic colors. These calibration relations were then applied to the synthetic Wing colors of the M giant models before determining their spectral types.

TFW defined a number of spectral indices (pseudo-equivalent widths) which measure the strengths of TiO bands between 6000 and 8500 Å. For two of these indices, S(7890) and I(8460), Houdashelt (1995) presented relationships between the index and spectral type for field stars of spectral type M1 and later. The spectral regions used to define these two indices are shown alongside our synthetic spectrum of an M3 giant in the bottom panel of Figure 8. The S(7890) index measures the strength of an absorption trough due primarily to a $`\gamma `$-system band of TiO with respect to a single “continuum” sideband. The I(8460) index measures the strength of a bandhead of the $`ϵ`$ system of TiO with respect to a pseudo-continuum level interpolated from two adjacent spectral regions, the bluer of which is affected by TiO absorption due to bands of both the $`\gamma `$ and $`\delta `$ systems.
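As an illustration of how indices of this general form are measured (the wavelength windows below are placeholders, not TFW's actual S(7890) or I(8460) definitions), one can ratio the mean flux in the absorption feature to a pseudo-continuum interpolated from flanking sidebands:

```python
import numpy as np

def band_index(wave, flux, feature, blue_side, red_side):
    """Generic TiO band-strength index: mean flux in `feature` divided by a
    pseudo-continuum interpolated linearly between two sidebands.
    All windows are (min, max) wavelength pairs in Angstroms."""
    def mean_in(window):
        lo, hi = window
        sel = (wave >= lo) & (wave <= hi)
        return wave[sel].mean(), flux[sel].mean()

    wb, fb = mean_in(blue_side)
    wr, fr = mean_in(red_side)
    wf, ff = mean_in(feature)
    continuum = fb + (fr - fb) * (wf - wb) / (wr - wb)  # linear interpolation
    return ff / continuum  # < 1 in absorption; a deeper band gives a smaller value

# Placeholder windows only; the published index definitions differ in detail:
# strength = band_index(wave, flux, (7860, 7920), (7540, 7580), (8130, 8170))
```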
We have measured these indices from the M-giant synthetic spectra shown in Figure 6 and used Houdashelt’s relations to estimate their spectral types. In the upper panel of Figure 9, we compare the spectral types measured from our synthetic spectra using Wing’s photometric system to those implied by the effective temperatures of the models per the DBBR STT relation. The agreement is excellent, with the difference between the derived and expected spectral types being larger than 0.3 subtypes only for spectral types M0, M1 and M5. Since the agreement is good at spectral type K5, we suspect that the larger discrepancies for the M0 and M1 stars occur because the effective temperatures of the models are slightly too warm at these spectral types. In effect, this simply means that DBBR ’s temperature estimate for an M1 giant is a bit too hot. Since they did not list T<sub>eff</sub> for spectral type M0, such a problem would affect the temperature at this spectral type as well because we have taken a simple average of their K5 and M1 temperatures to get T<sub>eff</sub> for type M0. Figure 3, which was consulted to decide which effective temperature scale to adopt, supports the notion that the T<sub>eff</sub> of a field M1 giant is hotter than that which DBBR estimated but implies that the M0 temperature cannot be too far off, again with the caveat that the FPTWWS “intrinsic” spectra for spectral types M0 and M1 are extrapolations of their data and are therefore somewhat uncertain. At spectral type M5, the difference between the Wing spectral type and that inferred from the model’s T<sub>eff</sub> may well be due to some of the previously-discussed model uncertainties. Judging from Figure 6, it is apparent that the “continuum” bands in our synthetic spectra, especially the region near 7500 Å which is integral in determining Wing spectral types, are brighter than observed in the FPTWWS spectra. Since the 75 and 78 filters of Wing’s system (see Figure 8) are definitely affected by VO absorption in late-M giants, it is likely that the Wing spectral types of the models cooler than about 3500 K are later than those observed in the corresponding field giants mainly because we do not include lines of VO in our spectral synthesis. In the middle and lower panels of Figure 9, we compare the spectral types derived from the S(7890) and I(8460) indices to the types assigned from the effective temperatures of the models. These diagrams can be broken into two parts: early-M types (M1–M4) and late-M types (M5–M7). For the former group, the $`\gamma `$-system band of TiO with bandhead near 7600 Å, which dominates S(7890), is probably a little too weak in our models (or there is insufficient flux in the single continuum sideband), while the $`ϵ`$-system bandhead measured by the I(8460) index appears about right. For the later types, the S(7890) index gives spectral types agreeing with the effective temperatures, but the I(8460) index appears to be too strong, yielding spectral types which are too late. Unfortunately, since both the S(7890) and I(8460) indices lie in the region of the spectrum where we suspect FPTWWS ’s spectra to be in error (see Figure 7), the information which can be gleaned about these indices through comparisons of the synthetic spectra and the observational data is limited. Nevertheless, we will briefly discuss possible implications of these TiO index measurements in the synthetic spectra.
It appears that the $`\gamma `$-system band of TiO which produces most of the absorption measured by the S(7890) index may truly be too weak in our synthetic spectra. Because VO absorption would apparently have a greater effect on the pseudo-continuum region used to measure S(7890) than on the spectral region of the index itself (Plez 1998), this could be true even at later spectral types; adding spectral lines of VO to the synthetic spectra would probably weaken the S(7890) indices for models having T<sub>eff</sub> $`\lesssim `$ 3500 K. On the other hand, making this band stronger by increasing the f<sub>00</sub> value of the $`\gamma `$ system would throw off the good agreement between the synthetic spectra and FPTWWS ’s “intrinsic” spectra in other spectral regions; for example, the absorption band with bandhead near 7100 Å is also dominated by a $`\gamma `$-system band, as is the redder half of the band between 6500 and 7000 Å. Thus, given the uncertainties in the calibration between S(7890) and spectral type and the overall good agreement between the synthetic spectra and the empirical data, we have chosen not to “tweak” this band to produce S(7890) spectral types which agree better with those inferred by the model temperatures. Figure 6 shows that the TiO absorption affecting the I(8460) index does appear to grow more quickly with decreasing T<sub>eff</sub> in the synthetic spectra than in the corresponding FPTWWS spectra. However, the spectral region in which the I(8460) index is measured is highly composite – the sharp bandhead is due to a band of the $`ϵ`$ system of TiO, but there are overlying TiO bands from the $`\delta `$ and $`\gamma `$ systems as well. We again believe that adding spectral lines of VO to the synthetic spectrum calculations would largely resolve the differences between the I(8460) spectral types and the temperature-inferred types of the M5–M7 giants. Even with the above considerations, the differences between the spectral types estimated from the TiO bands of the synthetic spectra and those indicated by their effective temperatures are small, always less than 1.2 spectral subtypes. Thus, we conclude that the strengths of the TiO bands in our synthetic spectra are in sufficient agreement with those expected from their effective temperatures that we can be confident that the synthetic spectra which we calculate will provide a good representation of M giants in our evolutionary synthesis models. #### 4.2.2 CO and Spectral Classification One of the strongest absorption bands in cool giants is the first-overtone <sup>12</sup>CO(2,0) band with bandhead near 2.3 $`\mu `$m. In fact, Baldwin et al. (1973) designed an intermediate-band filter system specifically to measure the strength of this band; these filters were later refined by Cohen et al. (1978) into the set commonly used today. Baldwin et al. showed that their CO index is a good luminosity indicator in cool stars, being stronger in giants than in dwarfs of similar color; they found that the index varies with T<sub>eff</sub> in giants as well, becoming stronger as T<sub>eff</sub> decreases. Bell & Tripicco (1991) have attributed the gravity behavior of this feature to the lower continuous opacity in the atmospheres of cool giants with respect to those of cool dwarfs of similar effective temperature; this effect more than compensates for the lower abundance of molecules in the giant’s atmosphere. In a series of papers, the CO index was studied in field dwarfs (Persson et al.
(1977)), Galactic globular and open cluster members (Cohen et al. 1978; Frogel et al. (1979); Persson et al. (1979); Cohen et al. (1980); Frogel et al. (1981); Frogel et al. (1983)), Magellanic Cloud clusters (Persson et al. (1983)) and globular clusters in M31 (Frogel et al. (1980)). Frogel et al. (1978; hereafter FPAM ) have characterized the CO indices of field dwarfs and giants as a function of color, and Frogel et al. (1975), FPAM , and Persson et al. (1980) used the CO indices of early-type galaxies and the nuclear region of M31 to infer that the (infrared) integrated light of these objects must be giant-dominated. Because this CO band is so gravity-sensitive, it is an important stellar population diagnostic in integrated light studies, and CO is a potentially informative spectral feature when applying our evolutionary synthesis models to interpret observational data. We expect to be able to model the CO bands effectively in M giants, since CO forms deep enough in the stellar atmosphere to be relatively unaffected by extension and sphericity. In fact, Bell & Briley (1991) have shown that the behavior of the 2.3 $`\mu `$m CO band with gravity and metallicity can be modelled quite reliably in G and K stars with MARCS/SSG synthetic spectra, but we wish to verify that this condition still holds for the current models, given the improvements made to the ODFs and CO spectral line data (see Section 2.1 and Paper I ) since their work. In Figures 10 and 11, we compare our synthetic spectra of the appropriate spectral type to each of the spectra of field M giants observed by Kleinmann & Hall (1986; hereafter KH (86)). The agreement between the synthetic spectra and the KH (86) spectra is quite good throughout the entire temperature range of M giants, even at spectral type M7, where the field stars of KH (86) are variable. The <sup>13</sup>CO bandheads at 2.345 and 2.365 $`\mu `$m can be seen in Figures 10 and 11 as well and indicate that the <sup>12</sup>CO/<sup>13</sup>CO ratio varies among the KH (86) M giants. More recently, Ramírez et al. (1997; hereafter RDFSB ) have developed a scheme by which the equivalent width of the 2.3 $`\mu `$m CO band, EW(CO), can be used to find effective temperatures for K and M giants. They obtained spectra of field stars at low and intermediate resolutions (R=1380 and R=4830) and measured EW(CO) for these stars, using slightly different continuum band definitions at each resolution. After converting spectral type to effective temperature, RDFSB found a linear relationship between EW(CO) and T<sub>eff</sub>. Because there did not appear to be any systematic differences between their R=1380 and R=4830 measurements, they combined all of their data to derive this relation. We have convolved and rebinned our synthetic spectra of K and M giants to match each resolution and dispersion of RDFSB and have measured EW(CO) using their continuum and CO band definitions. We have also measured EW(CO) from the KH (86) spectra using both the R=4830 and R=1380 continuum definitions. In Figure 12, we compare our measurements of EW(CO) as a function of spectral type to those of RDFSB . In this diagram, we have taken the spectral types of the RDFSB stars from SIMBAD, so they sometimes differ slightly from those adopted by RDFSB ; for our synthetic spectra, we assume the spectral types indicated from the STT relation of DBBR . 
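The convolution and equivalent-width measurement just described can be sketched as follows. This is a simplified illustration only: the smoothing assumes a nearly uniform wavelength grid, and the band and continuum windows are placeholders rather than RDFSB ’s actual definitions, which differ between their two resolutions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_to_R(wave, flux, R):
    """Degrade a spectrum to resolving power R = lambda/(Delta lambda);
    the Gaussian sigma (in pixels) is set from the FWHM of the target
    resolution element at the central wavelength."""
    fwhm = wave.mean() / R
    dlam = np.median(np.diff(wave))
    return gaussian_filter1d(flux, fwhm / (2.3548 * dlam))

def equiv_width(wave, flux, band, cont):
    """EW of a band against the mean flux in a continuum window,
    in the wavelength units of wave (uniform grid assumed)."""
    in_b = (wave >= band[0]) & (wave <= band[1])
    in_c = (wave >= cont[0]) & (wave <= cont[1])
    depth = 1.0 - flux[in_b] / flux[in_c].mean()
    return float(np.sum(depth) * (wave[1] - wave[0]))

# Toy K-band spectrum (wavelengths in microns), with CO-like absorption
# redward of the 2.2935-um bandhead.
wave = np.linspace(2.28, 2.31, 3000)
flux = np.where(wave > 2.2935, 0.8, 1.0)
flux_lo = smooth_to_R(wave, flux, R=1380)
print(equiv_width(wave, flux_lo, band=(2.2935, 2.2990), cont=(2.2860, 2.2920)))
```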
Since our EW(CO) measurements do show a resolution dependence, we plot the R=4830 results in the upper panels of Figure 12 and the R=1380 results in the lower panels. The left-hand side of this diagram shows the relations between EW(CO) and spectral type at each resolution; here, the data from RDFSB is shown as open circles, our KH (86) measurements are asterisks, the solid lines connect the EW(CO) values measured from our synthetic spectra, and the dotted lines are linear, least-squares fits to the RDFSB data. On the right-hand side of the figure, the spectral types of the synthetic spectra, derived from the dotted relations shown in the left-hand panels, are compared to the spectral types inferred by DBBR ’s STT relation. Some general conclusions can be drawn from Figure 12. As expected from the comparisons shown in Figures 10 and 11, the EW(CO) measurements of the synthetic spectra are in good agreement with those of the KH (86) spectra, with the CO absorption perhaps a bit stronger in the synthetic spectra than in the KH (86) spectra for the K giants. At R=4830, the synthetic spectrum CO widths and RDFSB ’s CO widths agree well for the M giants, with the CO again a bit stronger in the K-giant synthetic spectra than in the observational data. At R=1380, the situation is reversed from that observed at R=4830; the synthetic spectra and the RDFSB spectra produce very similar EW(CO) for the K giants, but in this case, the CO bands of the synthetic spectra of the M giants (and the KH (86) spectra) appear weaker than in the RDFSB spectra. Detailed comparisons of the RDFSB spectra and our synthetic spectra indicate that these discrepancies are not due to differences in the strengths of the CO bands but instead are caused by differences in the slope of the continuum just blueward of the <sup>12</sup>CO bandhead. Overall then, given the scatter in the EW(CO) measurements of RDFSB , and keeping in mind that a linear fit between EW(CO) and spectral type does not appear to apply over the entire range of K and M spectral types, we conclude that the spectral types estimated for the synthetic spectra from their CO equivalent widths are in agreement with those based upon their effective temperatures, especially at R=4830. Since K and M giants dominate the near-infrared light of most stellar populations, Figures 10, 11 and 12 also show that our treatment of the CO absorption in these cool stars will produce a realistic representation of the CO bands in our evolutionary synthesis models. #### 4.2.3 H<sub>2</sub>O SSG has the option of including a spectral line list for water in the spectrum synthesis calculations, but we have opted not to include H<sub>2</sub>O absorption in the models presented here. This omission is based upon synthetic spectra calculated using the water line lists of Brett (1991) and Schryber et al. (1995). Brett’s line list is derived from the laboratory data of Ludwig et al. (1973) using the method described by Plez et al. (1992), while the list of Schryber et al. (1995) is theoretical. We first calculated models including spectral lines of H<sub>2</sub>O using the line list of Schryber et al. (1995), but the water absorption seen in the resulting synthetic spectra differed substantially from expectations based upon observational data. The H<sub>2</sub>O lines depressed the flux much more evenly throughout the infrared than is observed and did not show the familiar strong bands which occur, for example, between the H and K atmospheric windows.
In addition, the agreement between the observed and calculated CO bands was substantially worsened in the coolest models which included these H<sub>2</sub>O lines. While the use of Brett’s data produced spectra in which the water vapor bands were more discrete, the bands which overlapped the CO absorption again were sufficiently strong to spoil the nice agreement between the synthetic and the empirical spectra seen in Figures 10 and 11. To date, we have not been able to determine whether the behavior of the H<sub>2</sub>O absorption in the models is due to problems with the stellar atmosphere models, which possibly predict an overabundance of water, or the spectral line lists, which may have oscillator strengths which are too large. It is possible that, to reproduce the spectrum of H<sub>2</sub>O seen in real stars, the water absorption in our synthetic spectra would require a treatment similar to that which we have used for TiO. Unfortunately, observational data from the Infrared Space Observatory, for example, is not yet available to allow us to verify the need for such an empirical calibration. Nevertheless, since water absorption is not detectable in the spectra of M giants until spectral type M5 or later (Bessell et al. 1989a), the evolutionary synthesis models which we construct should not be significantly affected by the omission of water lines in our synthetic spectra. ### 4.3 Broad-Band Colors and Bolometric Corrections of K0–M7 Field Giants We have measured broad-band colors from our spectral-type sequence of K and M giant synthetic spectra – Johnson U–V and B–V; Cousins V–R and V–I; Johnson-Glass V–K, J–K and H–K; and CIT/CTIO V–K, J–K, H–K and CO – using the filter transmission profiles described in Paper I . We have also computed CIT/CTIO K-band bolometric corrections (BCs) for these models, assuming M<sub>K,⊙</sub> = +3.31 and BC<sub>K,⊙</sub> = +1.41. We present these colors and BCs in Table Synthetic Spectra and Color-Temperature Relations of M Giants; all of the colors have been transformed to the observational systems using the color calibrations derived in Paper I . Because these color calibrations are all very linear, we feel comfortable extrapolating them into the regime of M giant synthetic colors, even though the sample of field stars used to determine the calibration relations did not include any stars cooler than spectral type K5. However, as in Paper I , we caution the reader that the U–V and H–K colors have greater uncertainties than the other colors; the U–V colors are sensitive to missing opacity in the ultraviolet region of the synthetic spectra, and the H–K color calibrations are not well-determined (see Paper I ). In Figures 13 and 14, we compare our color vs. T<sub>eff</sub> and color vs. spectral type relations to those observed for field M giants. We have extended these comparisons into the K giant regime to illustrate the general agreement between the models and field relations for hotter stars. In each of these figures, our models are represented by open circles, and the M giant photometry presented by FPTWWS is shown as small crosses. We have also measured colors directly from the “intrinsic” MK spectra of FPTWWS which we used to calibrate the TiO bands, and these colors are shown as filled triangles in Figure 13. The field relations appear as solid and dotted lines; their sources are described below and in the figure captions. The filled squares seen in the upper panels of Figure 14 will be described as those specific panels are discussed. 
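Before turning to these comparisons, it may help to make explicit how such synthetic magnitudes are formed. The sketch below (with a toy spectrum and crude box passbands, both assumptions for illustration) uses a photon-counting integrand; in practice the zero point of each band is fixed by a synthetic spectrum of Vega, and the resulting colors are then placed on the observational systems through the linear calibration relations of Paper I .

```python
import numpy as np

def synth_mag(wave, flux, fwave, ftrans, zp=0.0):
    """Synthetic magnitude of a spectrum f_lambda through a filter
    transmission curve, using a photon-counting integrand; zp is chosen
    so that a Vega spectrum reproduces Vega's observed magnitude.
    On a uniform grid the spacing cancels in the flux ratio."""
    T = np.interp(wave, fwave, ftrans, left=0.0, right=0.0)
    return -2.5 * np.log10(np.sum(wave * flux * T) / np.sum(wave * T)) + zp

# Toy smooth spectrum and crude box passbands (illustrative only).
wave = np.linspace(3000.0, 25000.0, 8000)
flux = wave ** -3.0
TV = (np.abs(wave - 5500.0) < 450.0).astype(float)    # "V"-like band
TK = (np.abs(wave - 22000.0) < 2000.0).astype(float)  # "K"-like band
v_minus_k = synth_mag(wave, flux, wave, TV) - synth_mag(wave, flux, wave, TK)
```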
Figure 13 shows the optical color comparisons, and the agreement between the models and the observational data is generally quite good. The color-temperature relations shown in the left-hand panels of Figure 13 have been taken from Gratton et al. (1996; dotted lines) and Bessell (1998; solid lines); the latter were derived from the data of Bessell et al. (1998). The field-star color, spectral type relations shown as solid lines in the right-hand panels come from Lee (1970) for B–V and from Thé et al. (1990) for V–R and V–I, after first converting the latter’s Case spectral types to MK types using the transformation given by FPTWWS ; the dotted relation in the V–I, spectral type panel is taken from Bessell & Brett (1988; hereafter BB (88)). The calibrated, optical, synthetic colors and the colors measured from the FPTWWS spectra agree for the early-M giants and then begin to diverge for the later types. The B–V colors of the FPTWWS spectra show a bit of random scatter, and the model B–V colors may be a bit too red ($`\sim `$0.04 mag) for early-M giants, but the synthetic colors fall well within the range of the FPTWWS photometry. However, if we assume that any differences between the magnitudes measured from the FPTWWS spectra and those measured from the models are due to “errors” in the model magnitudes, then a close inspection of the synthetic magnitudes shows that the B–V colors of the models are about right only because these “errors” in B and V largely offset one another. In the V–R and V–I vs. T<sub>eff</sub> panels of Figure 13, the model colors, the field relations and the colors measured from FPTWWS ’s spectra are essentially identical for spectral types earlier than type M4. At cooler temperatures, the models and the FPTWWS spectrum colors diverge mainly because missing opacity in the synthetic spectra makes their V-band magnitudes too bright. For the coolest stars, it is likely that variability, errors in the effective temperature determinations and possibly small number statistics make the field giant color-temperature relations and the FPTWWS “intrinsic” spectra less certain as well. The analogous V–R and V–I vs. spectral type comparisons are a little more confusing. First, the model colors and the FPTWWS spectrum colors again agree to about spectral type M4 in both V–R and V–I, so we are doing a good job of reproducing the FPTWWS spectra with our synthetic spectrum calculations. Second, FPTWWS ’s R-band photometry is evidently not on the Cousins system, since their V–R colors overlie neither the colors measured from their spectra nor the model colors. Finally, the field star relations do not appear to be well-determined – the Thé et al. (1990) and BB (88) field relations differ by $`\sim `$0.2 mag in V–I at a given spectral type, and the two relations approximately bracket both the models and the colors measured from the FPTWWS spectra. This disagreement merits some further discussion. BB (88) derived the field relation (dotted line) shown in the lower, right-hand panel of Figure 13 from a combination of V–I photometry taken from Cousins (1980) and spectral types taken from the Michigan Spectral Survey (Houk & Cowley (1975); Houk (1978); Houk (1982)). The relation of Thé et al. (1990), the solid line, comes from their own photometry and Case spectral types derived from their objective-prism spectra; we used FPTWWS ’s transformation from Case to MK spectral types to get the relation plotted, so it is perhaps a bit more uncertain than BB (88)’s relation.
Of course, if this transformation is incorrect, then the MK spectral types of the “intrinsic” spectra of FPTWWS are also in error, and our treatment of the TiO bands in the synthetic spectra is wrong as well. Still, let us suppose that the uncertainties in the transformation between Case and MK spectral types allow a shift to earlier spectral types of the Thé et al. (1990) relation, the models and the FPTWWS spectrum colors to make them agree with BB (88)’s field-giant relation. Even then, FPTWWS ’s photometry (crosses) would not be similarly affected. FPTWWS took the MK spectral types of these stars from the Bright Star Catalogue (Hoffleit & Jaschek (1982)), so to make these points lie along BB (88)’s relation requires systematic errors in FPTWWS ’s photometry; this is at least conceivable, given that their V–R colors appear to be systematically too blue in the middle, right-hand panel of Figure 13. On the other hand, we also question whether BB (88)’s field-giant relation between spectral type and V–I is correct. If we take the colors from this relation and plug them into the V–I, T<sub>eff</sub> relation of Bessell (1998), we get an STT relation which differs significantly from that of DBBR , which we have used as the basis for our modelling of M giants. Thus, assuming that BB (88)’s V–I, spectral type relation is correct forces us to conclude that either the color-temperature relation of Bessell (1998) is wrong or that the STT relation of DBBR is in error. Without some additional information, we cannot resolve these discrepancies between the V–I, spectral type relations of field giants given by BB (88) and Thé et al. (1990). Figure 14 shows the comparisons for the V–K and J–K colors. In this figure, the color, T<sub>eff</sub> relations of the field giants generally come from the same sources as the optical relations; the V–K, T<sub>eff</sub> relation was published by Bessell et al. (1998). The color, spectral type data for the field stars is taken from BB (88). The crosses again represent the photometry of FPTWWS ; their ESO colors have been transformed to the Johnson-Glass system using the color transformations given by BB (88). Since FPTWWS ’s spectra only extended to 9000 Å, near-infrared colors could not be measured from them. The color most often used to determine effective temperatures of cool stars is V–K, so it would be gratifying if our models predicted the same V–K, T<sub>eff</sub> relation as that observed in field M giants. This appears to hold true for spectral types earlier than about type M4, but the cooler models become progressively redder than the color-temperature relation of Bessell et al. (1998), reaching $`\sim `$0.6 mag redder at spectral type M7. However, the opposite holds true for the V–K, spectral type data. Here, the models are slightly bluer than the field relation (BB (88)) but overlap FPTWWS ’s transformed photometry. The filled squares shown in the upper two panels of Figure 14 show the V–K colors of the models which result when the synthetic V–band magnitudes are “corrected” for their differences with the respective V magnitudes measured from the FPTWWS spectra; this approximates the V–K colors we would expect to measure from the FPTWWS spectra if their wavelength coverage included the K band. This adjustment makes the model colors an excellent match to the field stars in the color, spectral type plane but obviously makes the fit to the M-giant color-temperature relation much worse.
However, we expect the synthetic V–K colors to be too red for spectral types M5 and later because we have neglected H<sub>2</sub>O absorption in our calculations; its inclusion would make the K magnitudes fainter but leave the V magnitudes unaffected. Since we encounter the same uncertainty here that we experienced in the V–I plots, namely that substituting the colors from BB (88)’s V–K, spectral type relation into Bessell et al. (1998)’s V–K, T<sub>eff</sub> relation gives an STT relation which differs from that of DBBR , it is not clear to us which field relation (color-temperature or color-type) is more reliable. In either case, the model vs. field-star color differences are relatively small for the early-to-mid-M giants, so we are satisfied that our models provide an adequate representation of the V–K colors of field M giants to be used for evolutionary synthesis. The J–K colors of the models match the field relations for the K giants but become slightly redder than the field relation of Gratton et al. (1996) at a given T<sub>eff</sub> for early-M types. The models, however, are in excellent agreement with the J–K vs. spectral type relation of BB (88) through spectral type M4. As mentioned in Section 3.1, we have not been able to calibrate the absorption bands of the $`\varphi `$-system of TiO, some of which fall in the J band, using empirical spectra. Therefore, we allowed the J–K colors of the models to assist us in choosing a final f<sub>00</sub> value for the $`\varphi `$ system, since the model J–K colors redden significantly as this parameter is increased. For the later M types, we expect that adding spectral lines of H<sub>2</sub>O to the synthetic spectrum calculations, while diminishing both the J-band and K-band fluxes, would resolve the remaining differences between the models and the field-giant relations. In Figure 15, we compare the bolometric corrections of our models of field K and M giants, as a function of spectral type, to empirical relations. Recall that the BCs of the models assume M<sub>V,⊙</sub> = +4.84, BC<sub>V,⊙</sub> = –0.12, M<sub>K,⊙</sub> = +3.31 and BC<sub>K,⊙</sub> = +1.41; this implies a (V–K)<sub>CIT</sub> color for the Sun of 1.53, which is $`\sim `$0.02 mag redder than the best observed values tabulated by Bessell et al. (1998). In the upper panel of this figure, the open circles are the (untabulated) V-band BCs of our models, the solid line is the field relation of Johnson (1966), and the M-giant relation of Lee (1970) is shown as a dotted line. The two field relations are virtually identical and are in close agreement with the model BCs for spectral types K0–M3; at later types, the models predict BC<sub>V</sub> values which are smaller in magnitude than the field relations imply. However, recall that our synthetic V-band magnitudes are probably too bright, due to missing opacity in the synthetic spectrum calculations. If we substitute the V-band magnitudes measured from the “intrinsic” MK M-giant spectra of FPTWWS into the BC<sub>V</sub> calculations, then the model points move to the positions of the filled circles in the upper panel of Figure 15; the latter are a much better match to the field relations at late-M spectral types, showing that the differences between the field-star BCs and the model BCs are consistent with the corresponding differences in their observed and computed V magnitudes.
It is precisely these uncertainties in the V-band magnitudes that led us to tabulate BC<sub>K</sub> in Table Synthetic Spectra and Color-Temperature Relations of M Giants rather than BC<sub>V</sub>. In the lower panel of Figure 15, we compare our K-band bolometric corrections to some near-infrared field relations. Here, the open circles again represent our models, and the empirical trends have been calculated as prescribed by Bessell & Wood (1984). Bessell & Wood give relations between BC<sub>K</sub> (on the CIT/CTIO system) and both (V–K)<sub>CIT</sub> and (J–K)<sub>AAO</sub>. Using the color transformation between (J–K)<sub>CIT</sub> and (J–K)<sub>AAO</sub> from BB (88), we have used the calibrated, synthetic V–K and J–K colors of our models to calculate the BC<sub>K</sub> values which the Bessell & Wood relations predict; the results are shown in the lower panel of Figure 15. The solid line comes from the model (V–K)<sub>CIT</sub> colors, while the dashed line is produced when these colors are “corrected” for the differences between the V-band magnitudes of the models and those measured from the “intrinsic” M-giant spectra of FPTWWS . The dotted line results from the synthetic J–K colors when Bessell & Wood’s solar-metallicity J–K relation is used, while the crosses are the analogous points derived from their metal-poor relation. Surprisingly, the K-band BCs of the models better match the predicted BCs of metal-poor field stars of similar J–K color than those of their solar-metallicity counterparts. However, given the uncertainties in the calibration of the field-star BC<sub>K</sub> vs. J–K relations and the nice agreement between the model BCs and those of field giants of similar V–K color, we can confidently recommend the use of our color-temperature relations and BC<sub>K</sub> values of K and M giants for converting isochrones from log T<sub>eff</sub>, log L space into the color-magnitude plane. ### 4.4 Color-Temperature Relations of M Giants Given the generally good match between the broad-band colors and bolometric corrections measured from our synthetic spectra of field K and M giants and the empirical data, we have proceeded to construct grids of models of cool giants to supplement those presented in Paper I . At each of four metallicities, we have calculated MARCS model atmospheres and SSG synthetic spectra for stars having 3000 K $`\le `$ T<sub>eff</sub> $`\le `$ 4000 K and –0.5 $`\le `$ log g $`\le `$ 1.5. Table Synthetic Spectra and Color-Temperature Relations of M Giants gives the calibrated colors and CIT/CTIO K-band bolometric corrections of these models; column 3 of Table Synthetic Spectra and Color-Temperature Relations of M Giants gives the spectral types measured from the synthetic spectra using the photometric system of Wing (1971). For those who are concerned about possible errors in the synthetic spectra due to missing opacity and/or spectral lines, we also provide Table Synthetic Spectra and Color-Temperature Relations of M Giants, which gives the differences between the calibrated, synthetic V magnitudes and optical colors of our models of field M0–M7 giants and those measured from FPTWWS ’s “intrinsic” M-giant spectra. The spectral types of the models can be used in conjunction with this table to “correct” the synthetic colors to match the observational data as desired, but we urge the reader to thoroughly review FPTWWS before adopting these color corrections.
Also, keep in mind that the color calibrations have been derived from Population I stars, so the colors of the models having \[Fe/H\] $``$ –0.5 should be used with some degree of caution. ## 5 Conclusions To better model elliptical galaxies through evolutionary synthesis, we have improved our synthetic spectra of M giants by 1) determining the optimal effective temperature scale to use for these cool stars, 2) adjusting the f<sub>00</sub> values of the TiO bands to best match the band strengths observed in the spectra of field M giants, and 3) evaluating the resulting models by comparing the synthetic spectra, their estimated spectral types and the model colors and bolometric corrections to empirical data. We have critically examined three effective temperature scales for M giants, each derived from angular diameter measurements. Two of these were taken from Dyck et al. (1996; DBBR ) and Di Benedetto & Rabbia (1987); the third was derived from the angular diameters measured by Mozurkewich et al. (1991) and Mozurkewich (1997; M (97) collectively). We found that the effective temperature vs. spectral type relation of Dyck et al. (1996) produces synthetic spectra which have the same continuous flux level as the “intrinsic” M giant spectrum of the same spectral type observed by Fluks et al. (1994). A possible exception to this rule occurs at spectral type M1, where the Dyck et al. T<sub>eff</sub> may be a bit too cool. This temperature scale, which is similar to Di Benedetto & Rabbia’s but covers a wider range of spectral types, also proves to be a good match to that of Bell & Gustafsson (1989; BG (89)), which we adopted in a companion paper discussing color-temperature relations of hotter stars (Houdashelt et al. 2000). While the angular diameters measured by M (97) were found to match those predicted by BG (89) for G and K giants remarkably well, the resulting effective temperature scale could not be reliably extended into the M giant regime because his uniform-disk angular diameters were measured at 8000 Å. At this wavelength, TiO absorption is present in M star atmospheres, and the limb-darkening corrections used by Mozurkewich et al. (1991) did not take this into account. Adopting DBBR ’s effective temperature scale, we have constructed MARCS model atmospheres and SSG synthetic spectra for solar-metallicity K0–M7 giants. For each system of TiO, we adjusted the band absorption oscillator strength for the 0–0 transition, f<sub>00</sub>, until we were best able to reproduce the “intrinsic” MK spectra of field M giants of Fluks et al. (1994). We found the resulting synthetic spectra to be a good match to the K-band spectra of Kleinmann & Hall (1986) as well. Quantitative measures of the spectral types of the M giant synthetic spectra based upon the strengths of both the TiO bands and the CO bandhead near 2.3 $`\mu `$m are in good agreement with the spectral types expected from DBBR ’s temperature scale. In addition, the broad-band colors of the K and M giant sequence are quite similar to those expected of solar-metallicity field stars of the same spectral type and/or T<sub>eff</sub>, especially for the K and early-M stars. At later spectral types, most of the differences between the models and the empirical data can be ascribed to our omission of spectral lines of VO and H<sub>2</sub>O in the spectral synthesis. 
Finally, we have presented colors and bolometric corrections for models having 3000 K $`\le `$ T<sub>eff</sub> $`\le `$ 4000 K and –0.5 $`\le `$ log g $`\le `$ 1.5 at four metallicities: \[Fe/H\]=+0.25, 0.0, –0.5 and –1.0. These supplement and extend the color-temperature relations presented in our companion paper (Houdashelt et al. 2000). We would like to thank the National Science Foundation (Grant AST93-14931) and NASA (Grant NAG53028) for their support of this research. We also thank Ben Dorman for allowing us to use his isochrone-construction code and Mike Bessell for providing many helpful suggestions on the manuscript. MLH would like to express his gratitude to Rosie Wyse for providing support while this work was completed. The research has made use of the Simbad database, operated at CDS, Strasbourg, France.
# Optimal polarized observables for model-independent new physics searches at the linear collider<sup>1</sup><sup>1</sup>1Contribution to the International Workshop on Linear Colliders LCWS99, Sitges (Barcelona), Spain, 28 Apr-5 May 1999 ## 1 Introduction and polarized cross sections The concept of effective contact interaction represents a convenient framework to parameterize physical effects of some new dynamics active at a very high scale $`\mathrm{\Lambda }`$, in reactions among the ‘light’, Standard Model, degrees of freedom such as quarks and leptons, $`W`$, $`Z`$, etc., at ‘low energy’ $`E\ll \mathrm{\Lambda }`$. These effects are suppressed by an inverse power of the large scale $`\mathrm{\Lambda }`$, and should manifest as deviations of experimentally measured observables from the SM predictions. We consider here the process, at LC energy: $$e^++e^{-}\to \overline{f}+f,$$ (1) and the relevant $`SU(3)\times SU(2)\times U(1)`$ symmetric, lowest-dimensional $`eeff`$ contact-interaction Lagrangian with helicity conserving, flavor-diagonal fermion currents : $$\mathcal{L}=\sum _{\alpha ,\beta }\frac{g_{\mathrm{eff}}^2}{\mathrm{\Lambda }_{\alpha \beta }^2}\eta _{\alpha \beta }\left(\overline{e}_\alpha \gamma _\mu e_\alpha \right)\left(\overline{f}_\beta \gamma ^\mu f_\beta \right).$$ (2) In Eq. (2), generation and color indices have been suppressed, $`\alpha ,\beta =L,R`$ indicate left- or right-handed helicities, and the parameters $`\eta _{\alpha \beta }=\pm 1,0`$ specify the independent, individual, interaction models. Conventionally, $`g_{\mathrm{eff}}^2=4\pi `$ as a reminder that the new interaction, originally proposed for compositeness, would become strong at $`E\sim \mathrm{\Lambda }`$. Thus, in practice, the scales $`\mathrm{\Lambda }_{\alpha \beta }`$ define a standard to compare the reach of different new-physics searches. For example, a bound on $`\mathrm{\Lambda }`$ in the case of a very heavy $`Z^{\prime }`$ exchange with couplings of the order of the electron charge would translate into a constraint on the mass $`M_{Z^{\prime }}\sim \sqrt{\alpha }\mathrm{\Lambda }`$, and the same is true for leptoquarks or for any other heavy object exchanged in process (1) . Constraints on $`\mathcal{L}`$ can be obtained by looking at deviations of observables from the SM predictions in the experimental data. In general, such sought-for deviations can simultaneously depend on all four-fermion effective coupling constants, which cannot be easily disentangled. A commonly adopted possibility is to assume a non-zero value for only one parameter at a time, with the remaining ones set equal to zero. In this way, one would test the individual models mentioned above, and current bounds from a global analysis of the relevant data are of the order of $`\mathrm{\Lambda }_{\alpha \beta }\sim 𝒪(10)\mathrm{TeV}`$ . However, for the derivation of model-independent constraints, a procedure that takes into account the terms of different chiralities simultaneously, and at the same time disentangles the contributions of the different individual couplings to avoid potential cancellations that can weaken the bounds, is highly desirable. To this purpose, initial beam longitudinal polarization would offer the possibility of defining polarized cross sections, which allow one to reconstruct from the data the helicity cross sections depending on the individual $`eeff`$ chiral couplings of Eq. (2), and consequently to make an analysis in terms of a minimal set of free independent parameters .
Also, integrated cross sections would be of advantage in the case of limited statistics, and some optimal choice of the kinematical region may further improve the sensitivity to the new interaction. For $`f\ne e,t`$ and $`m_f\ll \sqrt{s}\equiv E_{CM}`$, the differential cross section for process (1) is determined in Born approximation by the s-channel $`\gamma ,Z`$ exchanges plus $`\mathcal{L}`$ of Eq. (2). With $`P_e,P_{\overline{e}}`$ the initial beams’ longitudinal polarizations: $$\frac{d\sigma }{d\mathrm{cos}\theta }=\frac{3}{8}\left[(1+\mathrm{cos}\theta )^2\stackrel{~}{\sigma }_++(1-\mathrm{cos}\theta )^2\stackrel{~}{\sigma }_{-}\right],$$ (3) where, in terms of helicity cross sections $`\stackrel{~}{\sigma }_+`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left[(1-P_e)(1+P_{\overline{e}})\sigma _{\mathrm{LL}}+(1+P_e)(1-P_{\overline{e}})\sigma _{\mathrm{RR}}\right],`$ (4) $`\stackrel{~}{\sigma }_{-}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left[(1-P_e)(1+P_{\overline{e}})\sigma _{\mathrm{LR}}+(1+P_e)(1-P_{\overline{e}})\sigma _{\mathrm{RL}}\right],`$ (5) and ($`\alpha ,\beta =L,R`$; $`N_C\simeq 3(1+\alpha _s/\pi )`$ for quarks and $`N_C=1`$ for leptons): $$\sigma _{\alpha \beta }=N_C\frac{4\pi \alpha _{em}^2}{3s}|A_{\alpha \beta }|^2.$$ (6) The helicity amplitudes are $$A_{\alpha \beta }=Q_eQ_f+g_\alpha ^eg_\beta ^f\chi _Z+\frac{s\eta _{\alpha \beta }}{\alpha \mathrm{\Lambda }_{\alpha \beta }^2},$$ (7) where $`Q`$’s and $`g`$’s are the fermion electric charges and SM chiral couplings, respectively, and $`\chi _Z=s/(s-M_Z^2+is\mathrm{\Gamma }_Z/M_Z)`$. The above relations clearly show that the helicity cross sections, which directly relate to the individual four-fermion contact interaction couplings and therefore allow a model-independent analysis, can be disentangled by the measurement of $`\stackrel{~}{\sigma }_+`$ and $`\stackrel{~}{\sigma }_{-}`$ with different choices of the initial beam polarizations, and making linear combinations. In particular, one can easily see by integration of Eq. (3) in $`\mathrm{cos}\theta `$ that the ‘conventional’ observables, the total cross section $`\sigma \equiv \sigma _F+\sigma _B=\stackrel{~}{\sigma }_++\stackrel{~}{\sigma }_{-}`$ and the forward-backward difference $`\sigma _{FB}\equiv \sigma _F-\sigma _B=\frac{3}{4}\left(\stackrel{~}{\sigma }_+-\stackrel{~}{\sigma }_{-}\right)`$, depend on all helicity cross sections and therefore do not allow the separation by themselves, unless their measurements at different initial polarizations are combined (a minimum of four measurements is needed).
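As a concrete illustration of Eqs. (4)-(7), the sketch below evaluates the helicity cross sections and their polarized combinations. The values adopted for the running electromagnetic coupling and for the Z parameters are assumptions inserted for definiteness, and the SM chiral couplings would be supplied for the chosen final-state fermion; this is an illustration, not the analysis code used for the results quoted below.

```python
import numpy as np

ALPHA_EM = 1.0 / 128.0     # assumed value of alpha_em near the Z
MZ, GZ = 91.19, 2.49       # assumed Z mass and width (GeV)

def chi_Z(s):
    """Z propagator factor of Eq. (7)."""
    return s / (s - MZ**2 + 1j * s * GZ / MZ)

def sigma_helicity(s, Qe, Qf, ge, gf, eta, Lam, NC=1.0):
    """Helicity cross section, Eqs. (6)-(7); s in GeV^2, Lam in GeV,
    eta = +1, -1 or 0.  Result in natural units (GeV^-2)."""
    A = Qe * Qf + ge * gf * chi_Z(s) + eta * s / (ALPHA_EM * Lam**2)
    return NC * (4.0 * np.pi * ALPHA_EM**2 / (3.0 * s)) * abs(A) ** 2

def sigma_tilde(Pe, Pebar, sLL, sRR, sLR, sRL):
    """Polarized combinations of Eqs. (4) and (5)."""
    sp = 0.25 * ((1 - Pe) * (1 + Pebar) * sLL + (1 + Pe) * (1 - Pebar) * sRR)
    sm = 0.25 * ((1 - Pe) * (1 + Pebar) * sLR + (1 + Pe) * (1 - Pebar) * sRL)
    return sp, sm
```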
For the discussion of the expected uncertainties and the corresponding sensitivities to the parameters of $`\mathcal{L}`$, as well as for improving the significance of the resulting bounds on $`\mathrm{\Lambda }_{\alpha \beta }`$, one can more generally define polarized cross sections integrated over the a priori arbitrary kinematical ranges $`(-1,z^{*})`$ and $`(z^{*},1)`$: $`\sigma _1(z^{*})`$ $`\equiv `$ $`{\displaystyle \int _{z^{*}}^1}{\displaystyle \frac{d\sigma }{d\mathrm{cos}\theta }}d\mathrm{cos}\theta ={\displaystyle \frac{1}{8}}\left\{\left[8-(1+z^{*})^3\right]\stackrel{~}{\sigma }_++(1-z^{*})^3\stackrel{~}{\sigma }_{-}\right\},`$ (8) $`\sigma _2(z^{*})`$ $`\equiv `$ $`{\displaystyle \int _{-1}^{z^{*}}}{\displaystyle \frac{d\sigma }{d\mathrm{cos}\theta }}d\mathrm{cos}\theta ={\displaystyle \frac{1}{8}}\left\{(1+z^{*})^3\stackrel{~}{\sigma }_++\left[8-(1-z^{*})^3\right]\stackrel{~}{\sigma }_{-}\right\},`$ (9) and try to disentangle the helicity cross sections from the general relations, at different values of the polarizations $`P_e`$ and $`P_{\overline{e}}`$: $`\stackrel{~}{\sigma }_+`$ $`=`$ $`{\displaystyle \frac{1}{6(1-z^{*2})}}\left[\left(8-(1-z^{*})^3\right)\sigma _1(z^{*})-(1-z^{*})^3\sigma _2(z^{*})\right],`$ (10) $`\stackrel{~}{\sigma }_{-}`$ $`=`$ $`{\displaystyle \frac{1}{6(1-z^{*2})}}\left[-(1+z^{*})^3\sigma _1(z^{*})+\left(8-(1+z^{*})^3\right)\sigma _2(z^{*})\right].`$ (11) In practice, we adopt $`P_e=\pm P`$ with $`P<1`$ and $`P_{\overline{e}}=0`$. Then, the basic set of integrated observables are $`\sigma _{1,2}(z^{*},P_e)`$ and, as a second step, we construct the cross sections $`\stackrel{~}{\sigma }_\pm (P_e)`$ which finally yield the helicity cross sections $`\sigma _{\alpha \beta }`$ by solving the linear system of equations corresponding to the two signs of $`P_e`$. One can easily see that the specific choice $`z^{*}=0`$ in Eqs. (8) and (9) leads back to the forward and backward cross sections $`\sigma _F`$ and $`\sigma _B`$. Instead, the values $`z^{*}=z_{\pm }^{*}=\mp (2^{2/3}-1)=\mp 0.587`$ ($`\theta _{+}^{*}=126^{\circ }`$ and $`\theta _{-}^{*}=54^{\circ }`$) to a very good approximation allow one to directly ‘project’ out $`\stackrel{~}{\sigma }_\pm `$ : $$\stackrel{~}{\sigma }_+=\gamma \left(\sigma _1(z_{+}^{*})-\sigma _2(z_{+}^{*})\right),\stackrel{~}{\sigma }_{-}=\gamma \left(\sigma _2(z_{-}^{*})-\sigma _1(z_{-}^{*})\right),$$ (12) where $`\gamma =[3\left(2^{2/3}-2^{1/3}\right)]^{-1}=1.018`$. Finally, $`z^{*}`$ could be taken as an input parameter related to given experimental conditions, which can be tuned to get maximal sensitivity of $`\sigma _{\alpha \beta }`$ to the $`\mathrm{\Lambda }`$’s . ## 2 Numerical analysis, optimization and bounds on $`\mathrm{\Lambda }`$ We adopt a $`\chi ^2`$ procedure, defined as follows: $$\chi ^2=\left(\frac{\mathrm{\Delta }\sigma _{\alpha \beta }}{\delta \sigma _{\alpha \beta }}\right)^2,$$ (13) where $`\mathrm{\Delta }\sigma _{\alpha \beta }=\sigma _{\alpha \beta }-\sigma _{\alpha \beta }^{SM}`$ are the deviations of helicity cross sections due to the contact four-fermion interaction (2), and $`\delta \sigma _{\alpha \beta }`$ are the corresponding expected experimental uncertainties on $`\sigma _{\alpha \beta }`$, combining both statistical and systematic uncertainties.
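Before discussing the constraints, the two-step extraction can be made explicit. The sketch below first inverts Eqs. (8)-(9) at a chosen $`z^{*}`$ (Eqs. (10)-(11)), and then solves the 2×2 system obtained from Eq. (4) at $`P_e=\pm P`$ (with $`P_{\overline{e}}=0`$) for $`\sigma _{\mathrm{LL}}`$ and $`\sigma _{\mathrm{RR}}`$; applying the same matrix to the $`\stackrel{~}{\sigma }_{-}`$ values yields $`\sigma _{\mathrm{LR}}`$ and $`\sigma _{\mathrm{RL}}`$. This is an illustration of the procedure described above, not the original analysis code.

```python
import numpy as np

def sigma_tilde_from_integrated(sig1, sig2, z):
    """Invert Eqs. (8)-(9): recover (sigma~_+, sigma~_-) from the
    integrated cross sections sigma_1(z*), sigma_2(z*)."""
    b = (1.0 - z) ** 3
    c = (1.0 + z) ** 3
    den = 6.0 * (1.0 - z ** 2)
    sp = ((8.0 - b) * sig1 - b * sig2) / den     # Eq. (10)
    sm = (-c * sig1 + (8.0 - c) * sig2) / den    # Eq. (11)
    return sp, sm

def unfold_helicity(s_plus_P, s_minus_P, P):
    """Solve Eq. (4) at P_e = +P and P_e = -P (P_ebar = 0) for
    (sigma_LL, sigma_RR); feeding the sigma~_- values into the same
    system returns (sigma_LR, sigma_RL)."""
    M = 0.25 * np.array([[1.0 - P, 1.0 + P],
                         [1.0 + P, 1.0 - P]])
    return np.linalg.solve(M, np.array([s_plus_P, s_minus_P]))
```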
Assuming that no deviation from the SM is observed within the experimental accuracy, constraints on the allowed values of $`\mathrm{\Lambda }`$’s are obtained by imposing $`\chi ^2<\chi _{\mathrm{crit}}^2`$, where the actual value of $`\chi _{\mathrm{crit}}^2`$ specifies the desired ‘confidence level’ ($`\chi _{\mathrm{crit}}^2=3.84`$ as typical for 95% C.L. with a one-parameter fit). For the expected uncertainties on $`\sigma _{1,2}`$, we assume the following identification efficiencies ($`ϵ`$) and systematic uncertainties ($`\delta ^{\mathrm{sys}}`$) for the different final states : $`ϵ=100\%`$ and $`\delta ^{\mathrm{sys}}=0.5\%`$ for leptons; $`ϵ=60\%`$ and $`\delta ^{\mathrm{sys}}=1\%`$ for $`b`$ quarks; $`ϵ=35\%`$ and $`\delta ^{\mathrm{sys}}=1.5\%`$ for $`c`$ quarks. To have an indication of the role of statistics, for the LC with $`\sqrt{s}=0.5\mathrm{TeV}`$ we consider time-integrated total luminosities $`L_{\mathrm{int}}=50`$ and 500 $`\text{fb}^{-1}`$, and assume $`1/2L_{\mathrm{int}}`$ for each of the values $`P_e=\pm P`$. We take the values $`P=`$ 1, 0.8, 0.5 as a reasonable variation around $`P=0.8`$, expected at the LC , in order to study the dependence of the results on the initial beam longitudinal polarization. The numerical analysis uses the program ZFITTER along with ZEFIT, with input values $`m_{\mathrm{top}}=175\mathrm{GeV}`$ and $`m_H=100\mathrm{GeV}`$. It takes into account one-loop SM electroweak corrections in the form of improved Born amplitudes , as well as initial- and final-state radiation with a cut on the photon energy emitted in the initial state $`\mathrm{\Delta }=E_\gamma /E_{\mathrm{beam}}=0.9`$ to avoid radiative return to the $`Z`$ peak. The results for the reachable mass scales $`\mathrm{\Lambda }`$ are shown in Tables 1 and 2 . The left entries in each box represent the values obtained by the polarized integrated cross sections defined by $`z_{\pm }^{*}`$, see Eq. (12). As one can see, the best sensitivity occurs for $`b\overline{b}`$ production while the worst one is for $`c\overline{c}`$, and the decrease of electron polarization $`P`$ from 1 to 0.5 worsens the sensitivity by 20-40%, depending on the final state. As regards the role of the luminosity, the bounds on $`\mathrm{\Lambda }`$ would scale like $`(L_{\mathrm{int}})^{1/4}`$ if no systematic uncertainty were assumed, giving a factor 1.8 of improvement from 50 to 500 $`\mathrm{fb}^{-1}`$. This is the case of $`\mathrm{\Lambda }_{RL}`$ and $`\mathrm{\Lambda }_{LR}`$, where the dominant uncertainty is the statistical one, whereas the bounds for $`\mathrm{\Lambda }_{LL}`$ and $`\mathrm{\Lambda }_{RR}`$ depend much more sensitively on $`\delta ^{\mathrm{sys}}`$. Moreover, it should be noticed that the sensitivity of $`\sigma _{RL}`$ and $`\sigma _{LR}`$ is considerably smaller than that of $`\sigma _{LL}`$ and $`\sigma _{RR}`$. Thus, it is important to construct optimal observables to get the maximum sensitivity. Referring to Eq. (13), the uncertainties $`\delta \sigma _{\alpha \beta }`$ depend on $`z^{*}`$ through Eqs. (8)-(11), while $`\mathrm{\Delta }\sigma _{\alpha \beta }`$ are $`z^{*}`$-independent. This $`z^{*}`$ dependence determines the sensitivity of each helicity amplitude to the corresponding $`\mathrm{\Lambda }`$. It can be explicitly evaluated, for the statistical uncertainty, from the known SM cross sections and $`L_{\mathrm{int}}`$.
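As an illustration of this last point, the $`z^{*}`$ dependence of the statistical error on $`\stackrel{~}{\sigma }_+`$ can be traced as in the sketch below, where Poisson errors $`(\delta \sigma _i)^2=\sigma _i/(ϵL_{\mathrm{int}})`$ are assumed for the two integrated cross sections and propagated through Eq. (10); the SM inputs and luminosity are illustrative numbers, not those behind Tables 1 and 2.

```python
import numpy as np

def err_sigma_plus(z, sp_sm, sm_sm, lum, eff=1.0):
    """Statistical error on sigma~_+ recovered via Eq. (10), as a
    function of z*; sp_sm, sm_sm are SM values of sigma~_+/-."""
    a, b = 8.0 - (1.0 + z) ** 3, (1.0 - z) ** 3
    c, d = (1.0 + z) ** 3, 8.0 - (1.0 - z) ** 3
    sig1 = (a * sp_sm + b * sm_sm) / 8.0          # Eq. (8)
    sig2 = (c * sp_sm + d * sm_sm) / 8.0          # Eq. (9)
    var = (d ** 2 * sig1 + b ** 2 * sig2) / (eff * lum)
    return np.sqrt(var) / (6.0 * (1.0 - z ** 2))  # propagate Eq. (10)

z = np.linspace(-0.9, 0.9, 181)
err = err_sigma_plus(z, sp_sm=500.0, sm_sm=250.0, lum=250.0)  # fb, fb^-1
z_opt = z[np.argmin(err)]
# A 95% C.L. reach in Lambda then follows from Eq. (13), by increasing
# Lambda until chi^2 = (Delta sigma / delta sigma)^2 falls below 3.84.
```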
Then, optimization can be achieved by choosing $`z^{*}=z_{\mathrm{opt}}^{*}`$ at which $`\delta \sigma _{\alpha \beta }`$ becomes minimal, so that the corresponding sensitivity is maximal. The numerical results, reported in Tables 1 and 2, show that such optimization can allow a substantial increase of the lower bounds on $`\mathrm{\Lambda }_{RL}`$ and $`\mathrm{\Lambda }_{LR}`$, and a modest improvement for $`\mathrm{\Lambda }_{LL}`$ and $`\mathrm{\Lambda }_{RR}`$. In conclusion, the measurement of helicity amplitudes of process (1) at the LC by means of suitable polarized integrated observables and optimal kinematical cuts to increase the sensitivity would allow model-independent tests of four-fermion contact interactions, in particular as regards their chiral structure, up to mass scales $`\mathrm{\Lambda }_{\alpha \beta }`$ of the order of 40-100 times the C.M. energy, depending on the final fermion flavor and the degree of initial polarization. Work is in progress to assess the further increase in sensitivity to the new interactions which can be obtained if a significant positron-beam polarization were also available.
# Long-term Monitoring of Molonglo Calibrators ## 1 Introduction Many compact extragalactic radio sources show variations in their radio flux density as a function of time. At high frequencies ($`\nu \gtrsim 1`$ GHz) this variability is usually interpreted as being intrinsic to the source (e.g. Qian et al 1995, although see Kedziora-Chudczer et al 1997). Variability at lower frequencies (e.g. Hunstead 1972; Ghosh & Rao 1992) is normally attributed to refractive interstellar scintillation, in which the intensity variations are caused by distortions of the wavefront by electron density gradients in an intervening screen of material (Shapirovskaya 1978; Rickett et al 1984). There is evidence that the parameters of such variability depend on the Galactic latitude of the source (Spangler et al. 1989; Ghosh & Rao 1992), suggesting that the material causing the scintillation is in our own Galaxy. In some sources, both intrinsic variability and scintillation may be occurring at the same time (e.g. Mitchell et al 1994). Such sources show large but uncorrelated variations at high and low frequencies. At frequencies $`\nu \sim 1`$ GHz, one might expect both effects to occur; however, variability in this region of the spectrum is largely unexplored. The Molonglo Observatory Synthesis Telescope (MOST; Mills 1981; Robertson 1991) operates at a frequency of 843 MHz, and is thus well placed to study this regime. For calibration purposes, the MOST monitors the flux density of $`\sim `$10 compact extragalactic sources every day. Thus the full record of MOST calibrations, running from 1984 until the commencement of the Wide Field Project in 1996 (Large et al 1994), forms an ideal database with which to study variability in this intermediate frequency range. A preliminary analysis of three MOST calibrators was made by Campbell-Wilson & Hunstead (1994), hereafter Paper I. It was shown that flux density measurements with a relative accuracy of 2% could be extracted from the database. Over the period from 1990.1 to 1993.7, the source MRC B0409–752 was shown to be stable, while MRC B0537–441 and MRC B1921–293 were found to be highly variable. In this paper we now report on all 55 calibrators used by the MOST, over a thirteen-year period. In Section 2 we explain how we process the calibrator measurements in order to produce light curves for each source, and then determine whether a source is variable or not. In Section 3 we present light curves for all 55 sources, plus structure functions for those sources found to be variable. In Section 4 we discuss some individual sources in our sample, and consider whether any of the observed properties correlate with Galactic latitude. ## 2 Observations and Data Analysis ### 2.1 SCAN Measurements The MOST is an east-west synthesis telescope, consisting of two cylindrical paraboloids of dimensions 778 m $`\times `$ 12 m. Radio waves are received by a line feed system of 7744 circular dipoles. The telescope is steered by mechanical rotation of the cylindrical paraboloids about their long axis, and by phasing the feed elements along the arms. In a single 12-hour synthesis, the MOST can produce an image at a spatial resolution of $`43^{\prime \prime }\times 43^{\prime \prime }\mathrm{cosec}(|\delta |)`$ and at a sensitivity of $`\sim `$1 mJy beam<sup>-1</sup> (where 1 jansky \[Jy\] $`=10^{-26}`$ W m<sup>-2</sup> Hz<sup>-1</sup>).
Before and after each 12-hour synthesis, the MOST typically observes $`\sim 5`$ calibration sources in fan-beam “SCAN” mode in order to determine the gain and pointing corrections for the telescope. These sources are chosen from a list of 55 calibrators, 45 of which were chosen from the Molonglo Reference Catalogue (MRC) at 408 MHz (Large et al 1981), using as selection criteria that they have declination $`\delta <-30^{\circ }`$, Galactic latitude $`|b|>10^{\circ }`$, angular sizes $`<10^{\prime \prime }`$ and flux densities $`S_{408\mathrm{MHz}}>4`$ Jy and $`S_{843\mathrm{MHz}}>2.5`$ Jy; further discussion is given by Hunstead (1991). This list was later supplemented by seven flat-spectrum ($`S_{408\mathrm{MHz}}<4`$ Jy) sources from the work of Tzioumis (1987), plus three compact sources for which $`\delta >-30^{\circ }`$. The full list of calibrators is given in Paper I. For each SCAN observation the calibrator source is tracked for two minutes, after which the mean antenna response is compared with the theoretical fan-beam response to a point source. From 1984 to 1996, over 58 000 such measurements were made. In each case, parameters such as the goodness-of-fit of the response and the pointing offset from the calibrator position are recorded, along with an amplitude which is the product of the instantaneous values of the source flux density, the intrinsic telescope gain and local sensitivity factors. The main factors are strong but well-determined functions of meridian distance<sup>1</sup><sup>1</sup>1Meridian distance, MD, is related to hour-angle, $`H`$, by $`\mathrm{sin}\mathrm{MD}\equiv \mathrm{cos}\delta \mathrm{sin}H`$ — see Robertson (1991). (MD) and of ambient temperature (which ranges from $`-`$10C to +40C during the year); the variation of sensitivity with MD is shown in Figure 1. After applying corrections for these two factors, the telescope gain for each SCAN is derived by comparing the corrected amplitude with the tabulated flux density of the corresponding source (see Table 1 of Paper I). The residual scatter in the gain determined from steady sources (defined in Section 2.3) is typically 2% RMS; this is the fundamental limit to the uncertainty of measurements made using the SCAN database. ### 2.2 Selection Criteria Various selection criteria are applied to the SCAN database before accepting measurements for further analysis: * The uncertainties in the MD gain curve increase towards large MD, and observations made outside the MD range $`\pm 50`$ are excluded; * Observations made during routine performance testing (characterised by a large number of successive SCANs of the same source) are discounted, except where the standard deviation in gain was less than 5%. In such cases the group is treated as a single measurement with a gain equal to the average of the group; * a poor fit to the antenna response can often indicate a confusing source or a telescope malfunction, and such data are excluded; * extreme values of the relative gain (below 0.5 or above 1.5) are assumed to be discrepant and are discarded. Because calibration observations are made just before and after each synthesis, the database is typically clustered into SCANs closely spaced in time. We define a “block” as a group of at least three valid observations made within the space of an hour. We initially exclude observations of 15 of the 55 calibrators (see Table 1 of Paper I), because of: (i) a flat spectrum ($`\alpha >-0.5`$, $`S\propto \nu ^\alpha `$), (ii) suspected variability or (iii) the presence of a confusing source.
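The selection and block-grouping logic just described can be sketched as follows; this is an illustration only, and the field names and record structure are assumptions, not those of the actual SCAN database.

```python
def is_valid(scan):
    """Selection cuts of Section 2.2 (thresholds as in the text)."""
    return (abs(scan["md"]) <= 50.0            # meridian-distance window
            and scan["fit_ok"]                 # acceptable fan-beam fit
            and 0.5 < scan["rel_gain"] < 1.5)  # no extreme relative gains

def group_into_blocks(scans, max_span_hr=1.0, min_size=3):
    """Group time-ordered valid SCANs into blocks: at least min_size
    observations within max_span_hr of the first SCAN in the block."""
    blocks, current = [], []
    for s in sorted(filter(is_valid, scans), key=lambda s: s["mjd"]):
        if current and (s["mjd"] - current[0]["mjd"]) * 24.0 > max_span_hr:
            if len(current) >= min_size:
                blocks.append(current)
            current = []
        current.append(s)
    if len(current) >= min_size:
        blocks.append(current)
    return blocks
```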
By averaging the gains determined from each SCAN within a block, a representative gain for the telescope at that particular epoch can be determined. This is then applied to each individual observation within the block to obtain a measurement of flux density for that source. Some of the resultant light curves have thousands of data points, generally sampled at highly irregular intervals. Some light curves have significant scatter; it is not clear whether this scatter is due to unrecognised systematic errors in our flux density determination, to true variability on time-scales shorter than the typical sampling interval, or to the presence of confusing sources in the field. In any case, we chose to bin each light-curve at 30 day intervals; the mean of all flux densities within a given bin becomes a single point on a smoothed light curve, and the standard deviation of the measurements in that bin becomes the error bar associated with this measurement<sup>2</sup><sup>2</sup>2In cases where there is only one measurement in a particular 30 day interval, the error is nominally assigned to be 5% of the measured flux density.. While binning the data filters out any genuine variability on time-scales less than a month, the irregular sampling intervals of the observations and the inherent uncertainty in a single SCAN’s flux density make the MOST database less than ideal for studying such short-term behaviour. ### 2.3 Analysis of Variability In order to quantify which sources are variable and which are steady, we calculate the $`\chi ^2`$ probability that the flux has remained constant for a given source (e.g. Kesteven et al 1976). We first calculate the quantity $$x^2=\sum _{i=1}^{n}(S_i-\stackrel{~}{S})^2/\sigma _i^2$$ (1) where $`\stackrel{~}{S}`$ is the weighted mean, given by $$\stackrel{~}{S}=\frac{\sum _{i=1}^n(S_i/\sigma _i^2)}{\sum _{i=1}^n(1/\sigma _i^2)},$$ (2) $`S_i`$ is the $`i`$th measurement of the flux density for a particular source, $`\sigma _i^2`$ is the variance associated with each 30-day estimate of $`S_i`$, and $`n`$ is the number of binned data points for that source. For normally-distributed random errors, we expect $`x^2`$ to be distributed as $`\chi ^2`$ with $`n-1`$ degrees of freedom. For each source, we can then calculate the probability, $`P`$, of exceeding $`x^2`$ by chance for a random distribution. A high value of $`P`$ indicates that a source has a steady flux density over the available time period; we classify a source as steady (S) if $`P>0.01`$, and undetermined (U) if $`0.001<P<0.01`$. However, the $`\chi ^2`$ test cannot distinguish between sources which are genuinely variable and those which simply have a large scatter in their light curve; both light curves result in a low value of $`P`$. We distinguish between these possibilities by computing the structure function (e.g. Hughes et al 1992; Kaspi & Stinebring 1992) for each source for which $`P<0.001`$. The mean is subtracted from the binned time series $`S_t`$, and these data are then normalised by dividing by the standard deviation. This yields a new time series $`F_t`$, from which the structure function $$\mathrm{\Sigma }_\tau =\langle [F_{t+\tau }-F_t]^2\rangle $$ (3) can be calculated, where $`\tau `$ is a parameter known as the lag. If a light curve contains scatter but no true variability, then the structure function will have the value $`\mathrm{\Sigma }_\tau \simeq 2`$ for all values of $`\tau `$.
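A minimal sketch of both statistics, for a binned light curve with fluxes S, errors sig and epochs t in days, is given below; the pair-counting form of Eq. (3) is one simple way of handling the irregular sampling, and the lag tolerance is an assumption of this sketch. The shape of $`\mathrm{\Sigma }_\tau `$ as a function of lag is what separates genuine variability from mere scatter, as described next.

```python
import numpy as np
from scipy.stats import chi2

def variability_P(S, sig):
    """P-value from Eqs. (1)-(2): the probability of exceeding x^2 by
    chance for a constant source (steady if P > 0.01)."""
    w = 1.0 / sig ** 2
    S_mean = np.sum(w * S) / np.sum(w)           # Eq. (2), weighted mean
    x2 = np.sum((S - S_mean) ** 2 / sig ** 2)    # Eq. (1)
    return chi2.sf(x2, len(S) - 1)               # chi^2 with n-1 dof

def structure_function(t, S, lags, tol=15.0):
    """Eq. (3) for irregular sampling: Sigma_tau is the mean of
    [F(t+tau) - F(t)]^2 over all epoch pairs whose separation lies
    within tol days of each trial lag; F is the mean-subtracted,
    unit-variance series."""
    F = (S - S.mean()) / S.std()
    dt = np.abs(t[:, None] - t[None, :])
    dF2 = (F[:, None] - F[None, :]) ** 2
    return np.array([dF2[np.abs(dt - tau) < tol].mean() for tau in lags])
```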
When a source is truly varying, however, we expect the resulting structure function to consist of three regimes: * Noise regime: at small lags, $`\mathrm{\Sigma }_\tau `$ is more or less constant. * Structure regime: as $`\tau `$ increases, $`\mathrm{\Sigma }_\tau `$ increases linearly (on a log-log plot). * Saturation regime: at high lags, the structure function turns over and oscillates around $`\mathrm{\Sigma }_\tau =2`$ (for our normalisation). If there is a second, longer, time-scale in the data, the structure function can enter another linear regime at longer lags before again saturating. If a source has $`P<0.001`$ but shows no clear structure in its structure function, we classify it as undetermined (U). Only sources which have both $`P<0.001`$ and show structure are classified as variable (V). In these cases, the structure function can also be used to obtain a characteristic time scale, $`\tau _V`$, for variability; we define $`\tau _V`$ to be equal to twice the lag at which the structure function saturates. We expect a structure function to be sensitive only to time scales longer than about 100 days (i.e. a few multiples of the sampling interval of the binned data). Furthermore, caution should be applied when interpreting structure at large values of $`\tau `$, as only a few points make a contribution to $`\mathrm{\Sigma }_\tau `$ at these long lags (e.g. Hughes et al. 1992). ## 3 Results Approximately 28 000 SCANs meet the selection criteria described in Section 2.2, and around 22 000 of these fall within valid blocks. The resulting light curves for the 55 MOST calibrators are given in Figure 2 (the corresponding data tables are available at http://www.physics.usyd.edu.au/astrop/scan/). Using the criteria described in Section 2.3, 18 sources are found to be variable, 19 are found to have steady light curves, and the remaining 18 are undetermined. Each source in Figure 2 is marked with a V, S or U corresponding to its classification. Structure functions for the 18 variable sources are shown in Figure 3; for each source we have estimated the time scale for variability, $`\tau _V`$, as marked on each plot. However, we note that some of these estimates are very approximate, as a result of the sparse and/or irregular sampling of the light curves. For example, for MRC B2326–477 we have assigned $`\tau _V=400`$ d, but one could just as easily argue that $`\tau _V=2000`$ d. Furthermore, there is evidence that the structure functions for some sources, such as MRC B1740–517, enter another linear regime beyond the point where they saturate. This suggests that there are variations on time scales longer than we can measure with these data. Some properties of the 18 variable sources are summarised in Table 1. ## 4 Discussion ### 4.1 Individual Sources We restrict our comments here to the 18 sources found to be variable. Many of these sources have been observed in snapshot mode at 5 GHz with the Australia Telescope Compact Array (ATCA, Burgess 1998), and are also ATCA secondary phase calibrators. MRC B0208–512: VLBI modelling shows a strong core (Preston et al. 1989), and a jet-like feature (Tingay et al. 1996). Detected as an X-ray source in the ROSAT All-Sky Survey (Brinkmann et al. 1994) and as a $`\gamma `$-ray source in the EGRET survey (Bertsch et al. 1993). MRC B0537–441: See Paper I. MRC B0943–761: Close $`2.^{\prime \prime }8`$ double at 5 GHz (Burgess 1998). Detected as an X-ray source in the ROSAT All-Sky Survey (Brinkmann et al. 1994). 
MRC B1151–348: Radio spectrum peaks at $`\sim 200`$ MHz. A VLBI image shows a 90 mas double structure (King et al. 1993). MRC B1215–457: Compact steep-spectrum source with a strong, slightly resolved VLBI core (Preston et al. 1989). MRC B1234–504: Compact steep-spectrum source, with no optical counterpart on the UK Schmidt sky survey but possible stellar identification on a CCD image obtained at the Anglo-Australian Telescope (AAT) (Burgess 1998). MRC B1424–418: Discordant flux densities measured at Parkes point to the source being variable at 5 GHz (Burgess, priv. comm.). VLBI modelling shows an unequal 23 mas double structure (Preston et al. 1989). MRC B1458–391: Compact steep-spectrum source in a crowded optical field; optical ID based on an AAT CCD image (Burgess 1998). MRC B1549–790: VLBI image shows a curved structure, possibly a core plus jet (Murphy et al. 1993). MRC B1610–771: Quasar with a flat radio spectrum and very steep optical spectrum (Hunstead & Murdoch 1980). VLBI observations (Preston et al. 1989) show a strong core surrounded by a 50 mas halo. MRC B1718–649: The nearest GHz-peaked-spectrum source, with a radio spectrum peaking near 3 GHz. VLBI imaging shows two sub-parsec-scale components separated by $`\sim 2`$ pc (Tingay et al. 1997). MRC B1740–517: Crowded optical field; galaxy ID by di Serego Alighieri et al. (1994) is confirmed by an AAT CCD image (Burgess 1998). MRC B1827–360: Compact ultra-steep-spectrum source identified with a galaxy in a very crowded field. MRC B1829–718: Candidate source for defining the VLBI astrometric reference frame (Ma et al. 1998). MRC B1854–663: Compact steep-spectrum source identified with a faint galaxy (Burgess 1998). MRC B1921–293: See Paper I. MRC B2052–474: Radio spectrum steep at low frequency, but flattens at high frequency; core dominated at 5 GHz, possibly triple (Burgess 1998). Detected as an X-ray source by the ROSAT All-Sky Survey (Brinkmann et al. 1994). MRC B2326–477: Detected as an X-ray source in the ROSAT All-Sky Survey (Brinkmann et al. 1994). One of the set of defining sources for the VLBI astrometric reference frame (Ma et al. 1997). ### 4.2 General Properties If the observed variability is a result of refractive scintillation in the Galactic interstellar medium (ISM), then we expect the modulation index $`m=\sigma /\overline{S}`$, the characteristic timescale $`\tau _V`$, or their product $`m\tau _V`$ to show some dependence on the Galactic latitude, $`b`$ (e.g. Spangler et al. 1989; Ghosh & Rao 1992). However, apart from a weak tendency for larger $`\tau _V`$ to occur at lower $`|b|`$, there is no obvious correlation in our data. This is not surprising given the large uncertainties in $`\tau _V`$ arising from the irregular sampling of the light curves, and the fact that there are few variable sources at high latitudes (14 of the 18 variable sources have $`10^{\circ }<|b|<30^{\circ }`$). An alternative indicator of the effects of the Galactic ISM is to test whether variable sources are more likely to be found at low latitudes. We consider this possibility in Figure 4, where we plot the ratio of variable sources ($`N_V`$) to variable plus steady sources ($`N_V+N_S`$) in different latitude bins. While the statistics are poor, there is a clear indication that sources are more likely to be variable at low latitudes, as found for northern hemisphere sources (Cawthorne & Rickett 1985; Gregorini et al. 1986). 
This is unlikely to be caused by selection effects associated with a dependence of spectral index on Galactic latitude (cf. Cawthorne & Rickett 1985), since the main criterion for source selection was angular size ($`\theta <10^{\prime \prime }`$). Thus the extensive monitoring data for the MOST calibrators provide good evidence that the variability observed at 843 MHz arises from scintillation in the local ISM. While spectral index was not considered in selecting the majority of the sample, Table 1 shows that two-thirds of the variables have flat or inverted spectra ($`\alpha >-0.5`$), consistent with source angular size being the main determinant of source variability. Surprisingly, the remaining third of the variables fall in the class of compact steep-spectrum (CSS) sources, which are generally believed to be young sources still contained within their host galaxies, and are not known to vary at high frequencies. The latter sources display a lower level of variability, as measured by the modulation index $`m`$, and in four of the six cases their V classification appears to be due to one-off events lasting $`\sim 1`$ year. To investigate the variability properties of the MOST calibrator sample as a whole, in Figure 5 we have plotted $`m`$ versus $`\alpha `$ for all 55 sources. This figure shows a clear trend towards higher average modulation index as the radio spectrum flattens, with a suggestion of an upper envelope. Perhaps the simplest explanation for this behaviour in the unified model for powerful extragalactic radio sources is to link $`m`$ and $`\alpha `$ through the orientation of the radio axis to the line of sight (e.g. Orr & Browne 1982). We assume that the ‘core’ of a classical triple source is the only part with components small enough in angular size to scintillate. If the core contribution dominates, as a consequence of Doppler boosting in the flat-spectrum sources, even small fractional variations will be readily detected. However, the same fractional variations in the core of a steep-spectrum, lobe-dominated source will go undetected. We can therefore understand the trends in Figure 5 in a qualitative sense, and it is possible that a more detailed analysis may provide useful constraints on radio source models. ## 5 Conclusions The 55 sources used for calibration purposes by the MOST at a frequency of 843 MHz have been observed irregularly over a 13 year interval. We have developed an algorithm to process these data and produce a light curve for each source. Our analysis shows that 18 of these sources can be considered variable. There is some suggestion that these sources are distributed at lower Galactic latitudes than the 19 sources whose flux densities are unvarying. This suggests that variability at 843 MHz on time scales of 1–10 years is predominantly due to scintillation in the Galactic ISM rather than to effects intrinsic to the sources. A possible correlation between modulation index and spectral index can be explained qualitatively in terms of a variation in the core fraction with orientation of the radio axis to the line of sight. ## Acknowledgements We thank Ann Burgess for providing us with unpublished ATCA images of several sources and for supplying us with her improved meridian-distance gain curve. We also thank Duncan Campbell-Wilson, Lawrence Cram, Jean-Pierre Macquart, Gordon Robertson, Mark Walker and Taisheng Ye for useful discussions and advice, and an anonymous referee for a careful reading of the manuscript. 
This research has made use of the NASA/IPAC Extragalactic Database (NED), operated by JPL under contract with NASA. The MOST is supported by grants from the Australian Research Council, the University of Sydney Research Grants Committee, and the Science Foundation for Physics within the University of Sydney. BMG acknowledges the support of an Australian Postgraduate Award and of NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. ## References Bertsch, D.L., et al. 1993, ApJ, 405, 21 Brinkmann, W., Siebert, J., & Boller, T. 1994, A&A, 281, 355 Burgess, A.M. 1998, PhD thesis, University of Sydney Campbell-Wilson, D., & Hunstead, R.W. 1994, PASA, 11, 33 (Paper I) Cawthorne, T.V., & Rickett, B.J. 1985, Nature, 315, 40 di Serego Alighieri, S., Danziger, I.J., Morganti, R., & Tadhunter, C.N. 1994, MNRAS, 269, 998 Ghosh, T., & Rao, A.P. 1992, A&A, 264, 203 Gregorini, L., Ficarra, A., & Padrielli, L. 1986, A&A, 168, 25 Hughes, P.A., Aller, H.D., & Aller, M.F. 1992, ApJ, 396, 469 Hunstead, R.W. 1972, Astrophys. Lett., 12, 193 Hunstead, R.W. 1991, Aust. J. Phys., 44, 743 Hunstead, R.W., & Murdoch, H.S. 1980, MNRAS, 192, 31P Kaspi, V.M., & Stinebring, D.R. 1992, ApJ, 392, 530 Kedziora-Chudczer, L., Jauncey, D.L., Wieringa, M.H., Walker, M.A., Nicolson, G.D., Reynolds, J.E., & Tzioumis, A.K. 1997, ApJ, 490, L9 Kesteven, M.J.L., Bridle, A.H., & Brandie, G.W. 1976, AJ, 81, 919 King, E.A., et al. 1993, in ‘Sub-arcsecond Radio Astronomy’, ed. R.J. Davis & R.S. Booth, Cambridge: CUP, 152 Large, M.I., Mills, B.Y., Little, A.G., Crawford, D.F., & Sutton, J.M. 1981, MNRAS, 194, 693 (catalogue available at http://www.physics.usyd.edu.au/astrop/data/mrc.dat.gz) Large, M.I., Campbell-Wilson, D., Cram, L.E., Davison, R.G., & Robertson, J.G. 1994, PASA, 11, 44 Ma, C., et al. 1998, AJ, 116, 516 Mills, B.Y. 1981, PASA, 4, 156 Mitchell, K.J., Dennison, B., Condon, J.J., Altschuler, D.R., Payne, H.E., O’Dell, S.L., & Broderick, J.J. 1994, ApJS, 93, 441 Murphy, D.W., et al. 1993, in ‘Sub-arcsecond Radio Astronomy’, ed. R.J. Davis & R.S. Booth, Cambridge: CUP, 243 Orr, M.J.L., & Browne, I.W.A. 1982, MNRAS, 200, 1067 Preston, R.A., et al. 1989, AJ, 98, 1 Qian, S.J., Britzen, S., Witzel, A., Krichbaum, T.P., Wegner, R., & Waltman, E. 1995, A&A, 295, 47 Rickett, B.J., Coles, W.A., & Bourgois, G. 1984, A&A, 134, 390 Robertson, J.G. 1991, Aust. J. Phys., 44, 729 Shapirovskaya, N.Y. 1978, Sov. Astron., 22, 544 Spangler, S., Fanti, R., Gregorini, L., & Padrielli, L. 1989, A&A, 209, 315 Tingay, S.J., et al. 1996, ApJ, 464, 170 Tingay, S.J., et al. 1997, AJ, 113, 2025 Tzioumis, A.K. 1987, PhD thesis, University of Sydney
# Analysis of radiatively stable entanglement in a system of two dipole-interacting three-level atoms ## I The model Extending the model described in , we consider here two identical three-level atoms in a $`\mathrm{\Lambda }`$ configuration (Fig. 1) fixed at a distance $`R`$. The dipole transitions $`|1\rangle \leftrightarrow |3\rangle `$ and $`|2\rangle \leftrightarrow |3\rangle `$ of both atoms are driven by two near-resonant laser fields. Taking the two limiting cases, we consider only two types of geometry: when the laser fields are either perpendicular or parallel to the radius vector $`\vec{R}`$ connecting the atoms (these geometries are shown in Fig. 2 and identified as symmetric and antisymmetric, respectively). Within the interaction picture and rotating wave approximation, the evolution of the system interacting with the laser fields is governed by the following master equation: $$\frac{\partial \widehat{\rho }}{\partial t}=-\frac{i}{\hbar }[\widehat{H}_{\mathrm{eff}},\widehat{\rho }]+\sum _{i,j,k=1,2}\frac{\gamma _{k3}^{(ij)}}{2}\left(2\widehat{\sigma }_{3k}^{(i)}\widehat{\rho }\widehat{\sigma }_{k3}^{(j)}-\widehat{\rho }\widehat{\sigma }_{3k}^{(i)}\widehat{\sigma }_{k3}^{(j)}-\widehat{\sigma }_{3k}^{(i)}\widehat{\sigma }_{k3}^{(j)}\widehat{\rho }\right),$$ (1) where the upper indices, $`i`$ and $`j`$, number the atoms, the lower ones, $`k3`$ and $`3k`$ ($`k=1,2`$), refer to dipole transitions of the atoms, and $`\widehat{\sigma }_{kl}^{(i)}`$ denotes the Heisenberg transition operator from level $`|k\rangle `$ to level $`|l\rangle `$ within the $`i`$th atom. Relaxation effects in the system are characterized by the single-atom decay rates, $`\gamma _{k3}=\gamma _{k3}^{(11)}=\gamma _{k3}^{(22)}`$, which correspond to the conventional radiative decay into free space, and the photon exchange rates, $`\gamma _{k3}^{(12)}=\gamma _{k3}^{(21)}`$, which describe collective relaxation, a well-known companion of the RDDI. The effective Hamiltonian $`\widehat{H}_{\mathrm{eff}}`$ includes interaction with the laser field and the RDDI coupling on both transitions: $$\widehat{H}_{\mathrm{eff}}=\hbar \sum _{i,k=1,2}\left(\delta _{k3}\widehat{n}_k^{(i)}+\frac{\mathrm{\Omega }_{k3}^{(i)}}{2}\widehat{\sigma }_{k3}^{(i)}+\chi _{k3}\widehat{\sigma }_{k3}^{(1)}\widehat{\sigma }_{3k}^{(2)}+\text{H.c.}\right),$$ (2) where $`\widehat{n}_k^{(i)}`$ stands for the population operator of the level $`|k\rangle `$ in the $`i`$th atom, $`\delta _{k3}`$ are the detunings of the laser field frequencies from the corresponding transitions $`|k\rangle \leftrightarrow |3\rangle `$ of an isolated atom, $`\mathrm{\Omega }_{k3}^{(i)}`$ is the Rabi frequency of the laser field acting on the $`|k\rangle \leftrightarrow |3\rangle `$ transition of the $`i`$th atom, and $`\chi _{k3}`$ is the RDDI coupling strength on the $`|k\rangle \leftrightarrow |3\rangle `$ transition. Throughout the rest of this paper we will consider the case of wide homogeneous laser beams, so that the Rabi frequencies acting on the two atoms may differ in phase but not in magnitude, $`|\mathrm{\Omega }_{k3}^{(1)}|=|\mathrm{\Omega }_{k3}^{(2)}|`$. 
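For readers who wish to experiment numerically, the following sketch (ours, not taken from the paper) builds the two-atom transition operators and a generic Liouvillian matrix for a master equation of the Lindblad form (1). The tensor-factor ordering and all helper names are our own conventions, and the collective rates $`\gamma _{k3}^{(ij)}`$ would be supplied by equation (4) below; the cross terms $`ij`$ are handled by diagonalising the rate matrix into symmetric and antisymmetric collective jump channels.

```python
import numpy as np

# Single-atom operators for the Lambda system (levels |1>, |2>, |3>):
# sigma(k, l) = |l><k| maps |k> to |l>, following the text's convention.
def sigma(k, l):
    s = np.zeros((3, 3), dtype=complex)
    s[l - 1, k - 1] = 1.0
    return s

def two_atom(op, atom):
    """Promote a single-atom operator to the 9-dimensional two-atom space
    (atom 1 is the first tensor factor; this ordering is our own choice)."""
    I3 = np.eye(3)
    return np.kron(op, I3) if atom == 1 else np.kron(I3, op)

def collective_jumps(k, gamma, gamma12):
    """Collective jump operators for the |3> -> |k> decay channel, obtained by
    diagonalising the 2x2 rate matrix [[gamma, gamma12], [gamma12, gamma]]."""
    low1, low2 = two_atom(sigma(3, k), 1), two_atom(sigma(3, k), 2)
    sym = np.sqrt((gamma + gamma12) / 2.0) * (low1 + low2)
    asym = np.sqrt((gamma - gamma12) / 2.0) * (low1 - low2)
    return [sym, asym]

def liouvillian(H, jumps):
    """Matrix of rho -> -i[H, rho] + sum_J (J rho J+ - (J+J rho + rho J+J)/2)
    acting on column-stacked density matrices."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for J in jumps:
        JdJ = J.conj().T @ J
        L += np.kron(J.conj(), J) - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I))
    return L

def steady_state(H, jumps):
    """Density matrix spanning the kernel of the Liouvillian (trace one)."""
    w, v = np.linalg.eig(liouvillian(H, jumps))
    rho = v[:, np.argmin(np.abs(w))].reshape(H.shape, order="F")
    rho = 0.5 * (rho + rho.conj().T)  # discard the numerical anti-Hermitian part
    return rho / np.trace(rho)
```

The same `steady_state` routine, fed with the pumping Hamiltonian and the dephasing channel introduced in Section III, is the kind of tool one would use to reproduce the stationary-state calculations reported there.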
Normalizing the RDDI parameters, $`\chi _{k3}`$, $`\gamma _{k3}^{(12)}`$, and $`\gamma _{k3}^{(21)}`$, by the decay rate of an isolated atom, $`\gamma _{k3}`$, we introduce the dimensionless parameters $$g_{k3}=\gamma _{k3}^{(12)}/\gamma _{k3}=\gamma _{k3}^{(21)}/\gamma _{k3},\qquad f_{k3}=\chi _{k3}/\gamma _{k3},$$ (3) which are given by the following expressions: $$\begin{array}{cc}f_{k3}=F(\phi _{k3})=\hfill & \frac{3}{2}\left(\frac{\mathrm{cos}\phi _{k3}}{\phi _{k3}^3}+\frac{\mathrm{sin}\phi _{k3}}{\phi _{k3}^2}-\frac{\mathrm{cos}\phi _{k3}}{\phi _{k3}}\right)\left[\vec{e}_1\cdot \vec{e}_2-(\vec{e}_1\cdot \vec{e}_R)(\vec{e}_2\cdot \vec{e}_R)\right]\hfill \\ & -3\left(\frac{\mathrm{cos}\phi _{k3}}{\phi _{k3}^3}+\frac{\mathrm{sin}\phi _{k3}}{\phi _{k3}^2}\right)\left[(\vec{e}_1\cdot \vec{e}_R)(\vec{e}_2\cdot \vec{e}_R)\right],\hfill \\ g_{k3}=G(\phi _{k3})=\hfill & \frac{3}{2}\left(\frac{\mathrm{sin}\phi _{k3}}{\phi _{k3}}+\frac{\mathrm{cos}\phi _{k3}}{\phi _{k3}^2}-\frac{\mathrm{sin}\phi _{k3}}{\phi _{k3}^3}\right)\left[\vec{e}_1\cdot \vec{e}_2-(\vec{e}_1\cdot \vec{e}_R)(\vec{e}_2\cdot \vec{e}_R)\right]\hfill \\ & +3\left(\frac{\mathrm{sin}\phi _{k3}}{\phi _{k3}^3}-\frac{\mathrm{cos}\phi _{k3}}{\phi _{k3}^2}\right)\left[(\vec{e}_1\cdot \vec{e}_R)(\vec{e}_2\cdot \vec{e}_R)\right],\hfill \end{array}$$ (4) where $`\vec{e}_i`$ ($`i=1,2`$) is the unit vector in the direction of the dipole moment matrix element of the corresponding transition $`|k\rangle \leftrightarrow |3\rangle `$ of the $`i`$th atom, $`\vec{e}_R`$ is the unit vector in the direction of $`\vec{R}`$, and $`\phi _{k3}=k_{k3}R`$ is the dimensionless distance between the atoms ($`k_{k3}=\omega _{k3}/c`$ is the wave number associated with the transition $`|k\rangle \leftrightarrow |3\rangle `$ of an isolated atom). Throughout the following discussion we will assume, for the sake of simplicity, that the dipole moments are real, collinear with each other, and perpendicular to the radius vector $`\vec{R}`$ (other dipole moment orientations lead to qualitatively the same results). In the case of two-level atoms, the simplest description of the system dynamics is offered by the basis of the Dicke states, which is formed by the doubly excited state, $`|\mathrm{\Psi }_e\rangle =|e\rangle _1|e\rangle _2`$, the ground state, $`|\mathrm{\Psi }_g\rangle =|g\rangle _1|g\rangle _2`$, and the two singly excited maximally entangled states—the symmetric, $`|\mathrm{\Psi }_s\rangle =\frac{1}{\sqrt{2}}(|g\rangle _1|e\rangle _2+|e\rangle _1|g\rangle _2)`$, and the antisymmetric one, $`|\mathrm{\Psi }_a\rangle =\frac{1}{\sqrt{2}}(|g\rangle _1|e\rangle _2-|e\rangle _1|g\rangle _2)`$ \[the corresponding energy diagram is shown in Fig. 3(a)\]. For the case of three-level atoms considered here it is useful to introduce simple generalizations of the Dicke states. The role of the ground and doubly excited Dicke states is then played by the three tensor product states $`|kk\rangle =|k\rangle _1|k\rangle _2`$, $`k=1,2,3`$, while the symmetric and antisymmetric Dicke states now are represented by the three symmetric and three antisymmetric maximally entangled states $`|s_{kl}\rangle =\frac{1}{\sqrt{2}}(|k\rangle _1|l\rangle _2+|l\rangle _1|k\rangle _2)`$ and $`|a_{kl}\rangle =\frac{1}{\sqrt{2}}(|k\rangle _1|l\rangle _2-|l\rangle _1|k\rangle _2)`$, $`k,l=1,2,3`$; $`k<l`$. The corresponding energy diagram is shown in Fig. 3(b). 
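As a check on the reconstructed signs in equation (4), here is a direct transcription (function and variable names are ours):

```python
import numpy as np

def rddi_couplings(phi, e_mu, e_R):
    """f = F(phi) and g = G(phi) of eq. (4) for parallel real dipoles e_1 = e_2.

    phi  : dimensionless separation k_{k3} * R
    e_mu : unit vector along the dipole moments
    e_R  : unit vector along the interatomic axis
    """
    c2 = np.dot(e_mu, e_R) ** 2          # (e1.eR)(e2.eR) when e1 = e2
    a = 1.0 - c2                          # e1.e2 - (e1.eR)(e2.eR)
    s, co = np.sin(phi), np.cos(phi)
    f = 1.5 * (co / phi**3 + s / phi**2 - co / phi) * a \
        - 3.0 * (co / phi**3 + s / phi**2) * c2
    g = 1.5 * (s / phi + co / phi**2 - s / phi**3) * a \
        + 3.0 * (s / phi**3 - co / phi**2) * c2
    return f, g

# Limiting behaviour: g -> 1 as phi -> 0 (the collective decay rate approaches
# the single-atom rate), while f diverges as 1/phi^3 (static dipole-dipole shift).
f, g = rddi_couplings(1e-3, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(f, g)   # g should be very close to 1
```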
Note that in both two- and three-level models the energy levels can be grouped according to their type of symmetry: the unentangled states $`|kk\rangle =|k\rangle _1|k\rangle _2`$, as well as the states $`|s_{kl}\rangle =\frac{1}{\sqrt{2}}(|k\rangle _1|l\rangle _2+|l\rangle _1|k\rangle _2)`$, can be said to belong to one type of symmetry (symmetric with respect to the atom interchange), and the states $`|a_{kl}\rangle =\frac{1}{\sqrt{2}}(|k\rangle _1|l\rangle _2-|l\rangle _1|k\rangle _2)`$ to another (antisymmetric with respect to the atom interchange). The transitions between these levels can then be classified as symmetry-preserving and symmetry-breaking, respectively. It is easy to show that, due to the form of the transition matrix elements, the symmetry-preserving transitions are sensitive only to the sum of the Rabi frequencies, $`\mathrm{\Omega }_{k3}^{(1)}+\mathrm{\Omega }_{k3}^{(2)}`$, acting on the atoms, and the symmetry-breaking transitions only to their difference, $`\mathrm{\Omega }_{k3}^{(1)}-\mathrm{\Omega }_{k3}^{(2)}`$. In the following, we also assume that the system is initially stored in the $`|11\rangle `$ state, which can easily be achieved by conventional optical pumping methods. ## II Coherent entangling processes ### A Resonant Raman pulses In our previous paper we have shown that the maximally entangled Dicke states $`|\mathrm{\Psi }_s\rangle `$ or $`|\mathrm{\Psi }_a\rangle `$ of two two-level atoms can be efficiently populated at small interatomic distances simply by applying an appropriately tailored laser pulse. Assuming that initially the entire population of the system is concentrated in the ground state $`|\mathrm{\Psi }_g\rangle `$, this pulse should be tuned into resonance with a transition to only one of these maximally entangled states. Then, by applying a $`\pi `$-pulse analog, a significant part of the population of the system can be transferred to one of these states, thereby creating entanglement in the system. We have also shown that the entanglement fidelity, defined as the population of the corresponding maximally entangled state, can be made arbitrarily close to unity as the interatomic distance $`R`$ goes to zero. In this paper we propose new ways to create stable entanglement in a system of two three-level atoms. To be radiatively stable, the created entangled states should involve only the lower levels $`|1\rangle `$ and $`|2\rangle `$ of the original $`\mathrm{\Lambda }`$ system of each atom, as only these states are not vulnerable to radiative decay. Therefore, our goal here will be to achieve the maximum possible population of one of the maximally entangled states $`|a_{12}\rangle `$ or $`|s_{12}\rangle `$ (see Fig. 3(b)). The most straightforward way to do this is to extend the results of the two-level model to the three-level one, considered here, by using resonant Raman pulses. By the latter we mean a sequence of two coherent $`\pi `$ pulses, the first of which transfers the population to one of the maximally entangled states involving the initial lower level of the $`\mathrm{\Lambda }`$ system and some quickly decaying upper-lying “transit” level, while the second one transfers the entire population of the “transit” level to another radiatively stable lower level of the $`\mathrm{\Lambda }`$ system, thus removing the radiative instability of the entanglement. In the considered system of two dipole-interacting three-level atoms, the role of the intermediate “transit” state can be played by the above-mentioned levels $`|a_{13}\rangle `$ or $`|s_{13}\rangle `$ (one should not forget that the system is initially in the state $`|11\rangle `$). 
During the first step, the pulse resonant, for example, with the $`|11\rangle \to |s_{13}\rangle `$ transition transfers the population to the maximally entangled state $`|s_{13}\rangle `$; the second step creates the radiatively stable maximally entangled state $`|s_{12}\rangle `$ by application of the symmetry-preserving $`\pi `$ pulse resonant with the $`|s_{13}\rangle \to |s_{12}\rangle `$ transition. In fact, it is the symmetry preservation rules that prevent population from going into the $`|a_{13}\rangle `$ state in a transition that is also resonant with the second pulse. For both pulses to be resonant, the parameters of the laser field should be chosen in the following way: $$\alpha _{k3}=0,\delta _{k3}=\chi _{13}/2,|\mathrm{\Omega }_{k3}^{(i)}|\ll |\chi _{13}|,$$ (5) where $`\alpha _{k3}`$ is the phase difference between the Rabi frequencies acting on the two atoms, $`\mathrm{\Omega }_{k3}^{(1)}=\mathrm{\Omega }_{k3}^{(2)}\mathrm{exp}(i\alpha _{k3})`$ (considering laser beams formed by traveling waves, $`\alpha _{k3}`$ varies from zero for the symmetric geometry to $`\phi _{k3}`$ for the antisymmetric one, and takes all the intermediate values for other types of laser field geometry). The parameters given by (5) correspond therefore to the case when both lasers are used in the symmetric geometry, the “transit” state is $`|s_{13}\rangle `$, and the final radiatively stable maximally entangled state is $`|s_{12}\rangle `$. Other types of geometries and laser parameter sets can obviously be chosen when using the other intermediate state $`|a_{13}\rangle `$ and/or creating the other radiatively stable maximally entangled state $`|a_{12}\rangle `$. For example, to create the $`|a_{12}\rangle `$ state, one can use the following set of parameters: $$\alpha _{13}=\phi _{13},\alpha _{23}=0,\delta _{k3}=\chi _{13}/2,|\mathrm{\Omega }_{k3}^{(i)}|\ll |\chi _{13}|.$$ (6) The phase differences $`\alpha _{k3}`$ in this case correspond to one of the laser beams being used in the antisymmetric geometry, and the other in the symmetric one. Note that, when using antisymmetric geometry at small interatomic distances ($`\phi _{k3}\ll 1`$), most of the laser power is “wasted”, since only a fraction of it contributes to the corresponding transition matrix element $`\left|\langle 11|\widehat{H}_{\mathrm{eff}}/\hbar |a_{13}\rangle \right|=|\mathrm{\Omega }_{13}^{(1)}-\mathrm{\Omega }_{13}^{(2)}|/2=|\mathrm{\Omega }_{13}^{(i)}\mathrm{sin}(\phi _{13}/2)|\ll |\mathrm{\Omega }_{13}^{(i)}|`$, and actually induces the transition $`|11\rangle \to |a_{13}\rangle `$. While a simple estimate of the resulting fidelity of creation of the maximally entangled state is offered by a product of the fidelities of each step of the resonant Raman process (which were calculated in within the two-level atoms model), rigorous results can be obtained only by explicit solution of the corresponding master equation. Due to the high dimensionality of the master equation (1), this calculation is rather demanding computationally, and was not included in the present treatment. ### B Stimulated Raman adiabatic passage Another coherent method for creation of maximally entangled states is based on the stimulated Raman adiabatic passage (STIRAP) technique, a well-known alternative to Raman pulses. The STIRAP method uses adiabatic following of the system state after the slowly changing parameters of the laser field, which are chosen to form a so-called counterintuitive pulse sequence. The STIRAP technique benefits from extremely low probabilities of losing coherence due to radiative decay of the intermediate states, and has already been proposed for use in entanglement-related problems. 
In our case, efficient transfer of the population from the state $`|11\rangle `$ to the state $`|a_{12}\rangle `$ or $`|s_{12}\rangle `$ may be hindered by the existence of several intermediate states. However, as we show below, efficient transfer is still possible for appropriately chosen laser pulse parameters. To realize STIRAP in our system we need to choose the frequencies and geometries of the two constituent laser pulses in a way that would leave active (i.e., resonant and having strong transition amplitudes due to the use of the corresponding geometries; see Section I) only two transitions in the whole system. An appropriate choice is given by $$\alpha _{13}=0,\alpha _{23}=\pi ,\delta _{k3}=\chi _{13}/2,\underset{t}{\mathrm{max}}|\mathrm{\Omega }_{k3}^{(i)}|=\mathrm{\Omega }_0\ll |\chi _{13}|,$$ (7) where $`\mathrm{\Omega }_0`$ stands for the amplitude of the corresponding constituents of the counterintuitive laser pulse sequence. The condition $`\alpha _{23}=\pi `$, which is very important as it prevents leakage of population into other levels, can be easily realized by using two laser beams in antisymmetric geometry, which form a standing wave with one of the nodes situated exactly in the middle of the vector $`\vec{R}`$ connecting the two atoms. In this case only two transitions, $`|11\rangle \to |s_{13}\rangle `$ and $`|s_{13}\rangle \to |a_{12}\rangle `$, are active, and the adiabatic passage results in transfer of the total population to the radiatively stable state $`|a_{12}\rangle `$ (a schematic numerical sketch of this transfer is given at the end of this subsection). We have numerically calculated the final population (fidelity) of the state $`|a_{12}\rangle `$ after the STIRAP procedure by explicit solution of the corresponding Schrödinger equation with the Hamiltonian given by (2). The two laser field pulses had the same Gaussian form and were delayed with respect to each other by their length, and the rest of the parameters were given by (7). For determinacy, the length of the pulses was chosen to be equal to one-tenth of the lifetime of the excited level $`|3\rangle `$ of the original $`\mathrm{\Lambda }`$ system, $`\tau _p=0.1/(\gamma _{13}+\gamma _{23})`$. The final population of the level $`|a_{12}\rangle `$ is shown in Fig. 4(a) as a function of the pulse Rabi frequency amplitude, $`\mathrm{\Omega }_0`$, for different values of the RDDI splitting parameter $`f_{13}`$. As one can see from the figure, for sufficiently high RDDI splittings (i.e., for sufficiently small atomic separations) the fidelity first grows with increasing $`\mathrm{\Omega }_0`$, reaching saturation at $`\mathrm{\Omega }_0\tau _p\approx 5`$, which corresponds to the adiabaticity condition on the pulse area. Then, after some point, the final inequality in (7) is no longer fulfilled and the efficiency of the process degrades due to nonresonant excitation of other levels caused by power broadening. For the same reasons, the fidelity does not reach unity at any value of $`\mathrm{\Omega }_0`$ for low RDDI splittings (large interatomic separations). In Fig. 4(b) we show the overall fidelity of the STIRAP method for optimized values of the Rabi frequency amplitude as a function of the interatomic distance $`\phi _{13}`$. As we ignore relaxation processes in this model (a common practice for STIRAP simulations), one should beware of relaxation-induced errors. However, these errors assume significant values only for the case of long pulses, $`\tau _p\gg 1/(\gamma _{13}+\gamma _{23})`$, and low overall STIRAP process fidelity, i.e., situations that are not of great concern to us here. 
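The essence of the transfer can be illustrated with a minimal sketch in the reduced three-state basis $`\{|11\rangle ,|s_{13}\rangle ,|a_{12}\rangle \}`$; the reduction to this subspace, the pulse parameters and the time units below are our own simplifications and do not reproduce the paper's full master-equation calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stirap_final_populations(Omega0=80.0, tau_p=0.1, delay=0.1):
    """Counterintuitive Gaussian pulse pair: the Stokes pulse (|s13>-|a12>)
    precedes the pump pulse (|11>-|s13>). Two-photon resonance is assumed."""
    t0 = 0.5                                           # centre of the sequence
    pump = lambda t: Omega0 * np.exp(-((t - t0 - delay / 2) / tau_p) ** 2)
    stokes = lambda t: Omega0 * np.exp(-((t - t0 + delay / 2) / tau_p) ** 2)

    def rhs(t, psi):
        # H(t)/hbar in the rotating frame, basis {|11>, |s13>, |a12>}
        H = 0.5 * np.array([[0.0, pump(t), 0.0],
                            [pump(t), 0.0, stokes(t)],
                            [0.0, stokes(t), 0.0]], dtype=complex)
        return -1j * (H @ psi)

    psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)    # start in |11>
    sol = solve_ivp(rhs, (0.0, 1.0), psi0, rtol=1e-8, atol=1e-10)
    return np.abs(sol.y[:, -1]) ** 2

# With Omega0 * tau_p = 8, above the adiabaticity threshold quoted in the text,
# nearly all population ends up in the third component, |a12>:
print(stirap_final_populations())
```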
## III An incoherent entangling process: Optical pumping An interesting alternative to the coherent methods can be offered by optical pumping schemes where the stationary state of the system corresponds to one of the maximally entangled states. In this situation, the population of the system is pumped into the entangled state after asymptotically large time periods. Consider the following choice of the laser field parameters: $$\alpha _{k3}=0,\delta _{k3}=\chi _{k3}/2,|\mathrm{\Omega }_{k3}|\ll |\chi _{k3}|.$$ (8) Neglecting nonresonant excitation at small interatomic distances, only a few transitions remain resonant and have the corresponding geometry. These active transitions are shown in Fig. 5(a) (the upper state $`|33\rangle `$ is omitted in the figure, as it is only negligibly excited at small interatomic distances). As seen from the figure, the maximally entangled state $`|a_{12}\rangle `$ is not included in the chain formed by the laser-induced transitions; however, it is still populated as a result of the decay of the upper-lying levels, as shown by the dotted lines in the same figure. As the state $`|a_{12}\rangle `$ is stable with respect to both the laser-induced transitions and radiative decay, all of the population will eventually be pumped into this state. One should note, though, that as the interatomic distance goes to zero, the symmetry-breaking decay rates decrease, which leads to a corresponding increase of the required pumping time. If we choose another configuration that uses antisymmetric standing-wave geometry of the laser beams \[Fig. 5(b)\], $$\alpha _{k3}=\pi ,\delta _{k3}=\chi _{k3}/2,|\mathrm{\Omega }_{k3}|\ll |\chi _{k3}|,$$ (9) the increase of the pumping time is still brought on by the decrease of efficiency of the symmetry-breaking laser-induced transitions at small distances, since the corresponding transfer matrix elements are proportional to $`|\mathrm{\Omega }_{k3}^{(1)}-\mathrm{\Omega }_{k3}^{(2)}|\propto \mathrm{sin}(\phi _{k3}/2)`$. Strictly speaking, the above arguments hold only in the case when the RDDI coupling constants $`\chi _{k3}`$ on the different transitions are equal (possibly, up to an error on the order of $`\gamma _{k3}`$). This condition is satisfied, for example, when the two lower levels of the original $`\mathrm{\Lambda }`$ system are sublevels of the same atomic level. However, even when the RDDI coupling on the two transitions differs considerably, the present treatment is still applicable provided that one uses four lasers instead of two to satisfy all of the resonance conditions for the transitions shown in Fig. 5. In contrast to the methods presented in the previous sections, it is also very important to avoid a high degree of mutual coherence of the components of the biharmonic laser pumping, as otherwise the population of each atom will be trapped in a corresponding dark state. To prove the foregoing arguments, we have numerically calculated the stationary states of the master equation (1) with the laser pumping parameters given by (8) and (9). In order to disrupt trapping of the populations of the two atoms in the single-atom dark states, we introduced additional elastic dephasing of the lower-level transition $`|1\rangle \leftrightarrow |2\rangle `$ in both atoms, which can be easily realized by the relative jitter of the two pumping laser frequencies. 
Assuming that this elastic dephasing is characterized by the rate $`\mathrm{\Gamma }_{12}`$, the corresponding relaxation superoperator, which should be plugged into the master equation (1), has the form $$\mathcal{L}_{\mathrm{jitter}}\widehat{\rho }=\mathrm{\Gamma }_{12}\sum _{i,j=1,2}\left(2\widehat{\sigma }_z^{(i)}\widehat{\rho }\widehat{\sigma }_z^{(j)}-\widehat{\rho }\widehat{\sigma }_z^{(i)}\widehat{\sigma }_z^{(j)}-\widehat{\sigma }_z^{(i)}\widehat{\sigma }_z^{(j)}\widehat{\rho }\right),$$ (10) where $`\widehat{\sigma }_z^{(i)}=\widehat{n}_1^{(i)}-\widehat{n}_2^{(i)}`$ is the lower-level population difference operator in the $`i`$th atom. For simulations we used a realistic value $`\mathrm{\Gamma }_{12}=0.01\gamma `$, where we again assume for simplicity $`\gamma =\gamma _{13}=\gamma _{23}`$. The results of the numerical calculations of the steady-state population of the level $`|a_{12}\rangle `$ for different values of the laser pumping Rabi frequencies $`\mathrm{\Omega }`$ (in our calculations they are equal for all transitions and atoms, $`|\mathrm{\Omega }_{k3}^{(i)}|=\mathrm{\Omega }`$, $`i,k=1,2`$) are presented in Fig. 6 as a function of the interatomic distance $`\phi _{13}`$ for the two geometries discussed. As seen from the graphs, the fidelity of the entanglement produced first monotonically decreases with increasing Rabi frequency, and then strongly degrades when the magnitude of the Rabi frequency approaches that of the RDDI splitting, due to power-broadening-induced nonresonant excitation. The graphs for $`\mathrm{\Omega }=0.001\gamma `$ in Fig. 6, therefore, decently represent the overall fidelity of the optical pumping method. For low Rabi frequency amplitudes, the antisymmetric geometry clearly shows better results and achieves a fidelity of 0.8 at $`\phi _{13}\approx 1`$, which is much better than the fidelities achieved by other methods at such distances. ## IV Conclusions We have considered three methods for creation of radiatively stable entanglement in a system of two dipole-interacting three-level atoms in a $`\mathrm{\Lambda }`$ configuration. It was shown that the radiatively stable maximally entangled states, $`|a_{12}\rangle `$ and $`|s_{12}\rangle `$, which involve only the lower levels of the original $`\mathrm{\Lambda }`$ systems, can be efficiently populated at small interatomic distances by employing coherent or incoherent methods. The first of the coherent methods, which employs resonant Raman pulses for transfer of population (first to a radiatively unstable maximally entangled state and then to a stable maximally entangled state), makes use of specific resonance conditions and symmetry-preservation rules. The second coherent method, which utilizes a STIRAP process, realizes adiabatic transfer of the population of the system into a final state coinciding with one of the radiatively stable maximally entangled states. The STIRAP method, however, requires the use of standing waves to avoid leakage of population into unentangled states. As a rather surprising result, we have also shown that entanglement can be deterministically created as a result of an incoherent process, optical pumping in our case. Creating a laser field configuration where one of the maximally entangled states, $`|a_{12}\rangle `$ or $`|s_{12}\rangle `$, is not included in the chain of laser-induced transitions, we achieve high populations of that state at asymptotically large times due to radiative decay into that state. 
An important restriction for realization of the optical pumping method is that one has to avoid high mutual coherence of the pumping laser beams, but this restriction becomes an advantage when realizing the proposed schemes experimentally, as it is usually easier to provide incoherent pumping than coherent pumping. For two of the proposed methods (STIRAP and optical pumping), the fidelity of the created approximations of the maximally entangled states was calculated, and it shows qualitatively the same dependence on the interatomic distance $`R`$ as in the previously considered two-level atom model. The fidelity of 0.8 (a good benchmark for Bell inequality violations) is achieved in all of the considered methods at interatomic separations between one-fifteenth and one-sixth of the wavelengths of the working transitions. In conclusion, we have shown that radiatively stable maximally entangled states can be created in a system of two dipole-interacting atoms under conditions that can be experimentally implemented, for example, in optical lattices. The general form of the RDDI operator also suggests that simple analogs of the proposed methods can be employed in other physical systems, such as quantum dots in semiconductors or cavity QED systems (or, indeed, a combination of the latter two). ###### Acknowledgements. This work was partially supported by the programs “Fundamental Metrology” and “Physics of Quantum and Wave Processes” of the Russian Ministry of Science and Technology.
# From endomorphisms to automorphisms and back: dilations and full corners ## Introduction In recent years there has been renewed interest in crossed products by semigroups of endomorphisms, viewed now as universal algebras in contrast to their original presentation as corners in crossed products by groups. This new approach, initiated by Stacey following a strategy pioneered by Raeburn for crossed products by group actions, is based on the explicit formulation of a semigroup crossed product as the universal C\*-algebra of a covariance relation. As such, it motivated the development of specific techniques and brought about new insights and applications. Nevertheless, the implicit view of semigroup crossed products as corners continues to have a very important role: it is often invoked to prove the existence of nontrivial universal objects, and it allows one to import results from the well-developed theory of crossed products by groups. When the endomorphisms are injective and the semigroup is abelian, the two approaches are equivalent, and the proof involves using a direct limit to transform the endomorphisms into automorphisms and the isometries into unitaries. This has been done when the abelian semigroup is $`\mathbb{N}`$, when it is totally ordered, and, in general, when it is cancellative. As crossed products by more general (nonabelian) semigroups are being considered from the universal property point of view, the need arises to determine whether a realization as corners in crossed products by groups is true and useful in those cases too. This is the main task undertaken in the present work. A step away from commutativity of the acting semigroup was taken in , where isometric representations and multipliers of normal cancellative semigroups were extended using the same direct limits (the semigroup $`S`$ is normal if $`xS=Sx`$ for every $`x\in S`$, in which case the natural notions of right and left orders on $`S`$ coincide). Here we will go further and consider discrete semigroups that can be embedded in a discrete group and for which the right order is cofinal; since cofinality is a key ingredient of a directed system, this class is, arguably, the most general one for which the usual direct limit construction would work without a major modification. Based on the results presented below one may argue that the relevant object is the action of an ordered group, and that there are two ways of looking at it; the first is as an automorphic action on a C\*-algebra taken together with a distinguished subalgebra which is invariant under the action of the positive cone, and the second is simply as the endomorphic action of this positive cone on the invariant subalgebra. We show that these two points of view are equivalent: to go from the former to the latter one just cuts down the automorphisms to endomorphisms of the invariant subalgebra and restricts to the positive cone, and the process is reversed by way of a dilation-extension construction, Theorem 2.1.1, which constitutes our first main result. We also explicitly state and prove two additional features of this equivalence that, in our opinion, have not previously received enough attention. The first one is that the minimal automorphic dilation is canonically unique, which for instance allows one to test a good candidate, as done in Subsection 3.2 below. 
The second one is that the crossed product by the semigroup action is realized as a full corner in the crossed product by a group action, so the equivalence of the two approaches technically translates into the strong Morita equivalence of the crossed products. This is done in Theorem 2.2.1, which is our second main result. A modicum of extra work shows that these results are also valid for twisted crossed products and projective isometric representations with circle-valued multipliers. This requires the easy generalization, to Ore semigroups, of results known for semigroups that are abelian or normal, which is done in the preliminary subsections 1.2 and 1.3. The arguments given are for projective isometric representations and twisted crossed products, but setting all multipliers to be identically $`1`$ will lighten the burden slightly for those interested in the dilation-extension itself and not in projective representations, twisted crossed products, and extensions of multipliers. In the final section we give an application to the semigroup dynamical system from number theory which has the Bost-Connes Hecke C\*-algebra as its crossed product. Starting with the $`p`$-adic version of the system \[18, Section 5.4\] we show how one is quite naturally led to consider the ring of finite adeles with the multiplicative action of the positive rationals. This establishes a natural heuristic link between the Bost-Connes Hecke C\*-algebra and the space $`\mathcal{A}/\mathbb{Q}^{*}`$, which lies at the heart of Connes’s recent formulation of the Riemann Hypothesis as a trace formula. ## 1. Preliminaries In this first section we gather the basic definitions and results concerning the semigroups in which we will be interested. We also generalize other results about isometries and crossed products that are valid, with more or less the same proofs, in the present setting, although they were originally stated for particular cases. ### 1.1. Ore semigroups. ###### Definition 1.1.1. An Ore semigroup $`S`$ is a cancellative semigroup such that $`Ss\cap St\mathrm{\ne }\mathrm{\varnothing }`$ for every pair $`s,t\in S`$. Ore semigroups are also known as right-reversible semigroups. (We leave the obvious symmetric consideration of left-reversibility to the reader.) ###### Theorem 1.1.2 (Ore, Dubreil). A semigroup $`S`$ can be embedded in a group $`G`$ with $`S^{-1}S=G`$ if and only if it is an Ore semigroup. In this case, the group $`G`$ is determined up to canonical isomorphism and every semigroup homomorphism $`\varphi `$ from $`S`$ into a group $`𝒢`$ extends uniquely to a group homomorphism $`\phi :G\to 𝒢`$. ###### Proof. See e.g. theorems 1.23, 1.24 and 1.25 in for the first part. We only need to prove the assertion about extending $`\varphi `$. Since $`G=S^{-1}S`$, given $`x,y\in S`$ there exist $`u,v\in S`$ such that $`v^{-1}u=yx^{-1}`$, and hence the element $`ux=vy`$ is in $`Sx\cap Sy`$, proving that $`S`$ is directed by the relation defined by $`s\le _rt`$ if $`t\in Ss`$. An easy argument shows that $`\phi (x^{-1}y)=\varphi (x)^{-1}\varphi (y)`$ defines a group homomorphism from $`G=S^{-1}S`$ to $`𝒢`$ that extends $`\varphi `$. ∎ ###### Remark 1.1.3. The last assertion of the theorem generalizes \[19, Lemma 1.1\]. Here we have found it more convenient, for compatibility with the rest of , to work with the right order $`\le _r`$ determined by $`S`$ on $`G`$ via $`x\le _ry`$ if $`y\in Sx`$. 
To illustrate the class of semigroups being considered, we list a few examples which have appeared recently in the context of semigroup actions: * abelian semigroups (notably the multiplicative nonzero integers in an algebraic number field); * semigroups obtained by pulling back the positive cone from a totally ordered quotient; * normal semigroups, in particular semidirect products; * groups of matrices over the integers having positive determinant \[6, Example 4.3\]. ### 1.2. Extending multipliers and dilating isometries. Let $`\lambda `$ be a circle-valued multiplier on $`S`$, that is, a function $`\lambda :S\times S\to 𝕋`$ such that $$\lambda (r,s)\lambda (rs,t)=\lambda (r,st)\lambda (s,t),\phantom{\rule{1em}{0ex}}r,s,t\in S.$$ A projective isometric representation of $`S`$ with multiplier $`\lambda `$ on a Hilbert space $`H`$ (an isometric $`\lambda `$-representation of $`S`$ on $`H`$) is a family $`\{V_s:s\in S\}`$ of isometries on $`H`$ such that $`V_sV_t=\lambda (s,t)V_{st}`$. A twisted version of Ito’s dilation theorem was obtained in \[19, Theorem 2.1\], where projective isometric representations of normal semigroups were dilated to projective unitary representations. Essentially the same proof, inspired by Douglas’s, works for Ore semigroups and gives the following. ###### Theorem 1.2.1. Suppose $`S`$ is an Ore semigroup and let $`\{V_s:s\in S\}`$ be an isometric $`\lambda `$-representation of $`S`$ on a Hilbert space $`H`$, where $`\lambda `$ is a multiplier on $`S`$. Then there exists a unitary $`\lambda `$-representation of $`S`$ on a Hilbert space $`\mathcal{H}`$ containing a copy of $`H`$ such that 1. $`U_s`$ leaves $`H`$ invariant and $`U_s|_H=V_s`$; and 2. $`\bigcup _{s\in S}U_s^{*}H`$ is dense in $`\mathcal{H}`$. ###### Proof. Verbatim from the proof of \[19, Theorem 2.1\], except for the following minor modification of the part of the argument where normality is used to obtain an admissible value for the function $`f_t`$. The value $`st`$ used there has to be substituted by any (fixed) $`z\in Ss\cap St`$, and thus the fourth paragraph there should be replaced by the following one. Suppose now that $`f\in H_0`$ and $`t\in S`$, and consider the function $`f_t`$ defined by $`f_t(x)=\lambda (x,t)f(xt)`$ for $`x\in S`$. If $`s\in S`$ is admissible for $`f`$, let $`z\in Ss\cap St`$. We will show that $`s_0:=zt^{-1}`$ is admissible for $`f_t`$. For every $`x\in Ss_0`$, $`xt\in Sz`$, and since $`z`$ is admissible for $`f`$, $`\lambda (x,t)f(xt)`$ $`=`$ $`\lambda (x,t)\overline{\lambda (xtz^{-1},z)}V_{xtz^{-1}}f(z)`$ $`=`$ $`\overline{\lambda (xtz^{-1},zt^{-1})}V_{xtz^{-1}}\lambda (zt^{-1},t)f(zt^{-1}t)`$ $`=`$ $`\overline{\lambda (xs_0^{-1},s_0)}V_{xs_0^{-1}}f_t(s_0)`$ where the second equality holds by the multiplier property applied to the elements $`xtz^{-1}`$, $`zt^{-1}`$, and $`t`$ in $`S`$. This proves that $`s_0`$ is admissible for $`f_t`$, so $`f_t\in H_0`$. ∎ Since the results of concerning discrete normal semigroups depend only on this dilation theorem and on the unique extension of group-valued homomorphisms, they too are valid for Ore semigroups and we list them here for reference. ###### Theorem 1.2.2. Suppose $`S`$ is an Ore semigroup and let $`G=S^{-1}S`$. Then 1. Every multiplier on $`S`$ extends to a multiplier on $`G`$. 2. Restriction of multipliers on $`G`$ to multipliers on $`S`$ gives an isomorphism of $`H^2(G,𝕋)`$ onto $`H^2(S,𝕋)`$. 3. Suppose $`\lambda `$ is a multiplier on $`S`$ and let $`V`$ be a $`\lambda `$-representation of $`S`$ by isometries on $`H`$. Assume $`\mu `$ is a multiplier on $`G`$ extending $`\lambda `$. 
Then there exists a unitary $`\mu `$-representation $`U`$ of $`G`$ on a Hilbert space $`\mathcal{H}`$ containing a copy of $`H`$ such that $`U_s|_H=V_s`$ for $`s\in S`$, and $`\bigcup _{s\in S}U_s^{*}H`$ dense in $`\mathcal{H}`$. Moreover, $`U`$ and $`\mathcal{H}`$ are unique up to canonical isomorphism. ###### Proof. The proofs of all but the last statement about uniqueness are as in Theorem 2.2, Corollary 2.3 and Corollary 2.4 of , provided one considers the left-quotients $`x=t^{-1}s`$ instead of the right-quotients used there. In order to prove the uniqueness statement suppose $`(U^{\prime },\mathcal{H}^{\prime })`$ is another unitary $`\mu `$-representation such that $`U_s^{\prime }|_H=V_s`$ and $`\bigcup _{s\in S}U_s^{\prime }{}^{*}H`$ is dense in $`\mathcal{H}^{\prime }`$. It is easy to see that the map $$W:U_s^{*}h\mapsto U_s^{\prime }{}^{*}h,\phantom{\rule{1em}{0ex}}s\in S,h\in H$$ is isometric, and that it extends to an isomorphism of $`\mathcal{H}`$ to $`\mathcal{H}^{\prime }`$ because of the density condition. It only remains to show that $`W`$ intertwines $`U`$ and $`U^{\prime }`$. Since $`S`$ is an Ore semigroup, for every $`x`$ and $`s`$ in $`S`$ there exist $`z`$ and $`t`$ in $`S`$ such that $`xs^{-1}=t^{-1}z`$. Then $`tx=zs`$, so $`WU_x(U_s^{*}h)`$ $`=`$ $`WU_xU_{tx}^{*}U_{zs}U_s^{*}h=\mu (t,x)\overline{\mu (z,s)}WU_t^{*}U_zh=\mu (t,x)\overline{\mu (z,s)}WU_t^{*}(V_zh)`$ $`=`$ $`\mu (t,x)\overline{\mu (z,s)}U_t^{\prime }{}^{*}(V_zh)=\mu (t,x)\overline{\mu (z,s)}U_t^{\prime }{}^{*}U_z^{\prime }h=U_x^{\prime }(U_s^{\prime }{}^{*}h)$` $`=`$ $`U_x^{\prime }W(U_s^{*}h)`$ This shows that $`WU_x=U_x^{\prime }W`$ for every $`x\in S`$, hence for every $`x\in G`$. ∎ ### 1.3. Twisted semigroup crossed products Suppose $`A`$ is a unital C\*-algebra and let $`\alpha `$ be an action of the discrete semigroup $`S`$ by not necessarily unital endomorphisms of $`A`$. Let $`\lambda `$ be a circle-valued multiplier on $`S`$. A twisted covariant representation of the semigroup dynamical system $`(A,S,\alpha )`$ with multiplier $`\lambda `$ is a pair $`(\pi ,V)`$ in which 1. $`\pi `$ is a unital representation of $`A`$ on $`H`$, 2. $`V:S\to \mathrm{Isom}(H)`$ is a projective isometric representation of $`S`$ with multiplier $`\lambda `$, i.e., $`V_sV_t=\lambda (s,t)V_{st}`$, and 3. the covariance condition $`\pi (\alpha _t(a))=V_t\pi (a)V_t^{*}`$ holds for every $`a\in A`$ and $`t\in S`$. When dealing with twisted covariant representations with a specific multiplier $`\lambda `$, we will refer to the dynamical system as a twisted dynamical system and denote it by $`(A,S,\alpha ,\lambda )`$. The (twisted) crossed product associated to $`(A,S,\alpha ,\lambda )`$ is a C\*-algebra $`A\rtimes _{\alpha ,\lambda }S`$ together with a unital homomorphism $`i_A:A\to A\rtimes _{\alpha ,\lambda }S`$ and a projective $`\lambda `$-representation of $`S`$ as isometries $`i_S:S\to A\rtimes _{\alpha ,\lambda }S`$ such that 1. $`(i_A,i_S)`$ is a twisted covariant representation for $`(A,S,\alpha ,\lambda )`$, 2. for any other covariant representation $`(\pi ,V)`$ there is a representation $`\pi \times V`$ of $`A\rtimes _{\alpha ,\lambda }S`$ such that $`\pi =(\pi \times V)\circ i_A`$ and $`V=(\pi \times V)\circ i_S`$, and 3. $`A\rtimes _{\alpha ,\lambda }S`$ is generated by $`i_A(A)`$ and $`i_S(S)`$ as a C\*-algebra. The existence of a nontrivial universal object associated to $`(A,S,\alpha ,\lambda )`$ depends on the existence of a nontrivial twisted covariant representation with multiplier $`\lambda `$. For general endomorphisms such representations need not exist, even in the untwisted case. For instance, the action of $`\mathbb{N}`$ by surjective shift-endomorphisms of $`c_0`$ described in Example 2.1(a) of does not admit any nontrivial covariant representations. 
We will assume that our endomorphisms are injective, hence nontriviality of the semigroup crossed product will follow from its realization as a corner in a nontrivial classical crossed product. See for abelian semigroups, and Remark 2.2.2 below. There are other possible covariance conditions which yield nontrivial crossed products even if the endomorphisms fail to be injective, see e.g. and . We will not deal with them here, but we refer to for an interesting comparative discussion of the different constructions. ###### Remark 1.3.1. It is immediate from the definition that the crossed product $`A\rtimes S`$ is generated, as a C\*-algebra, by the monomials $`v_x^{*}av_y`$ with $`a\in A`$ and $`x,y\in S`$, but more is true for Ore semigroups: the products of such monomials can be simplified using covariance to obtain another monomial of the same type. Specifically, in order to simplify the product $`v_x^{*}av_yv_r^{*}bv_s`$ we begin by finding elements $`t`$ and $`z`$ in $`S`$ such that $`yr^{-1}=t^{-1}z`$, so that $`ty=zr`$ (such elements do exist because $`S`$ is an Ore semigroup). It follows that $`v_x^{*}av_yv_r^{*}bv_s`$ $`=`$ $`\lambda (y,t)\overline{\lambda (z,r)}v_x^{*}av_yv_{ty}^{*}v_{zr}v_r^{*}bv_s`$ $`=`$ $`\lambda (y,t)\overline{\lambda (z,r)}v_x^{*}av_yv_y^{*}v_t^{*}v_zv_rv_r^{*}bv_s`$ $`=`$ $`\lambda (y,t)\overline{\lambda (z,r)}v_x^{*}v_t^{*}\alpha _t(a\alpha _y(1))\alpha _z(\alpha _r(1)b)v_zv_s`$ $`=`$ $`\lambda (y,t)\overline{\lambda (z,r)}\overline{\lambda (t,x)}\lambda (z,s)v_{tx}^{*}\alpha _t(a\alpha _y(1))\alpha _z(\alpha _r(1)b)v_{zs},`$ hence the linear span of such monomials is dense in the crossed product. ## 2. The minimal automorphic dilation. There are two steps in realizing a semigroup crossed product as a corner in a crossed product by a group action. The first one is the dilation-extension of a semigroup action by injective endomorphisms to a group action by automorphisms, and the second one is the corresponding dilation-extension of covariant representations of the semigroup dynamical system to covariant representations of the dilated system. ### 2.1. A dilation-extension theorem. ###### Theorem 2.1.1. Assume $`S`$ is an Ore semigroup with enveloping group $`G=S^{-1}S`$ and let $`\alpha `$ be an action of $`S`$ by injective endomorphisms of a unital C\*-algebra $`A`$. Then there exists a C\*-dynamical system $`(B,G,\beta )`$, unique up to isomorphism, consisting of an action $`\beta `$ of $`G`$ by automorphisms of a C\*-algebra $`B`$ and an embedding $`i:A\to B`$ such that 1. $`\beta `$ dilates $`\alpha `$, that is, $`\beta _s\circ i=i\circ \alpha _s`$ for $`s\in S`$, and 2. $`(B,G,\beta )`$ is minimal, that is, $`\bigcup _{s\in S}\beta _s^{-1}(i(A))`$ is dense in $`B`$. ###### Proof. By right-reversibility, $`S`$ is directed by $`\le _r`$, so one may follow the argument of \[25, Section 2\]. However, extra work is needed here: since $`G`$ need not be abelian, the choice of embeddings in the directed system must be carefully matched to the choice of right order $`\le _r`$ on $`S`$. Consider the directed system of C\*-algebras determined by the maps $`\alpha _y^x=\alpha _{yx^{-1}}`$ from $`A_x:=A`$ into $`A_y:=A`$, for $`x\in S`$ and $`y\in Sx`$, i.e. for $`x\le _ry`$ in $`S`$. By \[15, Proposition 11.4.1(i)\] there exists an inductive limit C\*-algebra $`A_{\mathrm{\infty }}`$ together with embeddings $`\alpha ^x:A_x\to A_{\mathrm{\infty }}`$ such that $`\alpha ^x=\alpha ^y\alpha _y^x`$ whenever $`x\le _ry`$, and such that $`\bigcup _{x\in S}\alpha ^x(A_x)`$ is dense in $`A_{\mathrm{\infty }}`$. The next step is to extend the endomorphism $`\alpha _s`$ to an automorphism of $`A_{\mathrm{\infty }}`$. 
For any fixed $`s\in S`$ the subset $`Ss`$ of $`S`$ is cofinal, so $`A_{\mathrm{\infty }}`$ is also the inductive limit of the directed subsystem $`(A_x,x\in Ss)`$, and, for this subsystem, we may consider new embeddings $`\psi ^x:A_x\to A_{\mathrm{\infty }}`$ defined by $`\psi ^x(a)=\alpha ^{xs^{-1}}(a)`$ for $`x\in Ss`$ and $`a\in A_x`$. By \[15, Proposition 11.4.1(ii)\] there is an automorphism $`\stackrel{~}{\alpha }_s`$ of $`A_{\mathrm{\infty }}`$ such that $`\stackrel{~}{\alpha }_s\alpha ^x=\psi ^x`$ for every $`x\in Ss`$. Since $`\alpha ^1=\alpha ^s\alpha _s^1`$ and $`\psi ^x=\alpha ^{xs^{-1}}`$, the choice $`x=s`$ gives $$\stackrel{~}{\alpha }_s\alpha ^1=\stackrel{~}{\alpha }_s\alpha ^s\alpha _s^1=\alpha ^1\alpha _s$$ so that (1) holds with $`\beta =\stackrel{~}{\alpha }`$ and $`i=\alpha ^1:A_1\to A_{\mathrm{\infty }}`$. Since $`\stackrel{~}{\alpha }_s^{-1}(i(A))=\alpha ^s(A_s)`$, (2) also holds. Uniqueness of the dilated system follows from \[15, Proposition 11.4.1(ii)\]: $`A_{\mathrm{\infty }}`$ is the closure of the union of the subalgebras $`\stackrel{~}{\alpha }_s^{-1}(i(A))`$ with $`s\in S`$; if $`(B,G,\beta )`$ is another minimal dilation with embedding $`j:A\to B`$ then there is an isomorphism $`\theta :A_{\mathrm{\infty }}\to B`$ given by $`\theta \stackrel{~}{\alpha }_{s^{-1}}(i(a))=\beta _{s^{-1}}(j(a))`$ for $`a\in A`$, and hence which intertwines $`\stackrel{~}{\alpha }`$ and $`\beta `$. ∎ ###### Definition 2.1.2. A system $`(B,G,\beta )`$ satisfying the conditions (1) and (2) of Theorem 2.1.1 is called the minimal automorphic dilation of $`(A,S,\alpha )`$. If $`\lambda `$ is a multiplier on $`S`$ with extension $`\mu `$ to $`G`$, we say that the twisted system $`(B,G,\beta ,\mu )`$ is the minimal automorphic dilation of the twisted system $`(A,S,\alpha ,\lambda )`$. (By Theorem 1.2.2 the extended multiplier exists and is unique up to a coboundary.) ###### Lemma 2.1.3. Let $`(\pi ,V)`$ be a covariant representation for the twisted system $`(A,S,\alpha ,\lambda )`$ on the Hilbert space $`H`$, and let $`\stackrel{~}{V}`$ be the minimal projective unitary dilation of $`V`$ on $`\mathcal{H}`$ given by Theorem 1.2.1. Then there exists a representation $`\stackrel{~}{\pi }`$ of $`B`$ on $`\mathcal{H}`$ such that $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ is covariant for the minimal automorphic dilation $`(B,G,\beta ,\mu )`$ and $`\stackrel{~}{\pi }\circ i=\pi `$ on $`H`$. ###### Proof. We work with the dense subspace $`\mathcal{H}_0=\bigcup _{t\in S}\stackrel{~}{V}_t^{*}H`$ of $`\mathcal{H}`$ and the dense subalgebra $`B_0=\bigcup _{s\in S}\beta _s^{-1}(i(A))`$. If $`\xi \in \mathcal{H}_0`$ there exists $`t\in S`$ such that $`\stackrel{~}{V}_t\xi \in H`$; assume $`b=\beta _t^{-1}(i(a))`$; since we want $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ to be covariant, the only choice is to define $`\stackrel{~}{\pi }`$ by $$\stackrel{~}{\pi }(b)\xi =\stackrel{~}{\pi }(\beta _t^{-1}(i(a)))\xi =\stackrel{~}{V}_t^{*}\stackrel{~}{\pi }(i(a))\stackrel{~}{V}_t\xi =\stackrel{~}{V}_t^{*}\pi (a)\stackrel{~}{V}_t\xi $$ because $`\stackrel{~}{\pi }`$ restricted to $`i(A)`$ and cut down to $`H`$ has to be equal to $`\pi `$. Of course we have to show that this actually defines an operator $`\stackrel{~}{\pi }(b)`$ on $`\mathcal{H}`$ for each $`b\in B_0`$, that $`\stackrel{~}{\pi }`$ extends to a homomorphism from all of $`B`$ to $`B(\mathcal{H})`$, and that $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ is covariant. 
For $`\xi \stackrel{~}{V}_{t_0}^{}H`$ with $`t_0`$ in the cofinal set $`Ss`$, we let (2.1.1) $$\phi (b)\xi =\stackrel{~}{V}_{t_0}^{}\pi (\alpha _{t_0s^1}(a))\stackrel{~}{V}_{t_0}\xi .$$ If $`tSt_0`$ then $`\xi \stackrel{~}{V}_t^{}H`$, and $`\stackrel{~}{V}_t^{}\pi (\alpha _{ts^1}(a))\stackrel{~}{V}_t\xi _0`$ $`=`$ $`\stackrel{~}{V}_{t_0}^{}\stackrel{~}{V}_{tt_0^1}^{}\pi (\alpha _{tt_0^1}\alpha _{t_0s^1}(a))\stackrel{~}{V}_{tt_0^1}\stackrel{~}{V}_{t_0}\xi `$ $`=`$ $`\stackrel{~}{V}_{t_0}^{}\stackrel{~}{V}_{tt_0^1}^{}V_{tt_0^1}\pi (\alpha _{t_0s^1}(a))V_{tt_0^1}^{}\stackrel{~}{V}_{tt_0^1}\stackrel{~}{V}_{t_0}\xi `$ $`=`$ $`\stackrel{~}{V}_{t_0}^{}\pi (\alpha _{t_0s^1}(a))\stackrel{~}{V}_{t_0}\xi .`$ So the definition of $`\varphi (b)\xi `$ could have been given using any $`tSt_0`$ in place of $`t_0`$. Next we show that $`\varphi (b)\xi `$ is also independent of $`s`$ and $`a`$, in the sense that if $`b`$ is also equal to $`\beta _s^{}^1(i(a^{}))`$ then $`\alpha _{ts_{}^{}{}_{}{}^{1}}(a^{})`$ is equal to $`\alpha _{ts^1}(a)`$ for $`t`$ in a cofinal set. To see this let $`tSsSs^{}`$. Then $`\alpha ^t\alpha _t^s^{}(a^{})=\alpha ^s^{}(a^{})=\beta _s^{}^1(i(a^{}))=\beta _s^1(i(a))=\alpha ^s(a)=\alpha ^t\alpha _t^s(a)`$, and since the embedding $`\alpha ^t`$ is injective, it follows that $`\alpha _{ts_{}^{}{}_{}{}^{1}}(a^{})=\alpha _{ts^1}(a)`$. The map $`\phi (b):_0_0`$ is clearly linear, and since the endomorphisms are injective, $`\phi (b)\xi b\xi `$. Thus $`\varphi (b)`$ can be uniquely extended to a bounded linear operator (also denoted $`\phi (b)`$) on all of $``$ such that $`\phi (b)b`$. For any $`s`$ the map $`\mathrm{Ad}_{\stackrel{~}{V}_{t_0}^{}}\pi \alpha _{t_0s^1}`$ is a \*-homomorphism on $`A`$, and by cofinality of $`_r`$, for any $`b_1`$ and $`b_2`$ in $`B_0`$ there exist $`sS`$ and $`a_1`$ and $`a_2`$ in $`A`$ such that $`b_1=\beta _s^1(i(a_1))`$ and $`b_2=\beta _s^1(i(a_2))`$. It follows easily from (2.1.1) that $`\phi :B_0B()`$ is a \*-homomorphism which can be extended to a representation $`\stackrel{~}{\pi }`$ of $`B`$ on $``$. Putting $`a=1`$ in (2.1.1) shows that $`\stackrel{~}{\pi }`$ is nondegenerate and there only remains to check that $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ is a covariant pair for $`(B,G,\beta ,\mu )`$. Suppose first $`xS`$ and $`bB_0`$; we can assume that $`b=\beta _s^1(i(a))`$ for some $`aA`$ and $`sSx`$. Let $`\xi \stackrel{~}{V}_t^{}H`$; we can assume $`tSsSx`$, and we observe that $`\stackrel{~}{V}_x\xi \stackrel{~}{V}_{tx^1}^{}H`$. Then $`\stackrel{~}{\pi }(\beta _x(b))\stackrel{~}{V}_x\xi `$ $`=`$ $`\stackrel{~}{\pi }(\beta _{xs^1}(i(a)))\stackrel{~}{V}_x\xi `$ $`=`$ $`\stackrel{~}{\pi }(\beta _{sx^1}^1(i(a)))\stackrel{~}{V}_x\xi `$ $`=`$ $`\stackrel{~}{V}_{tx^1}^{}\pi (\alpha _{tx^1xs^1}(i(a)))\stackrel{~}{V}_{tx^1}\stackrel{~}{V}_x\xi `$ $`=`$ $`\stackrel{~}{V}_{tx^1}^{}\pi (\alpha _{ts^1}(i(a)))\stackrel{~}{V}_{tx^1}\stackrel{~}{V}_x\xi `$ $`=`$ $`\stackrel{~}{V}_{x^1}^{}\stackrel{~}{V}_t^{}\pi (\alpha _{ts^1}(i(a)))\stackrel{~}{V}_t\xi `$ $`=`$ $`\stackrel{~}{V}_x\stackrel{~}{\pi }(\beta _s^1(i(a)))\xi ,`$ and since $`_0`$ is dense in $``$ and $`B_0`$ is dense in $`B`$, the pair $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ satisfies the covariance relation. ∎ ### 2.2. Full corners. Once we know how to dilate covariant representations from the semigroup action to the group action we can establish the relation between the respective crossed products. 
Before proving our main result we recall that if $`p`$ is a projection in the C\*-algebra $`A`$ then the algebra $`pAp`$ is a corner in $`A`$, which is said to be full if the linear span of $`ApA`$ is dense in $`A`$. The most relevant feature of full corners is that if $`pAp`$ is a full corner in $`A`$, then $`pA`$ is a full Hilbert bimodule implementing the Morita equivalence, in the sense of Rieffel, of $`pAp`$ to $`A`$. ###### Theorem 2.2.1. Suppose $`(A,S,\alpha ,\lambda )`$ is a twisted semigroup dynamical system in which $`S`$ is an Ore semigroup acting by injective endomorphisms and $`\lambda `$ is a multiplier on $`S`$. Let $`(B,G,\beta ,\mu )`$ be the minimal automorphic dilation, with embedding $`i:AB`$. Then $`A_{\alpha ,\lambda }S`$ is canonically isomorphic to $`i(1)(B_{\beta ,\mu }G)i(1)`$, which is a full corner. As a consequence, the crossed product $`A_{\alpha ,\lambda }S`$ is Morita equivalent to $`B_{\beta ,\mu }G`$. ###### Proof. Let $`U`$ be the projective unitary representation of $`G`$ in the multiplier algebra of $`B_{\beta ,\mu }G`$, and notice that $$i(1)U_si(1)=U_si(1),sS,$$ because $`i(A)`$ is invariant under $`\beta _s`$. Define $`v_s=U_si(1)`$. Then $`v_s^{}v_s=i(1)U_s^{}U_si(1)=i(1)`$ and $`v_sv_t=U_si(1)U_ti(1)=U_sU_ti(1)=\mu (s,t)U_{st}i(1)=\lambda (s,t)v_{st}`$, so $`v`$ is a projective isometric representation of $`S`$ with multiplier $`\lambda `$. Since $`i(1)(B_{\beta ,\mu }G)i(1)`$ is generated by the elements $`i(1)U_x^{}i(a)U_yi(1)=v_x^{}i(a)v_y`$, the isomorphism will be established by uniqueness of the crossed product once we show that the pair $`(i,v)`$ is universal. Suppose $`(\pi ,V)`$ is a covariant representation for the twisted system $`(A,S,\alpha ,\lambda )`$, and let $`(\stackrel{~}{\pi },\stackrel{~}{V})`$ be the corresponding dilated covariant representation of $`(B,G,\beta ,\mu )`$ given by Lemma 2.1.3. By the universal property of $`B_{\beta ,\mu }G`$ there is a homomorphism $$(\stackrel{~}{\pi }\times \stackrel{~}{V}):B_{\beta ,\mu }GC^{}(\stackrel{~}{\pi },\stackrel{~}{V})$$ such that $`\stackrel{~}{\pi }(b)\stackrel{~}{V}_s=(\stackrel{~}{\pi }\times \stackrel{~}{V})(i_B(b)U_s)`$ . Let $`\rho `$ be the restriction of $`(\stackrel{~}{\pi }\times \stackrel{~}{V})`$ to $`i(1)(B_{\beta ,\mu }G)i(1)`$, cut down to the invariant subspace $`H`$. By Lemma 2.1.3 $$\rho (i(a))=(\stackrel{~}{\pi }\times \stackrel{~}{V})(i(a))=\stackrel{~}{\pi }i(a)=\pi (a),aA,$$ while $$\rho (v_s)=(\stackrel{~}{\pi }\times \stackrel{~}{V})(U_si(1))=\stackrel{~}{V}_s\pi (1)=V_s,sS$$ Thus $`\rho i=\pi `$ and $`\rho v=V`$, so $`(i,v)`$ is universal for $`(A,S,\alpha ,\lambda )`$. Finally we prove that the corner is full, i.e., that the linear span of the elements of the form $`Xi(1)Y`$ with $`X,YB_{\beta ,\mu }G`$ is a dense subset of $`B_{\beta ,\mu }G`$. It is easy to see that the elements of the form $`U_s^{}bU_t`$ span a dense subset of $`B_{\beta ,\mu }G`$ because $`G=S^1S`$, where $`b`$ may be replaced with $`U_r^{}i(a)U_r`$ by minimality of the dilation. Thus the elements $`U_y^{}i(\alpha _z(a))U_x`$ with $`x,y,zS`$ and $`aA`$ span a dense subset of $`B_{\beta ,\mu }G`$, and since $`i(\alpha _z(a))=i(1)i(\alpha _z(a))`$, the proof is finished. ∎ ###### Remark 2.2.2. If one drops the assumption of injectivity of the endomorphisms, it is still possible to carry out the constructions and the arguments in the proofs of the preceding theorems. However, the resulting homomorphism $`i:AB`$ may not be an embedding any more. 
Indeed, Example 2.1(a) of shows that the limit algebra $`B`$ may turn out to be the $`0`$ C\*-algebra, yielding a trivial dilated system. We notice that the dilated system $`(B,G,\beta ,\mu )`$ has nontrivial covariant representations if and only if $`B0`$, and these representations, when cut down to $`i(A)`$, give nontrivial covariant representations of the original semigroup system $`(A,S,\alpha ,\lambda )`$. Thus, following \[31, Proposition 2.2\] which deals with the case $`S=`$, we conclude that the crossed product $`A_{\alpha ,\lambda }S`$ is nontrivial if and only if the limit algebra $`B`$ is not $`0`$. Clearly, this is the case when, for instance, the endomorphisms are injective. ## 3. An example from number theory. As an application of the preceding theory we consider the semigroup dynamical system from whose crossed product is the Bost-Connes Hecke C\*-algebra . Since Morita equivalence implies that the representation theory of the semigroup dynamical system is equivalent to that of the dilated system, it is quite useful to have an explicit formulation of the dilation. We point out that since the semigroup in question is abelian, this application is somewhat independent from the rest of the material on nonabelian semigroups. In fact, the example could be dealt with by enhancing \[25, Section 2\] with the uniqueness and fullness properties discussed above, which are easier to prove for abelian semigroups. ### 3.1. Finite Adeles. The natural setting for identifying the ingredients of the minimal automorphic dilation of the semigroup dynamical system introduced in will be the (dual) $`p`$-adic picture described in \[18, Proposition 32\], in which the algebra is $`C(_p_p)`$ and the endomorphisms $`\alpha _n`$ consist of ‘division by $`n`$’ in $`_p_p`$: $$\alpha _n(f)(x)=\{\begin{array}{cc}f(x/n)\hfill & \text{if }n|x\hfill \\ 0\hfill & \text{ otherwise}.\hfill \end{array}$$ By \[21, Corollary 2.10\] the crossed product associated to this system is canonically isomorphic to the Bost-Connes Hecke C\*-algebra $`𝒞_{}`$. The ring $`𝒵:=_p_p`$ has lots of zero divisors and hence no fraction field. However, the diagonally embedded copy of $`^\times `$ is a multiplicative set with no zero divisors, and we may enlarge $`𝒵`$ to a ring in which division by an element of $`^\times `$ is always possible. Our motivation is to extend the endomorphisms $`\alpha _n`$ defined above to automorphisms. The algebraic part is easy: we consider the ring $`(^\times )^1𝒵`$ of formal fractions $`z/n`$ with $`z𝒵`$ and $`n^\times `$, with the obvious rules of addition and multiplication (and simplification!), \[23, II.§3\]. This ring has a universal property with respect to homomorphisms of $`𝒵`$ that send $`^\times `$ into units. Since $`^\times `$ has no zero divisors, the canonical map $`zz/1`$ is an embedding of $`𝒵`$ in $`(^\times )^1𝒵`$. The topological aspect requires a moments thought, after which we declare that the subring $`𝒵`$ must retain its compact topology and be relatively open. Since we want division by $`n^\times `$ to be an automorphism, this determines a topology on the compact open sets $`(1/n)𝒵`$ and hence on their union, $`(^\times )^1𝒵`$, which becomes a locally compact ring containing $`𝒵`$ as a compact open subring. 
The ring we have just defined is (isomorphic to) the locally compact ring $`𝔸_f`$ of finite adeles, which is usually defined as the restricted product, over the primes $`p𝒫`$ of the $`p`$-adic numbers $`_p`$ with respect to the $`p`$-adic integers $`_p`$: $$𝔸_f:=\{(a_p):a_p_p\text{ and }a_p_p\text{ for all but finitely many }p𝒫\},$$ with $`_p_p`$ as its maximal compact open subring. The isomorphism is implemented by the map from $`(^\times )^1𝒵`$ into $`𝔸_f`$ given by the universal property; this map is clearly injective and, since every finite adele can be written as $`z/n`$ with $`z𝒵`$ and $`n^\times `$, it is also surjective. Specifically, for each $`a_p_p`$ there exists $`k_p`$ such that $`p^{k_p}a_p=z_p_p`$ and a sequence $`a=(a_p)_{p𝒫}`$ is an adele if and only if $`k_p`$ can be taken to be $`0`$ for all but finitely many $`p`$’s, in which case $`n=_pp^{k_p}^\times `$ and $`a=(na)/n`$, with $`na=(na_p)_{p𝒫}_p_p`$. ### 3.2. The minimal automorphic dilation of $`(C(𝒵),^\times ,\alpha )`$. The rational numbers are embedded in $`𝔸_f`$, and division by a nonzero rational is clearly a homeomorphism so $$\beta _r(f)(a)=f(r^1a),a𝔸_f,r_+^{}$$ defines an action of $`_+^{}=(^\times )^1^\times `$ by automorphisms of $`C_0(𝔸_f)`$. Since $`𝒵`$ is compact and open, its characteristic function $`1_𝒵`$ is a projection in $`C_0(𝔸_f)`$ and there is an obvious embedding $`i`$ of $`C(𝒵)`$ as the corresponding ideal of $`C_0(𝔸_f)`$, given by $$i(f)(a)=\{\begin{array}{cc}f(a)\hfill & \text{ if }a𝒵\hfill \\ 0\hfill & \text{ if }a𝒵.\hfill \end{array}$$ ###### Proposition 3.2.1. The C\*-dynamical system $`(C_0(𝔸_f),_+^{},\beta )`$ is the minimal automorphic dilation of the semigroup dynamical system $`(C(𝒵),^\times ,\alpha )`$, and hence $`𝒞_{}`$ is the full corner of $`C_0(𝔸_f)_\beta _+^{}`$ determined by the projection $`1_𝒵`$. ###### Proof. The embedding clearly intertwines $`\alpha _n`$ and $`\beta _n`$, in the sense that $`\beta _n(i(f))=i(\alpha _n(f))`$, and the union of the compact subgroups $`(1/n)𝒵`$ is dense in $`𝔸_f`$, so the union of the subalgebras $`\beta _{1/n}(i(C(𝒵)))`$ is dense in $`C_0(𝔸_f)`$, and the result follows from Theorem 2.1.1 and Theorem 2.2.1. ∎ Since the discrete multiplicative group $`_+^{}`$ acts by homotheties on the locally compact additive group $`𝔸_f`$, and since $`𝔸_f`$ is self-dual, we obtain another characterization of $`𝒞_{}`$ as a full corner in the group C\*-algebra of the semidirect product $`𝔸_f_+^{}`$. One should bear in mind, however, that the self duality of $`𝔸_f`$ is not canonical. ###### Corollary 3.2.2. Let $`e_𝒵C^{}(𝔸_f)`$ be the Fourier transform of $`1_𝒵C_0(𝔸_f)`$. Then $$𝒞_{}e_𝒵C^{}(𝔸_f_+^{})e_𝒵.$$ ###### Proof. The action of $`_+^{}`$ on $`𝔸_f`$ is by homotheties, which are group automorphisms, so $`C^{}(𝔸_f_+^{})`$ is isomorphic to the crossed product $`C^{}(𝔸_f)_\beta _+^{}`$. Moreover, the self-duality of the additive group of $`𝔸_f`$ satisfies $`rx,y=x,ry`$ for $`r_+^{}`$, thus $`C^{}(𝔸_f)`$ is covariantly isomorphic to $`C_0(𝔸_f)`$, so $`C^{}(𝔸_f)_\beta _+^{}`$ is isomorphic to $`C_0(𝔸_f)_\beta _+^{}`$, and the claim follows from Proposition 3.2.1. ∎ ###### Remark 3.2.3. One of the principles of noncommutative geometry advocates that if $`G`$ is a group acting on a space $`X`$, then the quotient space $`X/G`$ has a noncommutative version in the associated crossed product $`C_0(X)G`$, which is often more tractable. 
Accordingly, if we allow back in the all-important place at infinity which is left out from $`𝒜_f`$ and if we substitute $`_+^{}`$ by $`^{}`$, cf. \[5, Remarks 33\], then our Proposition 3.2.1 gives an explicit path leading from the Bost-Connes Hecke C\*-algebra to the space $`𝒜/^{}`$, on which the construction of is based.
no-problem/9911/physics9911057.html
ar5iv
text
# Solidification pipes: from solder pots to igneous rocks ## Abstract When a substance that shrinks in volume as it solidifies (for example, lead) is melted in a container and then cooled, a deep hole is often found in the center after resolidification. We use a simple model to describe the shape of the pipe and compare it with experimental results. In an experiment that involves atomic beams of thallium , it was noticed that a deep narrow hole was formed in the thallium that melted and resolidified. The hole that formed was at the center of the container and extended from the surface to nearly the bottom. It was surmised that the phenomenon was due to the change in volume of thallium during solidification. Such formation is sometimes known as “pipe” in metallurgy . In this note, we discuss a simple model of pipe formation and compare it with straightforward experiments that can be carried out in classrooms. Suppose a molten substance is cooling in a circular cylinder. Assuming that solidification occurs from the side walls of the container inwards in the radial direction and neglecting the surface tension effects, we should expect the liquid level to drop as a layer of solid is formed because of the higher density of the solid. Consider a newly solidified layer of thickness $`dr`$. Let $`\rho _s`$ and $`\rho _l`$ be the solid and liquid densities respectively, and let $`h(r)`$ be the height of solid as a function of radius $`r`$. Equating the mass before and after solidification, one obtains a differential equation: $$\pi r^2h\rho _l=\pi \left(rdr\right)^2\left(hdh\right)\rho _l+2\pi rhdr\rho _s.$$ (1) Keeping only first order differentials, we get: $$\frac{dh}{h}=2\left(\frac{\rho _s\rho _l}{\rho _l}\right)\frac{dr}{r}.$$ (2) With the boundary condition of $`h(R)=h_0`$, where $`R`$ and $`h_0`$ are the radius of the container and the initial liquid level respectively, the solution is: $$h=h_0\left(\frac{r}{R}\right)^{2\alpha },\alpha =\frac{\rho _s\rho _l}{\rho _l}0.$$ (3) This solution (plotted in Fig. 1 for the parameters of an experiment described below) gives a sharp hole in the center, the shape of which, for a given container and liquid volume, is determined by $`\alpha `$, the fractional density change. With this simple model in mind, we have performed solidification experiments with various substances (this time omitting the highly toxic thallium). The changes in densities upon solidification for these materials and for thallium are listed in Table I . As expected, pipes are observed in all materials tested except Wood’s metal (an alloy of 50% Bi, 25% Pb, 12.5% Cd and 12.5% Sn). Indeed, Wood’s metal has the property that the volume changes little during solidification. Note that for substances that expand upon solidification (water, bismuth, antimony and gallium), no ”anti-pipe” is formed because the liquid is pushed out by the expanded solidified material and assumes a horizontal level. Photographs of several experimental samples are shown in Figures 2-5. Figure 2 shows a sample of conventional solder alloy (60% lead, 40% tin) that was melted and poured into a glass beaker where it cooled and solidified. The sample was then cut through the center of the pipe, the resulting cross-section is shown in Figure 3. Comparing the shape of the pipe predicted by our simple model (Fig. 1) to the one observed experimentally (Figs. 2 and 3), one finds that, while the shape is reproduced qualitatively, there are also significant discrepancies. 
First, the pipe does not actually go to the bottom of the container as the model predicts. Second, the pipe in the experiment turns out to be much wider. Presumably this is because we have assumed that solidification occurs only from the sides (see below). In fact, when cooling from the surface and the bottom becomes significant, other scenarios in addition to pipe formation are possible. Fig. 4 shows a solidified lead sample, in which a layer of solid on the surface covers the pipe, turning it into a cavity. We can see that the cavity width is greater than the pipe width predicted from Equation 3. Qualitatively this can be understood from the requirement of mass conservation: the material solidified on the top does not have a chance to fill the pipe. To reduce the relative solidification rate from the surface, we attempted accelerated cooling from the sides by putting a beaker with molten solder into a water bath. This time, instead of a deep pipe, a surface recession shown in Fig. 5 was observed. To explain this observation, we modified the model by adding a term to account for solidification from the bottom. Let $`k`$ be the ratio of the solidification rate of the bottom to that of the sides. In order to keep the model as simple as possible, we assume $`k=h_{r=0}/R`$. (Note that this would not be a valid approximation for large $`k`$. If the solidification from the bottom is sufficiently rapid, the entire substance solidifies before solidification from the sides reaches $`r=0`$. In the cases discussed here, however, the liquid level is high and the cooling rate from the bottom is about the same as that from the sides, so the assumption can be safely granted.) The differential equation analogous to Equation 1, with the shorthand $`h^{}=hk(Rr)`$, is then: $`\pi r^2h^{}\rho _l=\pi \left(rdr\right)^2\left(h^{}kdrdh\right)\rho _l`$ (5) $`+2\pi rh^{}dr\rho _s+\pi \left(rdr\right)^2kdr\rho _s.`$ Simplifying, we get $$\frac{dh}{dr}=\frac{2\alpha \left(hkR\right)}{r}+3k\alpha .$$ (6) The solution is a long algebraic expression, which we omit here, but the solution plot (for $`k=1`$) is given in Fig. 6. Comparing it to the picture of the sample (Fig. 5), one can find close resemblance between the two. So far we have neglected the effect of surface tension (a simple discussion of surface tension is given in , for example). If wetting occurs at the solid-liquid interface of the solidifying substance, the surface of the liquid will not be flat, and the curvature of the surface will affect the final shape of the solid. However, it is reasonable to assume that this effect only becomes significant when the dimension of the contained liquid is ”capillary” — i.e., the radius of curvature of the surface near the wall, $`a`$, becomes comparable to the radius of the liquid surface, $`r`$. From dimensional analysis, we expect $`a^2\frac{\sigma }{\rho g}`$. Plugging in realistic parameters, for example, $`\rho _l=10^4kg/m^3`$(for metal), $`\sigma =0.5N/m`$, we obtain $`a2mm`$. This means that surface tension only becomes important near the center of the container. The effect should be observable at the bottom of the pipe. Qualitatively, we would expect the bottom to be more concave than predicted by our model due to the curved liquid surface, and this is indeed the case (see Fig. 3). In conclusion, we have discussed the mechanism of formation of surface pipes upon resolidification of materials with $`\rho _l/\rho _s<1`$. These prominent formations can often be observed in solder pots, candle containers, etc. 
They are important in metallurgy where they have to be taken into account in casting processes. Similar formations also occur in igneous rocks due to density changes of magma on solidification . However, it is often difficult to separate this effect from a large number of other factors that determine the structure and texture of igneous rocks. The authors are grateful to D. E. Brown, D. DeMille, J. Demouthe, D. F. Kimball, S. M. Rochester, V. V. Yashchuk for useful discussions. This work was supported by National Science Foundation under CAREER Grant No. PHY-9733479.
no-problem/9911/astro-ph9911151.html
ar5iv
text
# COSMOLOGICAL PARAMETERS FROM THE EIGENMODE ANALYSIS OF THE LAS CAMPANAS REDSHIFT SURVEY ## 1 Introduction The accurate measurement of cosmological parameters has been a long-standing challenge for cosmologists. Fortunately, the rapidly increasing size of redshift surveys is moving the estimation of many of these parameters out of the shot-noise limited regime. With these larger data sets, more precise measurements now depend on correspondingly more sophisticated methods of analysis. For example, in estimating the power spectrum of galaxy clustering, one of the greatest challenges is in properly accounting for the effects of a finite survey geometry and the effects of redshift distortions on the signal. The observed power spectrum is a convolution of the true power with the Fourier transform of the spatial window function of the survey, $`P_{\mathrm{obs}}(𝒌)=P_{\mathrm{true}}(𝒌^{})|W(𝒌𝒌^{})|^2d^3k^{}`$. One can attempt to deconvolve the true power spectrum or compare to convolved theoretical spectra, but in either case the survey geometry limits both the resolution and the largest wavelength for which an accurate measurement can be obtained. The standard methods for power spectrum estimation (e.g., Park et al. 1994; Feldman et al. 1994; Fisher et al. 1993) work reasonably well for data in a large, contiguous, three-dimensional volume, with homogeneous sampling of the galaxy distribution, and a weighting scheme optimized for the shot-noise dominated errors. Using these techniques, nearby wide-angle redshift surveys (CfA, SSRS, IRAS 1.2, QDOT) yield strong constraints on the power spectrum on scales approaching $`100h^1\mathrm{Mpc}`$. Tegmark et al. (1998) provides a detailed comparison of power spectrum estimation methods in cosmology. Because the uncertainty in the power spectrum depends on the number of independent modes at a given wavelength, constraints on larger scales require deeper surveys. Due to the difficulty of obtaining redshifts for fainter galaxies and limited telescope time, deep redshift surveys typically have complex geometry, e.g., deep pencil beams or slices. Unfortunately, the standard methods are not efficient when applied to data in oddly-shaped and/or disjoint volumes, or when the sampling density of galaxies varies greatly over these regions. Moreover, convolution of the true power with the complex window function causes power in different modes to be highly coupled. In other words, plane waves do not form an optimal eigenbasis for expansion of the galaxy density field sampled by such surveys. Other intrinsic problems arise due to redshift distortions and the effects non-linear fluctuation growth. As a consequence, advanced methods for power spectrum estimation are needed that optimally weight the data in each region of the survey, taking into account our prior knowledge of the nature of the noise and clustering in the galaxy distribution. These methods must also incorporate the effects of redshift-distortions to produce unbiased and robust measurements. In this paper we describe a technique that can take all these effects into consideration, and present the first results applied to real data. The method employs Karhunen-Loève (KL) eigenmodes and is based on the technique outlined by Vogeley and Szalay (1996) (see also Hoffman 1999), merged together with the analytic results of redshift distortions in wide angle redshift surveys by Szalay, Matsubara and Landy (1998). 
This analysis uses the largest publicly available redshift survey, the Las Campanas Survey, LCRS, (Shectman et al. 1996). ## 2 Construction of the Eigenbasis ### 2.1 A Short Overview of the Karhunen-Loeve Transform In a KL analysis, the survey data is represented as galaxy counts in a finite number of $`N`$ cells. In practice, the data vector $`𝒅`$ is defined as $$d_i=n_i^{1/2}(c_in_i),$$ (2.1) where $`c_i`$ is the observed number count of galaxies in the $`i`$-th cell, and $`n_i=c_i`$ is the expected number of galaxies, based on the number of fibers in the LCRS observation. The factor $`n_i^{1/2}`$ whitens the shot noise term (see Vogeley & Szalay 1996). This vector is then expanded over the KL basis functions $`𝚿_n`$ as $$𝒅=\underset{n}{}B_n𝚿_n.$$ (2.2) The cosmological information is contained in the amplitude and distribution of the coefficients $`B_n`$. The KL eigenmodes are uniquely determined by the following conditions: (a) Orthonormality, $`𝚿_n𝚿_m=\delta _{nm}`$, and (b) Statistical orthogonality, $`B_nB_m=B_n^{\mathrm{\hspace{0.17em}2}}\delta _{nm}`$. This is equivalent to the eigenvalue problem $`𝑹𝚿_n=\lambda _n𝚿_n`$ (Vogeley & Szalay 1996), where $$R_{ij}=d_id_j=n_i^{\mathrm{\hspace{0.17em}1}/2}n_j^{\mathrm{\hspace{0.17em}1}/2}\xi _{ij}+\delta _{ij}+\eta _{ij}.$$ (2.3) is the correlation matrix calculated for this geometry and choice of pixelization. $`\eta _{ij}`$ describes the additional noise terms arising from systematic effects, like extinction, or total number of fibers in a given area of the sky. Although there is a large degree of freedom in the choice of pixelization, it is advantageous to choose a pixelization with the lowest resolution appropriate to the question at hand to reduce computing time, which is proportional to $`N^3`$. Since this research focused on cosmological measurements in the linear regime, the survey volume was divided into cells about $`15\times 15\times 40(h^1\mathrm{Mpc})^3`$ in size. The cells are elongated in the direction of the line-of-sight, to reduce the Finger-of-God effects. The cell boundaries are based on polar coordinates, closely following the original tiling of the survey. This resulted in 1440 and 1503 cells for the Northern and Southern sets of slices in the LCRS, respectively. ### 2.2 Computation of the Correlation Matrix in Redshift Space Since the data resides in redshift space, the correlation matrix must be calculated in redshift space to decompose the signal properly. The difficulty here is in calculating the cell-averaged correlation function $`\xi _{ij}`$ in redshift space, which is used to construct the correlation matrix $`𝑹`$. Szalay, Matsubara & Landy (1998) derived an analytic expression for the two-point correlation function in redshift space without using the distant observer approximation. 
In this expansion, $`\xi ^{(s)}`$ is given by $`\xi ^{(s)}(𝒓_1,𝒓_2)=c_{00}\xi _0^{(0)}+c_{02}\xi _2^{(0)}+c_{04}\xi _4^{(0)}+\mathrm{};`$ $`\xi _L^{(n)}(r)={\displaystyle \frac{1}{2\pi ^2}}{\displaystyle 𝑑kk^2k^nj_L(kr)P(k)},`$ (2.4) where $`c_{00}=1+{\displaystyle \frac{2}{3}}\beta +{\displaystyle \frac{1}{5}}\beta ^2{\displaystyle \frac{8}{15}}\beta ^2\mathrm{cos}^2\theta \mathrm{sin}^2\theta ,`$ (2.5) $`c_{02}=\left({\displaystyle \frac{4}{3}}\beta +{\displaystyle \frac{4}{7}}\beta ^2\right)\mathrm{cos}2\theta P_2(\mu ){\displaystyle \frac{2}{3}}\left(\beta {\displaystyle \frac{1}{7}}\beta ^2+{\displaystyle \frac{4}{7}}\beta ^2\mathrm{sin}^2\theta \right)\mathrm{sin}^2\theta ,`$ (2.6) $`c_{04}={\displaystyle \frac{8}{35}}\beta ^2P_4(\mu ){\displaystyle \frac{4}{21}}\beta ^2\mathrm{sin}^2\theta P_2(\mu ){\displaystyle \frac{1}{5}}\beta ^2\left({\displaystyle \frac{4}{21}}{\displaystyle \frac{3}{7}}\mathrm{sin}^2\theta \right)\mathrm{sin}^2\theta ,`$ (2.7) and $`\beta =\mathrm{\Omega }_0^{0.6}/b`$, the usual parameter used in relating velocities to the density field, where $`b`$ is the bias parameter. The additional coefficients in a complete expansion ($`c_{11}`$, $`c_{13}`$, $`c_{20}`$, $`c_{22}`$) are small enough so that they can be ignored in this analysis. The geometry of any two points, $`𝒓_1`$ and $`𝒓_1`$ is parameterized by $`r=|𝒓_1𝒓_2|`$, $`\mathrm{cos}(2\theta )=\widehat{𝒓}_1\widehat{𝒓}_2`$, and $`\mu =\mathrm{cos}\theta (r_1r_2)/r`$. The cell-averaged correlation function is calculated by numerically integrating the above equations. This is done by an adaptive Monte-Carlo integration for adjacent pairs of cells, and by a second-order Taylor approximation for more distant pairs. $`\xi _{ij}={\displaystyle \frac{1}{v_iv_j}}{\displaystyle _{v_i}}{\displaystyle _{v_j}}𝑑v_i𝑑v_j\xi ^{(s)}(𝒓_1,𝒓_2).`$ (2.8) A CDM-type power spectrum with $`\mathrm{\Gamma }=0.2`$, $`\sigma _8^\mathrm{L}=1.0`$, and $`\beta =0.5`$ is used to construct the initial KL basis. This initial choice does not bias any subsequent results, since we adopted an iterative procedure for our likelihood analysis. After determining the eigensystem of the matrix $`𝑹`$, the KL modes are sorted by descending eigenvalue. The eigenvalues closely represent the signal-to-noise ratio of each mode. In addition, the KL modes with large eigenvalues correspond to the larger wavelength fluctuations (Vogeley & Szalay 1996). For further analysis we only use the first $`M`$ modes ($`M<N`$). This both reduces the necessary computations and selects modes where linear theory is more applicable. How do we select $`M`$? For the essentially two-dimensional geometry of the LCRS survey the 3D window function in $`k`$-space is a very elongated cigar, with the major axis perpendicular to the plane of the survey, while the 2D window function is extremely compact (Landy et al. 1996). The KL modes fill the available $`k`$-space as densely as possible, given the survey geometry. The long wavelength KL modes in our truncated basis correspond to the densely packed 2D modes, thus the cutoff wavelength is proportional to $`M^{1/2}`$, where $`M`$ is the number of modes in the truncated set. A true 3D survey would yield better results, since the cutoff wavelength would scale as $`M^{1/3}`$. On the one hand, one would like to select as many modes as possible, because then the cosmic variance of the measured parameters is smaller, due to the averaging over a larger set of random numbers. 
On the other hand, including lower signal-to-noise modes not only brings us closer to non-linear scales but also dilutes the signal-to-noise. It is non-trivial to balance this issue of cosmic variance versus non-linearity especially given the complex geometry of the LCRS. A natural method is to inspect the sorted window functions to see the relevant scales of each mode to single out the inappropriate scales. However, we found the resulting window functions had much too complex shapes for that purpose. This is because the Las Campanas Survey has a complex geometry and a selection function which varies from field to field and consequently makes eigenmodes form complex window functions in terms of Fourier modes. In this paper, we choose the maximum number of modes $`M`$ which reproduces reasonable estimates of cosmological parameters in analyzing mock catalogs drawn from $`N`$-body simulations in which true values of parameter are known. ## 3 Likelihood Analysis The likelihood function (LF) is obtained from the expression $`|det𝑪_{\mathrm{model}}|^{1/2}\mathrm{exp}\left[{\displaystyle \frac{1}{2}}𝑩^T𝑪_{\mathrm{model}}^1𝑩\right],`$ (3.9) where $`𝑪_{\mathrm{model}}`$ is the covariance matrix computed from our theoretical model hypotheses for a set of parameters $`\mathrm{\Pi }(\beta ,\sigma _8^\mathrm{L},\mathrm{\Gamma })`$, rotated to the KL basis. This matrix is very close to diagonal. For $`i,j=1,\mathrm{},M`$, $$(C_{\mathrm{model}})_{ij}=B_iB_j_{\mathrm{model}}=𝚿_i𝑹_{\mathrm{model}}𝚿_j,$$ (3.10) In practice, the correlation matrix can be expressed as a linear combination of several matrices, proportional to powers of $`\beta `$ and $`\sigma _8^\mathrm{L}`$. In this analysis, $`\sigma _8^\mathrm{L}`$ is always understood as the linear amplitude of the fluctuation spectrum. The shape of the respective correlation functions only depends on $`\mathrm{\Gamma }`$, therefore we computed the matrices for each value of $`\mathrm{\Gamma }`$, but then computed their linear combinations for the various values of $`\beta `$ and $`\sigma _8^\mathrm{L}`$. The calculations are still quite computationally intensive, and were only possible by using efficient numerical algorithms. Details of our numerical analysis will be described in a longer, more technical paper (Matsubara et al. 1999). This paper will also discuss other, higher-dimensional parameterizations of the power spectrum, like the use of band-power amplitudes. Our original fiducial choice of parameters determined the initial KL basis. After the maximum of the LF is determined, the KL basis is recomputed at that point, and the likelihood analysis repeated. In the subsequent section, the results are reported as both LF contours, and marginalized one-dimensional LF. ## 4 Analysis of N-body Simulations The N-body simulations were kindly supplied by C. Park and are the same ones that have been used in earlier analyzes of the LCRS (Landy et al. 1996, Lin et al. 1996). The simulation is an open CDM model with $`h=0.5`$, $`\mathrm{\Omega }_0=0.4`$, and $`b=1`$. The model was normalized so that $`\sigma _8=1`$. Thus, in this analysis, $`\beta =0.577`$, $`\mathrm{\Gamma }=0.2`$. Determination of $`\sigma _8^\mathrm{L}`$ from the data is problematic due the nonlinear effects and the finiteness of the volume. The three-dimensional LF for $`M=100,150,200`$ was computed in a $`21^3`$ grid in parameter space $`\mathrm{\Pi }`$. The LF was then marginalized with respect to each parameter, $`\beta ,\mathrm{\Gamma },\sigma _8^\mathrm{L}`$. 
The resulting discrete LF was fitted by Gaussian curve, in which the center and the variance are identified with our estimate of the parameters and its 1$`\sigma `$ error bars. In Figure 1, the resulting estimates are plotted. Experimenting showed that iterating the basis did not change the estimation, thus for the model data we did not iterate in this figure. The N-body results show that there is excellent agreement with the shape parameter $`\mathrm{\Gamma }`$, fairly good agreement with $`\beta `$. It is difficult to determine what fiducial value should be used for $`\sigma _8^\mathrm{L}`$ in the mock catalogs for comparison. The major problem arises from the fact that the small-scale resolution of the analysis, $`20h^1\mathrm{Mpc}`$, is over twice the scale of non-linear clustering. This makes direct analytical calculations problematic since we never truly sample $`\sigma _8`$. If it is assumed that the analysis here is accurate, $`\sigma _8^\mathrm{L}`$ is expected to be under-estimated by about $`15\%`$ of the true $`\sigma _8`$. The number of modes $`M`$ to use in likelihood analysis mainly affects the estimation of the error bars. Thus we can decide from Figure 1, which number $`M`$ should be used in analyzing the actual LCRS data. One can see that a choice of $`M=150`$ is reasonable for the parameter estimation. ## 5 Discussion of Results from the LCRS We have calculated the LF in the 3D parameter space for both the Northern and Southern samples separately, then for the combined set. We used a truncated base with $`M=150`$ that was determined from experience with the N-body simulations as described above. In Figure 2, several sections of the three-dimensional LF and the marginalized LF from the actual LCRS data are shown. In Table 1, the best fit model parameters are summarized. The number of iterations is three. The results of the first two iterations are also consistent with the final estimation in Table 1. $`\sigma _8^\mathrm{L}`$ is consistent with expectations from simulations and indicates that true $`\sigma _8`$ is approximately one. The estimate of the shape parameter, $`\mathrm{\Gamma }=0.16\pm 0.10`$, is somewhat lower than that found in other analyzes although consistent within errors as derived from the simulations. For example, Feldman et al. (1994) find $`\mathrm{\Gamma }=0.20`$ and Landy et al. (1996) find $`\mathrm{\Gamma }=0.24`$ below $`75h^1\mathrm{Mpc}`$. If $`b1`$, the parameter values in the Table 1 indicate a low value of $`\mathrm{\Omega }_0\beta ^{1.67}\stackrel{<}{}0.5`$. Here, it should be noted the limitations of these results with respect to a CDM three-dimensional parameterization of the shape and amplitude of the power spectrum. Earlier work by Broadhurst et al. (1989) and Landy et al. (1996) have shown a perturbation of the power spectrum on $`100h^1\mathrm{Mpc}`$ scales. This ’bump’ in the power spectrum cannot be resolved by such a parameterization and would lead to an under-estimation of $`\mathrm{\Gamma }`$ as the fit finds an average shape. The other models of the large-scale structure, including PIB or defect models, can be also studied using the present formalism with appropriate parameterizations of power spectrum. The method we have developed here can be straightforwardly applied to redshift data of any geometry and of any selection function. By restricting our analysis to the large-wavelength modes, our method does not depend much on correction for nonlinear effects. 
The error bars of the results clearly show the advantages offered by surveys of larger volume and more isotropic geometry, like SDSS, which would increase the number of independent large scale modes substantially. We would like to thank the LCRS collaboration for creating the largest publicly available redshift survey to date. TM was supported by JSPS Postdoctoral Fellowships for Research Abroad. SL would like to recognize support from the Jeffress Memorial Trust and NSF Grant AST-9900835, AS has been supported by NSF AST-9802980 and NASA NAG5-3503.
no-problem/9911/cond-mat9911172.html
ar5iv
text
# Maximum velocity of a fluxon in a stack of coupled Josephson junctions ## I Introduction Experimental and theoretical studies of magnetic flux quanta (fluxons) in stacks of inductively coupled long Josephson junctions (LJJ’s) have recently attracted a great deal of attention . The interest to the stacks is stimulated both by the fact that the high-$`T_c`$ superconductors, on the atomic level, have a naturally layered structure that is tantamount to an intrinsic Josephson stack , and by the development of the (Nb-Al-AlO<sub>x</sub>)<sub>N</sub>-Nb low-$`T_c`$ technology , which is demonstrated fabrication of artificial stacks of up to 28 LJJ’s, with a parameter spread between them $`<10\%`$ . A fundamental characteristic of a single LJJ is its Swihart velocity $`\overline{c}_0`$, i.e. , a minimum phase velocity of the electromagnetic (Josephson plasma) waves propagating in the superconducting micro-strip structure . Simultaneously, $`\overline{c}_0`$ is the maximum velocity for fluxons that correspond to topological solitons of the sine-Gordon model describing LJJ. In a system of $`N`$ linearly coupled junctions, the dispersion curve has $`N`$ branches corresponding to different modes of the linear electromagnetic waves propagating in the system. Accordingly, there are $`N`$ split (different) Swihart velocities $`\overline{c}_n^{(N)}`$, $`n=1,2,\mathrm{},N`$ (see, Refs. ) such that $`\overline{c}_n^{(N)}<\overline{c}_{n+1}^{(N)}`$. For example, in the simplest case of two coupled junctions, there are two Swihart velocities $`\overline{c}_1^{(2)}\overline{c}_{}`$ and $`\overline{c}_2^{(2)}\overline{c}_+`$ ($`\overline{c}_{}<\overline{c}_0<\overline{c}_+`$), which correspond, respectively, to the system’s in-phase and out-of-phase Josephson plasma wave eigenmodes. In the $`N`$-fold stack, we also use notation $`\overline{c}_{}\overline{c}_1^{(N)}`$ and $`\overline{c}_+\overline{c}_N^{(N)}`$, which are the smallest and largest Swihart velocities. An important issue is to study the conditions for the existence and stability of a single-fluxon state in the stacked system. It is implied that the fluxon is trapped in one junction and its screening currents spread over neighboring junctions. The fluxon induces, through the magnetic coupling, “images” in adjacent layers, so that a full solution for the single fluxon state in the stack includes both the core topological soliton in the central layer and its non-topological images in the other layers. Such a fluxon state we denote as $`[0|\mathrm{}|0|1|0|\mathrm{}|0]`$. As mentioned above, in a standard single-barrier LJJ a fully stable fluxon with a velocity exceeding the Swihart velocity cannot exist. It was first suggested, and recently demonstrated theoretically and experimentally, that a fluxon may nevertheless move in a multilayer system with a velocity which exceeds the minimum phase velocity of the plasma waves. It is important to find the maximum fluxon velocity $`u_{\mathrm{max}}`$ and its dependence on the parameters of the system. A possibility of having $`u_{\mathrm{max}}>\overline{c}_{}`$ is especially interesting, as it implies steady motion of the fluxon at $`u>\overline{c}_{}`$ with Cherenkov radiation tail of Josephson plasma waves behind it. This issue, which is of evident physical interest, is the main subject of the present work. The possible range of $`u_{\mathrm{max}}`$ was not investigated systematically in Refs. . 
The only prediction which has been made is that $`u_{\mathrm{max}}>\overline{c}_{}`$ for asymmetric junctions in a two-fold stack (for the case when the fluxon is trapped in LJJ with lower $`j_c`$), and that $`u_{\mathrm{max}}>\overline{c}_{}`$ always holds in $`N`$-fold stacks of identical junctions. The exact value of $`u_{\mathrm{max}}`$ has not been found until now. We will discuss three cases which differ by the number $`N`$ of the coupled LJJ’s. These cases are $`N=2`$, $`3`$, and $`\mathrm{}`$. For $`N=2`$ and $`N=3`$ we will consider a system of asymmetric LJJ’s with different critical currents $`j_c`$, in which the maximum velocity is different depending on where the fluxon is trapped. In the case $`N=3`$ and $`N=\mathrm{}`$ we will assume that the core topological soliton is placed in the central junction which will be labeled by $`0`$, while LJJ’s above and below the central one will be labeled by $`1,2,\mathrm{},\mathrm{}`$ and $`1,2,\mathrm{},\mathrm{}`$, respectively. In section II we formulate the model, section III displays results of full PDE numerical simulations of the asymmetric model for the cases $`N=2`$ and $`N=3`$. To choose an analytical form for the fitting function which predicts the dependence $`u_{\mathrm{max}}`$ on junction parameters, in section IV we use the variational approximation (VA). Although VA does not produce very accurate quantitative results, it predicts reasonable functional dependence for $`u_{\mathrm{max}}`$. Section V concludes the work and summarizes the obtained results for different $`N`$. ## II The Model A model for $`N`$-fold stack of long Josephson junctions is well known : $`\left(\varphi _n\right)_{xx}`$ $`=`$ $`(\varphi _n)_{tt}+\mathrm{sin}\varphi _n+\gamma +\alpha (\varphi _n)_t`$ (2) $`S\left[(\varphi _{n1})_{tt}+\mathrm{sin}\varphi _{n1}+\alpha (\varphi _{n1})_t+(\varphi _{n+1})_{tt}+\mathrm{sin}\varphi _{n+1}+\alpha (\varphi _{n+1})_t+2\gamma \right],`$ where $`\varphi _n`$ is the Josephson phase across the $`n`$-th LJJ, $`n=N/2\mathrm{}N/2`$, the coordinate $`x`$ and time $`t`$ are measured in units of the Josephson length $`\lambda _J`$ and inverse plasma frequency $`\omega _p^1`$ of single-layer LJJ, $`S<0`$ is a dimensionless coupling parameter, $`\alpha `$ is a dissipative constant, and $`\gamma `$ is the density of the bias current flowing through the stack. We consider the most natural case when the bias current is the same in all layers. The model (2) pertains to a stack consisting of an infinite number of junctions, or to an exotic configuration, in which the stack is closed in a loop in $`z`$ direction ($`\varphi _{N+1}\varphi _1`$). In practice, the equations for the edge (top and bottom) junctions include only a half of the coupling terms corresponding to the neighboring LJJ’s. Note also that the model implies that all the junctions are identical. In particular, the Swihart velocity of each uncoupled LJJ, is $`\overline{c}_01`$ in the notation adopted hereafter. 
In the case of $`N=2`$, a relevant version of the model (2), which takes into account the difference in the critical currents of the LJJ’s, writes as : $`{\displaystyle \frac{\left(\varphi _0\right)_{xx}}{1S^2}}\left(\varphi _0\right)_{tt}\mathrm{sin}\varphi _0{\displaystyle \frac{S\left(\varphi _1\right)_{xx}}{1S^2}}`$ $`=`$ $`\alpha \left(\varphi _0\right)_t+\gamma ;`$ (3) $`{\displaystyle \frac{\left(\varphi _1\right)_{xx}}{1S^2}}\left(\varphi _1\right)_{tt}{\displaystyle \frac{\mathrm{sin}\varphi _1}{J}}{\displaystyle \frac{S\left(\varphi _0\right)_{xx}}{1S^2}}`$ $`=`$ $`\alpha \left(\varphi _1\right)_t+\gamma ,`$ (4) where $`J=j_{c0}/j_{c1}`$ is the ratio of the critical currents of the two junctions. When considering the model (3) and (4 ) below, we will place the fluxon into the LJJ whose phase is denoted as $`\varphi _0`$. Discussing the case of $`N=3`$, we impose the symmetry condition $`\varphi _1\varphi _1`$, which is natural when the fluxon moves in the middle layer. Thus, we can write Eqs. (2) in the form $`{\displaystyle \frac{1}{12S^2}}\left(\varphi _1\right)_{xx}\left(\varphi _1\right)_{tt}{\displaystyle \frac{\mathrm{sin}\varphi _1}{J}}{\displaystyle \frac{S}{12S^2}}\left(\varphi _0\right)_{xx}`$ $`=`$ $`\gamma \alpha \left(\varphi _1\right)_t;`$ (5) $`{\displaystyle \frac{1}{12S^2}}\left(\varphi _0\right)_{xx}\left(\varphi _0\right)_{tt}\mathrm{sin}\varphi _0{\displaystyle \frac{2S}{12S^2}}\left(\varphi _1\right)_{xx}`$ $`=`$ $`\gamma \alpha \left(\varphi _0\right)_t.`$ (6) Note the factor of 2 in the last term of the left-hand side of Eq. (6). For further theoretical treatment of the case $`N=\mathrm{}`$, we will assume that the coupling parameter $`S`$ is small. This assumption will allow us to write a Lagrangian corresponding to the dynamical equations (2), which is a key ingredient of VA. In the case of small $`S`$, neglecting terms $`S^2`$ and smaller, one can easily transform Eqs. (2) into a simplified form: $$\left(\varphi _n\right)_{xx}\left(\varphi _n\right)_{tt}\mathrm{sin}\varphi _nS\left[\left(\varphi _{n1}\right)_{xx}+\left(\varphi _{n+1}\right)_{xx}\right]=\alpha (\varphi _n)_t\gamma ,$$ (7) which we will use for further analysis of the $`N=\mathrm{}`$ case. A fluxon steadily moving at a constant velocity $`u`$ can be described by a solution to the above equations (without the $`\alpha `$ and $`\gamma `$ terms) which depends on the single variable $`\xi =C(xut)`$. The constant $`C`$ is introduced for renormalization purposes and will be different in the cases $`N=2`$, $`3`$, and $`\mathrm{}`$. Substituting $$\xi \sqrt{\frac{1S^2}{S}}(xut)$$ (8) into Eqs. (3) and (4) and neglecting the $`\alpha `$ and $`\gamma `$ terms (which must be kept if one aims to find an equilibrium velocity determined by the balance between the losses and bias current, that is not our objective in this work), we get: $`\sigma ^{(2)}\varphi _0^{\prime \prime }+\varphi _1^{\prime \prime }\mathrm{sin}\varphi _0`$ $`=`$ $`0,`$ (9) $`\sigma ^{(2)}\varphi _1^{\prime \prime }+\varphi _0^{\prime \prime }{\displaystyle \frac{\mathrm{sin}\varphi _1}{J}}`$ $`=`$ $`0.`$ (10) The parameter $`\sigma ^{(2)}`$ (the subscript $`2`$ implies that the definition is adjusted to the case $`N=2`$), that will be used instead of the velocity, is defined as: $$\sigma ^{(2)}\frac{1\left(1S^2\right)u^2}{S}.$$ (11) For the 3-fold stack, introduction of the traveling coordinate $$\xi \sqrt{\frac{12S^2}{S}}(xut)$$ (12) transforms Eqs. 
(5) and (6) into $`\sigma ^{(3)}\varphi _1^{\prime \prime }+\varphi _0^{\prime \prime }{\displaystyle \frac{\mathrm{sin}\varphi _1}{J}}`$ $`=`$ $`0,`$ (13) $`\sigma ^{(3)}\varphi _0^{\prime \prime }+2\varphi _1^{\prime \prime }\mathrm{sin}\varphi _0`$ $`=`$ $`0,`$ (14) where $$\sigma ^{(3)}\frac{1\left(12S^2\right)u^2}{S}.$$ (15) And, finally, for the case $`N=\mathrm{}`$ the substitution of $$\xi (xut)/\sqrt{S}$$ (16) into Eqs. (7) yields $$\sigma ^{(\mathrm{})}\varphi _n^{\prime \prime }+\varphi _{n1}^{\prime \prime }+\varphi _{n+1}^{\prime \prime }\mathrm{sin}\varphi _n=0,$$ (17) where $$\sigma ^{(\mathrm{})}\frac{1u^2}{S}.$$ (18) Thus, from the mathematical viewpoint, the issue is to look for solutions of Eqs. (9) and (10), or (13) and (14), or of Eqs. (17) that describe the stationary fluxon. The eventual objective is finding the maximum velocity $`u_{\mathrm{max}}`$, or the minimum value of the parameter $`\sigma `$, beyond which the fluxon solution does not exist. The fluxon solution is defined by the following boundary conditions: $`\varphi _0(\mathrm{})=0`$, $`\varphi _0(+\mathrm{})=2\pi `$, and $`\varphi _{n0}(\pm \mathrm{})=0`$. To conclude this section, we recall that the set of the split Swihart velocities can be found in an exact form from the linearized version of Eqs. (2), setting there $`\gamma =\alpha =0`$. These velocities are given by the following expression : $$c_n^{(N)}=\frac{1}{\sqrt{12S\mathrm{cos}\left({\displaystyle \frac{\pi n}{N+1}}\right)}},n=1,2,\mathrm{},N.$$ (19) It is also useful to find the values of the parameter $`\sigma `$ corresponding to the minimum velocity $`\overline{c}_{}`$ and the maximum velocity among all the velocities given by Eq. (19). From Eqs. (11), (15) and (18), using Eq. (19), we find $`\sigma ^{(2)}(\overline{c}_\pm )`$ $`=`$ $`1,`$ (20) $`\sigma ^{(3)}(\overline{c}_\pm )`$ $`=`$ $`\sqrt{2},`$ (21) $`\sigma ^{(\mathrm{})}(\overline{c}_\pm )`$ $`=`$ $`2,`$ (22) while zero velocity corresponds to $`\sigma ^{(2,3,\mathrm{})}=1/S`$. We can immediately find the values of $`\sigma _{\mathrm{min}}`$ for some specially selected values of $`J`$. Let us set $`\sigma ^{(2)}=1`$ in Eqs. (9) and (10). This reduces Eqs. (9) and (10) to a simple algebraic equation: $$\mathrm{sin}\varphi _0=\frac{\mathrm{sin}\varphi _1}{J}.$$ (23) The state with a fluxon only in the first junction ($`\varphi _1`$) can be realized only for $`J<1`$. This means that $$\sigma _{\mathrm{min}}^{(2)}(J=1)=1.$$ (24) In the same fashion, setting $`\sigma ^{(3)}=\sqrt{2}`$ in Eqs. (13) and (14), we get $$\mathrm{sin}\varphi _0=\frac{\sqrt{2}}{J}\mathrm{sin}\varphi _1.$$ (25) This means that $$\sigma _{\mathrm{min}}^{(3)}(J=\sqrt{2})=\sqrt{2}.$$ (26) A similar relation can be obtained for any $`N`$-fold stack. Using Eqs. (7), we get the following result. The value $`\sigma _{\mathrm{min}}^{(2N1)}`$ in the stack consisting of $`(2N1)`$ LJJ’s is given by the maximum eigenvalue of the $`N\times N`$ matrix, $$\left(\begin{array}{ccccc}\hfill 0& \hfill 1& & & \\ \hfill 1& \hfill 0& \hfill 1& & \\ & \hfill \mathrm{}& \hfill \mathrm{}& \hfill \mathrm{}& \\ & & \hfill 1& \hfill 0& \hfill 1\\ & & & \hfill 2& \hfill 0\end{array}\right).$$ (27) This matrix has the size $`N\times N`$ due to the symmetry which, as we suppose, is present when the fluxon moves in the middle layer. In this case $`2N1`$ coupled equations can be reduced to $`N`$ equations in the same way as we reduced 3 equations for $`N=3`$ to only 2 independent equations. 
As a result of this reduction, the $`N`$-th equation, which describes the junction containing the fluxon and corresponds to the last row of matrix (27), contains the factor of 2. The values of $`\sigma _{\mathrm{min}}`$ calculated for some $`N`$ are summarized in Table I. One can see that as $`N\mathrm{}`$, $`\sigma _{\mathrm{min}}2`$. Similar to the above considerations, the single-fluxon state exists only if $`J<\sigma _{\mathrm{min}}`$. Note that this is, actually, quite a noteworthy result, which means that, for any $`N>2`$, the steady motion of the fluxon accompanied by the emission of the Cherenkov radiation is possible at $`J=1`$, i.e., in the uniform stack of identical LJJ’s. Another simple case is obtained by setting $`J=0`$. In this case Eq. (10) or Eq. (13) yield $`\varphi _1=0`$, hence, the remaining equation \[(9) or (14), respectively\] is nothing else but the single sine-Gordon equation, which has proper solitonic solution (giving the $`[1|0]`$ state) only for positive $`\sigma `$. Therefore $$\sigma _{\mathrm{min}}(0)=0,\text{ for any }N.$$ (28) Physically this means that at large disbalance of critical currents the maximum velocity of fluxon is approaching $`1\overline{c}_0`$. ## III Numerical Results In the region $`u>\overline{c}_{}`$, i.e. at fluxon velocities larger than the lowest phase velocity of plasma waves, the phase dynamics in a stack is quite complex. Analytically, it can not be reduced just to looking for solutions of unperturbed equation in the form $`\varphi =\varphi (xut)`$. Therefore, direct numerical simulations are necessary to get an insight into the problem. The PDE’s (3) and (4) for $`N=2`$ or (6) and (5) for $`N=3`$ were solved numerically using an explicit method \[expressing $`\varphi ^{A,B}(t+\mathrm{\Delta }t)`$ as a function of $`\varphi ^{A,B}(t)`$ and $`\varphi ^{A,B}(t\mathrm{\Delta }t)`$\], and treating $`\varphi _{xx}`$ with a five-point, $`\varphi _{tt}`$ with a three-point and $`\varphi _t`$ with a two-point symmetric finite-difference scheme. Equations were supplemented by the periodic boundary conditions, $`\varphi _0(x+\mathrm{})\varphi _0(x)+2\pi `$, and $`\varphi _1(x+\mathrm{})\varphi _1(x)`$, with the period $`\mathrm{}=200`$, so that the fluxon moves in the quasi-infinite system. Numerical stability was checked by doubling the spatial and temporal discretization steps $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }t`$ and checking its influence on the fluxon profiles and current-voltage curves (IVC). The values used for the simulations were $`\mathrm{\Delta }x=0.025`$ and $`\mathrm{\Delta }t=0.00625`$. To calculate the voltage in each point of the IVC, the instant voltage was averaged over the progressively increasing time intervals and also averaged over the length of the system. The following convergence criterion was adopted: the difference between the voltages obtained by averaging over two successive time intervals must not exceed $`\delta V=510^4`$. When the average voltage corresponding to the current $`\gamma `$ is found, the current is increased by a small amount $`\delta \gamma =0.001`$ to calculate the voltages at the next point of the IVC. We use the phases (and their derivatives) attained in the previous point of the IVC as the initial conditions for the next point. By gradually increasing $`\gamma `$ and, thus, $`u`$, we encountered a maximum value $`u_{\mathrm{max}}`$ of $`u`$ above which the single-fluxon mode becomes unstable, and the system switches into a resistive state. 
The simulations of IVC’s were carried out for the damping values $`\alpha =0.02`$, $`0.04`$, $`0.1`$, in order to understand the effect of the dissipation on $`u_{\mathrm{max}}`$. In experiment, at low temperatures, $`\alpha `$ can be about $`0.01`$ and less. It turns that the simulations of the system with $`\alpha <0.02`$ incurs unaffordable computational expenses. Therefore, we focus on higher values of $`\alpha `$ and then will try to extrapolate $`u_{\mathrm{max}}`$ (if possible) to the ideal case $`\alpha =0`$, which is of fundamental theoretical importance. Here, we will only display the numerical results obtained for the cases $`N=2`$ and $`N=3`$. Examples of IVC’s for $`N=2`$ and $`3`$ and different values of $`J`$ are shown in Fig. 1 and Fig. 2. Strong effect of the ratio $`J`$ between the critical currents of the junctions on $`u_{\mathrm{max}}`$ can be learned from Fig. 1. In the case $`J=2`$, i.e. , when the fluxon is located in the junction with the larger critical current, the fluxon’s maximum velocity is smaller than the lowest Swihart velocity $`\overline{c}_{}`$. In addition, one can observe a back bending of the IVC close to its tip. The last point on back bent IVC marked as $`[1|+1,1]`$ corresponds to the state when fluxon-antifluxon pair stretched out in the idle junction with lower $`j_c`$. In the case $`J=0.5`$, i.e., when the fluxon moves in the junction with the smaller critical current, the fluxon’s maximum velocity is larger than $`\overline{c}_{}`$. We note that the latter case is nontrivial and of the particular interest since, as it was already mentioned above, the fluxon can propagate stably, emitting the Cherenkov radiation. Fig. 2 shows IVC’s of the $`[0|1|0]`$ state for $`J=1`$ and different values of $`\alpha `$. As long as we consider 3-fold stack with $`J<\sqrt{2}`$, the IVC bends to the right into the Cherenkov region. The fluxon motion in such a case is very similar to the case $`J<1`$, $`N=2`$. Fig. 2 demonstrates that the damping not only changes the slope of the IVC at low velocity, but also affects $`u_{\mathrm{max}}`$. The numerically found values $`u_{\mathrm{max}}`$ for different values of $`J`$ are transformed into the corresponding values of the parameter $`\sigma _{\mathrm{min}}`$ according to Eqs. (11) and (15). The values of $`\sigma _{\mathrm{min}}`$ for $`N=2`$ and $`N=3`$ are plotted in Fig. 3 and Fig. 4, respectively. Note, that the numerically obtained dependence $`\sigma _{\mathrm{min}}(J)`$ is in agreement with our analytical predictions given by Eqs. (24), (26) and (28). These three conditions work better for small $`\alpha `$ which is quite natural since they were derived from the unperturbed ($`\alpha =\gamma =0`$) equations. We remind that predictions (24), (26) and (28) are strict results and must be considered exact in the framework of inductive-coupling model. We found it interesting that the dependence of the maximum current $`\gamma _{\mathrm{max}}=\gamma (u_{\mathrm{max}})`$ on $`J`$ is almost linear with $`\gamma _{\mathrm{max}}/J=0.476`$, 0.531, 0.623 for $`\alpha =0.02`$, 0.04, 0.10, respectively. In the following section we suggest a functional dependence for $`\sigma _{\mathrm{min}}(u)`$ and will compare this dependence with numerical results presented in Fig. 3 and Fig. 4. ## IV The Variational Approximation The purpose of this section is to find an analytical dependence which describes the simulation data obtained in the previous section. 
We will present the analysis using the VA only for $`N=2`$; other cases can be considered similarly. The VA is based on the fundamental fact that Eqs. (9) and (10) admit a variational representation with the Lagrangian $`L=\int _{-\infty }^{+\infty }\mathcal{L}_2\,d\xi `$, where the Lagrangian density corresponding to Eqs. (9) and (10) is $$\mathcal{L}_2=\frac{1}{2}\sigma \left(\varphi _0^{\prime }\right)^2+\frac{1}{2}\sigma \left(\varphi _1^{\prime }\right)^2+\varphi _0^{\prime }\varphi _1^{\prime }+\left(1-\mathrm{cos}\varphi _0\right)+\frac{1}{J}\left(1-\mathrm{cos}\varphi _1\right).$$ (29) A crucial step in the application of the VA is to adopt an ansatz, i.e., a trial form of the solution. The ansatz is then inserted into the Lagrangian, and the integration over $`\xi `$ given by Eq. (8) must be performed explicitly. This produces an effective Lagrangian as a function of the free parameters that the ansatz contains. Finally, the values of these parameters are found from the condition that they realize an extremum of the effective Lagrangian. Thus, it is necessary to select a tractable ansatz (one that admits an analytical calculation of the integrals in the expression for the Lagrangian) which satisfies the above boundary conditions for the fluxon. Here, both when selecting the ansatz and when considering the boundary conditions, we ignore the above-mentioned nonvanishing Cherenkov oscillatory tails (note that the tail is formally infinitely long in the ideal model with $`\alpha =\gamma =0`$, while in the damped system the tail decays exponentially far from the fluxon's “body”). The simplest and, in fact, the only practical ansatz is $`\varphi _0(\xi )`$ $`=`$ $`4\mathrm{arctan}\mathrm{exp}(\lambda \xi ),`$ (30) $`\varphi _{n\ne 0}`$ $`=`$ $`B{\displaystyle \frac{\mathrm{sinh}(\lambda \xi )}{\mathrm{cosh}^2(\lambda \xi )}}.`$ (31) Here, $`\lambda `$ and $`B`$ are free real parameters. Note that $`\lambda `$ can always be defined to be positive, while $`B`$ may be positive or negative. The expression (31) was chosen in such a way that it describes the “image” profile qualitatively well in comparison with the simulation and analytical results. Still, the effective Lagrangian cannot be immediately calculated with the above ansatz. A further necessary simplification is to assume that the amplitude $`B`$ of the fluxon's images in the noncore layers is sufficiently small, so that the nonlinear term $`\mathrm{sin}\varphi _1`$ in Eq. (10) may be linearized. In other words, the term $`\left(1-\mathrm{cos}\varphi _1\right)`$ in the Lagrangian density (29) is replaced by $`\frac{1}{2}\varphi _1^2`$. The final expression for the effective Lagrangian is $$L=4\sigma \lambda +\frac{4}{\lambda }+\frac{7}{15}\sigma B^2\lambda +\frac{1}{3}\frac{B^2}{J\lambda }+\frac{4}{3}B\lambda .$$ (32) The variation of (32) with respect to $`B`$ leads to an equation that allows us to eliminate $`B`$, i.e., $$B=-\frac{10J\lambda ^2}{\left(5+7J\sigma \lambda ^2\right)}.$$ (33) The remaining algebraic equation for the fluxon's inverse width $`\lambda `$ is $$7J^2\sigma \left(21\sigma ^2-5\right)\lambda ^6-3J\left[25-7\sigma ^2\left(10-7J\right)\right]\lambda ^4-15\sigma \left(14J-5\right)\lambda ^2-75=0.$$ (34) This equation has one or three positive roots for $`\lambda ^2`$. The first two solutions exist and disappear simultaneously at some value of $`\sigma `$. The third solution disappears by diverging at $`\sigma _{\mathrm{min}}=\sqrt{5/21}`$, as can be seen from the form of the coefficient in front of the $`\lambda ^6`$ term.
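Since Eq. (34) is a cubic in $`\lambda ^2`$, its root structure is easy to inspect numerically. The following sketch, which assumes nothing beyond Eq. (34) itself, counts the positive roots as $`\sigma `$ is lowered at fixed $`J`$:

```python
import numpy as np

def lambda2_roots(sigma, J):
    """Positive real roots for lambda^2 of the cubic Eq. (34)."""
    coeffs = [7*J**2*sigma*(21*sigma**2 - 5),
              -3*J*(25 - 7*sigma**2*(10 - 7*J)),
              -15*sigma*(14*J - 5),
              -75.0]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

J = 0.1                                   # J < 1/7, so the two physical roots can exist
for sigma in (0.60, 0.52, 0.50, 0.45):
    print(f"sigma = {sigma}: lambda^2 roots = {lambda2_roots(sigma, J)}")
# As sigma is lowered the two smallest positive roots merge and vanish,
# leaving only the large third root, which itself diverges as sigma
# approaches sqrt(5/21) and disappears below it.
```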
We drop the third solution from consideration, since it gives a constant $`\sigma _{\mathrm{min}}(J)`$, which is unphysical in view of Eqs. (24) and (28). To find the minimum value of $`\sigma `$ at which the two smallest positive roots disappear, we write down two conditions: the function $`f(\lambda )`$ in (34) touches the $`\lambda `$-axis, and its derivative $`f^{\prime }(\lambda )`$ vanishes. These two equations determine $`\sigma _{\mathrm{min}}`$ and $`\lambda (\sigma _{\mathrm{min}})`$ for a given value of $`J`$. Eliminating, consecutively, the terms $`\lambda ^6`$, $`\lambda ^4`$ and $`\lambda ^2`$ yields an equation which determines the dependence of $`\sigma _{\mathrm{min}}`$ on $`J`$: $$56\left(7J+5\right)^3\sigma _{\mathrm{min}}^4-\left(3675J^2+36750J+1875\right)\sigma _{\mathrm{min}}^2+7500J=0.$$ (35) A solution to this equation is $$\sigma _{\mathrm{min}}^2=\frac{3675J^2+36750J+1875\pm 125\sqrt{15}\sqrt{(15+7J)\left(1-7J\right)^3}}{112(5+7J)^3}.$$ (36) One can see that this solution exists only for $`J<1/7`$. For $`J>1/7`$, Eq. (34) has only one (the third) root, which is unphysical. To relax the latter limitation, we can make use of the fact that, as noted above, the asymmetry parameter $`J`$ can always be taken to satisfy $`J<1`$. We will make use of this, treating $`J`$ as a small parameter, which is also justified by the fact that a stack with sufficiently strong asymmetry might be of special physical interest. With this in mind, the VA leads to a simple expression for $`\sigma _{\mathrm{min}}`$ (we simply expand Eq. (36) in a Taylor series around $`J=0`$): $$\sigma _{\mathrm{min}}\approx 2\sqrt{J}.$$ (37) The essential feature of Eq. (37) is the square-root dependence. To comply with the strict analytical result for the unperturbed case, see Eqs. (24) and (28), we adopt $$\sigma _{\mathrm{min}}^{(2)}\approx \sqrt{J}$$ (38) as an approximation. For $`N=3`$, a similar dependence is found to be $$\sigma _{\mathrm{min}}^{(3)}\approx \sqrt{\sqrt{2}J}.$$ (39) The bold solid lines corresponding to these approximations are shown in Fig. 3 and Fig. 4. Apparently, the smaller the dissipation, the better the approximations (38) and (39) work. In the presence of dissipation, the fluxons can exist at velocities somewhat larger than in the idealized model. The latter circumstance is quite natural, as in a single Josephson junction the onset of the fluxon's instability past the Swihart velocity is also delayed by dissipation. We can also obtain an analytical approximation for $`N\to \infty `$. In this case, $$\sigma _{\mathrm{min}}\approx \sqrt{2J}.$$ (40) The stack with a uniform critical-current distribution therefore has $`\sigma _{\mathrm{min}}=\sqrt{2}`$. The Cherenkov radiation will appear in the range $`\sqrt{2}<\sigma _{\mathrm{min}}<2`$. It is convenient to present the results of our calculations as the plot $`u_{\mathrm{max}}(|S|)`$. Such a plot for $`N=3`$ and $`\sigma ^{(3)}=2^{1/4}`$ (a uniform stack) is shown in Fig. 5. The region where the fluxon moves faster than $`\overline{c}_{-}`$ is shaded; it is the domain of existence of the Cherenkov radiation. The range of $`S`$ in Fig. 5 corresponds to the maximum value of the coupling parameter $`|S|_{\mathrm{max}}`$ in a 3-fold stack, which is equal to $`1/\sqrt{2}`$. ## V Conclusion We have demonstrated that a single fluxon moving in a stack of Josephson junctions has a maximum velocity which does not necessarily coincide with one of the Swihart velocities of the system. The dependence $`u_{\mathrm{max}}(J)`$ was studied numerically for $`N=2`$ and $`3`$.
An analytical approximation for this dependence was put forward on the basis of the variational approximation in the limit of small dissipation, and is given by Eqs. (38) and (39). Our results show that $`u_{\mathrm{max}}>\overline{c}_{-}`$ for $`J<1`$ in the case $`N=2`$, and for $`J<\sqrt{2}`$ in the case $`N=3`$. This leads to Cherenkov radiation of plasma waves by a fluxon in the velocity range $`\overline{c}_{-}<u<u_{\mathrm{max}}`$. Simulations also show that the damping stabilizes the fluxon motion at higher velocities, i.e., $`u_{\mathrm{max}}(\alpha >0)>u_{\mathrm{max}}(\alpha =0)`$. In a uniform stack with large $`N`$ (e.g., intrinsic Josephson stacks in crystals of high-$`T_c`$ superconductors), $`u_{\mathrm{max}}`$ exceeds $`\overline{c}_{-}`$ and, therefore, a single fluxon always generates Cherenkov radiation. Experiments with high-$`T_c`$ stacks show that flux-flow branches do not have vanishing differential resistance $`R_d`$ close to the top of the flux-flow step, as in the single LJJ, but bend towards higher voltages, in agreement with our results. From the theoretical point of view, the absence of the “relativistic” singularity at $`u=\overline{c}_{-}`$ is a result of the lack of Lorentz invariance in the coupled sine-Gordon equations. ###### Acknowledgements. B.A. Malomed appreciates the hospitality of the Department of Physics at the University of Erlangen-Nürnberg. This work was supported by grant no. G0464-247.07/95 from the German-Israeli Foundation and, in part, by the Deutsche Forschungsgemeinschaft (DFG). ## Figure Captions
# Properties of Dust in Giant Elliptical Galaxies: The Rôle of the Environment ## 1 Introduction: The Multi-Phase ISM of Elliptical Galaxies It has become evident in recent years that elliptical galaxies are far from being the simple, (violently) relaxed, isothermal, purely stellar systems anticipated by the traditional picture developed by Hubble. Recent deep surveys across the electromagnetic spectrum have shown that ellipticals contain a complex, multi-phase interstellar medium (ISM). In fact, all ISM components known to exist in spiral galaxies are now accessible in elliptical galaxies as well, although in rather different proportions. The main difference is that the dominant (in mass) gaseous component in spirals is “cool”, in the form of neutral gas (H i, H<sub>2</sub>), whereas in ellipticals it is “hot” ($`\sim 10^7`$ K), radiating at X-ray wavelengths. The typical mass of this hot gas has been found to be of order $`10^9-10^{11}`$ M<sub>⊙</sub> (for H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>), which is similar to the expected amount from stellar mass loss accumulated over the lifetime of luminous ellipticals (e.g., Forman et al. 1985; Loewenstein 1998). Hence, it is now commonly believed that the hot gas indeed originates from (and is constantly replenished by) the mass loss of stars within the ellipticals, which is thermalized in the gravitational field of the galaxies (i.e., with temperature $`T_\sigma =\mu m_p\sigma _{*}^2/k`$, where $`\mu `$ is the mean atomic weight, $`m_p`$ is the proton mass, and $`\sigma _{*}`$ is the stellar velocity dispersion). It should be noted that evidence for the existence of this hot component of the ISM is currently substantial only for the most luminous ellipticals ($`L_B\gtrsim 5\times 10^{10}`$ L<sub>⊙</sub>, see e.g., Kim et al. 1992), as well as for ellipticals at the centers of groups (Mulchaey & Zabludoff 1998). It thus seems that only those ellipticals that are “privileged” to reside within a deep enough potential well are able to retain the material lost by stars and suppress the supernova-driven wind proposed by Mathews & Baker (1971). In this scenario, smaller ellipticals with too shallow potential wells may never see the bulk of their internally produced ISM again, donating it to the intracluster or intragroup medium. Along with the “hot” ISM, the cooler ISM components exist in ellipticals as well: “warm” ionized gas (Goudfrooij 1998; Macchetto et al. 1996), dust (Knapp et al. 1989; Goudfrooij & de Jong 1995), and “cold” CO and H i (Lees et al. 1991; Wiklind et al. 1995; Oosterloo et al. 1998). However, the measured amounts of dust, cold gas and warm gas are typically small, and do not correlate with the stellar luminosity of ellipticals (as opposed to the case among spirals, cf. Lees et al. 1991). While it is conceivable that some part of the observed dust and gas originates from stellar mass loss, this observational result indicates that most of it has an external origin (e.g., accreted during an interaction with a smaller, gas-rich galaxy). See also Forbes (1991) and Goudfrooij & de Jong (1995). Typical properties of the ISM in ellipticals are listed below in Table 1 (for comprehensive reviews, see e.g., Goudfrooij 1997, 1998; Knapp 1998). ## 2 Properties of Dust within Ellipticals in Different Environments Dust features have been found to exist in about half of all nearby ellipticals.
Dust lanes or patches are known to exist in small ellipticals as well as in giant ellipticals, and the latter often reside in environments of high galaxy density and/or massive halos of hot gas. This leads me to the main issue I wish to discuss in this contribution: do the properties of dust (e.g., grain size, dust content, morphology) vary among ellipticals due to differences in the environment? Most detailed studies of the properties of dust in ellipticals have been undertaken by means of optical imaging, thanks to its intrinsically high spatial resolution and to the inherently smooth optical light distributions of ellipticals, which ease the comparison of extinguished areas of the galaxies with the appropriate unextinguished areas (e.g., Goudfrooij et al. 1994b). As the extinction in the optical regime ($`\lambda \approx 4000-7000`$ Å) is caused by grains with a typical size of $`\sim `$ 0.1 $`\mu `$m, it is worth taking a look at the typical grain destruction time scales ($`\tau _\mathrm{d}`$) of the different mechanisms that might change the size of 0.1 $`\mu `$m-sized dust grains within ellipticals. These mechanisms are summarized below (the hot-gas lifetime in item 3b is evaluated numerically in the sketch at the end of this section): 1. Grain-grain collisions in low-velocity ($`\lesssim `$ 50 km s<sup>-1</sup>) shocks: $`\tau _\mathrm{d}\sim 1\times 10^9`$ yr (e.g., Jones et al. 1996). This velocity was selected since it is the typical maximum velocity dispersion in nebular emission lines within ellipticals (e.g., Goudfrooij 1998). 2. Sputtering in supernova-driven blast waves in a two-phase medium (cold dense clouds embedded within a coronal intercloud medium of low density): $`\tau _\mathrm{d}\sim 3\times 10^9`$ ($`L/10^{10}`$ L<sub>⊙</sub>)<sup>-1</sup> yr (Draine & Salpeter, using the current supernova explosion rate within ellipticals from Turatto et al., and H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>). 3. Sputtering by thermal ions: (a) “warm” 10<sup>4</sup> K gas: $`\tau _\mathrm{d}\sim 10^{10}`$ yr (Barlow 1978); (b) “hot”, T $`\sim `$ 10<sup>7</sup> K gas: $`\tau _\mathrm{d}\sim 2\times 10^5\,(n_p/\text{cm}^3)^{-1}(a/0.1\,\mu \text{m})`$ yr (e.g., Tielens et al. 1994). Note that the latter destruction time scale is typically only $`\lesssim 10^7`$ yr for 0.1 $`\mu `$m grains (and proportionally shorter for smaller grains) within the central few kpc of X-ray bright ellipticals, where the typical proton density is $`n_p\approx 0.03-0.1`$ cm<sup>-3</sup> (see, e.g., Trinchieri et al. 1997). Sputtering by hot ions (protons, He nuclei) is therefore by far the dominant destruction agent for dust in ellipticals embedded in hot gas. One would therefore expect the dust content of ellipticals to decrease with increasing X-ray flux, while the dust grain size distribution in X-ray bright ellipticals is expected to be depleted in small grains. On the other hand, if sputtering by hot ions is not the dominant grain destruction agent (e.g., in ellipticals not embedded in hot gas), destruction mechanism (1) above may be dominant, which preferentially destroys large grains. To address the correctness of these suggested relations, I selected from the catalog of galaxies observed by EINSTEIN (Fabbiano et al. 1992) the ellipticals with $`L_X/L_B\ge 2\times 10^{30}`$ erg s<sup>-1</sup> L<sub>⊙</sub><sup>-1</sup>, for which Kim et al. (1992) showed that the X-ray flux is dominated by emission from hot gas. For those ellipticals, dust masses were derived from the IRAS 60 and 100 $`\mu `$m fluxes (Knapp et al. 1989) as described in Goudfrooij & de Jong (1995).
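As a quick check on the numbers quoted in item 3b above, the following sketch evaluates the hot-gas sputtering lifetime using only the scaling relation and proton densities quoted in the text:

```python
def tau_sputter_yr(n_p_cm3, a_um):
    """Hot-gas (T ~ 1e7 K) grain sputtering lifetime, Tielens et al. (1994) scaling."""
    return 2e5 * (1.0 / n_p_cm3) * (a_um / 0.1)

for n_p in (0.03, 0.1):
    for a in (0.1, 0.01):
        print(f"n_p = {n_p:5.2f} cm^-3, a = {a:4.2f} um: "
              f"tau_d = {tau_sputter_yr(n_p, a):.1e} yr")
# For 0.1-um grains this gives 2e6 - 7e6 yr, i.e. well below 1e7 yr,
# confirming that sputtering dominates wherever dense hot gas is present.
```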
Masses of hot gas were derived from the EINSTEIN fluxes according to eq. (9) of Canizares et al. (1987). Keep in mind that these gas masses are uncertain by a factor of 2–4 due to the uncertainties connected with deriving density profiles from EINSTEIN data. Fig. 1 depicts the relation between the masses of dust and hot gas for X-ray bright ellipticals. The masses have been divided by the optical luminosities of the galaxies to remove the (distance)<sup>2</sup> factor. Indeed, a strong anticorrelation between the masses of dust and hot gas is seen, as expected (cf. above). The single data point that deviates strongly from the general trend in Fig. 1 represents NGC 4696, the dominant elliptical in the Centaurus cluster. This galaxy is, however, expected to be a special case in view of the presence of a filamentary, obviously “young” dust structure in its central region. de Jong et al. (1990) and Sparks et al. (1989) have suggested that the dust in NGC 4696 was captured during a recent ($`\sim 10^8`$ yr ago) tidal interaction with a smaller, gas-rich galaxy, while the dust can be replenished during $`\sim 10^8`$ yr by evaporation of cool clouds (captured during the interaction) by hot electrons within the hot gas. What about the dust grain size distribution? Goudfrooij et al. (1994b) measured extinction curves for dust in 10 ellipticals that display obvious dust extinction features superposed on an otherwise smooth distribution of light (following a de Vaucouleurs law). Interestingly, they found $`R_V\equiv A_V/E_{B-V}`$ to be very small for a number of giant ellipticals with large-scale dust lanes having relaxed morphologies ($`R_V\approx 2.1-2.7`$, whereas $`R_V=3.1`$ in the diffuse ISM in our Galaxy), which means that the characteristic size of the grains causing optical extinction is significantly smaller in those galaxies than in ours. Conversely, the optical extinction curve in X-ray bright ellipticals turns out to be more “normal”, with $`R_V`$ values that are consistent with the Galactic value (Sparks et al. 1989; Goudfrooij & Trinchieri 1998). In Fig. 2, I show a comparison of the distribution of dust extinction (through a $`B-I`$ image) with the extinction curve for two representative cases: IC 4320, an isolated elliptical with a relaxed, large-scale dust lane along its minor axis, and NGC 4696, the central, X-ray bright galaxy of the Centaurus cluster (see above). Unfortunately, none of the giant ellipticals with large-scale dust lanes for which Goudfrooij et al. (1994b) derived $`R_V`$ values significantly smaller than the Galactic value has been observed in X-rays yet (except for the ROSAT all-sky survey, which was however too shallow to detect any significant emission by hot gas from these ellipticals). Moreover, none of them shows any other sizeable galaxies within a radius of several 10<sup>2</sup> kpc on Digital Sky Survey images, and they are not in any group catalog (e.g., Garcia 1993). They may well be field ellipticals which stripped dust (and gas) from small neighboring galaxies, after which no significant replenishment of dust has occurred in the lanes. In the absence of a massive hot gas halo, the characteristic dust grain size will then slowly decrease through grain-grain collisions, as observed.
Recent models of galaxy interactions involving an elliptical–spiral pair have shown that the settling time scale for gas disks over a radial extent of $`\sim 10`$ kpc (which is typical for the large-scale dust lanes in these isolated ellipticals) is at least a few tens of crossing times (i.e., $`\gtrsim 3\times 10^9`$ yr; e.g., Steiman-Cameron & Durisen 1990). Note that the grain destruction time scale in hot gas ($`\tau _\mathrm{d}\lesssim 10^7`$ yr, see above) is 100–300 times shorter than this. Hence, one would predict that ellipticals with such large-scale dust lanes do not host massive hot gas haloes. If this prediction proves true, the $`L_X/L_B`$ ratio for these luminous ($`L_B\gtrsim 10^{11}`$ L<sub>⊙</sub>) ellipticals would be significantly lower than any observed so far by EINSTEIN or ROSAT. This would provide strong evidence for a scenario in which the potential wells of single galaxies are not deep enough to retain the stellar mass loss; only ellipticals located in the centers of (sub-)clusters or rich groups would be able to stifle the galactic winds. One will be able to resolve this question using the highly sensitive EPIC camera aboard the X-ray satellite XMM. ###### Acknowledgements. It is a pleasure to thank the organizing committee of this conference for an exciting and fruitful meeting.
# Semiclassical approach to the thermodynamics of spin chains ## Abstract Using the PQSCHA semiclassical method, we evaluate thermodynamic quantities of one-dimensional Heisenberg ferro- and antiferromagnets. Since the PQSCHA reduces their evaluation to classical-like calculations, we take advantage of Fisher's exact solution to obtain all results in an almost fully analytical way. Explicitly considered here are the specific heat, the correlation length and the susceptibility. Good agreement with Monte Carlo simulations is found for $`S>1`$ antiferromagnets, showing that the relevance of the topological terms and of the Haldane gap is significant only for the lowest spin values and temperatures. Several applications to condensed matter systems have demonstrated the usefulness of the improved effective potential approach; its generalized version for non-standard Hamiltonians, the so-called pure-quantum self-consistent harmonic approximation (PQSCHA), has also been successfully applied to different spin systems. We consider here the one-dimensional isotropic Heisenberg model Hamiltonian, $$\mathcal{H}=\pm \frac{J}{2}\sum _{𝒋𝜹}𝑺_𝒋\cdot 𝑺_{𝒋+𝜹},$$ (1) with the exchange interaction $`J>0`$ restricted to nearest-neighbour sites of a simple cubic $`d`$-dimensional lattice; the sign $`-`$ refers to the ferromagnet (FM), $`+`$ to the antiferromagnet (AFM). The thermodynamic quantities of this model have been successfully calculated for two- and three-dimensional magnets, both of which are characterized by a ground state that can be obtained perturbatively starting from the classical-like minimum-energy configuration. In one dimension the situation is very different: at variance with the classical case, where both models are mapped onto the classical nonlinear Schrödinger equation, the quantum ferro- and antiferromagnets behave in markedly different ways. Ferromagnets do not present any peculiarity: their (ordered) ground state, as in higher dimensions, is the quantum counterpart of the classical minimum-energy configuration. The relevant excitations, both linear and nonlinear, which contribute to the thermodynamic properties and destroy the long-range order at any finite temperature, are basically the same in the quantum and in the classical case. The absence of long-range order at any $`T\ne 0`$ implies that the linear-excitation spectrum should be considered only up to wavelengths of the order of the correlation length $`\xi (T)`$. Quantum effects have a much more apparent impact on the qualitative behaviour of antiferromagnets. Their ground state cannot be obtained perturbatively from the Néel configuration, and the relevant quantum excitations are completely different from the classical ones. Moreover, as first suggested by Haldane, integer and half-integer spin chains display qualitatively different low-temperature behaviour, which arises from a topological term in the path-integral description of spin systems. For half-integer spins, the interference term leads to gapless excitations, while the Haldane gap appears for integer $`S`$. However, these effects should rapidly disappear for increasing spin value, as the interference becomes less destructive and the gap vanishes exponentially. Hence, in one-dimensional magnets there should also exist regimes where semiclassical methods can be sensibly applied, i.e., when the spin length and/or the temperature increase and the classical behaviour is approached.
In such regimes, the PQSCHA is a good tool for calculating thermodynamic quantities and should be very competitive in comparison with other semiclassical methods, for instance the recently introduced theory based on the use of real-space coherent states. Indeed, within the PQSCHA most calculations can be performed analytically, and with a full quantum inclusion of the linear excitations in wave-vector space. The final outcome of the application of the PQSCHA to Heisenberg models is that the free energy of the quantum model described by Eq. (1) is given by the free energy of the effective classical Heisenberg Hamiltonian, $$\mathcal{H}_{\mathrm{eff}}=\pm \frac{1}{2}J\stackrel{~}{S}^2\theta ^4(t)\sum _{𝒋𝜹}𝒔_𝒋\cdot 𝒔_{𝒋+𝜹}+NJ\stackrel{~}{S}^2𝒢(t),$$ (2) whose thermodynamic properties are exactly known after the work by Fisher. In Eq. (2) the $`𝒔_𝒋`$ are unit vectors, $`\stackrel{~}{S}=S+1/2`$ plays the role of the ‘classical’ spin length and $`J\stackrel{~}{S}^2`$ that of the overall energy scale: we hence define $`t=k_BT/J\stackrel{~}{S}^2`$ as the reduced temperature. The temperature- and spin-dependent parameters $`\theta (t)`$ and $`𝒢(t)`$ account for the effects of the pure-quantum fluctuations. Their explicit expressions are: $`\theta ^2(t)`$ $`=`$ $`1-{\displaystyle \frac{𝒟}{2}},`$ (3) $`𝒢(t)`$ $`=`$ $`{\displaystyle \frac{t}{N}}{\displaystyle \sum _k}\mathrm{ln}{\displaystyle \frac{\mathrm{sinh}f_k}{\theta ^2f_k}}-{\displaystyle \frac{z}{2}}\kappa ^2(t)𝒟.`$ (4) It is worth pointing out that the first term of $`𝒢(t)`$ restores the quantum free energy of the linear excitations. The renormalization coefficient $`𝒟(t)`$ reads $$𝒟=\frac{1}{N\stackrel{~}{S}}\sum _𝒌\left(\mathrm{coth}f_𝒌-\frac{1}{f_𝒌}\right)\times \{\begin{array}{cc}\sqrt{1-\gamma _𝒌^2}\hfill & \text{(AFM)}\hfill \\ (1-\gamma _𝒌)\hfill & \text{(FM)}\hfill \end{array},$$ (5) and represents the pure-quantum nearest-neighbour transverse spin fluctuations in a self-consistent Gaussian approximation. This essential renormalization coefficient of the PQSCHA for Heisenberg models, as well as the connected quantities $`\theta ^2(t)`$ and $`𝒢(t)`$, are global parameters, i.e., they take into account the quantum effects only on average, so that the details of the excitation spectrum are smeared out. Furthermore, $`f_𝒌=\mathrm{\hslash }\omega _𝒌/(2k_BT)=\stackrel{~}{\omega }_𝒌/(2\stackrel{~}{S}t)`$, where $$\stackrel{~}{\omega }_𝒌=\{\begin{array}{cc}z\kappa ^2\sqrt{1-\gamma _𝒌^2}\hfill & \text{(AFM)}\hfill \\ z\kappa ^2(1-\gamma _𝒌)\hfill & \text{(FM)}\hfill \end{array},$$ (6) are the dimensionless spin-wave frequencies, whose renormalization factor $`\kappa ^2(t)=\frac{1}{2}\left(\theta ^2+\sqrt{\theta ^4-4t\epsilon /z}\right)`$ is calculated by taking into account only the thermal fluctuations with $`|𝒌|>\pi /\xi (t)`$, so that $`\epsilon =1-1/\xi `$ for the one-dimensional system. We point out that the contributions to the pure-quantum renormalization coefficient (5) are weighted by the Langevin function $`\mathrm{coth}f_𝒌-f_𝒌^{-1}`$, so that the major role is played by the high-frequency (short-wavelength) excitations, which are just those that survive in spite of the absence of long-range order. The low-coupling approximation of the PQSCHA employed to derive Eq. (2) neglects contributions of order $`𝒟^2`$, so that $`𝒟`$ must be small compared to one. Using the exact results for the classical one-dimensional Heisenberg model of Ref.
, we can thus easily write an analytical expression for the free energy per spin of the quantum spin chain within the PQSCHA: $$\frac{f}{J\stackrel{~}{S}^2}=𝒢(t,S)-t\mathrm{ln}\left[\mathrm{sinh}\left(\frac{\theta ^4(t,S)}{t}\right)\frac{t}{\theta ^4(t,S)}\right].$$ (7) The other macroscopic thermodynamic quantities, e.g. the internal energy and the specific heat, can easily be obtained from this equation by (numerical) differentiation, taking care of the $`t`$-dependence of $`𝒢`$ and $`\theta `$, which prevents us from directly using the expressions of Ref. in evaluating such quantities. One can also obtain the spin correlation function and the susceptibility by means of the formulas reported in Ref., where it is also shown that the correlation length $`\xi (t)`$ is related to its classical counterpart simply by the change of temperature scale involved in the renormalization of the exchange constant, $$\xi (t)=\xi _{\mathrm{cl}}\left(t/\theta ^4(t)\right).$$ (8) This formula is of remarkable simplicity, especially when one notices that $`\xi _{\mathrm{cl}}(t)`$ has a very simple analytical expression, $$\xi _{\mathrm{cl}}(t)=-\left[\mathrm{ln}\left(\mathrm{coth}t^{-1}-t\right)\right]^{-1}.$$ (9) In principle, the PQSCHA approach should work well for the one-dimensional Heisenberg FM, because: i) its ground state is ordered and is an eigenstate of $`S_{\mathrm{tot}}^z=\sum _iS_i^z`$; ii) the absence of long-range order at any finite temperature is due to nonlinear excitations, which can be well treated in the semiclassical ($`1/S`$-expansion) approximation; iii) the pure-quantum fluctuations related to the linear excitations are accounted for through the coefficient $`𝒟(t)`$. Therefore, the only limitation we should care about for the FM is a possibly too high value of $`𝒟`$: we shall assume that the condition $`𝒟\lesssim 0.5`$ must be satisfied to get reliable results. As shown in Fig. 1, this occurs at any temperature for $`S\ge 3/2`$. As for lower values of $`S`$, it is apparent that for $`S=1`$ the approach can be used in a wide range of temperatures starting from $`t\gtrsim 0.25`$; on the other hand, the strongest quantum case, $`S=1/2`$, cannot be well described except at very high temperatures, $`t\gtrsim 1`$. Unfortunately, to the best of our knowledge, reference data on one-dimensional Heisenberg ferromagnets are not available for intermediate spin values, and we found only rather old ones for $`S=1/2`$. However, it is appealing to derive some thermodynamic quantities by means of the effective Hamiltonian (2) and Eq. (7). The specific heat of the FM spin chain is shown in Fig. 2 for spin values $`S=1/2`$, $`S=1`$ and $`S=3/2`$. Comparing with Fig. 1 we see why the curve for $`S=1/2`$ is truncated at $`t\simeq 1`$: for higher temperature the behaviour of $`C(t)`$ is in agreement with the available numerical data. Turning to antiferromagnets, the temperature behaviour of $`𝒟(t)`$ is shown in Fig. 3. Using the same criterion as above, we deduce that the PQSCHA should work well at all temperatures for $`S\ge 1`$, while its validity is confined to $`t\gtrsim 0.15`$ for $`S=1/2`$. However, we must recall that the aforementioned quantum effects, which strongly modify the nature of the ground state and lead to the Haldane gap, cannot be accounted for by any semiclassical approach, so we do not expect the peculiar quantum properties at very low temperature and low spin values to be reproduced within the PQSCHA.
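To make the self-consistent structure of Eqs. (3)–(9) concrete, here is a minimal numerical sketch for the AFM chain ($`z=2`$, $`\gamma _𝒌=\mathrm{cos}k`$): starting from $`\theta ^2=1`$, it iterates $`𝒟\to \theta ^2\to \xi \to \kappa ^2`$ until convergence. The simple fixed-point scheme, the size of the $`k`$-grid, and the clipping of $`\theta ^4-4t\epsilon /z`$ at zero are our own choices, not prescriptions from the text; at very low $`t`$ and $`S=1/2`$ the iteration may need damping.

```python
import numpy as np

def pqscha_afm_chain(S, t, M=2000, iters=300):
    """Self-consistent D(t), theta^2(t), xi(t) for the AFM chain (z = 2)."""
    St, z = S + 0.5, 2
    k = (np.arange(M) + 0.5) * np.pi / M          # half-integer grid avoids f_k = 0
    sk = np.abs(np.sin(k))                        # sqrt(1 - gamma_k^2), gamma_k = cos k
    th2, kap2 = 1.0, 1.0
    for _ in range(iters):
        fk = z * kap2 * sk / (2 * St * t)         # f_k = omega_k / (2 S~ t), Eq. (6)
        D = np.sum((1/np.tanh(fk) - 1/fk) * sk) / (M * St)      # Eq. (5)
        th2 = 1 - D/2                             # Eq. (3)
        xi = -1/np.log(1/np.tanh(th2**2/t) - t/th2**2)          # Eqs. (8)-(9)
        kap2 = 0.5*(th2 + np.sqrt(max(th2**2 - 4*t*(1 - 1/xi)/z, 0.0)))
    return D, th2, xi

for S in (0.5, 1.0, 2.5):
    D, th2, xi = pqscha_afm_chain(S, 0.3)
    print(f"S = {S}: D = {D:.3f}, theta^2 = {th2:.3f}, xi = {xi:.2f}")
```

The specific heat can then be estimated by a central-difference second derivative of the free energy (7) in $`t`$, with $`𝒢`$ and $`\theta `$ recomputed at each temperature.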
Instead, the PQSCHA is expected to give good results when the spin becomes larger and larger, approaching the classical AFM Heisenberg model. The results for the specific heat plotted in Figs. 4 and 5 seem to confirm this prediction for the thermodynamic quantities. The comparison with the existing quantum Monte Carlo and transfer-matrix renormalization-group data shows that the agreement improves for larger spin and can be considered very good at almost all temperatures already for $`S=3/2`$. In Figs. 6 and 7 we show the PQSCHA results for the correlation length $`\xi (t)`$ and the uniform susceptibility $`\chi (t)`$ of the AFM chain. The latter quantity has been shown to be particularly sensitive to the peculiar quantum effects related to the Haldane gap; nevertheless, Fig. 7 clearly shows that a semiclassical approach like the PQSCHA compares well with numerical data not only where the classical regime is already approached, but also in the intermediate temperature range, where the curves for low spin values already differ clearly from the classical behaviour; only the very-low-temperature behaviour cannot, as expected, be reproduced. The conclusion is that the thermodynamic quantities of the Heisenberg AFM are not strongly affected by the consequences of the Haldane ground state at intermediate and high temperatures. Small differences appear only at the lowest values of the spin, and from the point of view of the PQSCHA it is difficult to ascertain whether they are due to the “Haldane effects” or to the rather large value of $`𝒟`$. On the other hand, it is curious to note that the Haldane conjecture was derived theoretically in the large-$`S`$ limit, where the role of the topological term decreases and the gap vanishes exponentially, giving negligible quantitative contributions to the thermodynamics.
# Inertial control of the VIRGO Superattenuator<sup>1</sup> <sup>1</sup>To appear in the Proceedings of the Third E. Amaldi Conference on Gravitational Waves Experiments, Caltech, Pasadena, 12-16 July 1999. ## I Introduction The test mass suspension of the VIRGO detector, the superattenuator (SA) [sa], has been designed to suppress the seismic noise below the thermal noise level above 4 Hz. The expected residual motion of the mirror is $`10^{-18}`$ m$`/\sqrt{\mathrm{Hz}}`$ at 4 Hz. At lower frequencies, the residual motion of the mirror is much larger ($`\sim 0.1`$ mm RMS), due to the normal modes of the SA (the resonant frequencies of the system are in the range 0.04-2 Hz). To lock the VIRGO interferometer, the RMS motion of the suspended mirrors must not exceed $`10^{-12}`$ m (to avoid saturation of the read-out electronics). The VIRGO locking strategy is based on a hierarchical control: feedback forces can be exerted on 3 points of the SA (the inverted pendulum (IP) [ip], the marionetta and the mirror). The control on the 3 points is operated in different ranges of frequency and amplitude. The maximum mirror displacement that can be controlled from the marionetta without injecting noise in the detection band is $`\sim 10`$ $`\mu `$m. Therefore, a damping of the SA normal modes is required for a correct operation of the locking system. An active control of the SA normal modes, using sensors and actuators on top of the IP and capable of reducing the mirror residual motion to within a few microns, has been successfully implemented. ## II Experimental setup The setup (fig. 1) of the experiment is composed of a full-scale superattenuator, provided with 3 accelerometers (placed on top of the IP), 3 LVDT position sensors (measuring the relative motion of the IP with respect to an external frame), and 3 coil-magnet actuators. The accelerometers work in the range DC-400 Hz and have an acceleration spectral sensitivity of $`\sim 10^{-9}\,\mathrm{m}\,\mathrm{s}^{-2}\,\mathrm{Hz}^{-1/2}`$ below 3 Hz [acc]. The sensors and actuators are all placed in a pin-wheel configuration. The sensor and actuator signals are processed by a computer-controlled ADC (16 bit)-DSP-DAC (20 bit) system. The DSP allows one to handle the signals of all the sensors and actuators, to recombine them by means of matrices, to create complex feedback filters (like the one of fig. 5) with high-precision pole/zero placement, and to perform a large amount of calculation at a high sampling rate (10 kHz). The suspended mirror is provided with an LVDT to measure its displacement with respect to ground. ## III The control strategy The active control of the SA normal modes is called inertial damping, because it makes use of inertial sensors (accelerometers) to sense the SA motion. The advantage of using accelerometers is that they perform the measurement with respect to the “fixed stars”, while position sensors do it with respect to a reference frame which is not free of seismic noise. Therefore, inertial sensors are to be used so that no seismic noise is reinjected by the feedback. Actually, in the real SA control both sensors are used: position sensors provide a low-frequency (DC - 10 mHz) control of the SA position (in order to avoid drifts), while accelerometers allow a wideband reduction of the noise in the region of the SA resonances (10 mHz - 2 Hz). The object to control is a MIMO (multiple-in, multiple-out) system: each sensor (accelerometer/LVDT) is sensitive to the 3 modes (X, Y, $`\mathrm{\Theta }`$) of the IP, and each actuator excites all the modes.
To simplify the control strategy, the sensor outputs and the actuator currents are digitally recombined to obtain independent SISO (single-in, single-out) systems (fig. 2): the system is described in normal-mode coordinates (for a description of the diagonalization procedure see [nota; tesi]). Each normal mode is associated with a so-called virtual sensor (sensitive to that mode and “blind” to the others) and with a virtual actuator (acting on one mode only, leaving the others undisturbed). In this way one is able to implement independent feedback loops on each d.o.f., greatly simplifying the control strategy. Fig. 4 shows the output of the virtual accelerometers $`X`$ and $`\mathrm{\Theta }`$. In the $`X`$ plot, the 40 mHz resonance of the IP translation mode and all the modes of the SA chain are visible (as pole/zero structures). In the $`\mathrm{\Theta }`$ plot, only the rotation mode of the IP is visible. The two plots show that different feedback strategies have to be implemented on the different d.o.f. The basic idea of inertial damping is to use the accelerometer signal to build up the feedback force. Actually, if the control band is to be extended down to DC, a position signal is necessary. Our solution was a merging of the two sensors: the virtual LVDT and accelerometer signals are combined in such a way that the LVDT signal ($`l(s)`$) dominates below a chosen cross frequency $`f_{\mathrm{merge}}`$ while the accelerometer signal ($`a(s)`$) dominates above it (see fig. 4 and ref. [jila]). The feedback force has the form (in practice, the LVDT signal $`l(s)`$ is properly filtered in order to preserve the feedback stability at the cross frequency and to reduce the amount of reinjected noise at $`f>f_{\mathrm{merge}}`$): $$f_{\mathrm{fb}}=G(s)\left[a(s)+ϵl(s)\right]$$ (1) where $`G(s)`$ is the digital filter transfer function (see fig. 5) and $`ϵ`$ is the parameter whose value determines $`f_{\mathrm{merge}}`$. We have chosen $`f_{\mathrm{merge}}\simeq 10`$ mHz (corresponding to $`ϵ\simeq 5\times 10^{-3}`$). This approach allows us to stabilize the system with respect to low-frequency drifts, at the cost of reinjecting a fraction $`ϵ`$ of the seismic noise via the feedback. ## IV Inertial control performance The result of the inertial control (on 3 d.o.f.) is shown in figure 6. The measurement was performed in air. The noise on the top of the IP is reduced over a wide band (10 mHz - 4 Hz). A gain $`>1000`$ is obtained at the main SA resonance (0.3 Hz). The RMS motion of the IP (calculated as $`x_{\mathrm{RMS}}(f)=\sqrt{\int _f^{\infty }\stackrel{~}{x}^2(\nu )\,d\nu }`$) in 10 s is reduced from 30 to 0.3 $`\mu `$m. The closed-loop floor noise corresponds to the fraction of seismic noise reinjected by using the position sensors for the DC control and can, in principle, be reduced by a steeper low-pass filtering of the LVDT signal at $`f>f_{\mathrm{merge}}`$ and by lowering $`f_{\mathrm{merge}}`$: both solutions have drawbacks and need a careful implementation. Preliminary measurements of the displacement of the mirror with respect to ground were performed in air, using an LVDT position sensor. The residual RMS mirror motion in 10 s is (this number was obtained with a feedback design less aggressive than the one of fig.
5: the gain rose as $`1/f`$, the cross frequency was 30 mHz, and no compensation of the dips was needed): $$x_{\mathrm{RMS}}(0.1\mathrm{Hz})\simeq 3\mu \mathrm{m}.$$ (2) When the damping is on, such a measurement can provide only an upper bound, because the LVDT output is dominated by the seismic motion of the ground. ## V Further developments Several ways of improving the inertial damping performance have been identified: * a steeper low-pass filtering of the LVDT output above $`f_{\mathrm{merge}}`$ may reduce the amount of reinjected seismic noise. In doing this one has to be careful to preserve the proper phase difference between the LVDT and accelerometer signals; * the lower $`f_{\mathrm{merge}}`$, the smaller the amount of reinjected noise. Lowering $`f_{\mathrm{merge}}`$ is difficult due to the mechanical tolerance on the parallelism of the IP legs: if the legs are not perfectly parallel, the top table tilts slightly as it translates. Therefore, the accelerometer signal is dominated by the tilt below 15-20 mHz, which makes it impossible to use the accelerometers at very low frequencies. A technique for subtracting the effect of the tilt (using the information provided by the displacement sensors) has been defined and used to obtain the results described here [damp]. Cancelling the tilt effect down to $`\sim 5`$ mHz enables us to use the accelerometers down to 10 mHz [damp]. Stricter requirements on the machining of the IP legs and improvements in the tilt-subtraction technique may allow a lower cross frequency.
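As a plausibility check on the numbers quoted in Sec. III: if the LVDT and accelerometer channels are calibrated to the same displacement units, the accelerometer contribution to Eq. (1) scales as $`(2\pi f)^2`$ while the LVDT contribution is weighted by $`ϵ`$, so the two are equal at $`f_{\mathrm{merge}}=\sqrt{ϵ}/2\pi `$. The sketch below evaluates this; the unit-calibration assumption is ours, and the filtering mentioned in connection with Eq. (1) will shift the exact crossover.

```python
import math

eps = 5e-3                                  # mixing parameter quoted in the text
f_merge = math.sqrt(eps) / (2 * math.pi)    # crossover under unit calibration
print(f"f_merge = {f_merge*1e3:.1f} mHz")   # ~11 mHz, close to the quoted 10 mHz
```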
## 1 Introduction Recently much attention has been given to connections between criticality and self-organized criticality (SOC) and evolutionary phenomena, particularly punctuated equilibrium, and to connections between SOC and synchronization. We describe a model which we hope draws a connection between these two ideas. SOC has been proposed to describe out-of-equilibrium systems that are critical, that self-organize into a scale-invariant critical state without tuning of a control parameter, and that show fractal time series. SOC-type evolution models have been proposed to explain punctuated equilibrium. Punctuated equilibrium is the phenomenon observed in the fossil record where long periods of stasis are interrupted by sudden bursts of evolutionary change. Kauffman and Johnsen have modeled co-evolution, where agents live on a coupled fitness landscape and walk around by random mutation; only moves to higher fitnesses are allowed. Once at a local maximum the walk stops until moves by another agent deform the landscape so that the agent is no longer at a maximum. Kauffman has linked this to SOC. Bak, Sneppen and Flyvbjerg have taken a similar approach. They define a species as a barrier to increasing fitness and choose the least fit, then randomly change its barrier and the barriers of other agents. The system evolves to a critical state with a self-organized fitness threshold. SOC has also been linked to periodic behaviour. A. Corral et al. and Bottani say there is a close relationship between SOC and synchronization: SOC appears when a system is perturbed which otherwise would synchronise totally or partially. The perturbation may be open boundary conditions rather than periodic ones, randomness present in the initial conditions which is preserved by the dynamics, or the addition of noise. The model we study appears to link these two ideas, with the emergence of avalanches of partial synchronization on all scales. However, all these models are real-space models, whereas ours is a mean-field model with no spatial dimension. Our model is entirely deterministic; the critical state is produced by certain initial conditions, and indeed other initial conditions produce completely periodic states. Our model is also not, strictly speaking, an SOC model, since critical behaviour occurs only for a certain range of the parameter, and then only for certain initial conditions. A complete analysis of the initial conditions is outside the scope of this paper. Our model was originally motivated as a model of the behaviour of speculating traders in a financial market, in the spirit of co-evolution. Recent results have shown stock price time series to be fractal, with Hurst exponent different from 0.5 and with positive Lyapunov exponent. Scrambling daily returns changes the Hurst exponent back to 0.5. Large crashes have been supposed to be due to exogenous shocks, where information enters the market randomly. However, large crashes interspersed with periods of slow growth are strongly reminiscent of punctuated equilibrium. Indeed, Mandelbrot noted that large changes of cotton prices occur in oscillatory groups and that the movement in tranquil periods is smoother than predicted. Scaling behavior has been noted in a financial index and in the sizes of companies.
Stanley et al. have noted that ‘scaling laws used to describe complex systems comprised of many interacting inanimate particles (as in many physical systems) may be usefully extended to describe complex systems comprised of many interacting animate subsystems (as in economics).’ Various models have been proposed to explain market movements. Sato and Takayasu have proposed a threshold-type model. Since critical states can produce avalanches on all scales without the need for exogenous shocks, we believe critical-type dynamics are present in financial market dynamics. ## 2 Model We hope to model co-evolutionary phenomena where the micro level itself defines the macro level but is also slaved to the macro level. This is very evident in speculative financial market dynamics, where a collection of individuals (micro) trade, thereby creating a price time series (macro), but determine their trading behaviour by reference to this same price series and other macro variables. We desired to make a model in analogy to this phenomenon. This is a highly stylized toy model of a stock market. There are $`N`$ agents which are represented by spins $`s_i(t)`$, where $`s_i(t)=1`$ means agent $`i`$ owns the stock and $`s_i(t)=-1`$ means it does not own the stock at time $`t`$. Each agent also has an absolute fitness $`F_i(t)`$ and a relative fitness $`f_i(t)=F_i(t)-F(t)`$, where the mean fitness is $`F(t)=\frac{1}{N}\sum _{i=1}^NF_i(t)`$. We believe speculative traders are part of two crowds, bulls and bears, and our macrovariable is the ‘groupthink’ $`G(t)`$, defined by $$G(t)=\mathrm{\Delta }P(t)=\frac{1}{N}\sum _{i=1}^{N}s_i(t)$$ (1) The dynamic is: $$\mathrm{\Delta }s_i(t)=s_i(t+1)-s_i(t)=\{\begin{array}{cc}-2s_i(t)\hfill & f_i(t)\le 0\hfill \\ 0\hfill & f_i(t)>0\hfill \end{array}$$ (2) $$\mathrm{\Delta }F_i(t)=F_i(t+1)-F_i(t)=-\frac{1}{2}\mathrm{\Delta }s_i(t)G(t)+\frac{1}{2}|\mathrm{\Delta }s_i(t)|c$$ (3) The dynamic is synchronous and deterministic. First $`G(t)`$ and $`F(t)`$ are calculated; then all agents are updated according to (2) and (3). The price $`P(t)`$ is defined by (1) and $`P(0)=0`$. Initially the $`s_i(0)`$ are chosen randomly with probability 1/2 and the $`F_i(0)`$ are chosen randomly from the interval \[-1,1\]. $`G(t)`$ measures the bullishness or bearishness of the crowd. Although different from ours, Callan and Shapiro mention groupthink in their Theory of Social Imitation, and Vaga's Coherent Market Hypothesis explicitly includes a variable called groupthink. We believe speculative agents determine their spin state depending on whether they believe the market will move in their favour in the future. Therefore our agents have an absolute fitness $`F_i(t)`$ which measures their perception of whether they are in a good position with respect to the future. If $`F_i(t)`$ is relatively high, an agent's state will be stable, and if $`F_i(t)`$ is relatively low, it will want to change its current state. Many ways to define $`F_i(t)`$ are possible. In this model we define it by analogy to Plummer. Agents consider the market to be ‘overbought’ or ‘oversold’. In our simplistic model this is measured by $`G(t)`$. An agent is fit if it is in the minority group. According to Plummer, when most agents are in one position then there must be less buying into this position (because there is only a finite number of agents), and therefore the market will eventually correct itself (change direction), because its growth is not sustainable. It is always profitable, then, to be in the minority group before a correction.
At a correction the dominant crowd breaks, the macro position dissolves, and the market may crash; subsequently, bull and bear crowds begin to reform. In fact at these times the agents may trade in two macro-clusters or chaotically, with the market attaining high volatility which persists for some time. This type of trader has been called a sheep trader, in contrast to fundamentalist speculators and non-speculators. Therefore, in this model an agent's fitness is increased if it changes from the majority group to the minority group, with the increase proportional to the size of the majority. The opposite is applied if it changes the other way. If an agent does not change its state, then its absolute fitness $`F_i(t)`$ is not changed, regardless of whether $`G(t)`$ changes. An agent also has a relative fitness $`f_i(t)`$. The $`f_i(t)`$ are the behaviour-controlling variables in this model. They may change in two ways. Firstly, an agent $`\alpha `$ may change its state $`s_\alpha (t)`$, thereby directly changing $`F_\alpha (t)`$ and $`f_\alpha (t)`$. This is similar to a single adaptive move on a fitness landscape by an individually optimizing agent. Secondly, co-evolution may occur: an agent's relative fitness $`f_\beta (t)`$ may change due to changes in the other agents' fitnesses $`F_i(t)`$ changing $`F(t)`$, while $`F_\beta (t)`$ remains constant. To model evolution, then, we follow natural selection by analogy and mutate unfit agents while leaving fit agents unchanged (although their relative fitnesses may change), as in Kauffman, and Bak et al. Mutation is considered to be a state change, and this changes an agent's fitness according to (3). In this model, since there are only two possible states, this means we simply flip the spin. (In a more extensive model this would correspond to changes to an ownership portfolio vector.) To decide which spins flip we could compare pairs of fitnesses and change the least fit; that is, we could choose two agents $`\alpha `$ and $`\beta `$ with $`F_\alpha >F_\beta `$ and set $`s_\alpha (t+1)=s_\alpha (t)`$ and $`s_\beta (t+1)=-s_\beta (t)`$. However, in this paper we simply take a mean-fitness approach: all agents $`i`$ whose fitnesses fulfill $`F_i(t)\le F(t)`$, i.e. $`f_i(t)\le 0`$, flip their spins, and their fitnesses change according to (3). All other agents' states and absolute fitnesses $`F_i(t)`$ do not change, although their relative fitnesses $`f_i(t)`$ of course do. Therefore fit agents, which can be considered to be at a local maximum, do not change their states until the mean fitness $`F(t)`$ has become equal to their fitnesses $`F_i(t)`$. This means the fitnesses are all internally defined emergent properties, as in co-evolution. Of course, if there is no overall crowd polarisation, then changing state does not change fitness. Therefore the fitness update rule (3) can be seen as the adaptive-walk part, and this is the reason why we do not simply set $`F_i(t)=-s_i(t)G(t)`$ or $`\mathrm{\Delta }F_i(t)=-s_i(t)G(t)`$ continuously for all agents. We hope agents will take time to walk out of unfit states and that fit maxima will be created which persist for some time. Our absolute fitness is therefore cumulative and is changed only for unfit agents. More realistically, we could think of agents imperfectly sampling the market, i.e. $`G(t)`$, at a series of times to determine their current absolute fitness.
In fact, the concepts of relative fitness and absolute fitness are very similar to the concept of ‘bounded rationality’: an agent's rationality is bounded because it makes only local adaptive moves and can perceive only its own absolute fitness, not the overall mean fitness or its relative fitness. This fitness is natural in the sense that it can be seen as a kind of potential for future profit, usually termed ‘utility’ in economics. The fitter an agent is, the more stable it is and the less likely it is to want to change its state, because it believes the market to be oversold in its favour. Since $`G(t)`$ will on average be $`0`$, in equation (3) we additionally include a very small control parameter $`c`$ which controls the driving rate. We add this to all fitnesses below the mean, so that unfit agents on average become fitter and interact with the fit agents. It is a general characteristic of evolutionary systems that single-entity moves on a fitness landscape should be on average uphill. Our market price $`P(t)`$ is defined by $`\mathrm{\Delta }P(t)=G(t)`$, i.e. the price increases while more people own the share than don't own it, and the price is theoretically unbounded, as it should be. Positive groupthink means a positive increase, and vice versa. This is similar to the way prices are usually defined by $`\mathrm{\Delta }p(t)\propto Z(t)`$, where $`Z(t)`$ is the excess demand for something. This model does not include a fixed amount of shares. Indeed, any trader may independently buy or sell a share, without the notion of swapping. This reflects the fact that this is a model of speculative behaviour only, part of a much larger pool of shares. However, a more realistic model should include a fixed amount of shares. This model is intended to be a suggestive illustration rather than a realistic stock market. ## 3 Results Shown in Fig. 1a is a time series of the fitnesses $`F_i(t)`$ for an $`N=80`$ system with $`c=0.01`$. Punctuated-equilibrium behaviour is clearly visible, with periods of relative stasis interspersed with sudden jumps. Although not shown, the mean-fitness time series $`F(t)`$ shows changes on all scales, similar to a devil's staircase. Also shown, in Fig. 1b, is the corresponding daily-returns time series $`\mathrm{\Delta }P(t)=G(t)`$; this also shows calm periods and sudden bursts of high volatility. In fact this behaviour is a kind of intermittent partial synchronization. Shown in Fig. 2 is the same time series with a small portion magnified. Macroscopic synchronization can be seen. Partial synchronizations show various different periods and complexities, and persist for various lengths of time. Clustering allows synchronized spins to trade in phase with the groupthink $`G(t)`$, thereby rapidly increasing their fitnesses, or out of phase, thereby becoming less fit; this is the origin of the sudden large changes in fitness. Also at these times the fitness deviation suddenly increases (not shown). Between periods of large-scale partial synchronization with high volatility, periods of calm are characterized by a small even number of spins flipping in anti-phase; these spins increase their $`F_i`$ only slowly, due to the driving parameter $`c`$, while the returns $`G(t)`$ remain roughly constant and the fitness deviation decreases. (Of course, anti-phase flipping with no average increase in fitness would be prevented in a real market by a fixed transaction cost. A more realistic model must include this.)
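A minimal simulation of the dynamics defined by Eqs. (1)–(3) is sketched below; it also records the per-step flip count $`R(t)=\sum _i|\mathrm{\Delta }s_i(t)|`$, the avalanche measure analysed later. The sign conventions follow our reading of Eqs. (2)–(3); the random seed and run length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c, T = 80, 0.01, 20000           # system size and drive as in Fig. 1

s = rng.choice([-1, 1], size=N)     # spins: owns (+1) / does not own (-1)
F = rng.uniform(-1, 1, size=N)      # absolute fitnesses
P, R_hist = 0.0, []

for t in range(T):
    G = s.mean()                    # groupthink, Eq. (1)
    f = F - F.mean()                # relative fitnesses
    ds = np.where(f <= 0, -2*s, 0)  # unfit spins flip, Eq. (2)
    F += -0.5*ds*G + 0.5*np.abs(ds)*c   # fitness update, Eq. (3)
    s += ds
    P += G                          # price: Delta P(t) = G(t)
    R_hist.append(int(np.abs(ds).sum()))  # avalanche size R(t)

# e.g. histogram R_hist on log-log axes to look for P(R) ~ R^-alpha
```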
When the mean fitness $`F(t)`$, which is usually increasing, crosses some non-flipping $`F_i`$, that spin flips and may cause $`F(t)`$ to cross further $`F_i`$, possibly starting an avalanche. This happens only when the total fitness deviation is small. Shown in Fig. 3 are two price time series $`P(t)`$ for $`c=0.013`$. Their fractal, slightly repetitive pattern is highly reminiscent of real financial time series. Since this model is deterministic, completely periodic states are also possible. Shown in Fig. 4 is the average $`<R(t)>`$ of the quantity $`R(t)=\sum _{i=1}^N|\mathrm{\Delta }s_i(t)|`$, which measures the number of spins that flip at any time; $`<\cdot >`$ denotes time averaging. In Fig. 4a are time series for $`c=0.0113`$, while Fig. 4b is for $`c=0.01`$. Fig. 4a shows that one time series finds the two-cluster periodic state, where two groups alternately topple. Here 59 time series remain in the non-periodic state. If this is a transient, it is super-long even for the moderate size $`N=200`$. Fig. 4b shows that at more regular values of $`c`$ the system can find periodic states with larger numbers of clusters. Roughly half of the 60 series investigated become periodic by $`t=2.5\times 10^7`$. Shown in Fig. 5a is $`<R(t)>`$ plotted against $`c`$. To construct this plot we discarded $`3\times 10^7`$ time steps and then averaged over the next $`20,000`$; each point represents a different initial condition, and there are $`8`$ for each cost $`c=0.001x+0.000138`$, where $`x`$ is an integer. For small cost, critical-type behaviour is evident, with a sudden phase transition at $`c=0`$. This is of course because at negative $`c`$ the less fit spins flip continuously and never interact with spins of greater than mean fitness. In fact, for negative $`c`$ the system divides into a frozen, solid, fit component and an unfit gaseous component. The size of the frozen component depends on the initial conditions, as can be seen from the points at negative cost. For larger positive cost, an upper branch of periodic attractors at $`<R(t)>=100`$, half the system size, is evident; the system has settled into two alternately toppling clusters which interleave the mean fitness $`F(t)`$. The lower branch is characterised by the punctuated-equilibrium state shown in Fig. 1. Fig. 5b shows the time average $`<S(t)>`$ of an entropy-like quantity of the fitness distribution, $`S(t)`$, given by $`S(t)=-\sum _{i=1}^N\frac{|f_i(t)|}{f(t)}\mathrm{ln}\frac{|f_i(t)|}{f(t)}`$, where $`f_i(t)`$ is the fitness deviation and $`f(t)=\sum _{i=1}^N|f_i(t)|`$. The averaging is the same as for $`<R(t)>`$. For this $`N=200`$ system the maximum is $`S=\mathrm{ln}200\approx 5.3`$, and the periodic points at positive and negative cost are very near this. The punctuated-equilibrium state, which exists near the transition, is more ordered, at lower entropy. This is our first evidence of critical behaviour for small $`c`$. Second evidence is obtained by looking at the distribution of avalanches. In the punctuated-equilibrium state the system finds a state characterised by fluctuations on all scales. Shown in Fig. 6 is the distribution $`P(R)`$ against $`R`$, where $`P(R)`$ is the probability of an avalanche of size $`R`$. These are distributions of avalanches for one time series for three different system sizes. They are not ensembles of time series; this distribution is independent of the initial conditions, and any non-periodic time series contains all avalanche sizes. The time series were of length $`T=16,000,000`$, near the transition point at $`c=0.0113`$.
The distribution shows scale invariance, $`P(R)\sim R^{-\alpha }`$, up to about half the system size. At half the system size there is a peak, where the system almost finds the periodic attractor and spends more time in these states. After this the distribution continues to the cutoff near the system size. The scaling exponent $`\alpha `$ taken from the $`N=3000`$ distribution is $`\alpha =1.085\pm 0.002`$. Also shown in Fig.7 is the distribution of the magnitude of changes in mean fitness, $`\mathrm{\Delta }F(t)=|F(t+1)-F(t)|`$, the steps in the devil’s staircase. The time series are the same as in Fig.6 for 2 different system sizes; there is no ensemble averaging. In fact the two distributions for $`N=1500,923`$ are almost identical: if we were to superpose them, only one could be seen. This is also true for other system sizes. Peaks appear at $`\mathrm{\Delta }F\approx 0.12,0.5,0.75`$. Between the peaks we see scaling regimes. Here we see at least 2 scaling regimes, $`P(\mathrm{\Delta }F)\sim \mathrm{\Delta }F^{-\beta }`$, where for $`\mathrm{\Delta }F\lesssim 0.1`$, $`\beta =1.25\pm 0.003`$, and for $`0.1\lesssim \mathrm{\Delta }F\lesssim 0.4`$, $`\beta =1.39\pm 0.02`$. Possibly there is another scaling regime for $`0.55\lesssim \mathrm{\Delta }F\lesssim 0.7`$. ## 4 Conclusion This model illustrates an interesting relation between critical phenomena and punctuated equilibrium on the one hand, and between partial synchronization and punctuated equilibrium on the other. The system synchronizes for certain costs $`c`$ and certain initial conditions; otherwise it shows critical behaviour, similar to the SOC models cited in the introduction. We believe this deserves further investigation. We also find an interesting phase transition. Some typical behaviour of money markets is present here, especially the periods of low volatility, where the price is relatively stable and the fitness grows slowly while the fitness deviation decreases slowly, interrupted by shorter periods of persistent high volatility and macroscopic oscillations, which are observed in real time series. We wonder whether, as in earthquake dynamics, which are often modelled by SOC dynamics, a large crash in a real financial market is preceded by some smaller self-reinforcing oscillatory pre-shock, as is seen in our dynamics here. Also the price time series is highly suggestive of real time series, with formations similar to ‘double tops’ and ‘rebounds’ described in quantitative analysis, produced by the near-periodic macro behaviour which can appear. The slightly repetitive self-similarity reminds us of financial time series. Many possible models of financial market dynamics can be plausibly suggested, including many exhibiting threshold dynamics, since data concerning the micro behaviour of individual traders is not available.
no-problem/9911/astro-ph9911190.html
ar5iv
text
# Anomalous X-ray pulsars and soft gamma-ray repeaters in supernova remnants ## 1. Introduction The recent detection of rapidly slowing $`\sim `$6-second pulsations from soft gamma-ray repeaters (SGRs) makes a strong argument that these sources are “magnetars”, isolated neutron stars with inferred dipole magnetic fields $`B\sim 10^{14}-10^{15}`$ G (e.g. Kouveliotou et al. 1998). Thompson & Duncan (1996) have noted that the emergent class of six “anomalous X-ray pulsars” (AXPs; van Paradijs et al. 1995) is strikingly similar to SGRs in their periods, period derivatives, X-ray luminosities, X-ray spectra, lack of evidence for binarity and coincidence with supernova remnants (SNRs). They thus propose that AXPs, like SGRs, are magnetars. In the subsequent few years, several more AXPs and SGRs have been discovered, several of which are near or in SNRs (e.g. Vasisht & Gotthelf 1997; Woods et al. 1999; Gaensler et al. 1999). Below I briefly summarise these associations, then consider what these results tell us about AXPs, SGRs, and the relationship between the two populations. ## 2. Associations of SNRs with AXPs and SGRs Claimed associations of SNRs with AXPs and SGRs are summarised in Table 1. Note that the association between SGR 1806–20 and G10.0–0.3 (Kulkarni et al. 1994) has been omitted, as the latter appears to be a synchrotron nebula powered by the SGR (or perhaps by some other source; Eikenberry, these proceedings) and gives no evidence for a supernova explosion at some point in the past. For each association, I have listed an estimated age and distance for the SNR. It should be noted that the $`\mathrm{\Sigma }-D`$ relation is not a valid method of determining distances to individual SNRs (e.g. Green 1984), and that distances derived using this method should not be taken seriously. Age estimates for SNRs are also uncertain, and usually depend on assumptions about the ambient density. The parameter $`\beta `$ corresponds to the offset of a compact object from the apparent centre of its SNR, in units of the SNR radius (e.g. Shull et al. 1989). For example, $`\beta =1`$ corresponds to an AXP or SGR sitting on the rim of its associated SNR. The column $`V_T`$ refers to the implied transverse velocity of the pulsar, using the adopted age, distance and offset. ### 2.1. Anomalous X-ray Pulsars Associations between neutron stars and SNRs are usually judged on criteria such as agreement in age/distance, positional coincidence and evidence from proper motion. Distance estimates for AXPs have uncertainties $`>`$50%, and there is no evidence that their characteristic ages ($`\tau _c\equiv P/2\dot{P}`$) are reliable age estimators. We also lack proper motion measurements for these sources, and so are left only with positional coincidence in order to judge associations. In all three cases in Table 1, the AXP is sitting almost exactly at the centre of its SNR. The probability of random superposition is thus very small, $`<`$0.2% (see Gaensler et al. 1999), and we can conclude that all three AXPs are likely to be physically associated with their coincident SNRs. The upper limits on the AXPs’ transverse velocities are entirely consistent with the velocity distribution seen for radio pulsars (e.g. Lyne & Lorimer 1994).
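Since Table 1 is not reproduced here, the conversion from offset to implied transverse velocity can be illustrated with hypothetical numbers; the calculation itself is just the offset $`\beta `$ times the SNR radius, divided by the age.

```python
def transverse_velocity(beta, snr_radius_pc, age_kyr):
    """Implied V_T in km/s for an offset of beta SNR radii,
    an SNR radius in parsecs, and an age in kyr."""
    PC_KM = 3.086e13          # kilometres per parsec
    KYR_S = 3.156e10          # seconds per kyr
    return beta * snr_radius_pc * PC_KM / (age_kyr * KYR_S)

# illustrative values only (not taken from Table 1): a source on the
# rim (beta = 1) of a 10 pc radius, 10 kyr old remnant
print(f"{transverse_velocity(1.0, 10.0, 10.0):.0f} km/s")   # ~1000 km/s
```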
Both the ages of the associated SNRs and the values of $`\beta `$ argue strongly that AXPs are young ($`<`$10 kyr) objects; the apparent absence of SNRs around the remaining three AXPs is consistent with the expectation that many (or even most) SNRs occur in low density regions, and do not produce detectable emission (Kafatos et al. 1980; Gaensler & Johnston 1995b). This result implies a Galactic birth-rate for AXPs of $`>`$0.6 kyr<sup>-1</sup>, corresponding to at least 5% of core-collapse supernovae (see Gaensler et al. 1999). ### 2.2. Soft Gamma-ray Repeaters Just as for the AXPs, we cannot appeal to age, distance or proper motion in considering associations between SGRs and SNRs. Turning to positional coincidence, we find that all three SGRs are on the edge of, or outside, their coincident SNRs. The probability of a chance coincidence increases as $`\beta ^2`$, and one consequently finds a substantially higher probability than for the AXPs that the SGR/SNR associations are spurious (e.g. Smith et al. 1999). Of the $`\sim `$10 claimed associations between SNRs and radio pulsars with $`\beta >1`$, all but one are likely to be the result of geometric projection (e.g. Gaensler & Johnston 1995a,b; Nicastro et al. 1996; Stappers et al. 1999). Thus we are left to conclude either that the SGR/SNR associations are not genuine, or that SGRs have substantially higher velocities than do radio pulsars. There is currently no way to distinguish between these possibilities; using Chandra to measure the proper motion of the SGRs seems to be the only avenue by which this might be resolved. We note that Duncan & Thompson (1992) argue that the mechanism which forms a magnetar will indeed impart the neutron star with a high recoil velocity, consistent with the values of $`V_T`$ for the SGRs in Table 1. ## 3. Relationship between AXPs and SGRs On the basis of the small value of $`\tau _c`$ for SGR 1806–20, Kouveliotou et al. (1998) have argued that SGRs eventually evolve into AXPs. Meanwhile, Gotthelf et al. (1999) appeal to the young age of the Kes 73/1E 1841–045 association to argue that AXPs evolve into SGRs! However, if all the associations in Table 1 are genuine, then AXPs and SGRs clearly have different velocity distributions and so cannot possibly be drawn from the same population, coeval or otherwise. On the other hand, if one argues that the SGR/SNR associations in Table 1 are merely chance coincidences, then the corresponding estimates of $`V_T`$ are invalidated. The absence of associated SNRs for SGRs would then imply that SGRs have ages $`>`$50–100 kyr (e.g. Shull et al. 1989; Frail et al. 1994), and the data would then be consistent with AXPs evolving into SGRs. One possible problem with this scenario is that if one extrapolates the steady spin-down seen in several AXPs to such ages (Gotthelf et al. 1999; Kaspi et al. 1999), we would then expect SGRs to have periods $`\gg `$10 s, which is not observed. ## 4. Conclusions The three associations between AXPs and SNRs are all convincing, and indicate that AXPs are young ($`<`$10 kyr), low velocity neutron stars. The three SGR/SNR associations seem less likely to be genuine, and rely on SGRs being high velocity ($`>`$1000 km s<sup>-1</sup>) objects. If the SGR/SNR associations are indeed spurious, then SGRs can be explained as older manifestations of AXPs. However, if the SGR/SNR associations are shown to be real, then we must conclude that there is no evolutionary link between SGRs and AXPs.
Possible alternatives are that AXPs are accreting systems as originally claimed (e.g. van Paradijs et al. 1995), or that there is more than one type of magnetar. #### Acknowledgments. My research is supported by NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. ## References Corbel, S., Chapuis, C., Dame, T. M., & Durouchoux, P. 1999, ApJ, 526, L29 Duncan, R. C., & Thompson, C. 1992, ApJ, 392, L9 Frail, D. A., Goss, W. M., & Whiteoak, J. B. Z. 1994, ApJ, 437, 781 Gaensler, B. M., Gotthelf, E. V., & Vasisht, G. 1999, ApJ, 526, L37 Gaensler, B. M., & Johnston, S. 1995a, PubASA, 12, 76 Gaensler, B. M., & Johnston, S. 1995b, MNRAS, 277, 1243 Gotthelf, E. V., Vasisht, G., & Dotani, T. 1999, ApJ, 522, L49 Green, D. A. 1984, MNRAS, 209, 449 Green, D. A. 1989, MNRAS, 238, 737 Kafatos, M., Sofia, S., Bruhweiler, F., & Gull, T. 1980, ApJ, 242, 294 Kaspi, V. M., Chakrabarty, D., & Steinberger, J. 1999, ApJ, 525, L33 Kouveliotou, C. et al. 1998, Nature, 393, 235 Kulkarni, S. R. et al. 1994, Nature, 368, 129 Lyne, A. G., & Lorimer, D. R. 1994, Nature, 369, 127 Nicastro, L., Johnston, S., & Koribalski, B. 1996, A&A, 306, L49 Rho, J., & Petre, R. 1997, ApJ, 484, 828 Sanbonmatsu, K. Y., & Helfand, D. J. 1992, AJ, 104, 2189 Shull, J. M., Fesen, R. A., & Saken, J. M. 1989, ApJ, 346, 860 Smith, D. A., Bradt, H. V., & Levine, A. M. 1999, ApJ, 519, L147 Stappers, B. W., Gaensler, B. M., & Johnston, S. 1999, MNRAS, 308, 609 Thompson, C., & Duncan, R. C. 1996, ApJ, 473, 322 van Paradijs, J., Taam, R. E., & van den Heuvel, E. P. J. 1995, A&A, 299, L41 Vancura, O., Blair, W. P., Long, K. S., & Raymond, J. C. 1992, ApJ, 394, 158 Vasisht, G., & Gotthelf, E. V. 1997, ApJ, 486, L129 Woods, P. M. et al. 1999, ApJ, 519, L139
no-problem/9911/hep-ph9911477.html
ar5iv
text
# On the Top Mass Reconstruction Using Leptons ## Acknowledgements I acknowledge M.L. Mangano and M.H. Seymour for their help in obtaining the presented results. I am also grateful to A. Kharchilava and B.R. Webber for useful suggestions. ## References
no-problem/9911/astro-ph9911339.html
ar5iv
text
# ZERO-METALLICITY STARS AND THE EFFECTS OF THE FIRST STARS ON REIONIZATION ## 1 INTRODUCTION We know from observational constraints that the hydrogen in the intergalactic medium (IGM) was reionized by redshift $`z5`$ (Schneider, Schmidt, & Gunn 1991) and that the He II was reionized at $`z3`$ (Reimers et al. 1997). However, the exact nature of the ionizing sources is still uncertain. The observed drop in the space density of bright quasars at $`z3`$ (Pei 1995) suggests that early stellar populations played a role in the reionization of hydrogen, but it is not known whether hot stars produced photons at rates sufficient to ionize the universe before this epoch. Our understanding of reionization is closely connected to our knowledge of the extragalactic radiation field. Models of reionization assume an extragalactic spectrum composed of the distinct intrinsic spectra of active galactic nuclei (AGN) and star-forming galaxies. This composite spectrum determines the reionization epoch, the He II/H I ionization ratio, and metal-line absorption ratios at $`z<5`$. Models by Giroux & Shull (1997) of the observed Si IV/C IV ratio at $`z3`$ (Songaila & Cowie 1996) and subsequent work by Haardt & Madau (1996) and Fardal et al. (1998) on the He II Gunn-Peterson effect demonstrated that the observations are consistent with an extragalactic spectrum produced by a mixture of QSOs and hot stars. These conclusions depend, however, on the assumed shape of the ionizing spectrum of stars. While the details of these spectra vary (Sutherland & Shull 1999; Leitherer et al. 1999), one common element is the assumption that hot stars contribute few He II ionizing photons. Some models of reionization assume a phenomenological prescription for star formation, in which a single parameter describes the conversion efficiency of mass to stars and then to ionizing photons (Gnedin & Ostriker 1997). Others use existing model grids of stellar structure and atmospheres designed for application to metal-poor environments (Haiman & Loeb 1997). The first method ignores the details of star formation, ionizing photon production, and radiation escape from the immediate regions of star formation. The second method has the drawback of applying theoretical calculations of metal-poor stars to the very different regime of zero metallicity. The existing grids of stellar evolution tracks extend to $`Z=0.001`$ (Schaerer et al. 1996 and references therein). Existing model atmospheres extend to $`Z=2\times 10^7`$ but are limited in the range of stellar parameters (Kurucz 1992). These models are meant for application to low-metallicity starbursts (Leitherer et al. 1999) and metal-poor galactic halo populations. As shown by existing models of metal-free stars (Ezer 1972; Ezer & Cameron 1971; El Eid et al. 1983), however, stars with $`Z0.001`$ are quite different from their $`Z=0`$ counterparts. Thus, when we consider the effects of stellar populations on reionization, we must use true metal-free models to predict the ionizing photon production of the first generation of stars. In this Letter we adopt the common term “Population III” or “Pop III” for metal-free stars, which are understood to have formed from primeval gas. Although extremely metal-poor populations ($`Z0.001`$) may fit an observer’s definition of Pop III, we apply that label to metal-free stars only. In § 2 we present structure and atmosphere models of metal-free stars, and in § 3 we predict ionizing photon yields from these model stars. 
In § 4 we evaluate the effects of these models on the epoch of reionization, and in § 5 we comment on further cosmological implications of metal-free stellar populations. ## 2 STRUCTURE AND ATMOSPHERE MODELS The models presented here are static stellar structure models calculated using a fitting-method technique that incorporates OPAL radiative opacities (Rogers & Iglesias 1992) and analytic expressions for energy generation. These models were used to predict the effective temperature ($`T_{\mathrm{eff}}`$), luminosity ($`L`$), and surface gravity ($`g`$) of stars with mass 2 – 90 $`M_{}`$ (at 5 – 10 $`M_{}`$ intervals). There is currently no full set of evolutionary tracks for metal-free stars, and our models do not incorporate evolution. However, the existing evolutionary tracks for metal-free stars (Castellani, Chieffi, & Tornambe 1983; Chieffi & Tornambe 1984) show that, like their metal-enriched counterparts, these stars become systematically cooler, larger, and more luminous over their H-burning lifetimes, which are similar in duration. Therefore, we assume that metal-free tracks differ in their first-order characteristics only in their starting point on the Hertzsprung-Russell (HR) diagram. If so, the “gain” in ionizing photons at $`Z=0`$ is maintained throughout the main sequence (MS) lifetime of the star, when most of its ionizing radiation is released. Existing evolutionary tracks of metal-free stars show that these stars may build up a small fraction of C nuclei ($`Z_C10^{10}`$) via the triple-$`\alpha `$ process before they join the H-burning main sequence (El Eid et al. 1983; Castellani et al. 1983). Following this result, we assume that stars with $`M15`$ $`M_{}`$ are enriched to $`Z_\mathrm{C}=10^{10}`$ via triple-$`\alpha `$ burning prior to the onset of MS H-burning. We use these pre-enriched models in all our analysis. We will explore the detailed effects of pre-MS self-enrichment in a later paper, once a comprehensive grid of tracks is available. Figure 5 shows an HR diagram for zero-age main sequence models with Pop I and Pop III metallicities. The most striking feature of the metal-free models is the high temperature they maintain at their photospheres. These stars derive their nuclear energy from a combination of inefficient proton-proton burning and CNO burning with the small fraction of C built up in the pre-MS phase (Castellani et al. 1983; El Eid et al. 1983). As a result of lower energy generation rates in the convective core, they maintain core temperatures in excess of 10<sup>8</sup> K to support the mass against gravitational collapse. Together with reduced radiative opacity in their envelopes, the high core temperatures of Pop III stars make these stars hotter and smaller than their metal-enriched counterparts. Using homology relations, and assuming constant (electron scattering) opacity and a CNO burning rate $`ϵ\rho T_c^{12}(Z_{\mathrm{CN}}/Z_{})`$, we find that massive stars have $`R(Z_{\mathrm{CN}}/Z_{})^{1/12}`$ and $`T_{\mathrm{eff}}(Z_{\mathrm{CN}}/Z_{})^{1/24}`$, in good agreement with our numerical models. With $`g`$ and $`T_{\mathrm{eff}}`$ in hand, we need a model atmosphere to derive the spectral luminosity distribution $`L_\nu `$. A model atmosphere is necessary because a simple blackbody curve for each $`T_{\mathrm{eff}}`$ will not accurately reproduce the spectrum near the ionization edges of H I, He I, and He II. 
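As an aside, the homology exponents quoted above follow in two steps; this is our own reconstruction of the standard argument, under the stated assumptions that $`L`$ is fixed by $`M`$ alone for electron-scattering opacity and that the steep $`T_c`$-dependence dominates the burning rate.

```latex
% At fixed M (and hence fixed L), the required energy generation rate
% is fixed, and the CNO rate depends most steeply on T_c, so
T_c^{12}\,\frac{Z_{\mathrm{CN}}}{Z_\odot}\simeq\mathrm{const},
\qquad T_c\propto\frac{M}{R}
\quad\Longrightarrow\quad
R\propto\left(\frac{Z_{\mathrm{CN}}}{Z_\odot}\right)^{1/12};
% then, holding L \simeq 4\pi R^2 \sigma T_eff^4 fixed,
T_{\mathrm{eff}}\propto R^{-1/2}
\propto\left(\frac{Z_{\mathrm{CN}}}{Z_\odot}\right)^{-1/24}.
```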
For each stellar model and $`\mathrm{log}g`$ – $`T_{\mathrm{eff}}`$ pair, we calculated a static, non-LTE model atmosphere that was used to predict the spectral luminosity distribution for the star. We used the atmosphere code TLUSTY (Hubeny & Lanz 1995) to produce the model atmospheres and its included package SYNSPEC to produce the continuum spectra. Figure 5 illustrates the change in the spectral distribution of a 15 $`M_{\odot }`$ star at $`Z=0.001`$ ($`T_{\mathrm{eff}}=`$ 36,000 K) and $`Z=0`$ ($`T_{\mathrm{eff}}=`$ 63,000 K). The key difference between the two spectra is the high $`T_{\mathrm{eff}}`$ of the $`Z=0`$ model. A Pop II star of the same $`T_{\mathrm{eff}}`$ would exhibit a continuum shape similar to the Pop III model but attenuated by metal-line absorption. However, a Pop II star at 15 $`M_{\odot }`$ is unlikely to reach $`T_{\mathrm{eff}}=`$ 63,000 K during its MS lifetime. ## 3 IONIZING PHOTON YIELDS The model atmospheres were used to predict the rates of ionizing photon production, $`Q_i`$, in photons s<sup>-1</sup>, for all the modeled stars: $`Q_i=4\pi R_{\ast }^2\int _{\nu _i}^{\infty }\frac{F_\nu }{h\nu }d\nu ,`$ (1) where $`R_{\ast }`$ is the radius of the star, $`F_\nu `$ is the spectral flux distribution in erg cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup>, and the indices $`i`$ = 0, 1 and 2 correspond to integration over the H I, He I, and He II ionizing continua, $`h\nu \geq h\nu _i`$ (13.60, 24.58, and 54.40 eV, respectively). The $`Q_i`$ for Pop III stars appear in Figure 5. Individual metal-free stars emit far more of their energy in photons with $`h\nu >13.6`$ eV than do their Pop I and Pop II counterparts. This increase produces a 50% gain in the total ionizing photon production at high mass (40 – 70 $`M_{\odot }`$) and gains by factors of 2 – 40 at moderate mass (10 – 30 $`M_{\odot }`$). Above 70 $`M_{\odot }`$, a larger fraction of the star’s energy is released above 1 Ryd, but the average photon energy increases such that the overall gain in $`Q_0`$ is modest. A striking feature of the ionizing photon production is the high fraction of the photons emitted with energy sufficient to ionize He I and He II. These fractions are expressed by the ratios $`Q_1/Q_0`$ and $`Q_2/Q_0`$, displayed in Figure 5. High-mass Pop III stars emit 60 – 70% of their ionizing photons in the He I continuum and up to 12% of these photons in the He II continuum. For comparison, $`Q_1/Q_0\approx `$ 0.2 – 0.4 for Pop I O3 – O5 stars with $`T_{\mathrm{eff}}`$ in the range 45,000 – 51,000 K (Vacca, Garmany, & Shull 1996; Schaerer & de Koter 1997). The overall gain in integrated ionizing photon production is best evaluated with synthetic spectra of stellar clusters (Sutherland & Shull 1999; Leitherer et al. 1999), which compare the rates $`Q_i`$ per unit mass of stellar material. Synthetic spectra of Pop III and Pop II zero-age clusters with a standard initial mass function (Salpeter IMF with $`0.1\leq M/M_{\odot }\leq 100`$) appear in Figure 5. The gain in $`Q_0`$ (s<sup>-1</sup>) is near 50% for this IMF. However, $`\mathrm{log}Q_1`$ increases from 52.4 to 52.9, and $`\mathrm{log}Q_2`$ increases from 46.1 to 51.7. This dramatic increase in the capability of stars to ionize He I and He II is not predicted by populations that approximate Pop III with existing metal-poor models. ## 4 IMPLICATIONS FOR REIONIZATION Metal-free stars emit 50% more ionizing radiation per unit mass of stellar material than metal-enriched stars. Given the uncertain efficiency of star formation out of primeval gas (Abel et al.
1998), our conclusion that Pop III stars are more efficient ionizing sources (per unit mass) does not necessarily mean that they are more capable than other stellar populations of reionizing the universe. For this reason, we calculate a star formation rate (SFR) for metal-free stars necessary to reionize the universe and test this quantity against observations. Madau, Haardt, & Rees (1999) imposed the condition that the universe is reionized when the number of ionizing photons emitted in one recombination time equals the mean number of hydrogen atoms. Accounting for clumping of the IGM and assuming a typical rate for ionizing photon production per unit mass, they found that the critical SFR required to reionize by $`z=5`$ is 0.013 $`f_{esc}^{-1}`$ $`M_{\odot }`$ yr<sup>-1</sup> Mpc<sup>-3</sup>, where $`f_{esc}`$ is the fraction of ionizing photons that escape into the IGM (see Tumlinson et al. 1999 for a review). Based on the per-mass photon emission rates of the Pop III cluster in Figure 5, we estimate the critical rate of metal-free star formation to be $`0.008f_{esc}^{-1}`$ $`M_{\odot }`$ yr<sup>-1</sup> Mpc<sup>-3</sup>. This requirement would be comparable to the inferred (highly uncertain) SFR at $`z=5`$ if $`f_{esc}`$ = 0.40 (Madau, Pozzetti, & Dickinson 1998). In addition to enhanced ionizing capability, Pop III stars convert less of their initial mass into metals. Preliminary evaluation of the metal yields of Pop III stars (Woosley & Weaver 1995) indicates that for every solar mass of stars formed, 0.007 $`M_{\odot }`$ in metals are released. Assuming that this value holds throughout the Pop III epoch, the critical SFR for H reionization by Pop III stars implies a metal enrichment rate of $`10^{-4}`$ $`M_{\odot }`$ yr<sup>-1</sup> Mpc<sup>-3</sup>. Over the 500 Myr between $`z=10`$ and $`z=5`$, Pop III stars would enrich the universe to a mean metallicity $`Z\approx 6\times 10^{-4}Z_{\odot }`$ ($`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_b=0.08`$), 20% of the minimum value $`Z=10^{-2.5}Z_{\odot }`$ observed at $`z\sim 3`$ and $`\sim `$2% of the average metallicity in damped Ly$`\alpha `$ systems at that redshift (Pettini 1999). Our new models of metal-free stars raise the possibility that stars are also responsible for He II reionization. Various studies (Reimers et al. 1997; Heap et al. 1999; Hogan, Anderson, & Rugers 1997) constrain the epoch of He II reionization to $`z\sim 3`$ with observations of patchy He II absorption. Conventional wisdom (Fardal et al. 1998; Madau et al. 1999) states that only the hard spectra of AGN could produce enough photons with $`h\nu \geq `$ 54.4 eV to reionize He II, based in part on the result $`Q_2/Q_0\lesssim 0.02`$ for low-metallicity stellar spectra to which Wolf-Rayet stars contribute only minimally (Leitherer & Heckman 1995). However, for the Pop III cluster in Figure 5, $`Q_2/Q_0=0.05`$. Thus, for $`n_{He}/n_H=0.0785`$ ($`Y=0.239`$) and assuming full ionization of H and He, the He III region excited by this cluster will have 50% the radius of its H II Strömgren sphere. This large He III region may imply that He II reionization differs in the denser regions, compared to the low-density IGM where recombinations are not important. ## 5 COSMOLOGICAL IMPLICATIONS Pop III stars can be distinguished from metal-enriched stars by their theoretical radii and effective temperatures. Unfortunately, these features are not observable directly. However, these characteristics modify the spectrum of the stars in ways that are potentially observable.
We discuss these in brief below, but defer detailed studies of metal-free stars to future work. If the ionizing spectra of clusters follow a power law, $`L_\nu \nu ^\alpha `$, we can use $`\alpha `$ to characterize the effects of Pop III stars on the intergalactic radiation field. The Pop III cluster in Figure 5 has a hard (and inverted) spectral index $`\alpha _1=1.2`$ in the range 1 – 4 Ryd and $`\alpha _4=2.0`$ for $`h\nu 4`$ Ryd. For comparison, the Pop II cluster in Figure 5 has $`\alpha _1=1.0`$ and negligible flux above 4 Ryd. Thus, metal-free stars could contribute harder radiation to the extragalactic spectrum than QSOs with intrinsic $`\alpha _s=1.8`$ between 0.9 – 2.6 Ryd (Zheng et al. 1997). At $`z5`$ this hard EUV spectrum is accessible to ground- and space-based instruments in the optical and near-UV. The dramatic increase in He II ionizing photons from metal-free stars implies that they will have distinctive effects on their neighborhoods. Simple models of Pop III H II regions show that Pop III stars excite sizable He III regions. The $`\lambda `$1640 and $`\lambda `$4686 recombination lines of He II might be detected for targets with $`z=510`$ in the 1 - 5 $`\mu `$m range with NGST. Similarly, the radio recombination lines of He II may provide a unique signature of these stars. However, He II recombination emission observed in the spectra of metal-poor extragalactic H II regions has often been attributed to Wolf-Rayet stars, X-ray binaries, or shocks. (Garnett et al. 1991; de Mello et al. 1998; Izotov et al. 1997). These sources are likely to be less important at high $`z`$, but they may complicate the identification of metal-free stellar populations. The photodissociation of H<sub>2</sub> by FUV radiation (912 – 1126 Å) from the first luminous sources in the universe is suggested to have inhibited subsequent star formation near these sources by destroying their only coolant (Haiman, Rees, & Loeb 1997; Ciardi, Ferrara, & Abel 1999). Photons with $`h\nu `$ = 11.2 – 13.6 eV can propagate freely into neutral, dust-free gas and dissociate H<sub>2</sub> by exciting permitted transitions to the $`{}_{}{}^{1}B\mathrm{\Sigma }_u^+`$ state, 10 – 15% of which decay into the continuum. This “negative feedback” is presumed to precede the ionization front by a distance dependent on the spectrum of the ionizing source. Pop III clusters, with enhanced ionizing photon production and suppressed FUV flux, may dissociate H<sub>2</sub> with their ionization fronts. The hard spectra of Pop III stars may also affect the IGM ionization ratios through changes to the extragalactic spectrum. Fardal et al. (1998) define the ratio $`\eta N_{\mathrm{HeII}}/N_{\mathrm{HI}}`$ to express the relative column densities of He II and H I. This ratio is sensitive to the shape of the extragalactic spectrum and can be used to predict the optical depths $`\tau _{\mathrm{HI}}`$ and $`\tau _{\mathrm{HeII}}`$ in the IGM. An increase in the He II ionizing flux favors He III and decreases $`\eta `$. Fardal et al. (1998) estimate that $`\eta 100`$ is necessary to explain $`\tau _{\mathrm{HeII}}`$ 1 – 5 observed at $`z3`$ (Davidsen, Kriss, & Zheng 1996). The $`Z=0`$ stellar spectra give $`\eta 30`$, inconsistent with the observed opacity if these stars are still forming at that epoch. Thus, $`Z=0`$ stars may not dominate the intergalactic radiation field at $`z=3`$. In summary, we outline a general picture of the era of Pop III stars based on our models and consistent with the observations discussed above. 
We assume that the first stars formed at $`z10`$, consistent with simulations of large-scale structure. If the total cosmic star formation rate exceeded the critical rate calculated in § 4, then these first stars may have reionized hydrogen and helium in the universe. Upon their deaths, they enriched the universe to an average metallicity 20% of that observed at $`z3`$. Population III then faded, leaving their metal-enriched progeny to provide the rest of the metals and the softer radiation seen at $`z3`$. This work was supported in part by astrophysical theory grants from NASA (NAG5-7262) and NSF (AST96-17073).
no-problem/9911/cond-mat9911013.html
ar5iv
text
# Large scale molecular dynamics simulation of self-assembly processes in short and long chain cationic surfactants ## 1 Introduction For the past two decades, amphiphilic systems have constituted a field of great interest, both from a fundamental and an industrial point of view. This is mainly due to the fact that they exhibit a rich phase diagram when mixed with water and other organic species. Their amphiphilic nature leads to the formation of self-assembled mesoscopic structures, for example spherical micelles at intermediate surfactant concentrations. At higher concentrations, or with the addition of electrolyte or organic compounds (co-surfactant), cylindrical wormlike micelles may be stabilized. These wormlike micelles confer visco-elastic properties on the fluid. Indeed, gels are frequently formed, reflecting the entanglement between worms, with viscosity dependent on temperature, salt concentration, etc. As the surfactant concentration increases, new structures appear, including vesicles and bilayers. The rheology of these amphiphilic systems is often analysed using a simple theory of self-assembly which combines both thermodynamic and geometrical arguments. According to this approach, the ability of surfactant molecules to form a particular type of structure is governed by the packing parameter $`P`$, which is the ratio of the volume of the hydrophobic tail $`v`$ to the length of this tail $`l`$ times the effective surface area per head group $`a`$: $$P=\frac{v}{la}.$$ (1) Considering $`v`$ and $`l`$ to be constant for a particular molecule, the geometry of the self-assembled structure is controlled by the effective surface area $`a`$ of the head group (for example, the addition of electrolyte tends to screen the repulsive interaction between head groups, and so to decrease the effective head group area, leading to a transition from spherical to wormlike micelles). Molecular dynamics simulations have succeeded in resolving the detailed structure of spherical micelles, and their interactions with the solvent. These simulations have commonly involved 10 to 20 surfactant molecules with up to 1000 water molecules, and have been run for a few hundred picoseconds. Under these conditions, the dynamical properties of micelles are extremely difficult to study because the time scales of dynamical processes may vary from $`10^{-11}`$ seconds (the presumed characteristic time of shape fluctuations) to $`10^4`$ seconds (the relaxation time of some shear-induced structures). The length and timescales involved in self-assembly phenomena correspond typically to the domain of applicability for different mesoscopic models including lattice-gas, lattice-Boltzmann and dissipative particle dynamics (DPD). An outstanding challenge for these upscaling methods is the definition of a coherent link between the atomistic description of the system and the related meso- and macroscopic behaviour. Some recent work has shown the feasibility of such an approach based on a systematic coarse graining of the system in order to link DPD to the microscopic world. Smit *et al.* earlier developed a molecular dynamics model using a simplified description of the component molecules. This allows a prediction of the size distribution of micelles which agrees qualitatively with both experiments and theory. Nevertheless, chemical specificity is not taken into account in this model. In this paper, we report a fully atomistic study of the structure and dynamics of micelles, based on trajectories calculated up to 3 nanoseconds.
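As a worked illustration of eq (1), the sketch below evaluates $`P`$ for a single saturated tail using the standard Tanford estimates, $`v\approx 27.4+26.9n_c`$ Å<sup>3</sup> and $`l\approx 1.5+1.265n_c`$ Å for $`n_c`$ tail carbons, together with the usual threshold values of $`P`$; these numbers come from the general self-assembly literature, not from this paper, and the head-group area used in the example is an arbitrary assumption.

```python
def packing_parameter(n_carbons, head_area_A2):
    # Tanford estimates for a single saturated tail (literature values,
    # not parameters taken from this paper)
    v = 27.4 + 26.9 * n_carbons      # tail volume, A^3
    l = 1.5 + 1.265 * n_carbons      # fully extended tail length, A
    return v / (l * head_area_A2)    # P = v / (l a), eq (1)

def preferred_geometry(P):
    # standard threshold values from the self-assembly literature
    if P < 1.0 / 3.0:
        return "spherical micelles"
    if P < 0.5:
        return "cylindrical (wormlike) micelles"
    if P < 1.0:
        return "vesicles or flexible bilayers"
    return "planar bilayers or inverted phases"

# e.g. a nine-carbon tail with an assumed head-group area of 60 A^2
P = packing_parameter(9, 60.0)
print(f"P = {P:.2f} -> {preferred_geometry(P)}")
```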
On this timescale, we can study some fast dynamical processes, such as monomer insertion or removal from a micelle, as well as local growth and fragmentation of individual micelles, which occur much faster than the approach to global thermodynamic equilibrium. The kinetics of such processes may be modelled on the basis of a generalisation of Classical Nucleation Theory (CNT). This theory considers clusters of different sizes, which exchange monomers with the surrounding medium (solvent + surfactant monomers), but neglects cluster-cluster interactions. It provides a description of the time evolution of the size distribution of clusters. A development of this model including inhibition phenomena has recently been published. The Becker-Döring equations of CNT are a special case of the more general discrete Smoluchowski coagulation-fragmentation equations, which describe rate processes between two clusters of size $`r`$ and $`s`$ and a cluster of size $`r+s`$. Although the determination of the rate coefficients for each single process is not possible from a small number of molecular dynamics trajectories, we are mainly interested here in analysing the dominant mechanism (Becker-Döring or Smoluchowski) when systems are not at equilibrium. To the best of our knowledge, this work constitutes the first fully atomistic approach to such phenomena on this timescale, for systems containing up to 15000 atoms. This has been achieved with the combined use of large parallel computers and a highly scalable molecular dynamics program. ## 2 Description of the model ### 2.1 Surfactant and water molecules Two surfactant molecules have been studied in aqueous solution, namely *n*-nonyltrimethylammonium chloride ($`\mathrm{C}_9\mathrm{TAC}`$), and erucyl *bis* \[2-hydroxyethyl\] methylammonium chloride (EMAC). The latter is known to form wormlike micellar viscoelastic fluids which find commercial use in hydraulic fracturing operations. The two surfactant molecules are shown in figure 1. In some simulations, salicylate co-surfactant molecules have been added to the EMAC solution. The force-field used to model these surfactant molecules is the “constant valence force-field” (cvff), which includes explicitly all the atoms. The total energy is the sum of the intramolecular and intermolecular interactions. The intramolecular interactions are represented as a sum of four types of term: $$\epsilon _{intra}=\epsilon _{stretch}+\epsilon _{bend}+\epsilon _{torsion}+\epsilon _{outofplane}.$$ (2) The explicit forms for each term in eq (2) are given in eqs (3-6). The bond stretching term is given by: $$\epsilon _{stretch}=\sum _{i}k_i^b(b-b_0)^2,$$ (3) where $`k_i^b`$ is the bond stretching force constant, $`b_0`$ is the equilibrium bond length, and $`b`$ is the actual bond length. The bond bending term is given by eq (4): $$\epsilon _{bending}=\sum _{i}k_i^\theta (\theta -\theta _0)^2,$$ (4) where $`k_i^\theta `$ is the bond bending force constant, $`\theta _0`$ is the equilibrium bond angle, and $`\theta `$ is the actual bond angle. The torsional contribution to the intramolecular energy is represented by a cosine series: $$\epsilon _{torsion}=\sum _{i}k_i^\varphi (1+S\mathrm{cos}(n\varphi )),$$ (5) where $`k_i^\varphi `$ is the torsional force constant, S is a phase factor (equal to $`1`$ or $`-1`$ depending on the dihedral angle considered), and $`\varphi `$ is the torsional angle.
The out-of-plane term describes the resistance to out-of-plane bending and is expressed by a quadratic distortion potential function: $$\epsilon _{outofplane}=\sum _{i}k_i^\chi \chi ^2,$$ (6) where $`k_i^\chi `$ is the bending constant and $`\chi `$ is the bending angle. The expression for the non-bonded interactions is: $$\epsilon _{inter}=\epsilon _{vdW}+\epsilon _{coulombic}$$ (7) where the summations are performed over all the non-bonded pairs of atoms. A Lennard-Jones 12-6 pair interaction is used for the van der Waals energy, $`\epsilon _{vdW}`$, and the partial charges involved in the coulombic term were computed using the semi-empirical quantum mechanical program MOPAC, within the AM1 approximation. The charge on the $`N(CH_3)_3`$ head group of the $`\mathrm{C}_9\mathrm{TAC}`$ surfactant was found to be $`+0.88`$, and $`+0.90`$ for the EMAC head group $`N(C_2H_4OH)_2CH_3`$. Water molecules are represented using the Jorgensen TIP3P model: interactions between water molecules are described by a Lennard-Jones potential between oxygen atoms and electrostatic contributions between all atoms (hydrogen and oxygen). All the parameters of the TIP3P model are shown in table 1. In order to check the performance of the TIP3P water force-field in predicting the bulk properties of water, we performed a molecular dynamics simulation of liquid water and compared the computed properties, such as water density, diffusion coefficient and radial distribution function between oxygen atoms, to the corresponding experimental data. More details about the simulation procedure can be found in ref. The calculated bulk water density and self-diffusion coefficient, as averages over the stored time series of particle coordinates, are listed in table 2. Both values compare well with experiments. ### 2.2 Molecular dynamics method Several distinct initial configurations of surfactant molecules surrounded by water molecules were constructed. They consisted of an infinite wormlike micelle, a spherical micelle, or a random distribution of surfactant molecules. Forty-eight or fifty surfactant molecules were employed in the cases of $`\mathrm{C}_9\mathrm{TAC}`$ and EMAC respectively. The number of surrounding water molecules was approximately equal to 3000 in each simulation. In some cases, electrolyte (NaCl) and/or co-surfactant (salicylate) was also added to the solution. At the beginning of the dynamics simulation the total potential energy was minimised in order to generate a reasonable starting point. To carry out the minimisation, we used a truncated Newton-Raphson method requiring evaluation of the second derivative of the potential energy with respect to the atomic coordinates. After minimisation, random velocities selected according to a Maxwellian distribution at a temperature of 300 K were assigned to each atom. The pressure was set to 1 atm and the temperature was fixed at 300 K. Pressure and temperature were controlled using the Nosé-Hoover algorithm. In most of our studies, the total simulation time was larger than 1 ns. To integrate the Newtonian equations of motion for all atoms, we used the Verlet leapfrog algorithm with a timestep of 1 fs. Periodic boundary conditions were applied in all three spatial directions and Ewald summation was used to handle the long-range electrostatic interactions, in conjunction with the particle-particle / particle mesh (PPPM) method, an $`\mathcal{O}(N\mathrm{log}N)`$ algorithm. A cut-off radius of $`10.0`$ $`\mathrm{\AA }`$ was used for non-coulombic interactions.
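A minimal sketch of the intramolecular bookkeeping in eqs (2)–(6) is given below; the force constants and equilibrium values are placeholders chosen for illustration, not the actual cvff parameters.

```python
import numpy as np

# Placeholder parameters (NOT the real cvff values); energies in kcal/mol
def e_stretch(b, k_b=300.0, b0=1.53):        # eq (3); b in Angstroms
    return k_b * (b - b0) ** 2

def e_bend(theta, k_t=60.0, theta0=1.911):   # eq (4); angles in radians
    return k_t * (theta - theta0) ** 2

def e_torsion(phi, k_p=1.4, S=1.0, n=3):     # eq (5)
    return k_p * (1.0 + S * np.cos(n * phi))

def e_out_of_plane(chi, k_c=10.0):           # eq (6)
    return k_c * chi ** 2

# eq (2): the intramolecular energy is the sum of the four terms,
# evaluated here for a single bond/angle/dihedral/out-of-plane set
e_intra = (e_stretch(1.55) + e_bend(1.95)
           + e_torsion(np.pi / 3.0) + e_out_of_plane(0.05))
print(f"e_intra = {e_intra:.2f} kcal/mol")
```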
### 2.3 Parallel implementation of molecular dynamics All MD simulations were carried out either on a Silicon Graphics Origin 2000 or on a Cray T3E using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code. LAMMPS is a highly scalable classical molecular dynamics code designed for simulating molecular and atomic systems on parallel supercomputers. To study large systems of molecules for a large number of time steps, an algorithm is required that has a very good speedup with the number of processors used. This speedup has been calculated up to 1024 processors on a 1500-node T3E, and the results display the desired linear scalability property. The same calculations have also been performed on a Silicon Graphics Origin 2000 using up to 32 processors. Results of the benchmarks are displayed in figure 2, where the speedup is given by: $$\mathrm{speedup}(k)=\frac{\mathrm{time}(1\text{ processor})}{\mathrm{time}(k\text{ processors})}.$$ (8) One can see that the parallel performance of the LAMMPS code is superior to that of the MD codes in the commercial package $`\mathrm{Cerius}^2`$. The superlinear behaviour observed in the case of LAMMPS, related to the efficient calculation of the coulomb interactions and the use of spatial domain decomposition, can be understood in terms of cache memory utilisation, which is sub-optimal for one (or a very small number) of processors. ## 3 Structure and dynamics of $`\mathrm{C}_9\mathrm{TAC}`$ micelles The instantaneous configurations of the system are displayed using the MSI $`\mathrm{Cerius}^2`$ package. Some images employ Connolly surfaces calculated specifically for some molecules or fragments. A Connolly surface is the van der Waals surface of the molecule/fragment that is accessible to a solvent molecule. In all cases studied in this paper, the solvent molecule is a water molecule. The blue square appearing in each snapshot represents the simulation box. ### 3.1 Spherical micelle A spherical micelle was built with 48 $`\mathrm{C}_9\mathrm{TAC}`$ surfactant molecules surrounded by 2997 water molecules in a cubic box with sides of length 70 $`\mathrm{\AA }`$. It took around $`50`$ ps of molecular dynamics at T=300 K for the volume of the simulation cell to reach a stable value $`V=1.1\times 10^{-25}`$ $`m^3`$, corresponding to a density of $`0.97`$ $`g/cm^3`$. The surfactant concentration in the simulated solution is thus $`C=0.72`$ $`mol/l`$. The values of the critical micelle concentration (cmc) for similar surfactant molecules like $`n`$-decyltrimethylammonium chloride ($`\mathrm{C}_{10}\mathrm{TAC}`$) or $`n`$-dodecyltrimethylammonium chloride ($`\mathrm{C}_{12}\mathrm{TAC}`$) in water are reported as $`0.05`$ $`mol/l`$ and $`0.061-0.065`$ $`mol/l`$ respectively. Our simulations are performed at a much higher concentration than the cmc; micelles would thus be expected to form. Figure 3 shows snapshots of the system at different times during the simulation. Water molecules are not displayed, in order to give a more detailed view of the structure of the micelle. The calculated values of the radius of gyration, the ratios of the lengths of the principal axes, and the radius of the micelle at different timesteps are reported in table 3. After $`50`$ ps, both the volume and the total energy of the system have reached stable values. Figure 3 gives evidence of the spherical shape of the micelle.
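The quoted concentration follows directly from the equilibrated cell volume; a quick check of the arithmetic (our own):

```python
N_A = 6.022e23                       # Avogadro's number
n_surfactant = 48
V = 1.1e-25                          # equilibrated cell volume, m^3
C = n_surfactant / N_A / (V * 1e3)   # mol/l (1 m^3 = 10^3 litres)
print(f"C = {C:.2f} mol/l")          # ~0.72 mol/l, as quoted
```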
The polar head groups are located on the micellar surface and are in direct contact with water molecules; the alkyl chains are directed into the hydrophobic core. Analogous views of the system at $`600`$ ps, $`1.1`$ ns and $`3`$ ns are displayed in figure 3. After $`600`$ ps, we observe that 3 surfactant molecules have left the micelle. At this stage, the micelle contains $`45`$ molecules. Its shape is ellipsoidal rather than spherical, as can be deduced from the ratios of the lengths of the principal axes (see table 3). After $`1.1`$ ns of simulation, the micelle has broken down into two smaller spherical micelles, one containing $`29`$ surfactant molecules and the other one $`15`$ molecules. The four remaining surfactant molecules are solubilized as monomers in the water. The observed mechanism through which the micelle breaks into two smaller entities is as follows: the initial micelle undergoes a structural change from spherical to ellipsoidal, one of the principal axes becoming twice as long as the other two. This ellipsoid looks like a small dumbbell (the density of surfactant molecules at its middle is small). Finally, the dumbbell separates into two spherical entities. A rapid reorganisation of the surfactant molecules occurs after the separation. Experimental results of Imae *et al.* on several similar surfactants in aqueous solution indicate the presence of spherical micelles with average aggregation numbers of $`84`$ for $`\mathrm{C}_{16}\mathrm{TAC}`$ ($`n`$-hexadecyl trimethylammonium chloride), $`62`$ for $`\mathrm{C}_{14}\mathrm{TAC}`$ ($`n`$-tetradecyl trimethylammonium chloride), and $`44`$ for $`\mathrm{C}_{12}\mathrm{TAC}`$ ($`n`$-dodecyl trimethylammonium chloride). An extrapolation of their results would predict an average aggregation number of approximately $`25`$ for $`\mathrm{C}_9\mathrm{TAC}`$. At the end of the simulation (figure 3), several surfactant molecules have left the larger micelle while the smaller one has increased its size by adsorption of one further monomer. The two micelles finally contain $`24`$ and $`16`$ surfactant molecules, with $`8`$ isolated surfactant monomers remaining in the solution. Three nanoseconds of molecular dynamics simulation are clearly not long enough to guarantee that the system has reached thermodynamic equilibrium. At equilibrium, spherical micelles are known to exhibit a size distribution rather than one particular aggregation number. The size of a micelle thus evolves in time, over a range given by the polydispersity of the size distribution function. In this simulation, one micelle has adsorbed one surfactant monomer while the other micelle has desorbed a few monomers. ### 3.2 Infinite cylindrical micelle A cylindrical micelle containing $`48`$ $`\mathrm{C}_9\mathrm{TAC}`$ surfactant molecules was constructed by putting together $`6`$ discs of $`8`$ molecules each. This cylindrical micelle was surrounded by $`4305`$ water molecules. The length of the initial cylinder was 30 $`\mathrm{\AA }`$ and 3D periodic boundary conditions were applied in order to simulate infinite cylinders. Figure 4 shows the initial configuration (viewed perpendicular to the axis of the cylinder), and instantaneous configurations taken after $`200`$ ps, $`700`$ ps and at the end of the simulation ($`1.1`$ ns); it takes approximately $`80`$ ps to reach stable values of the volume of the simulation cell and the total energy. The micelle evolves in time and some distortions from the initial configuration appear quickly.
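As an aside on the shape diagnostics quoted in table 3: the radius of gyration and the principal-axis ratios follow from the eigenvalues of the gyration tensor of the micelle's atomic coordinates. A sketch, with random coordinates standing in for a micelle snapshot:

```python
import numpy as np

def shape_descriptors(xyz):
    # gyration tensor of the (n_atoms, 3) coordinate array
    r = xyz - xyz.mean(axis=0)
    S = r.T @ r / len(xyz)
    evals = np.sort(np.linalg.eigvalsh(S))[::-1]   # a^2 >= b^2 >= c^2
    Rg = np.sqrt(evals.sum())                      # radius of gyration
    axes = np.sqrt(evals)
    return Rg, axes / axes[0]     # ratios relative to the longest axis

rng = np.random.default_rng(1)
xyz = rng.normal(scale=(8.0, 6.0, 6.0), size=(500, 3))  # mock snapshot
Rg, ratios = shape_descriptors(xyz)
print(f"Rg = {Rg:.1f} A, axis ratios = {np.round(ratios, 2)}")
```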
After $`200`$ ps of simulation, the micelle is still cylindrical, but the density of molecules along the axis of symmetry of the cylinder no longer appears to be uniform: the surfactant cations clump together, forming high density regions, while other regions of the worm exhibit a small number of surfactant molecules. The latter correspond to “weaker” points, i.e. preferential zones for fragmentation of the worm, with weaker hydrophobic tail interactions. Such a phenomenon signals the incipient break-up of the infinite cylinder, as is clearly seen in figure 4 after $`700`$ ps of simulation: the initial cylinder has expelled a small spherical micelle comprising $`15`$ surfactant molecules. The characteristic values of this spherical micelle’s radius of gyration and the lengths of its principal axes are reported in table 3, providing evidence of its spherical shape. The rest of the cylindrical micelle contains $`30`$ cations and exhibits a non-spherical structure. At the end of the simulation ($`t=1.1`$ ns), the small spherical micelle contains $`14`$ surfactant cations, and the other one $`30`$ monomers. The remaining four monomers are located in the aqueous solvent. The different characteristic shapes of these micelles are reported in table 3. This state is similar to the state reached in the previous simulation after fragmentation of the spherical micelle into two micelles of sizes 15 and 30. We thus conclude that the time evolution of this state would probably produce the same features as those described previously, tending to thermodynamic equilibrium. ### 3.3 Infinite cylindrical micelle with electrolyte The addition of electrolyte to an aqueous solution of surfactant micelles is known empirically to preferentially stabilize the cylindrical shape. For a cationic surfactant, the negative ions of the added salt associate with the positively charged head groups of the surfactant. Such associations reduce the strong electrostatic repulsion between neighbouring head groups that exists in the cylindrical structure, where the latter lie closer together than they do on the surface of a sphere (thus the packing parameter $`P`$ increases due to a decrease in the surface area per head group $`a`$). Our aim here was to investigate whether $`\mathrm{C}_9\mathrm{TAC}`$ cylindrical micelles could indeed be stabilized by adding an electrolyte. The model was constructed in the same way as the previous one for an infinite cylindrical micelle, but now $`\mathrm{Na}^+,\mathrm{Cl}^-`$ ion-pairs were added to the solution. Specifically, the system was composed of $`48`$ surfactant cations (and $`48`$ chloride counterions), $`2997`$ water molecules, and $`100`$ sodium chloride ion pairs, each ion being placed at random within the water molecules of the solvent. The concentration of chloride anions is thus $`C=1.64`$ $`mol/l`$. A view perpendicular to the principal axis of the cylinder is shown in figure 5. After $`100`$ ps of MD simulation (figure 5), the infinite micelle has broken down into a finite micelle. Its shape remains roughly cylindrical, as evidenced by the ratios of the principal axes reported in table 3. In order to compare the stability of this structure with that of the spherical micelles produced in the absence of electrolyte, the simulation was run up to $`2.5`$ ns.
At the end of the simulation (figure 5), the micelle still exhibited a cylindrical shape, containing $`44`$ surfactant molecules (four monomers left the cylinder and were individually dissolved in the water): it is an example of a finite, rod-like micelle (a self-assembled structure which has been observed experimentally). However, we cannot confirm that this system is near equilibrium, owing to the lack of either longer-time simulation data or relevant experimental results. ### 3.4 Random monomer distribution A starting configuration of $`48`$ surfactant and $`2997`$ water molecules was prepared, the surfactant cations and chloride counterions being randomly distributed throughout the system (see figure 6). The simulation cell was a cubic box with sides of length $`70`$ $`\mathrm{\AA }`$, corresponding to an initial surfactant density of $`0.3`$ $`\mathrm{g}/\mathrm{cm}^3`$. The simulation was run up to $`900`$ ps in order to obtain information on the mechanism of surfactant micelle formation. At $`t=75`$ ps, the system has already become inhomogeneous, with some low and high density regions (figure 6). In the high density regions, no particular arrangement of the surfactant molecules is discernible. A structural aggregation between the surfactant particles is seen at $`t=200`$ ps (figure 6), leading to the appearance of two micelles. Each micelle is composed of approximately $`15`$ molecules. By examining several instantaneous configurations between $`t=75`$ and $`t=200`$ ps, we can obtain insight into the dynamics of spherical micelle formation at the molecular level. It appears that in the first stage (between 0 and 100 ps), the surfactant molecules approach one another, forming aggregates without any well-defined organisation, both from a translational and a rotational point of view (figure 6). The size of these disordered micelles is small ($`\sim 10`$ molecules). In the second step these random aggregates rearrange to form spherical micelles. The driving force for this rearrangement appears to be the minimisation of the repulsive interactions between head groups, together with the hydrophobic attractions between the hydrocarbon tails. During this structural rearrangement, the aggregation number of the micellar clusters increases via the addition of small clusters of surfactant (typically 2 or 3 molecules). Finally, by the end of the simulation, both micelles exhibit a spherical shape, containing $`15`$ and $`17`$ surfactant molecules respectively. Their characteristic radii are reported in table 3. A small number of monomers remain solvated in the surrounding aqueous medium. The sizes of the micelles are consistent with the assumption of a mean aggregation number of around 15-20. ## 4 Structure and dynamics of EMAC micelles ### 4.1 Spherical micelle A simulation cell was constructed, comprising $`40`$ EMAC molecules within a spherical micelle, together with $`8`$ solvated monomers, $`48`$ chloride counterions and 3497 water molecules. Molecular dynamics was performed on this system up to $`1.05`$ ns. It took $`120`$ picoseconds for the volume of the simulation cell and the total energy to reach stable values (the surfactant concentration is equal to $`0.54`$ $`mol/l`$). At this point in the simulation, the characteristic dimensions of the spherical micelle are as reported in table 4. During the entire simulation, the micelle maintained its spherical shape. Contrary to the case of $`\mathrm{C}_9\mathrm{TAC}`$ spherical micelles, no fragmentation occurred within this timescale.
This is possibly due to the fact that the expected aggregation number for a spherical EMAC micelle may be greater than $`40`$. Nevertheless, two surfactant molecules initially dissolved in water approached the micelle and adhered to its surface. This phenomenon is shown in figure 7, where the two adsorbing molecules are highlighted. We can see that the two molecules remain close to each other like a dimer, even after their adsorption at the micellar surface. Their hydrophobic tails are not directed toward the centre of the micelle; rather, they remain in close proximity to water along their entire length. Figure 7 shows a view of the final configuration after $`1.05`$ ns. The shape of the micelle is spherical, and 6 surfactant molecules remain dispersed in water. ### 4.2 Infinite cylindrical micelle A cylindrical micelle of EMAC molecules was constructed by combining five discs of 10 molecules each. Due to the periodic boundary conditions, the cylinder is effectively infinite in the direction of its principal axis. This micelle was surrounded by $`3119`$ water molecules and $`50`$ chloride ions. The system took $`100`$ ps to equilibrate (that is, to attain stable values of the simulation cell volume and the total energy). Over a total period of $`1.85`$ ns of molecular dynamics, the micelle retained its cylindrical shape. No surfactant monomers were present in the surrounding water at the beginning of the simulation, and neither desorption of monomers nor any other form of fragmentation was observed: the micelle remained an infinite cylinder throughout. Figure 8 shows a projection of the system on a plane perpendicular to the axis of the cylinder, at the end of the simulation. Water molecules are also shown. The cross section of the micelle is more ellipsoidal than circular, as can be deduced from the values of the ratios of the lengths of the principal axes (see table 4). Moreover, no penetration of water molecules in the micelle core was observed; water molecules remain at the micellar surface, in close proximity to and surrounding the head groups. The core of the micelle is indeed completely anhydrous. The dimensions of a cross section of the micelle are reported in table 4. A snapshot of the system is also shown in figure 8, perpendicular to the axis of the cylinder, where three images of the periodic simulation cell are displayed. The micelle exhibits regions of varying density along its principal axis. Some local regions of the worm are narrower than others, corresponding to “weaker” points. With our enhanced understanding of the behaviour of $`\mathrm{C}_9\mathrm{TAC}`$ micelles, this could be interpreted as the incipient site of rupture of the cylinder into smaller spherical micelles. Nevertheless, we believe that this system is still at some distance from equilibrium, because the final state in this simulation is very different from the one obtained in the previous simulation (spherical micelle), while the compositions of both systems are more or less identical. ### 4.3 Random EMAC monomer distribution A simulation was set up starting from a random distribution of $`50`$ EMAC surfactant molecules in a simulation cell containing $`3119`$ water molecules and $`50`$ chloride ions. Figure 9 displays the initial configuration of the system, from which molecular dynamics was performed for $`1.3`$ ns. At t = 500 ps, two surfactant clusters can already be seen, each containing $`20`$ molecules; these are shown in figure 9.
The Connolly surface has been calculated for all atoms of the surfactant molecules and is displayed in yellow. The first micelle does not exhibit a dense spherical shape, as can be seen from the presence of several voids in the structure. The second micelle is spherical, however, with a uniform density of surfactant molecules in all directions, leading to a compact structure. The structural differences between the two micelles are summarized succinctly in terms of the various characteristic geometric parameters reported in table 4. By the end of the simulation, no significant change has occurred, and the final configuration, consisting of two micelles of sizes $`22`$ and $`20`$, is shown in figure 9. One micelle has grown while the other has kept its size unchanged; eight monomers remain elsewhere in the solution. The final geometrical parameters for the two clusters are also reported in table 4.

### 4.4 Infinite cylindrical micelle with electrolyte and co-surfactant

An infinite cylindrical micelle was built as previously described. It contained $`50`$ molecules, and was surrounded by $`3322`$ water molecules, $`150`$ chloride ions, $`109`$ sodium ions, and $`9`$ salicylate co-surfactant molecules. The co-surfactant molecules were placed outside the micelle. Molecular dynamics was performed on this 3D periodic system for $`400`$ ps. During this period, the cylindrical shape remained stable (as in the previous study without additional electrolyte and co-surfactant, see section 4.2), and its geometrical parameters are reported in table 4. As in the case without additional electrolyte, the cross-section of the cylinder is not circular but rather elliptical, one principal axis being markedly greater than the other. Figure 10 shows two views of the micelle at the end of the simulation. In the first one, all atoms in the system are displayed. The Connolly surface has been calculated for the co-surfactant molecules and is displayed in yellow. We can see, as in the case without added salt, that no water molecules have penetrated the micelle. A large number (six out of nine) of the co-surfactant molecules have entered the micelle structure, being adsorbed on its surface and surrounded by the surfactant head groups. The second snapshot shows the micelle in a view parallel to the axis of the cylinder; the Connolly surface has been calculated for the atoms of the head groups only. The surfactant molecules appear to be homogeneously dispersed along the structure, with no sign of weak points or incipient sites of fragmentation. Nevertheless, we can see that large voids remain in the structure, providing evidence of direct contacts between the hydrophobic tails of the surfactant and the water molecules at the micellar surface. These voids are not associated with inhomogeneous density regions but are more likely due to an insufficient initial density of surfactant and co-surfactant.

## 5 Discussion of results

### 5.1 $`\mathrm{C}_9\mathrm{TAC}`$ surfactant micelles

For micellar systems, the characteristic time for full thermodynamic equilibration is generally longer than a microsecond. Nevertheless, the study of the behaviour of such systems on nanosecond timescales can provide useful information on the dynamics and structure of micelles. Indeed, some processes, such as monomer adsorption or desorption and the fragmentation of cylindrical or spherical micelles, can be simulated on this timescale.
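The geometric descriptors used in the discussion that follows (radius of gyration, lengths and ratios of the principal axes) can all be derived from the gyration tensor of a cluster. The sketch below is one minimal way to do so; equal weighting of the coordinates, unwrapped periodic images, and the conversion of eigenvalues to semi-axes via an equivalent uniform ellipsoid are our assumptions, since the precise definitions behind table 4 are not spelled out.

```python
import numpy as np

def shape_descriptors(r):
    """Shape of one micelle from its gyration tensor.
    r: (N, 3) coordinates, assumed unwrapped and equally weighted."""
    dr = r - r.mean(axis=0)                 # remove the centre of mass
    S = dr.T @ dr / len(r)                  # 3x3 gyration tensor
    eig = np.sort(np.linalg.eigvalsh(S))    # eigenvalues, ascending
    r_gyr = np.sqrt(eig.sum())              # radius of gyration
    axes = np.sqrt(5.0 * eig)               # semi-axes of the equivalent
                                            # uniform ellipsoid
    return r_gyr, axes, axes[2] / axes[0]   # Rg, semi-axes, axis ratio

# e.g.: rg, axes, ratio = shape_descriptors(micelle_xyz)
# ratio ~ 1 indicates a sphere; a ratio well above 1, a rod.
```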
Starting from different initial configurations (spherical, cylindrical or random distribution of surfactant molecules) of essentially similar composition, we observe that these systems tend to evolve to the same state, which is characterised by the existence of two quasi-spherical micellar clusters of small aggregation number (between $`15`$ and $`20`$) and a few monomers isolated in the water. The radius of gyration of each micelle is roughly equal to $`11`$ $`\mathrm{\AA }`$, and the radii of the micelles (calculated as the average distance between the centre of the micelle and the nitrogen atoms) are $`7.58`$ $`\mathrm{\AA }`$. These micelles are non-spherical *on average*, the average ratio of the lengths of the principal axes being significantly different from unity. The route by which this final state is reached, however, depends on the initial configuration: in the case of a spherical or cylindrical micelle, we observe fragmentation into two smaller clusters, whereas in the case of a random distribution of surfactant cations, we directly observe the *formation* of micelles. The formation process starts with the appearance of a small, disordered aggregate of surfactant molecules. A reorganisation of the surfactant molecules then occurs, the head groups becoming anchored at the micellar surface. The time needed for some kinds of intramicellar rearrangement (following fragmentation) can be very short. As an illustration, a simulation was performed to investigate the rearrangements of surfactant molecules at the end of a finite rod-like micelle. The initial configuration was a finite rod-like micelle lacking end-caps, thus exposing extensive amounts of its hydrophobic interior to direct contact with water. In the first $`10`$ ps of simulation, the surfactant molecules move to create hemispherical end-caps. The time needed for several molecules to rearrange inside a micelle is thus very short, at least when high-energy configurations are involved. In general, growth of micelles is achieved by the addition of groups (typically dimers or trimers) of surfactant cations rather than by the stepwise addition of single monomers. We believe that this mechanism of micelle growth, corresponding to a Smoluchowski kinetic model,

$$C_r+C_s\underset{b_{r+s}}{\overset{a_{r,s}}{\rightleftharpoons }}C_{r+s}$$ (9)

where $`C_r`$ is a micellar cluster of aggregation number $`r`$, and $`a_{r,s}`$ and $`b_{r+s}`$ are the forward and backward rate coefficients for the aggregation and fragmentation processes, is valid when the system is far from equilibrium and at surfactant concentrations well above the c.m.c., because it leads to a faster approach to equilibrium than a Becker-Döring scheme,

$$C_r+C\underset{b_{r+1}}{\overset{a_r}{\rightleftharpoons }}C_{r+1}$$ (10)

i.e. a purely stepwise addition or removal of surfactant monomers. On the other hand, starting from a spherical or cylindrical micelle, we can observe its break-up. In both cases, the initial micelle breaks into two clusters of different size, one containing approximately $`15`$ molecules and the other $`30`$. The fact that a fragmentation process is observed confirms the validity of the Smoluchowski mechanism when the system is far from equilibrium. The size of the smaller micelle corresponds approximately to the equilibrium size of a $`\mathrm{C}_9\mathrm{TAC}`$ micelle, and it remains essentially unchanged for the rest of the simulation.
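To make the contrast between the two kinetic schemes concrete, the sketch below integrates the Becker-Döring equations (Eq. 10) for a closed system of clusters; the rate coefficients are illustrative constants rather than values extracted from our trajectories. The Smoluchowski scheme (Eq. 9) generalises the flux so that clusters of any two sizes may coalesce or fragment, which is why it relaxes faster far from equilibrium.

```python
import numpy as np

R_MAX = 60                     # largest cluster size retained
A, B = 1.0, 0.02               # illustrative forward/backward rate constants
c = np.zeros(R_MAX + 1)
c[1] = 1.0                     # start from a solution of monomers only

def step_becker_doering(c, dt):
    """One explicit Euler step of Eq. (10): C_r + C_1 <-> C_{r+1}.
    J[r] is the net flux from size r to size r+1 (r = 1 .. R_MAX-1);
    factor-of-two conventions for the r = 1 dimerisation are ignored."""
    J = A * c[1] * c[1:-1] - B * c[2:]
    dc = np.zeros_like(c)
    dc[1:-1] -= J              # each forward step consumes one C_r ...
    dc[2:] += J                # ... produces one C_{r+1} ...
    dc[1] -= J.sum()           # ... and also consumes one monomer
    return c + dt * dc

for _ in range(20000):
    c = step_becker_doering(c, 1.0e-3)

r = np.arange(R_MAX + 1)
# total mass (r*c).sum() is exactly conserved by the update above
print("mean cluster size:", (r * c).sum() / c.sum())
```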
The size of the larger micelle does not correspond to the equilibrium size, and we observe a decrease of its aggregation number with time. This decrease is a stepwise process, following the Becker-Döring scheme. It is notable that the initial micelle produces two micelles of different sizes (one of which appears to be close to the mean equilibrium cluster size) rather than two micelles of equal size, both larger than the equilibrium size. For systems not far from equilibrium, and/or for surfactant concentrations not greatly exceeding the c.m.c., a Becker-Döring scheme is more likely to be observed, characterized by a stepwise change in the aggregation number. Figure 11 shows the evolution of the radius of gyration of the two micelles (initially one spherical micelle) during the last 2 ns of simulation. The lower curve is associated with the smaller micelle, which grows from $`15`$ to $`16`$ molecules during this part of the simulation. The radius of gyration of this micelle fluctuates around its average value of $`10.63`$ $`\mathrm{\AA }`$, with a standard deviation of $`0.29`$ $`\mathrm{\AA }`$. This may correspond to the behaviour of a micelle near equilibrium. The upper curve is associated with the larger micelle; arrows indicate the moments at which individual surfactant monomers leave the micelle. The main feature exhibited by the radius of gyration is a tendency to decrease, and this decrease is associated with the loss of monomers. Moreover, we can see that the curve exhibits oscillations of large amplitude (greater than a typical fluctuation). This corresponds to an expansion-contraction process: the expansion of the micelle corresponds to an elongation in one direction, and the surfactant molecules leave the micelle during the contraction. The contraction increases the repulsive interactions between head groups, leading to the desorption of a molecule. In order to highlight these shape fluctuation phenomena in the two micelles, the autocorrelation function of fluctuations in the ratio of the lengths of the principal axes has been computed as:

$$C(t)=\frac{\langle \delta R(t)\delta R(0)\rangle }{\langle \delta R(0)\delta R(0)\rangle }$$ (11)

where the fluctuation $`\delta R(t)`$ is defined as

$$\delta R(t)=R(t)-\langle R(t)\rangle ,$$ (12)

and $`R(t)`$ is the ratio of the lengths of the principal axes. These functions are displayed in figures 12 and 13. The curve pertaining to the smaller cluster exhibits quasi-periodic oscillations, suggesting a time scale for the shape fluctuations of about $`50`$ ps. This agrees well with what has been previously observed in a 100 ps molecular dynamics simulation of a sodium octanoate micelle containing $`15`$ surfactant anions in water, where the time scale for a shape fluctuation was found to be equal to 30 ps. The curve associated with the larger micelle (figure 13) shows a different behaviour: periodic oscillations of large amplitude are seen, with a periodicity of $`\sim 500`$ ps. This time interval is associated with the slow expansion-contraction process described above; in this system, the fast shape fluctuation process has disappeared. The quasi-periodicity observed in these oscillations is possibly due to the fact that the rate of monomer desorption varies slowly with the cluster size. These results indicate that a shape fluctuation can be coupled to a desorption process, leading to a considerable increase in its characteristic time scale.
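Equations (11) and (12) translate directly into a short analysis routine. A minimal sketch, assuming a uniformly sampled time series of the principal-axis ratio and approximating the ensemble average by an average over time origins:

```python
import numpy as np

def autocorr(series):
    """Normalised autocorrelation of fluctuations, Eqs. (11)-(12):
    C(t) = <dR(t) dR(0)> / <dR(0) dR(0)>, with dR = R - <R>."""
    dR = series - series.mean()
    n = len(dR)
    c = np.correlate(dR, dR, mode="full")[n - 1:]  # lags 0 .. n-1
    c /= np.arange(n, 0, -1)                       # number of origins per lag
    return c / c[0]

# e.g.: C = autocorr(axis_ratio)
# the period of the oscillations in C(t) gives the shape-fluctuation
# time scale read off figures 12 and 13.
```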
The time scale of shape fluctuations in spherical micelles thus depends on their size (and therefore on their proximity to equilibrium). When electrolyte is added to the system, the dynamical behaviour changes: the initially infinite micelle breaks up, losing its infinite length. It thus becomes a finite rod, but still retains cylindrical symmetry. This small rod-like micelle is stable over the $`2.5`$ ns of simulation. The addition of salt is known to reduce the effective surface area per head group, leading to the stabilization of the cylindrical shape. The head groups in a rod-like micelle are thus expected to be closer together than in a spherical micelle. Figure 14 shows the radial distribution function between nitrogen atoms, calculated over the last nanosecond of the simulations described in section 3, in the three cases of a spherical micelle, a cylindrical micelle, and an initially random configuration. The three curves exhibit their main peak at roughly the same distance ($`\sim 9`$ $`\mathrm{\AA }`$), while some differences can be discerned at shorter distances. The curves pertaining to the spherical micelle and the initially random surfactant configuration are very similar; calculations of the mean total energies and volumes of the simulation cells confirm the similarity between these two systems. By contrast, the curve associated with the cylindrical micelle exhibits a more pronounced secondary peak at short N-N distances, providing evidence of a larger number of close head-group contacts for this micellar structure. Figure 15 shows the radial distribution functions calculated between nitrogen atoms and, respectively, chloride ions, sodium ions, and the oxygen atoms of water molecules. The short contact distances in the N-O (water) and N-Cl radial distribution functions indicate a high degree of structuring around the polar head groups, the closest distances corresponding to nitrogen-oxygen interactions. Moreover, the appearance of a second (outer) peak in the N-Cl and N-O (water) radial distribution functions is associated with the existence of a second solvation shell surrounding the polar head group; this second peak occurs at the same distance in both functions. The sodium cations are located between the two solvation shells, but remain closer to the inner one.

### 5.2 EMAC micelles

As in the case of $`\mathrm{C}_9\mathrm{TAC}`$ micelles, various starting configurations were used to investigate the dynamical behaviour of EMAC micelles. In the absence of electrolyte or co-surfactant, this molecule is known to form spherical micelles. Starting either from a spherical or from a cylindrical micelle, the system was found to keep its initial shape over a few nanoseconds of molecular dynamics simulation. In the case of the wormlike micelles, although some evidence of incipient fragmentation was detected, the simulations were not performed over a long enough time to confirm this. It was also found that cross-sections of the worm were not circular, but rather elliptical, as in the case of ellipsoidal (initially spherical) sodium octanoate micelles. The dynamics of the EMAC aggregation and fragmentation processes is found to be slower than that of $`\mathrm{C}_9\mathrm{TAC}`$ self-assembly. This might be interpreted in terms of the length of the hydrophobic tail of the surfactant: from steric and energetic considerations, a long hydrophobic tail in a liquid is more difficult to displace than a short one, and its diffusion coefficient is smaller.
Nevertheless, we can see some differences between the spherical and the cylindrical micelles of EMAC molecules in comparison with $`\mathrm{C}_9\mathrm{TAC}`$. Figure 16 displays the radial distribution function between the hydrogen atoms of the hydroxyl group of the EMAC surfactant cation and the chloride anions. The properties of this function for the spherical micelle and for the initially random distribution of surfactant cations are similar. There is a close contact between these two atoms at about $`2.5`$ $`\mathrm{\AA }`$, followed by an exclusion domain. The close proximity is indicative of a strong association between these atoms, while the exclusion domain is due to the electrostatic repulsion between chloride anions. Beyond the exclusion domain, the probability of finding a chloride ion reaches a value corresponding to the bulk concentration of electrolyte. In the case of the wormlike micelle, the first peak is higher than those for the other shapes, and a second peak is clearly present. The head groups of EMAC in a wormlike micelle are thus seen to interact more strongly with the counterions than in the case of a spherical micelle. Figure 17 displays the radial distribution function between the terminal methyl group and the oxygen atoms of water. We can see that in all three cases there is a first peak at about $`4`$ $`\mathrm{\AA }`$. In the case of the cylindrical micelle, it will be recalled (see figure 8) that there is no water penetration inside the micelle. The short C-O contact distance seen in the radial distribution function displayed in figure 17 thus arises because the terminal methyl group is not located at the centre of the micelle, but rather is in direct contact with the surrounding water. Both the location of the first peak and the probability of finding a short contact between the terminal methyl group and the water molecules increase when going from the cylindrical micelle to the spherical micelle, and again for the initially random configuration of EMAC monomers. These trends can be interpreted in terms of the effective surface area per head group, and the compactness of the resulting structure. In micelles with a cylindrical geometry, the head groups are closer to each other than they are in a spherical micelle, so that the compactness of the former structure is greater, leading to reduced water penetration. Moreover, in the case of the initially random monomer distribution, the two resulting micellar clusters contain half as many molecules as the spherical micelle studied here, while their radii (see table 4) are roughly identical to those of the spherical cluster. The structure of the two small clusters is thus less compact, leading to a larger probability of close contacts between the hydrophobic chain and the water molecules. Although there are some differences concerning the water penetration process in $`\mathrm{C}_9\mathrm{TAC}`$ and EMAC micelles, in all cases we have observed that the core of the micelle is completely impervious to water. In the case of the cylindrical micelle with added electrolyte and salicylate co-surfactant, the radial distribution function between the hydrogen atoms of the hydroxyl group of the surfactant molecule and the oxygen atoms of the co-surfactant molecule is displayed in figure 18. We can see that there is a first peak at about $`2`$ $`\mathrm{\AA }`$, providing evidence of hydrogen bonding between these two species.
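All of the pair correlations shown in figures 14–18 rest on the same estimator. A minimal sketch for a cubic periodic box follows; the range cutoff and binning are our own illustrative choices:

```python
import numpy as np

def rdf(pos_a, pos_b, box, r_max=15.0, n_bins=150):
    """g(r) between two sets of atoms (e.g. N and Cl) in a cubic periodic
    box of side `box`; pos_a, pos_b are (N, 3) coordinate arrays."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for a in pos_a:
        d = pos_b - a
        d -= box * np.round(d / box)             # minimum image
        r = np.sqrt((d * d).sum(axis=1))
        hist += np.histogram(r[r > 1e-9], bins=edges)[0]
    shells = 4.0 * np.pi / 3.0 * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho_b = len(pos_b) / box ** 3                # bulk number density of b
    g = hist / (len(pos_a) * rho_b * shells)
    return 0.5 * (edges[1:] + edges[:-1]), g

# e.g.: r, g_NCl = rdf(nitrogen_xyz, chloride_xyz, box=70.0)
```

Normalising each shell by its ideal-gas population makes $`g(r)\rightarrow 1`$ at large separations, which is the bulk-concentration plateau referred to above.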
As was also noticed earlier (see figure 10), a large number of co-surfactant anions are integrated into the surface of the micelle. These anions do not penetrate the core of the micelle but rather remain adsorbed on its surface, through a strong interaction with the surfactant head groups. This interaction decreases the effective surface area per head group, and enhances the formation of rod-like micelles, precisely as proposed in simple geometrical theories. The existence of hydrogen bonds between the head groups of the surfactant and the solvent is evidently not a necessary condition for the formation of micelles. Nevertheless, in this particular system, it seems to play a key role in the attachment of co-surfactant molecules to the micelle. We have also observed hydrogen bonding between the hydrogen atoms of the hydroxyl group of the EMAC surfactant and the oxygen atoms of water molecules. However, there is no evidence of hydrogen bonding between the oxygen atom of the hydroxyl group of the EMAC surfactant and the hydrogen atoms of water molecules, or between the hydroxyl groups of different surfactant head groups. No interaction between salicylate molecules has been seen.

## 6 Conclusions

Large-scale molecular dynamics simulations have been performed to investigate the structural and dynamical properties of self-assembled cationic surfactants in aqueous solution. One of the surfactants was comparatively short-chained ($`\mathrm{C}_9\mathrm{TAC}`$), the other long-chained (EMAC). The nanosecond regime has been reached, allowing the study of the dynamics of various self-assembly processes (growth and fragmentation of micelles, surfactant monomer insertion or removal). The mechanism of micelle formation at the molecular level has been described. We have interpreted the kinetics of these dynamical processes separately for systems far from and near equilibrium. In the former case, a Smoluchowski-type scheme is obeyed, according to which micelles coalesce or fragment. In the latter case, a Becker-Döring scheme is observed, in which only step-by-step monomer exchanges take place. It was also found that, for an oversized micellar cluster, the step-by-step elimination of surfactant monomers is associated with a slow expansion-contraction process of the micelle, with a characteristic time period of $`\sim 500`$ ps. On the other hand, a characteristic time period of $`\sim 50`$ ps for shape fluctuations was found in the case of a spherical $`\mathrm{C}_9\mathrm{TAC}`$ micelle with a size close to the mean cluster size. The dynamics of the EMAC molecule is slower, and the equilibrium state was not reached from any of the starting configurations. This difference in the time scales of the dynamics between the $`\mathrm{C}_9\mathrm{TAC}`$ and the EMAC molecules is attributed mainly to the size difference between the two cationic surfactants. The effect of co-surfactant has been investigated, and hydrogen bonding with the head groups of the surfactant molecule has been found to play an important, and probably key, role in stabilizing the wormlike assembly of EMAC cations. Finally, we have found that the penetration of water molecules inside micelles is not significant in any instance examined, at least over time scales of up to a few nanoseconds.

## 7 Acknowledgements

This work has been done in collaboration with Silicon Graphics Inc., who have provided access to a number of large parallel machines.
Daron Green and John Carpenter are gratefully acknowledged for their generous technical assistance, as are Trevor Hughes and Edo Boek for fruitful discussions. Mike Stapleton, Andreas Bick and Richard Painter of Molecular Simulations Inc. are also thanked for their support of this work.
# Escape of Ionizing Radiation from High-Redshift Galaxies

## 1 Introduction

Recently, there has been considerable theoretical interest in calculating the reionization history of the intergalactic medium (e.g., Haiman & Loeb 1997; Abel, Norman, & Madau 1999; Madau, Haardt, & Rees 1999; Miralda-Escudé, Haehnelt, & Rees 1999; Gnedin 1999). The intergalactic ionizing radiation field is an essential ingredient in these calculations and is determined by the amount of ionizing radiation escaping from the host galaxies of stars and quasars. The value of the escape fraction as a function of redshift and galaxy mass remains a major uncertainty in all current studies, and could affect the cumulative radiation intensity by orders of magnitude at any given redshift. In general, the gas density increases towards the location of galaxies in the intergalactic medium (IGM), and so the transfer of the ionizing radiation must be followed through the densest regions on galactic length scales. Reionization simulations are limited in dynamical range and small-scale resolution, and often treat the sources of ionizing radiation (quasars and galaxies) as unresolved point sources within the large-scale intergalactic medium (see, e.g., simulations by Gnedin 1999). In this paper we calculate the escape of ionizing photons from disk galaxies as a function of formation redshift and mass, thereby providing a means of estimating the ionizing luminosity input for simulations of reionization. The escape of ionizing radiation ($`h\nu >13.6`$ eV, $`\lambda <912`$ Å) from the disks of present-day galaxies has been studied in recent years in the context of explaining the extensive diffuse ionized layers observed above the disk in the Milky Way (Reynolds et al. 1995) and other galaxies (e.g., Rand 1996; Hoopes, Walterbos, & Rand 1999). Theoretical models predict that of order 3–14% of the ionizing luminosity from O and B stars escapes the Milky Way disk (Dove & Shull 1994; Dove, Shull, & Ferrara 1999). A similar escape fraction of $`f_{\mathrm{esc}}=6`$% was determined by Bland-Hawthorn & Maloney (1998) based on H$`\alpha `$ measurements of the Magellanic Stream. From Hopkins Ultraviolet Telescope observations of four nearby starburst galaxies (Leitherer et al. 1995; Hurwitz, Jelinsky, & Dixon 1997), the escape fraction was estimated to be in the range 3%$`<f_{\mathrm{esc}}<57`$%. If similar escape fractions characterize high-redshift galaxies, then stars could have provided a major fraction of the background radiation that reionized the IGM (Madau & Shull 1996; Madau 1999). However, the escape fraction from high-redshift galaxies, which formed when the universe was much denser ($`\rho \propto (1+z)^3`$), may be significantly lower than that predicted by models that are adequate for present-day galaxies. In popular Cold Dark Matter (CDM) models, the first stars and quasars formed at redshifts $`z\gtrsim 10`$ (see, e.g., Haiman & Loeb 1997, 1998; Gnedin 1999). These sources are expected to have formed in galaxies where the gas has cooled significantly below its virial temperature and has assembled into a rotationally-supported disk configuration (Barkana & Loeb 1999a,b). The ionizing radiation leaving their interstellar environments, and ultimately their host galaxies, streamed into intergalactic space, creating localized ionized regions. Through time these ionized regions, or “Strömgren volumes,” expanded and eventually overlapped at the epoch of reionization (Arons & Wingert 1972).
The redshift of reionization is still unknown, but the detection of flux shortward of the Ly$`\alpha `$ resonance for galaxies out to redshifts $`z\sim 5`$–6 (Stern et al. 2000; Weymann et al. 1998; Dey et al. 1998; Spinrad et al. 1998; Hu et al. 1998, 1999) implies that reionization occurred at even higher redshifts (Gunn & Peterson 1965). Current reionization models assume that galaxies are isotropic point sources of ionizing radiation and adopt escape fractions in the range $`5\%<f_{\mathrm{esc}}<60\%`$ (see, e.g., Gnedin 1999; Miralda-Escudé et al. 1999). In this paper, we examine the validity of these assumptions by following the radiation transfer of ionizing photons in the gaseous disks of high-redshift galaxies. We consider either stars or a central quasar as the sources of ionizing photons. The mass and radial extent of the gaseous galactic disks, in which these sources are embedded, are functions of redshift and can be related to the mass and radius of their host dark matter halos (Navarro, Frenk, & White 1997; Mo, Mao, & White 1998). The vertical structure of the disk is dictated by its self-gravity and will be assumed to follow the isothermal profile (Spitzer 1942) with a thermal (or turbulent) speed of $`10\mathrm{km}\mathrm{s}^{-1}`$ for all galaxies. This corresponds to the characteristic thermal speed of photoionized gas and also to an effective gas temperature of $`10^4\mathrm{K}`$, below which atomic cooling is suppressed (Binney & Tremaine 1987). We relate the ionizing luminosity emitted by the stars or quasars to the mass of the host dark matter halos (Haiman & Loeb 1997, 1998), adopting current estimates for Lyman continuum production in starburst galaxies (Leitherer et al. 1999) and quasars (Laor & Draine 1993). The escape fractions for disk galaxies as a function of mass and redshift are then calculated with a Monte Carlo radiation transfer code, similar to that of Och, Lucy, & Rosa (1998). Our numerical code finds the ionized fraction of the gas and follows the associated radiation transfer of the ionizing photons in a steady state. Strictly speaking, it provides an upper limit on the degree of ionization and the corresponding escape fraction for a given source luminosity. However, the characteristic propagation time of the ionization front through the scale-height of the galactic disks of interest here ($`\sim 10^6\mathrm{yr}`$) is much shorter than the expected decay time of the starburst or quasar activity which produces the ionizing photons. For the sake of simplicity, we begin our study of the problem with the assumption that the gas is distributed smoothly within the disk. Clumping is known to have a significant effect on the penetration and escape of radiation from an inhomogeneous medium (e.g., Boissé 1990; Witt & Gordon 1996, 2000; Neufeld 1991; Haiman & Spaans 1999; Bianchi et al. 2000). The inclusion of clumpiness introduces several unknown free parameters into the calculation, such as the number and density contrast of the clumps, and the spatial correlation between the clumps and the ionizing sources. An additional complication might arise from hydrodynamic feedback, whereby part of the gas mass might be expelled from the disk by stellar winds and supernovae (Couchman & Rees 1986; Dekel & Silk 1986). We adopt a simple approach to gauge the significance of clumpiness in § 4 by modeling the galactic disk density as a two-phase medium (Witt & Gordon 1996, 2000) and calculating the escape fractions from such clumpy disks.
We also simulate the effects of outflows and the possible expulsion of gas from the disk by reducing the mass of the smoothly distributed gas in the disk by an order of magnitude. In § 2 we present the details of our model, including the disk geometry, the gas distribution, the source luminosities, and the radiation transfer code. § 3 presents the results of our models in terms of the derived escape fractions. Our investigation of clumpy disks is presented in § 4. We conclude with a discussion of our findings in § 5.

## 2 Model Ingredients

In order to determine the escape of ionizing photons we need to specify the geometry and density of the disk galaxies as a function of their formation redshift, as well as the location and luminosity of the ionizing sources within the disks.

### 2.1 Geometry

We adopt the theoretical properties of disks forming within cold dark matter halos (Mo et al. 1998; Navarro et al. 1997). A dark matter halo of mass $`M_{\mathrm{HALO}}`$ which forms at a redshift $`z_\mathrm{f}`$ is characterized by a virial radius (Navarro et al. 1997),

$$r_{\mathrm{vir}}=0.76\left(\frac{M_{\mathrm{HALO}}}{10^8h^{-1}M_{\odot }}\right)^{1/3}\left(\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z_\mathrm{f})}\frac{\mathrm{\Delta }_c}{200}\right)^{-1/3}\left(\frac{1+z_\mathrm{f}}{10}\right)^{-1}h^{-1}\mathrm{kpc},$$ (1)

where $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> is the Hubble constant, $`\mathrm{\Omega }_0`$ is the present mean density of matter in the universe in units of the critical density ($`\rho _{\mathrm{crit}}=3H_0^2/8\pi G`$), and

$$\mathrm{\Omega }(z_\mathrm{f})=\frac{\mathrm{\Omega }_0(1+z_\mathrm{f})^3}{\mathrm{\Omega }_0(1+z_\mathrm{f})^3+\mathrm{\Omega }_\mathrm{\Lambda }+(1-\mathrm{\Omega }_0-\mathrm{\Omega }_\mathrm{\Lambda })(1+z_\mathrm{f})^2}.$$ (2)

$`\mathrm{\Delta }_c`$ is the threshold overdensity of the virialized dark matter halo, which can be fitted by (Bryan & Norman 1998)

$$\mathrm{\Delta }_c=18\pi ^2+82d-39d^2,$$ (3)

for a flat universe with a cosmological constant, where $`d=\mathrm{\Omega }(z_\mathrm{f})-1`$. We assume that the disk mass (stars plus gas) is a fraction $`m_d`$ of the halo mass,

$$M_{\mathrm{DISK}}=m_dM_{\mathrm{HALO}},$$ (4)

where $`m_d=\mathrm{\Omega }_b/\mathrm{\Omega }_0`$, and $`\mathrm{\Omega }_b`$ is the present baryonic density parameter. We adopt $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_b=0.05`$, giving $`m_d=0.17`$. At the high redshifts of interest, most of the virialized galactic gas is expected to cool rapidly and assemble into the disk. Mo et al. (1998) suggest values in the range $`0.05<m_d<1`$. In our simulations in § 3.1 we also consider $`m_d=0.02`$, which is an order of magnitude lower than our canonical value of $`m_d=0.17`$. The exponential scale-radius of the disk is given by (Mo et al. 1998)

$$R=\left(\frac{j_d}{\sqrt{2}m_d}\right)\lambda r_{\mathrm{vir}},$$ (5)

where the disk angular momentum is a fraction $`j_d`$ of that of the halo, and the spin parameter $`\lambda `$ is defined in terms of the total energy, $`E_{\mathrm{HALO}}`$, and angular momentum, $`J_{\mathrm{HALO}}`$, of the halo, $`\lambda =J_{\mathrm{HALO}}|E_{\mathrm{HALO}}|^{1/2}G^{-1}M_{\mathrm{HALO}}^{-5/2}`$. For our calculation of the escape of ionizing radiation we adopt the values $`j_d/m_d=1`$ and $`\lambda =0.05`$, yielding $`R=0.035r_{\mathrm{vir}}`$. These values provide a good fit to the observed size distribution of galactic disks, given the characteristic value of the spin parameter found in numerical simulations of halo formation (Mo et al.
1998, and references therein). The vertical structure of thin galactic disks is dictated by their self-gravity. For simplicity, we assume that the disk is isothermal and that its surface density varies exponentially with radius. The number density of protons in the disk is then given by (Spitzer 1942)

$$n(r,z)=n_0\mathrm{e}^{-r/R}\mathrm{sech}^2\left(\frac{z}{\sqrt{2}z_0}\right),$$ (6)

where

$$z_0=\frac{c_s}{(4\pi G\mu m_\mathrm{H}n_0\mathrm{e}^{-r/R})^{1/2}}$$ (7)

is the scale height of the disk at radius $`r`$, $`\mu `$ is the atomic weight of the gas and $`m_\mathrm{H}`$ is the mass of a hydrogen atom. Here $`c_s=\sqrt{kT/\mu m_\mathrm{H}}`$ is the sound speed (or the effective turbulence speed), which dictates the scale-height of the disk. We take $`c_s=10`$ km s<sup>-1</sup>, which corresponds to a gas temperature of $`10^4`$ K. This temperature is typical of photoionized gas, and should characterize the galactic disks of interest here since the atomic cooling rate decreases strongly at lower temperatures. The combination of isothermality and the exponential radial profile results in a disk scale-height that increases with radius \[see Eq. (7)\]. The galactic center number density, $`n_0`$, can be related to the total mass of the disk, which is obtained by integrating the density over the entire disk volume:

$$n_0=\frac{M_{\mathrm{DISK}}}{\mu m_\mathrm{H}\int (n/n_0)2\pi rdrdz}=\frac{GM_{\mathrm{DISK}}^2}{128\pi \mu m_\mathrm{H}c_s^2R^4}.$$ (8)

In Figure 1 we show the formation-redshift dependence of the number density $`n_0`$, the scale-radius $`R`$, and the ratio $`z_0(r=R)/R`$ for disks of different masses. To compare with the Milky Way, we examine the results for $`M_{\mathrm{halo}}=10^{12}M_{\odot }`$ and $`z_\mathrm{f}=1`$. This gives $`n_0\sim 100`$ cm<sup>-3</sup>, a scale length $`R\sim 5`$ kpc, and $`z_0(r=R)/R\sim 0.002`$, yielding a disk that is denser and thinner than that of the Milky Way. The discrepancy results from our choice of $`m_d=0.17`$, whereas for the Milky Way the disk mass today is a reduced fraction $`m_d\sim 0.05`$ of the halo mass, possibly due to the effects of supernova-driven outflows or inefficient cooling of the cosmic gas (the latter phenomenon could be important at $`z\lesssim 2`$, when the IGM becomes rarefied and hot, but is likely to be irrelevant for galaxies which form at higher redshifts out of the much denser and cooler IGM). A value of $`m_d=0.05`$ indeed yields $`n(r=2R)\sim 1`$ cm<sup>-3</sup> and $`z_0(r=2R)/R\sim 0.04`$, more appropriate for the mean baryonic density in the Solar neighborhood. We emphasize that fragmentation of the gas into stars or quasar black holes is only possible as a result of substantial cooling of the gas well below its initial virial temperature. Since molecular hydrogen is likely to be photo-dissociated in the low-metallicity gas of primeval galaxies (Haiman, Rees, & Loeb 1997; Omukai & Nishi 1999), only atomic cooling is effective, and so stars and quasars are likely to form only in halos with a virial temperature $`\gtrsim 10^4`$ K. We therefore restrict our attention to such halos. Inside such halos, thin disks can exist for our assumed gas temperature of $`10^4\mathrm{K}`$. We restrict our simulations to halos in the range $`10^9M_{\odot }`$ to $`10^{12}M_{\odot }`$.
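For orientation, the structural relations in Eqs. (1)-(8) are easily evaluated numerically. The sketch below does so in cgs units; the mean atomic weight $`\mu `$ is not quoted above, so its value of 1.22 (neutral primordial gas) is our assumption.

```python
import numpy as np

# cgs constants and unit conversions
G, M_H = 6.674e-8, 1.673e-24        # gravity; hydrogen mass [g]
MSUN, KPC = 1.989e33, 3.086e21      # solar mass [g]; kpc [cm]

h, OMEGA0, OMEGAL = 0.7, 0.3, 0.7   # cosmology adopted in the text
MU, C_S = 1.22, 1.0e6               # atomic weight (assumed); 10 km/s [cm/s]

def omega_z(zf):                    # Eq. (2)
    x = OMEGA0 * (1.0 + zf) ** 3
    return x / (x + OMEGAL + (1.0 - OMEGA0 - OMEGAL) * (1.0 + zf) ** 2)

def delta_c(zf):                    # Eq. (3), Bryan & Norman (1998)
    d = omega_z(zf) - 1.0
    return 18.0 * np.pi ** 2 + 82.0 * d - 39.0 * d ** 2

def r_vir(m_halo, zf):              # Eq. (1); m_halo in Msun, result in kpc
    return (0.76 * (m_halo / (1e8 / h)) ** (1.0 / 3.0)
            * (OMEGA0 / omega_z(zf) * delta_c(zf) / 200.0) ** (-1.0 / 3.0)
            * ((1.0 + zf) / 10.0) ** -1 / h)

def disk(m_halo, zf, m_d=0.17):
    """Scale radius R [kpc] (Eq. 5 with j_d/m_d = 1, lambda = 0.05),
    central density n0 [cm^-3] (Eq. 8), scale height z0(r=0) [kpc] (Eq. 7)."""
    R = 0.035 * r_vir(m_halo, zf) * KPC
    m_disk = m_d * m_halo * MSUN
    n0 = G * m_disk ** 2 / (128.0 * np.pi * MU * M_H * C_S ** 2 * R ** 4)
    z0 = C_S / np.sqrt(4.0 * np.pi * G * MU * M_H * n0)
    return R / KPC, n0, z0 / KPC

R, n0, z0 = disk(1e12, 1.0)
print(f"R = {R:.1f} kpc,  n0 = {n0:.0f} cm^-3,  z0(0)/R = {z0 / R:.4f}")
```

For the Milky Way comparison case above ($`M_{\mathrm{halo}}=10^{12}M_{\odot }`$, $`z_\mathrm{f}=1`$) this reproduces the order-of-magnitude values quoted in the text, $`R\sim 5`$ kpc and $`n_0\sim 100`$ cm<sup>-3</sup>.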
The gas in lower mass halos is either unable to cool and form stars (due to the ease with which the only available coolant, $`\mathrm{H}_2`$, is photo-dissociated), or is boiled out of the shallow gravitational potential wells of the host halos by the photo-ionization heating of hydrogen to $`\sim 10^4`$ K (above the virial temperature of these halos) as soon as a small number of ionizing sources form (Omukai & Nishi 1999; Nishi & Susa 1999; Barkana & Loeb 1999a). If low-mass halos lose their gas quickly, then their contribution to the ionizing background can be evaluated trivially from their assumed star formation efficiency, with no need for a detailed radiative transfer calculation. The most popular cosmology at present involves a $`\mathrm{\Lambda }`$CDM power spectrum of density fluctuations with $`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`\sigma _8=0.9`$, $`h=0.7`$, and $`n=1`$ (a scale-invariant spectrum). For this cosmology we find that halos with a dark matter mass of $`10^9M_{\odot }`$ are $`1.4\sigma `$ fluctuations at $`z=5`$ and $`2.5\sigma `$ fluctuations at $`z=10`$. Halos of mass $`10^{12}M_{\odot }`$ are $`2.9\sigma `$ fluctuations at $`z=5`$ and $`5.3\sigma `$ fluctuations at $`z=10`$.

### 2.2 Illumination

We consider two separate cases for the sources of ionizing radiation within the galactic disks: stars and quasars. Below we describe the luminosity and the spatial location of the sources in these two cases.

#### 2.2.1 Stars

In Monte Carlo simulations of the transfer of starlight through galaxies, the stellar sources are often represented by a smooth spatial distribution (e.g., Wood & Jones 1997; Ferrara et al. 1999, 1996; Bianchi, Ferrara, & Giovanardi 1996) rather than by individual point sources (but see recent work by Cole, Wood, & Nordsieck 1999; Wood & Reynolds 1999). In our simulations, we consider two smooth distributions for the stellar emissivity: $`j_{\ast }\propto n(r,z)`$ and $`j_{\ast }\propto n(r,z)^2`$. In the first case, we are assuming that the star formation efficiency is independent of density, while in the second case the stars are assumed to form preferentially in denser gaseous regions. Note that the second prescription reproduces the observed Schmidt law for the star formation rate of spiral galaxies as a function of the disk surface density (Kennicutt 1998). We assume a sudden burst of (metal-poor) star formation with a Scalo (1986) stellar mass function, and adopt a corresponding ionizing luminosity of $`Q_{\ast }(H^0)=10^{46}M_{\ast }/M_{\odot }`$ s<sup>-1</sup>, where $`M_{\ast }`$ is the stellar mass of the galaxy (Haiman & Loeb 1997). This luminosity is consistent with detailed models of starburst galaxies (e.g., Leitherer et al. 1999). The stellar mass is defined by the fraction of gas which gets converted into stars, $`M_{\ast }=f_{\ast }M_{\mathrm{DISK}}`$. We consider two cases (independent of formation redshift), namely $`f_{\ast }=0.04`$ and $`f_{\ast }=0.4`$, with the gaseous disk being reduced in mass by the factor $`1-f_{\ast }`$. The value $`f_{\ast }=0.04`$ was used by Haiman & Loeb (1997) so as to reproduce the observed metallicity of $`0.01Z_{\odot }`$ in the IGM at $`z\sim 3`$ (Tytler et al. 1995; Songaila & Cowie 1996). We also consider the higher value of $`f_{\ast }=0.4`$, in case $`90\%`$ of all the metals are retained within their host galaxies and only $`10\%`$ are mixed into the IGM. Since the actual IGM metallicity at $`z\sim 3`$ might be in the lower range of $`0.1`$–1%$`Z_{\odot }`$ (Songaila 1997), our assumed values for $`f_{\ast }`$ and the corresponding ionizing fluxes might be considered high.
The corresponding ionization fraction of the disk is then overestimated, and our derived escape fractions should therefore be regarded as upper limits. In summary, the total ionizing luminosity for stellar sources in our simulations is related to the halo mass via

$$Q_{\ast }(H^0)=10^{46}m_df_{\ast }\frac{M_{\mathrm{HALO}}}{M_{\odot }}\mathrm{s}^{-1}.$$ (9)

Our code assumes a constant value for this ionizing luminosity and solves for the ionization structure of the disk and the consequent escape fraction in a steady state. Since the characteristic time over which a starburst would possess the ionizing luminosity in equation (9) is $`\sim 3\times 10^6\mathrm{yr}`$ for a Scalo (1986) stellar mass function (see Fig. 4 in Haiman & Loeb 1997), the star formation rate required in order to maintain a steady ionizing luminosity of this magnitude is

$$\dot{M}_{\ast }\approx \frac{f_{\ast }M_{\mathrm{DISK}}}{3\times 10^6\mathrm{yr}}=23\frac{M_{\odot }}{\mathrm{yr}}\left(\frac{f_{\ast }}{0.04}\right)\left(\frac{M_{\mathrm{HALO}}}{10^{10}M_{\odot }}\right).$$ (10)

This is a rather high star formation rate, as it implies that for $`f_{\ast }\sim 4\%`$ the entire disk mass will be converted into stars in less than a Hubble time at $`z\sim 10`$. The assumed star formation rates are unreasonably high for $`f_{\ast }=40\%`$ and large halo masses (e.g., $`\dot{M}_{\ast }\sim 2\times 10^4M_{\odot }\mathrm{yr}^{-1}`$ for $`M_{\mathrm{HALO}}\sim 10^{12}M_{\odot }`$). For this reason, one may regard our calculated escape fractions as upper limits. In order to perform the photoionization calculation with our Monte Carlo code we need to specify the flux-weighted mean of the opacity of neutral hydrogen for a given ionizing spectrum (see § 2.3 and the Appendix). We adopt a flux-mean cross-section of $`\overline{\sigma }=3.15\times 10^{-18}`$ cm<sup>2</sup>, which is appropriate for the composite Scalo IMF emissivity spectrum presented in Fig. 3 of Haiman & Loeb (1997).

#### 2.2.2 Quasars

In the case of a quasar we assume that the ionizing radiation emanates from a point source at the center of the disk. Haiman & Loeb (1998, 1999) have demonstrated that the observed B-band and X-ray luminosity functions of high-redshift quasars at $`z\gtrsim 2.2`$ can be fitted by a $`\mathrm{\Lambda }`$CDM model in which each dark matter halo harbors a black hole with a mass

$$M_{\mathrm{BH}}=m_{bh}M_{\mathrm{HALO}},$$ (11)

which shines at the Eddington luminosity, $`L_{\mathrm{Edd}}=1.4\times 10^{38}(M_{\mathrm{BH}}/M_{\odot })`$ ergs s<sup>-1</sup>, for a period of $`\sim 10^6`$ years when the halo forms. The required value, $`m_{bh}\sim 6\times 10^{-4}`$, is consistent with the inferred black hole masses in local galaxies (Magorrian 1998). We adopt this prescription in deriving the quasar luminosity within a halo of a given mass. As there is a large scatter around the mean in the observed distribution of the black hole to bulge mass ratio, we also consider lower values in the range $`6\times 10^{-7}<m_{bh}<6\times 10^{-4}`$ for a $`10^{10}M_{\odot }`$ halo (see § 3.2). The ionizing component of quasar spectra is not well determined empirically; we use the calibration of Laor & Draine (1993) and relate the ionizing luminosity to the total bolometric luminosity by $`Q_{\mathrm{QSO}}(H^0)=6.6\times 10^9(L_{\mathrm{Edd}}/\mathrm{erg}\mathrm{s}^{-1})`$ s<sup>-1</sup>.
Hence, the ionizing luminosity of a quasar in our models is related to its host halo mass through

$$Q_{\mathrm{QSO}}(H^0)=9.24\times 10^{47}m_{bh}\frac{M_{\mathrm{HALO}}}{M_{\odot }}\mathrm{s}^{-1}.$$ (12)

We consistently adopt a flux-weighted mean of the opacity for the characteristic quasar spectrum of $`\overline{\sigma }=2.05\times 10^{-18}`$ cm<sup>2</sup> (Laor & Draine 1993).

### 2.3 Radiation Transfer

In our simulations we employ a three-dimensional Monte Carlo radiation transfer code (Wood & Reynolds 1999) that has been modified to include photoionization. Our photoionization code is similar to that of Och et al. (1998), but somewhat simplified in that we only treat the photoionization of hydrogen at a constant temperature. However, as our simulations are run on a three-dimensional grid, we are able to investigate complex geometries which are relevant to the present study. We track the propagation of ionizing photons from the source as they are absorbed and re-emitted as diffuse ionizing photons or as non-ionizing photons (in which case they are subsequently ignored). In each computational cell the contributions of each ionizing photon to the mean intensity are tallied, so that we may determine the ionization fraction throughout our grid. We iterate on this procedure until the ionization fractions converge. We keep track of the number of ionizing photons that exit our grid, either directly from the source or after multiple absorptions and re-emissions as diffuse ionizing photons, thus determining the escape fraction of ionizing radiation; escaping ionizing photons are therefore either direct photons from the source or diffuse photons. As with most models of the escape of ionizing photons, we have not considered the detailed effects of the ionizing spectrum and work with a total ionizing luminosity (e.g., Miller & Cox 1993; Dove & Shull 1994; Dove et al. 1999; Razoumov & Scott 1999). Where our work differs from “Strömgren volume” analyses is that we calculate the ionization fraction throughout our grid and also track the diffuse (absorbed and re-emitted) ionizing photons. A more detailed description of our code and its comparison with other photoionization calculations are presented in the Appendix. Our code is time-independent and does not consider any time evolution of the ionizing spectrum in the calculation of the escape fraction. Stellar evolution models imply that the ionizing luminosity remains fairly constant for $`\sim 10^7`$ years (a typical main-sequence lifetime of an O star) and decreases subsequently (Haiman & Loeb 1997; Leitherer et al. 1999). In recent modeling of the Milky Way, Dove et al. (1999) showed that including the time dependence (and also the effects of dynamical supershells) results in a decrease of the escape fractions compared to their static calculations (Dove & Shull 1994). We will therefore tend to overestimate the escape fraction, since we ignore the time decay of the ionizing luminosity, and our results provide upper limits for the escape of stellar ionizing radiation from smooth galactic disks. For quasars, Haiman & Loeb (1998) found that the observed quasar luminosity functions are best fit with a quasar lifetime of $`\sim 10^6`$ yr. As in the stellar case, our static quasar models yield upper limits on the escape fraction.
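The iterative scheme just described can be illustrated with a deliberately stripped-down, plane-parallel analogue: photon packets are propagated through a grid, absorptions are tallied, a fraction $`1-\alpha _B/\alpha _A`$ of them are re-emitted as diffuse ionizing packets, and the cell ionization fractions are updated until they converge. Everything below (slab geometry, uniform density, the source flux, cell size, and the recombination coefficients at $`10^4`$ K) is an illustrative assumption; the production code is fully three-dimensional. The source normalizations of Eqs. (9) and (12) are included at the top for reference.

```python
import numpy as np

def q_star(m_halo, m_d=0.17, f_star=0.04):   # Eq. (9)  [photons/s]
    return 1e46 * m_d * f_star * m_halo      # m_halo in solar masses

def q_qso(m_halo, m_bh=6e-4):                # Eq. (12) [photons/s]
    return 9.24e47 * m_bh * m_halo

# --- toy plane-parallel Monte Carlo photoionization (illustration only) ---
N_CELL, DZ = 50, 3.086e17         # 50 cells of 0.1 pc (assumed)
N_H = 1.0                         # uniform hydrogen density [cm^-3] (assumed)
SIGMA = 3.15e-18                  # flux-weighted HI cross-section [cm^2]
A_A, A_B = 4.2e-13, 2.6e-13       # case A/B recombination at 1e4 K [cm^3/s]
P_DIFF = 1.0 - A_B / A_A          # recombinations re-emitting ionizing photons
FLUX, N_PKT = 1.0e7, 4000         # midplane flux [s^-1 cm^-2] (assumed)

rng = np.random.default_rng(1)
x = np.zeros(N_CELL)              # ionized fraction per cell
for _ in range(40):               # iterate to a steady ionization state
    absorbed = np.zeros(N_CELL)
    n_esc = 0
    for _ in range(N_PKT):
        cell, mu = 0, 1.0 - rng.random()      # launched upward, uniform in mu
        tau = -np.log(1.0 - rng.random())     # optical depth to next event
        while True:
            dtau = N_H * (1.0 - x[cell]) * SIGMA * DZ / abs(mu)
            if dtau < tau:                    # crosses this cell freely
                tau -= dtau
                cell += 1 if mu > 0 else -1
                if cell >= N_CELL:
                    n_esc += 1
                    break
                if cell < 0:                  # midplane symmetry: reflect
                    cell, mu = 0, abs(mu)
            else:                             # photoionization event
                absorbed[cell] += 1.0
                if rng.random() >= P_DIFF:
                    break                     # degraded to non-ionizing
                mu = 2.0 * rng.random() - 1.0 # isotropic diffuse re-emission
                tau = -np.log(1.0 - rng.random())
    # balance: photoionizations = case-A recombinations in each cell
    rate = absorbed * (FLUX / N_PKT) / DZ     # [cm^-3 s^-1]
    x = np.minimum(np.sqrt(rate / (A_A * N_H ** 2)), 1.0 - 1e-6)

print(f"escape fraction ~ {n_esc / N_PKT:.2f}")
```

With the flux chosen here the ionization front breaks out of the slab, so the converged escape fraction is a sizeable fraction of unity; lowering the flux traps the front inside the grid and the escape fraction collapses, mirroring the mass and redshift trends discussed below.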
However, since the maximum ionizing luminosity of quasars is higher by $`\sim 2`$ orders of magnitude than that of stars and involves harder photons, and since it all originates from a single point, the propagation of the ionization front is expected to be much faster for quasars, and so the steady-state assumption should, in fact, be more adequate in their case. For both stars and quasars, we at first ignore the effects of energetic outflows on the galactic disk. Outflows could clear away material and raise the calculated escape fractions. We will incorporate this dilution effect by artificially reducing the absorbing gas mass in the disk in some of the numerical runs.

## 3 Results

### 3.1 Stars

We calculated the escape fraction of stellar ionizing radiation by simulating the disk out to a radius of $`3R(z_\mathrm{f})`$, using a smooth emissivity profile within it. As the disk density increases with redshift, the recombination rate increases, and the ionizing luminosity of the galaxy results in a lower ionization fraction, leading to a decrease in the escape fraction with increasing formation redshift. Figure 2 shows the escape fraction as a function of formation redshift for various halo masses, assuming $`f_{\ast }=4\%`$ and emissivities $`\propto n^2`$ (Fig. 2a) and $`\propto n`$ (Fig. 2b). In these models the largest escape fractions occur for low-mass disks, but the escape fractions are $`f_{\mathrm{esc}}<0.1`$% at $`z_\mathrm{f}>5`$. When the emissivity scales as $`n^2`$, the sources are embedded more deeply within the disk, leaving a larger column of hydrogen to be ionized and thus leading to smaller escape fractions than in the case where the emissivity is $`\propto n`$. The efficiency of converting baryons into stars is a free parameter, $`f_{\ast }`$, in our models. In Figure 3 we show escape fractions for the much larger efficiency, $`f_{\ast }=40`$%. As expected, the escape fractions are larger than the corresponding values shown in Figure 2, due to the increased ionizing luminosity \[see Eq. (9)\] and the reduced mass of the gaseous disk in the simulations. However, the escape fractions are still very small at high redshifts, certainly much smaller than the value of $`f_{\mathrm{esc}}\sim 50`$% adopted by Madau (1999) as necessary for stellar sources to keep the intergalactic medium ionized at $`z\sim 5`$. In Figure 4 we show results for a simulation in which $`f_{\ast }=4`$%, but with an order-of-magnitude reduction in the density of the smooth gaseous disk, $`n_0\rightarrow n_0/10`$. Since we keep the scale height of the smooth gas unchanged, this is equivalent to lowering the mass of the absorbing gas in the disk by a factor of ten, and may result from the incorporation of gas into highly dense and compact clumps (see § 4) or from the expulsion of gas by stellar winds and supernovae. The baryonic mass fraction in local disk galaxies, such as the Milky Way, could have been influenced by outflows, as it is a few times lower than the standard cosmic baryonic fraction of $`m_d=0.17`$ (see Fig. 5). The role of feedback from outflows is expected to be more pronounced at high redshifts, where the characteristic potential well of galaxies is shallower. Note, however, that under these conditions the overall star formation efficiency is expected to be reduced. Nevertheless, in our calculation we left the stellar luminosity unchanged and only reduced the mass of the smoothly distributed gas; under these circumstances, $`f_{\ast }=4\%`$ corresponds to the stars having $`40\%`$ of the mass of the smooth gaseous disk.
This calculation was intended to simulate the most favorable conditions for the escape of ionizing radiation from high-redshift disks. Figure 4 illustrates that although the resulting escape fractions for this extreme case are larger than the corresponding results in Figure 2, they are still negligible for halo masses $`M_{\mathrm{halo}}>10^{10}M_{\odot }`$ at $`z_\mathrm{f}>10`$. It should be further noted that these low escape fractions were calculated for the steady ionization state of the disk implied by the rather high star formation rates of equation (10). We also examined the case of gas removal from the disk due to outflows, resulting in an increased disk scale height. Escape fractions obtained by reducing the central density, $`n_0`$, by a factor of ten (or, correspondingly, the disk mass by a factor of three in this case) are very similar to those in Figure 4. In order to investigate the effects on the escape fractions of a disk mass lower than our assumed $`m_d=0.17`$, we have performed a simulation for a $`10^{10}M_{\odot }`$ halo assuming $`f_{\ast }=4\%`$ and $`j_{\ast }\propto n`$, but with $`m_d=0.02`$. The results are shown in Fig. 5. The lower disk mass yields a lower ionizing luminosity (Eq. 9) and a lower gas density (Eq. 8). The overall result is an increase in the escape fraction, albeit for galaxies of intrinsically lower luminosity.

### 3.2 Quasars

Being a point source of ionizing radiation, a quasar is expected to create an ionized region around the center of the galactic disk. The disk geometry presents a higher opacity to the quasar's ionizing photons along the disk midplane than perpendicular to it. If the quasar luminosity is sufficiently high, it creates a vertically extended photoionized region of low opacity through which ionizing photons may escape. We have therefore restricted our simulations of quasar sources to the innermost region of the disk and performed the radiation transfer within a cube of side $`40z_0(r=0)`$. Figure 6 presents the escape fraction as a function of formation redshift for various halo masses. The escape fractions are typically in the range of 20%–80% over a wide range of formation redshifts and halo masses. They decrease as the disks get denser with increasing formation redshift, but are always significantly larger than in the stellar case. In all our simulations, the vertical escape routes are created by the ionizing luminosity of the quasar alone, without any additional dynamical clearing of the gas by outflows (e.g., a jet or a wind) from the quasar. Such dynamical action could open up low-density escape routes for the ionizing photons, thereby increasing the already high escape fraction. In Figure 6b we show escape fractions assuming the gaseous disk has a density smaller by a factor of ten than considered in Figure 6a ($`n_0\rightarrow n_0/10`$). As expected, the escape fractions are very high in this case. Our derivation of high escape fractions for quasars is consistent with the lack of significant absorption beyond the Lyman limit at the host galaxy redshift in observed quasar spectra (see, e.g., the review by Koratkar & Blaes 1999). We have also simulated lower luminosity quasars ionizing a disk within a $`10^{10}M_{\odot }`$ halo. Figure 7 shows the escape fractions for black hole mass fractions in the range $`6\times 10^{-7}<m_{bh}<6\times 10^{-4}`$. Since the ionizing luminosity is proportional to the black hole mass \[see Eqs. (11) and (12)\], the escape fraction decreases for low-luminosity quasars.
For very low luminosity quasars, $`f_{\mathrm{esc}}`$ diminishes because the Strömgren sphere created by the quasar is smaller than the scale height of the disk, and all ionizing photons are trapped. In these cases, the emergence of hydrodynamic outflows or jets from the quasar accretion flow could be important in opening escape channels for the ionizing radiation.

### 3.3 Angular Distribution of Escaping Ionizing Flux

In addition to the ionization structure and escape fraction, our code also provides the angular distribution of the escaping ionizing flux. In Figure 8 we show this angular distribution for stellar sources ($`f_{\ast }=4`$%) or a central quasar within a halo of mass $`M_{\mathrm{halo}}=10^{10}M_{\odot }`$ at $`z_\mathrm{f}=8`$. The escape fraction for the stellar case is $`f_{\mathrm{esc}}=6`$% and for the quasar $`f_{\mathrm{esc}}=60`$%. Since the system is axisymmetric in both cases, we show the normalized flux of ionizing photons as a function of $`\mathrm{cos}i`$, where $`i`$ is the disk inclination angle. The flux is normalized to unity for a face-on viewing of the disk. The figure clearly illustrates the asymmetry of the emerging ionizing radiation field due to the dense disk, as photons escape preferentially along the paths of lower optical depth. For the quasar case the pole-to-equator flux ratio is more extreme, since the central source opens up a low-opacity ionized chimney but is unable to ionize the disk to a large distance in the midplane. For stars, the emission is distributed throughout the disk, resulting in a more moderate pole-to-equator flux variation. The cosmological H II regions formed by both types of sources in the surrounding IGM will show neutral shadow regions aligned with the disk midplane.

## 4 Effects of Clumping on the Escape of Ionizing Radiation from Stars

In § 3.1 we calculated the escape fraction, $`f_{\mathrm{esc}}`$, for stellar sources, assuming that the ionizing sources and the absorbing galactic gas were smoothly distributed according to the formulae presented in § 2.2. In this section we present the results of simulations in which the gas, the sources, or both have a clumpy distribution within the galactic disks. Several recent papers have studied the dust scattering of radiation in a two-phase medium, with emphasis on the penetration and escape of non-ionizing stellar radiation from clumpy environments (Boisse 1990; Witt & Gordon 1996, 2000; Bianchi et al. 2000; Haiman & Spaans 2000). Dove et al. (2000) investigated the escape of ionizing photons from the Milky Way disk, where inhomogeneities in the interstellar medium were modeled as either spheres or cylindrical disks. In all of these studies, clumping allowed photons to penetrate to greater depths than in a smooth medium, and the escape of radiation was enhanced relative to the case where the same gas mass was distributed smoothly. In evaluating the escape of ionizing radiation from clumpy environments we adopt the two-phase prescription of Witt & Gordon (1996). The two-phase medium has two parameters, namely the volume filling factor of dense clumps, $`ff`$, and the density contrast between the clump and interclump medium, $`C`$. We transform our smooth density distribution \[Eqs.
(6), (7), and (8)\] into a clumpy one by looping through our three-dimensional grid and applying the following algorithm in each grid cell:

$$n_{\mathrm{clumpy}}=\{\begin{array}{cc}n_{\mathrm{smooth}}/[ff+(1-ff)/C],\hfill & \text{if }\xi <ff\text{;}\hfill \\ n_{\mathrm{smooth}}/[ff(C-1)+1],\hfill & \text{otherwise,}\hfill \end{array}$$ (13)

where $`\xi `$ is a uniform random deviate in the range (0,1). This algorithm ensures that on average the total disk mass is the same for the clumpy and smooth models, and that the ensemble average of the clumpy distribution follows the same spatial profile as the smooth gas. In this approach the smallest clump is a single cell in our density grid. Our resolution is set by the size of our density grid ($`100^3`$ cells), so that each cell is a cube of side $`3R/50`$, where $`R`$ is the radial scale length of the disk. For simplicity, we consider a two-phase medium in which the interclump medium has zero density. Such a medium is described by a single parameter only, $`ff`$. In our simulations we approximate this extreme case by adopting a very large density contrast, $`C=10^6`$. Slices through the resulting density distribution for various clump filling factors are shown in Figure 9. We have also applied the above algorithm to generate a clumpy emissivity distribution, which could result, for example, from clusters of young stars. In Figure 10 we show the results of clumpy simulations for the case of a $`10^{10}M_{\odot }`$ halo at $`z_\mathrm{f}=10`$, with $`m_d=0.17`$, $`f_{\ast }=4\%`$, and $`j_{\ast }\propto n`$. In our smooth simulations this case yields $`f_{\mathrm{esc}}<10^{-3}`$. The figure shows the results of three different clumpy simulations. The first assumes a smooth distribution for the gas and a clumpy distribution for the emissivity, while the second examines a clumpy gas distribution with a smooth emissivity. In the third simulation both the emissivity and the gas density are clumped, but they are not spatially correlated (i.e., we generated the clumpy density and emissivity with the above algorithm, but from different random number sequences). For the first simulation, $`ff`$ is defined to be the filling factor of the clumpy emissivity. We find that $`f_{\mathrm{esc}}>1\%`$ is attained for $`ff<0.05`$. For these very low values of $`ff`$, the emissivity is concentrated into less than 5% of the galactic volume. Within these compact concentrations there is a higher flux of ionizing photons per hydrogen atom compared to the smooth simulations. Each high-emissivity cell can now generate a bigger Strömgren volume and, depending on its location within the galactic disk, can open up an H II escape channel and yield a larger $`f_{\mathrm{esc}}`$ than the smooth simulation does. As the emissivity filling factor approaches unity, we recover the very small $`f_{\mathrm{esc}}`$ characteristic of the smooth case. The escape fractions are larger when the interstellar medium itself is clumped. We find that with the above prescription for the random clumping, the results are insensitive to whether the emissivity is smooth or clumped. (However, when the clumpy emissivity and density are correlated, i.e., emission arises only within the dense clumps, we find that the escape fraction is negligible, since the emission within the dense clumps is unable to ionize the clumps and escape.) When the clump filling factor is small, there are lines of sight from either the smooth or the clumpy emissivity that do not encounter dense gas, and the ionizing photons are free to exit the galactic disk directly.
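Equation (13) is straightforward to apply on a grid. A minimal sketch follows; the grid shape and filling factor below are arbitrary illustrative choices:

```python
import numpy as np

def make_clumpy(n_smooth, ff, C=1e6, seed=0):
    """Apply Eq. (13) cell by cell: clump filling factor ff, clump/interclump
    density contrast C (C = 1e6 approximates an empty interclump medium)."""
    rng = np.random.default_rng(seed)
    xi = rng.random(n_smooth.shape)               # uniform deviate per cell
    clump = n_smooth / (ff + (1.0 - ff) / C)      # dense phase
    interclump = n_smooth / (ff * (C - 1.0) + 1.0)
    return np.where(xi < ff, clump, interclump)

# e.g. a 100^3 grid with 20% of the cells in clumps:
smooth = np.ones((100, 100, 100))
clumpy = make_clumpy(smooth, ff=0.2)
print(clumpy.mean())   # ~1.0: mass is conserved on average
```

Because the two branches are chosen with probabilities $`ff`$ and $`1-ff`$, the cell-averaged density equals $`n_{\mathrm{smooth}}`$, which is precisely the mass- and profile-preserving property stated above.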
The escape fraction approaches unity for very small clump filling factors. Such small filling factors represent the unphysical situation of all the galactic mass residing in a few dense and highly compact clumps. However, we do find that $`f_{\mathrm{esc}}`$ can exceed several percent for $`ff\lesssim 20\%`$, which is comparable to the filling factors of H I (Bregman, Kelson, & Ashe 1993) and H II (Reynolds et al. 1995) in the Milky Way galaxy. While the two-phase density distribution we have adopted does not fully describe the hierarchical structures observed in galactic disks, it does provide an estimate of the porosity levels and clump filling factors required to allow significant escape fractions from high-redshift galaxies. ## 5 Discussion We have calculated the escape fraction of ionizing photons as a function of formation redshift, $`z_\mathrm{f}`$, for disk galaxies of various masses. In our canonical model, the disk mass within a given dark matter halo was assumed to be the cosmic baryonic mass fraction, since the initial cooling of the virialized gas with a temperature $`T\gtrsim 10^4`$ K is expected to be very rapid compared to a Hubble time at the high redshifts of interest (see relevant cooling rates in Binney & Tremaine 1987). We find that for smooth disks the escape fraction for bright quasars is in excess of $`30`$% for most galaxies at $`z_\mathrm{f}>10`$, whereas stellar sources are unable to ionize their host galaxies, yielding much smaller escape fractions. In our smooth disk models, stellar escape fractions in excess of a percent are never achieved beyond $`z_\mathrm{f}=10`$ (see Figure 2) unless the conversion efficiency of baryons into stars, $`f_{}`$, is large (Figure 3), requiring unreasonably high star formation rates \[see Eq. (10)\]. Even for the most extreme cases, we find that disks which form within rarer dark matter halos more massive than $`10^{10}M_{}`$ have negligible escape fractions at $`z_\mathrm{f}\gtrsim 10`$. Clumping or expulsion of gas from the disks would tend to leave a diluted interstellar medium, and allow for larger escape fractions. We have investigated the effect of gas expulsion by considering a smooth gaseous disk in which the density is an order of magnitude lower than implied by the cosmic baryonic fraction (see Figs. 4 and 5). In this case we find that the escape fractions are considerably increased, but again only disks within low-mass halos ($`M_{\mathrm{halo}}\lesssim 10^{10}M_{}`$) exhibit escape fractions in excess of several percent at $`z_\mathrm{f}\gtrsim 10`$. Note that this case requires an extreme star formation rate even with $`f_{}=4\%`$, since $`40\%`$ of the smooth gas mass is already incorporated into stars. If the depletion of the smooth gas is due to outflows, then these flows would be effective at increasing the escape fraction only if they clear out the absorbing gas from the disk in less than a few million years, the characteristic timescale over which the ionizing radiation is emitted by massive stars. In this case, the overall star formation efficiency is expected to be lower than we assumed due to the rapid depletion of the galactic gas reservoir (and so the total contribution of the host galaxy to the ionizing background would be low). We have also investigated the escape fractions from clumpy disks with a two-phase density distribution and find that escape fractions larger than a few percent can be achieved if the interclump regions (which were assumed to be devoid of absorbing material) occupy $`\gtrsim 80\%`$ of the disk volume.
In all our simulations we have calculated escape fractions for the asymptotic steady ionization state of the disk. Unless the star formation rates maintain the high values implied by equation (10) over several generations of massive stars, our simulations overestimate the escape fractions, since they do not include the lower ionization state of the surrounding gas due to the lower ionizing luminosity at both early and late times. Considering the Milky Way galaxy, Dove et al. (1999) have shown that including the effects of a finite source lifetime and radiation transfer through supershells created by stellar winds and supernovae will decrease the escape fractions compared to the smooth, static cases we have considered. We expect this effect to be less important for quasars than it is for stars; due to their much higher brightness, quasars reach steady ionization conditions on a time scale much shorter than their expected lifetime (which is $`\gtrsim 10^6`$ yr). Our discussion focused on isothermal ($`T=10^4`$ K) disks that form within dark matter halos with masses $`M_{\mathrm{halo}}\gtrsim 10^9M_{}`$. Lower mass halos yield disk scale heights that are not smaller than their scale radii (see Fig. 1), since their virial temperature is lower than our assumed gas temperature of $`10^4`$ K. The virialized gas in such halos is unable to cool via atomic transitions, and might fragment into stars only if molecular hydrogen forms efficiently in it (Haiman et al. 1999). However, the star formation process in such systems is expected to be self-destructive and to limit $`f_{}`$ to low values. If only a small fraction of the gas converts into massive stars, then molecular hydrogen would be photo-dissociated (Omukai & Nishi 1999). More importantly, any substantial photo-ionization of the gas in these systems (which is required in order to allow for a considerable escape fraction) would heat the gas to a temperature $`\sim 10^4`$ K and boil it out of the gravitational potential well of the host galaxy, thus suppressing additional star formation. Winds from a small number of stars or from a single supernova could produce the same effect. It is, of course, possible that by the time the gas is removed from these systems, they would have already formed stars that contribute to the ionizing background. However, the star formation efficiency under these circumstances is expected to be significantly lower than in higher-mass galaxies. Our derived escape fractions should be regarded as upper limits, since more absorption is likely to be added by neutral gas in the vicinity of the galaxies, due to infalling material, gas in galaxy groups, or gas in nearby intergalactic sheets and filaments. The surrounding gas density is expected to decline rapidly with distance away from a galactic source (roughly as $`1/r^{2.2}`$ in self-similar infall models \[Bertschinger 1985\]) and so have its strongest absorption effect close to the source. Large-scale numerical simulations of reionization (e.g., Gnedin 1999) cannot resolve the small scales of interest due to dynamic range limitations, and are forced to adopt ad hoc prescriptions for the escape fraction from galaxies. A natural extension of the radiation transfer calculation presented in this work would be to embed a galactic source inside its likely intergalactic environment based on high-resolution small-scale hydrodynamic simulations, and to find the fraction of ionizing photons which would escape into the more distant intergalactic space.
Our calculations can be tested by spectroscopic observations of high redshift galaxies, such as the Lyman-break population at $`z\sim 3`$–5 (Steidel et al. 1999). While a direct detection of flux beyond the Lyman break might prove difficult, infrared observations of the H$`\alpha `$ emission from the halos of these galaxies might be used to constrain the escape fraction of their ionizing luminosities. The major implication of this work is that ionizing radiation from stars in high-redshift disk galaxies is expected to be trapped by their surrounding high-density interstellar medium unless outflows or fragmentation dilute this gas by more than an order of magnitude. If most of the gas remains in the smooth disk during the lifetime of the massive stars ($`\sim 10^7\mathrm{yr}`$), then stellar sources are unable to contribute significantly to the intergalactic ionizing background at redshifts $`z\gtrsim 6`$, and only mini-quasars are capable of reionizing the Universe then. If mini-quasars are not sufficiently abundant or have very low luminosities at these redshifts (see Fig. 7), then reionization must have occurred late, close to the horizon of current observations at $`z\sim 6`$–7. KW acknowledges support from NASA’s Long Term Space Astrophysics Research Program (NAG 5-6039) and thanks John Mathis, Jon Bjorkman, George Rybicki, and Barbara Whitney for discussions related to photoionization within the Monte Carlo radiation transfer code. AL was supported in part by NASA grants NAG 5-7039 and NAG 5-7768. We thank Rennan Barkana, Benedetta Ciardi, Andrea Ferrara, Marco Spaans, and Linda Sparke for discussions and suggestions on this work. ## Appendix — Monte Carlo Photoionization Our three-dimensional Monte Carlo radiation transfer code (Wood & Reynolds 1999) has been modified to include photoionization balance in a scheme close to that of Och et al. (1998). In this Appendix we describe the simplifications that we have adopted and show comparisons of our code with other calculations. We assume that all ionizing photons encounter an average opacity (cross section) on their random walk through our grid. The opacity in each cell is given by $`n_{H^0}\overline{\sigma }`$, where $`n_{H^0}`$ is the number density of neutral hydrogen in the cell and $`\overline{\sigma }`$ is the average cross section. The optical depth along a path length $`l`$ through the cell is then $`\tau =n_{H^0}\overline{\sigma }l`$. We use two different cross sections depending on whether the photons are directly emitted by the source or have been absorbed and re-emitted as diffuse ionizing photons. The cross section we use for source photons is averaged over the flux, $`\overline{\sigma }=\int _{\nu _0}^{\infty }F_\nu \sigma _\nu d\nu /\int _{\nu _0}^{\infty }F_\nu d\nu `$, where $`F_\nu `$ is the source ionizing spectrum, $`\nu _0`$ is the frequency at the Lyman edge, and $`\sigma _\nu `$ is the absorption cross section of hydrogen. The diffuse ionizing spectrum is strongly peaked at energies just above 13.6 eV, and so we set the cross section for the random walks of diffuse photons to be the hydrogen cross section at energies just above 13.6 eV, $`\overline{\sigma }=6.2\times 10^{-18}`$ cm<sup>2</sup>. This is the main simplification we have made over the work of Och et al. (1998): we are essentially considering only two frequencies in the radiation transfer. The simulation starts with an assumed ionization fraction throughout the grid; source photons are then emitted isotropically, with the cross section set to its flux-averaged value.
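For orientation, the flux average can be evaluated with a short quadrature. The sketch below is our own Python illustration; it assumes the textbook hydrogen cross section $`\sigma _\nu \approx \sigma _0(\nu /\nu _0)^{-3}`$ with $`\sigma _0=6.3\times 10^{-18}`$ cm<sup>2</sup>, since the exact fit used in the code is not specified here:

```python
import numpy as np

h, k = 6.626e-27, 1.381e-16   # Planck and Boltzmann constants (cgs)
nu0 = 3.288e15                # Hz, frequency of the Lyman edge
sigma0 = 6.3e-18              # cm^2, approximate H cross section at the edge

def flux_mean_sigma(T_eff, nu_max=50.0, npts=200000):
    """Flux-weighted photoionization cross section for a blackbody source."""
    nu = np.linspace(nu0, nu_max * nu0, npts)    # uniform frequency grid
    F = nu**3 / np.expm1(h * nu / (k * T_eff))   # Planck flux shape
    sigma = sigma0 * (nu / nu0) ** (-3.0)        # approximate cross section
    return np.sum(F * sigma) / np.sum(F)         # weights cancel on a uniform grid

print(flux_mean_sigma(4.0e4))  # ~3e-18 cm^2
```

For the $`T_{\mathrm{eff}}=4\times 10^4`$ K test source discussed below, this yields $`\overline{\sigma }\approx 3\times 10^{-18}`$ cm<sup>2</sup>, consistent with the value quoted for that case.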
Random optical depths are chosen from $`\tau =-\mathrm{ln}\xi `$, where $`\xi `$ is a random number in the range $`0<\xi <1`$. The photons are tracked along their propagation direction until they reach an interaction point at a distance $`s`$ from their emission (or re-emission) point, where $`s`$ is determined from $`\tau =\int _0^sn_{H^0}\overline{\sigma }dl`$. At the interaction points, the photons are absorbed and re-emitted as either line photons with energies $`h\nu <13.6`$ eV, or as diffuse ionizing photons with $`h\nu >13.6`$ eV. The probability of a photon being re-emitted as an ionizing photon is the ratio of the energy in the diffuse ionizing spectrum to the total energy \[see Eq. (16) and the accompanying discussion in Och et al. 1998\]. In our simulations we use the following simplified form for this probability, $$P=\frac{\alpha _1}{\alpha _A}=1-\frac{\alpha _B}{\alpha _A}$$ (14) where $`\alpha _1`$ is the recombination coefficient to the ground state of hydrogen (which gives the diffuse ionizing spectrum), $`\alpha _B`$ is the recombination coefficient to the excited levels, and $`\alpha _A`$ is the recombination coefficient to all levels, including the ground state. In our simulations we adopt recombination coefficients appropriate for a gas at $`10^4`$ K (Osterbrock 1989): $`\alpha _A=4.18\times 10^{-13}`$ cm<sup>3</sup> s<sup>-1</sup>, $`\alpha _B=2.59\times 10^{-13}`$ cm<sup>3</sup> s<sup>-1</sup>, and thus $`P=0.38`$. An absorbed photon is re-emitted isotropically from its absorption point as a diffuse ionizing photon if $`\xi <P`$. If $`\xi >P`$, then the photon is re-emitted as a non-ionizing photon, in which case we ignore the opacity it encounters and remove it from the simulation. As the photons are tracked on their random walks, we calculate their contribution to the mean intensity in each cell, using the path length formula from Lucy \[1999, his Eq. (13)\], $$J=\frac{E}{4\pi \mathrm{\Delta }tV}l,$$ (15) where the total ionizing luminosity is split up into $`N`$ equal energy packets (Monte Carlo “photons”) of luminosity $`E/\mathrm{\Delta }t=h\nu Q(H^0)/N`$, $`l`$ is the path length the photon traverses in each cell, and $`V`$ is the volume of a cell in our grid. In the photoionization equilibrium equation $$n_{H^0}\int _{\nu _0}^{\infty }\frac{4\pi J_\nu }{h\nu }\sigma _\nu d\nu =\alpha _An_en_p,$$ (16) we need the integral of $`4\pi J_\nu \sigma _\nu /h\nu `$ over all ionizing frequencies. (Here, $`n_e`$ and $`n_p`$ are the number densities of free electrons and protons.) As we are effectively only considering the cross section at two frequencies, a counter is kept for each cell that approximates the integral as $$\int _{\nu _0}^{\infty }\frac{4\pi J_\nu }{h\nu }\sigma _\nu d\nu =\frac{Q(H^0)}{N}\frac{l}{V}\overline{\sigma },$$ (17) where $`\overline{\sigma }`$ is the mean cross section seen by either the source photons or the diffuse photons. All photons are tracked and their contributions to the mean intensity tallied in each cell until they exit the grid, either through multiple absorption and re-emission as diffuse ionizing photons, or as non-ionizing photons. At this point we have the mean intensity within each grid cell and can solve the hydrogen photoionization equation to determine the ionization fraction in each cell. With the ionization fraction, and therefore the opacity, calculated throughout the grid, we iterate the above procedure, recalculating the ionization fractions until convergence is achieved.
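A compact plane-parallel rendering of the random walk described above is sketched below (our own Python illustration; the actual code works on a three-dimensional grid and normalizes the tally by the cell volume and photon number, as in Eq. (17)):

```python
import numpy as np

rng = np.random.default_rng(1)
P = 0.38             # re-emission probability as a diffuse ionizing photon, Eq. (14)
sigma_src = 3.0e-18  # cm^2, flux-mean cross section for source photons
sigma_dif = 6.2e-18  # cm^2, cross section for diffuse photons

def walk_photon(x, mu, n_H0, dx, counter):
    """Track one photon packet through a 1-D grid of cells.

    n_H0    : neutral hydrogen density in each cell (cm^-3)
    counter : per-cell accumulator of sigma*l, the tally entering Eq. (17)
    """
    sigma, i = sigma_src, int(x / dx)
    while True:
        tau = -np.log(rng.random())          # random optical depth to travel
        while tau > 0.0:
            if i < 0 or i >= n_H0.size:
                return                       # photon escaped the grid
            edge = (i + 1) * dx if mu > 0 else i * dx
            s_face = (edge - x) / mu         # distance to the next cell face
            s_tau = tau / max(n_H0[i] * sigma, 1e-300)
            s = min(s_face, s_tau)
            counter[i] += sigma * s          # path-length (Lucy) estimator
            x += mu * s
            tau -= n_H0[i] * sigma * s
            if s == s_face:
                i += 1 if mu > 0 else -1     # step into the neighboring cell
        if rng.random() >= P:
            return                           # re-emitted below 13.6 eV; dropped
        sigma = sigma_dif                    # becomes a diffuse ionizing photon
        mu = 2.0 * rng.random() - 1.0        # isotropic re-emission
```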
We typically find that the ionization fractions converge within five iterations. In order to check the validity of our assumptions, we have tested our code against the spherically symmetric photoionization code of Mathis (1985). The test case we have chosen is the same as that presented by Och et al. (1998). A blackbody ionizing source of $`T_{\mathrm{eff}}=4\times 10^4`$ K and ionizing luminosity $`Q(H^0)=4.26\times 10^{49}`$ s<sup>-1</sup> ionizes a region of constant number density $`n_H=100`$ cm<sup>-3</sup>. In our approach, the flux mean cross section for this case is $`\overline{\sigma }=3\times 10^{-18}`$ cm<sup>2</sup>. Figure 11 shows the ionization fraction as calculated by our code and by the Mathis code. Even though we have employed many simplifications compared to Mathis’ code, our results for hydrogen photoionization (ionization fraction and extent of the ionized region) are in excellent agreement with his. As a second test case we have compared our code with the two-dimensional “Strömgren volume” calculations of Dove & Shull (1994, their Figs. 1 and 2). They calculated the escape fraction and ionized volumes formed within a three-component galactic disk by point sources of different luminosity. Figure 12 shows the escape fractions calculated by our Monte Carlo code and the analytic escape fractions of Dove & Shull \[1994, their Eq. (16)\]. We have plotted the total escape fraction from both sides of the disk, whereas Dove & Shull (1994) showed the escape fraction from one side of the disk. Our Monte Carlo results are in excellent agreement at high luminosities, but are systematically lower at low luminosities. This discrepancy arises because we calculate the ionization fraction throughout our grid, whereas the Strömgren volume analysis assumes that the medium is either neutral or fully ionized. In Figure 13 we show three slices through our density grid showing the total number density, the ionization fraction, and the locations of the regions where photons are absorbed and re-emitted as escaping non-ionizing radiation. While the Strömgren volume is similar to that shown in Figure 1d of Dove & Shull (1994), our calculated neutral fraction of $`f\sim 10^{-3}`$ provides sufficient opacity to absorb source photons, thereby reducing the escape fraction compared to that of Dove & Shull (1994). In the analysis of Dove & Shull, source photons can escape directly through this ionized “chimney”, where they assume a vanishing neutral fraction.
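For reference, the per-cell balance of equation (16) that closes each iteration reduces, for pure hydrogen with $`n_e=n_p=xn_H`$ and $`n_{H^0}=(1-x)n_H`$, to a quadratic equation for the ionization fraction $`x`$. A sketch (our own Python; `transport` is a hypothetical stand-in for a Monte Carlo pass like the one above):

```python
import numpy as np

alpha_A = 4.18e-13   # cm^3 s^-1, recombination coefficient to all levels at 1e4 K

def ionization_fraction(gamma, n_H):
    """Solve (1 - x) * gamma = alpha_A * x**2 * n_H for x, cell by cell.

    gamma is the photoionization rate per neutral atom (s^-1), i.e. the
    left-hand integral of Eq. (16) divided by n_H0, accumulated via Eq. (17).
    """
    a = alpha_A * n_H
    x = (-gamma + np.sqrt(gamma**2 + 4.0 * a * gamma)) / (2.0 * a)
    return np.clip(x, 0.0, 1.0)

# outer loop: transport photons on the current opacity, then update x
# x = np.full(n_H.shape, 0.5)
# for iteration in range(20):
#     gamma = transport(n_H * (1.0 - x))    # hypothetical Monte Carlo pass
#     x_new = ionization_fraction(gamma, n_H)
#     if np.max(np.abs(x_new - x)) < 1e-3:
#         break                             # typically within ~5 iterations
#     x = x_new
```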
# Radiation hydrodynamics of SN 1987A: I. Global analysis of the light curve for the first 4 months ## 1 Introduction SN 1987A has provided us with an excellent opportunity to test the theory of massive star evolution, nucleosynthesis, and supernova explosion. The broad band photometric observations, ranging from ultraviolet to far-infrared, and the resultant bolometric light curve enable us to probe the physical processes occurring in the interior of SN 1987A. The light curve is so sensitive to the hydrodynamics that it is a useful tool to infer the progenitor’s radius, the distribution of elements, the mass of the ejecta (in particular, the mass of the hydrogen-rich envelope, $`M_{\mathrm{env}}`$), and the explosion energy, $`E`$ (e.g., Nomoto et al. 1994 for a review). Although earlier theoretical models with the flux-limited diffusion approximation are generally in good agreement with observations (see, e.g., Arnett et al. 1989a and Hillebrandt & Höflich 1989 for reviews), there are still some uncertainties. One of the most important complications is that the supernova atmosphere is scattering dominated, so that the color temperature is much higher than the effective temperature; the spectrum is a superposition of spectra emerging from layers with different depths and temperatures (Imshennik & Utrobin 1977; Shigeyama et al. 1987; Pizzochero 1990; Höflich & Wheeler 1999 and references therein). Including this effect is crucial for constraining model parameters. Previous works on modeling this effect have used supernova atmospheric codes which rely either on the temperature structure derived from equilibrium diffusion models (Höflich 1991), or on the time-dependent luminosity from one-group radiation-hydro models (Hauschildt & Ensman 1994). To produce light curves, the latter work also used the temperature structure prescribed by the one-group radiation-hydro results. In the present work we attempt to solve the problem of a scattering-dominated supernova envelope by doing all calculations time-dependently. We calculate the temperature structure self-consistently, and we make no assumption of radiative equilibrium, which is strongly violated during the shock breakout. We analyze the light curve of SN 1987A with a multi-group radiation hydrodynamics code called stella (Blinnikov & Bartunov 1993; Blinnikov et al. 1998). The calculated broad band ($`UBV`$, IUE UV) and bolometric light curves are compared with observations for the first $`4`$ months after core collapse. From this we update the constraints on model parameters such as the explosion energy and the extent of mixing. We discuss the uncertainties in our predictions due to NLTE effects (which are not included in our modeling) by comparing with NLTE atmospheric calculations done by other authors. We believe that those uncertainties can be bracketed for the first months of SN 1987A light by two extreme assumptions on the scattering of photons in spectral lines. It is found that the predictions for the first hours and days are fairly insensitive to the assumptions on spectral lines, because of the overwhelming dominance of electron scattering. For those epochs we therefore produce reliable predictions for the soft X-ray/extreme UV flash of the supernova. ## 2 Radiation Hydrodynamics Earlier modeling of the light curve of SN 1987A has adopted the following types of numerical approaches: 1.
Equilibrium-diffusion radiation hydrodynamics, with a flux-limiter to ensure a smooth transition from the diffusion to the free-streaming regime (Shigeyama et al. 1987; Arnett 1987, 1988; Grasberg et al. 1987; Woosley 1988; Utrobin 1993). Here equilibrium means a one-temperature approximation where the radiation and gas have the same temperature. Local thermodynamical equilibrium (LTE) is assumed, and blackbody spectra are used to obtain monochromatic broad-band light curves. 2. Non-equilibrium, one-energy group (gray) radiation hydrodynamics, where equilibrium between the gas and the radiation is no longer assumed (i.e., two-temperature transfer). LTE is assumed and a diluted blackbody approximation is used for the monochromatic light curves (Ensman & Burrows 1992; Mair et al. 1992). 3. Multi-energy group (non-gray) atmospheric codes combined with gray radiation hydrodynamics. Here the atmospheric structures, i.e., the distributions of temperature, density, and velocity at $`\tau _{\mathrm{sc}}\lesssim 100`$, are obtained from the gray hydro code described above (Höflich 1991; Hauschildt & Ensman 1994) and the emerging spectra are computed taking into account NLTE effects. 4. Full multi-energy group radiation hydrodynamics. This is the approach in our study, and the code we have used is called stella (Blinnikov & Bartunov 1993; Blinnikov et al. 1998); a detailed description of new technical features of the code is given in Sorokina, Blinnikov, & Nomoto (1999). LTE for ionization and atomic level populations is assumed. stella solves the time-dependent equations for the angular moments of intensity averaged over fixed frequency bands, using up to 300 zones for the Lagrangean coordinate and up to 100 frequency bins (i.e., energy groups). This high number of frequency groups allows one to have a reasonably accurate representation of non-equilibrium continuum radiation. There is no need to ascribe any temperature to the radiation: the photon energy distribution can be quite arbitrary. The coupling of multi-group radiative transfer with hydrodynamics means that we can obtain the color temperature in a self-consistent calculation, and that no additional estimates of the thermalization depth, as in the one-energy group model of Ensman & Burrows (1992), are needed. Variable Eddington factors are computed, which fully take into account scattering and redshifts for each frequency group in each mass zone. The gamma-ray transfer is calculated using a one-group approximation for the non-local deposition of the energy of radioactive nuclei. Here we follow Swartz, Sutherland, & Harkness (1995; see also Jeffery 1998), and we only use a purely absorptive opacity. This should be a good approximation. In the equation of state, LTE ionizations and recombinations are taken into account. The effect of line opacity is treated as an expansion opacity according to the prescription of Eastman & Pinto (1993; see also Blinnikov et al. 1998). Their approach is different from that of Shigeyama & Nomoto (1990), who used Rosseland mean opacities for scattering and absorption processes, where the line opacities were assumed to be 0.009 and 0.01 cm<sup>2</sup> g<sup>-1</sup> for helium and heavier elements, respectively (Los Alamos opacity library). Our opacities are also different from those in other equilibrium-diffusion or one-group radiation hydro models, since instead of one single energy-averaged opacity we need opacities for all our energy groups.
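For orientation, the binned expansion opacity of Eastman & Pinto (1993) is commonly written as $`\chi _{\mathrm{exp}}=\frac{1}{ct\rho }\sum _j\frac{\nu _j}{\mathrm{\Delta }\nu }\left(1-e^{-\tau _j}\right)`$, with $`\tau _j`$ the Sobolev optical depth of line $`j`$ in a bin of width $`\mathrm{\Delta }\nu `$. The following Python sketch is our own schematic rendering of this formula, not the stella implementation:

```python
import numpy as np

def expansion_opacity(nu_lines, tau_sob, nu_edges, t, rho):
    """Schematic binned expansion opacity (cm^2/g), Eastman & Pinto form.

    nu_lines : line frequencies (Hz)      tau_sob : Sobolev optical depths
    nu_edges : frequency-bin edges (Hz)   t, rho  : time (s), density (g/cm^3)
    """
    c = 2.998e10                                  # speed of light, cm/s
    weight = nu_lines * (1.0 - np.exp(-tau_sob))  # each line's contribution
    which = np.digitize(nu_lines, nu_edges) - 1   # bin index of each line
    chi = np.zeros(len(nu_edges) - 1)
    for i, w in zip(which, weight):
        if 0 <= i < chi.size:
            chi[i] += w
    return chi / (c * t * rho * np.diff(nu_edges))
```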
## 3 Models We have studied two progenitor models in some detail: the evolutionary model of Nomoto & Hashimoto (1988) and Saio, Nomoto, & Kato (1988b) (see also Shigeyama et al. 1988; Shigeyama & Nomoto 1990; Yamaoka et al. 1991; Saio, Kato, & Nomoto 1988a), and the non-evolutionary model of Utrobin (1993). Here we concentrate on the model studied by Nomoto and co-workers. The results for Utrobin’s model (which gives one of the best fits to the bolometric light of SN 1987A among the equilibrium diffusion models) are presented elsewhere. In the evolutionary model, a star with the initial mass $`M_{\mathrm{ms}}`$ = 23 $`M_{}`$ and low metallicity $`Z`$ = 0.005 is evolved from the main sequence and onwards, with the Schwarzschild criterion applied for convection. During the evolution from blue to red, the stellar mass decreases to 16.3 $`M_{}`$ and a helium core of $`M_{\mathrm{core}}`$ = 6.7 $`M_{}`$ is formed. During the red phase, 0.7 $`M_{}`$ of helium is mixed out into the hydrogen-rich envelope, yielding $`M_{\mathrm{core}}`$ = 6.0 $`M_{}`$ and $`M_{\mathrm{env}}`$ = 10.3 $`M_{}`$. The dredge-up of the helium enhances the surface helium abundance to $`Y_{\mathrm{surf}}`$ = 0.43, which is large enough to move the star in the HR diagram from the red to the location of Sk –69 202 in the blue. The resultant luminosity and radius are $`L_0`$ = $`1.3\times 10^5`$ $`L_{}`$ and $`R_0`$ = 48.5 $`R_{}`$, respectively. Because the radius of the progenitor is not well constrained in the evolutionary model, we have also constructed hydrostatic envelope models with radii of $`R_0`$ = 40 $`R_{}`$ and $`R_0`$ = 58 $`R_{}`$, which are fitted to the evolved 6 $`M_{}`$ He core. The 6 $`M_{}`$ He core was evolved through the iron core collapse (Nomoto & Hashimoto 1988). The resultant explosion and explosive nucleosynthesis were calculated as in Hashimoto, Nomoto, & Shigeyama (1989) and Thielemann, Hashimoto, & Nomoto (1990). We have assumed in all our models (i.e., irrespective of the explosion energy) that the mass cut is located at $`M_\mathrm{r}=M_\mathrm{c}`$ = 1.6 $`M_{}`$, so that the mass of the ejecta is $`M_{\mathrm{ej}}=14.7`$ $`M_{}`$. Following Shigeyama & Nomoto (1990) we denote our standard model 14E1. Various suffixes have been added to distinguish between different models which have different explosion energies and initial radii, as well as other different physical properties. For our multi-group computation it is inconvenient to use the initial evolutionary 14E1 model directly. Instead, the star was constructed in hydrostatic equilibrium in the same way as was done in Blinnikov et al. (1998) for SN 1993J: as we need a much finer zoning in the outer layers in our calculations than was used in Nomoto & Hashimoto (1988) and Saio et al. (1988b), a remap of the original model was done onto another grid. We assume that at the outer boundary (i.e., at $`m=M`$, $`M`$ being the total mass of the star) the material pressure vanishes, $`p=0`$, and that there is no radiation coming from the outside. The density structure found in this way is shown in Figure 1. The density of the free blue wind is only $`10^{-16}`$ g cm<sup>-3</sup> when scaled as $`r^{-2}`$ from $`10^{17}`$ cm inward to $`3\times 10^{12}`$ cm (Lundqvist 1999). This falls outside the plot in Figure 1.
As the shock wave propagates through the star, the interfaces where the composition changes suddenly from C+O dominated to helium dominated, and then further out to hydrogen dominated, are strongly Rayleigh-Taylor unstable. This induces mixing of the material before the shock breakout at the surface (e.g., Bandiera 1984; Ebisuzaki, Shigeyama, & Nomoto 1989; Arnett et al. 1989b; Benz & Thielemann 1990; Hachisu et al. 1990, 1991; Müller, Fryxell, & Arnett 1991; Basko 1994). Particularly important for the light curve behavior is the mixing of hydrogen down to the central region. Mixing of <sup>56</sup>Ni into the hydrogen-rich envelope also affects the light curve, and is decisive for spectral (Utrobin, Chugai, & Andronova 1995) and X-ray observations. The adopted mixed abundance distribution is inferred from the comparison of the X-ray and $`\gamma `$-ray light curves and spectra with observations (Kumagai et al. 1989). The outermost mixed <sup>56</sup>Ni corresponds to a velocity of $`\sim `$4000 km s<sup>-1</sup> in the model that best fits the light curve (see below). The mixing is artificial, since 2D modeling failed to distribute <sup>56</sup>Ni further out than $`\sim `$2500 km s<sup>-1</sup> (Hachisu et al. 1990, 1991). The density profile was only marginally changed by the mixing. We have tested explosion energies in the range $`E`$ = (0.7–1.5) $`\times 10^{51}`$ ergs and named the runs 14E0.7, 14E1, 14E1.3, etc., where ‘14’ denotes the ejecta mass in solar units; see Table 1. All models in the runs labeled without the suffix ‘U’ have the mixed composition already prior to the model explosion. Each model was exploded by the deposition of heat energy in a layer of mass $`0.03`$ $`M_{}`$ outside of 1.6 $`M_{}`$. Since stella does not include nuclear burning, preservation of the same mixed composition in the ejecta is assured. The explosion energies given above refer to the asymptotic kinetic energy of the ejecta. The heat energy injected for a simulated explosion is higher by $`0.7\times 10^{51}`$ ergs, i.e., by the gravitational binding energy of the presupernova model. A small fraction ($`7\times 10^{48}`$ ergs) goes to the photon energy which is radiated away. For the models 14E1U and 14E1.2U the unmixed composition was used, and in 14E1A we treated the opacity as pure absorption for the same total extinction (which, however, is often dominated by scattering in reality). For the latter model the entry for “forced $`\chi _{\mathrm{abs}}`$” is “yes”, i.e., $`\chi _{\mathrm{abs}}`$ was artificially set equal to the total extinction. The suffix ‘H’ denotes the model having the same density structure as the standard one, 14E1, but with the abundance of hydrogen in the outer layers enhanced to the solar value. The suffix ‘R’ is added for the models with non-standard initial radius (see Table 1). Some other models from Table 1 are discussed below.
## 4 Hydrodynamics and Shock Breakout The shock wave arrives at the surface of the star at a time $`t_{\mathrm{prop}}`$ which, for different values of the initial radius $`R_0`$, the ejected mass $`M_{\mathrm{ej}}`$, and the explosion energy $`E`$, can be approximated by: $$t_{\mathrm{prop}}\approx 1.6\left(\frac{R_0}{50R_{}}\right)\left[\left(\frac{M_{\mathrm{ej}}}{10M_{}}\right)/\left(\frac{E}{1\times 10^{51}\mathrm{erg}}\right)\right]^{\frac{1}{2}}\mathrm{hours}.$$ (1) This expression is consistent with Shigeyama et al. (1987), and it agrees well with our computed breakout times. The evolution of the velocity profile shows how the material is accelerated near the shock breakout until it reaches homologous expansion. The maximum velocity at the outer edge is $`32,500`$ km s<sup>-1</sup> for 14E1 (which is somewhat lower than for 14E1.3). Further acceleration is limited by the inefficiency of the radiative precursor of the shock (see a semi-analytic approach reviewed by Nadyozhin 1994; see also Imshennik & Nadyozhin 1988). We will discuss the influence of various assumptions on the maximum velocity and other details of shock breakout elsewhere. Here we just note that velocities of the order (3–4) $`\times 10^4`$ km s<sup>-1</sup> are in very good agreement with the value found from the absorption feature of Mg II $`\lambda `$2800 in the early IUE spectra (Pun et al. 1995). This has important implications for our understanding of the density structure of the circumstellar medium of the supernova (Chevalier 1999; Lundqvist 1999). More specifically, it indicates that the density of the blue supergiant wind must have been very low, corresponding to a mass loss rate of only $`10^{-8}M_{}\mathrm{yr}^{-1}`$. The density distribution in the outer part of the ejecta is well approximated by a power law $`r^{-8.6}`$, as found in Shigeyama & Nomoto (1990), but the very outermost layers are much steeper. In between there is a dense shell, which was also found in non-equilibrium radiation hydrodynamic modeling (Blinnikov & Nadyozhin 1991; Blinnikov, Nadyozhin, & Bartunov 1991; Ensman & Burrows 1992), but missed in the equilibrium diffusion modeling. Note that the density in the central parts computed by stella is much smoother than shown in Figure 6 of Shigeyama & Nomoto (1990). ## 5 Early Light Curve After the shock breakout, the early light curve up to $`t\sim `$ 25 days is powered by the diffusive release of the internal energy of the radiation field that is established by the shock wave. The bolometric light curve reaches its maximum luminosity of $`L_{\mathrm{bol}}\approx `$ (4–9) $`\times 10^{44}`$ ergs s<sup>-1</sup> (Table 2) immediately after shock breakout, and then drops rapidly by many orders of magnitude in $`\sim `$10 days. The total energy radiated during the first two days amounts to $`\sim 10^{47}`$ ergs (Table 2), but most of the radiation is emitted in a soft X-ray/EUV burst, and was not observed.
However, the burst had the important effect that it ionized the surrounding gas (e.g., Lundqvist & Fransson 1996; Sonneborn et al. 1997; Lundqvist 1999). The resultant ionization of the circumstellar material is commented on briefly in §7.1, and will be compared with the observations in greater detail in Lundqvist, Blinnikov, & Bartunov (1999a). After the burst, the ejecta expand so rapidly that the interior temperature (both of the matter and the radiation) decreases almost adiabatically as $`r^{-1}`$. As a result, the bolometric luminosity decreases sharply to $`L_{\mathrm{bol}}\approx `$ (2–3) $`\times 10^{41}`$ ergs s<sup>-1</sup> to form a minimum of the bolometric light curve. We find that the model with $`E=10^{51}`$ ergs (i.e., 14E1) gives the best agreement with the observed bolometric flux. Note that the agreement for 14E1 is much better than in Figure 7 of Shigeyama & Nomoto (1990). (Note also the higher resolution in our figure than in the figure of Shigeyama & Nomoto 1990.) The light curves computed with flux-limited diffusion in Shigeyama & Nomoto (1990) produce a short plateau, $`\sim `$ 20 days (see Figs. 16–19 in Shigeyama & Nomoto 1990). This is neither seen in the observations, nor in our models. The light curves computed here by the full radiative transport are in better agreement with observations for the same models, suggesting that the more accurate method of modeling gives results which are closer to reality. The luminosity at this phase is lower than for typical Type II-P supernovae by a factor of 10–20. This is due to the small initial radius, which leads to a low luminosity because a much larger fraction of the radiation field energy is lost by $`P`$d$`V`$ work than in ordinary Type II-P supernovae. This is well known from early modeling of low-luminosity Type II-P supernovae (Imshennik & Nadyozhin 1964; Chevalier 1976). For SN 1987A, the low luminosity was successfully demonstrated in the models of Shigeyama et al. (1987), Arnett (1987), Grasberg et al. (1987), Woosley (1988), Woosley, Pinto, & Eastman (1988) and Utrobin (1993). We also follow the changes in the color temperature, $`T_\mathrm{c}`$, of the best blackbody fit to the flux, along with the effective temperature, $`T_{\mathrm{eff}}`$, defined by the luminosity and the radius of last scattering $`R`$ through $`L=4\pi \sigma T_{\mathrm{eff}}^4R^2`$ (see Blinnikov et al. 1998 for details of finding $`R`$ and from that $`T_{\mathrm{eff}}`$). The maximum value of $`T_\mathrm{c}`$ is $`1.2\times 10^6`$ K for model 14E1 (see Table 2), which is higher than the $`7.6\times 10^5`$ K in Höflich’s (1991) NLTE time-dependent calculation for 14E1.25, but similar to the temperatures found by Ensman & Burrows (1992). Our results are in very good agreement with the estimates of Imshennik & Nadyozhin (1988, 1989). We emphasize that our multi-group radiative transfer with hydrodynamics obtains this temperature in a self-consistent way, and no additional estimates of the thermalization depth (like in the one-group model of Ensman & Burrows 1992) are needed.
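Operationally, a color temperature of this kind can be extracted by a least-squares blackbody fit to the emergent spectrum. A minimal sketch (our own construction; we do not reproduce the exact fitting procedure used for the models):

```python
import numpy as np
from scipy.optimize import minimize_scalar

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def color_temperature(nu, F):
    """Best-fit blackbody (color) temperature of a spectrum F_nu.

    For each trial T the normalization A = sum(F*B)/sum(B*B) is the
    least-squares optimum; T_c is the T that minimizes the residual.
    """
    def resid(T):
        B = planck(nu, T)
        A = np.dot(F, B) / np.dot(B, B)
        return np.sum((F - A * B) ** 2)
    return minimize_scalar(resid, bounds=(1.0e3, 5.0e6), method="bounded").x
```

The fitted $`T_\mathrm{c}`$ exceeds $`T_{\mathrm{eff}}`$ whenever scattering pushes the thermalization depth well below the surface of last scattering, which is the situation discussed next.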
The large difference between color and effective temperatures is due to scattering (e.g., Sobolev 1980; Kolesov & Sobolev 1982; Höflich 1991; Wagoner & Montes 1993); the average energy of the photons is higher than that corresponding to the value of $`T_{\mathrm{eff}}`$. The effect is a deficit of photons in the visual at maximum light compared to a model with forced absorption. This effect continues throughout the first day after shock breakout, and the extinction at these high temperatures is strongly dominated by scattering. The color temperature $`T_\mathrm{c}`$ is rather insensitive to $`E`$ (except for the maximum $`T_\mathrm{c}`$ at shock breakout, as displayed in Table 2). During the first day, the visual luminosity increases because the intensity peak is rapidly shifted into the optical due to the decreasing photospheric temperature. In order for the optical flare-up of the supernova to be seen at $`\sim `$6 mag at $`t=3`$ h, the condition $`t_{\mathrm{prop}}<3`$ h \[Eq. (1)\] should be satisfied, which requires a relatively large $`E/M_{\mathrm{env}}`$ and a small $`R_0`$ (Shigeyama et al. 1987; Woosley et al. 1987). Also, the ejecta and the radiation field should have expanded rapidly, so that the radiation temperature becomes lower and the radius of the photosphere becomes larger sufficiently fast. Therefore, the expansion velocity, and thus $`E/M_{\mathrm{env}}`$, should be larger than certain values for a given initial radius. We have computed the $`V`$ light curve for the model 14E1.21 with realistic scattering-dominated opacity for the first hours of SN 1987A. Here, and for other models below, we have used a distance modulus of 18.5. (See Walker 1999 for a discussion of the uncertainty of this value.) We note that the $`V`$ curve exhibits an early local minimum which does not exist in equilibrium diffusion models (e.g., Woosley 1988; Arnett 1988; a small minimum was found also by Utrobin 1993). Höflich (1991) ascribed this to a non-LTE effect, and used the location in time of the minimum to constrain $`E`$. However, we too recover the local minimum despite our LTE approach. The reason for the minimum is that, while the bolometric flux continues to fall after shock breakout (Fig. 9), the changing bolometric correction eventually overcomes the effect of the falling bolometric flux: the visual luminosity then increases because the intensity peak is rapidly shifted into the optical due to the decreasing photospheric temperature. Regardless of the exact cause of the difference between our and Höflich’s results on the one hand, and equilibrium diffusion models on the other, the first maximum in $`V`$ in our models is just on the level of Jones’ limit (Wampler et al. 1987) even for our 14E1.3 run, where the energy of explosion is $`9\times 10^{49}`$ ergs higher (see Table 1) than in the model 14E1.25 used by Höflich (1991; see also Höflich & Wheeler 1999). The earlier appearance of the peak of the $`V`$-flux in our run is due to this higher energy.
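As a quick numerical check of the timing condition $`t_{\mathrm{prop}}<3`$ h, Eq. (1) can be evaluated for the standard model (a trivial sketch using the parameters of §3):

```python
def t_prop_hours(R0_Rsun, Mej_Msun, E51):
    """Shock propagation time to the stellar surface, Eq. (1), in hours."""
    return 1.6 * (R0_Rsun / 50.0) * (Mej_Msun / 10.0 / E51) ** 0.5

# standard model 14E1: R0 = 48.5 Rsun, Mej = 14.7 Msun, E = 1e51 ergs
print(t_prop_hours(48.5, 14.7, 1.0))   # ~1.9 h, comfortably below 3 h
```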
Compared with observations, and especially the first $`V`$ observation by McNaught & Zoltowski (1987; later revised by West & McNaught 1992), the calculated $`V`$ curve rises too slowly. In equilibrium diffusion modeling (e.g., Arnett 1988; Woosley 1988; Shigeyama & Nomoto 1990) the $`V`$ flux rises much faster. The reason for this difference is that equilibrium diffusion models do not care whether the total extinction is absorptive or due to scattering. The spectra are assumed in those models to be blackbody with the temperature equal to $`T_{\mathrm{eff}}`$. This is not a good approximation, since in reality the thermalization of photons takes place well below the surface of last scattering, resulting in $`T_\mathrm{c}\gg T_{\mathrm{eff}}`$. In our run 14E1A we have assumed purely absorptive extinction, i.e., we force even the electron scattering to act as true absorption, so that the thermalization of the photons occurs at an optical depth of order unity. We thereby reproduce the results obtained in equilibrium diffusion models. As a matter of fact, the $`V`$ curve in 14E1A rises even faster than was observed (the $`UBV`$ light curves for 14E1 and 14E1A during the first 5 days illustrate this). There is of course no physical reason why electron scattering should be treated as absorptive. From this it is clear that an accurate treatment of scattering is crucial to constrain $`E`$. We will discuss this now. Höflich (1991) was the first to show that non-equilibrium effects influence the $`V`$ light curve of SN 1987A drastically; the flux in $`V`$ is $`\sim 2`$ magnitudes lower for the first hours than in the equilibrium gray atmosphere case. Höflich (1991) tried a higher explosion energy than in 14E1 of Shigeyama & Nomoto (1990) to compensate for this reduction of flux, by adopting the model 14E1.25 computed in the flux-limited equilibrium diffusion approximation by Shigeyama & Nomoto (1990). He then found a marginal agreement with the first data points of McNaught & Zoltowski (1987). The same results have since been presented in Höflich & Wheeler (1999). Compared with our $`V`$ light curves of 14E1.21 and 14E1.3, the rise of the $`V`$ curve in Höflich’s (1991) 14E1.25 model is significantly faster. To clarify this discrepancy, one should note that Höflich used the temperature structure of an equilibrium diffusion model, which can be appreciably different from that in our full transport models. We cannot say with certainty that this fully explains why our results and those of Höflich differ so much. It could also be that the expansion opacity should be treated differently in the energy equation than described in Höflich (1990; see Blinnikov 1996, 1997). There are, of course, uncertainties also in our models. In particular, the role of NLTE effects needs to be further examined, but we cannot envisage that they are able to explain the large difference between our results and those of Höflich (1991) for the first hours after shock breakout.
At this epoch the extinction is totally dominated by electron scattering, and spectral lines are not so important. We see virtually no difference in our results for the first day when we treat the lines as fully absorptive, or as totally scattering dominated. The NLTE effects set in later, and are very important after a couple of weeks (cf. Baron et al. 1996). We are therefore confident that our results for this epoch are more accurate than Höflich’s (1991). We note that even Höflich’s $`V`$-flux is lower than the observed one after the new reductions by West & McNaught (1992). A possible explanation for the discrepancy between models and observations is that the stars used to calibrate the early plates of the supernova by West & McNaught (1992) are too cool for an object with a color temperature of $`T_\mathrm{c}\sim 10^5`$ K. (We find at $`t=0.128`$ day that $`T_\mathrm{c}=1.14\times 10^5,1.05\times 10^5,9.8\times 10^4`$ K for the runs 14E0.7, 14E1, and 14E1.3, respectively.) The temperature at $`t=0.128`$ days decreases with increasing explosion energy because of the earlier emergence of the shock and the faster adiabatic cooling. It should be emphasized that our models are much hotter than equilibrium diffusion models (e.g., Arnett 1988; Woosley 1988; Utrobin 1993), and a comparison between observations and our results is therefore more sensitive to calibration errors than are equilibrium diffusion models. As we will see in §6.1, our models fit the early IUE observations well, and since these observations are less likely to suffer from the same error, we cannot exclude calibration errors as the cause of the mismatch in $`V`$. This would mean that the early true $`V`$ flux was lower than hitherto believed. We point out that we have changed various parameters in our models to try to make our $`V`$ flux increase faster and thereby fit the observations better. These experiments included enhancing the iron abundance, and varying the explosion energy and presupernova model within the limits allowed by the global light curve. However, none of these attempts reduced the discrepancy (see, e.g., the results for two different initial radii). ## 6 Before and After the Peak of the Light Curve After the minimum around day 10, the observed bolometric light curve showed an almost exponential increase up to day $`\sim `$60, and subsequently formed a plateau-like broad peak around day 100. After a relatively rapid drop, the luminosity then declined slowly between $`t`$ = 120–400 days at the rate of <sup>56</sup>Co decay. The energy source responsible for the broad peak of the light curve, and for the tail, is therefore without doubt the radioactive decay of <sup>56</sup>Ni $`\to `$ <sup>56</sup>Co $`\to `$ <sup>56</sup>Fe. The total mass of initial <sup>56</sup>Ni in our models is $`M_{\mathrm{Ni}}\approx 0.078M_{}`$, which is the same mass as in the models of Shigeyama & Nomoto (1990). The theoretical bolometric light curves for the models 14E1, 14E1M, and 14E1U, which have different extents of mixing, clearly show that the shape of the modeled light curve is strongly dependent on the distribution of hydrogen and <sup>56</sup>Ni in the ejecta, i.e., the amount of mixing that has occurred.
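For orientation, the instantaneous decay power for full deposition can be written with the standard coefficients of Nadyozhin (1994); the sketch below ignores gamma-ray escape and photon diffusion, so it represents the energy input rather than the emergent light curve:

```python
import numpy as np

def decay_power(t_days, M_Ni=0.078):
    """Instantaneous 56Ni -> 56Co -> 56Fe decay power in erg/s.

    Coefficients per solar mass of 56Ni from Nadyozhin (1994); full
    deposition of the decay energy is assumed (no gamma-ray escape).
    """
    return M_Ni * (6.45e43 * np.exp(-t_days / 8.8)
                   + 1.45e43 * np.exp(-t_days / 111.3))

print(decay_power(100.0))   # ~5e41 erg/s, the scale of the broad peak
```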
For the model with standard mixing (14E1, with <sup>56</sup>Ni mixed out to $`v\sim `$ 4000 km s<sup>-1</sup>), there is significant heating of the outer layers due to radioactivity. The heating becomes noticeable in the light curve already at $`\sim `$10 days, and then forms a smooth increase in the optical light curve up to the peak, as observed. For the unmixed model 14E1U, the shape of the bolometric light curve starts to differ from that of 14E1 already at $`\sim `$ 10 days. For the unmixed case, the increase in the luminosity due to radioactive heating is delayed until $`t=35`$ days, which causes a dip to appear in the light curve around day 30, and makes the light curve in the subsequent phase rise faster than in the mixed model. These properties are clearly incompatible with the observations. For the case with intermediate mixing (14E1M), <sup>56</sup>Ni is mixed out to $`v\sim `$ 2500 km s<sup>-1</sup>, while hydrogen is mixed as in the mixed model 14E1. Also in this model the appearance of radioactive heating occurs too late to be compatible with observations. This suggests that mixing of <sup>56</sup>Ni out to $`v\sim `$ 4000 km s<sup>-1</sup> is needed. This conclusion is independent of the radiation transfer scheme used (Nomoto, Shigeyama, & Hashimoto 1987; Woosley 1988; Shigeyama et al. 1988). The mixing out to large velocities is also supported by the redshifted feature at $`\sim `$3900 km s<sup>-1</sup> observed by Haas et al. (1990; see also Utrobin, Chugai, & Andronova 1995). We note, however, that Utrobin (1993) obtains a good bolometric light curve without <sup>56</sup>Ni mixing, and that Kozma & Fransson (1998a,b) do not need to mix nickel out to more than $`\sim `$ 2000 km s<sup>-1</sup> to model iron line profiles at late epochs. We postpone a more detailed discussion of mixing to a future paper. It is also interesting to compare the results of our non-equilibrium radiative transfer modeling with stella with the models in Figures 16–19 of Shigeyama & Nomoto (1990). One striking difference, already noted above, is that the shape of the minimum near day 10 is reproduced much better with our non-equilibrium modeling. Another difference is that the shape of the light curve around maximum (at $`\sim `$3 months) is smoother in models with realistic scattering opacity than in models with forced absorption like 14E1A, and hence also in equilibrium diffusion models, since the latter two types of model are closely related. These two models also show the same sharp postmaximum decline (cf. Utrobin 1993), which can be understood in terms of enhanced emission according to Kirchhoff’s law. We note that the rising part of the light curve is modeled better with enhanced absorption in spectral lines than with scattering lines (both for the bolometric luminosity
and for the $`UBV`$ colors). This can perhaps be explained by NLTE effects. In particular, the effect of fluorescence could be important (see Baron et al. 1996; Li, McCray, & Sunyaev 1993; Li & McCray 1996, and references therein). This cannot be studied directly by stella, but, as found by Baron et al. (1996), forced absorption in spectral lines can reproduce some properties of the fluorescence, since fluorescence is a form of thermalization; spectra of LTE models with absorptive lines are very similar to full NLTE spectra, while LTE models with scattering lines are far from reality (see also Eastman 1997; Blinnikov et al. 1998). This hints at why the rising part is modeled well when we apply forced absorption in lines. Another cause for the deviation of the rising part of the light curve from what was observed could be that the distribution of hydrogen is different from that in our model. This effect was investigated by Utrobin (1993). The hydrogen distribution affects the light curve because it determines how the hydrogen recombination front propagates into the ejecta (e.g., Nadyozhin 1994; Shigeyama & Nomoto 1990). Because electron scattering is the main source of opacity, the opacity decreases sharply in the outward direction at the recombination front. This causes the photosphere to become associated with the recombination front. When the ejecta pass through this front, their temperature quickly decreases to $`\sim `$5500 K; this is seen as a change in the temperature profile. One can locate the photosphere at the point where the radiation temperature starts to deviate appreciably from the material temperature. We note that the recombination front is much broader than that in Shigeyama & Nomoto (1990) because of the large contribution of line opacity to the total opacity in our calculations. The effects of non-equilibrium transport are also important for the width of the recombination front. The photosphere propagates inward in mass, while the material expands outward. For a certain period, the recombination front is almost stationary in radius. If $`T_{\mathrm{eff}}`$ were constant, this would result in an almost constant bolometric luminosity, i.e., in a perfect plateau of the light curve, where the duration of the plateau stage depends on how deep into the star hydrogen has been mixed. For a deeper mixing of hydrogen, the plateau lasts longer. When the photosphere enters layers which lack sufficient amounts of hydrogen, the hydrogen recombination front disappears and the plateau phase is terminated.
This is the case for typical Type II-P supernovae, where radioactive heating does not show up until the very end of the plateau stage (Eastman et al. 1994). SN 1987A is quite different in this respect, since radioactivity is important also at early epochs; the diffusion flux from the radioactive energy release starts to dominate over the diffusion flux from the recombination of hydrogen already at $`\sim `$6 weeks after the explosion (cf. Fig. 12 in Shigeyama & Nomoto 1990, and Fig. 11 in Utrobin 1993). The relation between the duration of the plateau phase and the depth of the hydrogen layer can be given more quantitatively. Suppose that hydrogen is mixed down to a shell where the expansion velocity of the hydrogen-rich layer is $`v_\mathrm{H}`$. Then this velocity is related to the observed quantities for the plateau phase as $$v_\mathrm{H}\approx 1300\left(\frac{L_{\mathrm{pl}}}{8.5\times 10^{41}\mathrm{ergs}\mathrm{s}^{-1}}\right)^{1/2}\left(\frac{t_{\mathrm{pl}}}{100\mathrm{d}}\right)^{-1}\mathrm{km}\mathrm{s}^{-1}$$ (2) where $`t_{\mathrm{pl}}`$ is the time at the end of the plateau, $`L_{\mathrm{pl}}=4\pi \sigma T_{\mathrm{eff}}^4R^2`$ is the luminosity at $`t\approx t_{\mathrm{pl}}`$, and $`T_{\mathrm{eff}}\approx `$ 5500 K because of the association of the photosphere with the hydrogen recombination front. The time $`t`$ is for the freely expanding ejecta, $`t=R/v`$. For example, if we take $`L_{\mathrm{pl}}=2\times 10^{41}`$ ergs s<sup>-1</sup> and $`t_{\mathrm{pl}}=30`$ days, as in the unmixed model 14E1U, we find from Eq. (2) $`v_\mathrm{H}\approx `$ 2000 km s<sup>-1</sup>, in good agreement with the unmixed composition described in §3. In the standard model, 14E1, hydrogen is mixed down to $`M_\mathrm{r}\sim 2M_{}`$, i.e., only $`0.4M_{}`$ outside $`M_\mathrm{c}`$. The expansion velocity at that mass coordinate is only $`\sim `$ 1000 km s<sup>-1</sup>. If we adopt $`t_{\mathrm{pl}}\approx `$ 100 days, Eq. (2) gives $`v_\mathrm{H}\approx `$ 1300 km s<sup>-1</sup>, as is required from the observations. Equation (2) thus appears to give a reasonable estimate of $`v_\mathrm{H}`$ when we put $`t_{\mathrm{pl}}`$ equal to the time when the second maximum ends. However, we caution that this result should not be overinterpreted. Equation (2) gives the photospheric velocity as long as $`T_{\mathrm{eff}}`$ is $`\approx `$5500 K, which is roughly the case for SN 1987A also during the second maximum. But the formula also assumes that the temperature is governed by the presence of hydrogen. This is not a unique statement, as the complicated thermal balance may settle around this temperature also without the presence of hydrogen. Even if hydrogen is mixed far into the core, the end of the second maximum is likely to give only limited information about the minimum velocity of hydrogen.
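Equation (2) is straightforward to evaluate for the cases just discussed (a trivial sketch):

```python
def v_H_kms(L_pl, t_pl_days):
    """Eq. (2): expansion velocity of the deepest hydrogen-rich layer, km/s."""
    return 1300.0 * (L_pl / 8.5e41) ** 0.5 * (100.0 / t_pl_days)

print(v_H_kms(2.0e41, 30.0))     # ~2100 km/s, the unmixed model 14E1U
print(v_H_kms(8.5e41, 100.0))    # 1300 km/s, the standard mixed case
```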
### 6.1 Broadband fluxes
Ideally, both spectra and $`UBV`$ colors should be obtained by full NLTE modeling (see arguments put forward by Eastman 1997). For SN 1987A spectral NLTE modeling has been done by several groups (see, e.g., Schmutz et al. (1990); Höflich (1990); Takeda (1991); Mazzali, Lucy, & Butler (1992); Duschinger et al. (1995); Mazzali & Chugai (1995); Höflich & Wheeler (1999) and references therein), while NLTE modeling of colors is given in the literature only for the first days after the explosion (Eastman & Kirshner (1989); Hauschildt & Ensman (1994)). Our approach cannot add to this NLTE modeling, but as discussed by, e.g., Höflich (1995, see also references therein), LTE modeling is useful even for such rapidly expanding objects as Type Ia SNe, and should therefore also give some insight into the physical conditions of SN 1987A. Inspired by this, we compare our LTE results with observations in Fig. [ref]. The $`B`$ and $`V`$ light curves are in surprisingly good agreement with observations. Likewise, for the $`U`$ band the agreement is satisfactory for the first days, though at later epochs the modeled flux is too high. Nevertheless, the shape of the light curve qualitatively reproduces the observations. There is also a qualitative agreement between our calculated fluxes and the observed fluxes in the IUE bands (Pun et al. (1995)) (see Figs. [refs]). In particular, the agreement is good for the first 10 days for the SWP and the two LWP bands, while the modeled flux overshoots early for the LWP 3000 – 3300 Å band, as it does for the modeled $`U`$ flux. Later, the modeled flux in the bands with the shortest wavelengths undershoots, while for the range 2500 – 3000 Å the agreement is still fairly good. The modeled flux in the band with the longest wavelengths continues to be high for the first 100 days (Fig. [ref]). The disagreement between modeled and observed UV fluxes after about 100 days is not surprising, because the LTE modeling at that epoch becomes quite unrealistic when there is no longer a true photosphere. For earlier epochs, there can be several causes for deviations. The early drop of the modeled fluxes in the shortest IUE bands could signal that the photosphere is somewhat hotter than in the 14E1 model; the flux here falls into the Wien part of the spectrum and is exponentially sensitive to the temperature. It is harder to explain why the observed UV flux for $`\lambda >3000`$ Å and the flux in the $`U`$ band fall below what we predict. Within the LTE approach this could mean that the expansion opacity in the Eastman-Pinto (1993) approximation is incomplete. The line list of Eastman & Pinto includes about $`10^5`$ lines, but they and Baron et al. (1996) have pointed out that it is also necessary to include millions of weaker lines (see also Höflich 1995), mostly of iron. Should the Eastman-Pinto list be sufficient, then one might think of enhancing the abundance of iron group elements in the outer layers of the supernova to increase the opacity. Our experiments with such an enhancement show that we can bring the $`U`$ flux into much better agreement with observations. There could also be other causes for our too strong UV flux, but it seems reasonable to assume that line opacities are somehow involved. This is highlighted by the good agreement between modeled and IUE fluxes in the 2500 – 3000 Å range, where there are relatively few spectral lines.
This could perhaps indicate that the envelope is contaminated with heavy elements, which could give a higher opacity in the UV. In this context we note that Pun et al. (1995) do not cite Wagoner, Perez, & Vasu (1991) correctly when they state that “the expansion opacities in the wavelength region 1000–4000 Å increase by a factor of more than 100 as the temperature of the atmosphere drops from 12,000 to 5000 K”. We point out that it is not the expansion opacity in Wagoner et al. (1991; see also Eastman & Pinto (1993)) which increases by this number. The correct statement is that the ratio of expansion opacity to electron scattering increases by a factor of about 100. It does so because the electron scattering opacity drops drastically due to recombination as the temperature is lowered. We have not tried to include millions of weak lines in our models to check whether this can bring the UV observations and our LTE models into better agreement. If such an experiment were to fail, and the metal content of the envelope is not unusually high (cf. above), then one has to consider NLTE effects already a few days after the explosion. The main NLTE effect here could be the excitation of hydrogen from its second principal level, $`n=2`$, perhaps creating an optically thick Balmer continuum. From NLTE atmospheric calculations (see, e.g., Schmutz et al. (1990); Takeda (1991); Duschinger et al. (1995)) we know that early overpopulation of $`n=2`$ is present in SN 1987A, though it is not sufficient to explain the observed absorption in the Balmer range, unless there is a direct excitation due to radioactivity. While it is certainly important to include nonthermal excitation of hydrogen in late spectra (e.g., Xu et al. (1992); Kozma & Fransson 1998b), the same effect operating at early times could be one more hint of extremely efficient outward mixing of radioactive material into the hydrogen-rich envelope. The possibility that hydrogen could be excited as a result of circumstellar interaction seems much more unlikely because of the very low density inferred for the circumstellar gas (§4; Chevalier 1999; Lundqvist 1999).
### 6.2 Parameters of the Photosphere
We present in Figs. [refs] the comparison of our numerical results with the “photospheric” parameters found by observers. We put this in quotes since what observers give are not really the parameters of the photosphere, but the best blackbody fit temperature $`T_{\mathrm{obs}}`$ (which is higher than $`T_{\mathrm{eff}}`$), and the radius, $`R_{\mathrm{obs}}`$, found from $`L=4\pi \sigma T_{\mathrm{obs}}^4R_{\mathrm{obs}}^2`$. We emphasize that $`R_{\mathrm{obs}}`$ is substantially smaller than the radius of the true photosphere, $`R_{\mathrm{ph}}`$, especially at the earliest stages.
## 7 Dependence on Model Parameters
### 7.1 Explosion Energy
In the above discussion, the light curve has mainly been used to probe the internal abundance distribution. Fig. [ref] shows how the bolometric light curve depends on $`E`$. For a given ejecta mass and using the standard mixing, we can find constraints on $`E`$ from the light curve (Shigeyama et al. 1987, 1988; Nomoto et al. 1987, 1994; Woosley 1988; Woosley et al. 1988; Arnett & Fu 1989; Imshennik & Popov (1992)).
First, the luminosity near the minimum around day 10 (which corresponds to the early short plateau in the $`V`$ curve) is almost proportional to $`E`$ (Litvinova & Nadezhin 1990, Popov 1993), thus providing an important constraint on $`E`$. Second, the time of the peak, $`t_{\mathrm{peak}}\approx 3`$ months, depends on $`E`$. For larger $`E`$, i.e., faster expansion, the rise starts earlier because of the earlier appearance of radioactive heating, and the decline after the peak is earlier because the larger velocity of the ejecta makes the diffusion time-scale shorter. The analytical treatment of this epoch is given in detail by Imshennik & Popov (1992). We caution that it is not just $`E`$ that determines $`t_{\mathrm{peak}}`$ (for a given hydrogen distribution), but the combination $`E/M_{\mathrm{env}}`$; $`t_{\mathrm{peak}}`$ is therefore mainly determined by $`E/M_{\mathrm{env}}`$ and the hydrogen distribution. Fig. [ref] shows that both the first and the second parts of the light curve are reproduced well by 14E1 (see also Fig. [ref]). 14E1.3 is too bright near the minimum, while 14E0.7 is too dim near the minimum and evolves too slowly. Compared with the flux-limited diffusion model (Shigeyama & Nomoto 1990), 14E1 evolves similarly, but the more accurate radiative transfer scheme brings it into much better agreement with observations both near the minimum and near the peak. For the progenitor we have used in Fig. [ref] (i.e., the model with mixing and $`M_{\mathrm{env}}=10.3`$ $`M_{\odot }`$), the explosion energy that best fits the observations is close to $`1.1\times 10^{51}`$ ergs. Considering the uncertainties of the progenitor model in terms of mixing and envelope mass, we obtain best fits to the bolometric light curve for explosion energies in the range (0.85–1.35)$`\times 10^{51}`$ ergs. As noted in Figure 9, the “true” bolometric luminosity is likely to lie between the SAAO and CTIO/ESO results used in our fits (see Suntzeff & Bouchet 1990). This allows for a $`\pm 10\%`$ span in luminosity. However, a larger error in luminosity (about 15%, Lundqvist et al. 1999b) is due to the still prevailing uncertainty in the distance modulus of the supernova (e.g., Walker 1999), which gives a combined error of approximately $`\pm 30\%`$ in absolute luminosity. The explosion energy should therefore be within the range (0.8–1.4)$`\times 10^{51}`$ ergs. We note that $`E`$ is very similar to the values in the analytical diffusion models of Arnett & Fu (1989) and Imshennik & Popov (1992). (The initial theory of diffusion was developed by Arnett for Type Ia supernovae.) All these models use an eigenvalue formulation of the problem, which is not strictly correct (Blinnikov & Popov 1993). However, the more correct, though more complicated, moving-boundary formulation produces results which agree rather well with those of Imshennik & Popov (1992; see Popov 1995). We therefore support the use of the results of Imshennik & Popov (1992) to make reliable estimates of the supernova parameters from the light near maximum. This rather limited range of energies is important for calculations of the ionization of the circumstellar gas (e.g., Lundqvist & Fransson 1996; Lundqvist 1999).
Until now these calculations have been based on the models 500full1 and 500full2 of Ensman & Burrows (1992). Qualitatively, the 500full1 model is rather similar to the models in our preferred energy range. We will discuss this in detail in Lundqvist et al. (1999a), but from the analysis in Lundqvist & Fransson (1996) we note immediately that the ionization of the outer rings is particularly sensitive to the spectrum of the burst.
### 7.2 Radius of the Progenitor
The above analysis has been based on a progenitor model with $`R_0=`$ 48.5 $`R_{\odot }`$. The uncertainty in the luminosity of the progenitor may imply that $`R_0`$ could be uncertain by about 20% (Woosley (1988); Saio et al. 1988b). Thus we have calculated the light curve models 14E1.26R, 14E1.34R, and 14E1.45R for $`R_0=`$ 40 $`R_{\odot }`$, and one model, 14E1.4R6, for $`R_0=`$ 58 $`R_{\odot }`$ (Table 1 and Figs. [refs]). The early light curve for the first 2 days is not so different from the case of the standard $`R_0=`$ 48.5 $`R_{\odot }`$, but $`t_{\mathrm{prop}}`$ scales as $`R_0/E^{1/2}`$. However, the light curve near the minimum (the plateau in $`V`$) is dimmer for $`R_0=`$ 40 $`R_{\odot }`$ because the luminosity at that epoch is approximately proportional to $`R_0`$. As a result, for $`R_0=`$ 40 $`R_{\odot }`$, we need an explosion energy larger than $`1.2\times 10^{51}`$ ergs to bring the light curve into agreement with the observations near the minimum (Fig. [ref]). But then the peak is too early. A larger radius, $`R_0=`$ 58 $`R_{\odot }`$, shows the opposite trend. In this case, one needs a lower energy for the brightness at the minimum, but then the peak is too late.
### 7.3 Hydrogen mass
We have also made some runs for models where the hydrogen abundance in the outer layers was artificially raised to 0.7 (as is also assumed in, e.g., the non-evolutionary models by Utrobin (1993)). For example, the model 14E1.25H (Table 1) has the same composition of metals as 14E1, but H is enhanced at the expense of He. The total mass of H is here $`M_\mathrm{H}\approx 7M_{\odot }`$, while it was $`M_\mathrm{H}\approx 5.5M_{\odot }`$ in the standard runs (14E1, 14E1.3 etc.). The hydrogen-rich model 14E1.25H evolves similarly to the standard runs 14E1.3 and 14E1.21 (Fig. [ref]). This explains the success of such models, as demonstrated by Utrobin (1993), but it is hard to justify a solar H abundance for SN 1987A from the evolutionary point of view, as well as in the context of the abundances in the inner circumstellar ring (e.g., Lundqvist & Fransson 1996).
## 8 Conclusions
In this paper we have described an extensive set of full radiation hydrodynamics calculations aimed at improving the modeling of the first few months of the light curve of SN 1987A. We have shown that the improved models can reproduce the light curve, and we suggest that proper handling of the radiative transfer is indeed decisive for the success of model fits. The full multi-group radiation hydrodynamic modeling is more reliable mainly because the effects of scattering are treated self-consistently. Our findings are that:
1. the color temperatures and broad band photometry (and full continuum spectra) are predicted correctly for the first days of the supernova.
2. the shape of the modeled light curve, especially near the bolometric minimum at about 10 days and during the broad peak at about 100 days, is much better in the multi-group approach than in the equilibrium diffusion one.
3. the density profiles of the supernova at various epochs are smoother.
Our code assumes LTE, but we have bracketed NLTE effects by extreme assumptions in the treatment of spectral lines (as either being scattering dominated, or fully absorptive). We find that the emission at shock break-out is not sensitive to those assumptions, which gives us confidence in our results. This is supported by very good agreement with IUE observations for the first days. For the first 100 days, the best agreement is obtained when NLTE effects are mimicked by treating the line opacity as absorptive, following the prescription of Baron et al. (1996). Looking at our results in greater detail, we find that the color temperature around shock breakout exceeds $`10^6`$ K, which is higher than the values obtained using a more approximate approach (e.g., Shigeyama & Nomoto 1990), but not so different from the model 500full1 of Ensman & Burrows (1992). The large difference between color and effective temperatures persists for the first hours. This implies that the rise in the $`V`$ luminosity is slower than in equilibrium diffusion models (which was first noticed by Höflich 1991). Because of the overwhelming dominance of electron scattering during the first hours, the rise in the $`V`$ band is too slow in all of our models. This raises the question of the calibration of the first photometry data of SN 1987A (McNaught & Zoltowski 1987; West & McNaught 1992). Our light curve modeling indicates that mixing of <sup>56</sup>Ni up to $`v\approx 3000`$–4000 km s<sup>-1</sup> could be needed, but our analysis is unlikely to supersede those based on spectral analysis (e.g., Utrobin, Chugai, & Andronova (1995), Kozma & Fransson 1998a,b), modeling of early X-ray emission (e.g., Kumagai et al. 1989), or direct observations of infrared lines (Erickson et al. 1988). We have improved the constraints on $`E`$. The earlier flux-limited diffusion calculations (Shigeyama & Nomoto 1990) provided a constraint on $`E`$ from both the pre-peak light curve and the plateau-like maximum light, concluding $`E=(1.1\pm 0.4)\times 10^{51}`$ ergs. We find from our more detailed analysis that the best agreement with the observations is obtained for $`E=(1.1\pm 0.3)\times 10^{51}`$ ergs. To arrive at this result we have assumed that the most likely range of $`M_{\mathrm{env}}`$ is $`M_{\mathrm{env}}`$ = 7–10 $`M_{\odot }`$ (Saio et al. 1988b), in order for models of the presupernova evolution to be consistent with the enhancement of N/C and N/O in the circumstellar matter (Lundqvist & Fransson 1996). Knowing the energy with this accuracy, as well as having a detailed spectroscopic evolution from our models, we can constrain the ionization of the circumstellar gas much better than before. This will be discussed in Lundqvist et al. (1999a). We are grateful to Ron Eastman, Stan Woosley, Wolfgang Hillebrandt, Vladimir Imshennik, Dmitriy Nadyozhin, Nikolai Chugai, Cecilia Kozma, Alexandra Kozyreva, Jason Pun and Victor Utrobin for discussions, and also to Ron Eastman, Vlad Popolitov and Elena Sorokina for allowing us to use their computer codes as parts of ours. Preliminary results of this work were reported at the “CTIO/ESO/LCO Workshop on SN 1987A: Ten years after” in the contributions by Lundqvist & Sonneborn (1999) and Nomoto, Blinnikov, & Iwamoto (1999).
We would like to thank the participants of this workshop for stimulating discussions. This work was supported in Russia by grant 97-370 from the International Science & Technology Center, and by Russian Basic Research Foundation grants RBRF 96-02-19756 and RBRF 96-02-17604. We are also grateful for grants from The Royal Swedish Academy of Sciences and The Wenner-Gren Center Foundation for Scientific Research. P.L. receives further support from The Swedish Natural Science Research Council and The Swedish Board of Space Research. The project has in part also been supported by Grants-in-Aid for Scientific Research (05242102, 06233101) and COE research (07CE2002) of the Ministry of Education, Science, Culture, and Sports of Japan.
# Universal long-time relaxation on lattices of classical spins: Markovian behavior on non-Markovian timescales
## 1 Introduction
We perform a numerical study of the long-time behavior of infinite-temperature correlation functions defined on an infinite lattice of classical spins as: $$G(t)=\left\langle S_k^x(t)\sum _n\mathrm{cos}(𝒒\cdot 𝒓_{kn})S_n^x(0)\right\rangle ,$$ (1) where the angle brackets denote the infinite-temperature ensemble average; $`S_k^\mu `$ is the $`\mu `$th ($`x`$, $`y`$ or $`z`$) spin component on the $`k`$th lattice site; $`𝒓_{kn}`$ is the translation vector between the $`k`$th and the $`n`$th sites; and $`𝒒`$ is a wave vector commensurate with the lattice periodicity. We consider three types of lattices: a simple one-dimensional chain, a two-dimensional square lattice, and a three-dimensional cubic lattice. In each case, the dynamical evolution of the system is driven by the nearest-neighbor interaction represented by the Hamiltonian $$\mathcal{H}=\sum _{k,n}[J_xS_k^xS_n^x+J_yS_k^yS_n^y+J_zS_k^zS_n^z],$$ (2) where $`J_\mu `$ are coupling constants and the sum runs over pairs of nearest neighbors. With such a Hamiltonian, the timescale of the individual spin motion, referred to below as the “mean free time”, can be characterized by the time $`\tau =[\frac{1}{3}NS^2(J_x^2+J_y^2+J_z^2)]^{-1/2}`$, where $`N`$ is the number of nearest neighbors (twice the number of lattice dimensions) and $`S`$ is the spin length. In the context of inelastic neutron scattering, correlation functions (1) are called “intermediate structure factors”. If $`q=0`$, Eq. (1) can also represent the free induction decay in nuclear magnetic resonance (NMR). In this work, we provide extensive numerical evidence that the generic long-time behavior of $`G(t)`$ has one of the following two functional forms: either $$G(t)\propto e^{-\xi t},$$ (3) or $$G(t)\propto e^{-\xi t}\mathrm{cos}(\eta t+\varphi ),$$ (4) where the constants $`\xi `$ and $`\eta `$ are of the order of $`1/\tau `$. It is important to realize that, if the above functional form of the long-time behavior is indeed generic (i.e. independent of the specific details of the interaction), then this property is very likely related to the randomness generated by the spin dynamics. At the same time, the problem cannot be reduced to the Markovian paradigm of “a slow variable interacting with a fast equilibrating background” — the characteristic timescale $`\tau `$ in Eqs. (3,4) is not “slow”. It is, in fact, the fastest natural timescale of the problem. Therefore, whatever the ultimate explanation of that behavior turns out to be, it will certainly be a step beyond the standard theory of Brownian-type motion. Our interest in the long-time behavior of the correlation functions (1) was originally motivated by two isolated pieces of evidence supporting the oscillatory behavior (4) in quantum (spin 1/2) systems: (i) experiments on NMR free induction decay in CaF<sub>2</sub> and (ii) the results of numerical diagonalization of spin 1/2 chains. In both cases, quantities analogous to the one defined by Eq. (1) have been measured or computed, and the results look very similar to the plots shown in the left column of Fig. 1. We came to recognize the importance of a detailed study of the classical limit when, in an attempt to explain the long-time relaxation in quantum spin systems, we developed a theory that turned out to be simultaneously applicable to classical spins. That theory is presented in a different paper, which was written simultaneously with the present one. The present paper is mainly numerical: it is not intended to be a brief exposition of Ref.
Below, we only provide a summary of the results from Ref. The theory developed in Ref. describes long-time relaxation as correlated diffusion in finite volumes. In the classical case, those finite volumes correspond to the spherical surfaces on which the tips of the classical spin vectors move, while in the quantum case the finite volumes originate from a more sophisticated construction in Hilbert space. The overall structure of such a treatment has noticeable parallels with the theory of Pollicott-Ruelle resonances in classical chaotic systems. A definite prediction of the correlated diffusion description is that the functional form of the long-time relaxation should be given by Eqs. (3,4). The important part of the above theory is not the diffusion description itself but the reason why it is applicable, given the “non-Markovian” relaxation timescale. The theory is based on the fairly strong conjecture that, for a broad class of many-body systems, a formal extension of the Brownian-like description applies to the long-time behavior of ensemble-averaged quantities, even when the problem exhibits no separation of timescales between the slow and the fast motions. Before proceeding with the description of the simulations, it should be mentioned that, for classical spin systems at infinite temperature, the long-time behavior of the $`q`$-dependent correlation functions (1) decaying on the timescale of $`\tau `$ has never been addressed. The closest to this subject was the work of de Alcantara Bonfim and Reiter, who considered the Heisenberg spin chain and focused on the long-time behavior of correlation functions (1) with small $`q`$. Those correlation functions, however, are not typical for our purposes, because, as a consequence of total spin conservation, they decay on a timescale much longer than the characteristic timescale of one-spin motion. In that situation, the hypothesis of spin diffusion would lead to the prediction of nearly exponential decay with a decay constant proportional to $`q^2`$. The results of de Alcantara Bonfim and Reiter did not cover a range of values sufficient to confirm or rule out the exponential character of the long-time decay. However, those results (in line with others) indicated that, if the spin diffusion regime exists for classical spin chains, the approach to that regime is anomalously slow. The present work includes one example of the Heisenberg interaction, just to show that this case does not appear to be special with respect to the long-time property (3,4).
## 2 Simulations
Our computational strategy was similar to that of Müller. Namely, we did not deal with very large systems but, instead, performed an ensemble averaging over a large number of finite, but not too small, lattices having periodic boundary conditions. The finite size effects were then controlled by varying the size of the lattice. For a given lattice size, many computational runs have thus been performed. Each of them started from completely random initial conditions (corresponding to infinite temperature) and generated the evolution of the system over a time interval two orders of magnitude longer than the mean free time $`\tau `$ (see Table 1 for specific numbers). The correlation functions were then obtained by averaging the data within each run and over different runs. The following algorithm has been used to simulate the evolution of the system.
At each time step, the spins were advanced sequentially in such a way that, if the spin number $`k`$ interacted with the spin number $`n`$, and the $`k`$th spin was advanced first, then the new coordinates of the $`n`$th spin were computed from the local field created by the already advanced $`k`$th spin. The procedure for advancing a given ($`k`$th) spin to the next point of the discrete time grid consisted of two steps. Step 1: The coordinates of the $`k`$th spin were changed by $`\delta 𝑺_k`$ according to the straightforward discretization of the equations of motion, i.e. $$\delta 𝑺_k=[𝑺_k\times 𝒉_k]\delta t,$$ (5) where $`𝒉_k`$ was the local field, equal to $`\sum _n^{\mathrm{n}.\mathrm{n}.}[J_xS_n^x\widehat{𝒆}_x+J_yS_n^y\widehat{𝒆}_y+J_zS_n^z\widehat{𝒆}_z]`$ (the sum runs over the nearest neighbors of the $`k`$th spin), and $`\delta t`$ was the discretization time step. Step 2: The higher-order errors that changed the length of the spin vector were eliminated. This was done by contracting the spin component perpendicular to the local field, so that it recovered the absolute value it had before Step 1. The whole manipulation could not change the spin projection parallel to the local field and, therefore, the energy of interaction of that spin with its neighbors. Since the next spin was advanced in the newly updated local field, the energy of the whole system was conserved exactly during the entire integration. Apart from ensuring meaningful behavior of the computed trajectories, the exact conservation of energy substantially improved the convergence of the algorithm with respect to the limit $`\delta t\to 0`$. In our simulations, the discretization time steps (given in Table 1) were deemed sufficient when their further reduction appeared to have no effect on the computed correlation functions. We also checked that averaging over a larger number of much shorter runs led to results consistent with the longer runs we used. The time lengths of the runs indicated in Table 1 were chosen to optimize the resulting efficiency of averaging: runs that were too short (of the order of the time length of the computed correlation function) did not make many independent contributions to the correlation functions, while runs that were too long did not improve the quality of the averaging proportionally to their length.
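A minimal sketch of this two-step update, written by us for a 1D chain with periodic boundary conditions (the couplings, lattice size, and time step below are illustrative, not taken from Table 1), could look as follows:

```python
import numpy as np

def mean_free_time(J, N, S=1.0):
    # tau = [ (1/3) N S^2 (Jx^2 + Jy^2 + Jz^2) ]^(-1/2), as defined in Sec. 1
    return (N * S**2 * np.sum(J**2) / 3.0) ** -0.5

def advance(spins, J, dt):
    """One time step: spins advanced sequentially, each in the local field of
    its already-updated neighbors, so the total energy is conserved exactly."""
    L = len(spins)
    for k in range(L):
        h = J * (spins[(k - 1) % L] + spins[(k + 1) % L])  # local field h_k (assumed nonzero)
        s_new = spins[k] + np.cross(spins[k], h) * dt      # Step 1: dS_k = [S_k x h_k] dt
        # Step 2: restore the length of the component perpendicular to h_k;
        # the parallel component (and the interaction energy) is untouched.
        h_hat = h / np.linalg.norm(h)
        s_par = np.dot(spins[k], h_hat) * h_hat
        perp_old = spins[k] - s_par
        perp_new = s_new - np.dot(s_new, h_hat) * h_hat
        spins[k] = s_par + perp_new * np.linalg.norm(perp_old) / np.linalg.norm(perp_new)

rng = np.random.default_rng(1)
spins = rng.normal(size=(16, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)  # random infinite-T state
J = np.array([1.0, 1.0, 0.0])
dt = 0.01 * mean_free_time(J, N=2)                     # step well below tau
advance(spins, J, dt)
```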
## 3 Results
The results of our simulations for different dimensions, interaction constants, and wave numbers are presented in Fig. 1, together with the long-time theoretical fits based on either Eq. (3) or (4). Since we aimed at demonstrating the exponential character of the long-time behavior, it was natural to use a logarithmic scale for $`G(t)`$. However, because in half of the cases $`G(t)`$ was also oscillating, we chose to show the logarithmic plots for the absolute value of $`G(t)`$, which explains the cusps in the left column of plots in Fig. 1. Those cusps correspond to the points where $`G(t)`$ crosses zero. The reason that the cusp minima do not reach −∞ is that a discrete grid was used. The selection of parameters for the simulations was subject to certain practical constraints: the computed correlation functions could be considered reliable only within quite a limited range along both the $`t`$- and $`\mathrm{log}G(t)`$-axes. Therefore, we avoided correlation functions that reached too-small values too fast or, on the contrary, decayed too slowly. From our experience, the finite size effects were least pronounced for correlation functions with $`q=0`$. For this reason, $`q`$ was chosen to be zero in six of the eight examples presented in Fig. 1. Since the long-time behavior of the correlation functions was only marginally accessible with our computational resources, we present the results in substantial detail, thus making clear the uncertainties associated with insufficient ensemble averaging and finite size effects. Each of the correlation functions presented in Fig. 1 was computed four times: two statistically independent averaging results for each of two different lattice sizes. “Two statistically independent averaging results” means that, in each case, the same number of sample runs was performed but two different sets of random numbers were used for setting the initial orientations of the spins. Each frame in that figure thus contains a superposition of four plots. The time interval where these plots do not deviate from each other can be considered as representing the limit of infinite lattice size with sufficient ensemble averaging. The finite size effects are not evident in any of the examples shown in Fig. 1 — in every case, the plots representing different simulation outcomes do not deviate from each other before the statistical fluctuations for each of the two lattice sizes become apparent. Our experience indicates that further improvement in the accuracy of the computed correlation functions would simultaneously require a finer discretization, much more extensive ensemble averaging and, probably, larger system sizes, i.e. much greater computational effort. Summarizing the evidence, we observe that, with the marginal exception of Fig. 1(d), in every other case presented there is an interval, covering at least one decade of the values of $`G(t)`$, where the simulation results agree with the long-time fits (3) or (4). We would also like to point out that, while in all cases the exponential behavior becomes pronounced quite early, the long-time exponents in Figs. 1(e,g) describe almost the entire correlation functions. Thus our numerical results lend strong support to the idea expressed in Ref. that a discrete spectrum of well-separated exponents describes the long-time behavior of the correlation functions considered, with the slowest of those exponents responsible for the asymptotic functional form given by Eq. (3) or (4). It is also very likely, though slightly less certain, that in the examples presented our simulations revealed the slowest exponents. The reservation here concerns the possibility that even slower exponents could enter the long-time expansion of $`G(t)`$ with anomalously small coefficients.
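The paper does not spell out its fitting procedure, so the following is only a hedged sketch of one way the constants $`\xi `$ and $`\eta `$ of Eqs. (3,4) could be extracted from the tail of a computed correlation function:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cos(t, A, xi, eta, phi):
    # Eq. (4); Eq. (3) is the special case eta = 0, phi = 0
    return A * np.exp(-xi * t) * np.cos(eta * t + phi)

def fit_tail(t, G, t_min, tau):
    """Fit only t >= t_min, where the asymptotic form should already hold;
    xi and eta are expected to come out of the order of 1/tau."""
    m = t >= t_min
    p0 = (G[m][0], 1.0 / tau, 1.0 / tau, 0.0)   # crude initial guess
    popt, _ = curve_fit(damped_cos, t[m], G[m], p0=p0)
    return popt                                  # A, xi, eta, phi
```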
## 4 Conclusions
In conclusion, we have presented numerical evidence that the long-time behavior of the correlation functions considered is exponential, with or without an oscillatory component. Leaving the details to Ref., here we just mention that our best hope for a theoretical explanation of that behavior is associated with the strong chaotic properties of the spin dynamics. Those properties are likely to be quite generic, i.e. present in other systems.
## 5 Acknowledgements
The author is grateful to A. J. Leggett for numerous discussions, and to C. M. Elliott and R. Ramazashvili for helpful comments on the manuscript. This work has been supported in part by the John D. and Catherine T. MacArthur Foundation at the University of Illinois, by the Drickamer Endowment Fund at the University of Illinois, and by the Foundation for Fundamental Research on Matter (FOM), which is sponsored by the Netherlands Organization for the Advancement of Pure Research (NWO). Note added in proof: After submitting this paper, the author became aware of closely related work by T. Prosen, in which the decay of correlation functions defined on lattices of quantum spins was interpreted in terms of Pollicott-Ruelle resonances. TABLE CAPTIONS: Table 1: Simulation parameters and the long-time fits corresponding to the plots presented in Fig. 1. The numbers in the right four columns characterize each of the four data sets superimposed in the corresponding frame. FIGURE CAPTIONS: Figure 1: Correlation functions $`G(t)`$ of the form (1) for the 1D chain, the 2D square lattice, and the 3D cubic lattice. The interaction coefficients and the wave numbers (in units of inverse lattice spacing) are indicated above the plots. The main frame of each figure shows the logarithmic scale of the absolute value of $`G(t)/G(0)`$, while the inset frame shows the direct plots of $`G(t)/G(0)`$ (with neither the logarithm nor the absolute value taken). Within each frame, the simulation results are presented by an almost indistinguishable superposition of data for two lattice sizes, and each size is represented by two statistically independent averaging results; hence, two solid lines for the larger size (Size 1) and two dash-dotted lines for the smaller size (Size 2). The spread of the four lines indicates the computational uncertainty — it becomes visible only in the lower right corner of each frame. The dashed lines in each figure are the long-time theoretical fits of the form (3) or (4). The numbers relevant to each data set are given in Table 1.
# The zoo of dwarf novae: illumination, evaporation and disc radius variation
## 1 Introduction
Dwarf novae (DN) are cataclysmic variable binary systems which, every few weeks, exhibit 4 – 6 mag outbursts lasting a few days (see e.g. Warner 1995a). In several subclasses of DN both the outburst durations and recurrence times can be very different from the values quoted above. It is generally accepted that DN outbursts are due to a “thermal-viscous” instability. This instability occurs in the accretion disc, in which the viscosity is given by the $`\alpha `$-prescription (Shakura & Sunyaev ss73 (1973)), at temperatures close to 8000 K. Hydrogen is then partially ionized and opacities are a steep function of temperature (see Cannizzo c93 (1993) for a review and Hameury et al. hmdlh98 (1998) for the most recent version of the model). Modeling dwarf nova lightcurves requires a varying Shakura-Sunyaev parameter $`\alpha `$ (Smak s84 (1984)): it must be of the order of 0.1 – 0.2 (0.2 according to Smak 1999b) in outburst and of the order of 0.01 in quiescence, when the temperature is below the hydrogen ionization temperature. One could therefore expect that the disc instability model (DIM) might offer useful constraints on the mechanisms which generate accretion disc viscosity. This assumes, of course, a successful application of the model to the observed DN outburst cycles. However, despite its success in explaining the overall characteristics of DNs, the DIM in its standard version faces several serious difficulties when one tries to account for the detailed properties of dwarf nova outbursts. Some of these difficulties are the result of an incomplete version of the DIM. For example, it was believed that a truncation of the inner parts of the disc is necessary to explain the long delay between the rise of optical light and that of UV and EUV in systems such as SS Cyg. As shown by Smak (s98 (1998)), however, when correct outer boundary conditions are assumed, the standard DIM reproduces the observed delays. On the other hand, observed quiescent X-ray fluxes far exceed the predictions of the model and seem to require an inner ‘hole’ in the disk. Such a hole can either be due to evaporation of the disc close to the white dwarf (Meyer & Meyer-Hofmeister mm94 (1994)), or to the presence of a magnetic field strong enough to disrupt the disc. In addition, systems such as WZ Sge, which have long recurrence times and large-amplitude, long outbursts, require very low values of $`\alpha `$ ($`\alpha <10^{-4}`$) if interpreted in the framework of the standard DIM (Smak s93 (1993), Osaki 1995a, Meyer-Hofmeister et al. mml98 (1998)). These values, much lower than those of other DNs at similar orbital periods, are, however, left unexplained. On the other hand, WZ Sge systems can be explained with standard values of $`\alpha `$, provided that the disc is truncated as in other systems so that it is either stable or marginally unstable (Lasota et al. lhh95 (1995), Warner et al. wlt96 (1996)) and the mass transfer from the secondary is significantly increased during the outburst under the influence of illumination by radiation from the accreting matter (Hameury et al. hlh97 (1997)). SU UMa systems are a subclass of dwarf novae which occasionally show long outbursts during which a lightcurve modulation (superhump) is observed at a period slightly longer than the orbital period; these superoutbursts are usually separated by several normal outbursts.
The superhump is due to a 3:1 resonance in the disc which causes the disc to become eccentric and to precess (Whitehurst w88 (1988)). Osaki (o89 (1989)) proposed that the related tidal instability is also responsible for the long duration and large amplitude of superoutbursts (see Osaki o96 (1996) for a review of the thermal-tidal instability model). In his model, the tidal torques which remove angular momentum from the outer parts of the disc are increased by approximately an order of magnitude when the disc reaches the 3:1 resonance radius (typically $`0.46a`$, where $`a`$ is the orbital separation) until the disc outer radius has shrunk to typically $`0.35a`$, i.e. by about 30%. This model accounts for many of the properties of SU UMa systems; it has, however, difficulties in explaining systems with very short superoutburst cycles such as RZ LMi, for which one must assume that the tidal instability stops when the disc has shrunk by less than 10% (Osaki 1995b), i.e. by much less than assumed in the standard case. Finally, it must be noted that a number of systems exhibit very bizarre lightcurves: we have already mentioned the case of RZ LMi, which has similarities with systems such as ER UMa, V1159 Ori and DI UMa. The December 1996 superoutburst of EG Cnc, which was followed by six closely spaced normal outbursts, has not been reproduced by simulations, except by Osaki (o98 (1998)), who assumed that the viscosity parameter $`\alpha _\mathrm{c}`$ in the cold state was increased to 0.1, almost the value in the hot state, for 70 days after the superoutburst, and then returned to its quiescent value $`\alpha _\mathrm{c}=0.001`$. It is finally worth mentioning that the prototypical classical dwarf nova U Gem exhibited an unusually long outburst in 1985, lasting 45 days, with a shape similar to superoutbursts, but without superhumps (Mattei et al. m87 (1987)). Since in this system the radius of the 3:1 resonance is larger than the size of the primary’s Roche lobe, one can conclude that while superhumps can be attributed to the 3:1 resonance, the tidal instability is obviously not the sole cause of very long outbursts. The inability of numerical models to reproduce the large variety of observed light curves may indicate that additional physical effects should be added to the DIM. One such effect is the tidal instability. Another important class of effects is the illumination of the disc and the secondary star. These effects are usually not included in simulations, and it was suggested by Warner (1995c) that irradiation of the secondary star, which gives rise to high and low states of mass transfer, and of the inner disc, which drastically affects the disc instability, accounts for the light curves of VY Scl stars.
(Long outbursts would be due to important mass transfer enhancements, making the mass transfer rate larger than the critical value for stability.) If this is the case, the ‘pure’ DIM would find no application in the real world. It also appears that illumination of the disc itself by the hot white dwarf has strong effects on the stability properties of the disc as soon as the white dwarf temperature exceeds 15,000 K (King k97 (1997), Hameury et al. hld99 (1999)). In this paper, we investigate the influence of the combined effects of illumination of the disc, of the secondary star, and of evaporation of the inner parts of the disc on the predictions of the DIM in which standard values of viscosity are assumed. Our free parameters are the white dwarf mass, the quiescent white dwarf temperature, the mass transfer rate in the absence of illumination, and two parameters describing in a crude manner evaporation effects and the influence of illumination on mass transfer from the secondary. We show that a large variety of light curves are predicted by the models, many of which have an observational counterpart. In section 2, we show that the long 1985 outburst of U Gem requires enhanced mass transfer during the outburst. In section 3, we describe the model and our assumptions; in section 4, we give our results, compare them with lightcurves of observed systems, and discuss briefly possible extensions of this work.
## 2 The long 1985 outburst of U Gem
U Gem is a prototypical dwarf nova that undergoes outbursts lasting 7–14 days, with an average recurrence time of 120 days (Szkody & Mattei sm84 (1984)). The orbital period is 4.25 hr, and the primary and secondary masses are respectively 1.1 and 0.5 $`M_{\odot }`$ (Ritter & Kolb rk98 (1998)); for such parameters, the outer disc radius, defined as the radius of the last non-intersecting orbits, is $`4.15\times 10^{10}`$ cm. The 1985 outburst that lasted for 45 days is therefore exceptional; long outbursts are observed in other systems (SS Cyg for example), and are a natural outcome of models in which the outer disc radius is kept constant (see e.g. Hameury et al. hmdlh98 (1998)), which may happen when the outer disc radius reaches the tidal truncation radius. However, one does not normally obtain extremely long outbursts; the 1985 outburst of U Gem was in fact so long that more mass was accreted during this outburst than was contained in the pre-outburst disc. The maximum mass $`M_{\mathrm{d},\mathrm{max}}`$ of a quiescent disc is the integral of the maximum surface density $`\mathrm{\Sigma }_{\mathrm{max}}`$ on the cool branch; using the fits given by Hameury et al. (hmdlh98 (1998)), one gets $$M_{\mathrm{d},\mathrm{max}}=2\times 10^{20}\alpha _\mathrm{c}^{-0.83}M_1^{-0.38}\left(\frac{r_{\mathrm{out}}}{10^{10}\mathrm{cm}}\right)^{3.14}\mathrm{g}$$ (1) where $`\alpha _\mathrm{c}`$ is the Shakura-Sunyaev viscosity parameter on the cool branch, $`M_1`$ is the primary mass in solar units, and $`r_{\mathrm{out}}`$ is the outer disc radius. On the other hand, during a long outburst the whole disc is entirely on the hot branch, and the local mass transfer rate in the outer regions of the disc must be large enough to prevent the formation of a cooling wave; using again the analytical fits of Hameury et al. (hmdlh98 (1998)), this gives $$\dot{M}_{\mathrm{out}}>8\times 10^{15}M_1^{-0.89}\left(\frac{r_{\mathrm{out}}}{10^{10}\mathrm{cm}}\right)^{2.67}\mathrm{g}\mathrm{s}^{-1}$$ (2) Here, $`\dot{M}_{\mathrm{out}}`$ is the local mass transfer rate in the outer parts of the disc, which is close to the mass accretion rate onto the white dwarf if the whole disc sits for a long time on the hot branch. The maximum duration of such an outburst is thus $`t_{\mathrm{max}}=M_{\mathrm{d},\mathrm{max}}/\dot{M}`$. For the parameters appropriate for U Gem, one gets $$t_{\mathrm{max}}=26\left(\frac{\alpha _\mathrm{c}}{0.01}\right)^{-0.83}M_1^{0.51}\left(\frac{r_{\mathrm{out}}}{4.1\times 10^{10}\mathrm{cm}}\right)^{0.47}\mathrm{d}$$ (3) As $`\alpha _\mathrm{c}`$ is larger than 0.01 for this prototypical dwarf nova (Livio & Spruit ls91 (1991), for example, find that $`\alpha _\mathrm{c}`$ must be equal to 0.044 to account for the timing properties of U Gem), $`t_{\mathrm{max}}`$ can never be as high as 45 days. This means that the total amount of mass accreted during this long outburst is larger than the mass of the disc in quiescence; this is possible only if the mass transfer rate from the secondary has increased to a value close to the mass accretion rate onto the white dwarf, i.e. close to the critical rate for stable accretion. Such an increase of the mass transfer rate is very likely caused by the illumination of the secondary. This reinforces the conclusion of Smak (1999a) that long outbursts result from large mass–transfer enhancements.
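Since the minus signs in the exponents of Eqs. (1)–(3) are easy to mistype, the following small Python check (ours) verifies that Eqs. (1) and (2) reproduce the normalization of Eq. (3), and evaluates the Livio & Spruit case quoted above:

```python
# Maximum duration of an outburst fed only by the quiescent disc mass.
def t_max_days(alpha_c, M1, r_out):
    r10 = r_out / 1e10                                         # r_out in 1e10 cm
    M_d_max = 2e20 * alpha_c**-0.83 * M1**-0.38 * r10**3.14    # g,   Eq. (1)
    Mdot_out = 8e15 * M1**-0.89 * r10**2.67                    # g/s, Eq. (2)
    return M_d_max / Mdot_out / 86400.0

print(t_max_days(0.01, 1.0, 4.1e10))    # ~26 d: the normalization of Eq. (3)
print(t_max_days(0.044, 1.0, 4.15e10))  # ~8 d << 45 d for U Gem's alpha_c
```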
## 3 The model
### 3.1 Disc irradiation
We use here the numerical code described in Hameury et al. (hmdlh98 (1998)). This code solves the usual mass, angular momentum and energy conservation equations on an adaptive grid, with a fully implicit scheme. This makes it possible to resolve narrow structures in the accretion disc (Menou et al. mhs99 (1999)), and avoids the Courant condition which would severely limit the time step. To describe disc irradiation we use a version of the code described in Dubus et al. (dlhc99 (1999)) (see also Hameury et al. hld99 (1999)). A grid of vertical structures is used to determine the cooling rate of the disc as a function of the vertical gravity, the integrated disc surface density $`\mathrm{\Sigma }`$, the central temperature $`T_\mathrm{c}`$ and the illumination temperature $`T_{\mathrm{ill}}`$, defined as $`T_{\mathrm{ill}}=(F_{\mathrm{ill}}/\sigma )^{1/4}`$, where $`F_{\mathrm{ill}}`$ is the illuminating flux. In what follows, we use $`\alpha _{\mathrm{hot}}=0.2`$ and $`\alpha _{\mathrm{cold}}=0.04`$, except where otherwise stated. We also neglect the albedo $`\beta `$ of the disc; taking it into account introduces a multiplicative factor $`(1-\beta )^{1/4}`$ for the white dwarf temperatures. It must also be stressed that the white dwarf surface temperature cannot be too large; this is because the intrinsic (quiescent) white dwarf luminosity must be significantly less than the accretion luminosity in outburst. For $`M_1`$ = 0.6 $`M_{\odot }`$, the quiescent white dwarf temperature has to be smaller than 33,000 K if the outburst amplitude is to be larger than 2 magnitudes.
### 3.2 Illumination of the secondary
We are interested here in the effect of illumination of the secondary on its mass transfer rate on short time scales (days), and we do not consider any long-term effects that may lead to cycles accounting for the observed dispersion of the average mass transfer rate for a given orbital period (McCormick & Frank mf98 (1998)).
Even on short time scales, the response of the secondary to illumination is complex (see e.g. Hameury et al. hlk88 (1988)); we prefer to use here a simpler approach, in which we assume a linear relation between the mass transfer rate from the secondary $`\dot{M}_{\mathrm{tr}}`$ and the mass accretion rate onto the white dwarf $`\dot{M}_{\mathrm{acc}}`$, i.e. $$\dot{M}_{\mathrm{tr}}=\mathrm{max}(\dot{M}_0,\gamma \dot{M}_{\mathrm{acc}})$$ (4) where $`\dot{M}_0`$ is the mass transfer rate in the absence of illumination. This is similar to the formula used by Augusteijn et al. (aks93 (1993)) in the context of soft X-ray transients. Although it is an extremely crude approximation, it has, nevertheless, the advantage of having only one free parameter, $`\gamma `$. Its value must be in the range [0, 1] for stability reasons. Such an approach obviously requires the illumination to have a noticeable effect on the secondary’s surface layers. As mentioned earlier, strongly irradiated companion stars are observed in several systems. To describe the effects of irradiation of a Roche-lobe filling star we shall follow the approach of Osaki (o85 (1985)) and Hameury et al. (hkl86 (1986)). The mass transfer rate from the secondary can be written as (Lubow & Shu ls75 (1975)): $$\dot{M}_{\mathrm{tr}}=Q\rho _{L1}c_s$$ (5) where $`Q`$ is the effective cross section of the mass transfer throat at the Lagrangian point $`L_1`$, $`Q=1.9\times 10^{17}T_4P_{\mathrm{hr}}^2`$ cm<sup>2</sup>, where $`T_4`$ is the surface temperature of the secondary and $`P_{\mathrm{hr}}`$ the orbital period in hours; $`c_s`$ is the sound speed, and $`\rho _{L1}`$ the density at $`L_1`$, which, in the case of an isothermal atmosphere, can be expressed as: $$\rho _{L1}=\rho _0e^{(R-R_{L1})/H}$$ (6) where $`\rho _0`$ is the density calculated at a reference level, $`R`$ the secondary radius (defined by this reference position), $`R_{L1}`$ the Roche lobe radius and $`H`$ the scale height. In quiescence, the mass transfer rate is low, of the order of $`10^{15}`$–$`10^{16}`$ g s<sup>-1</sup>. At short orbital periods and correspondingly low secondary temperatures, both $`Q`$ and $`c_s`$ are small. If one assumed that $`\rho _0`$ corresponds to the photospheric density, which for a low-mass main sequence star is of the order of $`10^{-5}`$ g cm<sup>-3</sup>, one would find $`\dot{M}_{\mathrm{tr}}\approx 10^{16}e^{(R-R_{L1})/H}`$ g s<sup>-1</sup>. This in turn would mean that $`(R-R_{L1})/H`$ is small, so that $`\dot{M}_{\mathrm{tr}}`$ varies as $`T_4^{3/2}`$ and is therefore not very sensitive to illumination. In the irradiated case, however, the reference density $`\rho _0`$ does not correspond to the photospheric density but to the density at the base of the isothermal atmosphere, which extends much deeper than the unilluminated photosphere when the atmosphere is illuminated with an irradiation temperature exceeding 10<sup>4</sup> K. For those high illumination fluxes, the outer layers of the star are affected on a thermal time scale (seconds) at least down to the point where the unperturbed temperature equals the irradiation temperature. This point is the base of the isothermal layers in the illuminated case, and from models of very low mass stars (Dorman et al. dnc89 (1989)) one gets $`\rho _0\approx 10^{-3}`$ g cm<sup>-3</sup>, with resulting very high mass transfer rates, exceeding 10<sup>18</sup> g s<sup>-1</sup>.
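A minimal sketch (ours) of how the feedback of Eq. (4) would enter a light-curve computation; $`\dot{M}_0`$ and $`\gamma `$ are the values used for the model of Fig. 3 (Sect. 4.2), while the outburst accretion rate shown is an illustrative input chosen to give the peak transfer rate quoted in Sect. 4.2:

```python
# Eq. (4): at each step of the integration, the mass transfer rate from the
# secondary is updated from the instantaneous accretion rate onto the WD.
def M_tr(M_acc, M0=3e16, gamma=0.5):
    return max(M0, gamma * M_acc)      # g/s

print(M_tr(1e15))    # quiescence: transfer stays at the unilluminated rate M0
print(M_tr(2.4e17))  # outburst: enhanced to 1.2e17 g/s, a factor 4 above M0
```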
The estimate of 10<sup>18</sup> g s<sup>-1</sup> is an upper limit, as it does not take into account the fact that a fraction of the secondary is shielded from irradiation by the accretion disc; partial shielding does not suppress the enhancement of mass transfer, since circulation at the surface of the star prevents the existence of large temperature gradients, but it reduces the enhancement in a complex way which we do not attempt to describe here. Observations show, however, that the increase can be quite significant; the mass transfer rate rises by a factor 2 in U Gem and Z Cha (Smak s95 (1995)), whereas Vogt (v83 (1983)) finds that in VW Hyi the bright spot luminosity increases by a factor of about 15 during maximum and decline of outbursts close to a superoutburst, which he attributed to a corresponding increase in mass transfer under the effect of illumination. Although the evidence is not very strong, there are some indications that the hot spot brightening, and hence the mass transfer increase, is delayed with respect to the eruption by a day or two; this could be either the response time of the secondary (Smak s95 (1995)), or the thermal inertia of the white dwarf (only an equatorial belt is instantaneously heated by accretion, so irradiation of the secondary is delayed).
### 3.3 Inner disc radius
There is evidence that accretion discs in dwarf novae are truncated, as indicated by emission line profiles (Mennikent & Arenas ma98 (1998)) or by the detection of a significant quiescent X-ray and UV flux (Lasota l96 (1996)). If this is due to the presence of a magnetic field, the inner disc radius $`r_{\mathrm{in}}`$ is a simple function of the mass accretion rate onto the white dwarf: $$r_{\mathrm{in}}=9.8\times 10^8\dot{M}_{15}^{-2/7}M_1^{-1/7}\mu _{30}^{4/7}\mathrm{cm}$$ (7) where $`\dot{M}_{15}`$ is the mass accretion rate in units of 10<sup>15</sup> g s<sup>-1</sup>, $`M_1`$ is the white dwarf mass and $`\mu _{30}`$ is the magnetic moment in units of 10<sup>30</sup> G cm<sup>3</sup>. The value of $`\mu _{30}`$ should be such as to allow $`r_{\mathrm{in}}=R_1`$ in outbursts, where $`R_1`$ is the primary radius, as most DNs do not then show coherent pulsations. In quiescence, however, coherent oscillations are observed (Patterson et al. prkm98 (1998)); for example, in WZ Sge $`\mu _{30}\approx 50`$ (Lasota et al. lkc99 (1999)). An inner hole in the disc can also be due to evaporation. The physics of evaporation is poorly understood, but several models have been proposed (Meyer & Meyer-Hofmeister mm94 (1994), Liu et al. lmm97 (1997), Kato & Nakamura kn98 (1998), Shaviv et al. sww99 (1999)). The evaporation rates $`\dot{\mathrm{\Sigma }}`$ in the disc are, however, quite uncertain. Evaporation is normally accounted for by introducing the additional term $`\dot{\mathrm{\Sigma }}`$ in the mass conservation equation. However, because evaporation is expected to increase towards the accreting body, and since the local mass transfer rate in the disc increases sharply with radius during quiescence, the effects of evaporation are important essentially very close to the disc inner edge, and can be treated by assuming that the disc inner radius is a function of the accretion rate onto the white dwarf, just as in the case of the formation of a magnetosphere. To first order, the effect of evaporation is to create a hole in quiescence, which increases the recurrence time; what matters is thus the inner disc radius in quiescence and not the detailed way $`r_{\mathrm{in}}`$ varies with $`\dot{M}_{\mathrm{acc}}`$. We shall therefore use equation (7) in all cases, using $`\mu _{30}`$ as a free, unconstrained parameter that merely describes the size of the hole generated in the disc, either by the presence of a magnetic field or by evaporation.
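With the standard magnetospheric scalings restored in Eq. (7), the size of the quiescent hole is easily estimated. The sketch below (ours; the quiescent and outburst accretion rates are illustrative values, only $`\mu _{30}\approx 50`$ is taken from the text) shows the trend:

```python
# Eq. (7): inner disc radius set by the magnetic field (or, in the crude
# treatment adopted here, by evaporation); it shrinks as accretion grows.
def r_in(Mdot15, M1, mu30):
    return 9.8e8 * Mdot15**(-2.0 / 7) * M1**(-1.0 / 7) * mu30**(4.0 / 7)  # cm

print(r_in(Mdot15=0.01, M1=1.0, mu30=50.0))  # quiescence: ~3.4e10 cm hole
print(r_in(Mdot15=1e3, M1=1.0, mu30=50.0))   # outburst: ~1.3e9 cm, near the WD
```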
## 4 Results
In the following, we discuss the influence of each individual effect mentioned above; our reference situation is that of a system with a 1.0 $`M_{\odot }`$ primary, whose radius is $`5\times 10^8`$ cm; the mass transfer rate is $`3\times 10^{16}`$ g s<sup>-1</sup>, and the average outer disc radius is $`1.8\times 10^{10}`$ cm. These parameters are typical of short-period dwarf novae with massive primaries. The light curve corresponding to the standard version of the DIM is given in Fig. (1). For comparison we also show the case of a system with a 0.6 $`M_{\odot }`$ primary.
### 4.1 Influence of the disc irradiation
The effects of irradiation of the disc by both the hot white dwarf and the boundary layer have been described in detail in Hameury et al. (hld99 (1999)), and we summarize here the most important results. For very hot white dwarfs ($`T_{\mathrm{eff}}>`$ 20,000 K), the temperature in the innermost parts of the disc exceeds the hydrogen ionization temperature during quiescence; the viscosity is therefore high in these regions, which are thus partially depleted, as first suggested by King (k97 (1997)). The transition region between the hot inner disc and the outer, cool parts is strongly destabilized by irradiation, and the model predicts several small outbursts between major ones. In particular, many reflares are expected at the end of a large outburst (see Fig. 2). In certain cases, the reflares may dominate the light curve; this depends on whether the heating front can reach the outer edge of the disc or not. The reflares we obtained do not have the observed amplitudes, but it is tempting to attribute the succession of several normal outbursts after a superoutburst in EG Cnc to this effect. Playing with parameters would produce a result corresponding better to the observed lightcurve, but the merit of such an exercise is rather dubious considering the important uncertainties of the model itself.
### 4.2 Influence of the secondary irradiation
Irradiation of the secondary enhances mass transfer. Hameury et al. (hlh97 (1997)) showed that if one assumes that the effect of irradiation is given by equation (4), outbursts having the general characteristics of superoutbursts (long durations, flat top or exponential decay with an abrupt cut-off) are expected. This model was, however, applied to a case in which the quiescent mass transfer was low enough for the disc to be stable on the cool branch; the instability was triggered by an external perturbation of the mass transfer from the secondary. This was required to explain the very long recurrence times of systems such as WZ Sge when standard values of $`\alpha `$ are assumed. Marginally unstable mass transfer rates, as in Warner et al. (wlt96 (1996)), can give similar recurrence times, but also in this case the amount of mass accreted during the superoutburst requires a substantial enhancement of mass transfer. Figure 3 shows the light curve obtained when one includes the secondary irradiation in the model. We neglect here disc irradiation, and the disc is not truncated. We have taken $`\gamma `$ = 0.5, and all other parameters are as in Fig. (1), for a 1 $`M_{\odot }`$ primary (i.e. $`\dot{M}_0=3\times 10^{16}`$ g s<sup>-1</sup>).
The light curve is similar to those observed in SU UMa systems; it shows several normal outbursts separated by a large one which is sustained by enhanced mass transfer from the secondary. Large outbursts occur when the surface density at the outer edge of the disc is large enough that a cooling wave does not start immediately after the heating wave has arrived; equivalently, the disc mass must be larger than some critical value, and one therefore expects the recurrence time of such large outbursts, $`T_\mathrm{s}`$, to vary roughly as $`\dot{M_0}^{-1}`$. This, however, requires that the disc mass keeps increasing despite the presence of small outbursts, which means that $`\dot{M}_0`$ must be large enough to refill the disc with more mass than is lost during such outbursts. For $`\dot{M}_0=3\times 10^{15}`$ g s<sup>-1</sup>, which is more appropriate for short period systems, one does not get long outbursts, at least for the value of $`\alpha `$ considered here. Short outbursts are (as all our outbursts) of the inside-out type, and their recurrence time $`T_\mathrm{n}`$ is the viscous time, which does not depend on the mass transfer rate. The correlation $`T_\mathrm{s}\propto T_\mathrm{n}^{0.5}`$ found in SU UMa systems (Warner 1995b) is interpreted, in the framework of the tidal-thermal instability (TTI) model, as resulting from $`T_\mathrm{s}`$ varying as $`\dot{M}^{-1}`$, as in our case, and from $`T_\mathrm{n}\propto \dot{M}^{-2}`$ for outside-in outbursts (Osaki 1995a). One must however be careful with such a simple interpretation. This explanation is valid only if $`\dot{M}`$ is the only parameter determining both $`T_\mathrm{n}`$ and $`T_\mathrm{s}`$; this is clearly not the case, as quantities such as the viscosity in quiescence (which may vary by orders of magnitude from WZ Sge type systems to “normal” SU UMa’s), the disc radius (to the power 5.6), and the orbital period enter together with $`\dot{M}`$ in the expressions for $`T_\mathrm{n}`$ and $`T_\mathrm{s}`$ (Osaki 1995a). It must also be stressed that, for the low mass transfer rates of SU UMa stars, outside-in outbursts are not a natural outcome of the models and are produced by lowering the value of $`\alpha _\mathrm{c}`$ or by making it an appropriate function of radius. The time evolution of the outer disc radius is different from the predictions of the TTI model in several respects: (i) $`r_{\mathrm{out}}`$ varies during a normal outburst, whereas in the TTI model $`r_{\mathrm{out}}`$ remains roughly constant during an outburst after an initial increase during the rise; (ii) $`r_{\mathrm{out}}`$ oscillates at the beginning of a large outburst; (iii) the disc extends to a larger radius during a large outburst than during a short outburst, allowing for the possibility of the development of superhumps if the radius can reach the 3:1 resonance radius, whereas in the TTI model the disc size varies by 30% during a superoutburst, with an average that is smaller than the average size during the previous normal outburst; and (iv) $`r_{\mathrm{out}}`$ remains approximately constant during a superoutburst, showing only a slow decline. Our results are similar to those of Smak (1991b), who considered the effect of enhanced mass transfer during superoutbursts and concluded that the observed disc radius variations in Z Cha and the length of the cycle in VW Hydri appeared to support the enhanced mass transfer model.
The main difference from our work comes from the approximations describing the effect of illumination: whereas we assume a dependence between $`\dot{M}_{\mathrm{tr}}`$ and $`\dot{M}_{\mathrm{acc}}`$ given by Eq. 4, in Smak’s model $`\dot{M}_{\mathrm{tr}}`$ is increased by about one order of magnitude after the maximum of a normal outburst, during a fixed period. As a consequence, the disc radius we obtain at the end of a large outburst is much smaller than in Smak’s model. In our case, when the cooling wave starts propagating, the accretion rate onto the white dwarf, and hence the mass transfer from the secondary, is unaffected, and the disc contracts as in the unilluminated case, whereas in Smak’s model the disc expands rapidly when mass transfer is reduced by a factor 10; the surface density at the outer edge then drops below the critical value, and a cooling wave starts in quite a large disc. Ichikawa et al. (iho93 (1993)) also considered a mass transfer outburst as the source of superoutbursts. They compared the resulting light curves with those produced by the tidal–thermal model and concluded that the latter gives a much better representation of the observed properties of SU UMa systems. One should stress, however, that in Ichikawa et al. (iho93 (1993)) superoutbursts are triggered by a mass transfer ‘instability’. In our case, superoutbursts are triggered by the usual thermal-viscous instability and it is only the subsequent evolution of the outburst which is modified by an enhanced mass transfer. In Ichikawa et al. (iho93 (1993)), the mass transfer rate $`\dot{M}_{\mathrm{tr}}`$ is increased by a factor 100, whereas in our case the peak mass transfer rate from the secondary is $`1.2\times 10^{17}`$ g s<sup>-1</sup>, i.e. increased by only a factor 4. This is much smaller than the maximum possible rate from an irradiated low-mass star (see Section 2.1). Note that the average mass transfer rate, as given by Eq. (4), is larger than $`\dot{M}_0`$, and thus larger than for Fig. 1; it is in this case $`4.8\times 10^{16}`$ g s<sup>-1</sup>. Observations are, for the moment, not of much help in deciding which of the models is right. There are good reasons to believe that a tidal instability is required to account for the superhump phenomenon. What is not known, however, is (i) the increase in the tidal torque resulting from this instability, and (ii) the radius at which the instability stops. There is also good evidence for an enhanced mass transfer, due to irradiation, during outbursts, but a reliable description of this effect is missing. In both models, however, the observed correlation between superoutburst and normal outburst frequencies can be obtained only by playing with the viscosity prescription, which, of course, is not very satisfactory. The quiescent luminosities and accretion rates are almost identical in the standard and enhanced mass transfer cases, as expected, since after the passage of the cooling wave the disc has essentially forgotten its initial conditions; differences arise only from the different disc sizes, which are smaller in the illuminated case because of the large mass transfer increase. ### 4.3 Parameter dependence #### 4.3.1 Mass transfer rate In this section we consider the combined effect of irradiation of both the disc and the secondary, and determine how the resulting light curves depend on the rate $`\dot{M}_0`$ at which mass is transferred from an unilluminated secondary. Figure 4 shows various light curves obtained by varying $`\dot{M}_0`$.
We considered a system containing a 1 M<sub>⊙</sub> primary, whose radius is $`5\times 10^8`$ cm and surface temperature 35,000 K in quiescence. We have taken $`\gamma `$ = 0.5, and $`\dot{M}_0`$ ranges from $`10^{16}`$ to $`7\times 10^{16}`$ g s<sup>-1</sup>; this corresponds to average transfer rates in the range 1.5 – 8.6 $`\times 10^{16}`$ g s<sup>-1</sup>. The inner disc radius is equal to the white dwarf radius. The white dwarf temperature has been chosen in the upper range of observed values in order to emphasize illumination effects. The effect is quite dramatic. Light curves corresponding to cases (a) and (b) do not seem to be observed (as mentioned in the previous section, the mass transfer rate is too low for long outbursts to be present). It might be that the corresponding systems exist, but have not yet been discovered because they are intrinsically rare (the parameters of Fig. 4 are at the upper range of allowed values), or that some of the curves we obtain are artifacts due to our oversimplified treatment of the secondary response to illumination. Light curves (c) and (d), however, compare very well with those of systems having very short supercycles, such as RZ LMi. In our model they are obtained for high mass transfer rates, which is natural since these systems spend most of their time in the high state, whereas in the tidal–thermal instability model such light curves require an ad hoc reduction of the parameter describing the tidal interaction. The agreement with RZ LMi can be improved; in particular, a reduction of the duration of the long outburst would be obtained by decreasing $`\gamma `$. One should note, however, that we have assumed that $`\dot{M}_{\mathrm{tr}}`$ responds immediately to changes in $`\dot{M}_{\mathrm{acc}}`$, whereas one could argue that there is a delay of the order of one or two days between illumination and the increase of mass transfer, as discussed above. The introduction of such a delay makes the occurrence of long outbursts more difficult, as these require a near balance between mass transfer and accretion onto the white dwarf that must be established within a short outburst. We checked that if we use in Eq. (4) the average of the mass accretion rate over the past 2.5 days (the duration of short outbursts), long outbursts are suppressed. Finally, it is worth pointing out that the small outbursts in Fig. 4 are intrinsically different from those obtained when the disc illumination is not taken into account. Whereas in a non–irradiated disc small amplitude outbursts appear when the heating front cannot bring the whole disc into a hot state, here the disc never returns to quiescence in its inner parts, so the cooling wave is reflected into a heating wave when it gets close to the stable hot inner part of the disc. The amplitude of these reflares grows as a consequence of the enhanced mass transfer during maximum, until the disc mass has grown to a point where a self-sustained long outburst is possible. #### 4.3.2 Viscosity The reflare properties also depend on the ratio $`\alpha _{\mathrm{hot}}/\alpha _{\mathrm{cold}}`$: the smaller this ratio, the more important the reflares (see Menou et al. mhln99 (1999) for a discussion of this effect in the context of X-ray transients).
This is simply due to the fact that the lower this ratio, the larger $`\mathrm{\Sigma }/\mathrm{\Sigma }_{\mathrm{max}}`$ after the passage of a cooling front, $`\mathrm{\Sigma }_{\mathrm{max}}`$ being the maximum surface density on the cold stable branch; in the limiting case $`\alpha _{\mathrm{hot}}=\alpha _{\mathrm{cold}}`$ there are no outbursts (Smak s84 (1984)), but a heating/cooling wave that propagates back and forth. Fig. 5 shows the effect of changing $`\alpha _{\mathrm{cold}}`$ to a value smaller by a factor 2. Successive reflares no longer reach the outer edge of the disc, and their amplitude therefore decreases from one mini-outburst to the next. This accounts for the presence of flat-top outbursts, which were absent in Fig. 4b. The light curves are similar for all mass transfer rates, showing the pattern of Fig. 5 with longer recurrence times for smaller $`\dot{M}_{\mathrm{tr}}`$. The only exception is for $`7\times 10^{16}`$ g s<sup>-1</sup>, which is close to stability, and for which the main outbursts are of the outside-in type. #### 4.3.3 Outer disc radius Since small discs favour large reflares, it is not surprising that when one considers discs with average $`r_{\mathrm{out}}=1.3\times 10^{10}`$ cm, and takes $`\alpha _{\mathrm{cold}}`$ = 0.02 and $`\alpha _{\mathrm{hot}}`$ = 0.2, one obtains a combination of the light curves shown in the two previous sections; Fig. 6 is a good example of this. It is worth noting that such a light curve is reminiscent of that of EG Cnc, even though the timescales are not quite the same. We do obtain the right pattern for the reflares, but we do not reproduce the very long superoutburst of EG Cnc (100 days), which would require $`\gamma `$ to be very close to unity, meaning that the linear approximation in Eq. 4 is invalid. For WZ Sge, one already had to assume a relatively large value of $`\gamma `$ (0.87) in order to reproduce the observed 25 day duration; since the outburst duration varies as $`1/\mathrm{log}(\gamma )`$ (Hameury et al. hlh97 (1997)), we would need $`\gamma =0.97`$ to obtain 100 days. Another difference with EG Cnc is the amplitude of the mini-outbursts: the observed ones have approximately the same amplitude, whereas we get two identical mini-outbursts, the others being of decreasing amplitude. We have not been able to reproduce this behaviour with our parameterization; a possible solution is to introduce a time-dependent temperature of the white dwarf. This is expected physically, because the superoutburst lasted long enough to heat up the surface of the white dwarf, which will then cool. If the rebrightenings of EG Cnc are indeed due to illumination effects, this implies that $`\alpha _{\mathrm{cold}}`$ cannot be small, as we do not obtain reflares when $`\alpha _{\mathrm{cold}}`$ is significantly less than 0.01; Osaki et al. (ost97 (1997)) reached the same conclusion, but on different grounds; they assumed that $`\alpha _{\mathrm{cold}}`$ was increased to 0.1 during the superoutburst, remained high for 2 months, and then decreased back to small values (0.001), and therefore had to set $`\alpha _{\mathrm{cold}}`$ to be an explicit function of time. #### 4.3.4 Inner disc radius Finally, we consider the effect of removing the inner disc regions, keeping both the disc and the secondary irradiated. Apart from increasing the delay between the onset of an outburst in the disc and accretion onto the white dwarf, a large inner disc radius has a stabilizing effect on the disc itself, by preventing inside-out outbursts.
This effect is quite noticeable in the case where the white dwarf surface temperature is high: if the inner disc radius $`r_{\mathrm{in}}`$ is large enough, the unstable transition region between the stable region heated above the hydrogen ionization temperature and the cooler, quiescent external part does not exist. This suppresses the bounces after a long outburst, as can be seen in Fig. 7. For a given mass transfer rate, there is a critical value of the inner radius above which the disc is stable; when $`r_{\mathrm{in}}`$ approaches this value, the recurrence time goes to infinity. Arbitrarily long recurrence times could therefore be expected, but only at the expense of very fine tuning; in the normal case where the mass accretion rate onto the white dwarf is negligible in quiescence as compared with the mass transfer rate from the secondary, the reasoning used by Smak (s93 (1993)) applies. The recurrence time $`t_{\mathrm{rec}}`$ is equal to: $$t_{\mathrm{rec}}=\frac{\mathrm{\Delta }M}{\dot{M}_{\mathrm{tr}}}=f\frac{M_{\mathrm{crit}}}{\dot{M}_{\mathrm{tr}}}$$ (8) where $`f`$ is the ratio of the amount of mass transferred during an outburst, $`\mathrm{\Delta }M`$, to the maximum possible disc mass, $`M_{\mathrm{crit}}`$, obtained by assuming that the surface density is everywhere the critical surface density. Numerical models show that the surface density is not very far from its critical value even at large radii, and that the amount of mass transferred during a normal outburst is typically 10% of the total disc mass. Therefore, $`f`$ is not a very small parameter that could freely vary, and large changes in $`t_{\mathrm{rec}}`$ cannot result from variations in $`r_{\mathrm{in}}`$ alone. A similar situation is encountered in the case of soft X-ray transients (Menou et al. mhln99 (1999)).
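To make the scaling concrete, here is a back-of-the-envelope evaluation of equation (8); the numerical values below are illustrative assumptions, not taken from the paper:

```python
# Equation (8): t_rec = Delta_M / Mdot_tr = f * M_crit / Mdot_tr.
f = 0.1          # fraction of the critical disc mass accreted per outburst
m_crit = 1e24    # assumed maximum disc mass in g (illustrative)
mdot_tr = 3e16   # mass transfer rate in g/s (reference value of Sect. 4)

t_rec = f * m_crit / mdot_tr      # seconds
print(t_rec / 86400.0, "days")    # ~39 days for these numbers
```

Since $`f`$ is pinned near 0.1 by the numerical models, only $`M_{\mathrm{crit}}`$ and $`\dot{M}_{\mathrm{tr}}`$ can drive large changes in $`t_{\mathrm{rec}}`$.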
## 5 Conclusions We have shown that many types of light curves can be produced by numerical models that include the illumination of both the secondary and the accretion disc, thereby explaining a great variety of observed light curves. These effects account for phenomena such as post-outburst rebrightenings (e.g. EG Cnc), long outbursts (U Gem for example), or SU UMa systems with extremely short supercycles. In order to explore these possibilities further, one would need to determine from observations the mass transfer rate from the secondary as a function of the mass accretion rate onto the white dwarf, with a better accuracy than is available now. Despite the fact that our approximations are very crude, in particular the one concerning the response of the secondary to illumination, we can nevertheless draw a number of conclusions. First, the illumination of the disc is important only if the white dwarf is relatively massive, so that it can have a high temperature without contributing too much to the light emitted by the system in outburst, and so that the efficiency of accretion is high. Rebrightenings also require $`\alpha `$ not to be too low in quiescence. The fact that we can reproduce an alternation of normal and long outbursts when the illumination of the secondary is included does not, of course, imply that the thermal-tidal instability model for SU UMa stars is incorrect; a tidal instability is most probably required to account for the superhump phenomenon. The question of the precise role of this instability is, however, still open, and our results raise some doubts about the validity of the parameters derived when fitting the observations, in particular for systems having a very short supercycle. This also means that the determination of the viscosity from the modelling of light curves is a far more difficult task than previously estimated. We obviously need some progress in the determination of the tidal torque; we also need to know how the secondary responds to illumination. Finally, we should include 2D effects in our models: first, because the orbits in the outer disc are far from circular, and second, because the presence of a hot spot, whose temperature can be of order 10,000 K, could in principle significantly alter the stability properties of the outer disc. ###### Acknowledgements. We thank the referee, Professor Y. Osaki, for helpful comments and criticisms. This research was supported in part by the National Science Foundation Grant No. PHY94-07194.
# Cluster versus POTENT Density and Velocity Fields: Cluster Biasing and Omega ## 1 Introduction The basic hypothesis underlying the study of large-scale structure is that it grew out of initial fluctuations via gravitational instability (GI). In the linear regime, this theory predicts a relation between the peculiar velocity and density fluctuation fields, $`\nabla \cdot \text{v}=-f(\mathrm{\Omega })\delta `$, with $`f(\mathrm{\Omega })\simeq \mathrm{\Omega }^{0.6}`$. From observations we can deduce the density field of galaxies or clusters rather than the density field of the underlying matter distribution. One then needs to assume a relation between the galaxy or cluster fluctuation field and that of the mass. A first order approximation is that of a linear “biasing” relation (hereafter LB) in which the two fields, smoothed on the same scale, obey the relation $`\delta _o=b_o\delta `$. Thus, GI+LB boil down to a simple relation between observables, $$\nabla \cdot \text{v}=-\beta _o\delta _o,\beta _o\equiv \mathrm{\Omega }^{0.6}/b_o.$$ (1) The density field of the extragalactic objects can be derived from a whole-sky redshift survey, while the velocity divergence can be reconstructed from a sample of redshifts and distances inferred by Tully-Fisher-like distance indicators. Therefore, combining these data allows a measure of $`\beta _o`$, which, subject to some a priori knowledge of the biasing parameter $`b_o`$, provides constraints on the cosmological density parameter $`\mathrm{\Omega }`$. A related analysis, invoking the integral of equation (1), can be performed using velocities rather than densities. The efforts to measure $`\beta `$ from various data sets using different methods are reviewed in, e.g., Dekel (1994, 1997) and Strauss & Willick (1995). The most reliable density-density analysis, incorporating certain mildly-nonlinear corrections, is the recent comparison of the IRAS 1.2 Jy redshift survey and the Mark III catalog of peculiar velocities yielding, at Gaussian smoothing of $`12h^{-1}\mathrm{Mpc}`$, $`\beta _{\mathrm{𝐼𝑅𝐴𝑆}}=0.89\pm 0.12`$ (Sigad et al. 1998, PI98; replacing an analysis of earlier data by Dekel et al. 1993). An analysis of optical galaxies has provided a somewhat lower value for $`\beta _{\mathrm{opt}}`$ (Hudson et al. 1995), in accordance with the expected higher biasing parameter for early-type galaxies as demonstrated by their stronger clustering (cf. Lahav, Nemiroff & Piran 1990). Recent velocity-velocity comparisons typically yield values of $`\beta _{\mathrm{𝐼𝑅𝐴𝑆}}\simeq 0.5`$–$`0.6\pm 0.1`$ (Willick et al. 1997b and references therein, Willick & Strauss 1998; Davis, Nusser & Willick 1996, da Costa et al. 1998, Branchini et al. 1999). The main source of uncertainty in the interpretation of the $`\beta `$ estimates arises from our ignorance concerning the biasing relations. Fortunately, we do have a handle on the relative biasing parameters, based, for example, on the relative amplitudes of the correlation functions of the different types of objects, which should scale like $`b^2`$. Since different classes of extragalactic objects are assumed to trace the same velocity field, one can hope to tighten the constraints on $`\mathrm{\Omega }`$ by deriving $`\beta `$ for several different types of objects. Clusters of galaxies are promising candidates for this purpose because they are well-defined objects and are sampled quite uniformly out to large distances, much larger than those probed by the available galaxy peculiar velocity samples.
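In practice, relation (1) is applied to fields smoothed onto a grid. As a schematic illustration (this is our own sketch, not the actual POTENT machinery; all names are hypothetical), the velocity divergence can be estimated by finite differences and $`\beta _o`$ read off as a regression slope:

```python
import numpy as np

def beta_from_fields(v, delta_o, dx):
    """Schematic estimate of beta_o in eq. (1): least-squares slope of
    -div(v) against delta_o.  v has shape (3, n, n, n) in km/s, with
    distances also measured in km/s (i.e. H0 = 1), so that div(v) is
    dimensionless; delta_o is the object density contrast on the same grid."""
    div_v = sum(np.gradient(v[i], dx, axis=i) for i in range(3))
    x, y = delta_o.ravel(), (-div_v).ravel()
    return np.dot(x, y) / np.dot(x, x)
```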
The use of the cluster distribution to probe the large-scale dynamics has been mainly restricted to dipole analyses, where the predicted velocity at the Local Group (LG) is compared to its observed motion relative to the CMB frame (Scaramella, Vettolani & Zamorani 1991; Plionis & Valdarnini 1991, PV91; Plionis & Kolokotronis 1998). It has been found that the directions of the two dipoles converge when using a large enough sample of clusters ($`>150h^{-1}\mathrm{Mpc}`$), as expected from the assumed global homogeneity of the cosmological model (and contrary to the finding of Lauer and Postman 1994, based on their attempt to directly measure peculiar velocities for clusters). Once the cluster distribution is properly corrected from redshift to real space, the corresponding value of $`\beta `$ derived from the dipole is $`\beta _c\simeq 0.21`$ (Branchini & Plionis 1995, 1996, BP96; Scaramella 1995a; Branchini, Plionis & Sciama 1996). This estimate is higher than the value derived without this correction (Scaramella et al. 1991; PV91), and is consistent with $`\mathrm{\Omega }\simeq 1`$ for a cluster biasing parameter of $`b_c\simeq 4`$–$`5`$, as indicated by the cluster correlation analyses. The validity of the LB assumption for clusters might be questioned. In particular, such a large biasing parameter cannot follow the linear relation in deep underdensities. However, because of their low number density, clusters trace the underlying mass density field with a large inherent smoothing scale set by their mean separation. This has the effect of decreasing the density contrast and restoring the plausibility of the LB hypothesis over a large fraction of the volume sampled. It turns out that the bulk motion, as predicted from the cluster distribution with $`\beta _c\simeq 0.2`$ inside a sphere of radius $`50h^{-1}\mathrm{Mpc}`$ about the LG, is consistent with that derived directly from galaxy peculiar velocities (see Dekel 1997, Dekel et al. 1998; Giovanelli et al. 1996, 1998a, 1998b). However, the estimate of $`\beta _c`$ from the dipole at one point, or from the bulk flow, naturally suffers from severe cosmic scatter (e.g., Juskiewicz, Vittorio & Wyse 1990). The cosmic scatter can be reduced if the comparison is made at several independent points. Branchini (1995) and Plionis (1995) have attempted to compare velocities predicted from the cluster distribution to observed peculiar velocities of groups and clusters from Tormen et al. (1993), Hudson (1994), and Giovanelli et al. (1997), obtaining again $`\beta _c\simeq 0.2`$. These analyses, however, are of limited validity since they compare smoothed and unsmoothed velocities. The purpose of this work is to measure $`\beta _c`$ by comparing the Abell/ACO cluster distribution with the galaxy peculiar velocities of the comprehensive Mark III catalog as analyzed by POTENT. The comparison is done alternatively at the density-density level and at the velocity-velocity level, and involves a careful error analysis. In § 2 we summarize the Mark III data, the POTENT method and the associated errors. In § 3 we describe the reconstruction of the cluster density and velocity fields and the various sources of error. In § 4 we perform quantitative comparisons of the cluster and POTENT fields in order to determine $`\beta _c`$. We summarize our conclusions in § 5. ## 2 POTENT Reconstruction from Peculiar Velocities The POTENT procedure recovers the underlying mass-density fluctuation field from a whole-sky sample of observed radial peculiar velocities.
The steps involved are: * (a) preparing the data for POTENT analysis, including grouping and correcting for Malmquist bias, * (b) smoothing the peculiar velocities into a uniformly-smoothed radial velocity field with minimum bias, * (c) applying the ansatz of gravitating potential flow to recover the potential and three-dimensional velocity field, and * (d) deriving the underlying density field by an approximation to GI in the mildly-nonlinear regime. The POTENT method, which grew out of the original method of Dekel, Bertschinger & Faber (1990, DBF), is described in detail in Dekel et al. (1998, D98) and is reviewed in the context of other methods by Dekel (1997, 1998). Further improvements since DBF have been introduced, which we use in the present analysis; they are discussed in detail by Sigad et al. (1998). We use the Mark III catalog of peculiar velocities (Willick et al. 1995, 1996, 1997a), which is a careful compilation of several data sets consisting of $`\sim 3000`$ spiral and elliptical galaxies. The non-trivial procedure of merging the data sets accounts for differences in the selection criteria, the quantities measured, the method of measurement and the TF calibration techniques. The data per galaxy consist of a redshift $`z`$ and a “forward” TF (or $`D_n`$–$`\sigma `$) inferred distance, $`d`$. The radial peculiar velocity is then $`u=cz-d`$. This sample enables a reasonable recovery of the smoothed dynamical fields in a sphere of radius $`50h^{-1}\mathrm{Mpc}`$ about the Local Group, extending to $`70h^{-1}\mathrm{Mpc}`$ in some well-sampled regions. The POTENT method is evaluated using mock catalogs. The mock catalogs and the underlying $`N`$-body simulation are described in detail in Kolatt et al. (1996, K96). Here we only stress that a special effort was made to generate a simulation that mimics the actual large-scale structure in the real universe, in order to take into account any possible dependence of the errors on the signal. ### 2.1 Errors in the POTENT Reconstruction D98 and PI98 demonstrate how well POTENT can do with ideal data of dense and uniform sampling and no distance errors. The reconstructed density field, from input that consisted of the exact, G12-smoothed radial velocities, is compared with the true G12 density field of the simulation. The comparison is done at grid points of spacing $`5h^{-1}\mathrm{Mpc}`$ inside a volume of effective radius $`40h^{-1}\mathrm{Mpc}`$. No bias is introduced by the POTENT procedure itself, and they find a small scatter of $`2.5\%`$ that reflects the accumulating effects of small deviations from potential flow, scatter in the non-linear approximation, and numerical errors. Using the mock catalogs described by K96, we want to check and quantify how well the POTENT reconstruction method works on our sparse and noisy data. Our goal is to eventually compare the POTENT fields to the density and velocity fields obtained from the distribution of clusters. Since the clusters are sparse tracers of the mass, we need to explore also smoothing radii larger than the G12 (commonly used in POTENT applications), and we check the G15 and G20 cases as well. The errors due to sparse sampling and nonlinear effects are expected to be smaller for the larger smoothing scales, while the sampling-gradient bias may increase. For each smoothing radius, we execute the POTENT algorithm on each of the 20 noisy mock realizations of the Mark III catalog, recovering 20 corresponding density and velocity fields.
We will later consider the individual fields as well as the mean fields averaged over the mock catalogs. The error in the POTENT density field at each point in space, $`\sigma _{\delta _p}`$, is taken to be the rms difference over the realizations between $`\delta _p`$ and the true density field of the simulation smoothed on the same scale. The errors on the smoothed velocity fields, $`\sigma _{v_p}`$, are estimated by a similar procedure from the supergalactic Y component of the mock velocity field. We evaluate the density and the velocity fields and their errors at the points of a Cartesian grid with $`5h^{-1}\mathrm{Mpc}`$ spacing. In the well-sampled regions, with the G15 smoothing, the errors are typically $`\sigma _{\delta _p}\simeq 0.1`$–$`0.3`$ for the density and $`\sigma _{v_p}\simeq 50`$–$`250\mathrm{km}\,\mathrm{s}^{-1}`$ for the velocity, but they are much larger in certain regions at large distances. The error estimates $`\sigma _{\delta _p}`$ and $`\sigma _{v_p}`$ are two of the criteria used to exclude the noisy regions from the comparison with the clusters. The third one is the distance to the 4th neighbouring object in the Mark III catalog, $`R_4`$, which provides us with a measure of poor sampling in the parent velocity catalog. Two more cuts are applied on the cluster density and velocity fields, using the errors $`\sigma _{v_c}`$ and $`\sigma _{\delta _c}`$ obtained from the mock catalog analysis in § 3.3.2. Furthermore, we only consider objects within $`R=70h^{-1}\mathrm{Mpc}`$ and outside the Zone of Avoidance ($`|b|>20^{\circ }`$). Our last constraint is on the misalignment angle between the cluster and the POTENT velocity vectors, $`\mathrm{\Delta }\theta `$, which we impose to be smaller than $`45^{\circ }`$. The reason for this additional cut is that we are assuming LB throughout as a working hypothesis. This predicts, for ideal data, that the velocity vectors reconstructed from the cluster distribution should be aligned with the velocity vectors of the mass deduced by POTENT. However, the various random errors and systematics in both types of real data analysed here cause deviations from this simple picture. In our “standard comparison volume” we restrict the comparison only to points where the velocity vectors of the POTENT and cluster fields are broadly aligned with each other. Note that the misalignment constraint may, in principle, affect our $`\beta `$ estimate, and it will therefore be dropped in some of the robustness tests performed in § 4.4 and § 4.5. A “standard comparison volume” is defined through the set of cuts reported in Table 3. The two cuts $`\sigma _{v_p}`$ and $`\sigma _{v_c}`$ turned out to be ineffective, in the sense that they are redundant for reasonable choices of the other parameters, and were not implemented. The $`\sigma _{\delta _c}`$ constraint is obtained by scaling $`\sigma _{\delta _p}`$ by the $`\beta _c`$ of BP96. For the sake of consistency, we perform all tests with the mock catalogs using the same standard cuts, even though the $`\mathrm{\Delta }\theta `$ cut does not affect the $`\delta `$-$`\delta `$ comparison. The standard volume, $`V_{st}`$, depends on the smoothing applied; for a Gaussian filter of 15 h<sup>-1</sup>Mpc it has an effective radius $`R_e\simeq 38h^{-1}\mathrm{Mpc}`$ (defined by $`(4\pi /3)R_e^3=V_{st}`$). The rms of $`\sigma _{\delta _p}`$ there is $`\sim 0.18`$ and that of $`\sigma _{v_p}`$ is $`\sim 150\mathrm{km}\,\mathrm{s}^{-1}`$.
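Operationally, the standard comparison volume amounts to a boolean mask over the grid points. The sketch below (ours; all threshold values are placeholders, not the actual entries of Table 3) combines the distance, latitude, sampling and error cuts with the misalignment constraint:

```python
import numpy as np

def comparison_mask(r, b_lat, r4, sig_dp, sig_dc, v_p, v_c,
                    r_max=70.0, b_min=20.0, r4_max=15.0,
                    sig_dp_max=0.3, sig_dc_max=1.5, theta_max=45.0):
    """Boolean mask selecting 'good' grid points.  r is the distance in
    h^-1 Mpc, b_lat the Galactic latitude in degrees, r4 the 4th-neighbour
    distance, sig_* the density-error estimates, and v_p, v_c the POTENT
    and cluster velocity vectors with shape (3, ...).  Cut values here are
    placeholders only."""
    cos_dt = (v_p * v_c).sum(axis=0) / (
        np.linalg.norm(v_p, axis=0) * np.linalg.norm(v_c, axis=0))
    aligned = cos_dt > np.cos(np.radians(theta_max))
    return ((r < r_max) & (np.abs(b_lat) > b_min) & (r4 < r4_max) &
            (sig_dp < sig_dp_max) & (sig_dc < sig_dc_max) & aligned)
```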
Finally, note that the misalignment criterion depends on the particular velocity field of the generic mock catalog, and therefore the same standard cuts define slightly different comparison volumes in each of the mock Mark III and cluster catalogs tested. In what follows we will present the results as the average of the individual results obtained for each catalog, and for illustrative purposes also show the results obtained for the mean fields, averaged over the mock catalogs. (The volumes corresponding to the individual mock catalogs typically share $`\sim 80\%`$ of their points with the volume defined for the average fields.) Part of these errors are systematic. The systematic errors can be evaluated by inspecting the average of the results over the mock catalogs, or by comparing directly the average POTENT density and velocity fields to the underlying smoothed fields of the simulation. The top panel of Figure 1 shows this comparison for the G15 smoothed density fields, at the points of a uniform grid inside the standard volume. The residuals in this scatter plot ($`\overline{\delta }_p`$ vs. $`\delta _t`$) are the local systematic errors. Their rms value over the standard volume is $`\sim 0.08`$. The corresponding rms of the random errors ($`\delta _p`$ vs. $`\overline{\delta }_p`$) is $`0.16`$. The systematic and random errors add in quadrature to give the total error ($`\delta _p`$ vs. $`\delta _t`$), whose rms over the realizations at each point, $`\sigma _p`$, is used in the analysis below. To quantify the effect of these errors on the determination of $`\beta `$ we perform a regression of $`\delta _p`$ on $`\delta _t`$, for each mock catalog and for the average field, by minimizing the following $`\chi ^2`$: $$\chi ^2=\sum _{i=1}^{N_{tot}}\frac{(\delta _{p,i}-A-B\delta _{t,i})^2}{\sigma _{\delta _p,i}^2},$$ (2) The figure shows no considerable systematic deviations from the $`\delta _p=\delta _t`$ line (the slope of the regression for the average field comes out $`1.01`$). The average of the slopes over the 20 mock realizations comes out slightly deviant from unity, $`1.06`$, with a standard deviation of $`0.17`$. For the other smoothings, G12 and G20, in the standard comparison volume, the average of slopes is $`1.06\pm 0.09`$ and $`0.98\pm 0.28`$, respectively. For other choices of the cuts (see § 4.4 and § 4.5 for some illustrative examples) the slope changes by a couple of percent; e.g., for G15, the typical deviation from unity is by up to $`5\%`$ going either way. Finally, only negligible zero-point offsets have been detected in all of the above comparisons. These results indicate that the final systematic errors are hardly correlated with the signal or with themselves, contributing no significant bias in comparisons with densities from redshift surveys.
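The minimization of equation (2) is a standard weighted linear regression with a closed-form solution; a compact sketch (ours, with hypothetical names) is:

```python
import numpy as np

def weighted_fit(x, y, sigma_y):
    """Closed-form minimization of eq. (2):
    chi^2 = sum_i (y_i - A - B x_i)^2 / sigma_y_i^2.
    Returns the zero point A and slope B."""
    w = 1.0 / sigma_y**2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    d = S * Sxx - Sx**2
    return (Sxx * Sy - Sx * Sxy) / d, (S * Sxy - Sx * Sy) / d

# e.g. A, B = weighted_fit(delta_t, delta_p, sigma_dp) for each mock catalog
```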
An analogous comparison has been performed between the supergalactic Y components of the velocity fields at the same points. We limit our analysis to this component only, as it is the one least affected by the uncertainties of the mass distribution near the Galactic plane in the forthcoming comparison with cluster velocities. Using the two remaining Cartesian components would require an estimate of possible systematic errors that are uncertain for the cluster case (see § 3.3.2). The bottom panel of Figure 1 shows the corresponding scatter diagram of the average field vs. the underlying one, again with G15 smoothing in the standard volume. The rms value of the residuals over the comparison volume, which represents the local systematic error ($`\overline{v}_p`$ vs. $`v_t`$), is $`\sim 75\mathrm{km}\,\mathrm{s}^{-1}`$. The rms value of the random errors around the average ($`v_p`$ vs. $`\overline{v}_p`$) is $`\sim 130\mathrm{km}\,\mathrm{s}^{-1}`$. Visual inspection of the figure shows clear signs of systematic errors. The peculiar morphology in the velocity–velocity scatterplot reflects correlated velocities within individual cosmic structures. Indeed, the coherence length of the velocity field is much larger than for the density field, leading to oversampling and correlations among the errors. The overall effect on the slope is a bias toward values smaller than unity. The slope of the best-fitting line for the average field is in this case $`0.81`$, and the average of slopes over the mock catalogs reflects this as well, giving an average slope of $`0.80\pm 0.18`$. The average slope is $`0.93\pm 0.13`$ for the G12 case, and it is $`0.76\pm 0.31`$ for G20. When varying the volume, in the G15 case, the bias is typically 12–22%. Thus, the velocity comparisons tend, in general, to be less robust than the density comparisons and also more sensitive to the smoothing scale. This is probably partly due to the larger cosmic scatter in the velocity field, to larger systematic biases in the POTENT analysis (in particular the window bias and the sampling-gradient bias), which become more severe for large smoothing scales, and to correlation among the errors. The POTENT output of the real Mark III data is similarly provided, for the three different smoothings, on a Cartesian grid of spacing $`5h^{-1}\mathrm{Mpc}`$, within a volume of radius $`80h^{-1}\mathrm{Mpc}`$. The errors at each grid point ($`\sigma _{\delta _p}`$ and $`\sigma _{v_p}`$) are taken to be the error estimates of the mock catalogs detailed above, i.e., the rms difference over the realizations between the recovered fields and the true underlying one. ## 3 Reconstructing the Cluster Density and Velocity Fields The present analysis is based on the real-space cluster distribution and peculiar velocities recovered from the observed distribution of Abell/ACO clusters in redshift space. The details of our reconstruction method are described in BP96. Here we briefly describe the data and sketch the main features of the procedure, including the error analysis. It is worth noticing that in the present comparison with POTENT we are mainly interested in a local region of radius $`70h^{-1}\mathrm{Mpc}`$, where the reconstruction technique is more reliable than at larger distances. ### 3.1 Cluster Data The cluster sample used in BP96 contains all the Abell and ACO clusters (Abell 1958; Abell, Corwin and Olowin 1989) of richness class $`R\geq 0`$ within $`250h^{-1}\mathrm{Mpc}`$, $`|b|\geq 13^{\circ }`$ and $`m_{10}\leq 17`$ (where $`m_{10}`$ is the magnitude of the tenth brightest cluster galaxy, as corrected in PV91). The Abell and ACO catalogs were unified into a statistically homogeneous whole-sky sample of clusters using the distance-dependent weighting scheme of PV91. The sample used in this work contains the same $`\sim 500`$ clusters, for which $`96\%`$ now have measured redshifts, most recently from the ESO Nearby Abell Cluster Survey (Katgert et al. 1996, ENACS). For the remaining $`\sim 20`$ clusters the redshifts were estimated from the $`m_{10}`$–$`z`$ relation calibrated as in PV91. The results from this improved sample turn out to be fully consistent with the original ones of BP96.
### 3.2 Reconstruction of Uniform Cluster Catalogs in Real Space Our reconstruction procedure consists of two steps. First, Monte Carlo techniques are used to correct for observational biases and return a whole-sky distribution of clusters in redshift space. Then, this distribution is fed into an iterative reconstruction procedure (similar in spirit to Yahil et al. 1991) which assumes linear GI+LB to recover the real-space positions and peculiar velocities of the clusters. The main observational biases arise from a systematic mismatch between the Abell and ACO catalogs and from the latitude-dependent Galactic obscuration; the radial selection is not an issue because it is quite uniform in the volume relevant for our analysis. To minimize the possible systematic errors in the model cluster velocity field we need to unify the Abell and ACO catalogs into a statistically homogeneous whole-sky sample of clusters. We obtain this by using the distance-dependent weighting scheme of PV91, which enforces the same number density in equal-volume shells for the two cluster populations. The number of radial shells is left as a free parameter. To correct for Galactic obscuration, we generate a set of cluster catalogs of uniform sky coverage by adding a population of synthetic clusters. The Galactic obscuration at $`|b|\geq 20^{\circ }`$ is modelled by a cosecant law, $`𝒫(b)`$. As in BP96 we have chosen two different sets of absorption coefficients to account for observational uncertainties. Synthetic clusters are added with a probability proportional to $`𝒫(b)`$ such that they are spatially correlated with the real clusters according to the observed cluster-cluster correlation function. Within the ZoA, $`|b|<20^{\circ }`$, the volume is filled with synthetic clusters, in bins of redshift and longitude, by cloning the cluster distribution in the adjacent latitude strips outside the ZoA. Both the real and the synthetic, Monte Carlo generated clusters are mass weighted to determine the density field to be used in equation (3) below. The mass of each real cluster is proportional to the number of galaxies per cluster listed in the Abell catalog. The mass of the synthetic clusters is set equal to the mass of the real ones. The redshift space distortions are corrected by an iterative procedure based on linear theory and linear biasing. Equation (1) can be inverted to yield $$\text{v}(\text{x})=\frac{\beta _c}{4\pi }\int d^3x^{\prime }\frac{\delta (\text{x}^{\prime })(\text{x}^{\prime }-\text{x})}{|\text{x}^{\prime }-\text{x}|^3}.$$ (3) This is used, in each iteration, to compute the radial peculiar velocities of the individual clusters, $`u`$, in the LG frame, and to derive their real distances, $`r`$, via $`r=cz-u`$. To avoid strong nonlinear effects, the force field generated by the point-mass clusters is smoothed by a top-hat window of radius $`15h^{-1}\mathrm{Mpc}`$, chosen to be comparable to the cluster-cluster correlation length (see BP96). A meaningful comparison with POTENT, in view of the large mean separation between clusters ($`\sim 25h^{-1}\mathrm{Mpc}`$), requires that we smooth the density and velocity fields further. As input to this procedure one has to assume a value for $`\beta _c`$, which affects the peculiar velocities but has only a weak effect on the real-space distances as long as $`\beta _c`$ is in the right ballpark (see BP96). We have assumed $`\beta _c=0.21`$, based on matching the dipoles of the CMB and the cluster distribution (e.g., BP96).
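A heavily simplified sketch of this iterative scheme is given below. The discrete evaluation of equation (3) as a direct sum over mass-weighted clusters, the uniform-background term, and all names are our own assumptions; the top-hat force smoothing and the LG-frame corrections of the actual procedure are omitted for brevity:

```python
import numpy as np

H0 = 100.0  # km/s per h^-1 Mpc; positions in h^-1 Mpc, velocities in km/s

def linear_velocity(x, pos, weight, beta_c, nbar):
    """Eq. (3) as a direct sum over clusters of weight w_j and mean number
    density nbar.  The uniform background contributes the -x/3 term
    (Newton's theorem for a homogeneous sphere centred on the origin)."""
    d = pos - x                                   # (N, 3)
    r3 = (d * d).sum(axis=1) ** 1.5
    s = (d * (weight / r3)[:, None]).sum(axis=0)
    return beta_c * H0 * (s / (4.0 * np.pi * nbar) - x / 3.0)

def iterate_distances(cz, nhat, r, weight, beta_c, nbar, n_iter=10):
    """Iteratively update real-space distances via r = cz - u, where u is
    the predicted radial peculiar velocity at each (self-excluded) cluster."""
    for _ in range(n_iter):
        pos = r[:, None] * nhat
        u = np.array([linear_velocity(pos[i], np.delete(pos, i, axis=0),
                                      np.delete(weight, i), beta_c, nbar)
                      @ nhat[i] for i in range(len(cz))])
        r = cz - u
    return r
```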
### 3.3 Errors in the Cluster Positions and Velocities Ideally, we would like to implement the same POTENT error assignment procedure also for the cluster case. However, intrinsic difficulties in modelling the Abell/ACO selection criteria hamper the compilation of mock cluster catalogs from the K96 simulation, a problem made worse by the size of the N-body computational volume, which is smaller than the one spanned by the real cluster population. We choose instead to evaluate the cluster errors using a hybrid scheme in which: * The Monte Carlo procedure of adding synthetic clusters and varying the parameters is also used to estimate the random and systematic errors that arise both from the uncertainties in modelling the observational biases and from the approximations in the reconstruction. * A mock catalog analysis, similar to the one used to assess the POTENT errors, is implemented to quantify the additional random errors that arise from the sparseness of the cluster sampling. #### 3.3.1 Monte Carlo Analysis The reconstruction procedure depends on a number of parameters that are only weakly constrained by observational data or theoretical arguments, such as the galactic absorption coefficients, the force smoothing length, and the weighting scheme used to homogenise the Abell and ACO catalogs. BP96 evaluated the sensitivity of the derived density and velocity fields to these parameters by allowing the parameters to vary about the standard set defined in their Table 2. The total uncertainties in the cluster positions and in the radial velocities, estimated in the CMB frame, arise from several different sources: * Intrinsic errors of the reconstruction procedure, which we estimate by the standard deviation of the cluster distances over 10 Monte-Carlo realizations of the same choice of parameters. * Observational errors, accounting for the freedom in the values of the free parameters, which we estimate by the standard deviation of the distances over reconstructions with different sets of values for the parameters. * Shot-noise error (eq. of BP96), due to the uncertainty in the mass per cluster, which we assume proportional to the number of galaxies listed in the Abell catalog (see BP96). This error is estimated to be $`\sim 70\mathrm{km}\,\mathrm{s}^{-1}`$ (one dimensional). The shot noise due to the sparseness of the mass tracers will be estimated numerically in § 3.3.2. * Weight uncertainty. This error accounts for uncertainties in the relative weighting of Abell versus ACO clusters to correct for systematic differences between these catalogs. For this purpose, we have performed 10 different reconstructions in which the weights were randomly scattered about the standard weights of BP96 (their eq.), following a Gaussian distribution of width equal to the Poisson error in the relative number densities of Abell and ACO clusters at every given distance. The estimated typical weight uncertainty turns out to be $`\sim 85\mathrm{km}\,\mathrm{s}^{-1}`$. * Projection uncertainty. A worry when using the Abell/ACO clusters for statistical purposes is the contamination of the cluster richness due to projection of foreground and background galaxies (e.g., Dekel et al. 1989 and references therein). The resulting uncertainty in the galaxy count per cluster has recently been estimated (Van Haarlem et al. 1997, using N-body simulations; Mazure et al. 1996, using the ENACS survey) to be $`\sim 17\%`$.
To translate this error into a distance error, we have performed 10 reconstructions where the cluster richnesses were randomly perturbed by a $`17\%`$ Gaussian, yielding an error of $`\sim 80\mathrm{km}\,\mathrm{s}^{-1}`$. An upper bound to the total error for each Abell/ACO cluster can be estimated by adding in quadrature all the above errors, as if they were all independent. This results in an average error of $`254\mathrm{km}\,\mathrm{s}^{-1}`$, with a large spread of $`\pm 117\mathrm{km}\,\mathrm{s}^{-1}`$. If we add in quadrature only the observational, intrinsic and shot-noise errors, which are independent of each other, the average error drops only to $`218\mathrm{km}\,\mathrm{s}^{-1}`$, indicating that our upper bound is not a gross over-estimate of the true error. Figure 2 shows the distribution of the total reconstruction errors in the line-of-sight component of the peculiar velocities for the $`\sim 500`$ clusters of our subsample. Unlike the intrinsic ones, the observational, shot-noise, weighting and projection errors are isotropic. Therefore, they are also representative of the uncertainties along the supergalactic Y component of the velocity fields, and they will be used to estimate the cluster velocity errors $`\sigma _{v_c}`$ in § 3.4. For the comparison with POTENT we should identify the regions where the reconstruction from clusters is reliable. The errors are naturally larger in regions where the fraction of observed clusters is lower. This effect is clearly seen when we plot the intrinsic error per cluster as a function of Galactic latitude (Fig. 3). As expected, no radial dependence has been detected for the errors within the volume used for the present analysis. #### 3.3.2 Mock Catalog Analysis Due to their low number density, clusters of galaxies sparsely sample the underlying density and velocity fields. This introduces an intrinsic scatter, sometimes also termed ‘shot noise’, when comparing the cluster and the mass fields. This is closely related to the expected scatter from the stochasticity in the bias relation (e.g., Dekel & Lahav 1999). Such random errors are not included a priori in our cluster error estimates, but need to be accounted for in the actual comparisons of the cluster fields to the POTENT reconstructions. We further assess the reliability of the cluster fields, and get a crude estimate of the additional scatter, using the same N-body simulation that was the basis for the Mark III mock catalogs. In the present case, however, we obtain just one mock catalog of clusters from the simulation. We use a friends-of-friends algorithm to identify groups among the particles in the simulation. The richest groups above some threshold, fixed so as to have the same number density as the Abell/ACO clusters, are identified as the mock clusters. To mimic the properties of the real cluster distribution we need to extend our mock sample out to $`250h^{-1}\mathrm{Mpc}`$. Since this exceeds the size of the simulation, we obtain the mock cluster distribution by duplicating the clusters within the computational box using the periodic boundary conditions. The peculiar velocities from the clusters are then computed from equation (3), and both the cluster density and the velocity fields are smoothed at the points of a cubic grid with $`5h^{-1}\mathrm{Mpc}`$ spacing. The smoothed cluster fields are then compared with the true underlying fields of the simulation, smoothed on the same scale.
Under the GI+LB assumptions, the fields are simply related by the biasing factor between clusters and mass in the simulation, both in the density case and for the velocities (for our $`\mathrm{\Omega }=1`$ simulation). We use the same “standard comparison volume” considered for the POTENT vs. true comparisons and used later for the POTENT vs. cluster comparisons. Figure 4 shows the results for the G15 case. The slopes of the best-fitting lines for the $`\delta `$-$`\delta `$ comparison ($`0.32`$, top panel) and for the $`v_y`$-$`v_y`$ comparison ($`0.30`$, bottom panel) have been estimated by assigning equal weight to all points in the plots. They are a measure of the relative biasing between clusters and mass ($`b_c^{-1}`$, when regressing clusters on true fields). The average value over the 20 volumes defined by the POTENT mock catalogs with the same criteria is $`0.31\pm 0.03`$ for the density fields and $`0.33\pm 0.05`$ for the velocities. For general variations in the comparison volume, values of 0.30–0.38 are typically obtained, with a slight tendency of the values obtained from velocities to be higher than those obtained from densities, within this range. Unlike the POTENT analysis in § 2.1, we do not know the ‘true’ expected slope for the mock clusters, and this sort of comparison is in practice a way of defining it. Variations between the values obtained from densities and from velocities may arise because of the larger cosmic scatter for the velocities and uncertainties in modelling the cluster distribution outside the computational volume. Another possible cause of the mismatch is the already mentioned strong correlation among the errors in the $`v_y`$-$`v_y`$ analysis. Note that the difference between the slopes obtained from the $`\delta `$-$`\delta `$ and the $`v_y`$-$`v_y`$ comparisons is smaller than in the POTENT analysis of § 2.1, meaning that the various error sources affecting the $`v_y`$-$`v_y`$ comparison tend to compensate each other. The distribution of the distant clusters may significantly affect the cluster velocities, while it is almost irrelevant when computing the smoothed density field within 70 $`h^{-1}`$ Mpc. We therefore regard $`b_c^{-1}=0.31`$ as the ‘true’ value for the G15 standard case. Similar values are obtained for $`b_c^{-1}`$ with the other smoothing scales. A considerable scatter about the regression lines is found in both the $`\delta `$-$`\delta `$ and the $`v_y`$-$`v_y`$ comparisons. Since the clusters in this case are free from the observational and modelling errors, this scatter is a manifestation of the additional inherent scatter in the cluster fields mentioned above. For G15, the detected scatter is $`\sigma _\delta ^{int}=0.36`$ in the density case and $`\sigma _v^{int}=300\mathrm{km}\,\mathrm{s}^{-1}`$ for the velocities. The change with smoothing scale is as expected: for G12 the scatter is larger (0.46 and 400, for densities and velocities, respectively) and for G20 it is smaller (0.24 and 200). These estimates are quite robust to changes in the comparison volume. In what follows, we adopt these dispersions as a measure of the intrinsic scatter of the cluster fields, for the POTENT-cluster mock tests in § 4.2, and also for the comparisons with the real data. The drawback of the latter assumption is the fact that the mock clusters do not accurately match the Abell/ACO cluster distribution.
Most of the mock clusters do not correspond on a one-to-one basis to the Abell/ACO ones, and are less spatially correlated. Furthermore, since the mock clusters are identified from a simulation based on IRAS galaxies, which tend to avoid high density regions, they might represent density peaks of a systematically lower amplitude with respect to those of the Abell/ACO clusters. Finally, the dissimilarity could be even more severe for the velocity calculations because of the duplication procedure adopted outside the N-body computational volume. Still, not having a better way to accurately determine this scatter for the real data, and understanding that its magnitude depends mainly on the sparseness of the clusters and the smoothing adopted, we believe that our approach does give a crude estimate of the effect. It is interesting to check the plausibility of our results by comparing them to the analytic estimates of shot noise computed according to Yahil et al. (1991). For G15, and within a radius of 70 h<sup>-1</sup>Mpc, we obtain $`\sigma _\delta ^{an.}=0.42`$ for the $`\delta `$-$`\delta `$ case and $`\sigma _v^{an.}=1540\beta _c\mathrm{km}\,\mathrm{s}^{-1}`$ for the $`v_y`$-$`v_y`$ one. Scaling to the $`\beta _c=0.21`$ value of BP96, we find that in both cases the analytic shot noise is close to, although somewhat larger than, the scatter in the simulations. The explicit assumption made here is that the intrinsic scatter found in the mock simulation is representative of the real clusters too, and is independent of the other sources of error and of the underlying field. The plausibility of these hypotheses will be assessed a posteriori when comparing the real cluster and POTENT fields. As for the POTENT analysis (§ 2.1), the cluster velocity analysis has been limited to the supergalactic Y component, which is less prone to systematics. More extended mock catalog analyses, based however on the distribution of IRAS galaxies, have demonstrated this point (Branchini et al. 1999). The other two Cartesian components are affected by systematic errors arising from the cloning procedure that is used to fill the ZoA. Given the larger extent of the ZoA in the cluster case, we expect comparable, if not larger, systematics to affect the cluster velocity field. Correcting for this bias would require an error analysis based on more realistic Abell/ACO mock catalogs that are not currently available. Therefore, we restrict our analysis to a $`v_y`$-$`v_y`$ comparison, under the working hypothesis that the Y component of the cluster velocity field is only affected by random errors. ### 3.4 Smoothed Density and Velocity Fields For the purpose of the comparison with POTENT, we compute smoothed density and velocity fields at the points of a cubic grid with spacing $`5h^{-1}\mathrm{Mpc}`$ inside a box of side $`320h^{-1}\mathrm{Mpc}`$ centered on the Local Group. We first generate 20 Monte-Carlo realizations of our standard model as described above. To mimic the effect of observational errors and shot noise, we perturb the cluster distances with a Gaussian noise of $`\sim 150`$ km s<sup>-1</sup>. This value is slightly larger than the sum in quadrature of the average observational and shot-noise errors, and corrects for a positive tail in the error distribution similar to the one observed in Figure 2. The intrinsic error is not included here because it will enter implicitly when we later average over the 20 fields.
A Cloud-in-Cell (CIC) scheme is used to translate the discrete cluster distribution into a density field at the grid points. The peculiar velocities at the grid points are recomputed from the reconstructed cluster distribution via linear GI+LB, with the force field smoothed by a small-scale top-hat window of radius $`5h^{-1}\mathrm{Mpc}`$. We minimize the scale of the top-hat force smoothing in order to eventually end up as close as possible to Gaussian smoothing on scales $`\gtrsim 12h^{-1}\mathrm{Mpc}`$. The 20 fields are then smoothed further with a Gaussian window of the larger radius $`(R_s^2-R_1^2)^{0.5}`$, where $`R_1`$ is the Gaussian smoothing radius equivalent in volume to the $`5h^{-1}\mathrm{Mpc}`$ top-hat force smoothing. As mentioned already, we try three different smoothing scales, of $`R_s=12`$, $`15`$ and $`20h^{-1}\mathrm{Mpc}`$. These 20 fields are averaged to give the final smoothed fields used in the next section. Each of these 20 fields is affected by observational and shot-noise errors. Moreover, since they represent 20 different Monte-Carlo realizations of the cluster distribution for the same choice of parameters, we can account for the intrinsic errors by the very same averaging procedure. Indeed, the standard deviation over the 20 realizations represents the cumulative effect of the intrinsic, observational and shot-noise errors discussed in § 3.3.1. The only contribution left to the total error budget is the intrinsic scatter estimated in § 3.3.2. This is modeled as a Gaussian noise and is added in quadrature at each gridpoint. Therefore, the error estimates for the smoothed cluster fields, $`\sigma _{\delta _c}`$ and $`\sigma _{v_c}`$, are obtained by taking the standard deviation over the 20 catalogs and adding in quadrature a Gaussian noise of amplitude equal to the intrinsic scatter.
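For concreteness, here is a sketch of the gridding and smoothing steps (our own illustrative implementation, with periodic wrapping only to keep it short; the use of scipy and all names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cic_deposit(pos, mass, n, box, origin):
    """Cloud-in-Cell assignment of mass-weighted points onto an n^3 grid."""
    dx = box / n
    g = (pos - origin) / dx - 0.5       # coordinates in units of cells
    i0 = np.floor(g).astype(int)
    f = g - i0                          # fractional offset within the cell
    grid = np.zeros((n, n, n))
    for ox in (0, 1):                   # spread each point over 8 cells
        for oy in (0, 1):
            for oz in (0, 1):
                w = mass * (np.abs(1 - ox - f[:, 0]) *
                            np.abs(1 - oy - f[:, 1]) *
                            np.abs(1 - oz - f[:, 2]))
                np.add.at(grid, ((i0[:, 0] + ox) % n,
                                 (i0[:, 1] + oy) % n,
                                 (i0[:, 2] + oz) % n), w)
    return grid

def smoothed_delta(grid, rs, r1, dx=5.0):
    """Density contrast smoothed with the reduced Gaussian radius
    (rs^2 - r1^2)^0.5, r1 being the Gaussian equivalent of the
    5 h^-1 Mpc top-hat force smoothing; all lengths in h^-1 Mpc."""
    delta = grid / grid.mean() - 1.0
    return gaussian_filter(delta, np.sqrt(rs**2 - r1**2) / dx)
```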
As mentioned already in § 2.1, we impose as well a constraint on the misalignment angle between the cluster and POTENT velocity vectors. This serves as an additional classifier of “good” points for the comparison, and helps us avoid regions where we might have large, perhaps unaccounted for, errors. In our main analysis we restrict the comparison to points with a maximal misalignment angle of $`\mathrm{\Delta }\theta <45^{\circ }`$. We later relax this constraint and verify the robustness of the results. We take the G15 smoothing as our standard case, with the above set of criteria defining our ‘standard’ comparison volume.

### 4.1 The $`\beta _c`$ Fitting Method

The assumption underlying the $`\beta _c`$ estimations is that the density and velocity fields recovered above are consistent with the model of GI+LB. In this framework the POTENT and the cluster fields are linearly related:

$$p=\beta _cc+A,$$ (4)

where $`p`$ and $`c`$ stand for the POTENT and cluster fields and represent either $`\delta `$ or the supergalactic Y velocity component. The cluster errors, $`\sigma _c`$, are comparable to the POTENT errors, $`\sigma _p`$. The best-fit parameters are therefore obtained by minimising the quantity

$$\chi ^2=\sum _{i=1}^{N_{tot}}\frac{(p_i-A-\beta _cc_i)^2}{(\sigma _{p,i}^2+\beta _c^2\sigma _{c,i}^2)},$$ (5)

where the subscript $`i`$ refers to any of the $`N_{tot}`$ gridpoints within the comparison volume. Since the fields have been smoothed on scales much larger than the grid separation, these points are, however, not independent. As in Hudson et al. (1995; see also Dekel et al. 1993), we estimate the effective number of independent points, $`N_{eff}`$, as

$$N_{eff}^{-1}=N_{tot}^{-2}\sum _{j=1}^{N_{tot}}\sum _{i=1}^{N_{tot}}\mathrm{exp}(-r_{ij}^2/2R_s^2),$$ (6)

where $`r_{ij}`$ is the separation between gridpoints $`i`$ and $`j`$. This expression weights the dependent grid points, properly taking into account the finite comparison volume and its specific shape. This estimate is thus more accurate than the simplistic ratio of the comparison volume to the effective volume of the smoothing window, which assumes an infinite comparison volume. We account for the oversampling problem by using an effective $`\chi ^2`$ statistic defined by $`\chi _{eff}^2\equiv (N_{eff}/N_{tot})\chi ^2`$, which is equivalent to multiplying the individual errors by the square root of the oversampling ratio $`N_{tot}/N_{eff}`$. The assumption we make is that this new statistic is approximately distributed like a $`\chi ^2`$ with $`N_{eff}`$ degrees of freedom. In what follows we use it to assess the errors in $`\beta _c`$ and $`A`$.

### 4.2 Testing the Comparison

Before performing the comparisons with the real data, we wish to further quantify the possible systematics that might enter, verify the validity of the smoothing scheme adopted, and find the optimal smoothing scale for the comparisons. Also, we would like to understand whether the intrinsic differences between the density and velocity field comparisons can affect the results. We do this by comparing the mock POTENT fields of § 2.1 with the mock cluster ones from § 3.3.2. The results of the density and velocity comparisons, within the standard comparison volume, for each of our three smoothing scales, are reported in Table 3. The values quoted in the table are all mean values averaged over the results of the 20 mock POTENT catalogs.
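In practice the fit of § 4.1 reduces to a few lines of code; the sketch below evaluates $`N_{eff}`$ from Eq. (6) and minimises $`\chi _{eff}^2`$ by brute force over a grid in ($`\beta _c`$, $`A`$). The search ranges are illustrative assumptions:

```python
import numpy as np

def n_eff(points, Rs):
    # Eq. (6); points is an (N_tot, 3) array of gridpoint positions
    r2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return len(points) ** 2 / np.exp(-r2 / (2.0 * Rs ** 2)).sum()

def fit_beta(p, c, sp, sc, points, Rs=15.0):
    # minimise Eq. (5), rescaled by N_eff/N_tot, over (beta_c, A)
    Ntot = len(p)
    ratio = n_eff(points, Rs) / Ntot
    B, A = np.meshgrid(np.linspace(0.05, 1.0, 191),
                       np.linspace(-0.5, 0.5, 101), indexing="ij")
    chi2 = np.zeros_like(B)
    for i in range(Ntot):                      # accumulate Eq. (5) term by term
        chi2 += (p[i] - A - B * c[i]) ** 2 / (sp[i] ** 2 + B ** 2 * sc[i] ** 2)
    chi2 *= ratio                              # chi^2_eff
    ib, ia = np.unravel_index(chi2.argmin(), chi2.shape)
    return B[ib, ia], A[ib, ia], chi2[ib, ia] / (ratio * Ntot)
```

The 1-sigma uncertainties on $`\beta _c`$ and $`A`$ then follow from the $`\mathrm{\Delta }\chi _{eff}^2=1`$ contour around the minimum.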
The G15 case is illustrated in Figure 5, where the average POTENT fields are compared to the cluster ones. The values obtained for $`\beta _c`$ in the G15 case are encouragingly close to the “true” value of $`0.31`$ (§ 3.3.2). The values obtained when altering the chosen comparison volume, from both densities and velocities, lie in the range $`0.26-0.35`$. There are small differences for the other smoothings, with a tendency towards smaller $`\beta _c`$ for the G20 case. All variations are, however, well within the formal $`\chi _{eff}^2`$ error-bars. In all cases, the zero-point found is hardly significant and is consistent with zero, within the error-bars. The $`\chi ^2/N_{eff}`$ values (defined as S in the table) are only slightly smaller than unity for the density comparisons, well within the accepted range given the small number of degrees of freedom ($`1\pm 0.4`$ for G15), but they are significantly smaller for the velocities. A similar trend was also found when comparing the POTENT fields with the true N-body ones. This may indicate that the model velocities are more strongly correlated than is accounted for by the $`N_{eff}`$ scaling obtained for the densities. If true, then our $`\chi _{eff}^2`$ distribution deviates from that of a $`\chi ^2`$ with $`N_{eff}`$ degrees of freedom, resulting in an error overestimate and in a $`\chi ^2/N_{eff}<1`$ for velocities. Note that this effect is also associated with the application of the alignment constraint, and without it the value of $`\chi ^2`$ increases somewhat. So, perhaps for aligned vectors our errors are overestimated. Another, possibly not exclusive, explanation, which is also suggested by the POTENT vs. N-body fields comparison, is the existence of systematic errors that do not average to zero. The formal error-bars obtained for $`\beta _c`$, in all cases, are significantly larger (by a factor of $`\sim 2`$) than the spread of values obtained from the 20 Mark III mock catalogs. Again, this may suggest that our effective $`\chi ^2`$ statistic recovers the correct slope but overestimates the errors on $`\beta _c`$, also for the $`\delta `$-$`\delta `$ comparison. In summary, the important conclusion from the comparison of the mock data is that our method provides a fairly reliable estimate of $`\beta _c`$ with no gross biases. The results are fairly robust to changes in the comparison volume and the smoothing scale. The reasonable $`\chi ^2`$ values obtained for the densities, versus the too-low values for the velocities, incline us to regard, also here, the density comparison as the more reliable one. Our purpose is to constrain $`\beta _c`$ in a meaningful way by comparing the density and velocity fields extracted from the cluster distribution with the fields recovered by POTENT from peculiar velocities. The competing obstacles are the very sparse sampling of the underlying density field by the clusters, on the one hand, and the limited volume sampled by peculiar velocities, on the other. The former dictates the use of a large smoothing scale, because small-scale structure is not traced properly by the clusters, while the latter calls for a relatively small smoothing scale, in order to minimize both the cosmic scatter associated with the number of independent volumes and the systematic biases in the method. The rough agreement between the results of different smoothings is encouraging.
Large smoothing scales, such as the G20 case, are perhaps more prone to systematics, and in any case, since they reduce the number of independent data points, they can only constrain $`\beta _c`$ very weakly, with large uncertainties. These considerations, along with the fact that it matches the intercluster separation, made us choose the G15 filter as our standard for the real data analysis. As mentioned before, the formal errorbars from the $`\chi _{eff}^2`$ statistic may overestimate the actual uncertainty in the results, but we conservatively choose to stick with these.

### 4.3 Visual Comparison of Maps

Figure 6 displays the G15 density and velocity fields (in the CMB frame) from the cluster (left) and POTENT (right) reconstructions of the real data, in three slices parallel to the Supergalactic plane, within a sphere of radius $`80h^{-1}\mathrm{Mpc}`$ about the Local Group. The clusters’ densities and velocities are scaled by $`\beta _c=0.21`$. The heavy line delineates our standard comparison volume. The similarity between the two density fields is evident in most regions. In both fields, the dominant features are the Great Attractor (on the left), the Perseus-Pisces supercluster (on the right), and the great void in between. On the other hand, the Coma supercluster, seen in the cluster map near $`(X,Y)\approx (0,70)`$, is not reproduced at the same position in the POTENT map. Differences are also seen in the upper-right quadrant of the $`Z=-25h^{-1}\mathrm{Mpc}`$ plane. There is also some qualitative agreement between the velocity fields, but it is less striking. The main features common to the two fields are the convergences into the Great Attractor and into Perseus-Pisces. The main difference is an additional bulk flow from right to left in the POTENT field, apparent in all three slices. Another feature absent in the POTENT field is the infall into Coma seen in the cluster field. Note that the main discrepancies lie outside the comparison volume, in regions where the errors are expected to be large in at least one of the reconstructions. These regions will be excluded from the quantitative comparison below. It is also worth noticing that the density–velocity maps for the clusters are very similar to those obtained by Scaramella (1995b) from the same Abell/ACO cluster catalogues but using a somewhat different technique.

### 4.4 Estimating $`\beta _c`$ by a Density Comparison

We perform the $`\delta `$-$`\delta `$ comparison within the standard volume. The errors in the POTENT field have been evaluated in § 2.1, and for the clusters we use the error estimates of § 3.3. The results for the three smoothing radii are displayed in the first three rows of Table 3. For the preferred G15 case we find $`\beta _c=0.20\pm 0.07`$, and the best-fit value stays essentially the same for the other cases (with the errorbar increasing with the smoothing, due to the smaller number of effectively independent points). No significant zero-point offset is found in any of the cases. The $`\delta `$-$`\delta `$ scatterplot is displayed in Figure 7 for the G15 case. The solid line is the best fit from the $`\chi ^2`$ minimization. We have tried several variants of the comparison volume, in order to check the sensitivity of our results. Two representative examples are reported in the last two rows of Table 3. In the fourth case we have considered the original standard volume but with a stricter $`R_4`$ cut, $`R_4<10h^{-1}\mathrm{Mpc}`$. The last column shows the results for the most interesting experiment, i.e.,
the one in which the misalignment constraint has been removed. The results of these tests all confirm the robustness of the $`\beta _c\approx 0.2`$ value. For the density comparisons, we generally get $`\chi _{eff}^2/N_{eff}\approx 1`$, indicating a good fit. Note that relaxing the constraint on the misalignment angle more than doubles the number of gridpoints considered. As outlined in § 3.3.2, the present results have been obtained assuming that the scatter found in the mock cluster fields is representative of the intrinsic scatter in the real case, and that it is independent of the other sources of error which make up $`\sigma _{\delta _c}`$. The resulting $`\chi _{eff}^2`$ values are an indication that these are indeed fair assumptions.

### 4.5 Estimating $`\beta _c`$ by a Velocity Comparison

As already pointed out, it is important to perform the POTENT-cluster regression for the velocities on the grounds of its complementarity with the $`\delta `$-$`\delta `$ analysis. Also, as we have already discussed in § 3.3.2, we limit the comparison to the supergalactic Y component, which is the more robust of the components. We use the same minimising procedure adopted for the $`\delta `$-$`\delta `$ comparison. The results are displayed in the right half of Table 3. For our standard G15 case the result is now $`\beta _c=0.25\pm 0.05`$, somewhat higher than in the density case, but still consistent within the errorbars. The scatterplot for this case is shown in Figure 8. As was the case for the $`\delta `$-$`\delta `$ comparisons, no significant offset is detected, and the $`\beta _c`$ value is quite robust for the different smoothing scales and under variations of the comparison volume (the changes in the resulting $`\beta `$ are well below the $`1\sigma `$ significance level). Note again the peculiar morphology of the scatterplot, arising from the coherence of the peculiar velocities within individual cosmic structures. Although it may seem that the POTENT and cluster velocity fields differ by a large-scale bulk flow component, more quantitative, volume-limited comparisons performed by Branchini, Plionis, and Sciama (1996) and Branchini et al. (1999) have shown that the two bulk flows agree in amplitude and direction for a value of $`\beta _c\approx 0.21`$. It is especially interesting to check the effect of removing the alignment constraint (the fifth case in the table). This is a demanding robustness check, since it extends the $`v_y`$-$`v_y`$ comparison volume to points for which the velocity vectors can be severely misaligned. It is encouraging that, even in this case, the slope of the best-fitting line changes by only 4%. The $`\chi ^2/N_{eff}`$ values lie somewhat below unity for all the cases explored except the one in which we have removed the alignment constraint. In this last case we obtain $`\chi ^2/N_{eff}\approx 1`$ both for the $`\delta `$-$`\delta `$ and $`v_y`$-$`v_y`$ comparisons. A similar behavior was also obtained for the mock comparisons (§ 4.2). The errors $`\sigma _{\beta _c}^v`$ obtained from the real analysis are smaller than those obtained from the mocks and listed in Table 3. Even accounting for the difference in the values of $`\beta _c`$, the two error estimates differ by a factor of $`\sim 2`$. This mismatch probably arises from the characteristics of the mock velocity fields.
Indeed, the small computational box used in the original K96 simulation, and the constraint of having a vanishing bulk velocity on the scale of the box, produce a remarkably quiet velocity field, with a bulk velocity of only 100 km s<sup>-1</sup> already on a scale of 40 h<sup>-1</sup>Mpc. This velocity field has been used to estimate the POTENT and part of the cluster velocity errors. Real velocities, however, are larger than the mock ones, and these uncertainties probably underestimate the errors for the real case, leading to the smaller $`\sigma _{\beta _c}^v`$ value listed in Table 3. The $`v_y`$-$`v_y`$ comparison described above has been performed in the CMB reference frame. Predicting velocities from galaxy redshift surveys is, however, commonly done in the LG frame, in order to minimize the influence of mass concentrations from outside the sample volume. The LG frame might therefore be considered the natural frame in which to perform comparisons with reconstructed velocities. In our case, the velocities are reconstructed from the far-extending cluster catalog, which alleviates the above problem, and so we regard a CMB comparison as reliable. Furthermore, performing the comparison in the LG frame would introduce extra complexities, requiring a somewhat ad hoc transformation to a common LG frame for both the cluster and POTENT velocity fields. As a crude test of the sensitivity of our results to changes in the frame of reference, we shift both velocity fields to the cluster LG frame, as defined by the smoothed cluster velocity at the origin (with a reasonable choice for $`\beta _c`$). Alternatively, we consider the peculiar velocities relative to the central observer of each reconstructed velocity field independently. The standard G15 comparison of the Y components gives $`\beta _c=0.24\pm 0.05`$ and $`0.25\pm 0.04`$, respectively, for these two cases, demonstrating once more the robustness of our result.

## 5 Conclusions

We have used the smooth matter fluctuation field obtained by applying the POTENT machinery to the Mark III dataset and compared it to the density field deduced from the Abell/ACO cluster distribution. A similar comparison has also been performed between the reconstructed cluster velocities and those from the Mark III catalog, smoothed on the same scale. We have performed a careful error analysis using mock galaxy and cluster catalogs derived from N-body simulations. The mock catalogs used in our POTENT error analysis were especially designed to reproduce the Mark III characteristics. Uncertainties in the cluster fields, on the other hand, were evaluated using a hybrid procedure which extends the Monte-Carlo error analysis of BP96 and is complemented with a mock catalog analysis similar to the one used for POTENT. The cluster and POTENT fields show remarkable similarities within $`70h^{-1}\mathrm{Mpc}`$, while their major discrepancies are usually confined to regions where the cluster or the POTENT reconstructions are known to be unreliable. Quantitative comparisons between the cluster and POTENT fields have been performed in an attempt to estimate the cluster $`\beta `$ parameter. The results are quite robust, and for the standard G15 case we find $`\beta _c=0.20\pm 0.07`$ from the $`\delta `$-$`\delta `$ regression, and a somewhat larger value of $`\beta _c=0.25\pm 0.05`$ from the $`v_y`$-$`v_y`$ case. This systematic discrepancy is within the $`1\sigma `$ significance level, but it is present in all the cases explored.
We therefore choose to quote a joint estimate for $`\beta _c`$ of $`0.22\pm 0.08`$. Some difference between the two values is not unexpected, given the different nature of the comparisons. A similar regression based on the mock catalogs showed that some discrepancies do exist. However, in the mock tests the difference between the two values was of smaller magnitude and in the opposite direction. The different trends between the real and mock results could arise from the different modelling of the mass distribution outside the sampled regions, which can affect the cluster velocity field. There are other indications for regarding the $`\delta `$-$`\delta `$ results as the more reliable ones. The $`\chi ^2/N_{eff}`$ values for the density comparison were around unity, while systematically lower values were obtained for the velocities. Also, the POTENT velocity field was found in the mock catalog analysis to suffer from more biases. The present analysis suggests a value of $`\beta _c\approx 0.20-0.25`$ for clusters, in accordance with previous estimates. The distribution of clusters is expected to be biased with respect to the distribution of galaxies, with a biasing factor $`b_{cg}\approx 3-4`$ (e.g., from the different correlation lengths obtained for clusters and for galaxies; Bahcall & Soneira 1983; Huchra et al. 1990). Peacock and Dodds (1994) find such values for the biasing factors, derived from the ratios of power spectra calculated for different datasets. Their quoted relative biasing factors for Abell clusters, radio galaxies, optical galaxies, and IRAS galaxies are $`4.5:1.9:1.3:1`$, respectively. Recent results from a comparison of the cluster density and velocity fields with the fields recovered from the PSC$`z`$ redshift survey constrain this parameter to $`b_{cg}=4.4\pm 0.6`$ with respect to IRAS galaxies (Branchini et al. 1999). Combined with our constraint on $`\beta _c`$, this implies $`\beta _I\approx 1`$ with, however, a 1-$`\sigma `$ uncertainty of $`\sim 50`$%. Although our analysis cannot provide us with a firm $`\beta _c`$ determination, due to the large uncertainties associated with the $`\delta `$-$`\delta `$ and $`v_y`$-$`v_y`$ comparisons, it leads toward a value of $`\beta _c`$ which is consistent with an Einstein-de Sitter universe for a reasonable cluster linear bias parameter of $`b_c\approx 4.5`$. Our value of $`b_c`$ derives from a linear fit to the $`\delta `$-$`\delta `$ and $`v_y`$-$`v_y`$ scatterplots. Under the assumption of Linear Biasing, $`b_c`$ represents the relative biasing of Abell/ACO clusters with respect to the underlying mass density field. Linear Biasing, however, need not be a good approximation for clusters, since the large value of $`b_c`$ causes the LB hypothesis to break down in those regions where $`\delta <-b_c^{-1}`$. Methods to measure the degree of nonlinearity in the biasing relation have recently been developed and applied to the galaxy distribution (e.g., Lemson et al. 1999; Narayanan et al. 1999; Sigad, Dekel, & Branchini 1999), but not yet to clusters of galaxies. However, there is indirect evidence that the deviations from linear biasing are small. The first piece of evidence comes from visual inspection of the $`\delta `$-$`\delta `$ scatterplot in Figure 7, which does not deviate appreciably from the LB expectation (the linear fit). More convincing evidence of small deviations from the LB approximation is obtained by performing the $`\delta `$-$`\delta `$ comparison for various smoothing filters.
Increasing the smoothing length decreases the amplitude of the density fluctuations and reduces the size of those regions in which the constraint $`\delta <-b_c^{-1}`$ causes the LB model to fail. As shown in Table 3, increasing the smoothing radius from 12 to $`20h^{-1}\mathrm{Mpc}`$ does not change $`\beta _c^\delta `$ significantly, showing that regions where LB does not apply play only a minor role in our analysis. As a consequence, and for all practical purposes, Linear Biasing is a good approximation on the scales relevant for our analysis, and we can therefore regard $`b_c`$ as the biasing parameter for Abell/ACO clusters.

## Acknowledgments

We thank the referee, Michael Strauss, for his helpful comments and suggestions. We are grateful to Ami Eldar for his help in applying POTENT to the mock catalogs, to Tsafrir Kolatt for providing the mock simulation and catalogues, and to Yahir Sigad for helpful discussions. MP and EB warmly thank Bepi Tormen for providing his grouped version of the Mark II catalogue, which was used in some of the preliminary work. EB has been supported by an EEC Human Capital and Mobility fellowship and acknowledges the hospitality of the Hebrew University, where this work was completed. IZ acknowledges support by the DOE and by NASA grant NAG 5-7092 at Fermilab.

## Figure Captions

Figure 1 Systematic errors in the POTENT analysis. The POTENT fields recovered from the noisy and sparsely sampled mock data are compared with the “true” G15 fields of the simulation. The comparison is at uniform grid points within our “standard comparison volume” of effective radius $`40h^{-1}\mathrm{Mpc}`$. Plotted, in both cases, is the POTENT field averaged over the 20 realizations. Top: the POTENT density field vs. the true density field. Bottom: the POTENT supergalactic Y component of the velocity field vs. the true velocities in the simulation.

Figure 2 Histograms of the global random+systematic errors from the Monte Carlo analysis. The plot shows the frequency of the uncertainties in the line-of-sight component of the reconstructed cluster velocities. Units are $`\mathrm{km}\mathrm{s}^{-1}`$.

Figure 3 Environmental dependence of the Monte Carlo errors: dependence of the intrinsic errors on galactic latitude. Abell/ACO clusters in the northern galactic hemisphere are shown as filled dots, while open dots represent southern clusters.

Figure 4 Random errors in the cluster mock analysis. Mock cluster density and velocity fields are obtained from the clusters identified in the K96 N-body simulation and compared to the underlying fields, all with G15 smoothing. The comparison is at the same points shown in Fig. 1. Top: $`\delta `$-$`\delta `$ comparison. Bottom: $`v_y`$-$`v_y`$ comparison. The solid lines correspond to the average best-fit lines. The scatter about the fitting lines is an estimate of the intrinsic scatter in the cluster fields.

Figure 5 Comparing the mock POTENT and cluster fields. The averaged mock POTENT fields of § 2.1 are compared with the mock cluster ones of § 3.3.2, within the standard comparison volume. Top: $`\delta `$-$`\delta `$ comparison. Bottom: $`v_y`$-$`v_y`$ comparison. The values quoted are the averages over the 20 catalogs of $`\beta _c`$ and of its estimated error from the $`\chi ^2`$ fit. The solid lines represent the average best-fit lines.

Figure 6 Density fluctuations and projected velocity field in supergalactic X-Y planes. The Mark III-POTENT case is shown on the right and the cluster fields on the left, all G15 smoothed.
The density contour spacing is $`\mathrm{\Delta }\delta =0.15`$; solid contours refer to overdense regions, while dashed contours refer to underdense regions. The thick line indicates the $`\delta =0`$ contour. The heavy line defines the standard comparison volume. The lengths of the velocity vectors are drawn to the scale of the plot. The cluster density fluctuations and velocities are scaled by $`\beta _c=0.21`$. The top panel shows the plane defined by supergalactic $`Z=+2500\mathrm{km}\mathrm{s}^{-1}`$, the middle panel shows the supergalactic plane $`Z=0\mathrm{km}\mathrm{s}^{-1}`$, and the lower panel the plane defined by $`Z=-2500\mathrm{km}\mathrm{s}^{-1}`$.

Figure 7 POTENT versus cluster G15 density field from the real data, at gridpoints within the comparison volume. The solid line results from the linear best fit.

Figure 8 POTENT versus cluster G15 velocity field from the real data. Only the supergalactic Y components at gridpoints within the comparison volume are considered. The best-fit line is marked as well.
# Atomic Quantum Computer

## Abstract

The current proposals for the realization of a quantum computer, such as NMR, quantum dots, and trapped ions, are based on the use of an atom or an ion as one qubit. In these proposals a quantum computer consists of several atoms, and the coupling between them provides the coupling between qubits necessary for a quantum gate. We discuss whether a single atom can be used as a quantum computer. Internal states of the atom serve to hold the quantum information, and the spin-orbit and spin-spin interactions provide the coupling between qubits in the atomic quantum computer. In particular, one can use electron spin resonance (ESR) to process the information encoded in the hyperfine splitting of atomic energy levels. By using quantum state engineering one can manipulate the internal states of a natural or artificial (quantum dot) atom to make quantum computations.

Quantum computers have an information processing capability much greater than that of classical computers. Considerable progress in quantum computing has been made in recent years. A number of quantum algorithms have been developed, and experimental implementations of small quantum computers have been achieved. In particular, such realizations of quantum computers as NMR, ion traps, cavity QED, and quantum dots have been proposed. The proposed technologies for the realization of a quantum computer have serious intrinsic limitations. In particular, NMR devices suffer from an exponential attenuation of signal to noise as the number of qubits increases, and an ion trap computer is limited by the frequencies of the vibrational modes in the trap. In this note we discuss a possible realization of a quantum computer which perhaps can help to avoid these limitations. The basic elements of a quantum computer are qubits and logic elements (quantum gates). A qubit is a two-state quantum system with a prescribed computational basis. The current proposals for the experimental realization of a quantum computer are based on the implementation of the qubit as a two-state atom or ion. A quantum computer in these schemes is a molecular machine, because it is built up from a number of coupled atoms or quantum dots. Here we propose to do quantum computations using a single atom. In this scheme the atomic quantum computer is a single atom. It is interesting to study such an atomic machine theoretically, but it could also have some advantages for practical realization with respect to molecular machines. It is well known that in atomic physics the concept of the individual state of an electron in an atom is accepted, and one proceeds from the self-consistent field approximation. The state of an atom is determined by the set of the states of the electrons. Each state of an electron is characterized by a definite value of its orbital angular momentum $`l`$, by the principal quantum number $`n`$, and by the values of the projections of the orbital angular momentum $`m_l`$ and of the spin $`m_s`$ on the $`z`$-axis. In the Hartree-Fock central field approximation the energy of an atom is completely determined by the assignment of the electron configuration, i.e., by the assignment of the values of $`n`$ and $`l`$ for all the electrons. One can implement a single qubit in the atom as a one-particle electron state in the self-consistent field approximation, and multi-qubit states as the corresponding multi-particle states represented by the Slater determinant.
Almost all real spectra can be systematized with respect to the $`LS`$ or $`jj`$ coupling schemes. Every stationary state of the atom in the $`LS`$ coupling approximation is characterized by a definite value of the orbital angular momentum $`L`$ and the total spin $`S`$ of the electrons. Under the action of relativistic effects a degenerate level with given $`L`$ and $`S`$ is split into a number of distinct levels (the fine structure of the level), which differ in the value of the total angular momentum $`J`$. The relativistic terms in the Hamiltonian of an atom include the spin-orbit and spin-spin interactions. There is also a further splitting of atomic energy levels as a result of the interaction of the electrons with the spin of the nucleus. This is the hyperfine structure of the levels. One can use these interactions to build quantum logic gates. As a simple example, let us discuss how the hyperfine splitting can be used to do quantum computations on a single atom. Let us consider the Hamiltonian which includes both the nucleus and the electron for a case of quenched orbital angular momentum. If one assumes that the electron spin Zeeman energy is much bigger than the hyperfine coupling energy, then one gets the approximate Hamiltonian

$$\mathcal{H}=g\beta HS_z-\gamma _n\hbar HI_z+AS_zI_z$$

Here $`S_z`$ and $`I_z`$ are the electron and nuclear spin operators, $`H`$ is the magnetic field, which is parallel to the $`z`$-axis, $`\beta `$ is the Bohr magneton, $`\gamma _n`$ is the nuclear gyromagnetic ratio, $`A`$ is the hyperfine coupling energy, and $`g`$ is the $`g`$-factor. Let us consider the simplest case of nuclear and electron spins of 1/2. Then a single qubit is a nuclear spin $`|m_I>`$ or electron spin $`|m_S>`$ function, where $`m_I`$ and $`m_S`$ stand for the eigenvalues of $`I_z`$ and $`S_z`$. The two-qubit states are the eigenfunctions of the Hamiltonian $`\mathcal{H}`$, and they are given by the product of the nuclear spin and electron spin functions

$$|m_I,m_S>=|m_I>|m_S>$$

The coupling used to produce magnetic resonances is an alternating magnetic field applied perpendicular to the static field. The possible transitions produced by an alternating field are found by considering a perturbing term in the Hamiltonian

$$\mathcal{H}_m(t)=(\gamma _e\hbar S_x-\gamma _n\hbar I_x)H_x\mathrm{cos}\omega t$$

Many of the basic principles of nuclear magnetic resonance apply to electron spin resonance (ESR). However, there are some special features of spin echoes that arise for electron spin resonance which are not encountered in nuclear magnetic resonance. This is because in many cases the nuclear quantization direction depends on the electron spin orientation. It is well known that any quantum algorithm can be implemented with one-qubit rotations and the two-qubit controlled-NOT gate. The implementation of the controlled-NOT gate using pulse sequences is well known in NMR. For example, it can be represented as a network which includes one-qubit Hadamard gates and a $`4\times 4`$ matrix which can be implemented as the following pulse sequence

$$(90^{\circ }I_z)(90^{\circ }S_z)(90^{\circ }2I_zS_z)$$

Two-qubit realizations of the Deutsch-Jozsa algorithm and the Grover algorithm have been accomplished using NMR spectroscopy of spin-1/2 nuclei of appropriate molecules in solution. One can use a similar technique in the case of ESR. If computers are to become much smaller in the future, the miniaturization might lead to the atomic quantum computer.
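As a numerical illustration of this Hamiltonian, the sketch below evaluates the four energies $`E(m_S,m_I)`$ and the two allowed ESR lines (electron spin flipped at fixed $`m_I`$) for constants roughly appropriate to atomic hydrogen; the field value and the choice of hydrogen numbers are illustrative assumptions, not part of the original discussion:

```python
import numpy as np

h_planck = 6.6261e-34            # J s
g_beta   = 2.0023 * 9.2740e-24   # g times the Bohr magneton, J/T
gn_hbar  = 2.8211e-26            # gamma_n * hbar for a proton, J/T
A        = h_planck * 1.4204e9   # hydrogen hyperfine coupling, J

def energies(H):
    # diagonal energies E(m_S, m_I) of the approximate Hamiltonian above
    mS = np.array([+0.5, +0.5, -0.5, -0.5])
    mI = np.array([+0.5, -0.5, +0.5, -0.5])
    return g_beta * H * mS - gn_hbar * H * mI + A * mS * mI

E = energies(0.35)                     # tesla; a typical X-band ESR field
nu_plus  = (E[0] - E[2]) / h_planck    # electron flip with m_I = +1/2
nu_minus = (E[1] - E[3]) / h_planck    # electron flip with m_I = -1/2
print(nu_plus / 1e9, nu_minus / 1e9)   # roughly 10.5 and 9.1 GHz
print((nu_plus - nu_minus) / 1e9)      # hyperfine splitting A/h = 1.42 GHz
```

Driving one of these two lines selectively flips the electron spin conditioned on the nuclear spin state, which is the physical content of the controlled-NOT construction above.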
One of the advantages of the atomic quantum computer is that the quantum state of a single atom can be stable against decoherence. Recent experimental and theoretical advances in quantum state engineering with natural and artificial (quantum dot) atoms, and the development of methods for completely determining the quantum state of an atom, show that quantum computations with a single atom should be possible. To summarize, I propose using a single atom to do quantum computations. Such an atom can also be used, of course, as a part of a computational network. I have discussed a simple realization of the two-qubit atomic quantum computer based on ESR and hyperfine splitting. However, the idea of the atomic quantum computer is more general. To build a multi-qubit atomic quantum computer one has to use the fine and hyperfine splitting of energy levels to process the information encoded in the multielectron states. In principle one can build an atomic quantum computer based on a natural or artificial (quantum dot) atom. I am grateful to M. Ohya and N. Watanabe for stimulating discussions on quantum computing.
# Quantum key distribution without alternative measurements

Phys. Rev. A 61, 052312 (2000). After its publication, Zhang, Li, and Guo showed that the protocol is insecure against a particular eavesdropping attack (quant-ph/0009042). A modified version which avoids this attack is presented in quant-ph/0009051.

## Abstract

Entanglement swapping between Einstein-Podolsky-Rosen (EPR) pairs can be used to generate the same sequence of random bits in two remote places. A quantum key distribution protocol based on this idea is described. The scheme exhibits the following features. (a) It does not require that Alice and Bob choose between alternative measurements, therefore improving the rate of generated bits per transmitted qubit. (b) It allows Alice and Bob to generate a key of arbitrary length using a single quantum system (three EPR pairs), instead of a long sequence of them. (c) Detecting Eve requires the comparison of fewer bits. (d) Entanglement is an essential ingredient. The scheme assumes reliable measurements of the Bell operator.

The two main goals of cryptography are for two distant parties, Alice and Bob, to be able to communicate in a form that is unintelligible to a third party, Eve, and to prove that the message was not altered in transit. Both of these goals can be accomplished securely if both Alice and Bob are in possession of the same secret random sequence of bits, a “key”. Therefore, one of the main problems of cryptography is the key distribution problem, that is, how do Alice and Bob, who initially share no secret information, come into the possession of a secret key, while being sure that Eve cannot acquire even partial information about it. This problem cannot be solved by classical means, but it can be solved using quantum mechanics. The security of protocols for quantum key distribution (QKD), such as the Bennett-Brassard 1984 (BB84), E91, B92, and other protocols, is assured by the fact that while information stored in classical form can be examined and copied without altering it in any detectable way, it is impossible to do that when information is stored in unknown quantum states, because an unknown quantum state cannot be reliably cloned (the “no-cloning” theorem). In these protocols security is assured by the fact that both Alice and Bob must choose randomly between two possible measurements. In this paper I introduce a QKD scheme which does not require that Alice and Bob choose between alternative measurements. This scheme is based on “entanglement swapping” between two pairs of “qubits” (quantum two-level systems), induced by a Bell operator measurement. The Bell operator is a nondegenerate operator which acts on a pair of qubits $`i`$ and $`j`$, and projects their combined state onto one of the four Bell states

$$|00\rangle _{ij}=\frac{1}{\sqrt{2}}\left(|0\rangle _i|0\rangle _j+|1\rangle _i|1\rangle _j\right),$$ (1)

$$|01\rangle _{ij}=\frac{1}{\sqrt{2}}\left(|0\rangle _i|0\rangle _j-|1\rangle _i|1\rangle _j\right),$$ (2)

$$|10\rangle _{ij}=\frac{1}{\sqrt{2}}\left(|0\rangle _i|1\rangle _j+|1\rangle _i|0\rangle _j\right),$$ (3)

$$|11\rangle _{ij}=\frac{1}{\sqrt{2}}\left(|0\rangle _i|1\rangle _j-|1\rangle _i|0\rangle _j\right).$$ (4)

Entanglement swapping works as follows. Consider a pair of qubits, $`i`$ and $`j`$, prepared in one of the four Bell states, for instance, $`|11\rangle _{ij}`$. Consider a second pair of qubits $`k`$ and $`l`$ prepared in another Bell state, for instance, $`|01\rangle _{kl}`$.
If a Bell operator measurement is performed on $`i`$ and $`k`$, then the four possible results “$`00`$,” “$`01`$,” “$`10`$,” and “$`11`$” have the same probability of occurring. In fact, the outcome of each measurement is purely random. Suppose that the result “$`00`$” is obtained; consequently the state of the pair $`i`$ and $`k`$ after the measurement is $`|00\rangle _{ik}`$. Moreover, the state of $`j`$ and $`l`$ is projected onto the state $`|10\rangle _{jl}`$. Therefore, the state of $`j`$ and $`l`$ becomes entangled although they have never interacted. I will denote the initial state of the pairs $`i`$, $`j`$ and $`k`$, $`l`$ in the previous example by $`|11\rangle _{ij}|01\rangle _{kl}`$, and the final state of the pairs $`i`$, $`k`$ and $`j`$, $`l`$ by $`|00\rangle _{ik}|10\rangle _{jl}`$. Suppose that the initial state of the pairs $`i`$, $`j`$ and $`k`$, $`l`$ is a product of two Bell states and, as in the previous example, a Bell operator measurement is executed on two qubits, one of each pair; then, after the measurement, the state of the pairs $`i`$, $`k`$ and $`j`$, $`l`$ becomes a product of two Bell states. All possibilities are collected in Table I.

The proposed scheme for QKD is illustrated in Fig. 1 and is described as follows. (i) Consider six qubits numbered $`1`$ to $`6`$. Alice prepares qubits $`1`$ and $`2`$ in the Bell state $`|11\rangle _{12}`$, and qubits $`3`$ and $`5`$ in the Bell state $`|10\rangle _{35}`$. In a remote place, Bob prepares qubits $`4`$ and $`6`$ in the Bell state $`|10\rangle _{46}`$. All this information is public. Qubits $`2`$ and $`6`$ will be the only transmitted qubits during the process. Alice will always retain qubits $`1`$, $`3`$, and $`5`$, and Bob will always retain qubit $`4`$. (ii) Alice transmits qubit $`2`$ to Bob using a public channel. This channel must be a transmission medium that isolates the state of the qubit from interactions with the environment. (iii) Alice secretly measures the Bell operator on qubits $`1`$ and $`3`$, and Bob secretly measures the Bell operator on qubits $`2`$ and $`4`$. The results of both experiments are correlated, although Alice and Bob do not know how as yet. The purpose of the next step is to elucidate how the results are correlated without publicly revealing either of them. (iv) Bob transmits qubit $`6`$ to Alice using a public channel. Then Alice measures the Bell operator on qubits $`5`$ and $`6`$, and publicly announces the result. Suppose that Alice has obtained “$`11`$” in her secret measurement on qubits $`1`$ and $`3`$. Then, since the initial state of $`1`$, $`2`$, $`3`$, and $`5`$ was $`|11\rangle _{12}|10\rangle _{35}`$, by using Table I Alice knows that the state of $`2`$ and $`5`$ is $`|10\rangle _{25}`$. In addition, suppose that Alice obtains “$`00`$” in the public measurement on $`5`$ and $`6`$. Then, since she knows that the previous state of $`2`$, $`4`$, $`5`$, and $`6`$ was $`|10\rangle _{25}|10\rangle _{46}`$, by using Table I Alice knows that Bob has obtained “$`00`$” in his secret measurement on $`2`$ and $`4`$. Following similar reasoning, Bob can know that Alice has obtained “$`11`$” in her secret measurement on $`1`$ and $`3`$. Previously, Alice and Bob have agreed to choose the sequence of results of Alice’s secret measurements to form the key. The two initial bits of the key are therefore “$`11`$.” The public information shared by Alice and Bob is not enough for Eve to acquire any knowledge of the results obtained by either party.
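The entanglement-swapping bookkeeping above (and in Table I) is easy to verify numerically. The following sketch, in the paper's state labels, prepares $`|11\rangle _{ij}|01\rangle _{kl}`$, projects qubits $`i`$ and $`k`$ onto $`|00\rangle _{ik}`$, and identifies the resulting state of $`j`$ and $`l`$; Python/numpy is used purely for illustration:

```python
import numpy as np

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
BELL = {
    "00": (np.kron(z0, z0) + np.kron(z1, z1)) / np.sqrt(2),
    "01": (np.kron(z0, z0) - np.kron(z1, z1)) / np.sqrt(2),
    "10": (np.kron(z0, z1) + np.kron(z1, z0)) / np.sqrt(2),
    "11": (np.kron(z0, z1) - np.kron(z1, z0)) / np.sqrt(2),
}

# four-qubit state |11>_ij |01>_kl, with tensor axes ordered (i, j, k, l)
psi = np.kron(BELL["11"], BELL["01"]).reshape(2, 2, 2, 2)

# project qubits (i, k) onto the Bell state |00>_ik of Eq. (1)
proj = BELL["00"].reshape(2, 2)             # axes (i, k)
post = np.einsum("ik,ijkl->jl", proj, psi)  # unnormalized (j, l) state
prob = (post ** 2).sum()                    # probability of outcome "00"
post = post.reshape(4) / np.sqrt(prob)

print(prob)                                 # -> 0.25: outcomes equally likely
for label, b in BELL.items():
    if abs(abs(b @ post) - 1.0) < 1e-12:
        print("pair (j,l) ends in |%s>" % label)   # -> "10", as stated above
```

Repeating the projection for each of the four Bell outcomes reproduces the corresponding row of Table I.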
From the public information alone, Eve knows only that one of the following four possible combinations of results for Alice's and Bob's secret measurements has occurred: “$`00`$” for Alice's result and “$`11`$” for Bob's, “$`01`$” and “$`10`$,” “$`10`$” and “$`01`$,” or “$`11`$” and “$`00`$.” One Bell state can be transformed into another just by rotating one of the qubits. Using this property, Alice (Bob) can change the Bell state of qubits $`1`$ and $`3`$ ($`2`$ and $`4`$) to a previously agreed public state. Then the situation is similar to (i) and the next stage of the process can be started.

This scheme for QKD has the following features. (a) It improves the rate of generated bits per transmitted qubit. In BB84 and in B92 (and in E91), Bob (and Alice) must choose between two alternative measurements in order to preserve security. This implies that the number of useful random bits shared by Alice and Bob per transmitted qubit, before checking for eavesdropping, is $`0.5`$ bits per transmitted qubit, both in BB84 and B92 (and $`0.25`$ in E91), or, at most, it can be made to approach 1. In our scheme the rate is $`1`$ bit per transmitted qubit. This is so because Alice and Bob always perform the same kind of measurement, a Bell operator measurement, and therefore each of them acquires two correlated random bits after each stage of the process. In each of these stages, only two qubits are transmitted (one from Alice to Bob and another from Bob to Alice). This improvement is very useful, since a key must be as large as the message to be transmitted (written as a sequence of bits) and cannot be reused for subsequent messages. (b) It only requires a single quantum system (three EPR pairs), instead of a long sequence of quantum systems, to generate a key of arbitrary length. By contrast with previous schemes, in the one presented here no source of qubits is needed. The same two qubits (qubits 2 and 6) are transmitted to and from Alice and Bob over and over again. (c) The detection of Eve requires the comparison of fewer bits. The transmitted qubits do not encode the bits that form the key, but only the type of correlation between the results of the experiments that allow Alice and Bob to secretly generate the key. Therefore, intercepting and copying them does not allow Eve to acquire any information about the key. In fact, the state of the transmitted qubits is public. However, Eve can use a strategy —also based on entanglement swapping— to learn Alice's sequence of secret results. This strategy is illustrated in Fig. 2 and is described as follows. (1a) Consider the same scenario as in (i), but suppose Eve has two additional qubits $`7`$ and $`8`$, initially prepared in a Bell state, for instance, $`|00\rangle _{78}`$. (1b) Eve intercepts qubit $`2`$ that Alice sends to Bob and makes a Bell operator measurement on qubits $`2`$ and $`8`$. Then qubits $`1`$ and $`7`$ become entangled in a known (to Eve) Bell state. For instance, if after Eve's measurement the state of $`2`$ and $`8`$ is $`|00\rangle _{28}`$, then the state of $`1`$ and $`7`$ becomes $`|11\rangle _{17}`$. (2) Therefore, after Eve's intervention the real situation is not that described in (ii). Now qubit $`1`$ is entangled with Eve's qubit $`7`$, and $`2`$ is entangled with Eve's $`8`$. (3a) In this new scenario, after Alice's (Bob's) measurement on qubits $`1`$ and $`3`$ ($`2`$ and $`4`$), the state of qubits $`5`$ and $`7`$ ($`6`$ and $`8`$) becomes a Bell state.
For instance, if Alice (Bob) obtains “$`11`$” (“$`00`$”), the state of qubits $`5`$ and $`7`$ ($`6`$ and $`8`$) would be $`|10\rangle _{57}`$ ($`|10\rangle _{68}`$). However, these states are unknown to Eve, because she (still) does not know the results of Alice's and Bob's measurements. (3b) Eve intercepts qubit $`6`$ that Bob sends to Alice and makes a Bell operator measurement on qubits $`6`$ and $`8`$. This reveals the state they were in. Then Eve can know Bob's result. For instance, in our example, Eve would find “$`10`$” and would know that Bob's result was “$`00`$.” (3c) Eve makes a Bell operator measurement on qubits $`7`$ and $`8`$. Then qubits $`5`$ and $`6`$ become entangled in a Bell state (still) unknown to Eve, because she does not know Alice's secret result. For instance, if Eve obtains “$`01`$,” then qubits $`5`$ and $`6`$ would be in the state $`|01\rangle _{56}`$. (4) Eve gives qubit $`6`$ to Alice. Alice makes a measurement on $`5`$ and $`6`$ and announces the result. Then Eve can know the previous state of $`5`$ and $`7`$ ($`|10\rangle _{57}`$, in our example) and the result of Alice's measurement on $`1`$ and $`3`$ (“$`11`$,” in our example). However, Eve's intervention changes the correlation that Alice and Bob expect between their secret results. For instance, in our example Bob, using his result and the result publicly announced by Alice, thinks that the two initial bits of the key are “$`10`$.”

As in previous QKD protocols, in our scheme Alice and Bob can detect Eve's intervention by publicly comparing a sufficiently large random subset of their sequences of bits, which they subsequently discard. If they find that the tested subsets are identical, they can infer that the remaining untested subsets are also identical, and therefore can form a key. In BB84, for each bit tested by Alice and Bob, the probability of that test revealing the presence of Eve (given that Eve is indeed present) is $`\frac{1}{4}`$. Thus, if $`N`$ bits are tested, the probability of detecting Eve (given that she is present) is $`1-\left(\frac{3}{4}\right)^N`$. In our scheme, if Alice and Bob compare a pair of bits generated in the same step, the probability for that test to reveal Eve is $`\frac{3}{4}`$. Thus, if $`n`$ pairs ($`N=2n`$ bits) are tested, the probability of Eve's detection is $`1-\left(\frac{1}{2}\right)^N`$. This improvement in the efficiency of the detection of eavesdropping has been pointed out for a particular eavesdropping attack; it would be interesting to investigate whether more general attacks exist and whether the improvement in efficiency is also present in these cases. (d) It uses entanglement as an essential tool. QKD was the first practical application of quantum entanglement. However, it has been shown that entanglement was not an essential ingredient, in the sense that almost the same goals can be achieved without entanglement. However, subsequent striking applications of quantum mechanics, such as quantum dense coding, teleportation of quantum states, entanglement swapping, and quantum computation, are strongly based on quantum entanglement. The scheme described here relies on entanglement in the sense that it performs a task —QKD with properties (a), (b), and (c)— that is not accessible without entanglement.
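The two detection probabilities just quoted are easy to tabulate side by side (a trivial numerical check):

```python
# chance of catching Eve after testing N bits:
#   BB84:        1 - (3/4)**N  (each tested bit exposes her with prob. 1/4)
#   this scheme: 1 - (1/2)**N  (each tested pair, N = 2n bits, with prob. 3/4)
for N in (2, 4, 8, 16, 32):
    print(N, 1 - 0.75 ** N, 1 - 0.5 ** N)
```

For example, after 16 tested bits Eve escapes detection with probability of about 1% in BB84, but only about 0.0015% here.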
As far as I know, the first proposals for a reliable Bell operator measurement are those which discriminate between the four polarization-entangled two-photon Bell states using entanglement in additional degrees of freedom or using atomic coherence. It is not expected that the protocol for QKD introduced in this paper will be able to improve existing experiments for real quantum cryptography in practice. Its main importance is conceptual: it provides a different quantum solution to a problem already solved by quantum mechanics.

The author thanks J. L. Cereceda, O. Cohen, A. K. Ekert, C. Fuchs, T. Mor, and B. Orfila for helpful comments. This work was supported by the Universidad de Sevilla (Grant No. OGICYT-191-97) and the Junta de Andalucía (Grant No. FQM-239).

FIG. 1. QKD scheme based on entanglement swapping. The bold lines connect qubits in Bell states, the dashed lines connect qubits on which a Bell operator measurement is made, and the dotted lines connect qubits in Bell states induced by entanglement swapping. “$`00`$” means that the Bell state $`|00\rangle `$ is public knowledge, $`(00)`$ means that it is known only to Alice, $`[00]`$ means that it is known only to Bob, $`|00|`$ means that it is unknown to all the parties, $`[(00)]`$ means that it is known only to Alice and Bob, etc.

FIG. 2. Eve's strategy to obtain Alice's secret result. $`\{00\}`$ means that the Bell state $`|00\rangle `$ is known only to Eve. The remaining notation is the same as in Fig. 1.
# Non-LTE Models and Theoretical Spectra of Accretion Disks in Active Galactic Nuclei. III. Integrated Spectra for Hydrogen-Helium Disks

## 1. Introduction

A black hole that steadily accretes gas from its surroundings at high rates, but below the Eddington limit, should form a geometrically thin accretion disk supported by the residual angular momentum of the gas. For black hole masses in the range $`10^8`$–$`10^9`$ $`M_{\odot }`$, thought to be typical of bright active galactic nuclei (AGN) and quasars, the peak effective temperature is expected to be around $`10^4`$–$`10^5`$ K. This estimate is roughly consistent with the observation that quasar spectra peak in the ultraviolet, bolstering the belief that quasars are indeed powered by accretion onto massive black holes. Over the years, many authors have attempted to calculate detailed theoretical spectra of geometrically thin, optically thick accretion disks in order to compare with observation, with varying degrees of sophistication (e.g. Kolykhalov & Sunyaev 1984; Sun & Malkan 1989; Laor & Netzer 1989; Ross, Fabian, & Mineshige 1992; Shimura & Takahara 1993, 1995; Dörrer et al. 1996; and Sincell & Krolik 1998). As reviewed recently by Krolik (1999a) and Koratkar & Blaes (1999), these and other models generally suffer from a number of problems when trying to simultaneously explain various features of the observations, e.g. optical/ultraviolet continuum spectral shapes, the lack of observed features at the Lyman limit of hydrogen, polarization, the origin of extreme ultraviolet and X-ray emission, and correlated broadband variability. It may be that the resolution of these problems requires drastic revision of the accretion disk paradigm. Alternatively, it may simply be that the theoretical modeling to date is still too crude to do justice to the inherent complexities of the accretion flow. In particular, inclusion of non-LTE effects and detailed opacity sources, Comptonization, and interaction between the disk and X-ray producing regions should all be taken into account. We have embarked on a long-term program to construct detailed model spectra of accretion disks in the axisymmetric, time-steady, thin-disk approximation. At the peak temperatures, the most important opacity is provided by electron scattering, but bound-free and free-free continuum opacities due to hydrogen and helium can also be significant, so we include these opacities in this study. Since scattering can be the dominant opacity and the densities can be rather low, departures from local thermodynamic equilibrium (LTE) can be significant, so we include non-LTE effects as well. We include the effects of relativity on the disk structure and on the transport of radiation from the disk to infinity. We construct a grid of models for black hole spins of $`a/M=0`$ (Schwarzschild black hole) and 0.998 (maximum rotation Kerr black hole), luminosities between $`3\times 10^{-4}L_{\mathrm{Edd}}`$ and $`0.3L_{\mathrm{Edd}}`$, and black hole masses between $`M_9\equiv 10^{-9}M/M_{\odot }=0.125`$ and $`M_9=32`$. To determine the surface mass density of the disk, we assume that the viscous stress scales as $`t_{r\varphi }=\alpha P_{\mathrm{total}}`$ (Shakura & Sunyaev 1973), where $`t_{r\varphi }`$ and $`P_{\mathrm{total}}`$ are the vertically integrated viscous stress and the total vertically integrated pressure, respectively. In previous papers (Hubeny & Hubeny 1997 and 1998a, hereafter Papers I and II, respectively), we presented detailed spectra and the vertical structure of individual annuli.
In this paper we integrate such spectra over radius for a grid of 99 mass/accretion rate combinations appropriate for quasars. We do not include annuli with low effective temperatures ($`T_{\mathrm{eff}}<4000`$ K), as these require molecular opacities for accurate computation. The spectra of hot annuli can be affected by Compton scattering, which we have not included in the calculation but will include in future work. It is likely that metal opacities will also modify the final spectrum (cf. Hubeny & Hubeny 1998b), so we consider this work a benchmark for future metal line-blanketed models. This paper is organized as follows. In section 2 we describe our model assumptions and computational methods. Then in section 3, which constitutes the bulk of the paper, we present our results. We start by showing the vertical structure of individual annuli within the set of accretion disk models, along with their local emergent flux. We then discuss the internal physical self-consistency of these models, before presenting the full disk-integrated spectra. We finish section 3 with a discussion of a number of observationally driven issues: optical/ultraviolet colors, spectra in the hydrogen Lyman limit region, polarization, and ionizing continua. Finally, in section 4 we summarize our conclusions, in particular pointing out the additional physics which will be included in future papers of this series.

## 2. Model Assumptions and Computations

To construct a model of an accretion disk, we assume the vertical disk structure can be well approximated by one-dimensional equations; that is, we assume the disk is locally plane parallel. We assume that, on average, the disk is static in the corotating frame (in reality disks are subject to many instabilities which invalidate this approximation), and that the only energy transport is due to radiation flux in the vertical direction, i.e. we ignore convection and conduction. By assuming time-steadiness and local radiation of dissipated heat, we can write down and solve the equations for the disk structure (Page & Thorne 1974); these equations are summarized in Paper II. For a given radius $`r`$, the (one-sided) flux (and thus the effective temperature, $`T_{\mathrm{eff}}`$) is determined by

$$F(r)\equiv \sigma _BT_{\mathrm{eff}}^4=\frac{3GM\dot{M}}{8\pi r^3}R_R,$$ (1)

where $`\sigma _B`$ is the Stefan-Boltzmann constant, $`\dot{M}`$ the mass accretion rate, and $`R_R`$ a relativistic correction factor (Page & Thorne 1974, in the notation of Krolik 1999a). Following the usual practice for geometrically thin accretion disks, we assume that there is no torque at the innermost stable circular orbit in all our disk models. There are reasons to question this assumption (Krolik 1999b; Gammie 1999); if it fails, the disk spectrum and polarization could be substantially changed (Agol & Krolik 1999). We have calculated models for two values of the viscosity parameter: $`\alpha =0.01`$ and $`\alpha =0.1`$. The choice of $`\alpha =0.01`$ is near the value expected from simulations of the magneto-rotational instability in accretion disks (e.g. Balbus & Hawley 1998), while $`\alpha =0.1`$ represents a typical value of the viscosity parameter used in other studies. Smaller $`\alpha `$, or a stress which scales in proportion to the gas pressure, would lead to a larger surface mass density, higher density, and, presumably, spectra closer to LTE.
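To make Eq. (1) concrete, the sketch below evaluates $`T_{\mathrm{eff}}(r)`$ for a Schwarzschild disk. For simplicity it substitutes the Newtonian no-torque factor $`R_R=1-(r_{\mathrm{in}}/r)^{1/2}`$ for the full Page & Thorne (1974) correction (an approximation, not what is used in the actual models), and the mass and accretion rate are illustrative:

```python
import numpy as np

G, c, sigma_B = 6.674e-8, 2.998e10, 5.6704e-5   # cgs units
Msun, yr = 1.989e33, 3.156e7

def teff(r, M9=1.0, mdot_msun_yr=1.0):
    # r in cm, M9 = M/(10^9 Msun); Newtonian stand-in for R_R
    M = M9 * 1e9 * Msun
    Mdot = mdot_msun_yr * Msun / yr
    r_in = 6.0 * G * M / c ** 2                  # ISCO radius for a/M = 0
    RR = 1.0 - np.sqrt(r_in / r)
    F = 3.0 * G * M * Mdot / (8.0 * np.pi * r ** 3) * RR  # one-sided flux
    return (F / sigma_B) ** 0.25

r = np.geomspace(6.1, 300.0, 50) * G * 1e9 * Msun / c ** 2
print(teff(r).max())   # a few times 10^4 K, peaking near r ~ 1.4 r_in
```

The resulting peak temperature of a few times $`10^4`$ K for $`M_9=1`$ illustrates why the integrated spectra of such disks peak in the ultraviolet.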
If the disk rotates on cylinders, and the angular frequency of each cylinder is the one appropriate to a circular orbit at the midplane, there is a vertical component of the effective gravity proportional to the height $`z`$ above the disk midplane. We ignore the self-gravity of the disk, an excellent approximation for the radii important to our problem. We have assumed that the local dissipation is proportional to the local density, except for the top 1% of the disk, where we force the dissipation to decline; see Paper II. This distribution is chosen in order to yield a hydrostatic equilibrium solution in the bulk of the disk when radiation pressure dominates (cf. Shakura & Sunyaev 1973), while retaining the possibility of thermal balance in the outermost layers even in the absence of Comptonization. In Paper II (in particular, see figures 11 and 12 there) we showed that neither the choice of the division point between the regions of vertically constant and declining viscosity, nor the slope of the power law for the viscosity in the declining regime, changes the predicted continuum emergent spectrum significantly. Disks with this structure can be convectively unstable, however. Convection can be expected to accelerate heat loss, leading to a disk structure that is rather thinner and denser (Bisnovatyi-Kogan & Blinnikov 1977), although perhaps not substantially so (Shakura, Sunyaev & Zilitinkevich 1978). Even annuli with entropy increasing upward, so that they are stable according to the usual Schwarzschild criterion, may nevertheless be subject to convective instabilities mediated by thermal conduction along weak magnetic field lines (Balbus 1999). How this would manifest itself in the presence of turbulence driving the radial angular momentum transport is not clear, however. In any case, as stated above, we completely ignore convective heat transport in all our models here. When either radiation or gas pressure dominates, the surface mass density may be readily calculated in the grey, one-zone, diffusion approximation. However, when the two are comparable to each other, the disk structure equations combine to form a single tenth-order polynomial equation in one of the variables (e.g., $`\mathrm{\Sigma }^{1/4}`$). We solve this equation iteratively, using the two limiting regimes to bracket the root. Once the effective temperature, surface mass density, and dissipation profile are determined, the vertical disk structure can be computed. To compute the vertical structure of a given annulus, we solve simultaneously the entire set of structural equations: hydrostatic equilibrium in the vertical direction; local energy balance; radiative transfer; and, since we are not generally assuming LTE, statistical equilibrium for all selected energy levels of all selected atoms and ions. The equations are solved for the whole extent of the disk between the midplane and the surface, using proper boundary conditions. For details, the reader is referred to Paper II. We stress that no ad hoc assumptions about the nature of the radiation field or the radiative transfer are made; for instance, we do not use the diffusion approximation, an escape-probability treatment, or any assumption about the angular dependence of the specific intensity; the radiative transfer is solved exactly.
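A schematic of the root-bracketing strategy for the surface-density solve described above (the polynomial itself is not reproduced here; `poly` stands for the tenth-order polynomial in $`x=\mathrm{\Sigma }^{1/4}`$, and `x_gas` and `x_rad` for the gas- and radiation-dominated limiting solutions, all hypothetical names):

```python
from scipy.optimize import brentq

def solve_sigma(poly, x_gas, x_rad, pad=2.0):
    # find Sigma^{1/4}: the two analytic limits should bracket the
    # physical root when gas and radiation pressure are comparable;
    # the padding factor guards against the root lying just outside
    lo, hi = sorted((x_gas, x_rad))
    x = brentq(poly, lo / pad, hi * pad)
    return x ** 4          # back to the surface mass density Sigma
```

This is only a sketch of the stated strategy; it assumes the padded interval contains a sign change of the polynomial.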
The minimum frequency is set to $`10^{12}`$ Hz, while the highest frequency is chosen so that even at the midplane the intensity for frequencies higher than the maximum is negligible. In terms of the midplane temperature (Paper II and Hubeny 1990) $$T_{\mathrm{mid}}\approx T_{\mathrm{eff}}(3\tau _{\mathrm{mid}}/8)^{1/4}=T_{\mathrm{eff}}[(3/8)(\mathrm{\Sigma }/2)(\chi _R)]^{1/4},$$ (2) this goal can be achieved by setting the maximum frequency to $$\nu _{\mathrm{max}}=17kT_{\mathrm{mid}}/h.$$ (3) Here $`\chi _R`$ is the Rosseland mean opacity (per gram), which we take for simplicity to be 0.34, the value corresponding to an opacity dominated by electron scattering in a fully ionized H-He plasma of solar abundance. We consider disks composed of hydrogen and helium only. We include metals in computing the molecular weight, but for this work, we ignore their effects on the opacity. Hydrogen is represented essentially exactly: the first 8 principal quantum numbers are treated separately, while the upper levels are merged into a single non-LTE level accounting for level dissolution as described by Hubeny, Hummer, & Lanz (1994). We do, however, assume complete $`l`$-mixing; given the high electron density in these environments, this should be a good approximation. Neutral helium is represented by a 14-level model atom, which incorporates all singlet and triplet levels up to $`n=8`$. The 5 lowest levels are included individually; singlet and triplet levels are grouped separately from $`n=3`$ to $`n=5`$, and we have formed three superlevels for $`n=6,7,`$ and 8. The first 14 levels of He⁺ are explicitly treated. We assume a solar helium abundance, $`N(\mathrm{He})/N(\mathrm{H})=0.1`$. The opacity sources we include are all bound-free transitions (continua) from all explicit levels of H, He I, and He II; free-free transitions for all three ions, and electron scattering. For the coolest models ($`T_{\mathrm{eff}}<9000`$ K), we also consider the H⁻ bound-free and free-free opacity, assuming LTE for the H⁻ number density. In this paper, we assume coherent (Thomson) scattering. As was discussed in Paper II, effects of non-coherent (Compton) scattering are negligible for models with $`T_{\mathrm{eff}}`$ around or below $`10^5`$ K. Therefore, most models of the present grid (see Sect. 3) are not influenced by the effects of Comptonization, although the hottest ones may be. We have recently implemented Comptonization in our modeling code, and checked that this is indeed the case; however, we choose to neglect Comptonization in this paper in order to provide a benchmark grid of models computed using classical approximations of H-He composition and without Comptonization. The effects of Compton scattering will be included in a future paper, where we will also discuss in detail its influence on the emergent spectra. It can be expected to become especially important when the dissipation per unit mass is enhanced near the disk’s surface. The structure equations are highly nonlinear, but are very similar to corresponding equations for model stellar atmospheres. We use here the computer program TLUSDISK, which is a derivative of the stellar atmosphere program TLUSTY (Hubeny 1988). The program is based on the hybrid complete-linearization/accelerated lambda iteration (CL/ALI) method (Hubeny & Lanz 1995).
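The frequency-grid limits follow directly from equations (2) and (3). The sketch below, with names of our own choosing, computes the midplane temperature estimate and the resulting maximum frequency for an annulus of given $`T_{\mathrm{eff}}`$ and total column density $`\mathrm{\Sigma }`$:

```python
K_B = 1.381e-16       # Boltzmann constant, erg/K
H_PLANCK = 6.626e-27  # Planck constant, erg s
CHI_R = 0.34          # Rosseland mean opacity per gram (electron scattering)

def frequency_bounds(t_eff, sigma):
    """Frequency-grid limits from eqs. (2)-(3): nu_min is fixed at 1e12 Hz,
    and nu_max = 17 k T_mid / h with T_mid ~ T_eff [(3/8)(Sigma/2) chi_R]^(1/4)."""
    t_mid = t_eff * ((3.0 / 8.0) * (sigma / 2.0) * CHI_R) ** 0.25
    return 1e12, 17.0 * K_B * t_mid / H_PLANCK

# Example: a hot annulus with T_eff = 50,000 K and Sigma = 1e4 g/cm^2
print(frequency_bounds(5.0e4, 1.0e4))   # roughly (1e12, 9e16) Hz
```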
The method resembles traditional complete linearization; however, the radiation intensity in most (but not necessarily all) frequencies is not linearized; instead it is treated via the ALI scheme (for a review of the ALI method, see e.g., Hubeny 1992). Moreover, we use Ng acceleration and the Kantorovich scheme (Hubeny & Lanz 1992) to speed the solution. We start with a grey atmosphere solution (Hubeny 1990), first solving for the disk structure assuming that the statistical equilibrium is described by LTE. Using this as a starting point, we drop the LTE assumption and compute the disk structure using the full statistical equilibrium equations, as described in Paper II, but assuming that the line transitions are in detailed radiative balance. In other words, the statistical equilibrium equations explicitly contain the collisional rates in all transitions, and radiative rates only in the continuum transitions. We have considered 70 discretized depth points. The top point is set to $`m_1=10^{-3}`$ g cm⁻², where $`m`$ is the column mass, i.e., the mass in a column above a given height. The last depth point is the column mass corresponding to the midplane, and is given by $`\mathrm{\Sigma }/2`$. The depth points are equally spaced in logarithm between these two values. The models were computed on a DEC Alpha with 500 MHz clock speed; with the above values for the number of frequency and depth points, the LTE models for individual rings required typically 5-10 iterations with approximately 2.5 seconds per iteration, while the non-LTE models required typically 5-10 iterations (for hotter annuli) or 10-30 iterations (for cooler annuli), with about 6 seconds per iteration. We do not compute here models treating radiative rates in line transitions explicitly. We have considered such models in Paper II and found that including lines explicitly does not change the vertical structure or emergent continuum radiation significantly. These models are also much more time consuming. However, the most important physical point is that we have found in test calculations that line profiles are influenced significantly by the effects of Compton scattering. Since we are neglecting Comptonization here to provide a benchmark grid of classical H-He models without Comptonization, we feel that including lines at this stage would not be much more than a numerical exercise. We therefore defer treatment of more realistic models including lines and Comptonization to a future paper. In the course of calculating atmosphere models, we sometimes ran into difficulties with convergence. For very low and very high temperatures, we could not achieve convergence, a problem which may be ameliorated in the future by including more sources of opacity (such as metals at high temperatures and molecules at low temperatures). At certain radii and certain depths within the disk, we found ionization fronts in helium around $`T_{\mathrm{eff}}\approx 35,000`$ K which are very narrow in extent and very sensitive to the temperature within the disk, causing limit-cycle behavior in which the helium alternates between being mostly doubly ionized and being recombined to a singly ionized stage during successive steps, preventing solution of the atmosphere structure. We solved this problem by computing the disk structure for radii just smaller or larger than the radii where the front exists, and then using these solutions as starting solutions for the radii at which the fronts exist.
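For completeness, the depth discretization just described is trivial to reproduce; a minimal sketch (with our own variable names) is:

```python
import numpy as np

def depth_grid(sigma, n_depth=70, m_top=1.0e-3):
    """Column-mass grid: n_depth points equally spaced in log(m) between the
    uppermost point m_1 = 1e-3 g/cm^2 and the midplane value Sigma/2."""
    return np.logspace(np.log10(m_top), np.log10(sigma / 2.0), n_depth)

grid = depth_grid(1.0e4)
print(grid[:3], grid[-1])   # first points near the surface, last at the midplane
```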
In practice, the range of radii where this problem exists is narrow, so the uncertainty in the structure does not affect the overall disk spectrum. We also found similar He I/He II ionization fronts at lower temperatures, $`T_{\mathrm{eff}}\approx 15,000`$ K, and for hydrogen around and below 9000 K. To find the total disk spectrum, we divided the disk into 25-35 radial rings, spaced (roughly) logarithmically. At each ring, after computing the vertical structure, we perform a detailed radiation transfer solution for the Stokes vector as a function of frequency and angle. The spectrum is found by integrating the total emergent intensity over the disk surface using our relativistic transfer function code (Agol 1997). The transfer function computes the trajectories of photons from infinity to the disk plane, finding the emitted radius, redshift, and intensity at each image position at infinity for a given observation (Cunningham 1975). In this paper, we neglect the effects of radiation which returns to the accretion disk. ## 3. Results The parameter space of our grid of models is displayed in figure 3. The defining parameters are $`M_9`$, the black hole mass expressed in $`10^9M_{\odot }`$, and $`\dot{M}`$, the accretion rate in units of $`M_{\odot }`$ yr⁻¹. The grid covers nine values of the black hole mass between $`M_9`$ = 1/8 and 32; each subsequent mass is twice the previous mass. An analogous approach is used for the mass accretion rate, i.e. eleven values for each black hole mass, each a power of 2 times 1 $`M_{\odot }`$ yr⁻¹. The highest value of the accretion rate in each case is chosen to make $`L/L_{\mathrm{Edd}}=0.286`$; i.e., $`\dot{M}(M_{\odot }\,\mathrm{yr}^{-1})=2M_9`$. In the following text, we refer to this highest value as $`L/L_{\mathrm{Edd}}=0.3`$. The ten subsequent values have half the accretion rate (and luminosity) of the previous model. Our grid spans a range similar to models which have been previously used to fit quasar spectra (Sun & Malkan 1989; Laor 1990). Our basic grid assumes $`a/M=0.998`$. However, we have also constructed a parallel grid for a Schwarzschild black hole with the same black hole masses and values of $`L/L_{\mathrm{Edd}}`$. The mass accretion rates for $`a/M=0`$ are a factor of 5.613 higher than the corresponding values for $`a/M=0.998`$ because the radiative efficiency of a Kerr disk is that much greater (e.g. Shapiro & Teukolsky 1983). For all disk models, we compute detailed vertical structure at the following radii (expressed as $`r/r_g`$, where $`r_g=GM/c^2`$ is the gravitational radius): $`r/r_g`$ = 1.5, 1.7, 2, 2.5, 3, 3.5, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 120, 140, 160, 180, 200, 250, 300, 400, 500; that is, while the corresponding effective temperature $`T_{\mathrm{eff}}>4000`$ K. For the Schwarzschild grid, we use the above values of $`r/r_g`$, multiplied by 5. The actual number of computed annuli thus depends on the basic parameters of the disk. ### 3.1. Properties of Models In this section, we discuss the behavior of individual annuli, while in the rest of the paper we will present the emergent radiation integrated over the whole disk. We have chosen a model with $`M_9=1`$, $`\dot{M}=1`$ (i.e., $`L/L_{\mathrm{Edd}}=0.15`$) as a representative model for displaying various quantities. The behavior of other individual disk models is similar. Figure 3.1 displays the local electron temperature as a function of position for all individual annuli.
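Returning to the grid definition above, the mass/accretion-rate combinations are easy to enumerate explicitly. The following sketch (our own naming) reproduces the 99 Kerr-grid combinations; for the Schwarzschild grid each accretion rate would simply be multiplied by the efficiency factor of 5.613.

```python
def kerr_grid():
    """Nine masses M_9 = 1/8 ... 32 (each twice the previous), and for each,
    eleven accretion rates halving downward from the maximum Mdot = 2 M_9
    (i.e. L/L_Edd = 0.286, referred to as 0.3 in the text)."""
    grid = []
    for i in range(9):
        m9 = 0.125 * 2.0 ** i
        for j in range(11):
            grid.append((m9, 2.0 * m9 / 2.0 ** j))
    return grid

models = kerr_grid()
print(len(models))              # 99 combinations
print(models[0], models[-1])    # (0.125, 0.25) ... (32.0, 0.0625)
```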
The position is expressed as column mass, $`m`$, above the given depth. As mentioned in Sect. 2, the uppermost point was chosen to be $`m=10^{-3}`$ g cm⁻², while the highest value corresponds to the midplane of the disk. The temperature is a nearly monotonic function of depth, although there is a slight temperature rise at the surface for some models. The detailed behavior of temperature for several representative annuli was discussed extensively in Paper II. The behavior of temperature for all annuli is easily understood. In the LTE approximation, the midplane temperature is given by equation (2), i.e. it is proportional to $`T_{\mathrm{eff}}(r)[m_0(r)]^{1/4}`$, where $`m_0\equiv \mathrm{\Sigma }/2`$ is the total column density at the midplane. Neglecting for simplicity the relativistic corrections, $`T_{\mathrm{eff}}\propto r^{-3/4}`$ – see equation (1) – while the column density $`m_0\propto r^{3/2}`$ for radiation pressure dominated annuli (see Eq. 18 of Paper II), which is the case for the models considered in figure 3.1. Therefore, the LTE approximation predicts that the midplane temperature scales as $`T_{\mathrm{mid}}\propto r^{-3/8}`$. In contrast, the surface temperature is proportional to $`T_{\mathrm{eff}}`$; therefore, $`T_{\mathrm{surf}}\propto r^{-3/4}`$. Figure 3.1 shows that these scalings do in fact hold approximately. The ratio of the lowest and highest radii is roughly 45, so if the LTE approximation held, the temperature would vary by a factor of 17 at the surface, and by a factor of 4.2 at the midplane. The model values are 14 and 4.6, respectively. Figure 3.1 displays the run of mass density for the individual annuli. The central density is lowest for the inner annuli, and increases with increasing distance from the black hole, while the density close to the surface exhibits more or less the reverse behavior. Note that the disk becomes optically thin below a column mass of $`m\approx \chi _R^{-1}\approx 2.9`$ g cm⁻², where the density is substantially lower (in some cases by orders of magnitude) than the midplane density. This should be borne in mind when comparing our results to those of previous workers who assumed constant density slabs (e.g. Laor & Netzer 1989). An explanation of the behavior of the density is given in the Appendix. A more interesting behavior is exhibited by the H I and He II ground state number densities, displayed in figures 3.1 and 3.1. For hot, inner annuli, hydrogen remains ionized throughout the entire vertical extent of the disk, while starting with the annulus at $`r=40r_g`$ (with $`T_{\mathrm{eff}}\approx 15,000`$ K), there is an appreciable portion of neutral hydrogen in the outer layers ($`m<10^{-2}`$ g cm⁻²). Consequently, the Lyman edge opacity for these latter models becomes rather large (see below). Similarly, helium exhibits a transition, between $`r=12r_g`$ (with $`T_{\mathrm{eff}}\approx 36,000`$ K) and $`r=14r_g`$ (with $`T_{\mathrm{eff}}\approx 32,500`$ K), from being predominantly doubly ionized to a situation where singly ionized helium is the dominant stage of ionization in some outer part of the disk. For the annuli with $`r<40r_g`$ (with $`T_{\mathrm{eff}}>15,000`$ K), helium remains singly ionized throughout the entire upper part of the disk, while for more distant, cooler annuli, neutral helium becomes more and more dominant in the outer layers. Figure 3.1 displays the emergent flux for all annuli. The upper panel shows the non-LTE models, while the lower panel shows the LTE models. The behavior of the emergent flux is analogous to that discussed at length in Papers I and II.
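The quoted LTE factors can be checked in two lines from the scalings above: with a ratio of outermost to innermost radius of about 45, $`T_{\mathrm{surf}}\propto r^{-3/4}`$ and $`T_{\mathrm{mid}}\propto r^{-3/8}`$ give

```python
ratio = 45.0                 # ratio of largest to smallest computed radius
print(ratio ** (3.0 / 4.0))  # ~17: expected spread of the surface temperature
print(ratio ** (3.0 / 8.0))  # ~4.2: expected spread of the midplane temperature
```

in agreement with the factors of 17 and 4.2 quoted above (the actual model values are 14 and 4.6).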
Non-LTE models exhibit the He II Lyman edge in emission for $`r<12r_g`$ (with $`T_{\mathrm{eff}}>36,000`$ K), while for more distant annuli the flux in the He II Lyman continuum drops to almost zero. This is a consequence of the transition from the dominance of doubly-ionized helium to singly-ionized helium, as displayed in figure 3.1. Analogously, the neutral-hydrogen annuli for $`r>40r_g`$, with $`T_{\mathrm{eff}}<15,000`$ K (see figure 3.1), show a significant hydrogen Lyman edge in absorption. A comparison between LTE and non-LTE results reveals several interesting effects: in LTE, the He II Lyman edge appears to be in weaker emission or even in absorption for hot, doubly-ionized annuli. Most importantly, non-LTE effects reduce the hydrogen Lyman edge (cf. Sun & Malkan 1989, Shields & Coleman 1994, Störzer et al. 1994, Papers I and II): the edge predicted in non-LTE models is typically in weaker emission for emission edges, and in weaker absorption for absorption edges. We discuss the behavior of the hydrogen Lyman edge in the full disk-integrated spectra in more detail in Sect. 3.6 below. Finally, figure 3.1 shows a comparison of the predicted local non-LTE flux and the blackbody flux corresponding to the same effective temperature. As already discussed in Paper I, the non-LTE models exhibit a much wider spectral energy distribution than blackbodies. Consequently, an often-used mapping of a given spectral region to a certain radial position in the disk, which may be used for a blackbody flux because of its sharp variation with frequency, is much less satisfactory for more realistic non-LTE vertical structure models. Another crucial feature of the present models is that compared to blackbodies the energy is redistributed. At low temperatures, absorptive opacity is relatively important, so that flux is shifted from frequencies where the opacity is high to frequencies where it is low; i.e., away from ionization edges. Another way of viewing this effect is to note that in bands where the opacity is low, escaping photons are created deeper inside the disk where the temperature is higher. At high temperatures, the opacity becomes scattering dominated. In these rings, there is a trade-off between the effect just mentioned (lower opacity bands allow one to see photons created at higher temperature) and scattering blanketing (which retards the escape of photons created deep inside). The latter effect is the one that creates a “modified blackbody” spectrum when there is no temperature gradient. For much of the parameter range of interest, the net effect is to shift flux toward higher frequencies. ### 3.2. Consistency of Models Having computed the detailed vertical structure of the annuli, we have to address the question of self-consistency of the model assumptions. First, we check that the computed disks are indeed geometrically thin, i.e., the disk height, $`H`$, is much smaller than the radial coordinate $`r`$. In figure 3.2 we show the ratio of the disk height to the radial coordinate, $`H/r`$, as a function of the radial coordinate (in units of gravitational radius), for a disk with $`M_9=1`$, and for various values of $`\dot{M}`$ (or luminosity). The behavior of disks for other values of the black hole mass is almost identical. (This is expected for radiation pressure supported, electron scattering dominated disks, because then $`H/r`$ can be written as $`L/L_{\mathrm{Edd}}`$ times a function of $`r/r_g`$, cf. equation 53 of Paper II.)
Only for the most luminous disk does the ratio $`H/r`$ approach 10% at $`r/r_g=2.5`$. The height of other annuli is lower, and the maximum height for less luminous disks is progressively lower. These results are in good agreement with the older models of Laor & Netzer (1989, cf. their figure 1 and the surrounding discussion). We therefore conclude that our disks do indeed satisfy the thin disk approximation. Another important concern is the presence of vertical density inversions within the disk. Since we neglect convection, sharp temperature gradients and density inversions are found at ionization transitions occurring in regions where gas pressure contributes significantly to support against gravity. In figure 3.2 we display the structure of several annuli that illustrate this behavior. The hottest annulus shown in that figure ($`r/r_g=50`$) has $`T_{\mathrm{eff}}=9400`$ K; it is the outermost ring with no density inversion. Cooler, more distant annuli show a progressively stronger inversion. The inversion is created by the abrupt fall in electron density when H recombines; in order to maintain the pressure gradient required for hydrostatic balance, the mass density must increase to compensate. A sharp rise of local temperature, displayed in the upper panel of figure 3.2, is caused by a rapid increase of opacity when going inward due to an ionization front. The temperature is essentially a function of the Rosseland mean optical depth, $`T\propto \tau _{\mathrm{Ross}}^{1/4}`$, but since $`\tau _{\mathrm{Ross}}`$ is a sharp function of the column mass $`m`$, the function $`T(m)`$ also exhibits a sharp increase with $`m`$. These models should be taken with caution. In the absence of detailed hydrodynamical calculations which would allow for convective instability and determine the vertical structure properly, we do not know what the correct emergent radiation is. We can nevertheless obtain a rough indication of model uncertainties by comparing the predicted flux from the non-LTE models that neglect convection with a blackbody flux corresponding to the same effective temperature. We present in figure 3.2 the emergent flux for three selected annuli shown in figure 3.2, for $`r/r_g=50,90,160`$, with corresponding effective temperatures equal to 9400, 6200, and 4100 K, respectively. The predicted flux for non-LTE models is shown together with the corresponding blackbody flux. We see that the models at the hotter end of the density inversion sequence exhibit an appreciable Balmer edge, while the cooler models are reasonably well approximated by blackbodies. We thus feel that approximating even cooler models, which we do not compute here, by blackbodies is probably not worse than other approximations that underlie the entire calculation. ### 3.3. Disk-integrated Spectra In this section, we present disk-integrated spectra for selected disk models. The full grid of models is not presented here, but is available to interested researchers upon request. Note that in all the spectral energy distributions shown in this paper, the quantity $`L_\nu `$ is the specific luminosity that an observer along a particular viewing angle would infer the source to have if it were isotropic, i.e. if $`F_\nu `$ is the measured specific flux and $`d`$ is the distance to the source, then $`L_\nu \equiv 4\pi d^2F_\nu `$. In the subsequent plots, we display, as is customary, the quantity $`\nu L_\nu `$, which represents a luminosity per unit logarithmic interval of frequency (photon energy).
As discussed above, we consider the spectra of the outer annuli which were not computed (those with $`T_{\mathrm{eff}}<4000`$ K) to be given by the blackbody energy distribution corresponding to the effective temperature of the annulus. An important parameter is the outer cutoff of the disk. In order to avoid problems with an improper choice of the outer cutoff, we have chosen the cutoff radius $`r_{\mathrm{out}}`$ in such a way that $`T_{\mathrm{eff}}(r_{\mathrm{out}})`$ is equal to a specific limiting temperature, $`T_{\mathrm{lim}}`$. We have chosen $`T_{\mathrm{lim}}=1000`$ K, which guarantees that the total emergent flux at $`\nu =10^{14}`$ Hz, the lowest frequency considered in our integrated disk spectra, is influenced by the neglected cooler annuli with $`T_{\mathrm{eff}}<T_{\mathrm{lim}}`$ only at the level of a few per cent. Figure 3.3 shows the effect of the outer cutoff, as well as of the degree of approximation in the treatment of cool annuli, for the disk model displayed in figure 3.2 ($`M_9=2`$; $`\dot{M}=1M_{\odot }`$ yr⁻¹, i.e., $`L/L_{\mathrm{Edd}}=0.07`$). Three cutoff radii are shown: $`r_{\mathrm{out}}/r_g=60`$, which is the radius where the density inversion sets in; $`r_{\mathrm{out}}/r_g=160`$ (the radius where $`T_{\mathrm{eff}}`$ reaches our minimum value of 4000 K); and finally the default value which corresponds to $`T_{\mathrm{lim}}=1000`$ K, which in this case happens at $`r_{\mathrm{out}}/r_g\approx 1060`$. Neglecting all annuli cooler than 9000 K (dotted line) does not influence the flux blueward of the Balmer limit, but the optical and IR flux are seriously underpredicted. Neglecting all non-computed annuli cooler than 4000 K (dashed lines) produces the correct flux in the region of the Balmer edge, and very nearly the correct flux in the optical range ($`\nu >4\times 10^{14}`$ Hz). The effect of uncertainties in the models with density inversion ($`T_{\mathrm{eff}}<9000`$ K) is estimated by comparing the bold lines, which show spectra computed for non-LTE models down to $`T_{\mathrm{eff}}=4000`$ K, with the corresponding thin lines, which show the predicted integrated spectra when all annuli with $`T_{\mathrm{eff}}\leq 9000`$ K are assumed to emit locally as blackbodies. The maximum difference in the integrated spectra is found redward of the Balmer edge; the non-LTE models of cool annuli produce an integrated flux that is about 16% higher than that corresponding to replacing the $`T_{\mathrm{eff}}<9000`$ K annuli by blackbodies. Therefore, the effect is not very large, which gives us confidence that our overall integrated spectra are not significantly influenced by the uncertainties associated with the density inversions present in the cool annuli. However, the predicted feature at the Balmer edge should be viewed with caution. We first present integrated spectra for a disk of given mass and luminosity, $`M_9=1`$, $`L/L_{\mathrm{Edd}}=0.15`$ (i.e. $`\dot{M}=1M_{\odot }`$ yr⁻¹), and for various values of the inclination angle $`i`$, ranging from $`\mathrm{cos}i=0.99`$ (i.e., the disk seen almost face-on), to $`\mathrm{cos}i=0.01`$ (the disk seen almost edge-on) – figure 3.3. We can clearly see the impact of relativistic boosting and aberration: the flux at the highest frequencies (radiated where the disk is most relativistic) is boosted strongly for viewing angles near the disk plane. Also for such viewing angles, gravitational light bending counteracts the Newtonian “$`\mathrm{cos}\theta `$” projection effect.
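The choice of the outer cutoff can be estimated without the full models. Far from the hole, $`T_{\mathrm{eff}}\propto r^{-3/4}`$, so $`T_{\mathrm{eff}}(r_{\mathrm{out}})=T_{\mathrm{lim}}`$ gives $`r_{\mathrm{out}}=r_{\mathrm{ref}}(T_{\mathrm{ref}}/T_{\mathrm{lim}})^{4/3}`$. A sketch under this Newtonian assumption (relativistic corrections neglected, names ours):

```python
def outer_cutoff(r_ref, t_ref, t_lim=1000.0):
    """Newtonian estimate of the cutoff radius: with T_eff ~ r^(-3/4) far from
    the hole, T_eff(r_out) = T_lim gives r_out = r_ref (T_ref/T_lim)^(4/3)."""
    return r_ref * (t_ref / t_lim) ** (4.0 / 3.0)

# The M_9 = 2, Mdot = 1 M_sun/yr model: T_eff = 4000 K at r/r_g = 160
print(outer_cutoff(160.0, 4000.0))   # ~1016, close to the quoted r_out/r_g ~ 1060
```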
In the following, we present the integrated spectra for one value of inclination. We chose $`\mathrm{cos}i=0.8`$ (i.e., $`i\approx 37^{\circ }`$), which is a value relatively close to face-on, and which thus may serve as a typical value for type 1 AGN and QSOs based on unification arguments (e.g., Krolik 1999a). Figure 3.3 shows the integrated spectra for one particular value of the black hole mass, $`M_9=1`$, and for various luminosities (mass accretion rates). Full lines display non-LTE models, while the dotted lines display LTE model predictions. The spectral energy distribution is hardest for the largest luminosity. The non-LTE effects are most important in the He II Lyman continuum ($`\nu >1.36\times 10^{16}`$ Hz), and also for predicting the detailed shape of the hydrogen Lyman edge for intermediate and low luminosities. In figure 3.3 we display the sequence of predicted spectra for models with a fixed mass accretion rate (i.e., total luminosity) and varying black hole mass. The energy distribution is progressively shifted towards more energetic photons for lower masses, because disks around less massive holes have smaller radiating surface areas. The non-LTE effects are important for all disks; for hotter ones the largest departures from LTE are seen in the He II Lyman continuum, and in cooler ones in the hydrogen Lyman continuum. Figure 3.3 shows the spectral energy distribution for a sequence of disk models which have the same $`T_{\mathrm{eff}}(r/r_g)`$ distribution. Since $$T_{\mathrm{eff}}^4\propto M\dot{M}r^{-3}R_R(r/r_g)\propto M^{-2}\dot{M}(r/r_g)^{-3}R_R(r/r_g),$$ (4) the same $`T_{\mathrm{eff}}`$ distribution is obtained for models with fixed $`\dot{M}/M^2`$. If disks radiated as blackbodies, all the spectra of the sequence would be exactly the same, only vertically shifted in the absolute value of the emergent flux. Indeed, the spectra are similar in the long-wavelength (optical and IR) portion of the spectrum, but they differ appreciably in the UV and EUV spectral region. In particular, the Lyman edges of hydrogen and He II change their appearance significantly. Note that the non-LTE and LTE models for lower black hole masses in this figure are very nearly the same. This is expected, as the average density scales as $`\rho \propto M/\dot{M}^2`$ times some function of $`r/r_g`$ for radiation pressure dominated annuli. Hence for fixed $`\dot{M}/M^2`$, $`\rho `$ scales as $`M^{-3}`$, implying that the lower black hole mass models in figure 3.3 have higher average densities and should therefore be closer to LTE. Finally, in figure 3.3 we present a sequence of models with the same $`L/L_{\mathrm{Edd}}`$. Since $`L/L_{\mathrm{Edd}}\propto \dot{M}/M`$, we chose a sequence where $`M_9`$ and $`\dot{M}`$ (in $`M_{\odot }`$ yr⁻¹) have the same value; the corresponding $`L/L_{\mathrm{Edd}}=0.15`$. Again, the shape of the spectrum is quite similar in the optical and IR regions, while there is a progressively larger portion of EUV radiation for lower black hole masses. The non-LTE effects are extreme for high-mass holes in the He II Lyman continuum, for which the LTE models predict virtually zero flux. #### 3.3.1 Effects of changing the viscosity parameter $`\alpha `$. In figure 3.3.1 we compare the predicted spectra for disk models with $`M_9=1`$ for two values of the viscosity parameter $`\alpha `$. Although the spectra exhibit some differences, the effect of different $`\alpha `$ is very small in the optical and UV region; the only appreciable differences are found in the He II Lyman continuum.
This result is very encouraging because it shows that the effect of the ad hoc viscosity parameter $`\alpha `$ is rather small, and therefore the predicted spectra are not influenced significantly by this inherent uncertainty. Similar conclusions were reached in Paper II, albeit for a few representative annuli. The sense of the net effect is that larger $`\alpha `$ leads to a lower density and thus larger departures from LTE, which cause a somewhat higher flux in the He II Lyman continuum. #### 3.3.2 Schwarzschild black holes Finally, we present several representative spectra for the case of a Schwarzschild black hole. Again, the full set of spectra is available upon request. In figure 3.3.2 we compare the predicted spectra for disk models with $`M_9=1`$, for the maximally rotating Kerr and Schwarzschild black holes. The total luminosity of the corresponding pairs of models is equal; the mass accretion rate is therefore a factor of 5.613 higher for the Schwarzschild case to adjust for the different efficiencies. The Kerr spectra tend to extend to higher frequencies for several reasons, all arising from the fact that their disks extend down to smaller radii. As a result, the maximum effective temperature found in the disk is higher, and the relativistic effects strengthening the high frequency spectrum away from the axis are also greater. This completes our presentation of the overall integrated spectra of our grid of models. We now address some of their observational implications. ### 3.4. Comparison with Other Models We first compare the spectrum of an LTE model with that computed by Sincell & Krolik (1998). Figure 3.4 shows a comparison between one of their spectra and ours, computed for the same parameters. The agreement is satisfactory, as the spectra differ by at most 20% near the peak. The differences that do exist may be due to a number of factors, ranging from technical numerical contrasts in the methods to the detailed physical assumptions made in the two papers. In figure 3.4 we show a second comparison, this time to a Laor & Netzer (1989; Laor 1990) model, kindly supplied by A. Laor. We choose a model with $`M_9=1`$, $`\dot{M}=1`$ $`M_{\odot }`$ yr⁻¹ (i.e., $`L/L_{\mathrm{Edd}}=0.15`$), $`\alpha =0.01`$, and two values of $`\mathrm{cos}i`$: 0.8 and 0.2. We have integrated the local disk spectra out to a cutoff radius $`r_{\mathrm{out}}/r_g=217.8`$ to agree with Laor’s value. The predicted spectra are generally similar, although there are several interesting differences. Our models produce larger flux in the immediate vicinity of the He II Lyman edge (due to non-LTE effects leading to an emission edge for the hottest annuli), but a lower flux for the highest frequencies (likely because Laor & Netzer take into account the effects of self-irradiation of the disk). Our models produce lower flux in the UV and optical regions, which is a consequence of the different vertical structure and of non-LTE effects. Although the Laor & Netzer models do not simply assume local blackbody flux, figure 3.1 is nevertheless quite indicative because it shows that local blackbodies also produce a significantly larger flux in the UV and optical region. At IR wavelengths both models coincide because both use local blackbody flux for the cool annuli. ### 3.5. Optical/Ultraviolet Colors A common criticism of accretion disk models is that if they are to produce substantial ionizing photon flux, then they should have blue optical/ultraviolet colors.
This is based on the long wavelength, low frequency behavior of a blackbody disk, which has $`F_\nu \propto \nu ^\beta `$, with $`\beta =1/3`$. We address the issue of ionizing photon flux in section 3.8 below, but here we wish to point out that our disks have quite red optical/ultraviolet colors. Figure 3.5 shows the logarithmic spectral slope $`\beta `$ as measured between 1450$`\mathrm{\AA }`$ and 5050$`\mathrm{\AA }`$, where $`\beta `$ is defined in terms of the two corresponding frequencies by $$\frac{F_{\nu 1}}{F_{\nu 2}}=\left(\frac{\nu _1}{\nu _2}\right)^\beta .$$ (5) Our disk models have colors near the median value $`\beta =-0.32`$ for bright quasars (Francis et al. 1991). Indeed, even disks with local blackbody emission have such red colors for these accretion rates and masses (Koratkar & Blaes 1999). This is because the temperatures are cool enough that the 1450–5050$`\mathrm{\AA }`$ spectra are not in fact in the long-wavelength limit. Note that our model spectra can be somewhat bluer than blackbody disks, but they are still sufficiently red to explain the colors observed in bright quasars. Figure 3.5 shows some representative Kerr disk models viewed at $`i=37^{\circ }`$ compared to the Francis et al. (1991) composite quasar spectrum. The shorter wavelength composite spectrum of Zheng et al. (1997), scaled to match the Francis et al. spectrum at 1285 $`\mathrm{\AA }`$, is also shown. This latter composite is thought to be more trustworthy than the Francis et al. composite at wavelengths shorter than the Ly$`\alpha `$/NV line because of corrections for intervening absorbers. The models were chosen to have the right color based on figure 3.5, and then a least squares fit was done to determine a single multiplicative normalization factor. The fit used supposedly line-free continuum windows of the Francis et al. (1991) composite spectrum, as defined by their figure 7: 1283-1289$`\mathrm{\AA }`$, 1321-1329$`\mathrm{\AA }`$, 1455-1475$`\mathrm{\AA }`$, 2196-2208$`\mathrm{\AA }`$, 2236-2246$`\mathrm{\AA }`$, 3024-3036$`\mathrm{\AA }`$, 3928-3936$`\mathrm{\AA }`$, 4035-4045$`\mathrm{\AA }`$, 4150-4220$`\mathrm{\AA }`$, and 5464-5476$`\mathrm{\AA }`$. Note that the models shown have $`\dot{M}/M^2`$ ranging from 1/16 to 1/2, with the lower mass models having the smaller values of this quantity. This is consistent with the behavior shown in figure 3.3: models with fixed $`\dot{M}/M^2`$ have slightly bluer optical/UV colors for decreasing black hole mass. These fits demonstrate that, while it is easy to recover the overall red color of the Francis et al. (1991) composite spectrum, explaining the shorter wavelength far ultraviolet emission seen in the Zheng et al. (1997) composite is more problematic. Two of the models shown in figure 3.5 do in fact bracket the Zheng et al. composite, but they turn over at the shortest wavelengths shown in the figure. This might be a problem in view of the fact that an extrapolation of the Zheng et al. composite joins up with the ROSAT soft X-ray composite of Laor et al. (1997), implying that there is no cutoff. However, it is also conceivable that some other choice of model parameters (disk inclination, black hole spin) might work better. Notice also that Zheng et al. suggested that in order to explain the Lyman continuum flux one has to invoke the presence of a Comptonizing corona with temperature about $`4\times 10^8`$ K and optical depth of the order of unity.
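Equation (5) translates directly into a two-point color measurement; a minimal sketch with our own names:

```python
import math

C_CGS = 2.998e10   # speed of light, cm/s

def spectral_slope(f_nu_1450, f_nu_5050):
    """Logarithmic slope beta of eq. (5), F_nu ~ nu^beta, measured between
    1450 A and 5050 A; beta = -0.32 corresponds to the quasar median color."""
    nu_1 = C_CGS / 1450.0e-8   # Hz
    nu_2 = C_CGS / 5050.0e-8
    return math.log(f_nu_1450 / f_nu_5050) / math.log(nu_1 / nu_2)

# Example: a mildly red continuum, F_nu(1450 A) / F_nu(5050 A) = 2/3
print(spectral_slope(1.0, 1.5))   # ~ -0.32
```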
In any case, composite spectra made from many sources may of course be unphysical, but these fits probably give some indication of how our models will fare in explaining data from individual sources. It is also noteworthy that the low luminosity models shown in figure 3.5 have rather strong spectral features in the region of the hydrogen Lyman limit, which are generally not observed in quasars. However, the models with the lowest luminosities are probably not representative of the quasars that make up the composite spectra. We now address this important issue of Lyman edges in AGN. ### 3.6. The Lyman Edge Region Quasars and active galactic nuclei are observed to have almost no intrinsic spectral features near the hydrogen Lyman limit (e.g. Antonucci, Kinney, & Ford 1989; Koratkar, Kinney, & Bohlin 1992), and this has been a longstanding problem with accretion disk models (e.g. Krolik 1999a, Koratkar & Blaes 1999). To quantify the strength of Lyman edge features in our model disk spectra, we calculate the relative change in flux at $`\pm 50\mathrm{\AA }`$ across the edge according to $$\frac{\mathrm{\Delta }F_\lambda }{F_\lambda }\equiv \frac{F_{962\mathrm{\AA }}-F_{862\mathrm{\AA }}}{\mathrm{min}[F_{962\mathrm{\AA }},F_{862\mathrm{\AA }}]}.$$ (6) Hence $`\mathrm{\Delta }F_\lambda /F_\lambda `$ is positive for an absorption edge and negative for an emission edge. Figure 3.6 shows the variation of this quantity with accretion luminosity and disk inclination angle for all our models around a $`M_9=1`$ Kerr hole. Figure 3.6 shows the same thing but for a fixed inclination angle of 37° and varying black hole masses. As was already apparent from the overall spectra shown in previous sections, substantial Lyman absorption edges are present in all our low luminosity disk models at modest, near face-on viewing angles. However, high luminosity models have greatly reduced edges, particularly for the lower mass black holes. Because of their lower effective temperatures, higher mass black holes require higher Eddington ratios before the absorption edges are removed. It is important to emphasize that the absorption and emission edges can be smeared out by the varying Doppler shifts and gravitational redshifts in the accretion flow around the black hole when the viewing direction is at least somewhat off-axis. This is illustrated in figures 3.6 and 3.6, which show the actual spectral energy distributions of some of our models in the Lyman limit region. The high luminosity models, which are probably most relevant for observed quasars, show very little in terms of sharp changes in flux, and our edge strength parameter shown in figures 3.6 and 3.6 really reflects an overall spectral slope, not an emission edge, in this wavelength region. These high luminosity models are still capable of explaining the observed red colors of quasars, as shown in figures 3.5 and 3.5. Reduction of the Lyman edge feature is caused both by relativity and by summing over emission and absorption edges from the individual annuli at different radii. Only relativity (i.e. Doppler shifts) can smear out a flux discontinuity, however. We have tried integrating spectra without the relativistic transfer function for the $`M=10^9M_{\odot }`$, $`\dot{M}=1M_{\odot }`$ yr⁻¹, $`\mathrm{cos}i=0.8`$ case and found that without relativity, a substantial emission edge discontinuity exists in the integrated spectrum at 912 $`\mathrm{\AA }`$.
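The edge-strength measure of equation (6) is equally simple to evaluate on a tabulated spectrum; a minimal sketch:

```python
def lyman_edge_strength(f_962, f_862):
    """Relative flux change across the hydrogen Lyman limit, eq. (6):
    positive for an absorption edge, negative for an emission edge."""
    return (f_962 - f_862) / min(f_962, f_862)

print(lyman_edge_strength(1.2, 1.0))   # absorption edge: +0.2
print(lyman_edge_strength(1.0, 1.2))   # emission edge:   -0.2
```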
Relativity is not sufficient to smear out absorption edges in the low luminosity models, because the absorption edges are very strong in the spectra of all annuli in such models. Instead, the edges are simply blue shifted or red shifted away from 912 $`\mathrm{\AA }`$. Sharp changes in slope are present in the low luminosity Kerr spectra near $`750\mathrm{\AA }`$ for $`\mathrm{cos}i=0.5`$, and in most of the spectra near $`840\mathrm{\AA }`$ for $`\mathrm{cos}i=0.8`$. In both cases these are at substantially shorter wavelengths than the Lyman limit because of Doppler blue shifts, but they might nevertheless be observable in quasar spectra. From our models, we have calculated the maximum local change in logarithmic slope in the region 812-1012 Å, and the results are illustrated in figure 3.6 for a viewing angle of 37°. We choose a sign convention such that a positive value of the slope change indicates a spectrum that becomes steeper as the frequency increases, i.e., positive slope change is associated with absorption at the edge. All models display a similar behavior at this viewing angle, so we only discuss the $`M=10^9M_{\odot }`$ case in detail (cf. figure 3.6 and the bold curve in figure 3.6). At both low and high accretion rates, the maximum slope change occurs at $`840\mathrm{\AA }`$ for this black hole mass. As the accretion rate diminishes, $`\mathrm{\Delta }(d\mathrm{ln}F_\lambda /d\mathrm{ln}\lambda )`$ grows, reflecting an increasingly stronger smeared absorption edge. The reverse is true at high accretion rates. At intermediate accretion rates, the maximum slope discontinuity shifts to $`900\mathrm{\AA }`$, where the spectra change from positive to negative slopes, reflecting a slight maximum in the flux around this wavelength for these models. In conclusion, the disk-integrated theoretical spectra for high Eddington ratio ($`L/L_{\mathrm{Edd}}`$) disks do not show significant features at the Lyman edge at 912 Å, for either Kerr or Schwarzschild black holes. The only associated feature is a change of the slope of the Lyman continuum, blue shifted by $`100-200`$ Å, depending on the inclination, mass, and to some extent on the black hole spin. It is likely that even this feature will be affected by additional physics, particularly metal line blanketing, which we will address in a future paper. ### 3.7. Polarization In addition to computing spectra, we have also calculated the polarization in our complete grid of models, and this information is also available on request. Once again, no ad hoc assumptions are made here: the polarization is computed exactly in the radiative transfer calculation. In order to keep the parameter space as simple as possible, we do not include the effects of Faraday rotation by magnetic fields in the photosphere of the accretion disk, which can be important in modifying the polarization signature (e.g. Agol, Blaes, & Ionescu-Zanetti 1998). Figure 3.7 shows the degree and position angle of the polarization for various inclination angles for our $`\alpha =0.01`$, $`\dot{M}=1M_{\odot }`$ yr⁻¹ disk models around a $`M=10^9M_{\odot }`$ Kerr hole. These results are quite similar to those of Laor, Netzer, & Piran (1990). In particular, the plane of polarization is parallel to the disk plane at optical/UV frequencies, but rotates at higher frequencies due to general relativistic effects. Our predicted polarizations are higher than those of Laor et al. (1990), and our polarization generally dips redward and rises blueward of each continuum edge (cf.
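A plausible numerical rendering of the slope-change statistic used above is sketched below; the exact discretization of the original analysis is not specified, so this is only one reasonable reading of it, and whether a positive value signals absorption depends on matching the text's sign convention to the ordering of the wavelength array.

```python
import numpy as np

def max_slope_change(wavelengths, f_lambda, lo=812.0, hi=1012.0):
    """Largest local change of d(ln F_lambda)/d(ln lambda) in the 812-1012 A
    window; wavelengths in Angstroms, assumed sorted in increasing order."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    ln_w = np.log(wavelengths[mask])
    ln_f = np.log(f_lambda[mask])
    slope = np.gradient(ln_f, ln_w)      # local logarithmic slope
    jump = np.diff(slope)                # change of slope between adjacent points
    i = np.argmax(np.abs(jump))
    return jump[i], wavelengths[mask][i]
```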
especially the hydrogen Balmer edge and He II Lyman edge). These differences are due largely to our more careful treatment of the vertical structure, the radiation field anisotropy, and the overall effects of absorption opacity. Polarization near the Lyman limit of hydrogen has attracted considerable recent interest due to the observation of steep rises in some quasars (Impey et al. 1995, Koratkar et al. 1995, Koratkar et al. 1998). For illustration purposes, we show the degree of polarization at a viewing angle of 60° predicted by our models for $`M_9=1`$ black holes in figure 3.7. This figure should be compared to figure 3.6, which shows the corresponding spectra. Cooler disk models generally produce large polarizations, even larger than that for a pure electron scattering atmosphere (2.25% at this viewing angle, Chandrasekhar 1960). The reason for this is the enhanced anisotropy (limb darkening) of the radiation field due to vertical thermal source function gradients (Blaes & Agol 1996). However, steep rises in polarization, and a steep rise in polarized flux as observed in PG 1630+377 (Koratkar et al. 1995), are not produced by our models. This is partly due to the smearing and rotation of the plane of polarization by the relativistic transfer function (cf. Shields, Wobus, & Husfeld 1998). The optical/UV polarization shown in figure 3.7 (in degree, position angle, and wavelength dependence) is generally not observed in AGN and quasars (Berriman et al. 1990, Antonucci et al. 1996). Once again, it is likely that the polarization of the radiation field will be affected by additional physics. In addition to Faraday rotation (which usually suppresses polarization), the additional absorption opacity from metal line blanketing in this region of the spectrum will probably reduce the polarization. Dust and electron scattering at larger distances from the continuum source can also modify the polarization signature (Kartje 1995). ### 3.8. Ionizing Continua The radiation from accretion disks is often thought to supply most of the photoionizing continuum for the broad and narrow line regions of AGN. In view of this important application, we present the number of photons in the H I and He II ionizing continua for disks with a range of accretion rates and inclination angles around $`10^9M_{\odot }`$ Kerr holes in figures 3.8 and 3.8. For comparison, we also show the ionizing continua for the corresponding models with local blackbody emission. For this particular black hole mass, our models generally predict somewhat fewer ionizing photons in the hydrogen Lyman continuum than the corresponding blackbody disks. The exception is for near face-on disks at high luminosities. The reason is that at each annulus in the disk, our models generally have fewer low energy photons and more high energy photons compared to a blackbody at the same effective temperature (cf. figure 3.1 and section 3.1 above). This can reduce the hydrogen Lyman continuum, while at the same time increasing the He II Lyman continuum. Indeed, figure 3.8 shows that, except at low luminosities, our disk models generally produce more He II Lyman continuum photons than local blackbody disks. The reason for the dearth of photons at low luminosities compared to blackbody models is the strong absorption edges present in these models.
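Given a disk-integrated spectrum $`L_\nu `$, the ionizing photon rates shown in these figures amount to the integral $`Q=\int _{\nu _0}^{\mathrm{\infty }}(L_\nu /h\nu )\,d\nu `$ above each edge. A minimal quadrature sketch (names ours; edge frequencies approximate):

```python
import numpy as np

H_PLANCK = 6.626e-27   # erg s

def ionizing_photon_rate(nu, l_nu, nu_edge):
    """Photon number rate above an ionization edge,
    Q = integral over nu > nu_edge of L_nu / (h nu), by trapezoidal quadrature
    on the tabulated spectrum (nu assumed sorted in increasing order)."""
    mask = nu >= nu_edge
    x = nu[mask]
    y = l_nu[mask] / (H_PLANCK * x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Approximate edge frequencies: H I at ~3.29e15 Hz; He II at four times that.
NU_HI, NU_HEII = 3.29e15, 4.0 * 3.29e15
```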
Note that the hydrogen Lyman continuum is limb darkened at high accretion rates, peaks at intermediate viewing angles at moderate accretion rates, and is extremely limb brightened at low accretion rates (due to the high Doppler shifts overcoming the intrinsic absorption edges). The He II Lyman continuum is also limb brightened at all but the highest accretion rates, where it peaks at intermediate viewing angles. These intrinsic anisotropies in the ionizing continuum produced by the disk may have important implications for photoionization models of the broad line region of AGN (cf. the much simpler anisotropy model of Netzer 1987) and for the statistics of AGN samples (e.g. Krolik & Voit 1998). ## 4. Conclusions We have presented a grid of AGN disk models for a wide range of the basic parameters, the black hole mass and the mass accretion rate, for two values of the viscosity parameter $`\alpha `$ (0.01 and 0.1), and for two values of the black hole angular momentum: a maximally rotating Kerr black hole with $`a/M=0.998`$, and a Schwarzschild black hole with $`a/M=0`$. The basic aim of the present study was to construct a benchmark grid of models, based on simple, “classical” approximations, to which all our future, more elaborate models will be compared. The most important physical approximations that define the classical model are the following: (i) The energy is generated by turbulent viscous dissipation, with the vertically-averaged viscosity described through the Shakura-Sunyaev parameter $`\alpha `$. (ii) The vertical dependence of kinematic viscosity is described through a two-step power law in the column mass. (iii) Convection and conduction are neglected. (iv) No external irradiation or self-irradiation of the disk is considered. (v) Electron scattering is treated as coherent (Thomson) scattering, i.e., the effects of Comptonization are neglected. (vi) Thermal opacity and emissivity of H and He only are included here, i.e., the effects of metals are neglected. (vii) Effects of line opacity are neglected. The underlying assumption, not listed here, is that the 1-D approach is appropriate, i.e., that the disk may be described as a set of mutually non-interacting, concentric annuli. What is treated exactly in this study, however, is the simultaneous solution of all the structural equations, without making any approximations concerning the behavior of the radiation intensity. Likewise, no a priori assumptions about atomic level populations (e.g. LTE) are made. The local electron temperature and density, mass density, and atomic level populations are determined self-consistently with the radiation field. Once the local spectra of all annuli are computed, the spectrum of the full disk is found by integrating the emergent intensity over the disk surface using our relativistic transfer function code. We can summarize some of the overall spectral features of our benchmark grid as follows. Compared to multitemperature blackbody accretion disk models, our spectra generally have lower fluxes at low frequencies and higher fluxes at high frequencies. This difference is amplified further by relativistic effects, which are strongest for edge-on disks in Kerr spacetimes. Disks with different accretion rates around different mass black holes do not exhibit the same spectral energy distribution even if they have the same effective temperature distribution. Spectral slopes in the optical/UV region are significantly redder than the canonical $`F_\nu \propto \nu ^{1/3}`$ low frequency accretion disk spectrum.
Non-LTE effects are important in all but the highest density disks, enhancing the He II Lyman continuum and generally reducing the strength of features near the H I Lyman limit. H I Lyman edge discontinuities are only present in the cooler, low luminosity disk models. High Eddington ratio models exhibit no discontinuities, but they do show sharp changes in spectral slope, albeit at wavelengths substantially blue shifted from 912 $`\mathrm{\AA }`$. We stress again that by neglecting convection, the “cool” models ($`T_{\mathrm{eff}}<9,000`$ K) may be significantly altered; therefore the predicted optical and IR continuum flux (and, in particular, the Balmer edge region) should be used with caution. Finally, our models show substantial wavelength dependent optical/UV polarization which is parallel to the disk plane, a result which is likely to be modified by the effects of Faraday rotation and additional sources of opacity, both of which can suppress this polarization. In future papers of this series, we will systematically relax more and more of the classical assumptions listed above. Among these, relaxing assumptions (vi) and (vii) is straightforward, since our computer program TLUSDISK is fully capable of handling these situations. The only concern is that generating such models will require much more computer time than required for the present models. Also, one has to collect a large amount of atomic data, but we will profit enormously from already existing collections made for the purposes of modeling stellar atmospheres, or from data being included in current photoionization codes. Relaxing approximation (v) is less straightforward, but we have recently solved the problem and implemented Comptonization in TLUSDISK. Also, approximation (iv) may in principle be easily relaxed by adjusting the surface boundary conditions of each annulus. However, relaxing assumptions (i)-(iii) is much more difficult. Here, we have to rely on detailed magnetohydrodynamic simulations to guide us in how to describe convection, and how to choose the most appropriate parameterization of viscosity. Nevertheless, even before such simulations are available, we can investigate phenomenologically the impact of a dissipation rate that varies with altitude within the disk. The existence of disk coronae suggests, for example, that the heating rate may increase with height, leading to a possible temperature inversion in the upper layers of disk atmospheres. Besides this purely theoretical motivation for improving disk models, we will also use the present grid of models to analyze a large volume of observed quasar spectra. Such a study will bring interesting results whether or not the models actually fit the data. If they do, this will be a strong argument in support of the accretion disk hypothesis, and of the adequacy of our theoretical description of accretion disks. If not, such a study will provide us with important clues as to which aspects of the theoretical description should be improved, and/or what other observational constraints will be needed in the future to settle these questions. We thank Ari Laor for providing us with his spectral models for comparison with those presented here. This work was supported in part by NASA grant NAG5-7075. ## Appendix - Density Structure of the Disk We consider here some details of the density structure in the case of a radiation pressure supported disk, without assuming that the gas pressure is totally negligible.
We write the vertical hydrostatic equilibrium equation as (see Paper II for details of the formulation) $$\frac{dP_{\mathrm{rad}}}{dz}+\frac{dP_{\mathrm{gas}}}{dz}=-Q\rho z,$$ (1) where $`P_{\mathrm{rad}}`$ and $`P_{\mathrm{gas}}`$ are the radiation and gas pressure, respectively, $`\rho `$ is the mass density, $`z`$ is the vertical distance from the disk midplane, and $`Q=(GM/r^3)(C/B)`$ \[in the notation of Paper II; in the notation of Krolik (1999a), $`C/B=R_z(r)`$\], where $`B`$, $`C`$ and $`R_z(r)`$ are the appropriate relativistic corrections. The radiation pressure gradient can be written (Paper II; Krolik 1999a) $$\frac{dP_{\mathrm{rad}}}{dz}=-\frac{\rho \chi _\mathrm{H}}{c}F_{\mathrm{rad}},$$ (2) where $`F_{\mathrm{rad}}`$ is the total (frequency-integrated) radiation flux, and $`\chi _\mathrm{H}`$ is the flux-mean opacity. For most applications, the flux-mean opacity is well approximated by the Rosseland-mean opacity, $`\kappa _\mathrm{R}`$. We consider here the case where the local opacity is fully dominated by electron scattering, in which case $`\kappa _\mathrm{R}`$ is constant and roughly equal to 0.34. (In fact, it is not exactly constant because it depends on the degree of ionization – see Paper II; however, we neglect this small effect here.) Further, we introduce (Krolik 1999a) $$F_{\mathrm{rad}}(z)=F_{\mathrm{rad}}^0f(z)\equiv \sigma _BT_{\mathrm{eff}}^4f(z),$$ (3) where $`F_{\mathrm{rad}}^0`$ is the total radiation flux at the surface, which is expressed through the effective temperature. We express the gas pressure through the sound speed, $`c_s`$, as $`P_{\mathrm{gas}}=c_s^2\rho `$. The sound speed is given by $$c_s^2=\frac{k}{\mu m_\mathrm{H}}\frac{N}{N-n_\mathrm{e}}T,$$ (4) where $`k`$ and $`m_\mathrm{H}`$ are the Boltzmann constant and the mass of the hydrogen atom, respectively; $`\mu `$ is the mean molecular weight (i.e., the mean mass of a heavy particle per hydrogen atom; in our case of a H-He atmosphere with a solar helium abundance $`\mu =1.4/1.1=1.27`$); $`N`$ is the total particle number density, and $`n_\mathrm{e}`$ is the electron density. The sound speed thus depends primarily on the temperature, and partly also on the degree of ionization \[via the term $`N/(N-n_\mathrm{e})`$, which varies from 1 – for a neutral medium, to 2.1 – for a fully ionized solar-composition H-He plasma\]. Substituting equations (2), (3), and (4) into (1), we obtain $$\frac{1}{Q}\frac{dc_s^2}{dz}+\frac{c_s^2}{\rho Q}\frac{d\rho }{dz}=H_rf(z)-z,$$ (5) where we introduce the radiation-pressure scale height, $`H_r`$, as $$H_r\equiv \frac{\kappa _RF_{\mathrm{rad}}^0}{cQ}=\frac{\kappa _R\sigma _BT_{\mathrm{eff}}^4}{cQ},$$ (6) which has the meaning of the disk height in the case of negligible gas pressure (see Paper II and Krolik 1999a). We shall consider two cases, (a) the gas pressure is completely negligible, and (b) the gas pressure is taken into account, although it is still smaller than the radiation pressure. In the former case, we take $`P_{\mathrm{gas}}=0`$, i.e. $`c_s=0`$, and thus the l.h.s. of Eq. (5) is zero. We are left with $$H_rf(z)-z=0.$$ (7) At the surface, $`f(z_0)=1`$ regardless of the dissipation law, and therefore $`z_0=H_r`$, so the disk height is exactly equal to the radiation pressure scale height. However, the density cancels out exactly because both the gravity force and radiation force are linearly proportional to density (e.g., Krolik 1999a), so the density is undetermined by the hydrostatic equilibrium equation.
In usual treatments of the radiation-pressure-supported disk, Eq. (7) is assumed to hold everywhere in the disk, from which it follows that $`f(z)=z/H_r`$. In our approach, however, we assume that $`f`$ is a known function of the column mass, $`m`$. In the case of constant kinematic viscosity (i.e., the dissipation rate proportional to density), we have (see Paper II) $`f=1-m/m_0`$, where $`m_0`$ is the column mass at the midplane, $`m_0=\mathrm{\Sigma }/2`$. This gives a linear relation between $`z`$ and $`m`$, and since $`\rho =dm/dz`$, we obtain $`\rho (z)=\mathrm{const}=\rho _0`$ in the region of constant dissipation (99% of the total column mass of the disk in our models). In our numerical procedure, we in fact solve Eq. (5) exactly, without assuming that the left-hand-side is negligible, and with $`f`$ given as a known function of $`m`$ (although not of $`z`$, because $`z`$ is one of the state parameters to be solved as a function of $`m`$). To be able to write down a simple analytic solution in the case of non-negligible gas pressure, we approximate the function $`f(z)`$ as $$f(z)=\{\begin{array}{ll}z/H_0,\hfill & \mathrm{for}\ z<H_0,\hfill \\ 1,\hfill & \mathrm{for}\ z\geq H_0.\hfill \end{array}$$ (8) In other words, we assume that the radiation flux linearly increases with $`z`$ until a certain height $`H_0`$, and then remains constant. Such a behavior is roughly observed in the numerical simulations. Close to the midplane $`z/H_0=1-m/m_0`$, so that $`\rho _0=(dm/dz)_{z=0}=m_0/H_0`$. Therefore, $`H_0=m_0/\rho _0`$. Within the present analytical model, $`H_0`$, and thus $`\rho _0`$, are quantities to be determined by the constraint of total column density (see below). We write $$\frac{c_s^2}{Q}=\frac{c_s^2(z=0)}{Q}q(z)\equiv \frac{H_g^2}{2}q(z),$$ (9) where $`H_g`$ has the meaning of the gas-pressure scale height corresponding to the midplane conditions (temperature and degree of ionization), and $`q(z)`$ is a correction parameter that accounts for a dependence of $`c_s`$ on $`z`$. The reason why $`H_g`$ is called the gas-pressure scale height is that in the case of negligible radiation force (i.e. $`f(z)\equiv 0`$ everywhere), the solution of Eq. (5) is given by $$\rho (z)\approx \rho _0\mathrm{exp}[-(z/H_g)^2],$$ (10) where $`\rho _0\equiv \rho (z=0)`$ is the density at the midplane. In the following, we neglect the dependence of sound speed on height, i.e. we assume $`q(z)=1`$. One can derive appropriate analytical expressions even for a more realistic case at the expense of complicating the analytical formulas considerably, but the present approximation is satisfactory from the point of view of describing the basic physical picture. A similar analysis was already presented by Hubeny (1990). Equation (5) then reads, using Eq. (9), $$\frac{1}{\rho }\frac{d\rho }{dz}=\left(\frac{H_r}{H_0}-1\right)\frac{2z}{H_g^2},\quad \mathrm{for}\ z<H_0,$$ (11) $$\frac{1}{\rho }\frac{d\rho }{dz}=(H_r-z)\frac{2}{H_g^2},\quad \mathrm{for}\ z\geq H_0,$$ (12) which has the solution $$\rho (z)=\rho _0\mathrm{exp}\left[-\left(1-\frac{H_r}{H_0}\right)\left(\frac{z}{H_g}\right)^2\right],\quad \mathrm{for}\ z<H_0,$$ (13) and $$\rho (z)=\rho _0\mathrm{exp}\left[-\left(\frac{z-H_r}{H_g}\right)^2\right]\mathrm{exp}\left[-\frac{H_r}{H_g}\frac{H_0-H_r}{H_g}\right],\quad \mathrm{for}\ z\geq H_0.$$ (14) The scale height $`H_0`$ is now determined from the condition $`\int _0^{\mathrm{\infty }}\rho (z)\,dz=m_0`$. Substituting Eqs.
Substituting Eqs. (13) and (14), we obtain after some algebra $$(2/\sqrt{\pi })h_0=\sqrt{h_0/\delta _0}\,\mathrm{erf}\left(\sqrt{h_0\delta _0}\right)+\mathrm{exp}(-h_r\delta _0)\,\mathrm{erfc}(\delta _0),$$ (15) where $$h_0\equiv H_0/H_g,\qquad h_r\equiv H_r/H_g,$$ (16) and $$\delta _0=h_0-h_r,$$ (17) where the error function $`\mathrm{erf}(x)`$ is defined by $`\mathrm{erf}(x)\equiv (2/\sqrt{\pi })\int _0^x\mathrm{exp}(-t^2)\,dt`$, and the complementary error function by $`\mathrm{erfc}(x)\equiv (2/\sqrt{\pi })\int _x^{\infty }\mathrm{exp}(-t^2)\,dt=1-\mathrm{erf}(x)`$. Equation (15) is a transcendental equation for $`h_0`$; however, an approximate solution is found to be $$h_0\approx h_r+1/h_r.$$ (18) If $`h_r\gg 1`$ (i.e., $`H_r\gg H_g`$), one obtains $`h_0\approx h_r`$, i.e., $`H_0\approx H_r`$, and thus $`\rho (z)\approx \rho _0`$ for $`z<H_r`$; i.e., one recovers the solution for negligible gas pressure. If, however, $`H_g`$ is not completely negligible with respect to $`H_r`$, we obtain $`H_0>H_r`$, and the density falls off exponentially with increasing height even in the midplane layer, with a scale height $`H_g/(1-H_r/H_0)^{1/2}`$ (see Eq. 13). From the expressions for the respective scale heights we see that $`H_r`$ is weakly dependent on $`r`$ ($`H_r\propto D/C`$, so it depends on radial distance only through the relativistic corrections), while $`H_g\propto (T_0/Q)^{1/2}`$, where $`T_0`$ is the temperature at the midplane. The latter scales as $`T_0\propto m_0^{1/4}T_{\mathrm{eff}}\propto r^{-3/8}`$, and since $`Q\propto r^{-3}`$, we obtain finally $`H_g\propto r^{21/16}`$, i.e., it increases rapidly with radial distance. For the models displayed in figure 3.1 we have, e.g., at $`r/r_g=5`$ (which is well in the domain of complete dominance of radiation pressure), $`H_r=2.63\times 10^{13}`$ cm, and $`H_g=5.52\times 10^{11}`$ cm, i.e., $`h_r\approx 48`$. We thus have $`H_0\approx H_r`$, which is indeed verified by the numerical model. The last (coolest) model displayed there corresponds to $`r/r_g=90`$, and we have $`H_r=6.42\times 10^{13}`$ cm, and $`H_g=2.67\times 10^{13}`$ cm, i.e., $`h_r\approx 2.4`$. The correction $`1/h_r`$ to $`h_0`$ is no longer negligible; we obtain $`h_0\approx 2.8`$, i.e., $`H_0\approx 7\times 10^{13}`$ cm (the exact value following from the numerical model is $`H_0=7.7\times 10^{13}`$ cm), and the density should show an $`\mathrm{exp}[-(z/H)^2]`$ decay with $`z`$, with $`H\approx 9.3\times 10^{13}`$ cm. This is indeed roughly consistent with the numerical model.
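Equation (15) itself is easily handled numerically. The sketch below (assuming SciPy is available) brackets and solves it for the two values of $`h_r`$ quoted above, and compares the root with the rough estimate (18), which serves here only as an order-of-magnitude guide to the correction $`\delta _0`$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf, erfc

def eq15_residual(h0, hr):
    """RHS minus LHS of Eq. (15), with delta_0 = h0 - hr as in Eq. (17)."""
    d0 = h0 - hr
    lhs = (2.0 / np.sqrt(np.pi)) * h0
    rhs = (np.sqrt(h0 / d0) * erf(np.sqrt(h0 * d0))
           + np.exp(-hr * d0) * erfc(d0))
    return rhs - lhs

for hr in (2.4, 48.0):
    # The residual is positive just above h0 = hr and negative for large
    # h0, so the root can be bracketed in (hr, hr + 5).
    h0 = brentq(eq15_residual, hr * (1.0 + 1e-9), hr + 5.0, args=(hr,))
    print(f"h_r = {hr:4.1f}:  root of Eq. (15) h_0 = {h0:.3f},  "
          f"estimate (18) h_0 = {hr + 1.0 / hr:.3f}")
```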
# An Energy Feedback System for the MIT/Bates Linear Accelerator ## 1 Introduction and Motivation Beam energy stability is of fundamental importance in any scattering experiment. Not only are the results of such experiments sensitive to fluctuations in beam energy, but energy variations can significantly affect transmission through beam line elements that transport the beam to the experimental area. Instability can therefore result in beam losses, the subsequent creation of large backgrounds in the experimental detectors, and, especially if reliable extraction of observables depends critically on proper background subtraction, an increase in systematic uncertainties. In addition, beams are often injected into internal storage rings, where mismatch between the injected beam energy and the ring energy can again result in the significant losses and large backgrounds associated with beam scrape-off. It is also true that the energy of pulsed electron beams can be susceptible to slow drifts, and, especially if DC supply voltages and power are coupled to the AC line, to fluctuations at 60 Hz. These issues are important at the MIT/Bates Linear Accelerator Center, where pulsed electron beams with energies of up to 1 GeV are generated for transport to two main experimental areas and for injection into the new South Hall Ring (SHR). In order to improve beam energy stability, we have recently designed and implemented a feedback system capable of adjusting the beam energy in response to both 60 Hz fluctuations and slow drifts. A beam position monitor (BPM) installed in a dispersive region of the beam line allows the energy of each pulse to be measured. The BPM signal, which is sensitive to changes in the amplitude or phase of any of the twelve klystrons that supply radio frequency power for the accelerator, is the system’s single “dial”. A phase shifter, the system’s single “knob,” is installed on one of the klystrons to allow small, rapid, computer controlled energy adjustments to effectively compensate for those changes. This work was primarily motivated by a need to minimize beam energy variations on both short and long time scales for two experimental initiatives at MIT/Bates. First, the SAMPLE experiment requires a beam of exceptional stability to measure the parity violating spin-dependent cross section asymmetry of only a few parts per million in elastic electron scattering at backward angles from unpolarized hydrogen and deuterium targets.During test runs, it was found that uncontrolled beam energy fluctuations led to scrape-off on a pair of energy limiting slits upstream of the SAMPLE apparatus. Sensitivity to the small parity violating observable was reduced by the background subsequently created in the detectors. Second, the maintenance of the beam’s energy and position within narrow limits is a prerequisite for the efficient operation of the SHR, a vital component of a new program of internal target experiments with the Bates Large Acceptance Spectrometer Toroid (BLAST). The discussion is divided into four parts. In Section 2, we describe particular features of the accelerator and the beam at MIT/Bates that had significant impact on the design of the feedback system. We discuss in Section 3 the components of the feedback system and their functions in detail. In Section 4, we present results that demonstrate the marked improvement in beam energy stability with the energy feedback system. Finally, in Section 5, we summarize our most significant results and conclusions. 
## 2 Beam Structure and Beam Energy Instability At Bates, beams of electrons are accelerated with longitudinal electric fields oscillating in a series of accelerating cavities at a radio frequency (RF) of 2856 MHz. RF power is delivered to the cavities through wave guides from up to twelve klystrons, and beams of different energies are prepared by adjusting the RF amplitude in each klystron. Electrons are injected at all RF phases, but those injected outside of a $`120^{\circ }`$ phase window are immediately “chopped” or deflected into a metal beam stop by a transverse RF electric field. The remaining electrons are “bunched,” or compressed with a longitudinal RF electric field into a phase window of about $`2^{\circ }`$. The RF phases of all but one klystron are optimized with respect to a common reference signal so that the crest of the RF field in each accelerating cavity coincides with the $`2^{\circ }`$ beam “bunches” as they pass through it. Klystron 6B, designated the “vernier”, is shifted away from its crest. The phase of the vernier can be adjusted, compensating for drifts in the RF phase or amplitude of the other klystrons. The RF transmitters deliver power only in short bursts followed by a substantial recovery period. A duty cycle, about 1% for the Bates facility, is therefore superimposed on the RF microstructure of the beam. Typically, the accelerator is configured for beam pulses of 3-25 $`\mu `$s in duration at a rate of 600 Hz. This time structure matches the frequency with which the beam polarization can be reversed at the Polarized Electron Source (PES). At the PES, electrons are photoemitted from a GaAs crystal illuminated by circularly polarized laser light with a helicity that can be chosen randomly at 600 Hz. The beam pulses are synchronized with the laboratory’s 60 Hz AC line voltage, so that every pulse is associated with one of ten “time slots,” $`1\leq n\leq 10`$, each of which has a unique phase angle $`\frac{n\pi }{5}`$ with respect to the 60 Hz AC power cycle. The first time slot is triggered by the positive-going zero crossing of the AC line voltage. Ideally, the properties of the beam would be independent of time slot. However, because DC power to the RF transmitters and the magnetic beam line elements is not perfectly isolated from the AC power, beam properties can fluctuate in ways that are highly correlated with the 60 Hz AC line, and therefore with time slot. For example, the strong dependence of beam energy on time slot is shown in Fig. 1. These data, obtained over a period of about 10 s with the energy feedback system disabled, show the beam’s fractional deviation $`\mathrm{\Delta }ϵ/ϵ`$ from the nominal beam energy for each time slot. The 60 Hz AC line voltage is superimposed to show the relative timing of each pulse and to set the vertical time scale. This figure demonstrates that beam energy variation in a single time slot can be at least an order of magnitude smaller than its variation over an entire power cycle. Overall variation can therefore be reduced significantly by a system capable of applying rapid time slot dependent energy corrections. Beam energy is also subject to fluctuations at other frequencies, particularly slow drifts with characteristic times between 10 s and 1000 s. In Fig. 2, for example, the RF phase of the electric field in one of the klystrons with respect to the common RF reference is shown to correlate with temperature variations in the water that provides cooling to the transmitters and magnets. 
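The time-slot bookkeeping just described is easy to make concrete. The following sketch is purely illustrative (the ripple and drift amplitudes are invented numbers of roughly the magnitudes quoted later in the paper, not measured Bates data):

```python
import math

F_PULSE = 600.0            # beam pulse rate [Hz]
N_SLOTS = 10               # time slots per 60 Hz AC power cycle

def slot_of(pulse_index):
    """Time slot n = 1..10; slot n sits at phase n*pi/5 of the AC cycle."""
    return pulse_index % N_SLOTS + 1

def energy_deviation(pulse_index, t):
    """Fractional energy deviation: a 60 Hz ripple locked to the slot
    phase, plus a slow thermal drift.  Both amplitudes are invented."""
    phase = slot_of(pulse_index) * math.pi / 5.0
    ripple = 1.5e-3 * math.sin(phase)                      # slot-correlated
    drift  = 1.0e-3 * math.sin(2.0 * math.pi * t / 300.0)  # ~5 min scale
    return ripple + drift

for i in range(N_SLOTS):
    print(f"pulse {i}: slot {slot_of(i):2d}, "
          f"dE/E = {energy_deviation(i, i / F_PULSE):+.2e}")
```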
Uncompensated RF phase variations can lead to beam energy variations that, in turn, often result in unacceptable background levels. Our feedback system is designed to compensate for both types of beam energy instability. Required of the system is the ability to * monitor the beam energy $`ϵ_n`$ in each time slot $`n`$, and accurately determine its average over a time interval $`\mathrm{\Delta }t`$ which is small compared with the characteristic period of slow drifts; * estimate, for each time slot, the difference $`\mathrm{\Delta }ϵ_n`$ between the beam energy $`ϵ_n`$ and some ideal energy $`ϵ_0`$; * estimate and store, for each time slot, the phase shift $`\mathrm{\Delta }\varphi _n\approx -\mathrm{\Delta }ϵ_n(\frac{\partial ϵ}{\partial \varphi })^{-1}`$ required to minimize $`|\mathrm{\Delta }ϵ_n|`$; * shift the phase of the energy correction klystron sufficiently in advance of each time slot so that the phase is stable before the beam is injected. ## 3 Instrumentation and Operation There are three main components of the energy feedback system. First, energy is monitored with a BPM installed between the two dipole pairs of a magnetic chicane located downstream of the accelerator and shown schematically in plan view in Fig. 3. The chicane disperses the beam horizontally in the region between the dipole pairs. By definition, the trajectory of electrons with the “central” energy passes equidistant from a pair of moveable, heavy metal, water cooled slits, positioned to the left and right of the center of the beam line. Electrons of higher energy and higher rigidity follow shorter trajectories than electrons of the central energy and therefore pass through the chicane to beam left. In contrast, lower energy electrons follow longer trajectories that pass through the chicane to beam right. Typically, the slits are positioned symmetrically with a separation of 33 mm, limiting the beam energy spread in the chicane to $`\pm 0.5\%`$. The BPM, located 1.5 m downstream of the slits, is a non-resonant RF cavity perpendicular to the beam with a diode at each end. The RF pulse structure of the beam induces oscillation in both diodes, with a relative RF phase proportional to the beam’s displacement from the center of the cavity. Due to the correlation of 33 mm/% between horizontal beam position and energy, the phase is also proportional to the beam energy’s relative deviation from the central energy. The error signal, a voltage proportional to this phase, is produced by the BPM’s output stage, and is integrated over the duration of each pulse and digitized in a 16 bit ADC. The intrinsic position resolution of this BPM is of order 50 $`\mu `$m, with typical output voltages, before amplification, of about 3 mV/mm displacement. The second component of the feedback system is a remotely controlled ferrite core phase shifter with a 12 bit digital interface. This digital phase shifter (DPS) is installed on the vernier klystron. Changes in the RF phase stabilize in about 1 ms and can be made in increments as small as 3 mr. The third component is a computer controlled interface between the error signal and the phase shifter. The interface performs three functions. Two of the three functions, data acquisition and energy correction, are controlled by a low level microcode executing CAMAC read and write instructions in synchronization with the 600 Hz pulse rate. Data analysis, the third function of the interface, is performed asynchronously by a separate program. 
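Functionally, the analysis program must implement the four requirements itemized above. The class below is a minimal sketch of that logic, not the actual Bates software: all names are invented, and the gain and phase range anticipate the measured sensitivity (about 0.075%/degree) and the $`45^{\circ }`$ software limit quoted in the operating procedure described next.

```python
class EnergyFeedback:
    """Per-time-slot phase correction logic; a sketch, not Bates microcode."""

    N_SLOTS = 10
    PHASE_RANGE_DEG = 45.0     # software limit on the vernier phase
    GAIN = 0.075               # measured d(epsilon)/d(phi), in %/degree

    def __init__(self, pulses_per_update=1000):
        self.pulses_per_update = pulses_per_update
        # Each slot starts in the middle of the allowed phase range.
        self.phase_deg = [self.PHASE_RANGE_DEG / 2.0] * self.N_SLOTS
        self.errors = [[] for _ in range(self.N_SLOTS)]

    def record_pulse(self, slot, dE_percent):
        """Store the BPM error (in % of the ideal energy) for one pulse."""
        self.errors[slot].append(dE_percent)
        if sum(len(e) for e in self.errors) >= self.pulses_per_update:
            self.update_phases()

    def update_phases(self):
        """Average the accumulated errors slot by slot, then shift each
        phase to cancel the mean deviation, clipped to the allowed range."""
        for n, errs in enumerate(self.errors):
            if errs:
                mean_err = sum(errs) / len(errs)
                target = self.phase_deg[n] - mean_err / self.GAIN
                self.phase_deg[n] = min(self.PHASE_RANGE_DEG,
                                        max(0.0, target))
        self.errors = [[] for _ in range(self.N_SLOTS)]
```

In the real system the phase update is, of course, double-buffered through CAMAC memory modules so that the 600 Hz microcode is never interrupted, as described next.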
The flow of information between the components of the energy feedback system is indicated schematically by the diagram in Fig. 4. Before enabling the feedback system, the accelerator is prepared according to a standard procedure. First, the ten digital bit patterns that encode the phase shift for each time slot are initialized to the middle of their full range, which is limited in software to $`45^{\circ }`$. Next, all klystrons are phased with respect to the common RF reference. To first order, the beam energy is then independent of small drifts in phase from the RF crest. The phase of the vernier klystron is then manually shifted away from its crest by about $`22^{\circ }`$, half of the system’s full range. In this configuration, beam energy is quite sensitive to automatic adjustments by the feedback system of the vernier’s RF phase. Occasionally, the beam operators adjust one or more of three software parameters in order to optimize the performance of the system. One of these parameters is the ideal energy $`ϵ_0`$, toward which the feedback system is programmed to drive the actual beam energy. This ideal energy is chosen to optimize the experimental running conditions, and must be within 0.5% of the central energy. Otherwise, the beam will collide with the energy limiting slits. The operators can also select the number of beam pulses to be sampled before a new set of phase shifts is determined. This affects the speed with which the system corrects the energy, and the statistical precision of the corrections. A third parameter specifies the sign and magnitude of the phase shifts with respect to beam energy deviations. The value of this parameter, used to control overshoot and undershoot, has normally been obtained from an experimental measurement of $`\frac{\partial ϵ}{\partial \varphi }`$ and has been found to be near 0.0043%/mr (0.075%/degree). However, its optimum value has been shown to vary by at most 10% for widely varying beam tunes and is now rarely remeasured. Following these preparations, energy feedback is enabled. At 1.6 ms prior to each pulse, the CAMAC system extracts a digital bit pattern corresponding to the phase shift for the next time slot from one of ten locations in a LeCroy 8206A CAMAC memory module (MM1) and transfers the pattern to a TTL output register (DSP PR-612) connected to the phase shifter’s digital interface. Within 1 ms, the phase shift of the vernier klystron is stable. After each pulse, the CAMAC system reads the digitized BPM error signal along with a label identifying the current time slot. The digitized data are packaged and distributed to the analysis program at 7.5 Hz. For each time slot $`n`$, the analyzer computes an average error $`\mathrm{\Delta }ϵ_n`$ and a phase shift $`\mathrm{\Delta }\varphi _n`$ from a total accumulation of about 1000 beam pulses (100 in each time slot). These new phase shifts are encoded as digital bit patterns and downloaded to the CAMAC system. However, because the analyzer operates asynchronously, a download of the bit patterns directly into the memory module MM1 can interrupt the 600 Hz access of the microcode to its contents. To prevent this, the new bit patterns are written asynchronously into ten locations of a separate memory module MM2. When all ten patterns have been loaded, an additional flag is set to indicate a Data Ready condition. Prior to the first beam pulse in a ten time slot sequence, the Data Ready condition triggers the transfer to MM1 of the new bit patterns in MM2. 
The transfer is controlled by the microcode, requires 400 $`\mu `$s, and is completed 1 ms before the next memory access. Although the system’s frequency response would normally be limited to about 1 Hz by the time required to acquire a statistically significant sample, it effectively compensates for 60 Hz fluctuations by simultaneously implementing an independent feedback loop for each of the ten time slots. Separate analog signals, corresponding to the highest and lowest of the ten digital phase shift values for a given sample cycle, are piped from the CAMAC acquisition hardware to the main accelerator control room and monitored as a function of time. Should slow but constant energy drifts cause the feedback algorithm to approach either its high or low software limit, the beam can be rephased, and the digital bit patterns can be reset for all time slots to the center of the feedback system’s software range. ## 4 Energy Feedback Performance Two figures demonstrate the ability of the system to meet its design goals. The first, Fig. 5, shows the behavior of the beam on a time scale of a few seconds as a function of time slot, both with and without the energy feedback system. In each of the two panels the data are averaged over an interval of about 10 s. With feedback disabled, beam energy fluctuates over the range of the AC power cycle by 0.3% (upper panel). With feedback enabled and, in this example, adjusted for an ideal energy 0.4% above the central energy, fluctuations are controlled at the level of about 0.02% (lower panel). Moreover, RMS fluctuations per time slot are reduced by a factor of 2. The second, Fig. 6, shows the effect of the system over long periods of time for an individual time slot. With feedback disabled, the characteristic magnitude of slow beam energy drifts is about 0.2% of the central energy. However, the feedback system, enabled in this case for a set point equal to the central energy, effectively eliminates these drifts. ## 5 Summary and Conclusion Although 60 Hz AC line voltage fluctuations and slow, temperature dependent changes in accelerator hardware can induce energy instabilities, we have developed a reliable feedback system at the MIT/Bates Linear Accelerator which can compensate for these changes. Before the installation of this system, 60 Hz line voltage fluctuations induced energy fluctuations of up to 0.4%, and slow phase variations of thermal origin induced energy fluctuations of up to 0.2%. With the energy feedback system enabled, beam energy is measured on a pulse by pulse basis while rapid changes are made in the RF phase in one of the accelerating cavities, controlling energy fluctuations at the level of 0.02%. Two important sources of energy instability have been effectively eliminated, resulting in a beam with an energy spread that is limited only by the width of the energy distribution within a 3-25 $`\mu `$s beam pulse. This system provides improved beam stability, a decrease in background due to beam scrape-off during transport, and a simplification in the operation of the accelerator. The energy feedback system developed here has proven to be so successful that its use is now a routine part of the standard operating procedure for the MIT/Bates Linear Accelerator. Acknowledgements We gratefully acknowledge the help and support of the MIT/Bates electrical engineering and RF groups. We also thank the Rensselaer Polytechnic Institute group, the SAMPLE collaboration, and all others who participated in several development runs. 
This work was supported in part by the National Science Foundation and in part by the Department of Energy under Cooperative Agreement No. DE-FC02-94ER40818.A000.
# The Dimensions Of Field Theory : From Particles To Strings ## 1 Birth, Decline And Rebirth Of Field Theory If one must choose one single item of Twentieth Century Physics which stands out by the yardstick of most pervasive and decisive influence on its total development, Quantum Field Theory (QFT) certainly wins hands down. Historically, QFT was born out of the marriage of Relativity and Quantum Theory, at a hefty price of mathematical self-consistency underlying the celebrated Dirac Theory, whose full significance took several stages to unfold through the vicissitudes of logical deduction, going well beyond the immediate discovery of the positron. Indeed of far greater significance from the conceptual point of view, was the realization that the ”sea of negative energy states” was already a tacit admission of the failure of relativistic quantum mechanics of a single particle, in favour of a collective many-particle, or $`field`$ description, a fact which was to be driven home by Dyson in his Cornell lectures of 1952. And once this realization dawned on the pioneers, the Klein-Gordon theory of scalar particles found a natural place in the new scenario, at the hands of Pauli-Weisskopf (1934) who now found little difficulty in quantizing these bosonic particles just as easily as the Dirac theory had done for fermions. Thus was born ”Quantum Field Theory” (QFT) in its full glory, with Anti-matter playing a symmetrical role to Matter, irrespective of its fermionic or bosonic nature. \[Feynman’s brilliant positron theory was a bold attempt to resurrect the single particle quantum mechanics description via ”zigzag” diagrams (negative time propagation of negative energy electrons), but the more universal language of Field Theory eventually carried the day\]. QFT registered its first major success in the Covariant formulation of QED at the hands of Tomonaga and Schwinger on the one hand, and Feynman on the other, with Dyson playing the catalyst-role in synthesizing the two. This theory, in the course of circumventing unphysical infinities in the measurable quantities, gave rise to a new dogma of $`Renormalizability`$ which was to act as the yardstick of acceptability of theories to come. This dogma, together with the independent principle of ”Gauge Invariance” (already in-built in QED), were to be two pillars of QFT in its march towards greater victories to come, especially in the formulation of strong interaction theories on analogous lines to QED. This led to the Yang-Mills theory (1954) of $`SU(2)`$ gauge bosons as the non-abelian counterpart of QED. Yet the path of QFT for strong interactions was strewn with so many thorns that for a long time it simply refused to move. Indeed at the meson-baryon level there was a (temporary) disenchantment with QFT, in favour of a new paradigm of Hadron Democracy (somewhat akin to the Mach Principle) based on a selection of items from the full QFT package, viz., analyticity of S-matrix elements subject only to unitarity. This was the ”Bootstrap” Philosophy of Chew which held its ground for a brief period, until the discovery of Quark substructures (1963-64) brought the Hadrons down from the pedestal of elementary particles to a (more modest) composite status, which told (by hindsight) the reason why the Yang-Mills gauge theory of strong interactions had not worked at the composite hadron level of gauge fields. 
But now the quarks and gluons offered a fresh basis for the application of the Gauge Principle at a deeper level of elementarity; thus was born the non-abelian QCD for strong interactions (1973). ### 1.1 QFT in Action With the rebirth of QFT, three principal weapons in its arsenal that have stood the test of time may be listed as Renormalizability, Gauge Principle, and Spontaneous Symmetry Breaking. Indeed these are the 3 pillars on which rests the grand edifice of Field Theory encompassing the diverse phenomena of Nature within an integrated structure. In this entire development, Symmetry has all along played the role of the ”guiding light”, with a conservation law (Noether) associated with a specific symmetry (invariance) type (Lorentz, Gauge, Chiral), making use of the powerful language of Group Theory and Topology at its command. And in reverse, Symmetry breaking (spontaneous or dynamical), with its universal appeal, has been a key element in the understanding of a whole range of phenomena from condensed matter to particle physics and early Universe cosmology, with definite experimental and observational support. With the evolution of further unifying principles like $`duality`$ and $`Supersymmetry`$, QFT has greatly consolidated its grip on Particle Physics, and extended its frontiers beyond its traditional domains, to the new heights of String Theory to ”rein in” the formidable force of Gravity. These powerful weapons at the command of QFT have led to new insights which have helped reveal its hold on widely different branches of Physics (which had hitherto evolved on their own steam), a saga of victories in which the techniques of Path Integrals, Renormalization Group (RG) theory and Dirac/Bargmann constrained dynamics have played key roles. Especially noteworthy is the unification of the QFT language with those of Quantum Statistical Mechanics (QSM) and Condensed Matter Physics (CMP), leading all the way to Cosmology and Black Hole Physics (thanks to the unifying power of String Theory). Progress has been far from uniform in the different sectors of QFT. In the Particle Physics domain which represents the Ultimate Laboratory for testing the most profound concepts emanating from QFT, the current state of the art is symbolized by the Standard Model (SM) designed to unify the three gauge sectors of strong, electromagnetic, and weak interactions. Of these, the last two sectors are unified by the Glashow-Salam-Weinberg (GSW) model of $`SU(2)\times U(1)`$ comprising photons and weak bosons, while a proper understanding of the QCD sector for $`SU(3)`$ strong interaction still remains a distant goal. In this regard, a partial success has been achieved in the perturbative domain of asymptotic freedom, while the non-perturbative domain of $`Confinement`$ still remains elusive. This is reminiscent of the Churchillian phrase of ”so much effort being spent towards so little effect” that was often used for the two-nucleon problem in the fifties, leading Hans Bethe to invoke his famous ”Second Principle Theory” for effective nuclear interaction, which now seems to have shifted to the quark level. A part of the Book is devoted to the Strong Interaction problem of Quark Confinement from several different angles. And yet the methodology and techniques QFT have shown a striking capacity to handle the problems of widely different sectors of physics well beyond particle physics, without changing the thematic framework, often with great success. Examples are QSM and Toda FT for non-linear systems. 
Low dimensional Field Theories (2D, 3D) are often useful not only as soluble models designed to throw light on interesting features of QFT which often remain obscured in 4D form, but also as actual prototypes for systems moving in lower dimensions (e.g., 2D conformally invariant field theory adequately describes the long range behaviour of systems undergoing second order phase transitions). A striking example of 2D QFT is the Schwinger Model (both chiral and non-chiral) which has been subjected to deep scrutiny from several angles, each with rich dividends. Similarly $`(2+1)D`$ QFT, the so-called Chern-Simons (CS) Theory, has found rich applications in Quantum Hall Effect in Condensed Matter Physics. And most significantly from the point of view of formal QFT, Witten’s demonstration of an exact connection of $`(2+1)D`$ QFT with Jones Polynomials did fulfil the long-cherished goal of an exact (non-perturbative) solution of a gauge field theory, at least in 3 dimensions for the first time. ### 1.2 Scope of The Book This Book is an attempt to capture a cross section of the multifaceted flavour of QFT that has evolved over this Century, by putting together a collection (albeit subjective) of Articles by acknowledged experts in their respective fields. Admittedly, the selection is constrained by the accessibility of experts to $`this`$ Editor within a relatively short span of time, and of necessity leaves out many important areas of QFT studies. The style of each Article varies from author to author, but the emphasis by and large is on conceptual and logical aspects of QFT formalism to the topic under study, designed to be instructive for a fairly wide class of readership, (with enough access to references for those who wish to pursue a particular line further), while the actual details of QFT methodology, or applications to phenomenology are outside the scope of the Book. Its theme is defined through a subjectwise classification of its contents under the following heads, in accordance with the specific sector/sectors of physics intended for exposition: A): Basic Structure of QFT ($`RG`$ Theory; $`SM`$; $`SSB`$; Confinement) B): Topological Aspects of QFT C): Formal Methods in QFT (QSM; Toda FT; LF and Constrained Dynamics) D): Extension of QFT Frontiers (SUSY; CFT; String Theory) E): QFT In $`(2+1)D`$: CS Theory and Applications F): Methods of Strong Interaction in QFT G): Conclusion (Concerning Foundations of Quantum Theory) In this Introduction to the Book, an attempt has been made to organize the contents under these categories, with a section devoted to each. The topics are arranged in descending order from ‘classical’ to ‘evolving’, with the former playing the background to the latter. The narrative draws freely from the perspective and language provided by the Authors concerned, often without quotes. The referencing, (except for a few special papers which are cited in the running text), is left to the Articles concerned, whose authorships are listed in the order of their appearance in this narrative, in the bibliography at the end. ## 2 Some Core Aspects Of QFT In this Part we collect the Articles relating successively to RG Theory; Electroweak coupling in the Standard Model; the dynamics of Symmetry Breaking in different sectors of physics; and $`two`$ novel mechanisms of Confinement in QCD, one proposed by Gribov and one by Nishijima. 
### 2.1 RG Theory in QFT Although the $`Renormalization`$ strategy had originated in QED in the (limited) context of absorbing divergences in physical entities like mass and coupling constant (charge), it turned out that the concept itself has a deeper meaning with much wider ramifications, which was later to get formalized as Renormalization Group $`(RG)`$ Theory. The perturbative $`RG`$ was ”formulated by Stueckelberg-Petermann in 1952-53 as a group of infinitesimal transformations, related to finite arbitrariness arising in the S-matrix elements upon elimination of the UV divergences” (D.V.Shirkov ). In a parallel development, Gell-Mann-Low (1954) derived functional equations for the QED propagators in the UV limit, on the basis of Dyson’s (1949) renormalization transformations, but missed the ‘group character’ implied in these equations. Finally, Bogoliubov-Shirkov (1955-56) put both aspects together and derived the ”$`RG`$-equations” in a form which brings out the ‘scaling’ properties of the electron and photon propagators. Thus $`RG`$ invariance boils down to the invariance of a solution w.r.t. the manner of its parametrization. These equations were further developed and made more rigorous with mathematicians and physicists working in tandem, so that renormalization became a well-developed method at the computational level. But the underlying physical concepts behind these equations took some more time to unfold until after Kadanoff’s, and especially Wilson’s pioneering work on the understanding of the ”critical indices” in phase transitions brought out the real physics behind the $`RG`$ equations. Wilson’s work revealed the rich applicational potential of the $`RG`$ ideas in various fields of physics, from ‘critical phenomena’ (spin lattices, polymer theory, turbulence) in condensed matter physics, to QCD parameters like the strong coupling constant $`\alpha _s`$ and the ‘running mass’ $`m(p^2)`$. In particular, the discovery of Asymptotic freedom in QCD allowed physicists to produce a logically consistent picture of renormalization, one in which the perturbative expansions at any high energy scale can be matched with one another, without any need to deal with intermediate expansions in powers of a large coupling constant. Another important aspect of these $`RG`$ equations which has been emphasized by the Dubna School, is the concept of functional self-similarity in mathematical physics, which has led to applications like the study of strong non-linear regimes: asymptotic behaviour of systems described by non-linear partial differential equations; problem of generating higher harmonics in plasmas, and so on. The Book begins with a perspective Article by Dmitri Shirkov on all aspects of the subject, from an introduction to $`RG`$ in QFT to an overview of its methodology, together with applications of $`RG`$ ideas in some important arenas of physics. A relatively new approach to $`RG`$ theory, termed ”Similarity Renormalization Group” (SRG) was launched in this decade by Wilson and Glazek, as well as Wegner, and is based on the perception that divergences are in the first place due to the locality of the primary interactions. For a proper understanding of the features of the SRG theory, it is enough to consider only the non-relativistic quantum mechanics (the usual UV divergences of relativistic QFT are not relevant here!), where the locality condition on the potentials at all scales corresponds to taking only delta functions and their derivatives. 
The associated divergences can be regulated by introducing cut-offs whose effects may be removed by renormalization. In the SRG, the transformations that explicitly ”run” the cut-off parameter are developed. These similarity transformations are of course unitary, and constitute the group elements of SRG. They are characterized by a ”running” cut-off on energy differences (not states). If the Hamiltonian is viewed as a large matrix, these cut-offs limit the off-diagonal matrix elements, and as they are gradually reduced, the Hamiltonian is forced towards the diagonal form. The perturbation expansion of the transformed Hamiltonians contains no small energy denominators, so that the expansion does not break down unless the strengths of the interactions themselves are large. With the help of an associated concept of coupling coherence, SRG acquires respectability as a proper theory with the same number of parameters as the original (fundamental) theory. A review of the formalism and working of SRG is given by R J Perry, using as an example the exactly soluble case of a simple 2D delta function to act as a laboratory for testing the convergence of the SRG method in some detail. ### 2.2 Standard Model And Electroweak Coupling The Gauge Principle, as a central ingredient of QFT, needed to be supplemented with fresh ideas and paradigms, within its broad framework, to extend its tentacles further. One such idea was based on the degenerate structure of the vacuum, dominated by vales and hills, which crystallized eventually as a new theme termed ”Spontaneous Symmetry Breaking” $`(SSB)`$, together with its companion ”Dynamical Breaking of Chiral Symmetry” $`(DB\chi S)`$, which would now enable gauge fields to acquire mass in a subtle but self-consistent manner. Armed with this paradigm, the Gauge Theory registered a signal success in the Weak interaction sector, culminating in the Glashow-Salam-Weinberg $`(GSW)`$ Model of Electro-weak Interactions, which offered a unified view of weak and electromagnetic interactions in the form of an $`SU(2)\times U(1)`$ gauge theory. A more ambitious form of unification of the three principal gauge fields as a straightforward extension of the $`GSW`$, so as also to include the strong interaction (QCD) sector, under the umbrella of ”Grand Unification Theory” $`(GUT)`$, did not unfortunately bear fruit, so that, for the time being, the ”Standard Model” $`(SM)`$ has had to rest content with only a partial unification $`SU(3)\times SU(2)\times U(1)`$ of these gauge fields. Nevertheless this episode brings out a truism about the unpredictability of Nature, viz., its refusal to yield to a particular strategy for a second time, merely on the strength of its success on a previous occasion. In a highly instructive and self-contained Article, V Novikov gives a panoramic view of the conceptual and methodological framework of QFT (with the ingredients of gauge principle, renormalization group, and spontaneous symmetry breaking) that have been employed in the formulation of $`SM`$ for elementary particle physics. He dwells in particular on the Higgs mechanism for the generation of the fermion masses for several generations, and brings out the powers of ”loop corrections” in $`SM`$ to predict accurate bounds on the masses of as yet undiscovered particles. 
This is vividly illustrated by the ”correct” mass of the $`t(op)`$-quark $`ahead`$ of its experimental discovery, stringent limits on the Higgs mass from the ”Landau pole” structure of the running coupling constant, and the windows to the ”physics beyond $`SM`$” that such analyses provide. #### 2.2.1 Discrete Symmetries in SM An essential aspect of the Standard Model concerns the role of discrete symmetries $`P,C,T`$ in determining the structure of the electroweak coupling. This subject has had a long history since the original Lee-Yang discovery of $`P`$-violation, going through successive phases of chiral symmetry (Landau-Salam), $`CP`$ invariance (Lee-Oehme-Yang), its subsequent violation (Cronin-Fitch), and ipso facto (?) $`T`$-violation, a topic of intense experimental activity today. \[This last is of course an immediate consequence of $`TCP`$-invariance (Pauli-Lueders Theorem), which puts the existence of antiparticles exactly on par with particles\]. A brief state-of-the-art review of the subject by P K Kabir follows. ### 2.3 Dynamics Of Symmetry Breaking Just as ”Symmetry dictates interactions”- (C.N.Yang at the First Asia Pacific Conf, Singapore, 1983), the dynamical effects of its $`breaking`$ (whether spontaneously or dynamically) during out-of-equilibrium phase transitions is equally at the root of a whole range of phenomena from condensed matter to particle physics, and so on, all the way to early universe cosmology. Indeed the dynamics of non-equilibrium phase transitions and the $`ordering`$ process that occurs until the system reaches a broken symmetry equilibrium stage, have developed in tandem with controlled experimental techniques in many areas of condensed matter physics (binary fluids, ferromagnets, superfluids, liquid crystals), so as to provide a solid basis for describing the dynamics of phase ordering. In cosmology, measurements of Cosmic Microwave Background anisotropies, and the formation of large scale structures in the Universe, provide signatures for phase transitions during and after $`inflation`$. And at the accelerator energies (Brookhaven-RHIC or CERN-LHC), phase transitions predicted by QCD could occur out of equilibrium via pion condensates. In an instructive review on this subject, Boyanovsky and de Vega describe the relevant aspects of the dynamics of symmetry breaking in many areas of physics (from condensed matter to cosmology) vis-a-vis possible experimental signatures. In condensed matter, they address the dynamics of phase ordering, emergence of condensates, and dynamical scaling. In QCD, the possibility of disoriented chiral pion condensates arising from out-of-equilibrium phase transitions is considered. And in the early Universe, the dynamics of phase ordering in phase transitions, is described, especially the emergence of condensates and scaling in Friedmann-Robertson-Walker cosmologies, within a QFT framework. ### 2.4 Confinement: Supercharged Nucleus With the failure of $`GUT`$ theories to take care of the strong interaction sector $`SU(3)`$ of the Standard Model, the central issue of Confinement, which has had a long history of approaches ranging from the fundamental to effective types, still remains an unsolved problem. There is a vast literature on the subject, from Lattice QCD to various analytical methods for non-perturbative QCD. Of these, 2 novel approaches to Confinement, which are fairly self-contained, and stand out from the more conventional ones, are included in Part A, leaving the rest for Part F. 
The first concerns an analogy to a super-charged nucleus, based on an old work of Pomeranchuk and Smorodinsky (1940), which offers the possibility of binding a particle in a small region of space. This method was elaborated in a set of THREE ”Orsay Lectures” by the late Vladimir Gribov during 1992-94. The basic idea is that if the charge $`Z`$ in a nucleus $`N_Z`$ is larger than a critical value $`Z_c\approx 180`$, then this nucleus will decay to an atom of charge $`Z-1`$ and a positron: $`N_Z\to A_{Z-1}+e^+`$. If the product nucleus is unstable, the process gets repeated until the total charge of the final product is so small that further decay is impossible. Such a supercharged nucleus (a ‘resonance’) cannot exist freely, but only inside an atom, hence is reminiscent of a ‘confined’ state! The region of stability of such a ‘superbound’ atomic state, (mainly due to the Pauli principle), works out as $`r_0\ll r<1/m`$, where $`r_0`$ is the radius of the nucleus, and $`m`$ the electron mass. In these three lectures, which are reproduced in this Book through the courtesy of his long term Associates Dokshitzer, Ewarz and Nyiri, Gribov gives a leisurely exposition of the detailed working of this mechanism on the confinement of heavy, followed by light, quarks. These ideas have since been extended by the Dokshitzer Group in their subsequent publications hep-ph/9807224 and hep-ph/9902279, but these are outside the scope of this Book. ### 2.5 Confinement: BRST Mechanism The second approach concerns a perspective on confinement due to Nishijima who relates its mechanism to that of an unbroken non-abelian gauge symmetry in QCD. The logic of this method which was mostly pioneered by Nishijima, may be illustrated for the case of abelian QED as follows. Quantization of the e.m. field requires ”gauge-fixing”, say by a covariant (Fermi) gauge. This in turn requires introduction of the indefinite (Gupta-Bleuler) metric which, for the selection of physically observable states, must be eliminated by imposing the Lorentz condition on the state vector. There are now 4 kinds of photons (2 transverse, 1 longitudinal, and 1 scalar), of which the ‘scalar’ photon must have negative norm, so as to ensure manifest covariance of the quantization in the Minkowski space. Now to project out the physical subspace, one introduces a subsidiary (Lorentz) condition (a 4-divergence of a vector field) which represents a free, massless field even under interactions. The photons involved in this operator (called $`a`$-photons) are special combinations of longitudinal and scalar photons with zero norm. A second (orthogonal) combination (called $`b`$-photons) also can be arranged to have zero norm. However the inner product of $`a`$\- and $`b`$\- photons is non-zero; they are ‘metric partners’ (somewhat akin to the 4-vectors $`n_\mu `$, $`\stackrel{~}{n}_\mu `$ defining a covariant null-plane: $`n^2=\stackrel{~}{n}^2=0`$; $`n\cdot \stackrel{~}{n}=1`$). A physical state is defined as one that is annihilated by applying the positive frequency part of the Lorentz condition. And since the S-matrix in QED commutes with this 4-divergence, it transforms physical states into one another, without letting them out of this subspace which now includes only $`t`$ (transverse) and $`a`$-photons, but $`not`$ $`b`$-photons. However the inner product of a physical state with one $`a`$-photon, with another physical state (with or without an $`a`$-photon), vanishes identically. 
Thus $`a`$-photons give no contribution to observable quantities, and both $`a`$\- and $`b`$-photons escape detection! This is called $`confinement`$ of longitudinal and scalar photons in QED, a $`kinematical`$ phenomenon! In QCD, on the other hand, not only $`a`$\- and $`b`$-gluons, but also the $`t`$-gluons are unobservable, giving a $`dynamical`$ orientation to the confinement mechanism. While the basic logic and signature of confinement for non-abelian QCD remains the same as above for abelian QED, some extra ingredients of a highly technical nature are needed to bridge the gap. For not only the observable quantities now depend on the gauge parameter, but the 4-divergence of the gauge field is no longer a free field! To eliminate the gauge-dependence of physical entities, Faddeev-Popov proposed to average the path integral over the manifold of gauge transformations, resulting in a new term in the Lagrangian (Faddeev-Popov ghost), involving a pair of anticommuting scalar fields whose violation of the Pauli theorem on spin-statistics connection requires introduction of the indefinite metric, as in QED. However, the operator analog of the Lorentz condition is more tricky in this case. It is facilitated by a novel symmetry found by Becchi-Rouet-Stora (BRS) which was originally used for renormalizing QCD. Nishijima successfully exploited this symmetry to construct the requisite operator, and obtained a formal proof of confinement in the QCD case, as an extension of the logic employed for QED. A qualitative sketch of this proof appears in the Article by K Nishijima and M. Chaichian. ## 3 Field Theory: Topological Aspects An important sector of QFT that has come to occupy increasing importance in the last two decades, concerns its Topological aspects, as a powerful tool to probe the geometry and topology in $`low`$ dimensions. This illustrates rather vividly the coming together of physicists and mathematicians, this time in building powerful links between quantum theory (through its path integral formulation) on the one hand, and the geometry and topology of low dimensional manifolds on the other. Indeed it appears that the properties of low dimensional manifolds can be nicely unravelled by relating them to infinite dimensional field manifolds, thus providing a powerful tool for studying these manifolds. A unique characteristic of topological field theories is their independence of the metric of curved manifolds on which they are defined. This makes the expectation value of the energy-momentum tensor vanish. Since the only degrees of freedom are topological, there are no $`local`$ propagating degrees of freedom. The operators are also metric independent. These features are addressed in some detail in a self-contained introductory Article by Romesh Kaul on topological QFT regarded as a meeting ground for physicists and mathematicians. ### 3.1 CS Theory And Jones Polynomials Quantum $`YM`$ theories in $`(2+1)D`$ provide a field theoretic framework for the study of ”knots and links” in a given 3-manifold, and illustrate the interplay of QFT and the topology of low dimensional manifolds. A striking result of this connection is that the famous ”Jones Polynomials” of knot theory can be understood in 3D terms. This result was formally demonstrated by Edward Witten about a decade ago in a paper entitled ”Quantum Field Theory And The Jones Polynomial”, thus fulfilling a long-cherished goal of an exact (non-perturbative) solution of a gauge field theory, for the first time in 3 dimensions. 
Witten showed that the ”Jones polynomial can be generalized from $`S^3`$ to arbitrary 3-manifolds, giving invariants that are computable from a surgery presentation”. Witten further showed that these results shed new light on 2D conformal field theory. In view of the historical importance of this pioneering work in the context of this Book theme, we reproduce (with permission from Springer-Verlag) the celebrated Witten paper (which had appeared in Commun.Math.Phys.121 (1989) 351-399), in full. ### 3.2 Anomalies In QFT An interesting pathology of QFT which has rich topological overtones is the problem of $`anomalies`$ which originated in the famous $`ABJ`$ (1969) paper to resolve the problem of $`\pi ^0\to \gamma \gamma `$ decay whose hitherto standard explanation in terms of partial conservation of axial current $`(PCAC)`$ used to fall far short of experiment. The $`ABJ`$ paper finally resolved the issue by introducing an ”anomalous” amplitude proportional to $`F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }`$ in the $`PCAC`$ relation, whose interpretation brought into focus the pathology of symmetry breaking at the classical level through such ”anomalies” at the QFT level. Such ‘violation’ of gauge symmetry through ‘anomalies’ points to the need for their cancellation, which in turn constitutes an important constraint for physical gauge theories with $`chiral`$ coupling to fermions. In this respect, ”global chiral anomalies” play a key role in the understanding of physical effects associated with topologically non-trivial gauge-field configurations, via the celebrated Atiyah-Singer Theorem. This subject is briefly reviewed by Haridas Banerjee in this Book. ### 3.3 Coherent States In QFT Still another sector of QFT with topological (geometric) features, is the subject of Coherent States which has grown rapidly since its birth 36 years ago at the hands of Glauber and Sudarshan \[R.J.Glauber, Phys.Rev.130, 2529 (1963); E.C.G.Sudarshan, Phys.Rev. Lett.10, 277 (1963)\], although the basic idea dates back to the founder of Quantum Mechanics himself \[Erwin Schroedinger: Naturwissenschaften, 14, 644 (1926)\] in connection with the quantum states of a harmonic oscillator, i.e., almost immediately after the birth of quantum mechanics. Coherent States have 3 main properties: coherence, overcompleteness and intrinsic geometrization, all of which play a fundamental role in QFT. These include the calculation of physical processes involving infinite number of virtual particles; the derivation of functional integrals and various effective field theories; and last but not least, the exploration of the origins of topologically non-trivial gauge fields and the associated (gauge) degrees of freedom. All these topics are addressed systematically in a perspective, self-contained review by Wei-Min Zhang. ### 3.4 Pancharatnam-Bargmann-Berry Phase An outstanding example of a topological aspect in quantum mechanics (which may be termed ‘field theory with a finite number of degrees of freedom’), is provided by the existence of a ”geometric phase” in quantum theory which had remained obscured from public view until rather recently when M.Berry (1984) drew attention to it under the term ”quantum adiabatic anholonomy”. Historically, however, the existence of this pathology in physics had first been noted by S.Pancharatnam (1956) in the regime of classical polarization optics, but this important work had somehow gone by default. 
A similar fate befell a second attempt by V.Bargmann (1964) to resurrect this idea in the context of Wigner’s theorem on the representation of symmetry operators in quantum mechanics. It was only after the work of Berry that its full implications were appreciated within the physics community, but its connection with the Pancharatnam and Bargmann phases was left unattended. In an instructive Article, N Mukunda describes these developments in a proper perspective by emphasizing the mutual connections among these ideas. He also describes the subsequent developments to date, by relating these phases to the presence of a complex vector space and the effect of group action among them. He then goes on to show that the geometric phase is the simplest invariant expression under certain groups of transformation acting on curves in Hilbert space. ### 3.5 Skyrmion Model for Confinement A confinement mechanism with topological overtones is offered by the large $`N_c`$ limit of QCD which has played a crucial role in unifying its premises with a solitonic, hadron-based approach that is known as the Skyrme model, which was discovered by Skyrme (1961), just before quarks (1964) were born. Skyrme’s novelty was to provide a model in which the fundamental fields consisted only of pions, wherein the nucleon was obtained as a certain classical configuration of pion fields. The apparent contradiction of making Fermi fields out of Bose fields was resolved by demanding a non-zero ”winding number” for this (classical) field configuration, thus giving the ”Skyrmion” the status of a topological soliton, which is a solution of a classical field equation with localized energy density. On the face of it, the Skyrme scenario looked so different from the conventional picture of nucleons as a ‘white’ composite of 3 ‘colored’ quarks bound together by their interactions with $`SU(3)`$ gauge fields, that a reconciliation between the two pictures appeared rather remote. It turned out however that the Skyrme model could be a plausible approximation to the orthodox QCD picture, one in which a key role is played by the large $`N_c`$ limit of the latter. The logic goes roughly as follows. Despite the increasing strength of QCD at low energies, it is plausible that the pseudoscalar mesons as $`q\overline{q}`$ composites could still interact relatively weakly with each other, thus permitting the formulation of some effective Lagrangian for the pions, subject of course to the correct symmetries of the underlying gauge theory, which includes a (spontaneously broken) chiral $`SU(N_f)\times SU(N_f)`$ flavour $`(N_f)`$ symmetry that allows ‘massless’ pseudoscalars to co-exist with massive scalars. An effective Lagrangian on these lines may be obtained from ”a non-linear realization of chiral symmetry”, without the explicit appearance of scalars, a structure which has an uncanny resemblance to the very Lagrangian obtained by Skyrme (1961). How about the baryons in this QCD-motivated ”chiral perturbation theory” picture? It is here that ’t Hooft’s (1974) large $`N_c`$ limit comes into play, with the proportionality to $`N_c`$ for the baryon mass being the signal that the baryon state under study is a soliton of the effective meson theory initiated by Skyrme. 
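For orientation, the effective Lagrangian in question is usually quoted in the standard form (normalizations and sign conventions vary between authors) $$\mathcal{L}=\frac{f_\pi ^2}{4}\mathrm{Tr}\left(\partial _\mu U\,\partial ^\mu U^{\dagger }\right)+\frac{1}{32e^2}\mathrm{Tr}\left(\left[U^{\dagger }\partial _\mu U,U^{\dagger }\partial _\nu U\right]\left[U^{\dagger }\partial ^\mu U,U^{\dagger }\partial ^\nu U\right]\right),$$ where $`U`$ is the chiral field (an $`SU(2)`$ matrix for two flavours), $`f_\pi `$ the pion decay constant, and $`e`$ the dimensionless Skyrme parameter; the baryon number of a field configuration is precisely its winding number, $$B=\frac{1}{24\pi ^2}\epsilon ^{ijk}\int d^3x\,\mathrm{Tr}\left(U^{\dagger }\partial _iU\,U^{\dagger }\partial _jU\,U^{\dagger }\partial _kU\right).$$ The quartic (Skyrme) term is what stabilizes the soliton against collapse.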
In a perspective review of the Skyrme model approach, Joseph Schechter and Herbert Weigel trace its connection with QCD in the large $`N_c`$ limit, and discuss the properties of light baryons treated as solitons, within the framework of an effective Lagrangian of QCD containing only meson degrees of freedom. ## 4 Formal Methods In QFT: Selected Topics The universal language of QFT and its powerful techniques broke fresh ground through the establishment of the equivalence of its tenets with those of Statistical Mechanics which had traditionally been developed on entirely ‘classical’ lines. In the words of A.M.Tsvelik (QFT in CMP, Camb.Univ Press 1995), this equivalence may be succinctly expressed by the following statement: ” QFT of a $`D`$-dimensional system can be formulated as a statistical mechanics of a $`(D+1)`$-dimensional system. This equivalence …. allows one to get rid of non-commuting operators and to forget about time ordering, which seem to be the characteristic properties of quantum mechanics….”. The Path Integral formulation of QFT which is the key element in dispensing with the problem of non-commuting operators in QFT, has had a crucial role in bringing about this vital correspondence of QFT with the partition function in quantum statistical mechanics (QSM). Armed with the powerful techniques of Renormalization Group Theory (RGT), this new approach has opened up a whole vista of applications to new emerging areas like critical phenomena in condensed matter physics. ### 4.1 Unified View of QFT and QSM An important outcome of a unified view of QFT and Quantum Statistical Mechanics has been the emergence of two new areas: Euclidean Field Theory, and Finite Temperature Field Theory. Actually the origins of the former date back to the Fifties at the hands of Wick (”Wick rotation” for the Bethe-Salpeter equation) and Schwinger (as a possible direction for the evolution of QFT), wherein the transition from Minkowski to Euclidean space (via analytic continuation from real to imaginary ”time”) was perceived as a means of curing many ills in QFT, such as positivity and finiteness of norms in the computation of physical quantities. In more recent times, the Euclidean formulation of QFT has led to an interesting relationship between ”stochastic mechanics” (Nelson) and the Feynman-Kac formulae for Green’s functions expressed as path integrals. In a crisp Article in this Book, R.Ramanathan provides a formulation of QFT in Euclidean space-time, to bring out the basic ideas of the Euclidean formulation, as well as the above relationship between the Nelson and Feynman-Kac formulations. Finite Temperature Field Theory on the other hand, (in contrast to zero temperature for Euclidean QFT), provides access to a much wider class of complicated quantum mechanical systems, and addresses questions like thermal averages in QFT, symmetry restoration in theories with spontaneous symmetry breaking, and indeed the evolution of the universe at early times (from the high temperature phase). More recently, chiral symmetry-breaking phase transitions, especially the ”confinement-deconfinement” phase transitions in QCD leading to quark-gluon plasmas $`(QGP)`$, have acquired great interest in view of planned experiments on heavy ion collisions to detect $`(QGP)`$. A few selected topics in Finite Temperature Field Theory are treated in an informative Article by Ashoke Das in this Book. 
### 4.2 Integrable Systems: Toda FT

Although most approaches to QFT have been traditionally associated with linear partial differential equations (e.g., Schroedinger, Klein-Gordon, Dirac, Proca), non-linear equations (i.e., equations where the potential term is non-linear in the field $`\varphi `$) have also been known for some time. Among the earliest non-linear wave equations known in physics are the Liouville and Sine-Gordon equations. The Liouville equation in 2D arose in the context of a search for a manifold with constant curvature, something like covering the surface with a fishing net whose arc length is constant (knots do not move!), while the 'threads' in the net correspond to a local coordinate system on the surface. The "field" $`\varphi `$ in the Liouville equation is the phase space density $`\rho `$ satisfying the equation $`\partial _x\partial _y\rho =\mathrm{exp}\rho `$, where $`x,y`$ are the local orthogonal coordinates. The Sine-Gordon (SG) equation has a similar structure, with $`\mathrm{exp}`$ replaced by $`\mathrm{sin}`$ on the RHS. Variants of these equations, e.g., adding a 'mass' term $`m^2\varphi `$ on the LHS, and/or the hyperbolic replacement of $`\mathrm{sin}`$ by $`\mathrm{sinh}`$, etc., give rise to several more varieties of similar types. A third type of non-linear equation which has received much attention is the so-called KdV equation $`u_t-6uu_x+u_{xxx}=0`$, with interesting properties like an infinite number of conservation laws. The corresponding conserved quantities can be used as Hamiltonians for an integrable system (the KdV hierarchy). A striking feature of such non-linear equations is the infinite number of conserved quantities, which implies that the solutions of these systems are severely constrained. This results in such solutions being quite stable structures (solitons) which retain their shapes even after collisions. An interesting class of coupled non-linear equations was introduced by M. Toda (1967) to describe a 1D crystal with non-linear coupling between nearest neighbour atoms. These (lattice) models also admit soliton solutions which reduce to the KdV equation in the continuum limit. At the 'field' level, such models (with exponential 'potentials') simulate a general class of non-linear equations, called Toda Field Theory, which include the Liouville and Sine-Gordon equations as special cases. For the solution of these equations, a general method of "inverse scattering" was proposed by Gelfand-Levitan. The logic of this method is to convert, via a suitable transformation, the original non-linear equation to an equivalent linear equation, and study the evolution of the latter, more or less according to standard methods already developed for them (including group-theoretic, Lie-algebraic, etc. methods). The inverse scattering method paved the way to connections with other known models of QFT, such as conformally invariant FT and the Hamiltonian reduction of the Wess-Zumino-Witten model. Similarly the KdV equation is related to the 4D Yang-Mills theories, thus providing a connection of the latter with 2D integrable models. In an instructive, self-contained article on this subject, Bani Sodermark gives a perspective view of integrable systems with special reference to the Toda Lattice hierarchy, and reveals the connections of such non-linear field theories with other sectors of QFT.
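As a quick check on the soliton statement, the one-soliton solution of the KdV equation can be verified symbolically. The sketch below assumes the sign convention of the equation as written above; the soliton moves with speed $`c`$, and its amplitude is tied to its speed, a hallmark of these stable structures.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
# One-soliton solution of u_t - 6*u*u_x + u_xxx = 0 (sign convention as above):
u = -(c / 2) * sp.sech(sp.sqrt(c) * (x - c * t) / 2)**2

# Substitute into the equation and confirm the residual vanishes identically:
residual = sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(sp.expand(residual.rewrite(sp.exp))))   # -> 0
```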
### 4.3 Light-Front Dynamics

Dirac laid the foundations of QFT not only through his famous Equation, but also through at least two more seminal contributions, within a year's gap of each other: a) light-front (LF) quantization \[Rev. Mod. Phys. 21, 392 (1949)\]; b) constrained dynamics \[Can. J. Math. 2, 129 (1950)\]. In the former, he suggested that a relativistically invariant Hamiltonian theory can be based on different classes of initial surfaces: instant form ($`x_0=const`$); light-front (LF) form ($`x_0+x_3=0`$); hyperboloid form ($`x^2+a^2<0`$). The structure of the theory is strongly dependent on these 3 surface forms. In particular, the "LF form" remains invariant under $`7`$ generators of the Poincaré group, while the other two are invariant only under $`6`$ of them. Thus the LF form has the maximum number ($`7`$) of "kinematical" generators (their representations are independent of the dynamics of the system), leaving only 3 "hamiltonians" for the dynamics. Dirac's LF dynamics got a boost after Weinberg's discovery of the $`P_z=\infty `$ frame, which greatly simplified the structure of current algebra. The Bjorken scaling in deep inelastic scattering, supported by Feynman's parton picture, brought out the equivalence of LF dynamics with the $`P_z=\infty `$ frame. The LF language was developed systematically within the QFT framework by Kogut-Soper (1970), Leutwyler-Stern (1978), Srivastava (1998) and others. The time ordering in LF-QFT is in the variable $`\tau =x_0+x_3`$, instead of $`t=x_0`$ in the instant form. And despite certain technicalities, the LF dynamics often turns out to be simpler and more transparent than the instant form, without giving up on the net physical content. This is borne out from comparative studies: of spontaneous symmetry breaking on the LF; of degenerate vacua in certain $`(1+1)D`$ QFTs which are exactly soluble and renormalizable (e.g., the Schwinger model and its chiral version); of chiral boson theories; and of QCD in covariant gauges. Indeed, the LF quantization of QCD in the Hamiltonian form bids fair to be a viable alternative to lattice gauge theory for calculating non-perturbative quantities. Removal of constraints by the Dirac method gives fewer independent dynamical variables in the LF formalism than in the instant form; for this reason, LF variables have found applications even in String and $`M`$-theories. In an instructive self-contained review (with a rich collection of references), Prem Srivastava gives a detailed review of most of these topics in a leisurely and systematic manner, and leads the interested reader all the way to the frontier with several new results.

#### 4.3.1 $`2D`$ Field Theory

$`2D`$ models in QFT have also been of great interest in the contemporary literature. Such theories reveal some remarkable features, such as fermion-boson equivalence, which facilitates the solution of a fermion FT in terms of its bosonized version. This concept of bosonization in turn has been useful in the understanding of $`4D`$ phenomena that can be described by an effective $`2D`$ FT, such as the demonstration of quark confinement in exactly soluble $`2D`$ models \[Casher-Kogut-Susskind (1973)\]. Another important discovery in $`2D`$ FT concerns an "anomaly-generated" mass \[Jackiw-Rajaraman (1985)\] for the gauge boson in the Chiral Schwinger model. (This mechanism may be contrasted with the standard Higgs mechanism for generating the vector boson mass via spontaneous symmetry breaking).
The "anomaly" here stands for the loss of the conservation property due to quantum corrections involved in the quantization of the gauge theory. This disease in turn needs Dirac's second weapon for its cure: constrained dynamics. In a short perspective article in this Book, Dayashankar Kulshreshtha reviews the constrained dynamics and local gauge invariance of several $`2D`$ FT models, in both Instant and LF forms, and in so doing, brings out the detailed working of the BRST formalism as applied to such $`2D`$ models.

### 4.4 Constrained Dynamics

To recall the essential elements of a constrained dynamical system, which includes most systems of physical interest (e.g., QED, QCD, Electroweak and Gravity theories): it is characterized by an overdetermined set of coordinates. These are best kept track of within a Hamiltonian formulation, which has a natural place for all the coordinates (canonical and redundant), so that the complete set of constraints emerges easily. The nature of these constraints in turn is determined by the structure of the matrix of Poisson brackets (PB) of the constraints of the theory, which also carries the signature of whether or not the underlying theory is gauge invariant (GI). Thus if this PB matrix is singular, then the set of constraints is first class, and the theory is GI. On the other hand, if this matrix is non-singular, then the set of constraints of the theory is second class, and the theory is non-GI. (Indeed this is often taken as a criterion for distinguishing a GI from a non-GI system). These GI systems are then quantized under some appropriate gauge choices, or "gauge fixing" (GF). Now in the usual Hamiltonian formulations of a GI theory under some GFs, one necessarily destroys the gauge invariance, since the GF corresponds to converting the first class constraints to second class constraints. To quantize a GI theory while maintaining gauge invariance despite GF, one needs the more general BRST (1974) formulation, wherein the theory is rewritten as a quantum system with a generalized GI, called BRST invariance. This in turn requires enlarging the Hilbert space, and replacing the gauge transformation by a BRST transformation which involves the introduction of (anti-commuting) Faddeev-Popov ghost fields. This amounts to embedding the GI system into a BRST invariant system (but isomorphic to the former), whose unitarity is guaranteed by the conservation and nilpotency of the BRST charge. Thus the Dirac-Bergmann theory of Constraints \[Can. J. Math. 2, 129 (1950); Phys. Rev. 83, 1018 (1951)\] lies at the root of the (Hamiltonian) description of interactions in QFT based on Action principles which, due to the requirements of Lorentz, local gauge, (and/or diffeomorphism) invariances, must employ singular Lagrangians. This is generally adequate for the study of simple gauge theories (controlled by some Lie groups acting on some internal space in Minkowski space-time), via the covariant approach based on BRST symmetry which, at least for infinitesimal gauge transformations, allows a regularization and renormalization of the relevant theories within the local QFT framework. On the other hand, the gauge freedom of theories that are invariant under diffeomorphism groups of the underlying space-time (e.g., in general relativity or string theory) is encumbered by the arbitrariness for the observer in the "definitory properties" of space-time and/or the measuring apparatus \[see L. Lusanna, this Book\].
Such ambiguities affect bigger issues like: the understanding of finite gauge transformations; the Gribov ambiguity in the choice of function space for the fields; the proper definition of relativistic bound states vis-a-vis quark confinement; and last but not least, the conceptual and practical problems posed by gravity. These require a fresh look at the foundations of QFT to know if we: i) understand the physical degrees of freedom hidden behind gauge and/or general covariance; ii) can meaningfully reformulate the physics (both classical and quantum) in terms of them. Logically this would amount to abandoning local QFT for non-perturbative interactions, and a reformulation of relativistic theories to allow a natural coupling to Gravity. These and allied issues are addressed in a state-of-the-art review by Luca Lusanna, aimed at a unified reformulation of the 4 basic interactions in terms of Dirac-Bergmann observables, with emphasis on the open problems: mathematical, physical and interpretational.

## 5 Extension Of QFT Frontiers

A long term ambition of QFT has been the dream of unification of all the gauge fields with the Gravitational Field, whose quantization has all along posed a big challenge in its own right. \[A major difficulty in the way of unification of this sector with the other three, as was once succinctly put by Abdus Salam, lies in the "spin mismatch" of their respective fields (vector vs tensor), which would militate against a common strategy\]. Nevertheless such a unification was to come about from an entirely new paradigm which envisaged extension of the original tenets of Field Theory based on a point particle description to one with Strings. In this Section we offer a panoramic view of some major theoretical developments from seemingly unrelated angles, which, apart from their impact on Physics in their own right, have provided some key ingredients converging towards the emergence of modern String Theory. These developments, which may be termed Supersymmetry (SUSY), Conformal Field Theory (CFT), and Duality, are outlined next.

### 5.1 SUSY In Field Theory

In its march towards Unification, Field Theory has continued to break new ground in several directions. An important step in Unification was marked by the discovery of Supersymmetry (SUSY), introduced in the early seventies by a galaxy of authors in the context of 2D QFT (Gervais-Sakita) as well as in 4D QFT (Golfand, Likhtman, Akulov, Volkov, Wess and Zumino), for a unified understanding of the two known forms of matter, bosons (integral spins) and fermions (half-integral spins), hitherto regarded as two distinct field types, with commuting and anticommuting properties respectively. The new symmetry between bosons and fermions may be incorporated within the definition of a single "Superfield", with transformations inter-relating the two constituents, so that SUSY becomes a part of the space-time symmetry implied by relativistic invariance. The Gauge principle too admits of a corresponding extension to unify both these sectors. What are the motivations for such a lavish extension of space-time symmetry? Apart from its aesthetic appeal, there are some theoretical considerations of a more concrete nature, which are dwelt on in this Book through two complementary reviews of SUSY in Field Theory (with special reference to Particle Physics) by two leading experts in the field: Rabi Mohapatra and Norisuke Sakai respectively.
According to Sakai, the most important motivation for SUSY is the gauge hierarchy problem, showing up via the vastly different mass scales of the electroweak ($`M_W`$) vs the "$`GUT`$-theoretic" ($`M_G`$): $`M_W^2/M_G^2\simeq 10^{-28}`$. A similar gap exists between the "GUT" vs Planck (gravity) mass scales: $`M_G^2/M_{pl}^2\simeq 10^{-6}`$. To account for this phenomenon, it is necessary to invoke a suitable Symmetry reason, which may be precisely formulated by the so-called "naturalness" hypothesis ('t Hooft 1979), which demands that a system acquire a higher symmetry as a certain (small) parameter goes to zero; e.g., chiral symmetry occurs when a (small) fermion mass goes to zero, or a local gauge symmetry corresponds to the vanishing of a vector boson mass. Now the mass scale $`M_W`$ of the weak bosons arises from the vacuum expectation value $`\langle \varphi \rangle _0\equiv v\ne 0`$, related to the mass $`M_H`$ of the Higgs scalar field $`\varphi `$. So to regard the gauge hierarchy problem as the result of some symmetry breaking, we must give a Symmetry reason to make the Higgs scalar mass vanishingly small. Classically a vanishing scalar mass corresponds to a symmetry called scale invariance, which however cannot be maintained quantum mechanically. In a perspective review on this subject, Norisuke Sakai argues for "Supersymmetry" between the Higgs scalar and a spinor partner as a good option: chiral symmetry gives zero mass to the latter, while SUSY makes the former massless (through a cancellation of the respective contributions to the self-energy loops). In a complementary perspective review on the same subject, Rabi Mohapatra stresses the versatility of SUSY as a tool for understanding many unsolved problems of physics: a) improvement in the singularity structure of local fields for understanding the disparate scales of Nature (e.g., Electroweak vs Gravity); b) the possibility of unifying Gravity with the other forces by making SUSY local instead of global; c) prospects of understanding non-perturbative properties of field theories, hitherto considered 'impossible' in non-SUSY form. As to the manifestations of SUSY in the real world, this "Bose-Fermi" symmetry is supposed to be badly broken, so that any search for superpartners (bosons vs 'bosinos'; fermions vs 's-fermions') has so far yielded zero dividends. On the other hand, the formulation of Supersymmetry in non-relativistic quantum mechanics is relatively free from constraints. Indeed, ever since Schroedinger (1940) noticed the existence of well-defined "supersymmetric partners" for the energy levels of a given quantum mechanical system, many applications to such systems (including nuclear and condensed matter physics) have kept pace with the rapid strides of SUSY in field theory in recent years. Indeed, the existence of SUSY partners in the energy levels of (appropriately chosen) even vs odd nuclei has been systematically established by group theoretic methods (interacting boson models, etc.). Similarly, in solid state physics, an interesting correspondence has been observed between the critical behaviour of a 'spin system' in random magnetic fields in $`d`$ dimensions, and that of the spin system without the random magnetic field in $`d-2`$ dimensions. This "dimensional reduction" may be traced to an underlying SUSY for the spin system in random magnetic fields (see N. Sakai, this Book).
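Schroedinger's "supersymmetric partners" are easy to exhibit numerically. In the standard SUSY QM construction, a superpotential $`W(x)`$ generates partner potentials $`V_\pm =W^2\pm W^{}`$ whose spectra coincide except for the ground state. The sketch below takes the illustrative choice $`W(x)=x`$ (so $`V_{}=x^2-1`$, $`V_+=x^2+1`$) and diagonalizes both Hamiltonians on a grid; the grid size and box length are arbitrary numerical choices.

```python
import numpy as np

# SUSY QM partner potentials V_minus = x^2 - 1 and V_plus = x^2 + 1,
# from the superpotential W(x) = x (units with hbar = 2m = 1).
N, L = 1500, 12.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

def spectrum(V):
    # H = -d^2/dx^2 + V(x), finite differences with hard walls at +-L
    H = (np.diag(2.0 / h**2 + V)
         - np.diag(np.ones(N - 1) / h**2, 1)
         - np.diag(np.ones(N - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:5]

print(spectrum(x**2 - 1.0))   # ~ [0, 2, 4, 6, 8]
print(spectrum(x**2 + 1.0))   # ~ [2, 4, 6, 8, 10]
```

The output shows the advertised pattern: the two towers of levels match one-to-one, with only the zero-energy ground state of the first Hamiltonian left unpaired.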
In the absence of a discovery of SUSY partners in Field Theory, the benefits from SUSY have so far been purely theoretical, varying from a reduction of the degrees of divergence arising from various loop integrals in standard field theory (by at least two orders), to a heavy reduction in the number of dimensions (from 26 to 10) needed for self-consistency in a string theoretic formulation. The Articles by Mohapatra and Sakai between them provide quite a complementary description of the SUSY formalism in QFT, together with a glimpse of the recent developments. And apart from its applications in particle physics, this formalism also serves as a background to the vast field of supersymmetric string theory.

### 5.2 CFT

An independent insight into the origin of String Theory comes from the role of Conformal Field Theory (CFT), viz., conformally invariant QFT in 2 D(imensions), not only as a vital ingredient of its anatomy, but also with a firm hold on other disciplines like condensed matter physics. The CFT route to the evolution of String Theory is sketched in this Book as part of a bigger (historical) survey by Werner Nahm, tracing a whole sequence of developments in QFT right from its (Dirac) beginning, and encompassing in the process several other areas of physics on which CFT has had a decisive impact. In this saga, the interplay of physical intuition and mathematical rigour has brought together the practitioners of these respective disciplines, though not necessarily working in tandem. On the one hand, the beauty and transparency of CFT have made for a rich variety of intellectual exercises in abstract mathematics (with new emerging areas like automorphic groups, Kähler-Einstein metrics, etc.), and on the other, facilitated the study of intensely practical physical systems such as continuous phase transitions in condensed matter physics. The impact of CFT on string theory has had its origin in several theoretical developments: the Thirring model in 2D; Skyrme's idea of the equivalence between Fermions and Bosons; Coleman's equivalence theorem on the Thirring Model versus the Sine-Gordon equation (despite their apparent dissimilarity); and the role of conformal invariance in the structure of Wilson's Renormalization Group equations. To recall the essentials of Conformal invariance, this symmetry is satisfied in the absence of any 'scale' dimension. Examples are Maxwell's Equations in free space and the Dirac equation for massless fermions, both of which satisfy conformal invariance. The 2D Thirring model, which may be regarded as a basic ingredient of string theory, also has this property due to the absence of a scale dimension. Using this mathematical picture, the string may be regarded as a 1D object in space spanning a world sheet (a Riemann surface), a 2D surface embedded in space-time, where a point on the string is represented by $`X^\mu (\sigma ,\tau )`$; $`\sigma `$, $`\tau `$ being the 2 world sheet coordinates. The impact of CFT has been no less impressive in the domain of condensed matter physics (CMP), where there exists a rich class of QFTs exhibiting the structure of conformally invariant fields, such as in 2D surface coatings. Thus at a critical temperature $`(T_c)`$, the long range fluctuations of arbitrary scales make irrelevant the details of molecular structure, and the theory approaches a continuum limit, with no visible scale dimension to keep track of.
Indeed in this limit, the correlation functions behave like the Euclidean $`n`$-point functions of standard QFT with conformal invariance properties. Nahm discusses an interesting correspondence between the Ising model in CMP and the Thirring model in QFT. The equation satisfied by the spin waves of the Ising model is formally identical to the 2D Dirac equation for massless fermions. Indeed, condensed matter physics provides a more stable and economical background for testing these ideas than the expensive HEP laboratories!

### 5.3 String Theory Via Duality

Perhaps the most startling "revolution" in Physics to date which had its origin in QFT has been String Theory, and its successive "Avatars" (incarnations), aimed at unifying all the forces of Nature (from the orthodox gauge theories of strong, e.m. and weak interactions, all the way to gravity). An orthodox route to its evolution may be attributed to the strong interaction problem in QFT, which has had wide ramifications from vastly different angles, each providing an independent insight into its mysteries. A very promising approach to strong interactions came from the Duality Principle, which has had a long history (perhaps traceable to the Bootstrap hypothesis), based on the equivalence of the direct channel (resonances) and crossed channel Regge poles with a universal slope $`\alpha ^{}\simeq 1\mathrm{GeV}^{-2}`$. An explicit realization of this idea was achieved via the Veneziano representation for $`4`$-point amplitudes satisfying the requirements of duality and crossing symmetry, which was soon generalized to $`N`$-point amplitudes satisfying the same properties. Through a path integral representation of such amplitudes, Nambu, Nielsen and Susskind recognized that these amplitudes describe a 1D (string-like) object moving in space, with the inverse of the universal Regge slope identified with the "string" tension $`T`$. The "string" interpretation was further reinforced by a subsequent representation due to Virasoro, with very similar properties. And its promise of relevance to particle physics (despite stiff competition from QCD!) got a boost from the Scherk-Schwarz (1974) observation that such a "string theory" could serve as a candidate for incorporating gravity in its ambit, on the ground that the massless spin-2 particle appears naturally in the closed string spectrum. To that end the string tension $`T`$ needed to be increased by $`19`$ orders of magnitude (up to the Planck scale!) to qualify for a viable theory of gravity. The conceptual gap was finally bridged by the seminal work of Green-Schwarz (1984), who succeeded in constructing a consistent $`10D`$ super Yang-Mills theory coupled to supergravity which is free from anomalies only for certain gauge groups ($`SO(32)`$ or $`E_8\times E_8`$). This work, perhaps for the first time, showed real prospects for unifying the fundamental forces. String Theory has grown by leaps and bounds during the past decade, and its vast ramifications have generated so formidable a literature that minimal justice to it would itself require several volumes of review. Nevertheless, after a short overview of the subject by the Master, John Schwarz, a panoramic account of the major developments in this exciting field (together with an exhaustive set of references) is given in a perspective Article by Jnanadeva Maharana. Schwarz views the different superstring theories (and an extension called $`M`$-theory) as different facets of a unique underlying theory going beyond ordinary QFTs.
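Returning for a moment to the Veneziano amplitude mentioned above: its duality and crossing properties are easy to exhibit numerically. The sketch below evaluates $`A(s,t)=\mathrm{\Gamma }(-\alpha (s))\mathrm{\Gamma }(-\alpha (t))/\mathrm{\Gamma }(-\alpha (s)-\alpha (t))`$ for a linear trajectory; the intercept and slope values are illustrative only.

```python
from math import gamma

def alpha(x, a0=0.5, ap=1.0):
    """Linear Regge trajectory alpha(x) = a0 + ap*x (illustrative values)."""
    return a0 + ap * x

def A(s, t):
    """Veneziano 4-point amplitude B(-alpha(s), -alpha(t))."""
    return gamma(-alpha(s)) * gamma(-alpha(t)) / gamma(-alpha(s) - alpha(t))

print(A(2.3, -0.7), A(-0.7, 2.3))   # equal: crossing symmetry under s <-> t
print(A(-0.5 + 1e-6, -0.7))         # huge: a pole where alpha(s) -> 0, i.e. a
                                    # direct-channel resonance
```

The same function thus contains both the direct-channel resonances (as poles at non-negative integer values of the trajectory) and the crossed-channel Regge behaviour, which is the duality property referred to above.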
However, recent duality conjectures suggest that a more complete definition of these superstring theories may come from the large $`N`$ limits of suitably chosen $`U(N)`$ gauge theories (see L. Bonora below). The Maharana Article leads the interested reader through several stages of its development, from i) perturbative aspects of $`ST`$; successively through ii) Duality Symmetries as a characteristic of String Theories (ST); iii) $`M`$-theory as a unified view of the five perturbatively consistent STs; iv) a microscopic understanding of Black Holes, and so on, all the way to the frontiers of the field. Attempting to cover the later stages of development in this rapidly growing field, Loriano Bonora reviews some advances in the study of the relation between Yang-Mills $`(YM)`$ theory and strings, based on the classical $`YM`$-theory solutions (Riemannian instantons), which are 2D solutions describing Riemann surfaces in the strong coupling limit. Strictly, such relations historically date back all the way to 't Hooft ('74) through his famous $`1/N_c`$ expansion for large $`N_c`$, wherein the dominant Feynman amplitudes correspond to the 2D Riemann surfaces. This 'natural connection' with strings was subsequently upgraded to a concrete shape via studies of 2D QCD (for string-like properties), which was further generalized to a connection between conformal super-$`YM`$ and super-string theory of type $`IIB`$, in the large $`N_c`$ limit. The Bonora Article reveals, among other things, a direct link between String Theory and non-abelian $`YM`$ theory, through the emergence in the latter of classical solutions modelled over Riemann surfaces, leading to a "string" interpretation. Historically, this came about only after the proposal of Matrix Theory, which in the large $`N_c`$ limit converges to the (non-perturbative) $`M`$-Theory.

## 6 CS Field Theory And Condensed Matter Physics

While the dominant concern in Field Theory has been in the traditional domain of particle physics, its powerful language and techniques have found profitable employment over a much wider domain, which comprises topics in Condensed Matter Physics, and newly emerging fields like the Quantum Hall Effect, fractional statistics and Anyons. These phenomena lend themselves to QFT treatment in $`(2+1)`$ dimensions, where the celebrated "Chern-Simons" $`(CS)`$ term plays a key role (see also Part B on Topological Field Theories). What are the special features of QFT in $`(2+1)D`$, and what specific role does the $`CS`$ term play in this reduced space-time continuum? Perhaps the most striking feature is the appearance of fractional statistics! For, whereas in 3 (or higher) space dimensions, all particles must either be bosons (integral spin) or fermions (half-integral spin), in 2 space dimensions, the particles can have any fractional spin/statistics with impunity! Such particles are called Anyons. Now since the usual spin-statistics relation follows from the premises of the standard 4D relativistic QFT, it is natural to ask if Anyons can be understood from the corresponding 3D QFT. The question goes beyond mere academic interest, since lower dimensions can be effectively realized in the physical world through the "freezing" out of certain degrees of freedom (e.g., in a strongly confined potential, or at low enough temperatures), so that these 'quasi-particles' may well exhibit anyon-like properties.
And indeed experiments on the Quantum Hall Effect (QHE) have revealed the existence of fractionally charged excitations (thus implying anyons). A critical discussion on the question of anyons and fractional statistics in $`(2+1)`$ dimensions, with particular reference to the role of the Chern-Simons $`(CS)`$ term in 3D QFT, is given by Avinash Khare in a perspective Article on the subject in this Book. To that end, Khare clarifies the definition of "quantum statistics", which relates to the "phase" picked up by a wave function when two identical particles are adiabatically exchanged, as distinct from the usual definition of permutation symmetry for two identical particles. \[While both definitions coincide for 3 and higher dimensions, they differ in 2 dimensions\]. He then discusses in detail the main properties of the $`CS`$ term, especially its role as a gauge field mass term, in whose presence anyons can appear in one of two different ways: i) as a soliton of the corresponding QFT; or ii) as fundamental quanta carrying fractional statistics. So far, the state of the art is based on non-relativistic QFT, wherein the $`CS`$ term provides an effective cushion against a non-local formulation of anyon fields, thus facilitating a 'local' formulation. However a full-fledged relativistic QFT formulation is not yet feasible. Perhaps the most tangible success from $`CS`$ fields so far is a natural understanding of the Quantum Hall (QH) Effect. A state-of-the-art review by R. Rajaraman puts this topical subject in perspective. We summarize some essential features of a QH system from his own account. A QH system, defined as "quasi 2D layers of electrons trapped in the interface of semi-conductors, at very high magnetic fields and very low temperatures", has revealed many remarkable features. Particularly interesting is the presence of certain states characterized by the so-called "filling fractions" $`(\nu )`$, which are either integers or certain odd-denominator fractions; $`\nu =hc\overline{\rho }/eB`$, where $`\overline{\rho }`$ is the mean electron density, and $`B`$ the applied field. The special states corresponding to these $`\nu `$-values show extremely flat plateaus in the Hall conductivity, which (in units of $`e^2/h`$) is exactly equal to these values to within an accuracy of $`1`$ part in $`10^7`$! These features are very universal, inasmuch as the details of the material seem irrelevant. It was earlier recognized that the electrons in these QH states form an incompressible fluid, described by "Laughlin wave functions" (which are reminiscent of Jastrow-type correlations in nuclear wave functions). A more analytical study of these empirical functions suggested a Landau-Ginzburg type scenario for the QHE in terms of an order parameter field (subsequently to be identified with a Chern-Simons field), thus formally bringing this subject within a 3D QFT network. The analogy of the order parameter field in the QHE to that obtaining in superconductivity of the Landau-Ginzburg description is of course not a literal one, since there are no bosonic Cooper pairs in the QHE. Indeed in this 3D QFT scenario, the "anyons" (Chern-Simons fields) have an intermediate status between bosons and fermions.
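The interpolation between Bose and Fermi statistics can be made concrete with a one-line phase count. The sketch below assumes the standard flux-attachment convention, in which each flux quantum bound to a unit charge adds $`\pi `$ to the adiabatic exchange phase of the composite.

```python
import cmath, math

# Exchange phase of an electron (statistics angle pi) carrying k attached
# flux quanta: theta = pi*(1 + k).  Odd k flips fermion -> boson.
for k in (0, 1, 2, 3):
    phase = cmath.exp(1j * math.pi * (1 + k))
    kind = "boson" if abs(phase - 1) < 1e-12 else "fermion"
    print(f"electron + {k} flux quanta: exchange phase {phase.real:+.0f} -> {kind}")
```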
However for the special case when the anyon angle is an odd multiple of $`\pi `$, a composite of the electron with an odd number of flux tubes effectively amounts to constructing a "bosonic" analogue of Cooper pairs out of "fermions", which now provides the desired order-parameter (CS) field operating in the plateau of the QH system. Rajaraman reviews a formal QFT procedure for constructing such CS gauge fields, as well as the formulation of their dynamics at the 3D QFT level. As to the connection of the CS gauge fields at the first quantized level, these are of course expressible in terms of the "phase angles" involved in the exchange of electrons in an $`N`$-electron wave function in 2D (see also Khare in this Book).

## 7 QCD-Motivated Strategies For Strong Interactions

Turning now to the strong interaction problem in the standard field theoretic picture, its prime candidate, QCD, has since its birth been beset with problems of reliable calculational techniques to deliver results. An introductory overview of several approaches \[symmetries, effective Lagrangians and Wilson expansions\] to deduce hadron properties from QCD is sketched in the Article by Olivier Pene, aimed at establishing a link between perturbative and non-perturbative QCD via lattice methods. We now go into more specific details of a few principal QCD-based methods.

### 7.1 QCD Sum Rules

To recall, the main signatures of QCD, which it shares with any non-abelian gauge theory, are expressed by a two-fold pattern: i) decreasing coupling strength at shorter distances (Asymptotic Freedom); and ii) increasing coupling strength at longer distances (confinement). The former is fairly well understood, and provides a perturbative basis for calculating QCD effects in high energy processes. In particular, the powerful method of "QCD Sum Rules", based on Wilson's Operator Product Expansion $`(OPE)`$, was developed by Shifman-Vainshtein-Zakharov for the study of non-perturbative QCD in a large variety of applications, from hadronic masses (with two-point functions), to coupling constants and form factors (with three-point functions), and reactions (four-point functions). The basic philosophy is one of a duality between two ways of representing a correlator: i) the $`OPE`$ with various "twist" terms (vacuum condensates, treated as free parameters of the theory) representing successive non-perturbative corrections to an otherwise perturbative expansion; ii) a dispersion formula saturated by certain low-lying hadron resonances. Equating the two amounts to evaluating hadronic parameters in terms of the quark-level condensates. Despite certain conceptual problems of "microscopic causality" encountered in the "matching" of the two sides of the equation, this method (QCD-SR) has proved very popular among a wide class of high energy phenomenologists, and has been continually refined over the years. A leisurely review of the state of the QCD-SR art on the quark structure of hadrons, as well as its working on the problem of hadrons in nuclear matter (at finite temperature), is given by Leonard Kisslinger in this Book.

### 7.2 Non-Perturbative Methods With QCD Features

The state of the art in this field is so diffuse that a more organized exposition is needed for such methods.
To that end, the attempts at addressing the strong interaction problem in QCD may be divided into two broad categories: i) soluble models designed to shed light on its general features through exact calculations; and ii) effective Lagrangian methods for 4-fermion interactions, somewhat reminiscent of the Bethe "Second Principle" Theory of effective nucleon-nucleon interactions of the Fifties. Srivastava, as well as Kulshreshtha, in Part C of this Book, have already provided a flavour of the results to be expected from type (i) theories, using the method of LF-QFT. Type (ii), which deals with more realistic situations, albeit at the cost of some phenomenology, has a much wider literature to choose from. To do a semblance of justice to this field, this Book includes two articles of this type, reviewing the methodology and working of such QFT-based approaches. The first one, by Vladimir Karmanov, gives an in-depth review of covariant light-front (LF) dynamics, with applications to field theory and relativistic wave functions. The formalism is effectively 3D in content, which can be obtained by projecting the (4D) Bethe-Salpeter amplitudes on the light-front plane, and although a reversal of steps is not possible to reconstruct the 4D BS amplitude, the LF formalism still represents a powerful alternative for solving QFT problems. Karmanov also discusses some typical applications.

#### 7.2.1 Markov-Yukawa Transversality Principle

The second article, by Asoke Mitra, offers a comparative view of the state of the art in several QFT approaches based on effective 4-fermion interactions (including QCD features), of both 3D and 4D types (Tamm-Dancoff, Bethe-Salpeter, Salpeter, quasi-potentials, light-front). In this context, attention is focussed on an important but somewhat less known principle called "Markov-Yukawa Transversality" ($`MYTP`$), which decrees that the interaction between the two (quark) constituents be transverse to the composite (hadron) 4-momentum, by virtue of which the BSE kernel has an effective (albeit covariant) 3D support. As a result of this "Covariant Instantaneity", the starting 4D BSE is exactly reducible to a 3D form, and conversely the steps can be reversed so as to allow an exact reconstruction of the original 4D BSE in terms of 3D ingredients! Thus $`MYTP`$ allows an exact interlinkage between the 3D and 4D BSE forms, so that both forms can be used interchangeably, unlike most other approaches in the literature, which employ either a 4D or a 3D form of the BS dynamics, but not both simultaneously. It might be of some historical interest to note that the Salpeter equation has a 3D structure stemming from its (instantaneous) kernel with a 3D support, and therefore its original 4D form can be recovered a la $`MYTP`$ by reversing the steps, but this possibility had never been explored. This gap is now filled by $`MYTP`$, which provides a formally covariant basis for the instantaneous approximation. The same principle ($`MYTP`$) can also be generalized from covariant instantaneity to the covariant light-front. A fall-out of the 3D-4D interlinkage provided by $`MYTP`$ is that it gives a two-tier description: the 3D form for the hadron spectra, which are $`O(3)`$-like; and the 4D form to address the transition amplitudes as 4D loop integrals using standard (4D) Feynman rules.
This Principle can be easily incorporated in the usual framework of coupled Bethe-Salpeter and Schwinger-Dyson equations (BSE-SDE) stemming from a (chirally invariant) 4-fermion Lagrangian with current quarks interacting via the full gluon propagator, so that the quark mass is acquired via the NJL mechanism. And the generalization from covariant instantaneity to the covariant light-front helps remove certain problems of Lorentz mismatch of vertex functions that arise in a 4D loop integral under the covariant instantaneity ansatz. These and other details are reviewed in the article by Mitra, which also stresses a parallelism of treatment of $`q\overline{q}`$ and $`qqq`$ systems.

### 7.3 The Harmonic Oscillator: A Powerful Bridge In QFT

No amount of literature on the impact of QFT in Physics would be complete without an exposure of the role of the Harmonic Oscillator (HO) in shaping Quantum Theory, as an integral part of this Book's theme. It was therefore a matter of great satisfaction when Marcos Moshinsky, who may be regarded as the "Father of the Harmonic Oscillator in Physics", agreed to contribute a perspective article on the HO theme. The only obstacle against a regular format for his Article was that he had only recently written a comprehensive book on the subject \[M. Moshinsky and Yu. F. Smirnov, The Harmonic Oscillator In Modern Physics (Harwood Academic Press, the Netherlands, 1996)\]. Nevertheless in his Article, he has provided a comprehensive list of contents of his HO book, which already offers a glimpse of the depth and range of physical problems (from the simplest quantum mechanical ones to the $`n`$-body Relativistic Oscillator) that are amenable to the amazing powers of HO techniques in tandem with the standard methods of Group Theory. In addition he has reviewed some recent work of his on relativistic particles of arbitrary spin in a confining HO potential, with applications to Spectroscopy.

## 8 Conclusion: Foundations Of Quantum Theory

We conclude this Introduction to the Book with an Article by Dipankar Home on the modern perspectives on the foundations of quantum mechanics (the predecessor of QFT), which are increasingly being scrutinized by relating them to precise experimental studies. In this Article, Home picks two main issues: i) the quantum measurement problem; ii) quantum non-locality, for a detailed exposition in a theory vs experiment scenario. He concludes with a quotation from John Bell: "It seems to me possible that the continuing anxiety about what quantum mechanics means or entails will lead to still more tricky experiments which will eventually find some soft spot." Translated to the QFT level, this looks like an appropriate conclusion for this Book as well.
no-problem/9911/astro-ph9911458.html
ar5iv
text
# Is Nova Sco 1994 (GRO 1655-40) a Relic of a GRB?

## Introduction

In this note we consider only the long term GRBs, of duration from several seconds up to several minutes. This is just the dynamical time of a He star, which we consider as progenitor. The formation of black holes in single stars of ZAMS masses $`20-35M_{\odot }`$ was proposed by Brown, Lee, & Bethe (1999). However, these do not lose their envelopes except in binaries. This latter case has been studied by Brown et al. (1999), who evolve the transient sources in this way. These have mostly low mass main sequence companions, although in two cases the companions are subgiants. In many more cases, binaries of $`7M_{\odot }`$ black holes with companions of up to nearly the ZAMS mass of the black hole progenitors are predicted. These companions do not, however, fill their Roche Lobes, and consequently are not observed. Nonetheless, the Wolf-Rayet progenitors of the $`7M_{\odot }`$ black holes in these binaries offer a set of progenitors for GRBs. They are already rotating to some degree because of the companion star. Note that in the Brown, Lee, & Bethe (1999) scenario, their envelopes are removed only following He core burning, in the supergiant stage, so there is only a short time left in their evolution for them to lose He by wind. Thus, a substantial amount of He should be left in the W.-R. star, although most of it will have been burned to carbon and oxygen (Woosley, Langer, & Weaver 1993), and the explosion we describe below would be of Type Ib. The conclusion of the Bethe & Brown (1999) paper was that in order for Wolf-Rayets followed by high-mass black holes like that in Cyg X-1 to be formed in single stars, a ZAMS mass of $`\gtrsim 80M_{\odot }`$ was necessary. This was based on the calculation of Woosley, Langer, & Weaver (1993), who used too large a mass loss rate for the He winds. These stars have been reevolved by Wellstein & Langer (1999) with lower mass loss rates, but the evolution has not been carried beyond the carbon-oxygen core stage, so we do not yet know how much the lower winds will decrease the mass limit for evolving into a high-mass black hole. The carbon-oxygen cores still have about $`33\%`$ central carbon abundance, so they clearly will not skip the convective carbon burning stage and therefore may well end up as low mass compact objects. These high mass Wolf-Rayet stars are the progenitors of GRBs in Woosley's Collapsar model (MacFadyen & Woosley 1999). In addition to these and the high-mass black holes in the transient sources, the coalescence of the low-mass black holes in the Bethe & Brown (1998) scenario of compact binary evolution with the companion He star (Fryer & Woosley 1996) offers another type of generator for the long term GRBs. All three of the above possibilities involve a He star being accreted by a black hole. In this process the black hole will be spun up. The energy can be extracted by the Blandford-Znajek (BZ) 1977 process, as in Lee, Wijers, & Brown (1999). The BZ power supplied into the disc will halt the inflow, and later propel the matter outwards (Brown, Lee, Lee, & Bethe 2000) in a Type Ib supernova explosion.

## Energetics of GRBs

The maximum energy that can be extracted from the BZ mechanism (Lee, Wijers, & Brown 1999) is $`E_{max}=0.09M_{\mathrm{BH}}c^2`$. For a $`7M_{\odot }`$ black hole, such as is found in Nova Sco 1994, $`E_{max}\simeq 1.1\times 10^{54}\mathrm{ergs}`$. The black hole is first formed with a mass of at least $`1.5M_{\odot }`$ (Brown & Bethe 1994).
The maximum energy, after these corrections, is still an order of magnitude greater than the $`3\times 10^{52}`$ ergs used by Iwamoto et al. in the supernova explosion. Presumably the explosion will take place before the BZ mechanism can deliver its full energy, leaving the black hole with substantial spin energy. Without beaming, the estimate of the energy in the jet of the GRB (Anderson et al. 1999) is $`E_{990123}=4.5\times 10^{54}\mathrm{ergs}`$. The BZ scenario entails substantial beaming, so this energy should be multiplied by $`d\mathrm{\Omega }/4\pi `$, which may be a small factor $`\sim 0.01`$. The BZ power can be delivered at a maximum rate of

$`P_{\mathrm{BZ}}=6.7\times 10^{50}\left({\displaystyle \frac{B}{10^{15}\mathrm{G}}}\right)^2\left({\displaystyle \frac{M_{\mathrm{BH}}}{M_{\odot }}}\right)^2\mathrm{erg}\mathrm{s}^{-1}.`$ (1)

## Is Nova Sco 1994 a Relic of a GRB?

Several characteristics of Nova Sco 1994 (GRO 1655-40) can be understood if it is a relic of a GRB. First of all, the high space velocity of $`150\pm 19`$ km/s can be understood if a supernova explosion is associated with black hole formation (Brandt et al. 1995, Nelemans et al. 1999). In our scenario, first a GRB is initiated by the BZ mechanism, following which a Type Ib supernova explosion is begun by the energy deposited in the fat accretion disc. Following Brandt et al. (1995) we note that a binary symmetric in the frame of the exploding star will be asymmetric in the center of mass of the binary. The amount of mass that can be ejected is constrained by the fact that if more than half of the total initial mass is ejected, the system will become unbound. Brandt et al. consider collapse to a $`4M_{\odot }`$ black hole; then at the time of collapse the collapsing He star must have a mass of $`9M_{\odot }`$ or greater. In fact, the initial black hole needs to be no more than $`1.5M_{\odot }`$ according to Brown & Bethe (1994), but one would expect a substantial amount of the carbon-oxygen core to collapse with the Fe, and also substantial fallback of the original carbon-oxygen core, probably burned to Fe in the explosion. In order to obtain a $`7M_{\odot }`$ black hole, the initial He envelope would have to be $`\sim 15M_{\odot }`$, corresponding to a ZAMS mass of $`35-40M_{\odot }`$. Israelian et al. (1999) find a large overabundance of oxygen, magnesium, silicon and sulphur in the F3-F8 IV/III companion star of $`1.6-3.1M_{\odot }`$ orbiting the black hole. These are just the elements copiously produced in a Type Ib supernova explosion. Contrary to Iwamoto et al. (1998), who need $`0.7M_{\odot }`$ of $`{}^{56}\mathrm{Fe}`$ to reproduce the brightness of SN 1998bw, Israelian et al. find no enhancement in the Fe. In our scenario we expect the jet of the GRB preceding the supernova explosion to go along the rotation axis of the black hole, and the supernova explosion to be initiated perpendicular to this in the accretion disc. The highly nonequilibrium processes in the jet would not initially affect the supernova, but might be expected to excite the He lines in later stages where the expanding supernova interacts with the jet. There are indications that the black hole in Nova Sco 1994 is spinning rapidly (Sobczak et al. 1999, Gruzinov 1999). We would expect the BZ central engine to stop delivering energy following the supernova explosion, which disrupts the magnetic fields. In Appendix C of Brown, Lee, Wijers, & Bethe (1999) the observed GRB rate at the present time is estimated to be $`\sim 0.1`$ GEM (Galactic Events per Megayear). With a factor of 100 for beaming, this would require 10 GEM.
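It is useful to collect the numbers quoted in this and the preceding section. The sketch below evaluates Eq. (1) for the Nova Sco-like parameters used in the text ($`M_{\mathrm{BH}}=7M_{\odot }`$, $`B=10^{15}`$ G) and the time needed to extract $`E_{max}`$; the solar rest-mass energy is rounded.

```python
# Blandford-Znajek power from Eq. (1) and the time to deliver E_max = 0.09*M*c^2.
Msun_c2 = 1.8e54          # erg, rest-mass energy of one solar mass (rounded)
M, B15 = 7.0, 1.0         # black-hole mass in Msun; field in units of 10^15 G

P = 6.7e50 * B15**2 * M**2          # erg/s, Eq. (1)
E_max = 0.09 * M * Msun_c2          # erg
print(f"P_BZ  = {P:.1e} erg/s")     # ~3e52 erg/s
print(f"E_max = {E_max:.1e} erg")   # ~1.1e54 erg
print(f"t     = {E_max / P:.0f} s") # ~30 s
```

The extraction time of a few tens of seconds falls comfortably in the long-GRB range, consistent with the He-star dynamical time invoked in the Introduction.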
Brown, Lee, & Bethe (1999) estimate the birth rate for visible transient sources in the Galaxy to be 8.8 GeM. However, including the high-mass black-hole binaries with companions which have not evolved to their Roche Lobes, and therefore would not be visible, they arrive at a 25 times higher number; namely 220 GEM. (Inclusion of the “silent” binaries effectively removes the $`q`$, the ratio of masses, in the calculation. The more massive companions are all inside of their Roche Lobes, except for the two subgiants in the systems V404 Cyg and XN Sco94.) Also, there are the other possible GRB progenitors, Collapsars and coalescing black hole and He star, mentioned above. Thus, there must be other severe criteria for GRBs; e.g., high magnetic fields in the rotating He envelopes, etc. In evolution of the transient sources the hydrogen envelope of the massive star must be taken off only following He core burning, if collapse into a high-mass black hole is to be obtained (Brown, Lee, & Bethe 1999). In this rather last stage the companion star would try to spin the hydrogen envelope up towards corotation in the binary before expelling it. With the large viscosity from magnetic turbulance assumed in the Spruit & Phinney (1998) argument, the He core would be carried along, but probably in differential rotation because the common envelope time is very short, $`1`$ year. Similar considerations follow for the Fryer & Woosley (1998) model of coalescence of black hole with companion He star. The common envelope evolution here need not happen as late as in the transient source scenario, but on the other hand the predicted merger rate is high, 380 GEM (Brown, Lee, Wijers, & Bethe 1999). ## Discussion We suggested Nova Sco 1994 (GRO 1655-40) as a relic of a GRB and Type Ib SN explosion. Our model begins from a black hole in a He star. It is assumed that in the hypercritical accretion, the disc has a high magnetic field. The black hole is spun up and the GRB, powered by the Blandford-Znajek mechanism, is driven along the nearly matter-free axis of rotation of the black hole. With high viscosity such as follows from magnetic turbulence, the swallowing of the He star by the black hole takes a dynamical time of the star. The BZ mechanism stops once the accretion disc disappears and the magnetic field disperses, leaving the black hole with some spin energy. We point out that power roughly equal to that driving the GRB will be delivered into the disc made up out of hyperaccreting helium. The delivered energy first brings the accreting matter to rest, then drives it backwards through the accretion disc (and to the sides of it) in a Type Ib supernova explosion. From consideration of energetics, both the GRB and SN explosion can be powered by several times $`10^{53}`$ ergs, if we take the black hole to have mass $`7M_{}`$ typical of those in the transient sources, but the GRB may well take off and the explosion begin before maximum energy is delivered. In a more detailed paper (Brown, Lee, Lee, & Bethe, 2000), we show that hypercritical accretion onto a black hole in the middle of a self-similar accretion disc of Narayan & Yi (1994) type will spin the black hole up to $`\text{ }>90\%`$ of maximum, so that $`10^{54}`$ ergs is available. 
Running the plasma current through a circuit (Thorne 1986) around the black hole and through the accretion disc, we argue that roughly equal fractions of this energy are available to power the GRB and the Type Ib supernova, the former by energy delivered in the loading area up the rotation axis of the black hole and the latter in the accretion disc. In a more detailed paper (Lee, Brown, & Wijers 1999) we show how the plasma can pass through magnetosonic points, etc.

## ACKNOWLEDGMENTS

We would like to thank Ralph Wijers and Stan Woosley for useful discussions. This work is supported partially by the U.S. Department of Energy Grant No. DE-FG02-88ER40388. HKL is supported also in part by KOSEF Grant No. 985-0200-001-2 and BSRI 98-2441.
no-problem/9911/astro-ph9911191.html
ar5iv
text
# Exploring Halo Substructure with Giant Stars: II. Mapping the Extended Structure of the Carina Dwarf Spheroidal Galaxy

## 1 Introduction

It is becoming increasingly clear that dwarf spheroidal galaxies are not the simple systems they were once thought to be. A great deal of recent work has exposed the complicated star formation histories in many of these small systems in the Local Group (see summaries by Grebel 1997, 1998, Mateo 1998). That a system like Carina, which has experienced at least three major episodes of star formation in its lifetime (Smecker-Hane et al. 1994, 1996, Mighell 1997, Hurley-Keller et al. 1998), can retain gas to fuel repeated bursts appears to contradict old notions of these low luminosity systems having fragile, fluffy mass potentials (see, e.g., Dekel & Silk 1986, Burkert & Ruiz-Lapuente 1997, MacLow & Ferrara 1999). Indeed, dynamical studies of the mass-to-light, $`M/L`$, ratios of most of the Galactic dwarf spheroidals (e.g., Aaronson 1983, Seitzer & Frogel 1985, Aaronson & Olszewski 1987, Pryor & Kormendy 1990, Mateo et al. 1993, Suntzeff et al. 1993, Hargreaves et al. 1994, Vogt et al. 1995, Olszewski et al. 1996, Mateo et al. 1998b) imply large values, approaching $`M/L\sim 100`$ or more in the systems with the least total luminosity: Draco, Sextans and Ursa Minor (Olszewski, Aaronson & Hill 1995 and references therein, Irwin & Hatzidimitriou 1995, IH95 hereafter; Mateo 1998). Clearly dwarf spheroidal galaxies are not just larger versions of globular clusters, which have typical $`M/L\sim 1-2`$, but are of a very different structural character. The structural difference between globular clusters and dwarf spheroidals has often been attributed to a large dark matter content in the latter. However, the large dark matter interpretation of the large velocity dispersions observed in dwarf spheroidals is subject to various uncertainties and is still debated. There are assumptions (see Vogt et al. 1995, Piatek & Pryor 1995, IH95, Kleyna et al. 1999) incorporated into typical $`M/L`$ determinations that remain to be verified, including the assumption of isotropically distributed stellar orbits, that mass follows the distribution of light in these systems, and even that normal Newtonian gravity applies.<sup>1</sup>Milgrom (1983a,b) has proposed a form of Modified Newtonian Dynamics (MOND) that serves to reduce the implied $`M/L`$'s of bound stellar systems. Moreover, unresolved binaries inflate derived velocity dispersions to some degree that may or may not affect $`M/L`$ ratios significantly (Suntzeff et al. 1993, Olszewski, Pryor & Armandroff 1996, Hargreaves, Gilmore & Annan 1996). Perhaps the greatest uncertainty in the dark matter interpretation of the velocity dispersion data is that of a virial equilibrium state for the dwarf spheroidals. The notion of dwarf galaxies in virial equilibrium has been questioned by, e.g., Kuhn & Miller (1989), who attributed the high velocity dispersions to orbital resonance "heating" of the stars in satellites, with a resulting inflation of the internal velocities. While this particular model has been controversial (Pryor 1999), the idea that passage of the satellites near massive objects like the Galactic center or dark matter clumps in the halo (Kroupa 1999) can affect the internal dynamics and outer structure of these dwarf galaxies has a long history.
For example, Hodge & Michie (1969) concluded that Galactic tides could have an important effect on the outer structure of dwarf spheroidals and suggested that Ursa Minor is presently in the throes of total disruption. Still, it is not clear to what extent tidal disruption can perturb the inferred dark matter content. For example, it is hard to understand how tides could be affecting such distant dwarfs as Leo I and Leo II, for which relatively large ($`10`$), $`M/L`$ are found (Vogt et al. 1995). In a numerical modeling study of the phenomenon of tidally induced, large velocity dispersions in dwarf spheroidals, Piatek & Pryor (1995) concluded that Galactic tides cannot account for extraordinarily large $`M/L`$, though an inflation of $`M/L`$ to about 40 was possible. Johnston, Sigurdsson & Hernquist (1999b) similarly find little influence on the core velocity dispersions for tidally disrupting systems. The main influence of tides on the dynamics of the dwarfs is not to inflate central velocity dispersions, but rather to produce large ordered motions that would resemble apparent systemic rotations. Indeed, such shearing motions have been observed in the Ursa Minor and Draco systems by Hargreaves et al. (1994). However, a contrasting point of view comes out of high resolution, N-body studies by Klessen & Kroupa (1998) of the dynamical evolution of satellite galaxies in Milky Way-like gravitational potentials followed until well after the disruption. Klessen & Kroupa find that the debris of large satellite galaxies on orbits of eccentricities greater than 0.41 and undergoing severe tidal disruption eventually converge into stable remnants of about 1% the original satellite mass. When viewed along certain lines of sight, these remnants have properties very similar to dwarf spheroidal galaxies, including, most interestingly, velocity dispersions leading to inferred high $`M/L`$’s in spite of the fact that the remnants do not contain dark matter. The notion that some of the Milky Way dwarf spheroidals may be tidal remnants has been discussed for several decades because of the seemingly non-random alignments of the satellites around the Milky Way (Kunkel 1979, Lynden-Bell 1982, Majewski 1994, Lynden-Bell & Lynden-Bell 1995, Palma, Majewski & Johnston 1999). While Klessen & Kroupa do not predict what the density profiles of their tidal remnants would look like, their scenario predicts that their dSphs should exhibit ordered radial velocity gradients across the galaxy, similar to those discussed above. These issues have been confused by the discovery around the majority of the Milky Way dwarf spheroidals of “breaks” in the starcount profiles where the character of the counts changes from a steeply falling King profile to a much more gradual decline with radius (Eskridge 1988a,b; IH95). Such features are seen in simulations in which a satellite (with mass tracing the light) is being stripped by the Milky Way’s tidal field (Oh, Lin & Aarseth 1995, Piatek & Pryor 1995, Johnston et al. 1999b), and it is tempting to attribute such breaks in the radial profile to the onset of an outer population of escaping stars.<sup>2</sup><sup>2</sup>2 It is worth noting one interesting exception to this attribution. In an earlier study of the extratidal phenomenon with van Agt’s (1978) sample of extratidal stars around Sculptor, Innanen & Papp (1979) concluded that stars outside the tidal radius could still be bound if on retrograde orbits about the satellite. 
Stars on such orbits can resist tidal stripping by the Milky Way.) On the other hand, arguments have been made that the existence of tidal tails is evidence against the presence of significant dark matter halos in dwarf spheroidals (Moore 1996, Burkert 1997). If the satellite contained sufficient dark matter, then the point at which tidal effects became important would lie well beyond the “tidal radius” found from King model fits to the luminous matter. As Moore (1996) concludes: “…even modest amounts of dark matter will be very effective at containing the visible stars and halting the production of tidal tails.”

Whatever the solution to these contradictory and confusing issues, it is certain that more observational handles on the problem would help clear the theoretical hurdles. For example, if the outer populations of the dwarf spheroidals could be mapped well past the break radius, it may become evident whether they evolve into obvious tidal tails. Measuring the velocities of stars beyond the break would also provide a great advantage in discriminating between models. For example, evidence for shearing motions, as described above, would support tidal tail models, while isotropic velocity dispersions would support the notion that the beyond-the-break populations are bound. Both mapping dwarf galaxies to large radii and obtaining velocities of well separated but associated stars are among the goals of the present program, which has the overall aim of employing various strategies to uncover tidal debris from disrupting satellite galaxies. In this contribution we present the first results of a search for widely extended, beyond-break-radius stars associated with nearby dwarf galaxies, with a focus on a survey around the Carina dwarf spheroidal.

Carina was discovered from UK Schmidt plates by Cannon, Hawarden, & Tritton (1977). Demers, Beland, & Kunkel (1983) determined the following structural parameters (from star counts off of prints of a CTIO 4-m plate) out to a radius of $`\sim 38`$ arcmin along the semimajor axis: an ellipticity of 0.4 at a position angle of 75°, and a tidal radius of 33 arcmin. IH95 examined Carina’s structure via star counts out to a radius of $`\sim 40`$ arcmin using APM scans of Schmidt plates and found similar structural parameters: an ellipticity of 0.33 at a position angle of 65°, and a tidal radius of 28 arcmin. Most significantly, IH95 found a clear break in the radial starcount profile and suggested the existence of an apparent extratidal population. Kuhn, Smith & Hawley (1996) noted a spatially extended RR Lyrae distribution for Carina (though all of their RR Lyrae were interior to the IH95 tidal radius), and with a more sophisticated starcount analysis gave even more compelling evidence for an extratidal Carina population extending some four tidal radii along the major axis, twice as far out as measured by IH95. From theoretical considerations of the observed structural parameters of Galactic dwarfs in IH95, Johnston et al. (1999b) identified Carina as one of the Galactic dwarf spheroidals likely to have among the highest current fractional destruction rates. Given these various encouraging indicators, Carina is a promising first candidate to test our search technique for true tidal debris among the Galactic dwarf spheroidals. We have reported elsewhere (Majewski 1999, Majewski et al. 1999c) preliminary results from our search for tidal debris around the Magellanic Clouds.
Our overall strategy for finding extratidal stars differs from previous efforts in that we specifically target giant stars associated with the dwarf galaxies. This allows us to cover large areas of the sky efficiently with small telescopes, as we do not require deep imaging: only the top several magnitudes of the red giant branch (RGB) of each galaxy are sought. The giant stars are identified photometrically using the three-filter, Washington $`M,T_2+DDO51`$ system described in Majewski et al. (1999b; “Paper I” hereafter). While perhaps more prone to small number statistics because of more restricted sample sizes, our technique confers certain advantages over the deep imaging, “CMD-differencing” strategies employed by, for example, IH95, Kuhn et al. (1996), Grillmair et al. (1995; see also Grillmair 1998), and others, that depend on uncovering statistical excesses of starcounts (either total starcounts or counts in particular regions of color-magnitude space). Studies of the latter type are especially sensitive to the zero-point level of, and variations in, the stellar starcount background (see, e.g., the discussions on this point in IH95). On the other hand, our goal is to pinpoint actual extratidal candidates individually (not statistically) by their signal as giant stars with properties expected for giants associated with the dwarf galaxy. By weeding out the overwhelming foreground curtain of dwarf stars, our approach is not only less susceptible to background subtraction problems and thereby capable of probing to larger radii more easily, but we also generate candidate lists of bona fide extratidal giants that are bright enough for spectroscopic verification and study. Thus, our approach fulfills the above-stated strategies of mapping to large radii as well as providing good candidates for spectroscopy to do dynamical tests.

A campaign of targeted searches for extratidal stars and tidal tails around Galactic dwarf galaxies is of interest for numerous reasons. First, it is important to understand the extended structure of the dwarf spheroidals as leverage on the dark matter issues outlined above. Indeed, because we identify specific “extratidal” targets suitable for radial velocity measurement, we hope to be able to test directly whether they are bound to the dwarfs or not on the basis of the differences in the expected dynamical signatures. If unbound, we may then proceed to test the various models (e.g., Oh et al. 1995, Piatek & Pryor 1995, Hamlin 1997, Johnston 1998, Johnston et al. 1999b) of dwarf spheroidal disruption that make specific predictions of the velocity characteristics of these stars. Second, the discovery of substantial tidal tails associated with Galactic satellites would provide the opportunity to measure the shape and size of the Galactic mass potential with unprecedented accuracy (Johnston et al. 1999c). An advantage conferred by our survey approach is that we can identify actual tidally-stripped stars that are viable candidates (i.e., bright enough) for proper motion measurement via the Space Interferometry Mission; a sample of some 100 such stars along a tidal tail with fully measured space velocities (to 10 km s<sup>-1</sup> accuracy) can yield a measure of the mass of the Milky Way to a few percent accuracy in the region that the satellite’s orbit explores. Hence, with several such tidal tails we could map the shape and size of the Milky Way potential in detail.
Finally, the possibility of ongoing destruction of globular clusters and satellite galaxies has great bearing on the understanding of the formation and evolution of our own Milky Way. It is of interest to know what contribution is made to the Galactic halo from the destruction of satellites and accretion of their remains. The substantial ongoing contribution to the Milky Way of stellar and cluster debris from the Sagittarius dwarf galaxy, with a tidal tail now mapped to some 40° from the galaxy core (Mateo, Olszewski & Morrison 1998a; see also Majewski et al. 1999d, Johnston et al. 1999a), is likely not peculiar to the present epoch.

## 2 Photometry

The observations obtained for this project were accumulated over several observing runs at the Las Campanas Observatory, when small blocks of observing time were available (due to airmass, twilight, and weather considerations) during other programs. We include data from both the Swope 1-m (C40) and du Pont 2.5-m (C100) telescopes. Observations on the C40 were made with the same SITE#1 CCD and filters as employed in Paper I. On the C40, this CCD gives a field of view 23.8 arcmin on a side. Data were taken during grey or bright time on the nights of UT 10 March 1999 and 28 April to 3 May 1999. Data on the former run were photometric, while the CCD fields observed during the latter run were not photometric. In most cases, CCD fields overlapped with neighbors so that color/magnitude consistency could be checked. For each C40 field, exposures of 120, 120, and 1200 seconds were taken in the Washington $`M`$, $`T_2`$ and $`DDO51`$ filters, respectively. All frames were reduced with the stand-alone version of DAOPHOT II (Stetson 1992), which produces point-spread-function (PSF) fitting photometry. Figure 1 shows the distribution of all detected stars in celestial coordinates; the lines show the boundaries of the various CCD frames (solid lines show frames taken during photometric conditions and dashed lines show frames taken during non-photometric conditions). The density of detected stars at different points in Figure 1 is a function of the relative contribution of Carina, the relative proximity to the Galactic plane (Carina is at a Galactic latitude of $`b=-22^{\circ}`$), the inclusion of both C40 and (deeper) C100 data, and the degree of cloudiness and seeing, which affects the limiting magnitudes of the C40 data.

The PSF-fit magnitude measures were calibrated against Geisler (1990) standards. For the data taken during photometric conditions, photometric transformation equations including airmass, color terms and nightly zero-point terms were determined. We followed the calibration procedures described in Majewski et al. (1994), using a similar matrix inversion algorithm (Harris, Fitzgerald & Reed 1981). The resultant transformation equations were applied to all of the photometric frames. A comparison of instrumental magnitudes in the CCD frames taken during cloudy weather to the fully transformed magnitudes on the photometric frames allowed derivation of frame-by-frame color and magnitude offset terms for the former data; thus the CCD data taken during non-photometric conditions were locked into the system of the calibrated photometric magnitudes. Figure 1 shows the geometry of photometric and bootstrapped fields. Note that the photometric frames were located in the center field and in a ring of fields separated from the center. Non-photometric frames overlapping multiple photometric frames were matched to all simultaneously.
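To make the calibration step concrete, the following sketch solves transformation equations of the standard form $`m_{std}=m_{inst}+a_0+a_1X+a_2C`$ (zero point, extinction and color term) by linear least squares. It is an illustration only: the variable names are invented, and a real solution (e.g., the SKAWDPHOT implementation of the Harris, Fitzgerald & Reed 1981 scheme used here) solves all nights simultaneously, with nightly zero points and outlier rejection.

```python
import numpy as np

def solve_transformation(m_inst, m_std, airmass, color):
    """Fit m_std - m_inst = a0 + a1*airmass + a2*color by least squares.

    m_inst  : instrumental magnitudes of standard stars in one filter
    m_std   : catalogued standard magnitudes (Geisler 1990 system)
    airmass : airmass of each observation
    color   : a standard color index, e.g. M - T2
    Returns (a0, a1, a2): zero point, extinction and color coefficients.
    """
    m_inst, m_std, airmass, color = map(np.asarray, (m_inst, m_std, airmass, color))
    A = np.column_stack([np.ones_like(m_inst), airmass, color])
    coeffs, *_ = np.linalg.lstsq(A, m_std - m_inst, rcond=None)
    return coeffs

def apply_transformation(m_inst, airmass, color, coeffs):
    """Place program-star instrumental magnitudes on the standard system."""
    a0, a1, a2 = coeffs
    return m_inst + a0 + a1 * airmass + a2 * color
```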
When final calibrations of all frames were achieved, a comparison of the derived magnitudes for stars in all overlapping regions between CCD frames showed no major discrepancies: the mean frame-to-frame offsets were typically of order 0.01 magnitudes. Nevertheless, for our final catalogues we adopted, whenever possible, the magnitude measures for multiply photometered stars from the photometric frames rather than the non-photometric frames.

The du Pont 2.5-m observations were made on the night of UT 11 March 1999 with the new Wide Field Camera (WFC) system built by Ray Weymann and collaborators. The WFC delivers a useful circular field of view some 23 arcmin in diameter. Four fields were observed: one centered on the Carina core, two located 50 arcmin out along the major axis and outside the tidal radius (adopting the structural parameters found by IH95), and one along the minor axis at the same distance. The locations of these frames are shown by the circular fields in Figure 1. During the early operation of the WFC, a residual misalignment of the field flattening lens with respect to the instrument axis shifted the center of symmetry of optimal focus slightly away from the center of the CCD frame. This left the edge of one quadrant with image profiles deteriorated severely enough that PSF-fitting photometry produced unsatisfactory results there, even after allowing for PSF variation with a quadratic dependence on position in the CCD frame. We decided to optimize our DAOPHOT solutions to give good results over a large fraction of the CCD field and sacrifice the ability to work with the bad portion of the image, rather than trying to salvage the bad part of the field at the expense of less than satisfactory PSF-fitting over the entire field. Thus, as may be seen in Figure 1, the C100 data show a decline in the number of detections to the upper left of each circular field. While this compromise means we lose stars from our survey, there should be no preference for losing giants compared to dwarfs.

While the C100 data were taken during photometric conditions, limited access meant there was no opportunity to obtain corresponding calibration frames. Fortunately, however, several of the C100 frames overlap with the photometrically calibrated C40 grid, and from these overlaps transformation equations could be derived for the C100 data. These transformations were applied to all C100 frames, whether they overlapped the C40 data or not. Note that in the case where a star was photometered on more than one set of CCD frames, a weighted average of the magnitudes from the different frames was taken.

The photometric errors in the C40 and C100 data as a function of magnitude for each filter are shown in Figure 2a for the C40 data and Figure 2b for the C100 data. It can be seen that the C40 data, in particular, show a wide range in quality. We remove from further consideration all detected objects with non-stellar image profiles. These were identified by deriving the running mean (in a 50 star “boxcar” filter) of the DAOPHOT II $`\chi `$ and sharp parameters as a function of magnitude and rejecting $`3\sigma `$ outliers from this mean for the C40 data. However, because of the problems with the image quality on the C100 frames, we took a more conservative rejection limit of $`2.3\sigma `$. In addition, at this point we exclude all stars that have magnitude errors in any filter that are larger than 0.1 magnitudes. The latter cut is effectively one in magnitude (Figure 2) for each CCD field.
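As an illustration of this image-shape cut, a minimal Python sketch of the running-boxcar rejection is given below. The DAOPHOT II $`\chi `$ and sharp arrays are assumed inputs, and the handling of the window at the ends of the magnitude-sorted list is a simplification.

```python
import numpy as np

def stellar_mask(mag, shape_param, box=50, nsigma=3.0):
    """Keep objects whose DAOPHOT shape parameter (chi or sharp) lies
    within nsigma running standard deviations of the running boxcar
    mean, evaluated as a function of magnitude.  Inputs are numpy
    arrays of equal length."""
    order = np.argsort(mag)
    p = np.asarray(shape_param)[order]
    keep = np.ones(p.size, dtype=bool)
    half = box // 2
    for i in range(p.size):
        lo, hi = max(0, i - half), min(p.size, i + half + 1)
        window = p[lo:hi]
        mu, sig = window.mean(), window.std()
        if sig > 0 and abs(p[i] - mu) > nsigma * sig:
            keep[i] = False
    mask = np.empty_like(keep)
    mask[order] = keep      # map decisions back to the input order
    return mask

# e.g. for the C40 data (3.0 sigma) versus the C100 data (2.3 sigma):
# good = (stellar_mask(M, chi) & stellar_mask(M, sharp)
#         & (err_M < 0.1) & (err_T2 < 0.1) & (err_DDO51 < 0.1))
```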
The $`(M-T_2,M)_0`$ color-magnitude diagram (CMD) for the C40 data for the area shown in Figure 1 is shown in Figure 3a; the total CMD for the C100 data, which probe some 2 magnitudes deeper, is shown in Figure 3b. Each star has been corrected for reddening based on its celestial coordinates and a comparison to the Schlegel et al. (1998) reddening maps. The precision of the photometry is typically about 0.04 magnitudes at $`M=19`$ for the C40 data (but with a wide spread about this depending on the particular frame – see discussion of the relative magnitude limits below), and 0.03 magnitudes at $`M=19`$ for the C100 data. The C100 data go deeper than the Carina horizontal branch (HB) at $`M\sim 20.5`$, while the Carina red clump is just barely detected in the C40 data. Figures 3c and 3d show the CMDs for the stars actually used in our survey, after application of the selection criteria described above.

## 3 Identification of Carina Giant Star Candidates

Our strategy for identifying likely extratidal giant stars associated with Carina tidal debris involves the application of two basic criteria: (1) stars must have magnesium line/band strengths consistent with those for giant stars with the abundance of Carina, and (2) stars must have combinations of surface temperatures and apparent magnitude consistent with the red giant branch of Carina. We apply these criteria in succession:

### 3.1 Giant Star Discrimination in the Two-Color Diagram

Paper I describes the method by which dwarf/giant separation can be achieved through the three-filter imaging technique employed here. The basis of our technique lies in the sensitivity of the $`DDO51`$ filter to the MgH band (bandhead at 5211 Å) and the Mg b triplet near 5150 Å (McClure 1976). These magnesium features are sensitive to stellar surface gravity (primarily) and temperature and abundance (secondarily) in late-type stars. When combined with the wideband $`M`$ and $`T_2`$ filters of the Washington system, the $`DDO51`$ filter is especially useful for discriminating giant stars from foreground dwarfs on the basis of differences in their respective $`M-DDO51`$ colors at a given $`M-T_2`$ color. The former color measures the magnesium line/band strength (where $`M`$ acts as a suitable “continuum” measure for comparison to $`DDO51`$; Geisler 1984), while $`M-T_2`$ is sensitive primarily to stellar surface temperature (and is almost a linear scaling of $`V-I`$; Paper I). With this photometric system, it is possible to isolate giant stars at the distance of Carina with great efficiency using small telescopes, even in the bright, moonlit skies we had for the Carina observations here.

In Figure 4 we show the dereddened two-color diagram for both our C40 and C100 data, after pruning the sample with the error and image shape criteria described above. Figure 4 shows the characteristic “elbow-shaped” locus of dwarf stars (Paper I), which typically have the largest magnesium absorption at any given temperature. The region enclosed by the box drawn with thick solid lines is the general area in the two-color diagram inhabited by evolved, cool stars more metal-poor than \[Fe/H\]$`\sim -0.5`$ (Paper I). The curved loci of giant stars of different abundances, as determined from the synthetic photometry of Paltoglou & Bell (1994) and presented in Paper I, are overlaid for comparison. We note that Carina is established to have \[Fe/H\]$`=-1.99`$ with a dispersion of 0.25 dex, based on spectroscopic observations of 52 giants by Smecker-Hane et al. (1999).
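In code, this first cut amounts to a set of boundary tests in the dereddened $`(M-T_2,M-DDO51)`$ plane. The sketch below is schematic: the published giant region is defined graphically in Figure 4, so the numerical boundaries here are placeholders chosen only to illustrate the geometry (color limits plus a diagonal lower boundary roughly parallel to the dwarf locus).

```python
import numpy as np

def select_giant_candidates(mt2_0, mddo51_0):
    """Boolean mask for stars inside a two-color giant selection region.

    mt2_0    : dereddened M - T2 color (surface temperature proxy)
    mddo51_0 : dereddened M - DDO51 color (Mg absorption proxy; dwarfs,
               with stronger Mg features, lie at lower M - DDO51)
    All numeric boundaries are illustrative placeholders, not the
    published Figure 4 region.
    """
    mt2_0 = np.asarray(mt2_0)
    mddo51_0 = np.asarray(mddo51_0)
    red_enough = mt2_0 > 0.9            # keep only cool stars
    not_too_red = mt2_0 < 2.5
    # diagonal boundary roughly parallel to the near-solar dwarf locus,
    # offset by about +0.1 mag in M - DDO51:
    above_dwarf_locus = mddo51_0 > -0.15 * mt2_0 + 0.30
    below_ceiling = mddo51_0 < 0.55
    return red_enough & not_too_red & above_dwarf_locus & below_ceiling
```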
The diagonal, blue boundary of the “giant region” we have selected here is a somewhat conservative compromise to produce relatively uncontaminated giant candidate samples, while not sacrificing too many lower luminosity, bluer giants: the line is approximately parallel to the center of the near-solar metallicity dwarf locus, but offset by about +0.1 mag in ($`M-DDO51`$) to account for typical magnitude errors at the faint end of the data sets. We now consider only stars in this delimited giant region as our first selection for metal-poor giants in our survey fields. As will become evident below, this giant star “bounding box” selects not only Carina stars on the RGB, but also Carina red clump stars.

While the goal of our three-filter photometry is to cast exclusively for giant stars associated with Carina, our selection criterion in Figure 4 will also bring some contaminants into our net. Obviously, we will catch any giants with metallicities approximately like that of Carina. There will also be some dwarf contamination due to several-$`\sigma `$ photometric errors scattering dwarfs into our giant star selection box. Finally, as may be seen from the lines in Figure 4, metal-poor subdwarfs with \[Fe/H\] $`\lesssim -2.5`$ also get pulled into our net. We expect the number of the latter type of stars to be quite small, based on the very small fraction of halo stars with metallicities this poor. From Reid & Majewski (1993), the number of halo stars expected down to $`V=20`$ over 2.2 deg<sup>2</sup> is about 590. However, only a small fraction ($`\sim 10\%`$) of these halo stars will be cool enough (spectral type K and later) to have an $`(M-T_2)_0`$ color which would place them in the giant bounding box, and only about 8% of these would be expected to have metallicities as low as \[Fe/H\] $`\lesssim -2.5`$, according to Beers (1999) and Norris (1999). This leaves an expected level of contamination of $`\sim 5`$ metal-poor subdwarfs ($`590\times 0.10\times 0.08\approx 5`$) in our entire survey area. We conclude that the vast majority of brighter non-Carina stars we select with only a color-color criterion will be random field giants, while at faint magnitudes we will pick up some dwarfs with extreme photometric errors. Eventually, either of these two types of contaminants will be readily identifiable through spectroscopy by their radial velocities (field giants) or line strengths (photometric error dwarfs).

In another part of our halo observational program, we have done a search for tidal stellar debris from the Magellanic Clouds (Majewski et al. 1999a; see Majewski et al. 1999c). That part of our program includes both a photometric search for giants, as we have done here, and follow-up spectroscopy of the giant candidates. It is worth pointing out that in our Magellanic Cloud survey, which has photometric material identical to that used here, a very high fraction of our giant candidates prove to have spectroscopic line strengths and velocities like those of metal-poor halo giants. We expect similar, or better, success rates here, since in the Magellanic Cloud work we used only aperture photometry, not PSF-fitting photometry as we used here, and in that other work we also allowed a more liberal selection (a larger “giant box”) in the two-color diagram. As a final check on the quality of the dwarf/giant discrimination with our photometric technique, we consider the sample of 23 candidate Carina RGB stars observed spectroscopically by Mateo et al. (1993).
They chose these stars to lie near the top of the Carina RGB in the $`(B-V,V)`$ CMD, and found that 17 of the candidates had radial velocities consistent with Carina membership, while six had heliocentric velocities clustered around 0 km s<sup>-1</sup>, consistent with their being foreground dwarfs (see their Figures 3 and 4). All of these stars were photometered by us, and we find that all 17 of the Carina RGB members (from Mateo et al.) are clearly giants in our two-color diagram (Figure 5, filled circles), while all six of the “foreground” stars of Mateo et al. are clearly dwarfs (open circles). We note that even though our photometry for several of the stars observed by Mateo et al. had errors that were too large to keep them in our formal sample, the colors of these stars still lie in the proper part of the color-color diagram, as seen in Figure 5. This comparison gives confidence that with our technique we can easily separate metal-poor giants from the typical foreground dwarf to produce very “clean” lists of RGB candidate stars suitable for followup spectroscopy.

### 3.2 The Color-Magnitude Locus of the Carina Red Giant Branch

We now use our own photometry of Carina itself to establish the expected location of associated evolved stars in the color-magnitude diagram. Figure 6 shows the color-magnitude distribution of all stars selected as giants in Figure 4, but within 10 arcmin (roughly the core radius) of the center of Carina for both the C40 and C100 data. Note that we measured the center of Carina from our own data by fitting marginal distributions, but, as our determined center agreed to within 1 arcmin of the IH95 determination, for consistent comparisons we adopted the IH95 Carina center for all calculations from here on. We apply the 10 arcmin radial cut here to ensure that we obtain as pure a sample of bound Carina stars as possible for defining the Carina RGB region in the CMD. As can be seen in Figure 6, the selection of “giant star candidates” by the color-color technique seems to do a reasonable job of isolating a relatively “clean” sample of evolved stars: very few stars fall outside the general region dominated by the Carina RGB in the CMD. Those stars falling away from the Carina RGB may either be dwarf stars that failed our giant discrimination due to photometric error or intrinsic properties (\[Fe/H\] $`\lesssim -2.5`$ dwarfs that show $`M-DDO51`$ colors of moderately metal-poor giants), they may be field giant stars, or they may be Carina giants with several-$`\sigma `$ errors in their photometry.

Based on the location of the primary locus of Carina RGB stars in Figure 6, we may now apply a second, color-magnitude selection criterion to our giant star candidate sample, since, presumably, any RGB stars associated with Carina, no matter how far from the core of Carina and within our data set, should resemble RGB stars in the CMD of the Carina core. We note that the expected timescale ($`\sim 1`$ Gyr) for tidal drift from Carina within the angles we survey is shorter than the enrichment timescale (Gyrs). Moreover, at present there is no evidence for an age-metallicity relation among the variously aged populations in Carina (Smecker-Hane et al. 1994, Da Costa 1994). We also expect stars in tidal tails to be lying within a few physical tidal radii (a few kpc) of the Carina core along the line of sight; these differences in distance would be virtually indistinguishable with our photometry.
Only in very particular circumstances – e.g., looking along Carina’s orbit – would we expect tidal debris to be highly elongated along the line of sight (for an example of the typical geometry of streamers around a satellite see Johnston 1998, Figure 3). From the Carina core RGB distributions in Figure 6 we define a CMD bounding box, shown by the solid lines. This region (assembled from the combination of three second-order polynomials) was defined to contain the bulk of the Carina core RGB locus, as well as the apparent red clump on the bluest end. This same box may now be applied to the entire giant star candidate sample over 2.2 deg<sup>2</sup> from §3.1 to pick from among them those that, in addition to being pre-selected as evolved stars by their colors, are of the correct magnitude for their color to be associated with Carina (i.e., of an appropriate abundance/distance combination). Note that the actual size and shape of the bounding box utilized here does not really matter as long as it is applied consistently at all places in our survey mapping (Figure 1); in §3.4 we account for the level of contamination by background/foreground stars, which scales with the size of the box. A box that is too large simply allows more contamination into our final selection of Carina-associated stars. While this translates into a lower efficiency for follow-up spectroscopy of the selected sample, the increased contamination level may be removed in a statistical way by appropriate background subtraction. On the other hand, a box that is too restrictive means that more Carina-associated stars may be lost, which may decrease the signal-to-noise of the extended stellar population discussed below. Again, we have attempted to compromise between these extremes, except that we erred on the side of making the box a “loose fit” to the RGB locus to account for the fact that extratidal stars may acquire slightly different mean distances as they get drawn out with different energies, ahead of or behind the parent object (e.g., as suggested by the bridge/tail description of Toomre & Toomre 1972).

### 3.3 “Carina-Like” Giant Candidates in the Two-Color and Color-Magnitude Diagrams

Figure 7 shows the primary locations of the core Carina RGB stars, as selected by the CMD-selection box in Figure 6. We see that the color-color distribution of these stars is smaller than the entire “giant box” in the two-color diagram, and one could conceive of narrowing the color-color selection criterion further by collapsing the giant box around the Carina locus. We do not do so here, however. We now apply the selection criteria defined by the bounding boxes in both Figures 4 and 6 to the entire sample of stars in the C40 and C100 data sets to define the sample of most likely Carina-associated stars. In Figure 8 we show the CMDs of all C40 and C100 stars satisfying the two-color selection criterion in Figure 4 – i.e., all stars selected as evolved stars of similar metallicity to Carina in our entire survey area. Even when all evolved stars from the entire survey are included in the CMD, the dominant CMD structure, particularly for cool stars, is the Carina RGB. We include in Figure 8 the CMD selection criterion given by the bounding box selected in Figure 6. We note that it encloses most of the apparent Carina RGB, as expected.
However, a notable exception is the trio of stars at $`M_o\sim 17.5`$ and $`(M-T_2)_o>1.85`$: though these three stars appear to be an extension of the Carina RGB, the bounding box does not extend red enough to include them because there were no examples of such relatively rare stars in the Carina core. Though we have formally excluded these stars from our analysis, we do consider them likely Carina-associated stars that we would have found with an appropriately extended CMD bounding box. We return to discuss these stars in our spectroscopic tests in §3.5.

### 3.4 Sky Distributions and Evaluation of Giant Background Level

Figure 9a shows the distribution on the sky of all stars selected by our combined color-color-magnitude selection. The central fall-off in the concentration of Carina giant candidates is obvious, but the fall-off does not truncate completely; indeed, the density of candidates seems to flatten out and extend not only beyond the core radius, but beyond the tidal radius as well. By comparison, we show in Figure 9b the stars that have similar color-color characteristics as Carina giants (i.e., within the box in Figure 4) but that lie outside the CMD bounding box in Figure 6, i.e., stars that would be metal-poor giants, but generally at different distances than Carina. The latter show no central concentration, but rather, for the most part, the (expected) random, flat distribution of halo field giants.

At first, the similarity of the distribution of the relative stellar density in the outer parts of Figure 9a and Figure 9b may appear to be a cause for concern. Some of the similarity is related to the differing depths of the individual CCD fields, which modulate both the number of detected Carina-associates and non-associates. Moreover, there could be some “spill-over” of true Carina stars to just outside of our selection criteria, which moves them from panel 9a to panel 9b. But of most concern to our purpose here is what fraction of the extratidal stars in Figure 9a are likely to be real and how many are expected to be “interlopers” – e.g., (1) dwarf stars that are accidentally selected to be “Carina-like” giants due either to photometric errors or extremely low \[Fe/H\], or (2) actual giant stars that happen to have the correct color/abundance/distance characteristics that place them into our sample. We must evaluate the expected level of contamination from interlopers, and we do so by monitoring the “background level” of such stars as a function of magnitude.

Before continuing, however, we must stress that the sky distributions of candidate giant stars as illustrated in Figure 9 are modulated by the variable depth of our data across the entire survey area; thus Figure 9a, even if accurately depicting the existence of extratidal Carina debris, cannot be interpreted as a mapping of the true relative density of that debris. Our analysis must proceed by taking into account the relative depths of our somewhat inhomogeneous data set. We do so by analyzing the survey with four different magnitude limits. At each magnitude limit, we include only those survey areas that are complete to that depth. The net effect of this approach is that with an increasing magnitude limit we cover less area on the sky, but we are able to recover greater densities of potential Carina-associated stars in the smaller areas because we probe further down the RGB.
The goal of analyzing different magnitude-limited data sets in this way is a fair appraisal not only of the expected contamination levels, but of the true relative sky densities of giant stars, while taking maximal advantage of the area covered at various depths. Figure 10 shows the sky distributions of color-color-magnitude selected Carina-associated giant candidates taking into account the magnitude limits of the frames. For comparison, Figure 11 shows the same for all stars selected as metal-poor giants in the color-color diagram, but which are not along the Carina RGB in the color-magnitude diagram (i.e., presumably metal-poor giants at different distances than Carina). In Figure 10a and perhaps Figure 10b, it can be seen that the brighter, candidate Carina-associated stars do seem to show an overall radial drop-off from the core, but one that continues beyond the nominal tidal radius of IH95 (compare to the presumably “random field” star distributions in Figures 11a and 11b). For simplicity during the remainder of the discussion in this Section, we will refer to these stars outside the IH95 tidal radius as “extratidal”, though we acknowledge the controversy regarding the true tidal radii of dwarf galaxies like Carina, as discussed in §1. Unfortunately, it is more difficult to follow any apparent radial trend in the deeper survey fields shown in Figures 10c and 10d, because of the poor radial sampling in the placement of the fields. On the other hand, it can be seen that the extratidal giant candidates in Figures 10c and 10d outnumber the stars in the same regions of the sky in Figures 11c and 11d, respectively. This is significant because the area in the color-magnitude diagram from which the extratidal giants are culled is much smaller in Figure 10 than in Figure 11. Thus, it would appear, we are seeing a significant excess of extratidally-positioned stars at just the colors and magnitudes expected for Carina-associated populations.

To put the latter impression on a more quantitative footing, we evaluate in Figures 12 and 13 the background contribution of field giant stars (and the expected small contribution of foreground dwarfs from photometric error and extreme subdwarfs) to our counts of candidate Carina-associated giants. The foundation of our background analysis is the assumption that the distribution of random halo field giants should be relatively smooth and slowly varying with distance. Indeed, if the Galactic halo follows anything close to an $`R^{-3}`$ power law, as is widely assumed (and reported from the most recent surveys of blue horizontal branch stars; see a recent summary in Sluis & Arnold 1998), the counts of halo giants per unit solid angle should be flat with magnitude, modulo second-order effects relating to possible metallicity gradients (which are generally not found in the outer halo; Searle & Zinn 1978, Zinn 1985, Carney et al. 1990, Armandroff, Da Costa & Zinn 1992, Rich 1998). In our case, we count giant stars already pre-selected (on the basis of their position in the color-color diagram) to be metal-poor; if we adopt a counting filter in the CMD with a shape matching the CMD selection box in Figure 6 (which follows the outline of an \[Fe/H\]$`\sim -2`$ RGB), our magnitude counts of these giants translate more or less directly into counts by distance modulus. Thus, we offset the CMD bounding box of Figure 6 by 0.5 magnitude intervals, and at each offset position count the number of giants satisfying the color-color criterion shown in Figure 4.
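A schematic Python version of this shifted-box bookkeeping follows. The two second-order polynomial edges standing in for the Figure 6 bounding box are placeholders (the published box is defined graphically), and `color0`, `mag0` are assumed to hold the dereddened colors and magnitudes of stars that already pass the two-color giant criterion of Figure 4.

```python
import numpy as np

# Placeholder polynomial edges standing in for the Carina RGB bounding
# box of Figure 6; the real box is defined graphically.
bright_edge = np.poly1d([0.5, -4.5, 24.0])   # M_0 of bright edge vs (M-T2)_0
faint_edge = np.poly1d([0.5, -4.5, 25.2])    # M_0 of faint edge vs (M-T2)_0

def in_cmd_box(color0, mag0, offset=0.0, color_range=(0.9, 1.85)):
    """True where a star lies inside the RGB bounding box shifted by
    `offset` magnitudes (offset < 0 shifts the box brightward)."""
    c = np.asarray(color0)
    m = np.asarray(mag0) - offset
    return ((c > color_range[0]) & (c < color_range[1])
            & (m > bright_edge(c)) & (m < faint_edge(c)))

def background_histogram(color0, mag0, step=0.5, n_offsets=7):
    """Counts of color-color selected giants in the box as it is
    stepped brightward in `step`-magnitude intervals; the flat level
    at large negative offsets estimates the field-giant background
    under the Carina RGB (offset = 0)."""
    offsets = -step * np.arange(n_offsets)[::-1]   # -3.0, -2.5, ..., 0.0
    counts = np.array([np.sum(in_cmd_box(color0, mag0, offset=dm))
                       for dm in offsets])
    return offsets, counts
```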
These counts are summarized for each of our four magnitude-limited data sets in Figure 12 (which shows all metal-poor, color-color selected giants) and Figure 13 (which shows only those metal-poor, color-color selected giants outside the IH95 tidal radius). Note that the actual filter used for each panel in Figures 12 and 13 was modified to take into account the varying depths of the four magnitude-limited data sets. For example, in the $`M<19.3`$ data set, the bottom of the CMD bounding box is truncated precisely at $`M=19.3`$. In turn, the $`M<19.8`$ data set is analyzed with the appropriately truncated CMD bounding box at $`M=19.8`$, and so on. For each magnitude-limited data set, the modified, truncated bounding box is the one offset and used to produce the giant count histograms in Figures 12 and 13. Note that only offsets in the direction of brighter magnitudes make sense, as offsets in the fainter direction incorrectly evaluate the numbers of stars due to sample incompleteness at the faint end. The maximum negative magnitude offset was given by the bright-end, CCD-saturation limit of the survey.

The main feature to note in each panel of Figures 12 and 13 is the relatively flat contribution of stars at magnitudes brighter than the Carina RGB. Indeed, under the assumption that the majority of these stars are giant stars and not dwarfs with large photometric errors or low \[Fe/H\], the high degree of flatness in the histograms strongly supports an $`R^{-3}`$ distribution for Galactic halo field giants (or at least metal-poor giants). We assume this flatness persists through the magnitude range dominated by the Carina RGB ($`\Delta M=0`$) in our survey, and adopt the mean level of the flat distribution as our background level of field halo giants and other interlopers in our “Carina-associated” giant sample at $`\Delta M=0`$.

As the magnitude offsets approach 0 in each case shown in Figures 12 and 13, we see a sudden rise in the numbers of giants counted in the shifting counting box. The peak histogram values centered on $`\Delta M=0`$ give the total number of stars in the Carina CMD bounding box as originally centered on the Carina RGB. But the sharpness of the rise seems to vary among the different samples. This is because there is some overlap of the shifted box with the true Carina RGB for small magnitude offsets, and the maximum vertical extent of the bounding box varies: $`\Delta M=1.3`$, 1.7, 2.1 and 2.5 magnitudes for the $`M<19.3`$, $`M<19.8`$, $`M<20.3`$ and $`M<20.8`$ data sets, respectively. If the “Carina-contaminated” bins at offsets smaller than these $`\Delta M`$ are ignored, we may determine the mean expected background contribution to our candidate Carina-associated stars from the various test offsets of the CMD bounding box in Figures 12 and 13. These data are included in Table 1.

Figure 12 shows that when all color-color selected metal-poor giants are considered regardless of their sky position in our survey, the number of expected contaminants lying within the Carina RGB is rather small: $`<4\%`$ for all four magnitude limits explored. This suggests that, unless some peculiar problem is affecting our candidates specifically at the color-magnitude location of the Carina RGB, we might expect relatively high Carina membership probabilities from a spectroscopic follow-up study of these candidates (an expectation that is supported by our successful dwarf/giant discrimination of the Mateo et al.
(1993) stars discussed at the end of §3.1 and shown in Figure 5, and by our spectroscopy in §3.5 below). Moreover, from the data in Figure 13 and Table 1 we see that the excess of candidate extratidal Carina-associated giants is at the level of 3.7$`\sigma `$ or more for each of the four magnitude-limited data sets we explore.

A final observation to be noted from Figure 10 is that the extratidal distribution of Carina-like giant candidates appears rather isotropic; however, our field coverage is not ideal for assessing this. In contrast, Kuhn et al. (1996) report no extratidal Carina extension perpendicular to its major axis, but note that they explore only two minor-axis fields 2° away from the Carina center. We also note some interesting similarities between the angular distribution of our Carina-associated giant candidates, particularly those in the $`M<19.8`$ and $`M<20.3`$ data sets (Figures 10b and 10c), and the isopleths published by IH95. In particular, the various spurs of higher density extending off the IH95 central Carina contours and past the tidal radius (especially the spur to the southwest, but also several other spurs at other position angles) are rather similar to such features in our data. Perhaps this is not surprising, as IH95’s starcount analysis was essentially limited to counting Carina RGB members, albeit about a magnitude deeper than we have here. Nevertheless, the apparent general agreement in the two rather different analyses is encouraging.

### 3.5 Spectroscopy

Table 1 and Figures 10 and 13 suggest that we have found a significant “extratidal” population around Carina. A radial velocity survey to confirm whether these stars are indeed Carina-associated and, if so, to understand their velocity characteristics, is an obvious next observational step. While we have been unable to do such a study, we have managed to secure spectra of two of our Carina giant candidates – both exterior to the IH95 tidal radius – during twilight time on nights allocated for other programs on the C100. With the remaining available telescope time, we also decided to observe two of the brighter, $`(M-T_2)_o>1.85`$ stars that lie outside of the Carina CMD bounding box, but that do appear to be at the very tip of the Carina RGB (see discussion in §3.3). These stars yielded much better S/N spectra in the observing time available than the other two stars, which are almost a magnitude fainter. One of these brighter stars is inside the tidal radius, while the other lies exterior. All four spectra were taken on the nights of UT 27 Aug to 1 Sep 1999 with the Modular Spectrograph. The wavelength range spans from approximately H$`\beta `$ to H$`\alpha `$ at $`\sim 1`$ Å pixel<sup>-1</sup>. The spectrographic set-up and the radial velocity reduction procedures have been described elsewhere (Majewski et al. 1999d). We present the results of this analysis, and other data about the stars – including the positions, the angular separation from the center of Carina, the magnitude and color, our derived radial velocities and the height of the radial velocity cross-correlation peak (see Majewski et al. 1999d) – in Table 2. We show the positions of the four stars observed spectroscopically as triangle symbols in Figure 10a.
From repeat measures of previously well-observed stars during this observing run, we have determined the external, random and systematic velocity errors on the Carina candidate spectra to be 10–15 km s<sup>-1</sup>; this is sufficient to check association with Carina, but not good enough to draw conclusions regarding possible differential velocity structure. The heliocentric radial velocity of Carina is 224$`\pm 3`$ km s<sup>-1</sup> (Mateo 1998) with a spread in the velocities of individual carbon stars and giants of about $`\pm 15`$ km s<sup>-1</sup> (Lynden-Bell, Cannon & Godwin 1983; Mateo et al. 1993). Star C1407251, a bright giant candidate located within 16 arcmin of the Carina core, has a velocity that agrees with the Carina velocity and is certainly a member. Star C2103156, the bright giant star candidate that lies outside the IH95 tidal radius, gives a spectrum that looks remarkably metal-poor and very similar to that of star C1407251; the radial velocity for star C2103156 lies within 2$`\sigma `$ of Carina’s systemic velocity and we consider this giant candidate a likely member of the Carina system. Our spectra of stars C2501583 and C2501927, the two extratidal stars that were observed and that lie inside the Carina CMD bounding box, are also rather devoid of significant absorption lines, which again suggests an association with Carina. Unfortunately, however, the combination of no strong lines and a weak and noisy signal makes it hard to get a good radial velocity for these stars: we obtain marginal cross-correlation peaks that we generally regard as unacceptably small ($`<0.3`$) and indicative of a several times larger random velocity error. Nevertheless, the derived heliocentric velocities are very close to Carina’s (with C2501927 almost an exact match) and give rather poor matches to the expected velocity of foreground dwarfs, $`\sim 20`$ km s<sup>-1</sup> (Mateo et al. 1993). We conclude that the latter two stars are far more likely to be associated with Carina than to be foreground stars. We regard this small but important test of four of our identified Carina-associated RGB candidates as vindication of our approach, and as support for our claim that the distribution of candidate Carina-associated giants in Figures 9 and 10 most likely reflects a real extended structure of the Carina dwarf. We hope to make further observations of additional extratidal candidate stars with a larger-aperture telescope.

### 3.6 Radial Profiles

We present “Carina RGB” starcounts as a function of elliptical annuli, along with the sampled areas in each annulus, in Table 3. The shapes of the elliptical annuli were adopted from the parameters given by IH95, namely an ellipticity of 0.33 at a position angle of 65°. We space the width of our annular counting bins at one-fifth the major-axis tidal radius given by IH95, except that we use twice finer resolution in our first four bins. The $`r_{inner}`$ and $`r_{outer}`$ listed in Table 3 correspond to the inner and outer radii of each annulus along the major axis. We convert the annular counts to densities (taking into account the actual survey area covered within each annulus), subtract the mean density of the background counts (derived from data in Tables 1 and 3 and presented in Table 3), and present the resultant radial densities (per arcmin<sup>2</sup>) in Figure 14. To improve our signal-to-noise, we combine our outermost eight annuli into two bins in Figures 14 and 15.
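A schematic of this annular-profile construction is given below (a sketch: it uses a flat-sky approximation and the IH95 shape parameters, with the per-annulus surveyed areas and the background density of Table 3 supplied as inputs):

```python
import numpy as np

def elliptical_radius(ra, dec, ra0, dec0, pa_deg=65.0, ellipticity=0.33):
    """Major-axis radius (arcmin) of the IH95-shaped ellipse passing
    through each star; coordinates in degrees, small-angle flat-sky
    approximation."""
    dx = (ra - ra0) * np.cos(np.radians(dec0)) * 60.0   # east offset, arcmin
    dy = (dec - dec0) * 60.0                            # north offset, arcmin
    pa = np.radians(pa_deg)                # PA measured from N through E
    xp = dx * np.sin(pa) + dy * np.cos(pa)              # along major axis
    yp = dx * np.cos(pa) - dy * np.sin(pa)              # along minor axis
    return np.hypot(xp, yp / (1.0 - ellipticity))

def radial_profile(r, edges, surveyed_area, background):
    """Background-subtracted surface densities (stars arcmin^-2) in
    elliptical annuli; `surveyed_area` is the portion of each annulus
    actually covered by the survey (arcmin^2, as in Table 3)."""
    counts, _ = np.histogram(r, bins=edges)
    return counts / surveyed_area - background
```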
The difference in the relative numbers of stars at each radius for our different magnitude-limited samples merely reflects the increase in the relative density of stars as a function of survey depth. In general, the counts from the different magnitude-limited data sets track each other at all radii (but, of course, the four data sets are not completely independent), though the $`M<19.3`$ and $`M<19.8`$ data sets show more Poissonian scatter at large radii. We also include in Figure 14 the Carina count data as presented by IH95 (their Table 3), which have a magnitude limit almost another magnitude deeper than our $`M<20.8`$ sample and which show a commensurately higher density scaling. Our counts roughly track the IH95 counts, as well as the similarly deep Kuhn et al. (1996) starcount data also presented in Figure 14. Note that the IH95 data lose signal-to-noise after about 40 arcmin, at which point the background correction becomes critical, and IH95 limited their presentation to this radius (so we do so here as well). Even before 40 arcmin, however, the IH95 data show the effects of decreased signal-to-noise. On the other hand, our $`M<20.8`$ data have reasonable signal-to-noise to almost 80 arcmin, at which point we are limited only by the extent of our survey sky coverage. Thus, our technique could potentially probe the extended structure of Carina to even greater radii than we have done here.

In order to compare results more readily, we normalize the relative densities in our four data sets to the IH95 data (Figure 15a). We normalize near the radius corresponding to our third annulus, where our data have the highest counts (signal-to-noise). For the Kuhn et al. (1996) data, we normalize to the IH95 counts at the Carina core. The various data sets show general agreement in the character of the radial profile, especially within 20 arcmin. Moreover, as found by IH95 and hinted at in the counts by Demers et al. (1983), our data show a break in the fall-off rate of the radial counts near 20 arcmin, and this slower rate of decline continues to the radial limit of our survey area. However, the counts in the IH95 data tend to be several times higher than ours at the outer radii of overlap (from about 13 to 40 arcmin), though this is where the IH95 data show large uncertainties and are most affected by their adopted background levels. Within our own survey there is a trend whereby the data sets with brighter magnitude limits have faster fall-offs at large radii than do the deeper data sets. This is likely a result of the fact that our brighter data sets face the problem of quantization noise at smaller radii than do the fainter data sets. Thus, for purposes of analysis at the largest radii, we take the $`M<20.3`$ and $`M<20.8`$ data sets as most likely to represent the true density profile (Figure 15b), though even these have some quantization noise at the outermost extent of our survey area.

The King profile (King 1962, 1966) fit by IH95 to the central region of their Carina data is also shown in Figure 15a, and highlights the dramatic break in our radial density rate of decline after about 20 arcmin. Our beyond-the-break counts of giants also approximately match the Kuhn et al. (1996) starcount data (which monitor excess populations only beyond the Carina break radius), though the latter also show more apparent scatter at large radii. This general match to the Kuhn et al. data to the limit of our survey is in spite of the fact that the Kuhn et al.
data correspond only to fields along the Carina major axis. One might interpret this to suggest that our adoption of uniformly shaped and oriented elliptical annuli has little effect on the derived radial profiles, but we note that in our deepest data sets the sampling of azimuthal angles around Carina narrows to a range dominated by the Carina major axis, similar to the sampling by Kuhn et al.

### 3.7 Mass Loss Rate

Models of the radial distribution of the stars around a tidally disrupting satellite, e.g., by Johnston et al. (1999b, 2000), show characteristics very similar to those shown in Figure 15 (compare to Figure 15 of Johnston et al. 1999b). The Johnston et al. model demonstrates that the break point results from the contribution and eventual dominance of unbound stars. Dashed lines are included in Figure 15 past the tidal radius to represent various $`r^{-\gamma}`$ laws discussed by Johnston et al. (1999b, 2000). We present in Figure 15b only our deeper samples, for clarity in the comparison. It can be seen that our deeper survey data have a radial fall-off somewhere between $`\gamma =1`$ and $`2`$ to the limit ($`\sim 80`$ arcmin) of our areal coverage.

Given the match of our data as presented in Figure 15 to the model predictions of Johnston et al. (1999b), we proceed for now under the assumption that past the radial profile break we are seeing unbound, extratidal debris, and calculate the mass loss rate using the formalism outlined in Johnston et al. (1999b). Note that even with “perfect” observations of their simulations, Johnston et al. (1999b, 2000) found that they could only recover the rate of destruction of their satellites to within a factor of two. Hence the dominant source of uncertainty in our own calculation will come from the inherent simplicity of the model rather than from observational errors, and the number we derive should be taken as an order-of-magnitude estimate of the destruction rate rather than a definitive measurement. Because the Johnston et al. (1999b) formalism assumes complete area sampling in the derivation of the relative numbers of stars within certain annuli, we scale our counts of stars in each of our annuli by the ratio of the total elliptical area of the annulus to the amount of that annulus we actually surveyed. Under the Johnston et al. nomenclature, we adopt $`r_{break}`$ as occurring between our sixth and seventh annuli (23 arcmin), take the radius to which the extratidal debris is well-defined as $`R_{xt}=64`$ arcmin (between our thirteenth and fourteenth annuli), and take for Carina’s orbital parameters $`g(\theta )=1`$ and $`T_{orb}=2\pi R_{GC}/(200\ \mathrm{km\ s^{-1}})`$ with $`R_{GC}=101`$ kpc; this yields a mass-loss rate of $`(df/dt)_1=0.27`$ Gyr<sup>-1</sup>. This rather high value is relatively insensitive to the actual outer radial limit we take for the extratidal population, $`R_{xt}`$. We note that Johnston et al., who used the surface brightness at the location of $`r_{break}`$ in the IH95 data to estimate the mass-loss rate with an alternative computational method, obtained $`(df/dt)_2<0.33`$ Gyr<sup>-1</sup>, an upper limit in agreement with our value. Carina and Ursa Minor have the largest estimated upper limits for mass loss rates among the dwarf galaxies discussed by Johnston et al. (all Milky Way dSph’s excluding Sagittarius).
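For orientation, the adopted orbital time evaluates to about 3.1 Gyr; the short Python sketch below makes the unit conversion explicit (the Johnston et al. 1999b mass-loss expression itself, which combines $`T_{orb}`$ and $`g(\theta )`$ with the area-corrected extratidal star-count fractions, is not reproduced here):

```python
import numpy as np

KM_PER_KPC = 3.0857e16   # kilometres per kiloparsec
S_PER_GYR = 3.1557e16    # seconds per gigayear

R_GC = 101.0             # adopted Galactocentric distance of Carina [kpc]
V_ORB = 200.0            # assumed orbital speed [km/s]

# T_orb = 2 * pi * R_GC / (200 km/s), converted to Gyr:
T_orb = 2.0 * np.pi * R_GC * KM_PER_KPC / V_ORB / S_PER_GYR
print(f"T_orb = {T_orb:.2f} Gyr")   # -> T_orb = 3.10 Gyr
```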
The implication of a mass-loss rate of this order of magnitude is that Carina will not likely survive another Hubble time. Extrapolating backwards in time, one comes to the conclusion that Carina has likely lost a significant amount of mass already, and might be expected to sport significant tidal tails (see §4.3).

## 4 Discussion

### 4.1 Summary of Results

Our goal was to find evidence for, and begin mapping, tidal debris in the Carina dwarf galaxy by way of a search for Carina-associated giant stars to well beyond the nominal Carina tidal radius. We have used two criteria to select stars that are candidate giant stars associated with the Carina dwarf spheroidal: (1) colors in the ($`M-T_2`$, $`M-DDO51`$) plane, where we are able to isolate evolved stars with \[Fe/H\] of order that of Carina, and (2) positions in the CMD that are similar to those of Carina giant branch and red clump stars. We check the background level of halo field giants, metal-poor subdwarfs masquerading as giants, and other possible interlopers, and find that they make a minor contribution to our signal. The latter stars, which we expect mainly to be random halo field giants, show a flat magnitude count slope, which suggests that they follow an $`R^{-3}`$ law, as is commonly adopted for the halo.

We derive the radial profile of candidate Carina-associated giants, and find a break in the counts at about 20 arcmin, near the tidal radius derived by IH95 (who have better sampling and signal-to-noise in the Carina core). A well-established, $`3.7\sigma `$ excess of Carina-associated giant candidates is found beyond this radius, and spectroscopy of several of these stars verifies that they represent a real extended structure of Carina. The beyond-the-break stars show an $`r^{-\gamma}`$ decline in their radial fall-off, with $`1<\gamma <2`$, that is of a form similar to the predictions for unbound stars in tidally disrupting systems (Johnston et al. 1999b).

Our excess of giant stars outside the King model cut-off radius may well represent stars that have recently been stripped from Carina by Galactic tidal forces. Or they may represent still-bound stars on retrograde orbits, the counterparts of stars that were tidally stripped because they happened to be on prograde orbits when they became extratidal (Innanen & Papp 1979). Alternatively, if, as discussed in §1, modest amounts of dark matter prohibit the production of tidal tails, our Carina radial profile to $`>80`$ arcmin heralds the need for an explanation of multiple-component structures in dwarf spheroidals like Carina. Each of these alternatives has a distinct kinematical signature that would be recognizable with an appropriate radial velocity survey. Note that our search technique identifies actual dwarf galaxy-associated giant candidates, and thus we are able not only to find radially-averaged galaxy profiles, but to make two-dimensional maps of the local overdensity of extended debris. In the case of Carina, since we observe viable candidates to the edges of our survey area, several times beyond the King model cut-off radius, it is possible that we are only seeing the beginnings of a wider diaspora of Carina stars that, if tidal debris, will eventually sort by relative energies into classic tidal tails at larger distances from Carina.
This hypothesis must be followed up both by casting a wider photometric net for extratidal Carina giants at greater angular separations from the Carina core, and by testing whether the dynamics of present and future samples of “extratidal Carina-associated giants” have the proper, rotation-like radial velocity signature predicted for tidal debris. Unfortunately, we have been able to do the proper spectroscopic confirmations for only three extratidal stars, and so it is beyond the capabilities of the present paper to settle the thorny and weighty issues concerning dark matter, tidal debris and the true location of dwarf spheroidal tidal radii outlined in the Introduction. Instead, we choose to end our discussion by raising additional intrigue as to how Carina’s extended populations may affect the study of two of its neighbors – the Magellanic Clouds and the Milky Way.

### 4.2 Possible Connection to Apparent Tidal Debris Near the Magellanic Clouds

In a separate contribution from our program to study substructure in the halo, we discuss a targeted search for tidal stellar debris from the Magellanic Clouds (Majewski et al. 1999a; see Majewski et al. 1999c). The latter work includes observations in a partially-filled ring of fields encircling both Clouds. We have found coherent radial velocity structures among the distant giants identified in almost every field we have surveyed, which strongly suggests that there is tidal debris widely dispersed across the stretch of sky we have sampled in that survey (i.e., an envelope from $`250^{\circ}`$ to $`320^{\circ}`$ in Galactic longitude and from $`-18^{\circ}`$ to $`-55^{\circ}`$ in Galactic latitude). However, the strongest signal we have encountered – both in the excess in the density of giants identified as well as in the tightness of the coherence of the radial velocities of these stars – is among a set of six fields spanning a $`15^{\circ}`$ arc located $`18^{\circ}`$ from the center, and to the northeast, of the LMC (we term these fields LMC-NE here). The LMC-NE fields are placed directly between the LMC and Carina, with the surveyed arc of fields slicing across the arc connecting the LMC and Carina, 8/10 of the way from the LMC to Carina. It is too soon to ascribe the coherently moving stars in the LMC-NE fields to a Magellanic origin; however, their spatial and velocity distributions are not inconsistent with model expectations (Majewski et al. 1999a) for Magellanic debris, with a possible additional contribution of a moving group of stars from the LMC polar ring described by Kunkel et al. (1997).

These results may be relevant to our findings here, since the LMC-NE fields showing the distant moving group stars get as close as $`3^{\circ}`$ from the center of Carina. Note also that Carina lies near the Magellanic plane, along with the Clouds, Ursa Minor, Draco and a number of globular clusters, and some or all of these objects have been proposed to represent chunks of debris from the break-up of a formerly larger progenitor Magellanic system (Lynden-Bell 1976, Kunkel 1979, Palma, Majewski & Johnston 1999). In this scenario, these dwarf galaxy/globular cluster chunks would likely be awash in a debris stream of stars also pulled out of the progenitor. Thus, if Carina itself has an origin as tidal debris, we might expect coherent groups of stars nearby, whether drawn from Carina directly, or not.
An argument against a picture as just painted is that tidal dwarf galaxies are not expected to contain dark matter (Barnes & Hernquist 1992, Moore 1996, Burkert 1997, Klessen & Kroupa 1998), whereas large dark matter contents have been used to explain the high velocity dispersion of Carina stars (e.g., Mateo et al. 1993). Whether Magellanic in origin or not, there is a blanket of coherently moving stars in the outer halo in this general direction of the sky, detected to within $`3^{\circ }`$ of Carina. It is worth checking whether this blanket extends to the position of Carina and contributes to the extratidal giant candidates we have found around Carina. However, given that there is a radial fall-off of stars with distance from Carina, we might not expect all of the extratidal Carina giants to be contributed by the LMC-NE feature. Clearly a more extensive survey of these stars from the LMC to Carina is needed. ### 4.3 Implications for the Structure and Origin of the Milky Way Halo If Carina is losing stars, then the Milky Way halo is gaining them. Because of its age distribution, Carina presents an interesting case for the accretion of stars in the halo. If disintegrating, Carina should presently be contributing predominantly intermediate age ($`\sim 7`$ Gyr) stars (Mould & Aaronson 1983, Mighell 1990) with a small admixture of stars from its old (12-15 Gyr) and young (2-3 Gyr) burst populations (Smecker-Hane et al. 1994, Grebel 1998, Mateo 1998). The derived proportional integrated star formation for these populations, based on Carina’s present ratios of different aged stars, varies among authors but averages to ratios of old:intermediate:young approximately as 0.2:1.0:0.1. The youngest stars accreted from Carina may be comparable to the Preston et al. (1994) blue metal-poor stars, of which about half are thought to be relatively young stars from accretion events (Preston 1999, personal communication). A comparison of the numbers of such young stars in Carina to the number in the Galactic halo has been used to provide an upper limit on the contribution of stars to the Milky Way by Carina or Carina-like, accreted galaxies: Unavane, Wyse & Gilmore (1996) calculate that at most approximately 60 dwarfs with the mass and metallicity of Carina could have been accreted by the Galactic halo, and would now account for a total of $`\sim 3\%`$ of the mass of the halo. However, such a calculation assuming a static Carina may greatly underestimate the potential contribution of matter to the Galactic halo via the accretion of dwarf galaxies. The present mass loss rate we have determined (§3.7) suggests that Carina is now losing of order 27% of its mass every Gyr, and, since that rate was determined from the current distribution of luminous matter under a scenario where light traces mass, that fractional mass loss rate may be adopted for both the dark and luminous matter. (The luminosity of Carina is $`4.3\times 10^5L_{\odot }`$ and its total estimated mass is $`1.3\times 10^7M_{\odot }`$, which yields an integrated $`M/L`$ of 31; Mateo 1998.) If we assume this fractional mass loss rate as typical over the life of Carina, then we approximate the mass of Carina $`N`$ Gyr ago to be $`(0.73)^{-N}`$ times larger than at present. Thus, we find that Carina was approximately a factor of 2, 10, and 100 times larger at the times of the bursts occurring approximately 2, 7, and 14 Gyr ago. Carina’s predominant stellar contribution to the Milky Way may therefore have been in the form of old stars from its first starburst. 
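This extrapolation is simple compounding arithmetic; the following minimal sketch (ours, with the $`\sim `$27% per Gyr rate and the burst epochs taken from the text) reproduces the quoted growth factors:

```python
# Compounding check of the mass-loss extrapolation: with ~27% of the mass
# lost per Gyr, the surviving fraction after t Gyr is 0.73**t, so the mass
# N Gyr ago was larger by a factor (0.73)**(-N).
survive = 1.0 - 0.27   # fraction of the mass retained per Gyr

for n_gyr in (2, 7, 14):                       # epochs of the three bursts
    print(f"{n_gyr:2d} Gyr ago: ~{survive**(-n_gyr):5.1f} times more massive")

# After a Hubble time (~14 Gyr) only ~1-2% of the original mass survives,
# consistent with the remnant fraction discussed below.
print(f"fraction surviving 14 Gyr: {survive**14:.3f}")
```

Running this gives factors of roughly 2, 9 and 82 for the three epochs, i.e. the "2, 10, and 100" quoted above at the stated level of approximation.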
This mass loss rate also suggests that, if it has been maintained for the past Hubble time, accretion of Carina alone would have contributed about 6% of the Galactic halo’s mass, and Carina itself would now be reduced to $`\sim 1\%`$ of its original mass. Interestingly, Klessen & Kroupa (1998) find in N-body simulations of the tidal interaction of a satellite with a massive galaxy that the models converge to a dwarf remnant that has $`\sim 1\%`$ of the mass of the initial satellite. If Carina has been losing mass at this prodigious rate, then it has lost nearly all of the stellar component formed more than 7 Gyr ago. The loss of so much of the old stellar population could dramatically distort the observed star formation history (SFH) with respect to the actual SFH: the fact that 80% of the stars currently in Carina appear to be younger than the burst of star formation that occurred 7 Gyr ago (Hurley-Keller et al. 1998) could be ascribed to the fact that 90% of the mass of proto-Carina had already been accreted by the Milky Way by that time. We thank Andi Burkert for helpful conversations. Chris Palma is gratefully acknowledged for his creation of the SKAWDPHOT program used for the photometric transformation solutions. We are especially thankful for the support given this project by the Carnegie Observatories, both in Pasadena and at Las Campanas. The Director of the Carnegie Observatories, Dr. Augustus Oemler, has been especially kind with the granting of discretionary time, and with supporting SRM as a Visiting Associate of the Observatories. This telescope access has been particularly helpful as we have been unsuccessful in obtaining telescope time for this project elsewhere. JCO and SRM acknowledge several grants in aid of undergraduate research from the Dean of the College of Arts & Sciences at the University of Virginia. SRM acknowledges partial support from a National Science Foundation CAREER Award grant, AST-9702521, a fellowship from the David and Lucile Packard Foundation, and a Cottrell Scholarship from The Research Corporation.
no-problem/9911/hep-ph9911264.html
ar5iv
text
# 1 Introduction ## 1 Introduction Notwithstanding their enormous theoretical appeal, supersymmetric (SUSY) models provide several important challenges to the model builder. These include the problem of flavor changing neutral currents (FCNC), dimension five (including Planck scale suppressed) proton decay, and CP violating phases. Flavor non-conservation in SUSY theories is referred to as the supersymmetric flavor problem and is closely tied with the mechanism of SUSY breaking. New sources for FCNC in SUSY theories can arise from non-universal sparticle soft masses, and from a non-alignment (non-proportionality) of trilinear soft terms with the charged fermion Yukawa matrices. In $`N=1`$ minimal supergravity (SUGRA), universality and proportionality hold at the Planck scale ($`M_P=2.4\cdot 10^{18}`$ GeV). For estimating flavor changing processes one should renormalize the soft SUSY breaking terms between $`M_P`$ and the SUSY breaking/electroweak scales. If a GUT scenario is considered one should also integrate out the heavy states which decouple at $`M_G`$, the GUT scale. These two procedures violate universality and proportionality, which could cause problems with FCNC. In gauge mediated SUSY breaking, alignment holds at the low energy scale and FCNC are adequately suppressed. An alternative approach for resolving the supersymmetric flavor problem is the so-called decoupling solution, in which the FCNC are suppressed by large squark and slepton masses. In order to satisfy the existing experimental bounds it is sufficient to have squarks (sleptons) in the mass range $`\gtrsim 10`$ TeV. On the other hand, to avoid spoiling the gauge hierarchy, the stop mass (and also $`\stackrel{~}{b},\stackrel{~}{\tau }`$ in case of large $`\mathrm{tan}\beta `$) should not exceed $`\sim 1`$ TeV. The sparticles corresponding to the first two generations can be heavier, since their interactions with the higgs fields are suppressed by their Yukawa couplings. In the charged fermion sector we have, of course, the opposite hierarchical picture! A priori, without any symmetry reasons, it seems quite surprising to have a mass spectrum with such an inverse hierarchy. In a recently proposed scenario, one possible mediator of SUSY breaking was assumed to be an anomalous $`𝒰(1)`$ symmetry, so that SUSY breaking mainly occurs through a non-zero $`D_A`$-term (of $`𝒰(1)`$), while the contributions from $`F`$-terms are relatively suppressed. Sparticles will gain soft masses if their $`𝒰(1)`$ charge is non-zero. Otherwise, their soft mass will be relatively suppressed. It is tempting to exploit this $`𝒰(1)`$ also as a flavor symmetry (neutrino oscillation scenarios with a $`𝒰(1)`$ flavor symmetry have been studied within the MSSM and various GUTs). Since the top quark mass is close to the electroweak symmetry breaking scale, it is natural to assume that it arises through a renormalizable Yukawa coupling. (The $`𝒰(1)`$ charges of the higgs superfields are taken to be zero. Note that in the absence of additional symmetries the $`\mu `$ problem remains unresolved.) The Yukawa couplings of the light families can be suppressed by prescribing them appropriate $`𝒰(1)`$ charges. It follows that sparticles corresponding to the light fermions will have large soft masses in comparison to their counterparts from the third family. If the contribution to the soft masses from the $`D_A`$-term is dominant and in the $`\sim 10`$ TeV range, the supersymmetric flavor problem will be resolved. 
In this paper we attempt to develop this approach within the framework of $`SU(5)`$ GUT and study some of its phenomenological implications, building on earlier related works. Employing an anomalous $`𝒰(1)`$ as a mediator of SUSY breaking, and as a flavor symmetry, we obtain a suitable mass spectrum for proper suppression of FCNC. It turns out that this also leads to a strong (and desirable) suppression of the dominant nucleon decay in minimal SUSY $`SU(5)`$, since in the internal loops of the $`d=6`$ nucleon decay diagrams there appear sparticles belonging to the first two families. In our scenario the dominant decays occur through diagrams in which sparticles of the third generation participate, and for adequate suppression the regime with intermediate or low $`\mathrm{tan}\beta `$ is required. It is worth stressing that the neutrino and charged lepton decay channels are comparable in magnitude, with the proton lifetime estimated to be $`\tau _p\sim 10^3\tau _0`$ (where $`\tau _0\sim 10^{29\pm 2}`$ yr is the proton lifetime in minimal SUSY $`SU(5)`$, assuming squark and gaugino masses around $`1`$ TeV). Due to the $`𝒰(1)`$ flavor symmetry, all Planck scale mediated $`d=5`$ baryon number violating operators are also adequately suppressed. The model is also compatible with the various neutrino oscillation scenarios that are in agreement with the atmospheric and solar neutrino data. We stress the bi-maximal vacuum neutrino mixing scenario, with the $`𝒰(1)`$ symmetry once again playing a crucial role. We also indicate how the large and small mixing angle MSW oscillations for resolving the solar neutrino anomaly can be realized. The paper is organized as follows: in Section 2 we discuss SUSY breaking through an anomalous $`𝒰(1)`$ symmetry, and show how the desirable sparticle spectrum needed for suppression of FCNC can be obtained. Some necessary conditions which should be satisfied are pointed out. We also discuss suppression of nucleon decay and present the appropriate suppression factors, which do not depend on GUT physics but are closely tied to the low energy sector. In Section 3 we present an $`SU(5)`$ example in which (the same) anomalous $`𝒰(1)`$ symmetry is exploited as a flavor symmetry to provide a natural understanding of hierarchies between charged fermion masses and their mixings. We briefly explain how the bad asymptotic $`SU(5)`$ mass relations $`\widehat{M}_d^0=\widehat{M}_e^0`$ involving the light families are avoided in our approach. We discuss the various neutrino oscillation scenarios which simultaneously accommodate the atmospheric and solar neutrino puzzles. Estimates for the nucleon decay widths are also presented. Our conclusions are summarized in Section 4. ## 2 SUSY breaking, anomalous $`𝒰(1)`$, FCNC and nucleon decay We employ an earlier proposal and consider an anomalous $`𝒰(1)`$ symmetry as a mediator of SUSY breaking. It is well known that anomalous $`𝒰(1)`$ symmetries often emerge from strings. The cancellation of its anomalies occurs through the Green-Schwarz mechanism, and the associated Fayet-Iliopoulos term is given by $$\xi \int d^4\theta V_A,\xi =\frac{g_A^2M_P^2}{192\pi ^2}\mathrm{Tr}Q.$$ (1) The $`D_A`$-term is $$\frac{g_A^2}{8}D_A^2=\frac{g_A^2}{8}\left(\mathrm{\Sigma }_aQ_a|\phi _a|^2+\xi \right)^2,$$ (2) where $`Q_a`$ is the ‘anomalous’ charge of the $`\phi _a`$ superfield. Let us introduce a singlet superfield $`X`$ with $`𝒰(1)`$ charge $`Q_X=-1`$. 
Assuming $`\mathrm{Tr}Q>0`$ ($`\xi >0`$), the cancellation of (2) fixes the VEV of the scalar component of $`X`$: $$\langle X\rangle =\sqrt{\xi },$$ (3) with SUSY unbroken at this stage. Including a mass term for $`X`$ in the superpotential, $$W_m=\frac{m}{2}X^2,$$ (4) the cancellation of $`D_A`$ will be partial, and SUSY will be broken due to non-zero $`F`$ and $`D`$ terms. Taking into account (2) and (4), the potential for $`X`$ will have the form $$V=m^2|X|^2+\frac{g_A^2}{8}\left(\xi -|X|^2\right)^2.$$ (5) Minimization of (5) gives $$\langle X\rangle ^2=\xi -\frac{4m^2}{g_A^2},$$ (6) along which $$D_A=\frac{4m^2}{g_A^2},F_X\simeq m\sqrt{\xi }.$$ (7) From (2), taking into account (6), (7), for the soft scalar masses squared ($`\mathrm{mass}^2`$) we have $$m_{\stackrel{~}{\phi }_a}^2=Q_am^2.$$ (8) Thus, the scalar components of superfields which have non-zero $`𝒰(1)`$ charges gain masses through $`D_A`$. We will assume that the VEV of $`X`$ is somewhat below $`M_P`$, namely $$\frac{\langle X\rangle }{M_P}\equiv ϵ\simeq 0.22,$$ (9) while the scale $`m`$ is in the $`10`$ TeV range (see below). Those states which have zero $`𝒰(1)`$ charges will gain soft masses of the order of the gravitino mass $`m_{3/2}`$ from the Kähler potential $$m_{3/2}=\frac{F_X}{\sqrt{3}M_P}=m\frac{ϵ}{\sqrt{3}},$$ (10) which, for $`m=10`$ TeV, is relatively suppressed ($`\sim 1`$ TeV). The gaugino masses will also have the same magnitudes $$M_{\stackrel{~}{G}_i}\sim m_{3/2}\sim 1\mathrm{TeV}.$$ (11) The mass term (4) violates the $`𝒰(1)`$ symmetry and is taken to be in the $`10`$ TeV range. Its origin may lie in a strong dynamics where $`m`$ is replaced by the VEV of some superfield(s). One possibility is to introduce a singlet superfield $`Z`$ with $`Q_Z=2`$, and vector-like superfields $`\overline{Q}+Q`$ ($`Q_{\overline{Q}}=Q_Q=0`$), assumed to be a doublet-antidoublet pair of a strong $`SU(2)`$ gauge group. Then, imposing an additional global symmetry, $$Z\to e^{\mathrm{i}\alpha }Z,\overline{Q}Q\to e^{-\mathrm{i}\alpha }\overline{Q}Q,$$ (12) the lowest term in the superpotential will be $$W_0=\lambda \frac{\overline{Q}Q}{M_P^2}ZX^2.$$ (13) Assuming that $`SU(2)`$ becomes strong at a scale $`\mathrm{\Lambda }`$, the non-perturbative superpotential induced by the instantons will have the form $$W_{\mathrm{inst}}=\frac{\mathrm{\Lambda }^5}{\overline{Q}Q},$$ (14) and the scalar superpotential will be (note that the non-perturbative term (14) violates the global symmetry in (12); this can happen if the symmetry is anomalous): $$W_s=\lambda \frac{\overline{Q}Q}{M_P^2}ZX^2+\frac{\mathrm{\Lambda }^5}{\overline{Q}Q}.$$ (15) The potential built from the $`F`$ and $`D`$-terms has the form $$V_s=\mathrm{\Sigma }_a|F_a|^2+\frac{g_A^2}{8}D_A^2,$$ (16) where $$F_a=\frac{dW_s}{d\phi _a},D_A=\xi -|X|^2+2|Z|^2.$$ (17) It is easy to verify that there is no solution along which the $`F`$ and $`D`$-terms simultaneously vanish. Minimization of (16) gives the following solutions $$\langle X\rangle ^2=\frac{4}{3}\xi -\frac{16}{3}\frac{m^2}{g_A^2},\langle Z\rangle ^2=\frac{1}{6}\xi -\frac{2}{3}\frac{m^2}{g_A^2},$$ $$\langle \overline{Q}\rangle ^4=\langle Q\rangle ^4=\frac{9m^2M_P^2}{2\lambda }\left(1-\frac{9\sqrt{3}}{8}\frac{mM_P^2}{\xi \sqrt{\xi }}\right),$$ (18) where $$m^2=\frac{\sqrt{6}\mathrm{\Lambda }^5}{\xi \sqrt{\xi }}.$$ (19) From (9), (18) we find $$ϵ_Z\equiv \frac{\langle Z\rangle }{M_P}=\frac{1}{2\sqrt{2}}ϵ\simeq 0.08,\sqrt{\xi }=\frac{\sqrt{3}}{2}M_Pϵ.$$ (20) Substituting (18) in (17), we readily obtain the expression for $`D_A`$ given in (7), and expression (8) (for calculating soft masses) remains valid. 
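The minimization leading to (6) and (7) is elementary; the following sympy sketch (ours) verifies the reconstruction of those expressions from the potential (5):

```python
# Verify that minimizing V = m^2 |X|^2 + (g_A^2/8)(xi - |X|^2)^2 over |X|^2
# reproduces Eq. (6), <X>^2 = xi - 4 m^2/g_A^2, and the residual D-term of
# Eq. (7), D_A = xi - <X>^2 = 4 m^2/g_A^2.
import sympy as sp

x2, m, g, xi = sp.symbols('x2 m g xi', positive=True)   # x2 stands for |X|^2
V = m**2 * x2 + g**2 / 8 * (xi - x2)**2

x2_min = sp.solve(sp.diff(V, x2), x2)[0]
print(x2_min)                          # -> xi - 4*m**2/g**2       (Eq. 6)
print(sp.simplify(xi - x2_min))        # -> 4*m**2/g**2            (Eq. 7)
print(sp.expand(V.subs(x2, x2_min)))   # -> m**2*xi - 2*m**4/g**2  (vacuum energy)
```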
Assuming that $`\mathrm{\Lambda }\simeq 3.3\cdot 10^{12}`$ GeV, from (19) we obtain the desirable value for $`m`$ ($`\sim 10`$ TeV). In this example, among the non-zero $`F`$-terms, it is $`F_Z`$ which dominates and provides the dominant contribution to the gravitino mass ($`\sim 1`$ TeV) in (10). Turning now to the question of FCNC, we require that the ‘light’ quark-lepton superfields carry non-zero $`𝒰(1)`$ charges. This means that the soft masses of their scalar components are in the $`10`$ TeV range, which automatically suppresses flavor changing processes such as $`K^0-\overline{K}^0`$ mixing, $`\mu \to e\gamma `$ etc., thereby satisfying the present experimental bounds. To prevent upsetting the gauge hierarchy, the third generation up squarks must have masses no larger than a TeV or so (hence they have zero $`𝒰(1)`$ charge). The same applies to the sbottom and stau for large $`\mathrm{tan}\beta `$ since, for $`\lambda _b\simeq \lambda _\tau \sim 1`$, large masses ($`\gtrsim 10`$ TeV) of $`\stackrel{~}{b}`$ and $`\stackrel{~}{\tau }`$ would spoil the gauge hierarchy. Although the tree level mass of the stop can be arranged to be in the $`1`$ TeV range by the $`𝒰(1)`$ symmetry, the $`2`$-loop contributions from heavy sparticles of the first two generations can drive the stop $`\mathrm{mass}^2`$ negative. This is clearly unacceptable, and one proposal for avoiding it requires the existence of new states in the multi-TeV range. The dangerous contribution which comes from $`2`$-loop diagrams is proportional to $$\mathrm{\Sigma }_am_{\stackrel{~}{\phi }_a}^2T_a,$$ (21) where $`m_{\stackrel{~}{\phi }_a}^2`$ (see (8)) is the soft $`\mathrm{mass}^2`$ of $`\stackrel{~}{\phi }_a`$, and $`T_a`$ is the Dynkin index of the appropriate representation. The representations and $`𝒰(1)`$ charges of the new states should be chosen so that (21) vanishes, namely $$\mathrm{\Sigma }_aQ_aT_a=0.$$ (22) We will later see how this is implemented in the $`SU(5)`$ example. Let us now turn to some implications for proton decay. We assume that $`d=5`$ baryon number violating operators arise from the couplings $$qAqT+qBl\overline{T},$$ (23) after integrating out the color triplets $`T,\overline{T}`$ with mass $`M_T\simeq 2\cdot 10^{16}`$ GeV (we consider triplet couplings with left-handed matter, which provide the dominant contribution to nucleon decay). 
After wino dressing of the appropriate $`d=5`$ operators, the resulting $`d=6`$ operators causing the proton to decay into the neutrino and charged lepton channels have the respective forms: $$\frac{g_2^2}{M_T}\alpha (u_ad_b^i)(d_c^j\nu ^k)\epsilon ^{abc},$$ (24) $$\frac{g_2^2}{M_T}\alpha ^{\prime }(u_ad_b^i)(u_ce^j)\epsilon ^{abc},$$ (25) where $$\alpha =[(L_d^+\widehat{B}L_e)_{jk}(L_u^+\widehat{A}L_d^{\ast })_{mn}+(L_d^+\widehat{A}L_u^{\ast })_{jm}(L_d^+\widehat{B}L_e)_{nk}]V_{mi}(V^+)_{n1}I(\stackrel{~}{u}^m,\stackrel{~}{d}^n)+$$ $$[(L_u^+\widehat{A}L_d^{\ast })_{1i}(L_u^+\widehat{B}L_e)_{mk}-(L_d^+\widehat{A}L_u^{\ast })_{im}(L_u^+\widehat{B}L_e)_{ik}]V_{mj}I(\stackrel{~}{u}^m,\stackrel{~}{e}^k),$$ (26) $$\alpha ^{\prime }=[(L_u^+\widehat{A}L_d^{\ast })_{1i}(L_d^+\widehat{B}L_e)_{mj}+(L_u^+\widehat{A}L_d^{\ast })_{1m}(L_d^+\widehat{B}L_e)_{ij}](V^+)_{m1}I(\stackrel{~}{d}^m,\stackrel{~}{\nu }^j)+$$ $$[(L_u^+\widehat{B}L_e)_{1j}(L_u^+\widehat{A}L_d^{\ast })_{mn}+(L_u^+\widehat{A}L_d^{\ast })_{1m}(L_e^T\widehat{B}^TL_u^{\ast })_{jn}](V^+)_{m1}V_{ni}I(\stackrel{~}{d}^m,\stackrel{~}{u}^n).$$ (27) The $`L`$’s are unitary matrices which rotate the left-handed fermion states to diagonalize the mass matrices, and the $`I`$’s are functions obtained after loop integration which depend on the SUSY particle masses circulating inside the loop. For example, $$I(\stackrel{~}{u},\stackrel{~}{d})=\frac{1}{16\pi ^2}\frac{m_{\stackrel{~}{W}}}{m_{\stackrel{~}{u}}^2-m_{\stackrel{~}{d}}^2}\left(\frac{m_{\stackrel{~}{u}}^2}{m_{\stackrel{~}{u}}^2-m_{\stackrel{~}{W}}^2}\mathrm{ln}\frac{m_{\stackrel{~}{u}}^2}{m_{\stackrel{~}{W}}^2}-\frac{m_{\stackrel{~}{d}}^2}{m_{\stackrel{~}{d}}^2-m_{\stackrel{~}{W}}^2}\mathrm{ln}\frac{m_{\stackrel{~}{d}}^2}{m_{\stackrel{~}{W}}^2}\right),$$ (28) with similar expressions for $`I(\stackrel{~}{d},\stackrel{~}{\nu })`$ and $`I(\stackrel{~}{u},\stackrel{~}{e})`$. Consider those diagrams in which sparticles of the first two families participate. Since their masses are large ($`\gtrsim 10`$ TeV) compared to the case with minimal $`N=1`$ SUGRA, we expect considerable suppression of proton decay. For minimal $`N=1`$ SUGRA, $`m_{\stackrel{~}{u}}\sim m_{\stackrel{~}{d}}\sim m_{\stackrel{~}{W}}\sim m_{3/2}\sim 1`$ TeV, and (28) can be approximated by $$I_0\simeq \frac{1}{16\pi ^2}\frac{1}{m_{3/2}}.$$ (29) In the $`𝒰(1)`$ mediated SUSY breaking scenario, expression (28) takes the form $$I^{\prime }\simeq \frac{1}{16\pi ^2}\frac{m_{\stackrel{~}{W}}}{m_{\stackrel{~}{q}}^2}\equiv \eta I_0.$$ (30) The nucleon lifetime in this case will be enhanced by the factor $`\frac{1}{\eta ^2}\sim 10^4`$. Of course, there exist diagrams in which one sparticle from the third and one from the ‘light’ families participate. In this case, (28) takes the form $$I^{\prime \prime }\simeq \frac{1}{16\pi ^2}\frac{2m_{\stackrel{~}{W}}}{m_{\stackrel{~}{q}}^2}\mathrm{ln}\frac{m_{\stackrel{~}{q}}}{m_{\stackrel{~}{W}}}\equiv \eta ^{\prime }I_0,$$ (31) and the corresponding proton lifetime will be $`\frac{1}{\eta ^{\prime 2}}\sim 500`$ times larger. As pointed out earlier (within minimal $`N=1`$ SUGRA), the contribution from diagrams in which sparticles from the third generation participate could be comparable with those arising from light generation sparticle exchange. Since minimal SUSY $`SU(5)`$ gives unacceptably fast proton decay with $`\tau _0\sim 10^{29\pm 2}`$ yr, care must be exercised in realistic model building (the situation is exacerbated if $`\mathrm{tan}\beta `$ is large). This problem is easily avoided in the anomalous $`𝒰(1)`$ mediated SUSY breaking scenario. 
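To make the size of these suppression factors concrete, here is a small numerical sketch (ours) that evaluates the loop function (28) for the benchmark masses used in the text, $`m_{\stackrel{~}{W}}=m_{3/2}=1`$ TeV and $`m_{\stackrel{~}{q}}=10`$ TeV:

```python
# Evaluate the wino-dressing loop integral of Eq. (28) and the resulting
# suppression factors eta = I'/I_0 (both sparticles heavy, Eq. (30)) and
# eta' = I''/I_0 (one heavy, one light sparticle, Eq. (31)).
import math

def I_loop(mu, md, mw):
    """Eq. (28); masses in TeV, result in 1/TeV. Assumes mu != md != mw."""
    f = lambda m2: m2 / (m2 - mw**2) * math.log(m2 / mw**2)
    return mw / (16 * math.pi**2 * (mu**2 - md**2)) * (f(mu**2) - f(md**2))

I0 = 1 / (16 * math.pi**2 * 1.0)          # Eq. (29) with m_3/2 = 1 TeV
eta = I_loop(10.0, 10.001, 1.0) / I0       # nearly degenerate heavy squarks
etap = I_loop(1.001, 10.0, 1.0) / I0       # stop at ~1 TeV, heavy partner

print(f"eta  ~ {eta:.3f},  1/eta^2  ~ {1/eta**2:.0f}")    # ~0.01, ~10^4
print(f"eta' ~ {etap:.3f},  1/eta'^2 ~ {1/etap**2:.0f}")  # ~0.04, of order 500
```

The outputs reproduce the orders of magnitude quoted above (a lifetime enhancement of $`\sim 10^4`$ for light-family sparticle exchange, and of order a few hundred for the mixed case).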
Note that since the dominant contribution comes from the second term of (26), and since one slepton state necessarily runs in the internal lines of the appropriate nucleon decay diagram, even if the latter belongs to the third family it can have a mass in the $`10`$ TeV range if $`\mathrm{tan}\beta `$ is either of intermediate ($`10-15`$) or low value (this is required for preserving the desired gauge hierarchy). Thus, thanks to the anomalous $`𝒰(1)`$ symmetry, in addition to avoiding dangerous FCNC, one can also obtain adequate suppression of nucleon decay. Interestingly, this disfavors the large $`\mathrm{tan}\beta `$ regime, which could be a characteristic feature of this class of models! ## 3 An $`SU(5)`$ example Let us now consider in detail a SUSY $`SU(5)`$ GUT and show how things discussed in the previous section work out in practice. ### 3.1 Charged fermion masses and mixings We exploit the anomalous $`𝒰(1)`$ as a flavor symmetry to help provide a natural understanding of the hierarchies between the charged fermion masses and their mixings. In these considerations the parameter $`ϵ\simeq 0.22`$ (see (9)) plays an essential role. The three families of matter in $`(10+\overline{5})`$ representations, and the higgs superfields $`\overline{H}(\overline{5})+H(5)`$ (we assume the presence of a $`Z_2`$ matter parity which distinguishes the matter and higgs superfields and prevents rapid proton decay), have the following transformation properties under $`𝒰(1)`$: $$Q_{10_3}=0,Q_{10_2}=2,Q_{10_1}=3$$ $$Q_{\overline{5}_1}=2+n,Q_{\overline{5}_2}=Q_{\overline{5}_3}=n,Q_{\overline{H}}=Q_H=0.$$ (32) The couplings relevant for the generation of up type quark masses are given by $$\begin{array}{ccc}& \begin{array}{ccc}10_1& 10_2& 10_3\end{array}& \\ \begin{array}{c}10_1\\ 10_2\\ 10_3\end{array}& \left(\begin{array}{ccc}ϵ^6& ϵ^5& ϵ^3\\ ϵ^5& ϵ^4& ϵ^2\\ ϵ^3& ϵ^2& 1\end{array}\right)H,& \end{array}$$ (33) while those responsible for down quark and charged lepton masses are $$\begin{array}{ccc}& \begin{array}{ccc}\overline{5}_1& \overline{5}_2& \overline{5}_3\end{array}& \\ \begin{array}{c}10_1\\ 10_2\\ 10_3\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^3& ϵ^3\\ ϵ^4& ϵ^2& ϵ^2\\ ϵ^2& 1& 1\end{array}\right)ϵ^n\overline{H}.& \end{array}$$ (34) Upon diagonalization of (33), (34) we obtain $$\lambda _t\sim 1,\lambda _u:\lambda _c:\lambda _t\sim ϵ^6:ϵ^4:1.$$ (35) $$\lambda _b\sim ϵ^n,\lambda _d:\lambda _s:\lambda _b\sim ϵ^5:ϵ^2:1,$$ $$\lambda _\tau \sim ϵ^n,\lambda _e:\lambda _\mu :\lambda _\tau \sim ϵ^5:ϵ^2:1,$$ (36) where $`n=0,1,2`$ determines the value of $`\mathrm{tan}\beta `$, $$\mathrm{tan}\beta \sim ϵ^n\frac{m_t}{m_b}.$$ (37) From (33) and (34), we obtain $$V_{us}\sim ϵ,V_{cb}\sim ϵ^2,V_{ub}\sim ϵ^3.$$ (38) We see that the $`𝒰(1)`$ symmetry yields the desirable hierarchies (35), (36) of charged fermion Yukawa couplings as well as the magnitudes of the CKM matrix elements (38). The reader will note, however, that (34) implies the asymptotic mass relations $`\widehat{M}_d^0=\widehat{M}_e^0`$, which are unacceptable for the two light families. This is readily avoided through a previously suggested mechanism, by employing two pairs of $`(\overline{15}+15)_{1,2}`$ matter states. 
Namely, with $`𝒰(1)`$ charges $$Q_{15_1}=-Q_{\overline{15}_1}=3,Q_{15_2}=-Q_{\overline{15}_2}=2,$$ (39) consider the couplings $$\begin{array}{cc}& \begin{array}{ccc}10_1& 10_2& 10_3\end{array}\\ \begin{array}{c}\overline{15}_1\\ \overline{15}_2\end{array}& \left(\begin{array}{ccc}1& 0& 0\\ ϵ& 1& 0\end{array}\right)\mathrm{\Sigma },\end{array}\begin{array}{cc}& \begin{array}{cc}15_1& 15_2\end{array}\\ \begin{array}{c}\overline{15}_1\\ \overline{15}_2\end{array}& \left(\begin{array}{cc}1& 0\\ ϵ& 1\end{array}\right)M_{15},\end{array}$$ (40) where $`\mathrm{\Sigma }`$ is the scalar $`24`$-plet whose VEV breaks $`SU(5)`$ down to $`SU(3)_c\times SU(2)_L\times U(1)_Y`$. For $`M_{15}\sim \langle \mathrm{\Sigma }\rangle `$, we see that the ‘light’ $`q_{1,2}`$ states reside both in the $`10_{1,2}`$ and the $`15_{1,2}`$ states with similar ‘weights’. At the same time, the other light states from the $`10`$-plets ($`u^c`$ and $`e^c`$) will not be affected because the $`15`$-plets do not contain fragments with the relevant quantum numbers. Thus, the relations $`m_s^0=m_\mu ^0`$ and $`m_d^0=m_e^0`$ are avoided, while $`m_b^0=m_\tau ^0`$ still holds since the terms in (40) do not affect $`10_3`$. As far as the sparticle spectrum is concerned, since the superfields $`10_3,\overline{H},H`$ have zero $`𝒰(1)`$ charges, the soft masses of their scalar components will be in the $`1`$ TeV range, $$m_{\stackrel{~}{10}_3}\sim m_{\overline{H}}\sim m_H\sim m_{3/2}=1\mathrm{TeV},$$ (41) while for $`10_{1,2}`$ and $`\overline{5}_1`$ we have $$m_{\stackrel{~}{10}_1}\sim m_{\stackrel{~}{10}_2}\sim m_{\stackrel{~}{\overline{5}}_1}\sim m\simeq 10\mathrm{TeV}.$$ (42) The soft masses of the scalar fragments from $`\overline{5}_{2,3}`$ depend on the value of $`n`$, and for $`n\ne 0`$ they will also be in the $`10`$ TeV range, which is preferred for proton stability. In order to satisfy condition (22) and avoid color instability in our model, we will introduce one pair of $`\overline{F}(\overline{5})+F(5)`$ supermultiplets with $`𝒰(1)`$ charges $$Q_{\overline{F}}=Q_F=-\frac{1}{2}(17+3n),$$ (43) and with the following transformation properties under the symmetry in (12), $$\overline{F}F\to e^{-(10+n)\mathrm{i}\alpha }\overline{F}F.$$ (44) The superpotential coupling which generates a mass term for these states is given by $$W_F=M_P\left(\frac{Z}{M_P}\right)^{10+n}\left(\frac{X}{M_P}\right)^{3-n}\overline{F}F,$$ (45) from which, after substituting the VEVs (9) and (20), we obtain $$M_F=M_Pϵ_Z^{10+n}ϵ^{3-n}=\{\begin{array}{ccc}200\mathrm{TeV}\hfill & \text{if }n=0\hfill & \\ 100\mathrm{TeV}\hfill & \text{if }n=1\hfill & \\ 40\mathrm{TeV}\hfill & \text{if }n=2\hfill & \end{array}$$ (46) Therefore, the masses of these additional states lie considerably above the TeV range. It is easy to verify that $`M_F^2`$ dominates over the negative soft $`\mathrm{mass}^2`$ ($`=-\frac{17+3n}{2}m^2\simeq -(30\mathrm{TeV})^2`$) for all possible values of $`n`$ ($`=0,1,2`$), so that color will be unbroken. Furthermore, taking into account (32) and (43), it is easily checked that the condition (22) which prevents the stop quark $`\mathrm{mass}^2`$ from becoming negative is automatically satisfied. 
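Equation (46) is easy to check numerically; the following sketch (ours) evaluates $`M_F`$ with $`ϵ=0.22`$, $`ϵ_Z=ϵ/(2\sqrt{2})`$ and $`M_P=2.4\cdot 10^{18}`$ GeV, landing at the quoted order of magnitude for each $`n`$:

```python
# Order-of-magnitude evaluation of the vector-like masses of Eq. (46):
# M_F = M_P * eps_Z**(10+n) * eps**(3-n).
M_P = 2.4e18                     # GeV
eps = 0.22
eps_Z = eps / (2 * 2**0.5)       # ~0.08, from Eq. (20)

for n in (0, 1, 2):
    M_F = M_P * eps_Z**(10 + n) * eps**(3 - n)
    print(f"n = {n}:  M_F ~ {M_F/1e3:6.0f} TeV")
```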
Finally, since the $`𝒰(1)`$ charges of the $`15_{1,2}`$ states are the same as those of the $`10_{1,2}`$’s, the soft $`\mathrm{mass}^2`$ terms for the light $`\stackrel{~}{q}_{1,2}`$ fragments are unchanged, so that (22), with the choice of charges in (43), still holds. ### 3.2 Neutrino oscillations We next demonstrate how the solar and atmospheric neutrino data can be accommodated within the $`SU(5)`$ scheme. We stress the bi-maximal vacuum oscillation scenario, but also point out how the small (or large) mixing angle MSW oscillations can be realized. Indeed, the picture is similar to our previously considered $`SU(5)`$ and $`SO(10)`$ scenarios (other scenarios of neutrino oscillation with a $`𝒰(1)`$ flavor symmetry within the MSSM and various GUTs have also been studied). Since the states $`\overline{5}_2`$ and $`\overline{5}_3`$ have the same $`𝒰(1)`$ charge (see (32)), we can expect naturally large $`\nu _\mu -\nu _\tau `$ mixing. This can also be seen from the texture in (34). Introducing an $`SU(5)`$ singlet right handed neutrino $`𝒩_3`$ with suitable mass, the state ‘$`\nu _3`$’ can acquire the mass relevant for the atmospheric neutrino puzzle. At this stage the other two neutrino states are massless. Large $`\nu _e-\nu _{\mu ,\tau }`$ mixing can be obtained by invoking a previously suggested mechanism which naturally yields ‘maximal’ mixings between neutrino flavors. For this we need two additional $`SU(5)`$ singlet states $`𝒩_1`$, $`𝒩_2`$. Under $`𝒰(1)`$, the $`𝒩_i`$ states carry charges: $$Q_{𝒩_1}=-Q_{𝒩_2}=n+2,Q_{𝒩_3}=0.$$ (47) The relevant couplings are $$W_{𝒩_3}=M_{𝒩_3}𝒩_3^2+ϵ^n(aϵ^2\overline{5}_1+b\overline{5}_2+c\overline{5}_3)H𝒩_3,$$ (48) $$\begin{array}{cc}& \begin{array}{cc}𝒩_1& 𝒩_2\end{array}\\ \begin{array}{c}\overline{5}_1\\ \overline{5}_2\\ \overline{5}_3\end{array}& \left(\begin{array}{cc}ϵ^{2n+4}& 1\\ ϵ^{2n+2}& 0\\ ϵ^{2n+2}& 0\end{array}\right)H,\end{array}\begin{array}{cc}& \begin{array}{cc}𝒩_1& 𝒩_2\end{array}\\ \begin{array}{c}𝒩_1\\ 𝒩_2\end{array}& \left(\begin{array}{cc}ϵ^{2n+4}& 1\\ 1& 0\end{array}\right)M_𝒩,\end{array}$$ (49) where $`a,b,c`$ are dimensionless coefficients. Note that there also exists the coupling $`M^{\prime }ϵ^{2+n}𝒩_1𝒩_3`$ which, if properly suppressed (see below), will not be relevant. Let us choose the basis in which the charged lepton matrix (34) is diagonal. This choice is convenient because the matrix which diagonalizes the neutrino mass matrix will then coincide with the lepton mixing matrix. 
The hierarchical structure of the couplings in (48) will not be altered, while the ‘Dirac’ and ‘Majorana’ masses from (49) will respectively have the forms $$\begin{array}{cc}m_D=& \left(\begin{array}{cc}ϵ^{2n+4}& 1\\ ϵ^{2n+2}& ϵ^2\\ ϵ^{2n+2}& ϵ^2\end{array}\right)h_u,\end{array}\begin{array}{cc}M_R=& \left(\begin{array}{cc}ϵ^4& 1\\ 1& 0\end{array}\right)M_𝒩.\end{array}$$ (50) Taking $$M^{\prime }\lesssim M_{𝒩_3}/ϵ^{2n},M_𝒩\gtrsim \frac{M^{\prime 2}ϵ^{2n}}{M_{𝒩_3}},$$ (51) and the other coefficients of order unity, integrating out the $`𝒩`$ states leads to the following ‘light’ neutrino mass matrix: $$\widehat{m}_\nu =\widehat{A}m+\widehat{B}m^{\prime },$$ (52) where $$m\simeq \frac{ϵ^{2n}h_u^2}{M_{𝒩_3}},m^{\prime }\simeq \frac{ϵ^{2n+2}h_u^2}{M_𝒩},$$ (53) $$\widehat{A}=\left(\begin{array}{ccc}a^2ϵ^4& abϵ^2& acϵ^2\\ abϵ^2& b^2& bc\\ acϵ^2& bc& c^2\end{array}\right),\widehat{B}=\left(\begin{array}{ccc}ϵ^2& 1& 1\\ 1& ϵ^2& ϵ^2\\ 1& ϵ^2& ϵ^2\end{array}\right).$$ (54) For $$M_{𝒩_3}\simeq ϵ^{2n}10^{15}\mathrm{GeV},M_𝒩\simeq ϵ^{2n+2}10^{18}\mathrm{GeV},$$ (55) the ‘light’ eigenvalues are $$m_{\nu _3}\simeq m(b^2+c^2+a^2ϵ^4)\simeq 3\cdot 10^{-2}\mathrm{eV},$$ $$m_{\nu _1}\simeq m_{\nu _2}\simeq m^{\prime }\simeq 3\cdot 10^{-5}\mathrm{eV}.$$ (56) Ignoring CP violation, the neutrino mass matrix (52) can be diagonalized by the orthogonal transformation $`\nu _\alpha =U_\nu ^{\alpha i}\nu _i`$, where $`\alpha =e,\mu ,\tau `$ denotes flavor indices, $`i=1,2,3`$ the mass eigenstates, and $`U_\nu `$ takes the form $$U_\nu =\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& s_1\\ -\frac{1}{\sqrt{2}}c_\theta & \frac{1}{\sqrt{2}}c_\theta & s_\theta \\ \frac{1}{\sqrt{2}}s_\theta & -\frac{1}{\sqrt{2}}s_\theta & c_\theta \end{array}\right),$$ (57) with $$\mathrm{tan}\theta =\frac{b}{c},s_1=\frac{aϵ^2}{\sqrt{b^2+c^2}},$$ (58) and $`s_\theta \equiv \mathrm{sin}\theta `$, $`c_\theta \equiv \mathrm{cos}\theta `$. From (52)-(58) the solar and atmospheric neutrino oscillation parameters are $$\mathrm{\Delta }m_{21}^2\simeq 2m^{\prime 2}ϵ^2\sim 10^{-10}\mathrm{eV}^2,$$ $$𝒜(\nu _e\to \nu _{\mu ,\tau })=1-𝒪(ϵ^4),$$ (59) $$\mathrm{\Delta }m_{32}^2\simeq m_{\nu _3}^2\sim 10^{-3}\mathrm{eV}^2,$$ $$𝒜(\nu _\mu \to \nu _\tau )=\frac{4b^2c^2}{(b^2+c^2)^2}-𝒪(ϵ^4),$$ (60) where the oscillation amplitudes are defined as $$𝒜(\nu _\alpha \to \nu _\beta )=-4\mathrm{\Sigma }_{j<k}U_\nu ^{\alpha j}U_\nu ^{\alpha k}U_\nu ^{\beta j}U_\nu ^{\beta k}.$$ (61) We see that the solar neutrino puzzle is explained by maximal vacuum oscillations of $`\nu _e`$ into $`\nu _{\mu ,\tau }`$. For $`b\sim c`$ the $`\nu _\mu -\nu _\tau `$ mixing is naturally large, as suggested by the atmospheric anomaly. For $`b=c`$ the $`\nu _\mu -\nu _\tau `$ mixing will even be maximal, and $`\nu _e`$ oscillations will be $`50\%`$ into $`\nu _\mu `$ and $`50\%`$ into $`\nu _\tau `$. As far as the small angle MSW solution for the solar neutrino puzzle is concerned, from (34) we see that the expected mixing between the $`\nu _e`$ and $`\nu _{\mu ,\tau }`$ states is $`\sim ϵ^2`$, which provides the desirable value $`\mathrm{sin}^22\theta \simeq 4ϵ^4\sim 10^{-2}`$. 
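As a quick consistency check of the bi-maximal structure in (52)-(58), the following numerical sketch (ours; the order-one values of $`a,b,c`$ are our own illustrative choice) diagonalizes $`\widehat{m}_\nu `$ and displays the expected spectrum and maximal $`\nu _e`$ mixing:

```python
# Diagonalize m_nu = A*m + B*m' (Eqs. (52)-(54)) for illustrative order-one
# coefficients and the mass scales of Eq. (56); expect one eigenvalue near
# 3e-2 eV, a pseudo-Dirac pair near 3e-5 eV, and |U_e1| ~ |U_e2| ~ 1/sqrt(2).
import numpy as np

eps = 0.22
a, b, c = 1.0, 1.0, 0.8        # our illustrative choice of coefficients
m, mp = 3e-2, 3e-5             # eV

v = np.array([a * eps**2, b, c])
A = np.outer(v, v)             # rank-one matrix A-hat of Eq. (54)
B = np.array([[eps**2, 1, 1],
              [1, eps**2, eps**2],
              [1, eps**2, eps**2]], dtype=float)

masses, U = np.linalg.eigh(A * m + B * mp)
print("masses [eV]:", np.round(np.abs(masses), 6))
print("|U_e i|    :", np.round(np.abs(U[0]), 3))   # ~ (0.71, 0.71, small)
```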
To obtain $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations, we can introduce an $`SU(5)`$ singlet state $`N`$ (instead of the $`𝒩_{1,2}`$ states), which will provide a mass in the $`10^{-3}`$ eV range to the ‘$`\nu _2`$’ state, so that the small angle MSW oscillation for explaining the solar neutrino deficit is realized. The large mixing angle MSW solution is obtained by keeping the $`𝒩_{1,2}`$ states with the transformation properties in (47). Maximal $`\nu _e-\nu _{\mu ,\tau }`$ oscillations will still hold, and the desired scale ($`\sim 10^{-6}\mathrm{eV}^2`$) can be generated by taking $`M_𝒩\sim ϵ^{2n+2}10^{16}`$ GeV in (53). The oscillation picture (60) for the atmospheric neutrinos will be unchanged. ### 3.3 Nucleon decay in $`SU(5)`$ Turning to the issue of nucleon decay in $`SU(5)`$, we will take $`n\ne 0`$ in (32), which provides soft masses for the $`\overline{5}_{2,3}`$ states in the $`10`$ TeV range. As pointed out in section 2, this will enhance proton stability. For decays with neutrino emission, in the relevant diagrams there circulate $`\stackrel{~}{t}`$ and $`\stackrel{~}{\mu }(\stackrel{~}{\tau })`$. Using the forms of (33), (34), and taking into account (26), (30), one estimates from (24), $$\tau (p\to K\nu _{\mu ,\tau })\simeq \frac{1}{\eta ^2}\left(\frac{\mathrm{sin}^2\theta _c}{V_{ts}}\right)^2\tau _0\simeq 2\cdot 10^3\tau _0,$$ (62) where $`\theta _c`$ is the Cabibbo angle and $`\tau _0`$ is the proton lifetime in the minimal $`SU(5)`$ model combined with SUGRA \[in obtaining (62), we took into account that $`\tau _0\propto (\lambda _s\lambda _c\mathrm{sin}^2\theta _c)^{-2}`$\]. As far as decays with emission of charged leptons are concerned, there are diagrams inside which circulate $`\stackrel{~}{t}`$, $`\stackrel{~}{b}`$ states, which are in the $`1`$ TeV range. It turns out that these diagrams provide the dominant contribution to proton decay. However, considerable suppression relative to minimal $`SU(5)`$ still occurs due to the small mixings between the third and light generations, and also because the baryon-meson-charged lepton matrix element is relatively suppressed. From all this, taking into account (25), (27), (33), (34), for the dominant decay we find $$\tau (p\to K\mu )\simeq 10\left(\frac{\mathrm{sin}^2\theta _c}{V_{ub}}\right)^2\tau _0\sim 10^3\tau _0.$$ (63) In summary, the color triplet mediated proton decay modes are adequately suppressed and, interestingly, the decays into the charged lepton and neutrino channels are comparable. Before concluding, let us note that the Planck scale suppressed baryon number violating $`d=5`$ operator $`\frac{1}{M_P}q_1q_1q_2l_{2,3}`$, which could cause unacceptably fast proton decay, is also suppressed, since it emerges from the coupling $$\frac{1}{M_P}\left(\frac{X}{M_P}\right)^{8+n}10_110_110_2\overline{5}_{2,3},$$ (64) with the suppression guaranteed by the $`𝒰(1)`$ symmetry. ## 4 Conclusions In this paper we have discussed SUSY models which are accompanied by an anomalous $`𝒰(1)`$ symmetry. If the latter mediates SUSY breaking, crucial suppression of FCNC as well as of dimension five proton decay can be achieved. If the same $`𝒰(1)`$ also acts as a flavor symmetry, one can provide a natural qualitative explanation of the hierarchies between the charged fermion masses and the values of the CKM matrix elements. An example based on $`SU(5)`$ is worked out in detail, with neutrino oscillations also taken into account. The $`𝒰(1)`$ flavor symmetry also adequately suppresses the Planck scale induced baryon number violating $`d=5`$ operators. 
The mechanisms discussed in this paper can be extended to a variety of GUTs such as $`SO(10)`$ and $`SU(5+N)`$. This work was supported in part by DOE under Grant No. DE-FG02-91ER40626 and by NATO, contract number CRG-970149.
no-problem/9911/hep-ex9911001.html
ar5iv
text
# Digital Pulseshape Analysis by Neural Networks for the Heidelberg-Moscow-Double-Beta-Decay-Experiment ## 1 Introduction The question of a nonvanishing neutrino mass is still one of the most outstanding open problems in modern physics. Especially after the latest striking hints for neutrino oscillations from the Super-Kamiokande experiment it has become very important to verify these results independently. Neutrinoless double-beta (0$`\nu \beta \beta `$) decay, which violates lepton number and B-L conservation by two units, is one of the most promising tools for the search for a finite neutrino mass and some other physics beyond the standard model. Furthermore it seems to be the only possibility to distinguish between the Majorana and the Dirac nature of the neutrino. If 0$`\nu \beta \beta `$-decay is observed, neutrinos have to be of Majorana type and have a finite mass. The atmospheric neutrino problem confirmed by the Super-Kamiokande collaboration has brought degenerate neutrino models back to attention again, where all neutrinos have a mass of $`𝒪`$(eV). The newest generation of 0$`\nu \beta \beta `$-decay experiments, especially the Heidelberg-Moscow-Experiment, has already started to test this mass range. ## 2 The Heidelberg-Moscow-Experiment The Heidelberg-Moscow-Experiment is presently the most sensitive experiment looking for 0$`\nu \beta \beta `$-decay. Out of 19.2 kg of enriched ⁷⁶Ge, five p-type High-Purity Germanium (HPGe) crystals were grown, which are now operated as p-type detectors in the Gran Sasso Underground Laboratory with an active mass of 10.96 kg in extremely radiopure surroundings. The experiment has a background rate of 0.2 counts/(kg keV y) in the energy region between 2000 keV and 2080 keV, where the expected signal, a sharp peak at 2038.56 keV (the Q-value of $`\beta \beta `$-decay), lies. Since 1995 an additional background reduction has been achieved through the use of Digital Pulseshape Analysis (PSA). Since the shape of the detected pulse depends on the type of interaction, a distinction between multiple scattered Compton events and single interaction events (such as 0$`\nu \beta \beta `$-decay) is possible. A 0$`\nu \beta \beta `$-decay event would appear as a Single Site Event (SSE), since the mean free path of the two electrons emitted in the decay is smaller than the time resolution of the detector allows to distinguish, due to the low drift velocities of the electron-hole pairs. This means that Multiple Site Events (MSE) in the energy region of 0$`\nu \beta \beta `$-decay can be regarded as background. To distinguish between the two interaction types a one-parameter method was developed at that time, based on the fact that the time structure of the pulse shapes in Germanium detectors depends mainly on the locations of the various interactions of a count within the HPGe-crystal. For MSE’s one therefore expects a pulse that is broader in time than for SSE’s, since the initial locations of the electron-hole pairs are distributed over a larger region of the crystal and the overall detection time therefore increases. With this method a reduction of the background by a factor of three in the region of the expected signal could be reached. Nevertheless a large amount of information is neglected with this method since only one parameter serves as the distinguishing criterion. 
Furthermore the method relies on a statistical correction of the measured SSE pulses since the efficiency of the method is substantially smaller than 100%, resulting in a loss of information about the single events. For this reason we developed a new method based on neural networks to use as much information as possible from the recorded pulse shapes and to avoid a statistical treatment of the obtained data. ## 3 Neural Networks Neural networks are nowadays used in a wide variety of applications like pattern-, image- and video-image-recognition. Since in the case of PSA the discrimination technique relies on a sort of pattern recognition, it seemed natural to base a new PSA-technique on this method. In contrast to the old method, where only one parameter was used as the distinguishing criterion, all the information obtained by the measurement about the time structure of the pulse is fed to the neural networks in order to distinguish between SSE’s and MSE’s. Typically a network is divided into processing units, which are further divided into single neurons. Each unit receives signals from the previous level (i.e. from the neurons in the unit) and computes an output, which is then passed further to the next unit (i.e. to the neurons of the unit). The schematic action of such a feed forward network is depicted in Fig. 1. A typical neural network consists of three layers: the input layer, the hidden layer and the output layer. It has been shown that such a network suffices to approximate any function with a finite number of discontinuities to arbitrary precision if the activation functions of the hidden unit neurons are nonlinear. If one has digitized information in an array (in our case it is the time evolution of the measured current behind the preamplifier, i.e. it is one-dimensional), the entries $`x_j`$ can be passed to the input layer to ‘activate’ the neurons through the activation function, typically of the sigmoid form $$ℱ(x_j)=y_j(x_j)=\frac{1}{1+e^{-x_j}}.$$ (1) Each neuron then passes its activation value $`y_i`$ to all the neurons in the hidden layer after multiplying it with a weight factor, so that the input to the neurons in the next layer is given by: $$x_j^h=\underset{i}{\sum }w_{ij}y_i+\theta _j,$$ (2) where $`\theta _j`$ is a threshold specific to the layer and $`w_{ij}`$ is the corresponding weight between the $`i`$-th neuron in the input layer and the $`j`$-th neuron in the hidden layer. Again the output of the neuron is calculated through the activation function given in (1) and passed to the neurons of the next layer. Finally one obtains the output signals from the activation of the output neurons. Often (like in this analysis) the output layer consists of only one neuron returning a value between 0 and 1, thus deciding whether the data passed to the input layer belongs to a signal of type A) or B). However, the network has to be configured in order to be able to distinguish reasonably between the two types of input patterns. This is mostly done by a sort of training process. If one has a library of input patterns, these can be passed to the network. 
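For concreteness, here is a minimal, self-contained sketch (ours, not the collaboration's code) of such a three-layer network with the layer sizes used later in this analysis (180 input, 90 hidden, 1 output neuron), together with one training step according to the generalized delta rule with momentum spelled out below in Eqs. (3)-(6); the thresholds $`\theta _j`$ are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 180, 90                       # layer sizes from Section 6
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))    # weights w_ij, input -> hidden
W2 = rng.normal(0.0, 0.1, (n_hid, 1))       # weights, hidden -> output
dW1_prev = np.zeros_like(W1)                # momentum memory for Eq. (6)
dW2_prev = np.zeros_like(W2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))         # activation function, Eq. (1)

def forward(pulse):
    y_h = sigmoid(pulse @ W1)               # hidden activations, Eq. (2)
    y_o = sigmoid(y_h @ W2)                 # single output neuron in (0, 1)
    return y_h, y_o

def train_step(pulse, d_o, gamma=0.1, alpha=0.5):
    """One presentation; d_o = 0 for an SSE pattern, 1 for an MSE pattern."""
    global W1, W2, dW1_prev, dW2_prev
    y_h, y_o = forward(pulse)
    delta_o = (d_o - y_o) * y_o * (1.0 - y_o)      # error signal, Eq. (4)
    delta_h = y_h * (1.0 - y_h) * (W2 @ delta_o)   # error signal, Eq. (5)
    dW2 = gamma * np.outer(y_h, delta_o) + alpha * dW2_prev
    dW1 = gamma * np.outer(pulse, delta_h) + alpha * dW1_prev
    W2 = W2 + dW2                                  # weight update, Eqs. (3), (6)
    W1 = W1 + dW1
    dW1_prev, dW2_prev = dW1, dW2

train_step(rng.random(n_in), d_o=0.0)       # present one toy 'SSE' pattern
print(f"output: {forward(rng.random(n_in))[1][0]:.3f}")
```

In the real analysis the input would be the 180 digitized current samples of a measured pulse; the random vector here merely demonstrates the mechanics.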
After the input pattern has been applied and the output has been calculated, the connections between the neurons are adjusted according to the generalized delta rule: $$\mathrm{\Delta }w_{ij}=\gamma \delta _jy_i,$$ (3) where $`\gamma `$ is the learning rate, $`y_i`$ is the activation of the neuron due to the given input pattern and $`\delta _j`$ is an error signal which in our case (sigmoid activation function) is given for the output layer by $$\delta _o=(d_o-y_o)ℱ^{\prime }(x_o)=(d_o-y_o)y_o(1-y_o)$$ (4) and by $$\delta _h=ℱ^{\prime }(x_h)\underset{o=1}{\overset{N_0}{\sum }}\delta _ow_{ho}=y_h(1-y_h)\underset{o=1}{\overset{N_0}{\sum }}\delta _ow_{ho}$$ (5) for the hidden layer. Here $`ℱ^{\prime }`$ corresponds to the first derivative of the activation function, $`d_o`$ is the expected result of the output neurons and $`N_0`$ is the number of output neurons. Often, like in this analysis, a momentum term is used in the learning process to avoid oscillations in the training procedure: $$\mathrm{\Delta }w_{jk}(t+1)=\gamma \delta _ky_j+\alpha \mathrm{\Delta }w_{jk}(t),$$ (6) where $`t`$ is the presentation number and $`\alpha `$ is a constant representing the effect of the momentum term. After a certain number of these training procedures the network ‘learns’ the patterns of the types of input information, and the output of the network results in a value close to zero for a pattern of type A) and in a value close to one for a pattern of type B). For a general introduction to neural networks see for example the standard literature. ## 4 Digital Pulseshape Analysis with Neural Networks In order to perform PSA a sufficiently large library of known reference pulses has to be collected for the training process. A reliable source of the two different kinds of pulses is needed for this reason. It is well known that high energetic (E $`>`$ 500 keV) total absorption peaks consist mainly of Compton scattered events. The amount of MSE’s in these peaks is in general not less than 80%. In contrast to this, Double-Escape peaks (pair production followed by the annihilation of the e⁺ and the escape of both 511 keV $`\gamma `$’s) consist of SSE’s only, since the detected particle in this case is a single electron with energy $$E_{DE}=E_0-2\times 511\mathrm{keV},$$ (7) whose dissipation length again is smaller than the time resolution of the detector allows to resolve. Only the Compton background from higher energetic peaks in this region contributes to a contamination of MSE’s in the peak region of the Double-Escape line. Using a ²²⁸Th-calibration source, the Double-Escape line of the total absorption peak at 2614.53 keV, with an energy of $`E_{DE}=1592.5`$ keV, can be used for the SSE sample. To avoid systematic effects in the training process, a total absorption peak with a similar energy should be used for the MSE sample. The peak at 1621 keV from the ²²⁸Th-daughter nuclide ²¹²Bi seemed appropriate for this purpose. ## 5 Simulation of PSA To test the efficiency and the reliability of the new method we performed simulations of calibration measurements of the Heidelberg-Moscow-Experiment. It is especially important to check for a possible energy dependence of the method since the energy of the training pulses ($`\sim `$ 1.6 MeV) does not coincide with the energy region of the expected 0$`\nu \beta \beta `$-decay signature (2038.5 keV). For this purpose we used the GEANT3.21 Monte-Carlo code extended for low energetic decays. 
The geometry of the experiment and the library of low energetic decays were programmed and successfully tested in earlier works. The code was further extended to distinguish between multiple and single interaction events. In Fig. 2 the simulated spectra of a calibration measurement with a ²²⁸Th source with the whole setup of the Heidelberg-Moscow-Experiment and the resulting expected ratio of SSE’s in the spectrum are shown. In the energy region of the 0$`\nu \beta \beta `$-decay between 2000 keV and 2080 keV a reduction factor of $`\sim `$2.8 is expected through the use of PSA. ## 6 Network results We recorded $`\sim `$ 20.000 events of each kind (1592 keV Double-Escape line and 1621 keV total absorption peak) with every detector of the Heidelberg-Moscow-Experiment in the Gran Sasso Underground Laboratory. After arranging these pulses in a library, they were used to train the networks. Since the pulse shapes depend on detector parameters like size and form of the crystal, a separate network had to be configured for every detector used in the PSA. The neural networks used in this analysis consisted of three layers: an input layer of 180 neurons, a hidden layer with 90 neurons and an output layer with a single neuron. To check the network also during the training process, we monitored the evolution of the network with time, i.e. with the number of presented pulses. This is shown in Fig. 3 for one of the enriched detectors: in the upper panel the output of the network for the training peaks of the MSE and SSE lines is depicted separately as a function of time. It is evident that the network stabilizes after $`\sim `$ 400.000 presented pulses (i.e. each pulse has been passed to the network $`\sim `$ 10 times) and is able to distinguish between the two types of events. The fact that the network is able to identify the contamination of wrong pulses (from the admixture of SSE’s to the MSE sample and of MSE’s to the SSE sample) in the separate libraries gives us further confidence in the power of the method. This is shown in the lower panel. Here the fraction of identified SSE pulses within the two samples of the training pulses is drawn as a function of presented pulses. The shaded areas give the one sigma region for the expectation obtained from simulations (see section 5). Note that the fraction of ‘wrong pulses’ in the libraries is not an input parameter to the training process. This behaviour is obtained solely by the presentation of the reference pulses, i.e. the neural network itself recognizes the contamination without previous knowledge. From Fig. 3 it is obvious that further training of the network is meaningless after a certain limit; the obtained separation into MSE and SSE does not stabilize further after $`\sim `$ 400.000 presented pulses. The separation fluctuates around its mean value from here on. This is probably due to the fact that a non-negligible amount of wrong pulses is contained in the training samples. In principle it would be possible to remove a large amount of this contamination since the network itself identifies wrong pulses within the samples. However, it seemed too dangerous to use this method for further training since this could give rise to systematic effects. Once the training process is finished, it is important to check the obtained results with independent data not used in the training process. We saved one thousand events of each kind for every detector to do this test. The result is seen in Fig. 5. 
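The quantitative evaluation of such tests rests on the cut value $`\zeta `$ and the recognition efficiencies introduced in the next two sections (Eqs. (8)-(10) below); the following sketch (ours, with purely illustrative input numbers) shows that arithmetic in compact form:

```python
# Efficiencies from the two calibration peaks (Eqs. (8)-(9) below) and the
# optional correction of the identified SSE count (Eq. (10) below).
# gamma_s, gamma_m: true-to-identified SSE ratios in the Double-Escape and
# 1621 keV peaks; as_sse, as_mse: simulated total-to-SSE event ratios there.
def efficiencies(gamma_s, gamma_m, as_sse, as_mse):
    slope = (1.0 / gamma_s - 1.0 / gamma_m) / (as_sse - as_mse)
    e_m = 1.0 - slope                             # Eq. (9)
    e_s = 1.0 / gamma_s + (1.0 - as_sse) * slope  # Eq. (8)
    return e_s, e_m

def corrected_sse(S, A, e_s, e_m):
    """True SSE count s from identified count S and total count A, Eq. (10)."""
    return (S - (1.0 - e_m) * A) / (e_s + e_m - 1.0)

e_s, e_m = efficiencies(gamma_s=1.10, gamma_m=0.45, as_sse=1.2, as_mse=8.0)
print(f"e_s = {e_s:.2f}, e_m = {e_m:.2f}, e_tot = {(e_s * e_m) ** 0.5:.2f}")
```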
As is evident, the separation works very well also for the independent data (for a quantitative analysis see section 7). In Fig. 5 it is visible that for a non-negligible fraction of the pulses an output $`y_o`$ between 0.1 and 0.9 is returned from the network, i.e. the pulses are not properly attributed to a definite type. The fraction of these pulses is $`\sim `$ 20% for all the detectors. This quantity can be identified as the efficiency of the separation. However, since we have further information from the simulation, we use this fact first to adjust the outcome of the networks to the expectations from the simulation. We define a cut value $`\zeta `$ so that all pulses with $`y_o<\zeta `$ are identified as SSE. To adjust the network to the expected result we vary $`\zeta `$ and perform a least squares fit of the simulated and measured SSE ratios over the whole energy range above 500 keV. Having found the best fit, this cut is applied to the network result and thus the SSE-spectra are obtained. Note that the separation between the two types of events shown in Fig. 3 has been obtained this way. The result with this cut value $`\zeta `$ is then used to calculate the efficiencies $`e_s`$ and $`e_m`$ of the correct identification of SSE pulses and MSE pulses (see section 7). In the lower panel of Fig. 4 the obtained result for the energy region around the reference pulses is shown. Most events in the Double-Escape line are recognized as SSE’s. Only a small fraction from the background contributes to a contamination of MSE’s. Also the 1621 keV peak is correctly recognized to consist mainly of MSE’s. ## 7 Comparison with the simulation and the old parameter cut To compare the results of simulation and measurement directly, the measured and expected ratios of SSE’s in the spectrum as a function of energy are shown in Fig. 4, together with the result from the old one-parameter method. It is evident that the result from the neural network is satisfactory over the whole energy range above $`\sim `$ 500 keV. Only below $`\sim `$ 1000 keV is there a noticeable difference between the neural network method and the old method. Here the old cut yields too many SSE pulses. Note that especially in the energy region interesting for 0$`\nu \beta \beta `$-decay (2000 keV-2080 keV) the agreement of the two techniques is very good. In Tab. 1 the fraction of identified SSE’s in the Double-Escape peak is listed for the four detectors together with the expected results from the simulation. As evident, the measured results are in good agreement with the expectation for the Double-Escape peak. The situation for the measured SSE fraction in the 1621 keV peak is slightly different. Since the efficiency $`e_m`$ for correct identification of MSE’s is not 100%, the actual measured SSE fraction within this peak is somewhat higher than the expected fraction from the simulation. Once the real fraction of SSE’s in a certain energy region is known through e.g. 
a simulation, it is easy to calculate the efficiencies of the recognition: $$e_s=\frac{1}{\gamma _s}+\left(1-\left(\frac{a}{s}\right)_{SSE}\right)\frac{\frac{1}{\gamma _s}-\frac{1}{\gamma _m}}{\left(\frac{a}{s}\right)_{SSE}-\left(\frac{a}{s}\right)_{MSE}}$$ (8) and $$e_m=1-\frac{\frac{1}{\gamma _s}-\frac{1}{\gamma _m}}{\left(\frac{a}{s}\right)_{SSE}-\left(\frac{a}{s}\right)_{MSE}},$$ (9) where $`\gamma _m=s_{MSE}/S_{MSE}`$ is the ratio of real SSE events in the 1621 keV peak to events identified as SSE’s by the network within the peak, $`\gamma _s=s_{SSE}/S_{SSE}`$ is the corresponding fraction for the Double-Escape peak, and $`(\frac{a}{s})_{MSE}`$ and $`(\frac{a}{s})_{SSE}`$ are the simulated ratios of all events to SSE events in the given energy region. The total efficiency $`e_{tot}`$ of the method is then given by the square root of the product of the two single efficiencies. The obtained efficiencies for the four networks in the Heidelberg-Moscow-Experiment are listed in Tab. 2. Obviously an efficient separation of MSE and SSE pulses can be accomplished with neural networks. In principle it is possible to correct the result of the network through the known efficiencies with the relation $$s=\frac{S-(1-e_m)A}{e_s+e_m-1}.$$ (10) Here $`S`$ is the number of SSE’s identified by the network, $`A`$ is the total number of events and $`s`$ is the real amount of SSE’s in the sample $`A`$. In our case the correction yields a smaller SSE rate $`s`$ than actually obtained with the neural networks, $`S`$, since the number of MSE’s identified as SSE’s is larger than the number of SSE’s identified as MSE’s: $$(1-e_m)(A-s)>(1-e_s)s.$$ (11) However, we decided not to make use of this correction. Since the obtained efficiencies are high, the correction would be only of the order of 30% in the case of a large ratio $`\frac{a}{s}`$, as realized for total absorption peaks. The correction would be of the order of $`\sim `$ 10% for the expected ratio in the 0$`\nu \beta \beta `$-decay energy range. The fact that we do not apply the correction makes the use of the new technique a conservative method. From the ratio of identified SSE pulses in the energy region between 2000 keV and 2080 keV it is expected that the background in the calibration spectrum can be further reduced by a factor of 2.67$`\pm `$0.05, which is in good agreement with the results obtained from the simulation, which yields a reduction factor of 2.78$`\pm `$0.01, and with the one-parameter cut, which gives a reduction by a factor of 2.53$`\pm `$0.05. The slightly smaller value in the measurement is the effect of the efficiencies for the recognition. We expect a similar reduction for the Heidelberg-Moscow-Experiment. To finally check the compatibility of the two methods, in the right diagram of Fig. 4 we show the fraction of pulses from the library which were attributed the same type by both methods as a function of energy. With the efficiencies given above and the efficiencies of the old method we expect a fraction of $`\sim `$ 70% to be identified equally. Indeed $`\sim `$ 75% of the events are classified equally. ## 8 Conclusion We developed a new method to distinguish between multiple scattered and single interaction events in HPGe-detectors on the basis of neural networks. We showed that this technique is capable of distinguishing between the two types of events with a very high efficiency. The comparison with a simulation performed for this purpose confirms these results. ## 9 Acknowledgement We thank Andi Burkert for helpful conversations. 
B. M. is supported by the Graduiertenkolleg of the University of Heidelberg.
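For illustration, the efficiency bookkeeping of Eqs. (8)–(10) can be written as a short numerical sketch; the inputs below are hypothetical and do not reproduce the values of Tab. 2:

```python
import math

def efficiencies(gamma_s, gamma_m, as_sse, as_mse):
    """Recognition efficiencies e_s and e_m of Eqs. (8) and (9)."""
    slope = (1.0 / gamma_s - 1.0 / gamma_m) / (as_sse - as_mse)  # this is (1 - e_m)
    e_s = 1.0 / gamma_s + (1.0 - as_sse) * slope
    e_m = 1.0 - slope
    return e_s, e_m

def corrected_sse(S, A, e_s, e_m):
    """Efficiency-corrected number of SSE's in the sample, Eq. (10)."""
    return (S - (1.0 - e_m) * A) / (e_s + e_m - 1.0)

# Hypothetical inputs: gamma's from the calibration peaks, a/s ratios from
# the simulation, S identified SSE's out of A events in some energy window.
e_s, e_m = efficiencies(gamma_s=1.05, gamma_m=0.60, as_sse=1.3, as_mse=4.0)
e_tot = math.sqrt(e_s * e_m)                  # total efficiency of the method
s = corrected_sse(S=400, A=1000, e_s=e_s, e_m=e_m)
print(f"e_s={e_s:.2f}, e_m={e_m:.2f}, e_tot={e_tot:.2f}, corrected s={s:.0f}")
```

With these numbers the corrected $`s`$ indeed comes out smaller than S, as stated in the text.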
# Comment on the paper: “Feynman-Schwinger representation approach to nonperturbative physics” by Ç. Şavklı et al. [Phys. Rev. C 60, 055210 (1999), hep-ph/9906211] ## Abstract We point out that the scalar Wick-Cutkosky model (with $`\chi ^2\varphi `$ interaction) used in this study has been known for a long time to be unstable. However, the numerical results presented in this paper do not show any sign of this instability, which casts some doubt on their reliability. We compare with the worldline variational approach. Worldline methods (sometimes called the “particle representation of field theory”) have experienced a revival in the last few years (see e.g. Ref. ), both in perturbative and nonperturbative studies. In a recent paper, Şavklı et al. have applied what they call the “Feynman-Schwinger representation” to various field theoretic models . Among these they also discuss the theory of charged scalar particles $`\chi `$ of mass $`m`$ interacting through the exchange of a neutral scalar particle $`\varphi `$ of mass $`\mu `$ whose (Euclidean) Lagrangian is given by (Eq. (3.1)) $$\mathcal{L}=\chi ^{\dagger }\left[m^2-\partial ^2+g\varphi \right]\chi +\frac{1}{2}\varphi \left(\mu ^2-\partial ^2\right)\varphi .$$ (1) This is usually referred to as the “Wick-Cutkosky model” and has been studied extensively in the context of the bound-state problem in quantum field theory. In these studies self-energy corrections are generally omitted and the exchange of neutral particles is restricted to be of the ladder type only. However, it is well known that the full theory described by the Lagrangian (1) is unstable. This is quite plausible already in a classical description by recognizing that the interaction term $`g\chi ^{\dagger }\chi \varphi `$ is equivalent to a $`\mathrm{\Phi }^3`$-term whose potential is unbounded from below, but has been proven more rigorously by Baym nearly 40 years ago. Note that this instability is also present in the quenched approximation where closed particle loops are neglected. This can be easily seen, for example, from Eqs. (3.3) and (3.4) in Ref. which give the full two-particle propagator – or any other $`n`$-point function – in terms of $$S(x,y)=\left\langle y\left|\frac{1}{m^2-\partial ^2+g\varphi }\right|x\right\rangle $$ (2) before the functional integration over the field $`\varphi `$ has been performed. Although $`m^2-\partial ^2`$ is a positive definite operator it is obvious that for $`g\ne 0`$ there will always be (negative) field configurations $`\varphi `$ which lead to a vanishing of the denominator. If the singularity is properly treated, one therefore obtains an imaginary part of the Euclidean $`n`$-point function which for the propagator can be interpreted as the width of the metastable state. This happens irrespective of whether the additional determinant in the functional integral over $`\varphi `$ is set to a constant (quenched approximation) or taken fully into account . The authors of Ref. present Monte-Carlo results for the quenched single-particle propagator (based on a discretized and Wick-rotated version of the Feynman-Schwinger representation) where the self-energy is the only mechanism to dress the bare propagator. Therefore their results should be sensitive to the above-mentioned deficiency of the Wick-Cutkosky model. However, Fig. 8 in their paper shows no sign of the instability over a wide range of coupling constants. This casts serious doubts on the reliability of their numerical results and the claimed “calculation of nonperturbative propagators”. We do not know the reason for this failure: perhaps it is due to the recipe used in Eq. 
(3.22) to suppress unwanted oscillations in the integral over the proper time $`s`$ or other numerical problems. Another possibility is that it results from using (in the authors’ words) “a rather small cutoff parameter” $`\mathrm{\Lambda }=3\mu `$ in the Pauli-Villars regularization. In any case, we believe that it is a serious failure and should be investigated in more detail. In this context we also note that in Ref. the necessary renormalization has not been performed, although, in principle, it is straightforward in a super-renormalizable theory like the Wick-Cutkosky model: the cutoff should be sent to infinity while keeping the physical mass at its measured value. Without that a drastic cutoff-dependence of the results is inevitable. The authors of Ref. further present perturbative results for the self-energy of a single particle in the (quenched) Wick-Cutkosky model. Solving for the physical mass $`M`$ in the gap-like equation (3.40) they find that “the dressed mass $`M`$ decreases up to a critical value $`g_{\mathrm{crit}}`$ which occurs when the mass reaches … $`M_{\mathrm{crit}}=0.094`$ GeV. For this simple case the critical coupling is given by $$g_{\mathrm{crit}}=22.2\mathrm{GeV}.$$ (3) For larger values of $`g`$ there are no real solutions, showing that the dressed particle is unstable”. As we have pointed out to the authors, this critical coupling constant is a far cry from the one obtained in a worldline variational approach which gave $`\alpha _{\mathrm{crit}}\equiv \overline{g}_{\mathrm{crit}}^2/(4\pi M^2)=0.81`$ where $`2\overline{g}\equiv g`$ (cf. Eq. (15) in Ref. with Eq. (3.4) in the paper under discussion). In this calculation, the physical mass $`M`$ was always kept at $`0.939`$ GeV and $`\mu =0.14`$ GeV. In contrast, Eq. (3) for the slightly different value $`\mu =0.15`$ GeV leads to the totally unrealistic value $`\alpha _{\mathrm{crit}}=(11.1/0.094)^2/(4\pi )\approx 1100`$ which only shows that perturbation theory cannot be applied in the nonperturbative regime. In summary, we think that one of the few things one can learn from applying nonperturbative methods to an unrealistic model like the Wick-Cutkosky model is whether the specific method is capable of catching some genuine nonperturbative aspects. In the one-(heavy)-particle sector this is mainly the instability of that model. Neither the perturbative calculation nor the supposedly “exact” numerical Monte-Carlo calculation in Ref. fare well in this respect.
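The critical-coupling mismatch quoted above is plain arithmetic and easy to verify; a minimal sketch with the numbers taken from the text:

```python
import math

g_crit = 22.2                 # GeV, Eq. (3) of the commented paper
gbar_crit = g_crit / 2.0      # since 2*gbar = g
M_crit = 0.094                # GeV, dressed mass at the critical coupling

alpha_crit = gbar_crit**2 / (4.0 * math.pi * M_crit**2)
print(f"perturbative alpha_crit ~ {alpha_crit:.0f}")       # ~1100

alpha_crit_worldline = 0.81   # worldline variational result (M fixed at 0.939 GeV)
print(f"ratio to worldline value ~ {alpha_crit / alpha_crit_worldline:.0f}")
```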
# Debris streams in the solar neighbourhood as relicts from the formation of the Milky Way

Amina Helmi, Simon D.M. White, P. Tim de Zeeuw and HongSheng Zhao

Leiden Observatory, P.O. Box 9513, 2300 RA Leiden, The Netherlands; Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85740 Garching bei München, Germany

Figure 1. The distribution of nearby halo stars in the plane of angular momentum components, $`J_z`$ vs. $`J_{\perp }=\sqrt{J_x^2+J_y^2}`$, for our near complete sample (a) and for one Monte Carlo realization (b). Our Monte Carlo data sets have the same number of stars and the same spatial distribution as the observed sample. The characteristic parameters of the multivariate Gaussian used to describe the kinematics are obtained by fitting to the observed mean values and variances after appropriate convolution with the observational errors. We then generate 10000 “observed” samples as follows. A velocity is drawn from the underlying multivariate Gaussian; it is transformed to a proper motion and radial velocity (assuming the observed parallax and position on the sky); observational “errors” are added to the parallax, the radial velocity and the proper motion; these “observed” quantities are then transformed back to an “observed” velocity. Velocities are referred to the Galactic Centre; we adopt 8 kpc as the distance to the Galactic Centre and 220 $`\mathrm{km}\mathrm{s}^{-1}`$ towards galactic longitude $`l=0`$ and galactic latitude $`b=0`$ as the velocity of the Local Standard of Rest.

It is now generally believed that galaxies were built up through gravitational amplification of primordial fluctuations and the subsequent merging of smaller precursor structures. The stars of the structures that assembled to form the Milky Way should now make up much or all of its bulge and halo, in which case one hopes to find “fossil” evidence for those precursor structures in the present distribution of halo stars. Confirmation that this process is continuing came with the discovery of the Sagittarius dwarf galaxy, which is being disrupted by the Milky Way, but direct evidence that this process provided the bulk of the Milky Way’s population of old stars has so far been lacking. Here we show that about ten per cent of the metal–poor stars in the halo of the Milky Way, outside the radius of the Sun’s orbit, come from a single coherent structure that was disrupted during or soon after the Galaxy’s formation. This object had a highly inclined orbit about the Milky Way at a maximum distance of $`\sim `$ 16 kpc, and it probably resembled the Fornax and Sagittarius dwarf spheroidal galaxies. Early studies treated the formation of the Milky Way’s spheroid as an isolated collapse, argued to have been either rapid and “monolithic”<sup>2</sup>, or inhomogeneous and slow compared to the motions of typical halo stars<sup>3</sup>. A second dichotomy distinguished “dissipationless” galaxy formation, in which stars formed before collapse<sup>4</sup>, from “dissipative” models in which the collapsing material was mainly gaseous<sup>5</sup>. Aspects of these dichotomies remain as significant issues in current theories<sup>6</sup>, but they are typically rephrased as questions about whether small units equilibrate and form stars before they are incorporated into larger systems, and about whether they are completely disrupted after such incorporation. Stars from Galactic precursors should be visible today either as “satellite” galaxies, if disruption was inefficient, or as part of the stellar halo and bulge, if it was complete. 
Recent work examined the present–day distribution expected for the debris of a precursor which was disrupted during or soon after the formation of the Milky Way. Objects which could contribute substantially to the stellar halo near the Sun must have had relatively short orbital periods. Ten Gyr after disruption their stars should be spread evenly through a large volume, showing none of the trails characteristic of currently disrupting systems like Sagittarius<sup>8</sup>. In any relatively small region, such as the solar neighbourhood, their stars should be concentrated into a number of coherent “streams” in velocity space, each showing an internal velocity dispersion of only a few $`\mathrm{km}\mathrm{s}^{-1}`$. Objects initially similar to the Fornax or Sagittarius dwarf galaxies should give rise to a few streams in the vicinity of the Sun. The high quality proper motions provided by the HIPPARCOS satellite allow us to construct accurate three–dimensional velocity distributions for almost complete samples of nearby halo stars. Drawing on two recent observational studies<sup>9,10</sup>, we define a sample containing 97 metal deficient (\[Fe/H\] $`\le -1.6`$ dex) red giants and RR Lyrae within 1 kpc of the Sun and with the following properties: 1. HIPPARCOS proper motions are available for 88 of them<sup>11</sup>; for the remaining stars there are ground-based measurements<sup>12</sup>; in all cases accuracies of a few mas yr<sup>-1</sup> are achieved. 2. Radial velocities have been measured from the ground, with accuracies of the order of 10 $`\mathrm{km}\mathrm{s}^{-1}`$. Metal abundances have been determined either spectroscopically or from suitable photometric calibrations<sup>13,14,15</sup>. 3. Calibrations of absolute magnitude $`M_V`$ against \[Fe/H\] for red giants<sup>13,14,15</sup> and RR Lyrae<sup>16</sup> allow photometric parallaxes to be derived to an accuracy of roughly 20%. These are more accurate than the corresponding HIPPARCOS trigonometric parallaxes, but still remain the largest source of uncertainty in the derived tangential velocities. 4. We estimate the completeness to be of the order of $`\sim `$92%, based on the fact that there are eight known giants which satisfy our selection criteria but do not have measured proper motions. We look for substructure in our set of halo stars by studying the entropy $`S`$ of the sample, defined as: $$S=-\sum _i\frac{N_i}{N}\mathrm{log}\frac{N_iA_P}{N},$$ (1) where the sum is over the $`A_P`$ elements of the partition, the $`i`$-th element contains $`N_i`$ stars, and $`N`$ is the total number of stars. In the presence of substructure the measured entropy will be smaller than that of a smooth distribution, and will depend on the details of the partition; some partitions will enhance the signal, whereas others will smear it out. If there is no substructure then all partitions will yield similar $`S`$ values, and no significant minimum value will be found. We implement this entropy test initially by partitioning velocity space into cubic cells 70 $`\mathrm{km}\mathrm{s}^{-1}`$ on a side. This choice is a compromise. It leaves a large number of cells in the high velocity range empty, but in the regions containing most of the stars, there are at least a few stars per cell. It is necessary to quantify the significance of any observed low entropy value relative to the distribution expected in the absence of substructure. 
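In outline, the statistic of equation (1) amounts to the following sketch (the input array, cell size and velocity range are illustrative):

```python
import numpy as np

def entropy_statistic(velocities, cell=70.0, vmax=490.0):
    """Entropy S of equation (1) on a cubic partition of velocity space.

    velocities : (N, 3) array of (v_R, v_phi, v_z) in km/s; stars outside
    the +/- vmax cube are ignored by the histogram.
    """
    edges = np.arange(-vmax, vmax + cell, cell)
    counts, _ = np.histogramdd(velocities, bins=(edges, edges, edges))
    N = velocities.shape[0]
    A_P = counts.size                 # number of elements of the partition
    p = counts[counts > 0] / N        # empty cells contribute nothing to the sum
    return -np.sum(p * np.log(p * A_P))
```

Lower (more negative) values of $`S`$ signal clumping; the significance of a given value must then be judged against smooth mock samples.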
Here we do this by generating Monte Carlo realizations which test whether the kinematics of the sample are consistent with a multivariate Gaussian distribution<sup>17</sup>. We calculate entropies for 10000 Monte Carlo samples on the same partition as the real data; only for 5.6% do we find values of $`S`$ smaller than observed. We have repeated this test for many partitions, finding a large number with probabilities as low or lower than this. In particular, for a partition with a 250 $`\mathrm{km}\mathrm{s}^{-1}`$ bin in $`v_\varphi `$, and 25 $`\mathrm{km}\mathrm{s}^{-1}`$ bins in $`v_R`$ and $`v_z`$ ($`v_R`$, $`v_\varphi `$ and $`v_z`$ are the velocity components in the radial, azimuthal and $`z`$-directions respectively), only 0.06% of Monte Carlo simulations have $`S`$ smaller than observed. In general cubic cells yield lower significance levels, suggesting that the detected structure may be elongated along $`v_\varphi `$. We conclude that a multivariate Gaussian does not properly describe the distribution of halo star velocities in the solar neighbourhood. At this point the main problem is to identify the structure which makes the observed data incompatible with a smooth velocity distribution. A comparison of the three principal projections of the observed distribution to similar plots for our Monte Carlo samples reveals no obvious differences. To better identify streams we turn to the space of adiabatic invariants. Here clumping should be stronger, as all stars originating from the same progenitor have very similar integrals of motion, resulting in a superposition of the corresponding streams. We focus on the plane defined by two components of the angular momentum: $`J_z`$ and $`J_{\perp }=\sqrt{J_x^2+J_y^2}`$, although $`J_{\perp }`$ is not fully conserved in an axisymmetric potential. In Fig. 1a we plot $`J_z`$ versus $`J_{\perp }`$ for our sample. For comparison, Fig. 1b gives a similar plot for one of our Monte Carlo samples. For $`J_{\perp }\lesssim 1000`$ kpc $`\mathrm{km}\mathrm{s}^{-1}`$ and $`|J_z|\lesssim 1000`$ kpc $`\mathrm{km}\mathrm{s}^{-1}`$, the observed distribution appears fairly smooth. In this region we find stars with relatively low angular momentum and at all inclinations. In contrast, for $`J_{\perp }\gtrsim 1000`$ kpc $`\mathrm{km}\mathrm{s}^{-1}`$, there are a few stars moving on retrograde low inclination orbits, an absence of stars on polar orbits, and an apparent “clump” on a prograde high inclination orbit. To determine the significance of this clumping, and to confirm it as the source of the signal detected by our entropy test, we compare the observed star counts in this plane to those for our Monte Carlo data sets. We count how many stars fall in each cell of a given partition of this angular momentum plane and compare it to the expected number in the Monte Carlo simulations. We say that the $`i`$-th cell has a significant overdensity if there is less than 1% probability of obtaining a count as large as the observed $`N^i`$ from a Poisson distribution with mean $`\langle N^i\rangle =N_{\mathrm{sim}}^{-1}\sum _{j=1}^{N_{\mathrm{sim}}}N_j^i`$, where $`N_j^i`$ is the count in the $`i`$-th cell in the $`j`$-th simulation, and $`N_{\mathrm{sim}}`$ is the number of simulations. We repeat this test for a series of regular partitions of $`p\times q`$ elements, with $`p`$, $`q`$ ranging from 3 to 20, thus allowing a clear identification of the deviant regions. 
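A minimal sketch of this cell-count test, assuming hypothetical count arrays for the observations and the Monte Carlo samples:

```python
import numpy as np
from scipy.stats import poisson

def overdense_cells(obs_counts, sim_counts, alpha=0.01):
    """Cells of a (p, q) partition of the angular momentum plane whose
    observed count is improbably large under the Monte Carlo mean."""
    mu = sim_counts.mean(axis=0)              # <N^i> over the simulations
    p_ge = poisson.sf(obs_counts - 1, mu)     # sf(k-1, mu) = P(X >= k)
    return p_ge < alpha                       # boolean map of significant cells
```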
We find a very significant deviation in most partitions for cells with $`J_{\perp }\sim 2000`$ kpc $`\mathrm{km}\mathrm{s}^{-1}`$ and $`500<J_z<1500`$ kpc $`\mathrm{km}\mathrm{s}^{-1}`$; the probabilities of the observed occupation numbers range from 0.03% to 0.98%, depending on the partition, and in some partitions more than one cell is significantly overdense. Given this apparently significant evidence for substructure in the local halo, we study what happens if we relax our metallicity and distance selection criteria. We proceed by including in our sample all red giants and RR Lyrae stars studied by Chiba and Yoshii<sup>10</sup> with metallicities less than $`-1`$ dex and distances to the Sun of less than 2.5 kpc. This new sample contains 275 giant stars and adds 5 new stars to the most significant clump in our complete sample. Of the 13 members of the clump, 9 have $`\left[\mathrm{Fe}/\mathrm{H}\right]\le -1.6`$, whereas the remaining 4 have $`[\mathrm{Fe}/\mathrm{H}]=-1.53\pm 0.12`$, indicating that they are also very metal–poor. These stars are distributed all over the sky with no obvious spatial structure. In Fig. 2 we highlight the kinematic structure of the clump in the extended sample. The clump stars are distributed in two streams moving in opposite directions perpendicular to the Galactic Plane, with one possible outlier. This star has $`v_R=285\pm 21\mathrm{km}\mathrm{s}^{-1}`$, and we exclude it because its energy is too large to be consistent with the energies of the other members of the clump. The velocity dispersions for the stream with negative $`v_z`$ (9 stars) are $`\sigma _\varphi =30\pm 17`$, $`\sigma _R=105\pm 16`$, $`\sigma _z=24\pm 28`$ in km s<sup>-1</sup>, whereas for the stream with positive $`v_z`$ (3 stars) these are $`\sigma _\varphi =49\pm 22`$, $`\sigma _R=13\pm 33`$, $`\sigma _z=31\pm 28`$ in km s<sup>-1</sup>. An elongation in the $`v_R`$-direction is expected for streams close to their orbital pericentre (the closest distance to the Galactic Centre; compare with other plots of simulated streams<sup>7</sup>). The orbit of the progenitor system is constrained by the observed positions and velocities of the stars. The orbital radii at apocentre and pericentre are $`r_{\mathrm{apo}}\sim 16`$ kpc and $`r_{\mathrm{peri}}\sim 7`$ kpc, the maximum height above the plane is $`z_{\mathrm{max}}\sim 13`$ kpc, and the radial period is $`P\sim 0.4`$ Gyr, for a Galactic potential including a disk, bulge and dark halo<sup>8</sup>. We run numerical simulations of satellite disruption in this potential to estimate the initial properties of the progenitor. After 10 Gyr of evolution, we find that the observed properties of the streams detected can be matched by stellar systems similar to dwarf spheroidals with initial velocity dispersions $`\sigma `$ in the range $`12`$–$`18\mathrm{km}\mathrm{s}^{-1}`$ and core radii $`R`$ of $`0.5`$–$`0.65`$ kpc. We also analysed whether the inclusion of an extended dark halo around the initial object would affect the structures observed and found very little effect. 
We derive the initial luminosity $`L`$ from $`L=L^{\ast }/(f^{\mathrm{giant}}\times C^{\ast }\times f^{\mathrm{sim}})`$, where $`L^{\ast }=350\mathrm{L}_{\odot }`$ is the total luminosity of the giants in the clump in our near-complete sample, $`f^{\mathrm{giant}}\approx 0.13`$ is the ratio of the luminosity in giants with $`M_V`$ and $`(B-V)`$ in the range observed to the total luminosity of the system for an old metal–poor stellar population<sup>18</sup>, $`C^{\ast }=0.92`$ is our estimated completeness, and $`f^{\mathrm{sim}}\approx 1.9\times 10^{-4}`$ is the fraction of the initial satellite contained in a sphere of 1 kpc radius around the Sun as determined from our simulations (a numerical check is sketched after the references). This gives $`L\approx 1.5\times 10^7\mathrm{L}_{\odot }`$, from which we can derive, using our previous estimates of the initial velocity dispersion and core radii, an average initial core mass-to-light ratio $`M/L\approx 3`$–$`10\mathrm{\Upsilon }_{\odot }`$, where $`\mathrm{\Upsilon }_{\odot }`$ is the mass-to-light ratio of the Sun. A progenitor system with these characteristics would be similar to Fornax. Moreover, the mean metal abundance of the stars is consistent with the derived luminosity, if the progenitor follows the known metallicity–luminosity relation of dwarf satellites in the Local Group<sup>19</sup>. The precursor object was apparently on an eccentric orbit with relatively large apocentre. Given that it contributes 7/97 of the local halo population, our simulations suggest that it should account for 12% of all metal–poor halo stars outside the solar circle. Figure 2 shows that there are few other halo stars on high angular momentum polar orbits in the solar neighbourhood, just the opposite of the observed kinematics of satellites of the Milky Way<sup>20</sup>. The absence of satellite galaxies on eccentric non-polar orbits argues that some dynamical process preferentially destroys such systems; their stars should then end up populating the stellar halo. As we have shown, the halo does indeed contain fossil streams with properties consistent with such disruption. References 1. Ibata, R., Gilmore, G. & Irwin, M.J. A dwarf satellite galaxy in Sagittarius. Nature 370, 194–196 (1994). 2. Eggen, O.J., Lynden-Bell, D. & Sandage, A.R. Evidence from the motions of old stars that the Galaxy collapsed. Astrophys. J. 136, 748–766 (1962). 3. Searle, L. & Zinn, R. Compositions of halo clusters and the formation of the galactic halo. Astrophys. J. 225, 357–379 (1978). 4. Gott, J.R. III Recent theories of galaxy formation. Ann. Rev. Astron. Astrophys. 15, 235–266 (1977). 5. Larson, R.B. Models for the formation of elliptical galaxies. Mon. Not. R. Astron. Soc. 173, 671–699 (1975). 6. White, S.D.M. & Frenk, C.S. Galaxy formation through hierarchical clustering. Astrophys. J. 379, 52–79 (1991). 7. Helmi, A. & White, S.D.M. Building up the stellar halo of the Galaxy. Mon. Not. R. Astron. Soc. 307, 495–517 (1999). 8. Johnston, K.V., Hernquist, L. & Bolte, M. Fossil signatures of ancient accretion events in the Halo. Astrophys. J. 465, 278–287 (1996). 9. Beers, T.C. & Sommer-Larsen, J. Kinematics of metal-poor stars in the Galaxy. Astrophys. J. Suppl. 96, 175–221 (1995). 10. Chiba, M. & Yoshii, Y. Early evolution of the Galactic halo revealed from Hipparcos observations of metal-poor stars. Astron. J. 115, 168–192 (1998). 11. The Hipparcos and Tycho Catalogues (SP-1200, European Space Agency, ESA Publications Division, ESTEC, Noordwijk, The Netherlands, 1997). 12. Roeser, S. & Bastian, U. A new star catalogue of SAO type. Astron. Astrophys. Suppl. 74, 449–451 (1988). 13. Anthony-Twarog, B.J. 
& Twarog, B.A. Reddening estimation for halo red giants using uvby photometry. Astron. J. 107, 1577–1590 (1994). 14. Beers, T.C., Preston, G.W., Shectman, S.A. & Kage, J.A. Estimation of stellar metal abundance. I. Calibration of the Ca II K index. Astron. J. 100, 849–883 (1990). 15. Norris, J., Bessell, M.S. & Pickles, A.J. Population studies. I. The Bidelman-MacConnell “weak–metal” stars. Astrophys. J. Suppl. 58, 463–492 (1985). 16. Layden, A.C. The metallicities and kinematics of RR Lyrae variables, 1: New observations of local stars. Astron. J. 108, 1016–1041 (1994). 17. Sommer-Larsen, J., Beers, T.C., Flynn, C., Wilhelm, R. & Christensen, P.R. A dynamical and kinematical model of the Galactic stellar halo and possible implications for Galaxy formation scenarios. Astrophys. J. 481, 775–781 (1997). 18. Bergbusch, P.A. & VandenBerg, D.A. Oxygen–enhanced models for globular cluster stars. II. Isochrones and luminosity functions. Astrophys. J. Suppl. 81, 163–220 (1992). 19. Mateo, M. Dwarf galaxies of the Local Group. Ann. Rev. Astron. Astrophys. 36, 435–506 (1998). 20. Lynden-Bell, D. & Lynden-Bell, R.M. Ghostly streams from the formation of the Galaxy’s halo. Mon. Not. R. Astron. Soc. 275, 429–442 (1995). Acknowledgements: A.H. wishes to thank the Max-Planck-Institut für Astrophysik for hospitality during her visits. We made use of the Simbad database (maintained by the Centre de Données astronomiques de Strasbourg) and of the HIPPARCOS online facility at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA).
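As promised above, the luminosity bookkeeping for the progenitor is simple arithmetic; a sketch with the numbers quoted in the text:

```python
L_giants   = 350.0     # L_sun, luminosity of the clump giants in the sample
f_giant    = 0.13      # luminosity fraction in giants of the observed range
C_complete = 0.92      # estimated completeness of the sample
f_sim      = 1.9e-4    # fraction of the satellite within 1 kpc of the Sun

L_initial = L_giants / (f_giant * C_complete * f_sim)
print(f"L ~ {L_initial:.1e} L_sun")   # ~1.5e7 L_sun, as quoted
```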
# Fundamental parameters of nearby stars from the comparison with evolutionary calculations: masses, radii and effective temperatures ## 1 Introduction Stellar evolution calculations for single stars model their evolution in terms of the variations of their fundamental parameters R and $`T_{\mathrm{eff}}`$, as a function of time from the onset of hydrogen fusion reactions until their final extinction. Mass is the key parameter that decides a star’s evolution; chemical composition and other factors play a secondary role. The evolution of the stars can be plotted as tracks in the HR diagram, some of its areas being more crowded by paths corresponding to stars of different masses. Other regions are completely empty, forbidden spaces which no star is supposed to cross. At least in principle, after proper conversion from the theoretical to the observational plane, it is possible to associate the position of a star in the HR diagram with a given stellar mass and time since its birth, or with a range of masses and times. The only requirements are an accurate knowledge of the distance from the observer to the star, and a pair of photometric measurements. This is a well-known method that has been widely applied to simplified cases where some constraints on age and distance exist, such as well detached binaries or stellar clusters. It has also been used in the search for age-metallicity relationships in the Galactic Disc or the Halo. Most of the stellar parameters at play in the calculations are the very same that define the atmospheric properties, and so shape the features observed in the stellar spectra. These quantities have often been estimated directly from the spectra, and only in a few situations, commonly for the lack of empirical alternatives, has consideration been given to the evolutionary models. Before trying to apply the method, several important questions need to be posed, such as how crowded the HR diagram is, or in other words how severe the degeneracy between age, mass and metallicity is for a given position in the HR diagram. It is of relevance to demonstrate that the translation from the theoretical parameters to the observational plane is properly done; otherwise, no matter how realistic the calculations are, there is no hope of getting useful results. Finally, an extensive assessment of the adequacy of the evolutionary calculations is required. All three issues can be simultaneously answered by applying the method to several favourable cases. Stellar masses and radii are known with extremely high accuracy for a number of nearby eclipsing binaries (Popper 1980; Andersen 1991). Some of these systems have already been employed by Schröder, Pols, & Eggleton (1997) and Pols et al. (1997) to test critically their evolutionary calculations (Pols et al. 1995) and tune parameters in their scheme that take account of convection. Stellar radii are alternatively and independently measured by interferometric techniques (e.g. Richichi et al. 1998) and also by the so-called InfraRed Flux Method (Blackwell & Lynas-Gray 1994). The latter provides probably the most direct and model-independent estimate of the stellar effective temperature. Other parameters involved are the chemical composition and the age. 
Although it is possible to check the metallicity estimates from the isochrones with the results from spectroscopic measurements, and the stellar ages can be compared with other methods, such as activity indicators (see, e.g., Rocha-Pinto & Maciel 1998), or the abundances of radioactive nuclei (see, e.g., Goriely & Clerbaux 1999), we shall not concentrate on them here. In this paper, we have made use of the stellar evolutionary calculations of Bertelli et al. (1994) to study the possibility of retrieving the fundamental stellar parameters, radius, mass, and effective temperature, from the comparison of the stars’ position in the colour-magnitude diagram with computations of stellar evolution for isolated stars. We quantify the degeneracy between mass and age for a given place in the HR diagram using error bars in the estimates of these parameters. Stars in eclipsing binary systems and the InfraRed Flux Method are used to test the procedure. Finally, the technique is applied to 17,219 stars identified by the Hipparcos astrometric mission within a sphere of 100 pc radius centred on the Sun. ## 2 Retrieval of the stellar parameters. Interpolation in the theoretical isochrones Calculations of stellar evolution interpolated to find the position of coeval stars of a given chemical composition but different masses are commonly referred to as isochrones. Among the published isochrones, those presented by Bertelli et al. (1994) span all observed stellar masses, metallicities from \[M/H\] = $`-1.7`$ to 0.4, and stellar ages from $`4\times 10^6`$ to $`2\times 10^{10}`$ yrs. The computations mostly use the OPAL radiative opacities by Rogers & Iglesias (1992) and Iglesias et al. (1992). The reader is referred to Bertelli et al. (1994) and references therein for details. We assume that both the B–V colour and the absolute magnitude (and therefore the distance to the star) are known with enough precision to avoid significant observational errors affecting derived quantities. We quantify this statement below. To retrieve the appropriate set of stellar parameters that reproduces the position of a star in the colour-magnitude diagram we proceed as follows. First we search the entire set of isochrones to find which points, if any, reproduce the observed B–V colour and absolute V magnitude ($`M_v`$) within the observational errors. Then, the different possible solutions, corresponding to different ages, metallicities, masses, etc., are averaged to obtain a mean value for every stellar parameter, and an estimate of the uncertainty from the standard deviation. The final uncertainty includes three components. First of all, the intrinsic spread, as different evolutionary paths cross the same area of the colour-magnitude diagram. Second, the noise introduced in the translation from the theoretical $`\mathrm{log}g`$–$`T_{\mathrm{eff}}`$ plane to the observational colour-magnitude diagram. Also, observational errors in the two considered parameters (B–V and $`M_v`$) contribute. Applying the procedure along a grid of colours and absolute magnitudes, we construct maps of the combined uncertainties expected for the different stellar parameters. We are mainly interested in radii, masses, and $`T_{\mathrm{eff}}`$s: the quantities available from observations of eclipsing binaries and from the IRFM, to which we compare in the subsequent sections. The gray scale in the images included in Fig. 1 has been set to represent the relative uncertainties in these magnitudes. 
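In pseudo-code, the retrieval step sketched above could look as follows; the isochrone container and its field names are illustrative, not the actual format of the Bertelli et al. (1994) tables:

```python
import numpy as np

def retrieve_parameters(bv_obs, mv_obs, grid, sig_bv=0.01, sig_mv=0.2):
    """Mean and scatter of the parameters of all isochrone points that
    reproduce a star's position in the colour-magnitude diagram.

    grid : structured array with fields 'BV', 'Mv', 'mass', 'radius',
    'teff', gathered over the whole (age, metallicity) set of isochrones.
    """
    match = (np.abs(grid['BV'] - bv_obs) < sig_bv) & \
            (np.abs(grid['Mv'] - mv_obs) < sig_mv)
    if not match.any():
        return None      # no isochrone close enough (cf. HR1325 below)
    sol = grid[match]
    return {key: (sol[key].mean(), sol[key].std())
            for key in ('mass', 'radius', 'teff')}
```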
To construct the maps we have assumed an uncertainty of 0.01 dex in B–V and of 0.2 dex in $`M_v`$. The dashed line indicates an isochrone of 4 Gyr and solar chemical composition. The largest errors correspond to the darkest areas: $`\sim `$200% for the mass, $`\sim `$80% for the radii, and $`\sim 30\%`$ for the effective temperatures. Uncertainties in the retrieved masses are especially significant close to the position of the horizontal branch, where intermediate mass stars cross back from the giant stage and more massive stars make their way up from the main sequence, and reach the worst expectations for giants and AGB stars. Understandably, there is also significant confusion in the area where the post-AGB stars cross the upper main sequence way down to the white dwarf cooling sequence, although in practice very few stars will appear in such a rapidly evolving phase. Errors in the retrieved radii are small for most cases, although for the reddest evolved stars the confusion is very large, and a similar conclusion is sketched from Fig. 1 for the effective temperatures. These results show that radii and $`T_{\mathrm{eff}}`$s are largely constrained by the position of a star in the colour-magnitude diagram, regardless of the existence of a wide range of possibilities for ages and metallicities. It is apparent from Fig. 1 that the masses of evolved stars with different ages and metallicities show more disparate values at a given position in the colour-magnitude diagram. ## 3 Critical testing Highly reliable measurements of stellar masses and radii are available for eclipsing binary systems. We use them to check the method suggested here for deriving the fundamental stellar parameters from the comparison between the position of the stars in the colour-magnitude diagram and the predictions of stellar evolution calculations. The model-independent effective temperatures and stellar diameters obtained through the InfraRed Flux Method by Blackwell & Lynas-Gray (1994) for solar-metallicity stars are also used in the test. ### 3.1 Eclipsing binary systems No doubt these privileged binary systems can provide the most solid determinations of stellar masses and radii. Andersen (1991) has produced the most recent compilation of high-accuracy ($`<`$ 2%) determinations in binaries, listing 90 stars. Andersen lists errors for the absolute V magnitudes, and we have assumed errors in the B–V colour to be $`0.01`$ dex. The upper and middle panels of Figure 2 compare the differences between the direct estimates of radius and mass with those retrieved from the interpolation in the isochrones. The dashed lines correspond to differences of $`\pm `$ 10%. The radii are predicted accurately, with a remarkable standard deviation of 6%. The masses show a larger scatter of $`\sim 12`$%. Note that in these two panels of Fig. 2, the horizontal axes have been logarithmically expanded for clarity. The effective temperatures listed by Andersen (1991) come from different estimators. However, it is important to remark that spectroscopic analyses of these systems normally combine photometry and spectroscopy, and then the effective temperatures are likely to be more reliable than those in most studies of isolated field stars. The mean of the relative errors in the effective temperature provided by Andersen for his sample is 3%. The lower panel of Fig. 2 shows the comparison with the effective temperatures retrieved from the evolutionary models. The agreement is excellent: the standard deviation is a mere 4%. 
For a given position in the colour-magnitude diagram, radius and luminosity (and therefore, effective temperature) depend very weakly on metallicity. However, that does not apply to mass, and it is reflected in Fig. 1 through the large uncertainties for the retrieved masses. The fact that the stars in the Andersen sample are all nearby, and therefore roughly of solar metallicity, is very useful to demonstrate this feature. Fig. 3 represents a comparison of the same magnitudes as in Fig. 2, but now the masses, radii, and $`T_{\mathrm{eff}}`$s have been estimated assuming roughly solar metallicity ($`-0.7<`$ \[Fe/H\] $`<+0.4`$), and then further restricting the set of isochrones. The agreement between observed and predicted radii improves very little, as reflected by the standard deviation of 5%, and no significant improvement is achieved in the effective temperature, but the tendency to underestimate some of the masses no longer exists: the standard deviation of the relative differences between predicted and observed masses is now reduced to 8%. These results indicate that the combination of these masses and radii will lead to gravity estimates with a standard deviation in $`\mathrm{log}g`$ of 0.06 dex. For low-gravity stars of solar metallicity, a smaller mass will be derived by assuming the star to be metal-poor, and then the average over all metallicities underestimates the true value, as shown in Fig. 2. Only one of the stars in Andersen’s sample (TZ For A) has a gravity lower than $`\mathrm{log}g=3`$ and hence the sample is restricted to objects on, or close to, the main sequence. To avoid this restriction we have extended the sample including a few other systems with somewhat poorer determinations. We have included the 5 resolved spectroscopic binaries compiled by Popper (1980): HD16739, HD168614, $`\delta `$ Equ, Capella, and Spica; and 2 detached subgiant eclipsing systems: TY Pyx and Z Her, also included in the compilation by Popper. The sample was completed with the studies of $`\zeta `$ Aur by Bennett et al. (1996) and $`\gamma `$ Per by Popper & McAlister (1987). The B–V colours for the two components of $`\gamma `$ Per have been estimated from their spectral type and the tables of Aller et al. (1982). $`T_{\mathrm{eff}}`$s are provided for the stars in these systems; in Fig. 4 the resolved spectroscopic binaries are plotted with open circles, and the two components of $`\gamma `$ Per and $`\zeta `$ Aur with asterisks. Fig. 4 shows no correlation of the relative differences between observed and evolutionary radii, masses, or effective temperatures against $`\mathrm{log}g`$. ### 3.2 The IRFM The InfraRed Flux Method, developed by Blackwell and his collaborators, provides an accurate procedure to derive stellar angular diameters and effective temperatures by measuring the monochromatic flux at an infrared frequency and the bolometric flux, and using theoretical model atmospheres to estimate the monochromatic flux at the star’s surface. The method has been applied by Blackwell & Lynas-Gray (1994) using the Kurucz (1992) LTE model atmospheres to a sample of 80 stars. All but one of the stars in the sample have been observed by the Hipparcos mission, and we have converted the IRFM angular diameters to radii using the trigonometric parallaxes ($`\pi `$). We have identified the position of the stars in the colour-magnitude diagram making use of the V band photometry and the B–V colours included in the Hipparcos catalogue and then interpolated in the Bertelli et al. 
(1994) isochrones to find the evolutionary status, and the fundamental stellar parameters. One star (HR1325) was rejected, as we could not find an isochrone close enough (within observational uncertainties) to the position of the star in the colour-magnitude diagram. In this analysis, we have neglected the effects of reddening and assumed that the rotational velocities are not high enough to disturb the position of the stars in the colour-magnitude diagram. No assumption has been made about the metallicity of the sample. The angular diameters of Blackwell & Lynas-Gray have been converted to radii using the Hipparcos trigonometric parallaxes. The upper panel of Figure 5 shows the comparison of the derived radii with the estimates from the evolutionary models. Again, the agreement is gratifying, showing a standard deviation of only 5%. The effective temperatures retrieved from the isochrones are also compared to the IRFM temperatures in the lower panel of Fig. 5. The standard deviation of the comparison is less than 2%. The sample includes a star as far as 277 pc from the Sun, a distance at which errors in the Hipparcos parallax are expected to be significant, but the rest of the stars are within 100 pc. The radius of the largest star in the sample (HR2473; G8Ib) from the IRFM measurements is about 142.0 $`\mathrm{R}_{\odot }`$, in good agreement with the prediction from its position in the colour-magnitude diagram, 142.3 $`\mathrm{R}_{\odot }`$ (see Fig. 5). However, such a highly evolved star, as we previously found for the cool component of $`\zeta `$ Aur, occupies a region in the red part of the colour-magnitude diagram quite crowded by stellar evolutionary tracks, ending up with a very large uncertainty in the retrieved stellar parameters (see Fig. 1). In the case of the radius, the standard deviation is 43 $`\mathrm{R}_{\odot }`$. The difference here from the previous comparison for the K supergiant in $`\zeta `$ Aur is that for HR2473, the mean value of the radii compatible with the star’s position in the colour-magnitude diagram is in very good agreement with the observational estimate. ## 4 Application of the method to the Hipparcos catalogue Taking into account both the accuracy of the estimates and the range of parameters where the predictions are too vague (crowded zones in the colour-magnitude diagram), the stellar evolutionary calculations can be used with no more than photometry and the trigonometric parallax within the following limits: $$\begin{array}{ccccc}0.87& \le & R/R_{\odot }& \le & 21\\ 0.88& \le & M/M_{\odot }& \le & 22.9\\ 3,961& \le & T_{\mathrm{eff}}& \le & 33,884\mathrm{K}\\ 2.52& \le & \mathrm{log}g& \le & 4.47.\end{array}$$ (1) The samples used here to test the accuracy of the procedure to retrieve the stellar parameters are strongly biased towards solar metallicities. We have shown that the comparison between physical and predicted values is excellent when restricting the possible range for the metallicity to roughly solar ($`-0.4<`$ \[Fe/H\] $`<+0.4`$), based on a priori knowledge of the statistical peculiarities of the sample. Nevertheless, the results here described cannot be safely extended to low-metallicity stars without further study. A precise knowledge of the fundamental stellar parameters is essential to make a comparison possible between observations and theoretical studies, shedding light on multiple aspects of stellar structure, stellar evolution, and the physics of stellar atmospheres. From the previous sections, we have seen that it is possible to use in a direct manner the isochrones of Bertelli et al. 
(1994) to estimate masses, radii, and effective temperatures for unevolved stars of solar metallicity, provided an accurate estimate of the distance from Earth is available. The Hipparcos mission has measured, among other quantities, parallaxes that lead to distances typically precise to better than 20% up to 100 pc from the Sun (see, e.g., Perryman et al. 1995). We have derived absolute magnitudes and combined them with the B–V index (also compiled in the Hipparcos catalogue) to estimate masses, radii, gravities, and effective temperatures for 17,219 stars that fall within the range of masses, radii, and gravities assessed in this study, out of the 22,982 stars included in the Hipparcos catalogue within the sphere of 100 pc radius from the Sun. Solar metallicity ($`-0.4<`$ \[Fe/H\] $`<+0.4`$) has been assumed. Although the tail of the metallicity distribution of stars in the solar neighbourhood reaches \[Fe/H\] $`\sim -1`$, most of the stars are within the selected interval (Rocha-Pinto & Maciel 1996). An overview of a few of the possible uses of these data is given in the next section. Table 1, available only in electronic format, includes the following information: Hipparcos #, V, $`\pi `$, $`\sigma (\pi )`$, $`M_v`$, $`\sigma (M_v)`$, B–V, $`\mathrm{log}g`$, $`\sigma (\mathrm{log}g)`$, $`\mathrm{M}/\mathrm{M}_{\odot }`$, $`\sigma (\mathrm{M}/\mathrm{M}_{\odot })`$, $`\mathrm{log}(R/R_{\odot })`$, $`\sigma (\mathrm{log}R/R_{\odot })`$, $`BC`$, $`\sigma (BC)`$, $`\mathrm{log}T_{\mathrm{eff}}`$, and $`\sigma (\mathrm{log}T_{\mathrm{eff}})`$. Fig. 6 shows the derived luminosity and actual mass functions for the sample. These figures have been derived using the data in Table 1, and therefore are restricted to the range $`0.8\le \mathrm{M}/\mathrm{M}_{\odot }\le 22`$, although the statistics are poor for the more massive stars and we have further restricted the plots. Although it is beyond the scope of this paper, the study of these distribution functions will likely shed light on the discussion about the universality of the initial mass function, and the star formation history of the Galactic disc. The use of a finer grid for the interpolation in stellar age and metallicity may provide additional information. ## 5 Discussion of the results The comparison of the stellar masses, radii and effective temperatures for detached eclipsing binaries and those analyzed by the IRFM with the values retrieved from the position of the stars in the colour-magnitude diagram and interpolation in the isochrones of Bertelli et al. (1994) showed that the latter provides a method to estimate those fundamental parameters with high accuracy for most purposes. Stellar radii can be predicted to $`\sim 6`$%, masses to $`\sim 8`$%, and effective temperatures to $`\sim 2`$% for the ranges in these parameters listed in §4. However, the longest episode in stellar evolution, the main sequence, imposes a bias in the samples. Giants and supergiants are not common in the analysed samples with high-quality data, and hence our conclusions cannot be safely extended to very low gravities. New observational efforts are in progress (see, e.g., Bennett, Brown & Yang 1998) and are likely to improve the situation in the future. The same or very similar arguments apply to the metal content of the stars. Our sample is biased towards solar-like metallicities and from the material studied here it is not possible to derive any conclusion for the lower-metallicity domain. 
The addition of more constraints, such as prior knowledge (from spectroscopy or photometry) of the stellar metal abundance, and a more detailed analysis, weighting the different possible stellar models with care depending on the speed with which they cross that part of the colour-magnitude diagram, should improve the results and possibly extend the applicability of the method to metal-poor stars. Rotation changes the position of the stars in the colour-magnitude diagram (Maeder & Peytremann 1970). Typically, the position of a star rotating at $`\sim 200`$ km s<sup>-1</sup> will change by 0.1–0.3 mag in $`M_v`$ and 200–250 K in $`T_{\mathrm{eff}}`$. Therefore, for high rotational velocities, rotation has to be taken into account before applying the procedure described here. It is possible to get better B–V colour estimates than those compiled in the Hipparcos catalogue. The calibration of Harmanec (1998) makes it possible to accurately estimate the V and B colours from the B–V and U–B indices together with the Hipparcos $`H_p`$ magnitudes. Besides, the B–V colours listed in the Hipparcos catalogue can be refined by combining them with Earth-based measurements, as proved by Clementini et al. (1999). We have selected the set of isochrones derived from the evolutionary calculations of Bertelli et al. (1994) because they are one of the most homogeneous and comprehensive among those publicly available in electronic format. It is of interest to check whether the use of alternative models would lead to the same conclusions. Considering the particular case of stars between 0.8 and 1.25 $`\mathrm{M}_{\odot }`$ of 4 Gyr age, we can compare the calculations of VandenBerg (1985) with those of Bertelli et al. (1994). Figure 7 displays such a comparison in both the theoretical ($`\mathrm{log}g`$–$`T_{\mathrm{eff}}`$) and observational ($`M_v`$–(B–V)) planes. The points in the isochrones corresponding to equal masses (left-side panels; 0.8, 1.0, 1.25 $`\mathrm{M}_{\odot }`$) or radii (right-hand panels; 0.83, 1.20, 2.0, 2.51 $`\mathrm{R}_{\odot }`$) have been linked by solid segments. Differences in the theoretical plane may be the effect of one or more of the different ingredients in the calculations, such as the convection treatment or the radiative opacities (Los Alamos Opacity Library vs. OPAL), as well as slightly different assumed metallicities (Z=0.0169 vs. 0.02), or mass fractions of helium (Y=0.25 vs. 0.28). The agreement is better for the dwarfs with lower effective temperatures and gets poorer for lower gravities. However, the discrepancies in the observational plane become more significant and systematic, and could induce important differences in the results. Even though there are details such as discordant assumptions for the Sun’s bolometric correction ($`-0.12`$ vs. $`-0.08`$), the different model atmospheres employed in the translation from the theoretical magnitudes must play a major role. The more recent model atmospheres (Kurucz 1992) employed by Bertelli et al. perform adequately, as suggested by the conclusions in §3. The positions in the $`\mathrm{log}g`$–$`T_{\mathrm{eff}}`$ plane predicted by the calculations of Schaller et al. (1992) without overshooting, shown with filled circles in the left-side upper panel of Fig. 7 for 0.8, 1.0, and 1.25 $`\mathrm{M}_{\odot }`$, do not exactly overlap either with the predictions derived from Bertelli et al.’s calculations or with those of VandenBerg (1985), even though they include the same radiative opacities (LAOL; Huebner et al. 1977) and assume the same value for the mixing-length $`\alpha `$ as VandenBerg’s models. 
Again, slightly different values for Z and Y, and details in the treatment of envelope convection, must be responsible. The open circle corresponds to a 1.25 $`\mathrm{M}_{\odot }`$ model with overshooting. Balona (1994) made use of the calibration of Balona & Shobbrook (1984) to estimate absolute magnitudes from the synthetic Strömgren indices computed by Lester et al. (1986) based on Kurucz (1979) model atmospheres. He derived effective temperatures, gravities, and luminosities from the model atmospheres, and used the $`T_{\mathrm{eff}}`$s and luminosities to interpolate in the models of stellar evolution by Claret & Gimenez (1992) and Schaller et al. (1992) and then estimate evolutionary gravities. The comparison showed that the evolutionary gravities were larger than the gravities from the model atmospheres for stars with $`\mathrm{log}g<4`$, and the discrepancy increased towards lower gravities. Besides, the situation appeared to be reversed for stars with $`\mathrm{log}g>4`$. The evolutionary calculations used here tend to underpredict the stellar masses for low-gravity stars. However, this effect is much smaller than, and in the opposite sense to, the huge differences found by Balona (1994), which were as large as 0.5 dex for $`\mathrm{log}g`$ (evolutionary) $`\sim 3.5`$. The explanation is still unclear, but we note that the gravities from LTE (model-atmosphere) spectroscopic analyses (iron ionization equilibrium) and those obtained combining Hipparcos parallaxes and evolutionary models seem to agree reasonably well (Allende Prieto et al. 1999; see also Fuhrmann 1998). Gravitational redshifts are proportional to the mass-radius ratio and systematically affect measurements of stellar radial velocities (Dravins et al. 1999). Photospheric spectral lines are shifted by roughly 600 m s<sup>-1</sup> for a star like the Sun, and by more than 1000 m s<sup>-1</sup> for more massive stars in the main sequence. The empirically determined errors for masses and radii derived from the position of a star in the colour-magnitude diagram make it possible to estimate its gravitational redshift with an uncertainty of the order of 100 m s<sup>-1</sup>. A precise knowledge of the stellar radius and the distance to the star combine to translate the measured stellar flux to the flux at the star’s surface, a quantity that can be compared with synthetic spectra from model atmospheres. Spectrophotometry is already available for the large sample of stars observed by IUE, HST, and many other missions, and their investigation will likely provide valuable information on the stellar content of the Galactic disc in the solar neighbourhood and beyond. ###### Acknowledgements. This work has been partially funded by the NSF (grant AST961814) and the Robert A. Welch Foundation of Houston, Texas. We thank Mario M. Hernández for fruitful discussions about rapidly rotating stars. We are also indebted to the referee, Gianpaolo Bertelli, for helpful comments that improved the paper. We have made use of data from the Hipparcos astrometric mission of the ESA, the NASA ADS, and SIMBAD.
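As a closing numerical footnote, the gravitational-redshift estimate discussed above is easy to reproduce (a sketch; constants rounded):

```python
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m

def v_grav(mass, radius, rel_dm=0.08, rel_dr=0.06):
    """Gravitational redshift GM/(Rc), with the ~8% mass and ~6% radius
    uncertainties quoted in this section propagated in quadrature."""
    v = G * mass / (radius * c)
    return v, v * math.hypot(rel_dm, rel_dr)

v, dv = v_grav(M_sun, R_sun)
print(f"{v:.0f} +/- {dv:.0f} m/s")   # ~640 +/- 60 m/s for a solar analogue
```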
# A measurement of Ω from the North American Test Flight of Boomerang ## 1. Introduction The dramatic improvement in the quality of astronomical data in the past few years has presented cosmologists with the possibility of measuring the large scale properties of our universe with unprecedented precision (e.g. Kamionkowski & Kosowsky (1999)). The sensitivity of the angular power spectrum of the Cosmic Microwave Background (CMB) to cosmological parameters has led to analyses of existing datasets with increasing sophistication in an attempt to measure such fundamental quantities as the energy density of the universe and the cosmological constant. This activity has led to improved methods of Maximum Likelihood Estimation (Bond, Jaffe & Knox 1998, hereafter BJK (98), Bartlett et al (1999)), attempts at enlarging the range of possible parameters (Lineweaver (1998); Tegmark (1999); Melchiorri et al. (1999)), and the incorporation of systematic uncertainties in the experiments (Dodelson & Knox, from now on DK (99), Ganga et al. (1997)). Within the class of adiabatic inflationary models there is now strong evidence from the CMB that the universe is flat. The most extensive range of parameters has been considered by Tegmark (1999), where the author found that a flat universe was consistent with CMB data at the $`68\%`$ confidence level. A more thorough analysis was performed in DK (99), incorporating the non-Gaussianity of the likelihood function, possible calibration uncertainties and the most recent data: again, the $`68\%`$ likelihood contours comfortably encompass the Einstein-de Sitter Universe. All these previous analyses were restricted to the class of open and flat models. In this letter we present further evidence for a flat universe from the CMB. Using the methods for parameter estimation described in BJK (98), we perform a search in cosmological parameter space for the allowed range of values for the fractional density of matter, $`\mathrm{\Omega }_M`$, and the cosmological constant, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, given the recent estimate of the angular power spectrum from the 1997 test flight of the Boomerang experiment (see the companion paper Mauskopf et al. (1999)). We obtain our primary constraints from this data set alone and find compelling evidence, within the family of adiabatic inflationary models, for a flat universe. In section 2 we briefly describe the Boomerang experiment, the data analysis undertaken and the characteristics of the angular power spectrum obtained. In section 3 we spell out the parameter space we have explored, the method we use and the constraints we obtain on the fractional energy density of the universe, $`\mathrm{\Omega }=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$. Finally in section 4 we discuss our findings and combine them with other cosmological data to obtain a new constraint on the cosmological constant. ## 2. The Data The data we use here are from a North American test flight of Boomerang (Boomerang/NA), a balloon-borne telescope designed to map CMB anisotropies from a long-duration, balloon-borne (LDB) flight above the Antarctic. A detailed description of the instrument can be found in Masi et al. (1999). A description of the data and observations, with a discussion of calibrations, systematic effects and signal reconstruction, can be found in Mauskopf et al. (1999). 
This test flight produced maps of the CMB with more than 200 square degrees of sky coverage at frequencies of 90 and 150 GHz with resolutions of 26 arcmin FWHM and 16.6 arcmin FWHM respectively. The size of the Boomerang/NA 150 GHz map (23,561 $`6^{\prime }`$ pixels) required new methods of analysis able to incorporate the effects of correlated noise, and new implementations capable of processing large data sets. The pixelized map and angular power spectrum were produced using the MADCAP software package of Borrill (1999a, 1999b) (see http://cfpa.berkeley.edu/~borrill/cmb/madcap.html) on the Cray T3E-900 at NERSC and the Cray T3E-1200 at CINECA. FIG. 1.— The Likelihood function of each of the eight band powers, $`\mathrm{}_{\mathrm{eff}}=(58,102,153,204,255,305,403,729)`$, reported in Mauskopf et al. (1999), computed using the offset lognormal ansatz of BJK (98). The angular power spectrum, $`C_{\mathrm{}}`$, resulting from the analysis of the 150 GHz map was estimated in eight bins spanning $`\mathrm{}`$, with seven bins centered between $`\mathrm{}=50`$ and $`\mathrm{}=400`$ and one bin at $`\mathrm{}=800`$. The bin correlation matrix is diagonalized as in BJK (98), resulting in eight orthogonalized (independent) bins. We present the likelihood for each orthogonalized band power in Figure 1, using the offset lognormal ansatz proposed in BJK (98). As described in Mauskopf et al. (1999), the data show strong evidence for an acoustic peak with an amplitude of $`\sim 70\mu `$K<sub>CMB</sub> centered at $`\mathrm{}\sim 200`$. ## 3. Measuring Curvature The Boomerang/NA angular power spectrum covers a range of $`\mathrm{}`$ corresponding to the horizon size at decoupling. The amplitude and shape of the power spectrum is primarily sensitive to the overall curvature of the universe, $`\mathrm{\Omega }`$ (Doroshkevich, Zeldovich, & Sunyaev (1978)); other parameters such as the scalar spectral index, $`n_S`$, the fractional energy density in baryons, $`\mathrm{\Omega }_B`$, the cosmological constant, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and the Hubble constant, $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, will also affect the height of the peak and therefore some “cosmic confusion” will arise if we attempt individual constraints on each of the parameters (Bond et al (1994)). In our analysis we shall restrict ourselves to the family of adiabatic CDM models. This involves considerable theoretical prejudice in the set of parameters we choose to vary although, as the presence of an acoustic peak at $`\mathrm{}\sim 200`$ becomes more certain, the assumption that structure was seeded by primordial adiabatic perturbations becomes more compelling (Liddle (1995); however, counterexamples exist, Turok (1997); Durrer & Sakellariadou (1997); Hu (1999)). We should, in principle, consider an 11-dimensional space of parameters; sensible priors due to previous constraints and the spectral coverage of the Boomerang/NA angular power spectrum reduce the space to 6 dimensions. In particular, we assume $`\tau _c=0`$ (lacking convincing evidence for high redshift reionization), we assume a negligible contribution of gravitational waves (as predicted in the standard scenario), and we discard the weak effect due to massive neutrinos. The remaining parameters to vary are $`\mathrm{\Omega }_{CDM}`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\mathrm{\Omega }_B`$, $`h`$, $`n_S`$ and the amplitude of fluctuations, $`C_{10}`$, in units of $`C_{10}^{COBE}`$. 
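For reference, the offset lognormal ansatz used in Fig. 1 gives each orthogonalized band a particularly simple likelihood; a sketch (the variable names are ours, not those of the MADCAP output):

```python
import numpy as np

def band_loglike(C_theory, C_obs, x_B, sigma_Z):
    """Offset lognormal approximation of BJK (98) for one band power:
    Gaussian in Z = ln(C + x_B), with offset x_B and width sigma_Z."""
    Z_t = np.log(C_theory + x_B)
    Z_o = np.log(C_obs + x_B)
    return -0.5 * ((Z_t - Z_o) / sigma_Z) ** 2

def total_loglike(C_theory_bands, C_obs_bands, x_bands, sigma_bands):
    # Sum over the eight independent (orthogonalized) bands.
    return sum(band_loglike(*args) for args in
               zip(C_theory_bands, C_obs_bands, x_bands, sigma_bands))
```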
The combination $`\mathrm{\Omega }_Bh^2`$ is constrained by primordial nucleosynthesis arguments: $`0.013\le \mathrm{\Omega }_Bh^2\le 0.025`$, while we set $`0.5\le h\le 0.8`$. For the spectral index of the primordial scalar fluctuations we make the choice $`0.8\le n_S\le 1.3`$, and we allow a $`20\%`$ variation in $`C_{10}`$. As our main goal is to obtain constraints in the ($`\mathrm{\Omega }_M=\mathrm{\Omega }_{CDM}+\mathrm{\Omega }_B`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$) plane, we let these parameters vary in the range $`[0.05,2]\times [0,1]`$. Proceeding as in DK (99), we attribute a likelihood to a point on this plane by finding the remaining four, “nuisance”, parameters that maximize it. The reasons for applying this method are twofold. First, if the likelihood were a multivariate Gaussian in all the parameters, maximizing with respect to the nuisance parameters corresponds to marginalizing over them. Second, if we define our $`68\%`$, $`95\%`$ and $`99\%`$ contours where the likelihood falls to $`0.32`$, $`0.05`$ and $`0.01`$ of its peak value (as would be the case for a two-dimensional multivariate Gaussian), then the constraints we obtain are conservative relative to any other hypersurface we may choose in parameter space, in the sense that they rule out a smaller range of parameter space than other usual choices. The likelihood function for the estimated band powers is non-Gaussian, but one can apply the “radical compression” method proposed by BJK (98); the likelihood function is well approximated by an offset lognormal distribution whose parameters can be easily calculated from the output of MADCAP. The theory $`C_\ell `$’s are generated using CMBFAST (Seljak & Zaldarriaga (1996)) and the recent implementation for closed models, CAMB (Lewis, Challinor & Lasenby (1999)). We search for the maximum along a 4-dimensional grid of models, using the fact that variations in $`C_{10}`$ and $`n_S`$ are computationally cheap. We also searched for the multidimensional maxima of the likelihood adopting a Downhill Simplex Method (Press et al. (1989)), obtaining consistent results. FIG. 2.— The Likelihood function of $`\mathrm{\Omega }=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$ normalized to unity at the peak after marginalizing along the $`\mathrm{\Omega }_M`$–$`\mathrm{\Omega }_\mathrm{\Lambda }`$ direction. The dashed line is the cumulative likelihood. In Figure 2, we plot the likelihood of $`\mathrm{\Omega }`$ normalized to 1 at the peak where, again, we have maximized along the $`\mathrm{\Omega }_M`$–$`\mathrm{\Omega }_\mathrm{\Lambda }`$ direction. The likelihood shows a sharp peak near $`\mathrm{\Omega }=1`$, and this result is insensitive to the tradeoff between $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (see Figure 3 and the explanation in the following paragraphs). This is an extreme manifestation of the “cosmic degeneracy” problem (because we are focusing on just the first peak): we are able to obtain robust constraints on $`\mathrm{\Omega }`$ without strong constraints on $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ individually. Within the range of models we are considering, we find that $`68\%`$ of integrated likelihood corresponds to $`0.85\le \mathrm{\Omega }\le 1.25`$ ($`0.65\le \mathrm{\Omega }\le 1.45`$ at $`95\%`$). The best fit is a marginally closed model with $`\mathrm{\Omega }_{CDM}=0.26`$, $`\mathrm{\Omega }_B=0.05`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.75`$, $`n_S=0.95`$, $`h=0.70`$, $`C_{10}=0.9`$.
An almost equally good fit is given by $`\mathrm{\Omega }_{CDM}=0.39`$, $`\mathrm{\Omega }_B=0.07`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.65`$, $`n_S=0.90`$, $`h=0.55`$, $`C_{10}=1.0`$. In Figure 3 (top panel) we estimate the likelihood of the data for a $`20\times 20`$ grid in ($`\mathrm{\Omega }_M`$,$`\mathrm{\Omega }_\mathrm{\Lambda }`$) by applying the maximization/marginalization algorithm described above. The effect of marginalizing is, as expected, to expand the contours along the $`\mathrm{\Omega }=`$constant lines, but it has little effect in the perpendicular direction, and we are able to rule out a substantial region of parameter space. For $`\mathrm{\Omega }_M\sim 1`$ models, the position of the peak is solely dependent on the angular-diameter distance, with a good approximation being $`\ell _{peak}\propto \mathrm{\Omega }^{-1/2}`$; this approximation breaks down when $`\mathrm{\Omega }_M\to 0`$, where the early time integrated Sachs-Wolfe effect becomes important and $`\ell _{peak}`$ is far more sensitive to $`\mathrm{\Omega }`$ (White & Scott (1996)). This effect leads to a convergence of contour levels as $`\mathrm{\Omega }_M\to 0`$ in Figure 3. FIG. 3.— The likelihood contours in the ($`\mathrm{\Omega }_M`$,$`\mathrm{\Omega }_\mathrm{\Lambda }`$) plane, evaluated at the maxima of the remaining four “nuisance” parameters. The top panel is from the Boomerang/NA data, the bottom panel is from Boomerang/NA+COBE. The contours correspond to 0.32, 0.05 and 0.01 of the peak value of the likelihood. The small triangle indicates the best fit. The dashed line corresponds to the flat models. ## 4. Discussion In the previous section we have obtained a constraint on $`\mathrm{\Omega }`$ using only the Boomerang/NA data. These new results are consistent with Lineweaver (1998), Tegmark (1999) and DK (99). However, the Boomerang/NA data on its own does not constrain the shape and amplitude of the power spectrum at $`\ell \lesssim 25`$, which limits our ability to independently determine the parameters $`n_S`$, $`\mathrm{\Omega }_B`$, $`h`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and $`C_{10}`$. We combine the Boomerang/NA data with the 4-year COBE/DMR angular power spectrum to attempt to break this degeneracy. In Figure 3 (bottom panel) we plot the likelihood contours, again maximized over the nuisance parameters, for the combined Boomerang/NA and COBE data. The inclusion of the COBE data does not greatly affect the constraints at high $`\mathrm{\Omega }_M`$ or the confidence levels on $`\mathrm{\Omega }`$, but, as expected, it helps to close off the contours at low values of $`\mathrm{\Omega }_M`$. The best fit model changes to have $`\mathrm{\Omega }_{CDM}=0.46`$, $`\mathrm{\Omega }_B=0.05`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.50`$, $`n_S=1.0`$, $`h=0.70`$, $`C_{10}=0.94`$. We find that for the likelihood to be greater than $`0.32`$ of its peak value we need $`\mathrm{\Omega }_M>0.2`$, again similar to the results of DK (99). One can combine our constraints with those obtained from the luminosity-distance measurements of high-$`z`$ supernovae (Perlmutter et al. (1998); Schmidt et al. (1998)): using the 1-$`\sigma `$ constraint from Perlmutter et al. (1998), $`\mathrm{\Omega }_M-0.6\mathrm{\Omega }_\mathrm{\Lambda }=-0.2\pm 0.1`$, we find $`0.2\le \mathrm{\Omega }_M\le 0.45`$ and $`0.6\le \mathrm{\Omega }_\mathrm{\Lambda }\le 0.85`$. A few comments are in order about the robustness of our analysis. Firstly, we have not truly marginalized over the nuisance parameters.
However the constraints we obtain in this way are, if anything, more conservative. Secondly, although we are limiting ourselves to standard adiabatic models, a strong case can be made against the rival theory of topological defects: the presence of a fairly localized rise and fall in the data around $`\ell \sim 200`$ indicates that the characteristic broadening due to decoherence of either cosmic strings (Contaldi, Hindmarsh & Magueijo (1999)) or textures (Pen, Seljak & Turok (1997)) is strongly disfavoured. Finally we have restricted ourselves to only four extra nuisance parameters. Again we believe this does not affect our main result (our constraints on $`\mathrm{\Omega }`$), although it may affect the low $`\mathrm{\Omega }_M`$ constraints when we combine the Boomerang/NA data with COBE; the results from Tegmark (1998) and DK (99) lead us to believe that the effect will not greatly change our results. To summarize, we have used the angular power spectrum of the Boomerang/NA test flight to constrain the curvature of the universe. Given that we have based our results on this data set alone, our results are completely independent of previous analyses of the CMB. At the time of submission, this letter is also the first analysis of this kind to include closed models in the computation. We find strong evidence against an open universe: we find that $`0.65\le \mathrm{\Omega }\le 1.45`$ at the 95$`\%`$ confidence level, significantly ruling out the currently favoured open inflationary models for structure formation (Lyth & Stewart (1990); Ratra & Peebles (1995); Bucher, Goldhaber & Turok (1995)). Much tighter constraints will soon be placed on these and other cosmological parameters by future data sets, including data obtained during the Antarctic LDB flight of Boomerang, which mapped over $`1200`$ square degrees of the sky with $`12^{\prime }`$ angular resolution and higher sensitivity per pixel than Boomerang/NA. We acknowledge useful conversations with Dick Bond, Ruth Durrer, Eric Hivon, Tom Montroy, Dmitry Pogosyan and Simon Prunet. The Boomerang program has been supported by Programma Nazionale Ricerche in Antartide, Agenzia Spaziale Italiana and University of Rome La Sapienza in Italy, by NASA grant numbers NAG5-4081 & NAG5-4455 in the USA, and by the NSF Science & Technology Center for Particle Astrophysics grant number SA1477-22311NM under AST-9120005, by the NSF Office of Polar Programs grant number OPP-9729121 and by PPARC in UK. This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. Additional computational support for the data analysis has been provided by CINECA/Bologna. We also acknowledge using the CMBFAST, CAMB and RADPack packages.
# Cosmological models with one extra dimension. ## 1 Acknowledgements. I am grateful to C.Armendariz Picon, V.F.Mukhanov, A.M.Boyarsky and D.V.Semikoz for the useful discussions on the subject. This work was supported by the Sonderforschungsbereich SFB 375 für Astroteilchenphysik der Deutschen Forschungsgemeinschaft. ## 2 Appendix. In this Appendix we find solutions of the vacuum Einstein equations in $`(4+1)`$ dimensions. If we take the space-time metric in the form (4) the components of the Ricci tensor are $`R_{00}=\frac{1}{4}(\dot{\lambda }\dot{\nu }-\dot{\lambda }^2-2\ddot{\lambda })+\frac{1}{4}(\nu ^{\prime 2}-\nu ^{\prime }\lambda ^{\prime }+2\nu ^{\prime \prime })e^{\nu -\lambda }+\frac{3\nu ^{\prime }}{2r}e^{\nu -\lambda }`$ $`R_{11}=\frac{1}{4}(\lambda ^{\prime }\nu ^{\prime }-\nu ^{\prime 2}-2\nu ^{\prime \prime })+\frac{1}{4}(\dot{\lambda }^2-\dot{\nu }\dot{\lambda }+2\ddot{\lambda })e^{\lambda -\nu }+\frac{3\lambda ^{\prime }}{2r}`$ $`R_{01}=\frac{3\dot{\lambda }}{2r};R_{22}=\frac{1}{2}re^{-\lambda }(\lambda ^{\prime }-\nu ^{\prime })-2e^{-\lambda }+2k`$ $`R_{33}=fR_{22};R_{44}=f\mathrm{sin}^2\theta R_{22}`$ where $`k`$ is defined right after equation (8). The relevant components of the Einstein equations which define the functions $`\lambda ,\nu `$ in (4) are $`\left(\frac{\lambda ^{\prime }}{2r}-\frac{1}{r^2}\right)e^{-\lambda }+\frac{k}{r^2}=𝒯_0^0`$ (39) $`-\left(\frac{\nu ^{\prime }}{2r}+\frac{1}{r^2}\right)e^{-\lambda }+\frac{k}{r^2}=𝒯_1^1.`$ (40) Taking $`𝒯_\alpha ^\beta =0`$ one finds that for the vacuum solutions of Einstein equations $$e^\nu =e^{-\lambda }=k-\frac{\mu }{r^2}.$$ (41)
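As a quick symbolic cross-check of Eq. (41) against Eqs. (39)–(40) with $`𝒯_\alpha ^\beta =0`$ (an illustrative sketch added here, not part of the original paper):

```python
import sympy as sp

r, k, mu = sp.symbols('r k mu', positive=True)
N = k - mu / r**2                  # e^{nu} = e^{-lambda} = N, Eq. (41)
lam, nu = -sp.log(N), sp.log(N)

# Left-hand sides of Eqs. (39) and (40) with vanishing stress tensor
eq39 = (sp.diff(lam, r) / (2*r) - 1/r**2) * sp.exp(-lam) + k/r**2
eq40 = -(sp.diff(nu, r) / (2*r) + 1/r**2) * sp.exp(-lam) + k/r**2
print(sp.simplify(eq39), sp.simplify(eq40))   # both print 0
```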
# 9.84 vs. 9.84: The Battle of Bruny and Bailey J. R. Mureika (newt@palmtree.physics.utoronto.ca), Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7 Canada Abstract At the recent 1999 World Athletics Championships in Sevilla, Spain, Canada’s Bruny Surin matched Donovan Bailey’s National and former World Record 100 m mark of 9.84 s. The unofficial times for each, as read from the photo-finish, were 9.833 s and 9.835 s respectively. Who, then, is the fastest Canadian of all time? A possible solution is offered, accounting for drag effects resulting from ambient tail-winds and altitude. From the moment Bruny Surin’s silver-clad Seville performance popped up on the screen, it was only a matter of time before the question was raised. This story, of course, has its roots in the July 27, 1996 performance of one Donovan Bailey, Canada’s prime hopeful at the centennial Olympic Games in Atlanta. Coming from behind, Donovan shifted to a gear which that day only he possessed, surging ahead to the tape in a World Record mark of 9.84 s. Fast forward to August 22, 1999: Montreal’s Bruny Surin battles for global century supremacy with the formidable Maurice Greene, who a shy two months earlier had eclipsed Bailey’s world mark in an ominously reminiscent 9.79 s. Although falling mildly short of his rival’s 9.80 s gold medal romp, Surin’s time was far from disappointing, yet in a way uniquely Canadian: 9.84 s. The score so far: 9.84 s Bailey, 9.84 s Surin – but is a 9.84 s always a 9.84 s? Without delving too deeply into the philosophical, one needs to address the standards of electronic timing. For those not familiar with the equipment, the top-of-the-line photo-finish cameras actually sample at 0.001 s, implying that the athletes’ performances are initially recorded to three decimal places. For various reasons, the precision is only kept to two places, but the rounding process isn’t quite scientific. Unless the third decimal place is ’0’, the times are rounded up to the next highest hundredth. So, two performances can be up to 0.009 s apart, and still be regarded as “equal”! Herein lies our current dilemma. In a recent issue of Athletics, it was pointed out that Bailey’s 9.84 s was initially a 9.835 s, while Surin’s 9.84 s was really 9.833 s. How does reaction time fit in? Donovan slept in the blocks for 0.174 s (a potential nail-in-the-coffin for an Olympic final!), while Bruny blasted ahead of his field in 0.127 s. So, after a little math, Surin clocks in with 9.706 s, but Bailey now leads at 9.661 s. Is Donovan’s performance truly of Olympic proportions, as compared to Bruny’s? In case you didn’t see this coming (by now you should all know better), we can’t disregard two vital pieces of data: wind speed and altitude! The measurements in question were $`+0.7`$ m/s in Atlanta (approximately 315 m above sea level), and $`+0.2`$ m/s in Seville (about 12 m above sea level). Since a tail-wind boosts a sprint time, Bailey’s $`+0.7`$ m/s tail gave him more of an advantage than Surin’s $`+0.2`$ m/s. However, a higher altitude sprint is easier than one closer to sea level – hence, a Seville race will be slower than one in Atlanta! What to do? Is this debate destined for the files of Unsolved Mysteries? Through the miracle of numerical modeling, it’s possible to estimate the benefit associated with each statistic.
Drag is calculated as $$drag=\frac{1}{2}\rho C_dA(v-w)^2,$$ (1) where $`A`$ is the cross-sectional area of the sprinter, $`C_d`$ is the drag coefficient, $`\rho `$ the density of the air, $`v`$ the sprinter’s speed, and $`w`$ the wind speed. Note the dependence on $`\rho `$ and $`w`$: the higher the altitude, the thinner the air, and the lower the value of $`\rho `$. Likewise, the stronger the tail wind, the smaller $`(v-w)^2`$ gets. Hence, both imply a lower overall drag on the sprinter. Since the effect of wind will vary with altitude, it’s reasonable to convert all performances to their sea level equivalent (or 0 metres altitude). The following chart gives a quick indication of the degree to which a 9.72 s sea-level clocking (assuming reaction is subtracted) will be boosted by differing wind and altitude conditions. The last row represents the elevation of Mexico City, to give appreciation for the advantage experienced in the 1968 Olympics (the density of air there is roughly $`76`$–$`78\%`$ of that at sea level, so clearly with the right tail wind, it’s no wonder that the sprints and jumps experienced record-breaking performances). Plugging the numbers into my model, I find the following quick figures: the altitude+wind combo for Bailey implies that his race would be equivalent to roughly a 9.719 s ($`+0.044`$ s from just wind; $`+0.058`$ s combined). Surin’s race would correspond to a 9.720 s century ($`+0.013`$ s wind; $`+0.014`$ s combined). There we have it: instead of 9.84 vs. 9.84, after correcting for reaction time and drag effects, we end up with 9.720 s vs. 9.719 s. Since exact values of $`C_d`$ and $`A`$ are unknown, their estimation introduces a degree of uncertainty to any calculation. So, it’s certainly not unreasonable to expect that this could account for a 0.001 s discrepancy, implying that Bailey’s and Surin’s times are effectively indistinguishable! Thus, whose 9.84 s is faster? According to these preliminary results: they’re both the fastest. But, with two 9.84 s clockings topping the national list, Canada certainly comes out ahead.
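As a rough illustration of how Eq. (1) feeds into such a correction, here is a toy integration of a sprint with a simple exhaustion term plus aerodynamic drag. All parameter values ($`f_0`$, $`\tau `$, $`C_d`$, $`A`$, $`m`$, the atmospheric scale height) are guesses for illustration only and are not the calibrated model quoted above; only the sign and rough size of the wind/altitude effect should be taken seriously:

```python
import numpy as np

def race_time(w, altitude, f0=9.3, tau=1.2, Cd=0.6, A=0.45, m=80.0):
    """Toy 100 m model: dv/dt = f0 - v/tau - (rho*Cd*A / 2m) * (v - w)^2."""
    rho = 1.225 * np.exp(-altitude / 8420.0)   # crude isothermal atmosphere
    dt, t, x, v = 1e-4, 0.0, 0.0, 0.0
    while x < 100.0:
        a = f0 - v / tau - 0.5 * rho * Cd * A * (v - w) ** 2 / m
        v += a * dt
        x += v * dt
        t += dt
    return t

# Atlanta (w = +0.7 m/s, 315 m) vs. Seville (w = +0.2 m/s, 12 m):
print(race_time(0.7, 315.0) - race_time(0.2, 12.0))  # negative: Atlanta helps
```

Both the stronger tail wind and the thinner air make the Atlanta conditions slightly faster, which is the direction of the corrections quoted above.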
# La Plata-Th 99/11 IP-BBSR 99/32 Monopoles in non-Abelian Einstein-Born-Infeld Theory ## Abstract We study static spherically symmetric monopole solutions in the non-Abelian Einstein-Born-Infeld-Higgs model with the normal trace structure. These monopoles are similar to the corresponding solutions with the symmetrised trace structure and exist only up to some critical value of the strength of the gravitational interaction. In addition, similar to their flat space counterparts, they also admit a critical value of the Born-Infeld parameter $`\beta `$. The Dirac-Born-Infeld (DBI) model has recently received special consideration in connection with string theory and D-brane dynamics after the discovery that the low energy effective action for D-branes is precisely described by the DBI action (for a review see ). BPS solutions to DBI theory for gauge fields and scalars were then studied and interpreted in terms of branes pulled by strings. Different non-Abelian generalizations of the DBI action have been proposed. They differ in the way a scalar action is defined from objects carrying group indices. The use of a symmetrised trace, proposed by Tseytlin in the context of superstring theory, seems to be the natural one in connection with supersymmetry and leads to a linearized Lagrangian with BPS equations identical to the Yang-Mills ones. The use of the ordinary trace and other recipes for defining the DBI action have also been investigated. Vortex, monopole and other soliton-like solutions to theories containing a DBI action have been studied for Abelian and non-Abelian gauge symmetry, in this last case both using the symmetrised and the normal trace operation to define a scalar DBI action. A distinctive feature was discovered for DBI vortices and monopoles: there exists a critical value $`\beta _c`$ of the Born-Infeld $`\beta `$-parameter below which regular solutions cease to exist. In the non-Abelian case, this critical value can be tested only when the normal trace is adopted since, for the symmetrised trace, the Lagrangian is known only as a perturbative series in $`1/\beta `$ and the results are valid outside the domain where $`\beta `$-criticality takes place. The existence of $`\beta _c`$ is much like the phenomenon occurring for self-gravitating monopoles, which exhibit a critical behavior with respect to the strength of the gravitational interaction: above some maximum value of a parameter $`\alpha `$, related to the strength of the gravitational interaction, regular monopole solutions collapse. This behavior, originally described in Einstein-Yang-Mills-Higgs theory, has been shown to take place also when the DBI action governs the dynamics of the gauge field. Since the symmetrised trace has been adopted in these last references, the existence of $`\beta _c`$ in DBI theories coupled to gravity could not be analysed; it is the purpose of the present work to address this issue, namely, to study the Einstein-DBI-Higgs model using the normal trace for defining the DBI action, find self-gravitating monopole solutions, determine whether they cease to exist below some critical value $`\beta _c`$ as in the flat space case and, in the affirmative, study the interplay between $`\alpha `$ and $`\beta _c`$.
We consider the following Einstein-Born-Infeld-Higgs action for $`SU(2)`$ fields with the Higgs field in the adjoint representation $`S=\int d^4x\sqrt{-g}\left[L_G+L_{BI}+L_H\right]`$ (1) with $`L_G=\frac{R}{16\pi G},`$ $`L_H=-\frac{1}{2}D_\mu \varphi ^aD^\mu \varphi ^a-\frac{e^2g^2}{4}\left(\varphi ^a\varphi ^a-v^2\right)^2`$ and the non-Abelian Born-Infeld Lagrangian, $`L_{BI}=\beta ^2tr\left(1-\sqrt{1+\frac{1}{2\beta ^2}F_{\mu \nu }F^{\mu \nu }-\frac{1}{8\beta ^4}\left(F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }\right)^2}\right)`$ (2) where $`D_\mu \varphi ^a=\partial _\mu \varphi ^a+eϵ^{abc}A_\mu ^b\varphi ^c,`$ $`F_{\mu \nu }=F_{\mu \nu }^at^a=\left(\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a+eϵ^{abc}A_\mu ^bA_\nu ^c\right)t^a`$ and the trace $`tr`$ in Lagrangian (2) is defined so that $`tr(t^at^b)=\delta ^{ab}`$. Here we are interested in purely magnetic configurations, hence we have $`F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }=0`$. The elementary excitations are a massless photon, two massive charged vector bosons with mass $`M_W=ev`$, and a massive neutral Higgs scalar with mass $`M_H=\sqrt{2}gev`$. Varying the action with respect to the metric we obtain the following expression for the energy-momentum tensor $`T_{\lambda \rho }=\frac{g^{\mu \nu }F_{\mu \lambda }^aF_{\nu \rho }^a}{\sqrt{1+\frac{1}{4\beta ^2}F_{\mu \nu }^aF^{a\mu \nu }}}-2\beta ^2g_{\lambda \rho }\left(1-\sqrt{1+\frac{1}{4\beta ^2}F_{\mu \nu }^aF^{a\mu \nu }}\right)`$ (3) For static spherically symmetric solutions, the metric can be parametrized as $`ds^2=-e^{2\nu (R)}dt^2+e^{2\lambda (R)}dR^2+r^2(R)(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2)`$ (4) We consider the ’t Hooft-Polyakov ansatz for the gauge and scalar fields $`A_t^a(R)=0=A_R^a,A_\theta ^a=e_\phi ^a\frac{W(R)-1}{e},A_\phi ^a=-e_\theta ^a\frac{W(R)-1}{e}\mathrm{sin}\theta ,`$ (5) and $`\varphi ^a=e_R^avH(R).`$ (6) Putting the above ansatz in Eq. (1), defining $`\alpha ^2=4\pi Gv^2`$ and rescaling $`R\to R/ev`$, $`r(R)\to r(R)/ev`$ and $`\beta \to \beta ev^2`$, we get the following expression for the Lagrangian $`\int dR\,e^{\nu +\lambda }\left[\frac{1}{2}\left(1+e^{-2\lambda }\left((r^{\prime })^2+\nu ^{\prime }(r^2)^{\prime }\right)\right)-\alpha ^2r^2\left\{2\beta ^2\left(1-\sqrt{1+\frac{V_1}{\beta ^2}}\right)-V_2\right\}\right]`$ (7) where $`V_1=\frac{1}{r^2}e^{-2\lambda }(W^{\prime })^2+\frac{1}{2r^4}(W^2-1)^2`$ (8) and $`V_2=\frac{1}{2}e^{-2\lambda }(H^{\prime })^2+\frac{1}{r^2}(HW)^2+\frac{1}{4}g^2(H^2-1)^2`$ (9) Here the prime denotes differentiation with respect to $`R`$. The dimensionless parameter $`\alpha `$ can be expressed as the mass ratio $`\alpha =\sqrt{4\pi }\frac{M_W}{eM_{Pl}}`$ (10) where $`M_{Pl}=1/\sqrt{G}`$ is the Planck mass. As expected, in the limit $`\beta \to \mathrm{\infty }`$ the above action reduces to that of the Einstein-Yang-Mills-Higgs model. Also, the limit $`\alpha =0`$, for which we must have $`\nu (R)=0=\lambda (R)`$, corresponds to the flat space Born-Infeld-Higgs theory. Note that the use of the normal trace allowed us to reaccommodate the Lagrangian in terms of a square root without reference to the gauge group generators; this is not possible for the case of the symmetrised trace. From now on we consider the gauge $`r(R)=R`$, corresponding to Schwarzschild-like coordinates, and rename $`R=r`$. We define $`A=e^{\nu +\lambda }`$ and $`N=e^{-2\lambda }`$.
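The $`\beta \to \mathrm{\infty }`$ statement can be verified symbolically. Writing $`F^2`$ for $`F_{\mu \nu }^aF^{a\mu \nu }`$ in the magnetic case, a small sympy sketch (an added illustration, not from the paper) recovers the Yang-Mills Lagrangian $`-F^2/4`$ as the leading term:

```python
import sympy as sp

# Expand beta^2 * (1 - sqrt(1 + F2/(2*beta^2))) for large beta by
# substituting beta = 1/eps and expanding around eps = 0.
F2, eps = sp.symbols('F2 epsilon', positive=True)
L_BI = (1 / eps**2) * (1 - sp.sqrt(1 + F2 * eps**2 / 2))
print(sp.series(L_BI, eps, 0, 3))   # -F2/4 + F2**2*eps**2/32 + O(eps**3)
```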
Integrating the $`tt`$ component of the energy-momentum tensor we get the mass of the monopole, equal to $`M/evG`$, where $`M=\alpha ^2\int _0^{\mathrm{\infty }}dr\,r^2\left\{V_2-2\beta ^2\left(1-\sqrt{1+\frac{V_1}{\beta ^2}}\right)\right\}`$ (11) Following ’t Hooft, the electromagnetic $`U(1)`$ field strength $`\mathcal{F}_{\mu \nu }`$ can be defined as $`\mathcal{F}_{\mu \nu }=\frac{\varphi ^aF_{\mu \nu }^a}{\varphi }-\frac{1}{e\varphi ^3}ϵ^{abc}\varphi ^aD_\mu \varphi ^bD_\nu \varphi ^c.`$ Then using the ansatz (5) the magnetic field $`B^i=\frac{1}{2}ϵ^{ijk}\mathcal{F}_{jk}`$ is equal to $`e_r^i/er^2`$, with a total flux $`4\pi /e`$ and unit magnetic charge. The $`tt`$ and $`rr`$ components of Einstein’s equations are $`\frac{1}{2}\left(1-(rN)^{\prime }\right)=\alpha ^2r^2\left\{V_2-2\beta ^2\left(1-\sqrt{1+\frac{V_1}{\beta ^2}}\right)\right\}`$ (12) $`\frac{A^{\prime }}{A}=\alpha ^2r\left[(H^{\prime })^2+\frac{2(W^{\prime })^2}{r^2\sqrt{1+\frac{V_1}{\beta ^2}}}\right]`$ (13) and the matter field equations are $`\left(AN\frac{W^{\prime }}{\sqrt{1+\frac{V_1}{\beta ^2}}}\right)^{\prime }=WA\left(H^2+\frac{W^2-1}{r^2\sqrt{1+\frac{V_1}{\beta ^2}}}\right)`$ (14) $`(ANr^2H^{\prime })^{\prime }=AH\left(2W^2+g^2r^2(H^2-1)\right)`$ (15) Note that the field $`A`$ can be eliminated from the matter field equations using Eq. (13). Now we consider the globally regular solutions to the above equations. Expanding the fields in powers of $`r`$ and keeping the leading order terms, we obtain the following expressions near the origin $`H=ar+O(r^3),`$ (16) $`W=1-br^2+O(r^4),`$ (17) $`N=1-cr^2+O(r^4)`$ (18) where $`c`$ is expressed in terms of the free parameters $`a`$ and $`b`$ as $`c=\alpha ^2\left[a^2+\frac{g^2}{6}+\frac{4}{3}\beta ^2\left(\sqrt{1+\frac{6b^2}{\beta ^2}}-1\right)\right]`$ (19) We are looking for asymptotically flat solutions and hence we impose $`N=1-\frac{2M}{r}.`$ (20) Then the gauge and the Higgs fields have the following behaviour in the asymptotically far region: $`W=Cr^Me^{-r}\left(1+O\left(\frac{1}{r}\right)\right)`$ (21) $`H=\{\begin{array}{cc}1-Br^{\sqrt{2}gM-1}e^{-\sqrt{2}gr},\hfill & \text{for }0<g\le \sqrt{2}\hfill \\ 1-\frac{C^2}{g^2-2}r^{2M-2}e^{-2r},\hfill & \text{otherwise.}\hfill \end{array}`$ (24) We solved the equations of motion numerically using the above boundary conditions. The solutions are pretty much the same as those for the case of the symmetrised trace for $`\beta \gtrsim 1`$, and they agree with the corresponding solutions for the Yang-Mills-Higgs case for large $`\beta `$. For a definite value of $`\beta `$, the solution exists up to some critical value $`\alpha _{max}`$ of the parameter $`\alpha `$. The minimum of the metric function decreases as we increase the value of $`\alpha `$ for $`\alpha <\alpha _{max}`$, and it approaches zero for $`\alpha \to \alpha _{max}`$. The solution does not exist for $`\alpha >\alpha _{max}`$. It is observed that when we decrease the value of $`\beta `$, $`\alpha _{max}`$ goes on increasing. The values of $`\alpha _{max}`$ for different $`\beta `$ are given in Table 1. However, it is also found that, similar to the flat space monopoles, there is a critical value $`\beta _c`$ for finite $`\alpha `$, and the solution does not exist for $`\beta <\beta _c`$. For $`\alpha =1`$ and $`g=0`$ we find $`\beta _c\approx 0.1`$, which is much smaller than the corresponding value for flat space, which is approximately $`0.5`$.
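As an independent sanity check of the reduced equations, note that in the flat-space limit $`\alpha =0`$ (so $`A=N=1`$) with $`\beta \to \mathrm{\infty }`$ and $`g=0`$, Eqs. (14)–(15) become the ’t Hooft–Polyakov system, whose BPS (Prasad–Sommerfield) solution $`W=r/\mathrm{sinh}r`$, $`H=\mathrm{coth}r-1/r`$ is known in closed form. A short residual check (an illustrative sketch, not the solver used for the results above):

```python
import numpy as np

# Flat-space, beta -> infinity, g = 0 limit of Eqs. (14)-(15):
#   W'' = W*(H^2 + (W^2 - 1)/r^2),   (r^2 H')' = 2*W^2*H
r = np.linspace(0.5, 10.0, 2001)
h = r[1] - r[0]
W = r / np.sinh(r)
H = 1.0 / np.tanh(r) - 1.0 / r

def d2(f):                                  # central second derivative
    return (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2

res_W = d2(W) - W[1:-1] * (H[1:-1]**2 + (W[1:-1]**2 - 1.0) / r[1:-1]**2)
res_H = np.gradient(r**2 * np.gradient(H, h), h)[1:-1] - 2.0 * W[1:-1]**2 * H[1:-1]
print(np.abs(res_W).max(), np.abs(res_H).max())   # both ~ O(h^2), i.e. tiny
```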
The profiles for different values of $`\alpha ,\beta `$ for the case of $`g=0`$ are given in Figs. 1, 2 and 3. To understand these results, let us recall that the existence of an upper bound $`\alpha _{max}`$ for the gravitational strength can be interpreted by observing that the monopole mass/radius ratio increases as $`\alpha `$ increases, so that $`\alpha _{max}`$ can be seen as the value at which the monopole becomes gravitationally unstable and collapses. This holds for the Yang-Mills action, which corresponds to the $`\beta \to \mathrm{\infty }`$ limit of the DBI action. Now, as $`\beta `$ decreases from its limiting value, the mass of the monopole decreases (see Table 2), so that the collapse should occur for a value $`\alpha _{max}^\beta >\alpha _{max}^{\beta =\mathrm{\infty }}`$, as observed in our solutions. The same kind of analysis can be performed regarding the lowering of $`\beta _c`$ as $`\alpha `$ increases: as shown in the flat-space analyses, the singular behavior occurring at $`\beta _c`$ manifests itself as an abrupt descent in the soliton mass, a phenomenon which is in concurrence with the enhancement of the mass as $`\alpha `$ grows. That is the reason why $`\beta _c^\alpha <\beta _c^{\alpha =0}`$. In summary, we analysed the gravitating monopole solutions in the non-Abelian Born-Infeld-Higgs system with a normal trace structure. The solutions exist up to some critical value $`\beta _c`$ of the Born-Infeld parameter $`\beta `$, below which there is no solution. It was not possible to study this feature in the corresponding model with symmetrised trace structure since the perturbative expansion implicit in this case is not valid for small $`\beta `$. We also found that the parameter $`\alpha `$ has some maximum allowed value $`\alpha _{max}`$ for a definite $`\beta `$. This $`\alpha _{max}`$ increases as we decrease $`\beta `$. It would be worth studying whether similar behaviour occurs in the case of non-Abelian black holes. Also, it should be possible to study dyon solutions in the non-Abelian model with normal trace structure. Acknowledgements: F.A.S is partially supported by CICBA, CONICET (PIP 4330/96), ANPCYT (PICT 97/2285). P.K.T is very grateful to Avinash Khare for many helpful discussions. | $`\beta `$ | $`\alpha _{max}^2`$ | | --- | --- | | 0.10 | 8.1 | | 0.15 | 7.2 | | 0.20 | 6.5 | | 0.25 | 6.1 | | 0.30 | 5.9 | | 0.50 | 5.7 | | 1.00 | 5.6 | Table 1: $`\alpha _{max}^2`$ for different $`\beta `$ ($`g=0`$). $`\alpha _{max}`$ decreases as we increase $`\beta `$. | $`\beta `$ | $`M/evG`$ | | --- | --- | | 1.0 | 1.22097 | | 0.5 | 1.19034 | | 0.27 | 1.11625 | | 0.2 | 1.05062 | | 0.15 | 0.98329 | Table 2: $`M/evG`$ for different $`\beta `$ ($`g=0`$ and $`\alpha ^2=2.5`$). The mass decreases as we decrease $`\beta `$.
# Holography and the Future Tube ## 1 Introduction The purpose of this talk is to describe some remarkable geometric facts relating the Future Tube $`T_n^+`$ of $`n`$-dimensional Minkowski spacetime to the reduced phase space $`P`$, or “space of motions”, of a particle moving in $`(n+1)`$-dimensional Anti-de-Sitter Spacetime, with a view to illuminating the Maldacena conjectures relating string theory on $`AdS_{n+1}`$ to Conformal Field theory on its conformal boundary. ## 2 The Future Tube Slightly confusingly perhaps, the Future Tube $`T_n^+`$ is usually defined as those points of complexified Minkowski spacetime $`𝐄_𝐂^{n-1,1}`$, with complex coordinates $`z^\mu =x^\mu +iy^\mu `$, such that $`y^\mu `$ lie inside the past light cone $`C_n^{-}`$. That is, $`y^0<-\sqrt{y^iy^i}`$. With this convention, together with standard conventions of quantum field theory, a positive frequency function is the boundary value of a function which is holomorphic in the future tube. The relation between a holomorphic function, for example one defined in some bounded domain $`D\subset 𝐂^n`$, and the values of that function on its Shilov boundary $`S`$, which is an $`n`$ real dimensional submanifold of the $`(2n-1)`$ real dimensional boundary $`\partial D`$, may be said to be “holographic” in that the information about two real valued functions of $`2n`$ real variables is captured by two real valued functions of $`n`$ real variables. This is the key idea behind the application of dispersion relations to quantum field theory and their use to derive rigorous general results such as the spin-statistics theorem and invariance under CPT. ## 3 Bounded Complex Domains Homogeneous bounded domains were classified by Cartan and their properties are described in detail by Hua. The case we are interested in corresponds to the Hermitian symmetric space $`D=SO(n,2)/(SO(2)\times SO(n))`$. It is referred to by Hua as “Lie Sphere Space”. In order to see this, consider the complex light cone $`C_𝐂^{n+2}\subset 𝐂^{n+2}`$ given by $$(W^{n+1})^2+(W^{n+2})^2-(W^i)^2=0.$$ (1) Compactified complexified Minkowski spacetime $`\overline{𝐄_𝐂^{n-1,1}}`$ consists of complex light rays passing through the origin. This means that one must identify rays $`W^A`$ and $`\lambda W^A`$, where $`A=1,\mathrm{},n+2`$, $`i=1,\mathrm{},n`$ and $`\lambda \in 𝐂^{*}\equiv 𝐂\setminus \{0\}`$, that is, $`\overline{𝐄_𝐂^{n-1,1}}=C_𝐂^{n+2}/𝐂^{*}`$. Evidently $`SO(n,2;𝐑)`$, acting in the obvious way on $`𝐂^{n+2}`$, leaves the complex light cone $`C_𝐂^{n+2}`$ invariant and commutes with the $`𝐂^{*}`$ action. Thus the action of $`SO(n,2;𝐑)`$ descends to $`D`$. If we restrict the coordinates $`W^A`$ to be real, we obtain the standard construction of $`n`$-dimensional compactified real Minkowski spacetime $`\overline{𝐄^{n-1,1}}`$, as light rays through the origin of $`𝐄^{n,2}`$. It follows that $`\overline{𝐄^{n-1,1}}\cong (S^1\times S^{n-1})/𝐙_2`$. The finite points in Minkowski spacetime, i.e. those not contained in what Penrose refers to as $`\mathcal{I}`$, may be obtained by intersecting the light cone with a null hyperplane which does not pass through the origin.
To get to the description given by Hua, introduce coordinates $`u,w^i`$ parameterising (most of) the light cone by $$W^{n+1}-iW^{n+2}=\frac{1}{u},$$ (2) $$W^{n+1}+iW^{n+2}=\frac{w^iw^i}{u},$$ (3) and $$W^i=\frac{w^i}{u}.$$ (4) If $`w`$ is a complex $`n\times 1`$ column vector, with $`w^2=w^tw`$ and $`|w|^2=w^{\dagger }w`$, then the domain $`D`$ is defined by $$1-|w|^2>\sqrt{|w|^4-|w^2|^2}.$$ (5) The topological boundary is given by the real equation: $$1-2|w|^2+|w^2|^2=0.$$ (6) On the other hand, the Shilov boundary $`S\subset \partial D`$ is determined by the property that the maximum modulus of any holomorphic function on $`D`$ is attained on $`S`$. Consider, for example, the holomorphic function $`w`$. It attains its maximum modulus when $`w=\mathrm{exp}(i\tau )𝐧`$, where $`𝐧`$ is a real unit $`n\times 1`$ vector. Thus $`S`$ is given by $`(S^1\times S^{n-1})/𝐙_2`$. If we take $`W^i`$ to be the real unit $`n`$-vector $`𝐧`$ and $`u=\mathrm{exp}(i\tau )`$, we see that $`S`$ and $`\overline{𝐄^{n-1,1}}`$ are one and the same thing. ## 4 The geodesic Flow of $`AdS_{n+1}`$ Now let us turn to $`(n+1)`$-dimensional Anti-de-Sitter spacetime. One possible approach to quantising a relativistic particle moving in $`AdS_{n+1}`$ might be to look at the relativistic phase space, then pass to the constrained space and “quantise” it. Recall that, in general, the relativistic phase space of a spacetime $`M`$ is the cotangent bundle $`T^{*}M`$ with coordinates $`\{x^\mu ,p_\mu \}`$, canonical one-form $`A=p_\mu dx^\mu `$ and symplectic form $$\omega =dp_\mu \wedge dx^\mu .$$ (7) The geodesic flow is generated by the covariant Hamiltonian $$\mathcal{H}=\frac{1}{2}g^{\mu \nu }p_\mu p_\nu .$$ (8) The flow for a timelike geodesic, corresponding to a particle of mass $`m`$, lies on the level sets, call them $`\mathrm{\Gamma }`$, given by $$\mathcal{H}=-\frac{1}{2}m^2.$$ (9) The restriction of the canonical one-form $`A`$ to the level sets $`\mathrm{\Gamma }`$ endows them with a contact structure; in other words, the restriction of $`dA`$ has rank $`2n`$ and its one-dimensional kernel is directed along the geodesic flow. Thus, locally at least, one may pass to the reduced phase space $`P=\mathrm{\Gamma }/G_1`$, where $`G_1`$ is the one-parameter group generated by the covariant Hamiltonian $`\mathcal{H}`$, by a “Marsden-Weinstein reduction”. Geometrically, the group $`G_1`$ takes points and their cotangent vectors along the world lines of the timelike geodesics. The reduced $`2n`$-dimensional phase space $`P`$ is naturally a symplectic manifold. Moreover the isometries of $`M`$ act by canonical transformations, i.e. by symplectomorphisms, on $`P`$, taking timelike geodesics to timelike geodesics. Each Killing vector field $`K_a^\mu (x)`$ on $`M`$ determines a “moment map” $`\mu _a(x,p)=K_a^\mu p_\mu `$ on $`T^{*}M`$ which Poisson commutes with the covariant Hamiltonian $`\mathcal{H}`$. These thus descend to the reduced phase space $`P`$, where their Poisson algebra is the same as the Lie algebra of the isometry group. Thus, for example, if $`n=2`$ the Poisson algebra is $`sl(2;𝐑)`$. This fact is behind the connection between black holes, conformal mechanics, and Calogero models discussed in . We are interested in the quantum theory rather than the classical geodesic motion, and so it is appealing to attempt to implement the geometric quantization programme by “quantising” $`P`$. A point of particular interest would then be to compare it with a more conventional approach based on quantum field theory in a fixed background.
In the general case this appears to be difficult because one does not have a good understanding of the space of timelike geodesics $`P`$. However, in the present case of $`AdS_{n+1}`$, the space $`P`$ may be described rather explicitly. It is a homogeneous Kähler manifold which is isomorphic to the future tube $`T_n^+`$ of $`n`$-dimensional Minkowski spacetime. Because it is a Kähler manifold one may adopt a holomorphic polarisation. The resulting “quantization” is the same as that considered by Berezin and others many years ago (see e.g. ). A more physical description is in terms of coherent states. To see the relation between $`P`$ and $`T_n^+`$ explicitly it is convenient to regard $`AdS_{n+1}`$ as the real quadric in $`𝐄^{n,2}`$ given by $$(W^{n+1})^2+(W^{n+2})^2-(W^i)^2=1.$$ (10) It is clear by comparing (1) and (10) that the light cone $`C_{n+2}`$ and the quadric $`AdS_{n+1}`$ can only touch at infinity, which explains why the conformal boundary of $`AdS_{n+1}/𝐙_2`$ is the same as compactified Minkowski spacetime $`\overline{𝐄^{n-1,1}}\cong (S^1\times S^{n-1})/𝐙_2`$, where $`𝐙_2`$ is the antipodal map $`𝐙_2:W^A\to -W^A`$. Using this representation of $`AdS_{n+1}`$, one easily sees that every timelike geodesic is equivalent to every other one under an $`SO(n,2)`$ transformation. They may all be obtained as the intersection of some totally timelike 2-plane passing through the origin of the embedding space $`𝐄^{n,2}`$ with the $`AdS_{n+1}`$ quadric. The space $`P`$ of such two-planes may thus be identified with the space of geodesics. It is a homogeneous space of the isometry group, in fact the Grassmannian $`SO(n,2)/(SO(2)\times SO(n))`$. Note that, as one expects, the dimension of $`P`$ is $`2n`$. The denominator of the coset is the maximal compact subgroup of $`SO(n,2)`$. The two factors correspond to timelike rotations in the timelike 2-plane and rotations of the normal space respectively. The former may be identified with the one-parameter group $`G_1`$ generated by the covariant Hamiltonian $`\mathcal{H}`$. Thus the level set $`\mathrm{\Gamma }`$ is the coset space $`SO(n,2)/SO(n)`$. ## 5 Quantisation Given a manifold $`X`$ with coordinates $`x`$ and a measure $`\mu `$, an overcomplete set of coherent states is a set of vectors $`\{|x\rangle \}`$ in some quantum mechanical Hilbert space $`H_{\mathrm{qm}}`$ providing a resolution of the identity $$\int _X\mu \,|x\rangle \langle x|=\widehat{1}.$$ (11) Usually one takes $`X`$ to be a homogeneous space of some Lie group. However that is not essential for the general concept. A Hermitian operator $`\widehat{A}`$ on $`H_{\mathrm{qm}}`$ is associated with a real function $`A(x)`$ via the relation $$\widehat{A}=\int _XA(x)\mu \,|x\rangle \langle x|.$$ (12) The commutator algebra of a set of operators on $`H_{\mathrm{qm}}`$ then gives rise to an algebra $`𝒜`$ on the associated functions. If $`X`$ is a symplectic manifold one expects, at least in the limit that Planck’s constant is small, or more strictly speaking in the limit of large action, that the algebra $`𝒜`$ reflects the Poisson algebra of the functions on $`X`$.
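A standard concrete instance of the resolution of identity (11) is given by harmonic-oscillator coherent states $`|\alpha \rangle `$ with measure $`\mu =d^2\alpha /\pi `$; the following numerical sketch (an added illustration, not part of the original talk) checks (11) in a truncated Fock basis:

```python
import numpy as np
from math import factorial

# <n|alpha> = exp(-|alpha|^2/2) alpha^n / sqrt(n!); integrate |a><a| d^2a/pi
# over a disc |alpha| < R on a grid, truncated to the first N Fock states.
N, R, M = 12, 6.0, 300
xs = np.linspace(-R, R, M)
dA = (xs[1] - xs[0]) ** 2
norms = np.array([np.sqrt(factorial(k)) for k in range(N)])

I = np.zeros((N, N), dtype=complex)
for x in xs:
    for y in xs:
        a = x + 1j * y
        ket = np.exp(-abs(a) ** 2 / 2) * a ** np.arange(N) / norms
        I += np.outer(ket, ket.conj()) * dA / np.pi
print(np.max(np.abs(I - np.eye(N))))   # ~ 0 up to discretization error
```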
For the special case of a non-compact Kähler manifold of dimension $`2n`$, with Kähler form $`\omega `$, coordinates $`w^i`$ and metric $$g_{i\overline{j}}=\frac{\partial ^2F}{\partial w^i\partial \overline{w}^j},$$ (13) where $`F`$ is the Kähler potential, it was proposed by Berezin that one choose for $`H_{\mathrm{qm}}`$ the space of holomorphic functions with inner product $$\langle g(w)|f(w)\rangle =\int _X\overline{g(w)}f(w)\frac{\omega ^n}{n!}\mathrm{exp}\left(-\frac{1}{h}F\right)$$ (14) One must now choose $`h`$ so that one gets a non-trivial Hilbert space $`H_{\mathrm{qm}}`$. Given that, one may proceed to represent the isometries of $`X`$ on $`H_{\mathrm{qm}}`$ and to introduce other operators and investigate their algebras. Note that the metric on $`H_{\mathrm{qm}}`$ will in general depend upon $`h`$. The quantity $`h`$ is referred to in this context as Planck’s constant, although physically, for dimensional reasons, that is not really accurate. If it happens that $`X`$ is Einstein Kähler with negative scalar curvature then the Monge-Ampère equation tells us that $$\mathrm{det}g_{i\overline{j}}=\mathrm{exp}(-\mathrm{\Lambda }F)$$ (15) Thus, to get convergence one wants $`h<\frac{1}{|\mathrm{\Lambda }|}`$. An upper bound for Planck’s constant seems very puzzling from a physical point of view but it has a simple explanation. Consider the circle bundle $`S^1\to E\to X`$ over the Kähler manifold $`X`$ with a connection whose curvature $`F=dA`$ is a multiple, $`e`$, of the symplectic form $`\omega `$. Thus $`F=e\omega `$, and one may think of $`e`$ as the product of the electric charge and the strength of the magnetic field. The covariant derivative is $`\overline{𝒟}=\overline{\partial }+e\partial _{\overline{w}}F`$. Thus if $`\psi =e^{-eF}f(w)`$, where $`f(w)`$ is holomorphic, then $$\overline{𝒟}\psi =0.$$ (16) Now the space of spinors on a Kähler manifold may be identified with the space of differential forms of type (0,p). The Dirac operator corresponds to the operator $`\sqrt{2}(\overline{\partial }+\overline{\partial }^{\dagger })`$. Minimally coupling the Dirac operator to the Kähler connection corresponds geometrically to taking the tensor product of the space of spinors with a power of the canonical bundle. The first power gives the canonical $`Spin^c`$ structure on $`X`$. If $`X`$ is assumed to have trivial second cohomology we may take any (not necessarily rational) power. The Dirac operator minimally coupled to $`F`$ corresponds to $`\sqrt{2}(\overline{𝒟}+\overline{𝒟}^{\dagger })`$. It follows that by setting $`2e=\frac{1}{h}`$ we may identify $`H_{\mathrm{qm}}`$ with the space of zero modes of the minimally coupled Dirac operator on $`X`$. Now typically $`X`$ has negative curvature, and so only if the charge on the spinors is sufficiently large, and of the correct sign, will there be a big space of, or indeed any, zero modes. The above theory was rather general. We now restrict to the case of a bounded complex domain $`D`$ in $`𝐂^n`$. We begin by describing its Kähler structure. Associated with $`D`$ is the Hilbert space $`\mathrm{Hol}(D)`$ of square integrable holomorphic functions.
If $`\{\varphi _s\}`$, $`s=1,\mathrm{},`$ is an orthonormal basis for $`\mathrm{Hol}(D)`$, the Bergman Kernel $`K(w,\overline{v})`$ is defined by $$K(w,\overline{v})=\varphi _s(w)\overline{\varphi _s(v)}.$$ (17) The Bergman Kernel gives rise to a Kähler potential $`F(w,\overline{w})=\mathrm{log}K(w,\overline{w})`$, in terms of which the Bergman metric on $`D`$ is given by $$g_{i\overline{j}}=\frac{\partial ^2F}{\partial w^i\partial \overline{w}^j}.$$ (18) Geometrically, the basis $`\{\varphi _s\}`$ gives a map of $`D`$ into $`\mathrm{𝐂𝐏}^{\mathrm{\infty }}`$ and $`g_{i\overline{j}}`$ is the pull-back of the Fubini-Study metric. Now although $`D`$ has finite Euclidean volume, because $`K`$ and $`F`$ typically diverge at the boundary $`\partial D`$, the volume of $`D`$ with respect to the Kähler metric $`g_{i\overline{j}}`$ diverges. For example, in our case Hua has calculated $`K(w,\overline{w})`$ and finds $$K(w,\overline{w})=\frac{1}{V_n}\frac{1}{(1+|w^2|^2-2|w|^2)^n},$$ (19) where $`V_n=\frac{\pi ^n}{2^{n-1}n!}`$ is the Euclidean volume of $`D`$. ## 6 Cheng-Mok-Yau-Anti-de-Sitter Spacetimes The theory described above does not take into account gravity. Of course supergravity methods provide a way of doing that. They would lead to replacing $`AdS_n`$ by some other solution of the supergravity equations of motion, for example by some other Einstein Space. A great deal of work has already appeared going in this direction. One new possibility will be described in this section. However it is also worth asking how the space $`P`$ might be generalised. The relationship between these two generalisations is then of interest. This will be described in the next section. If $`n`$ is odd, $`n=2m+1`$ say, then $`AdS_{2m+1}`$ may be regarded as a circle bundle over complex hyperbolic space $`H_𝐂^m`$. Clearly, using complex coordinates $`Z^A`$, $`A=1,\mathrm{},m+1`$ in $`𝐑^{2m+2}\cong 𝐂^{m+1}`$, $`AdS_{2m+1}`$ is given by $$|Z^1|^2-|Z^2|^2-\mathrm{}-|Z^{m+1}|^2=1.$$ (20) We may now fibre by the $`U(1)`$ action $`Z^A\to e^{it}Z^A`$. The orbits are closed timelike curves in $`AdS_{2m+1}`$. The base space $`B`$ has a Riemannian, i.e. positive definite, metric. In fact $`B=SU(m,1)/U(m)\cong H_𝐂^m`$ is the unit ball in $`𝐂^m`$. This is another bounded complex domain in $`𝐂^m`$. The metric on the base space is precisely its Bergman metric. In fact $`H_𝐂^m`$ with its Bergman metric is the symmetric space dual of $`\mathrm{𝐂𝐏}^m`$ with its Fubini-Study metric. The construction we have just described is the symmetric space dual of the usual Hopf fibration. The metric is $$ds^2=-(dt+A_adx^a)^2+g_{ab}dx^adx^b$$ (21) where $`a,b=1,2,\mathrm{},2m`$, $`g_{ab}`$ is the Einstein-Kähler metric and $`dA`$ is the Kähler form. In traditional relativist’s language, $`AdS_{2m+1}`$ has been exhibited as an ultra-stationary metric (i.e. one with constant Newtonian potential $`U=\frac{1}{2}\mathrm{log}(-g_{00})`$). The Sagnac or gravito-magnetic connection, governing frame-dragging effects, corresponds precisely to the connection of the standard circle bundle over the Kähler base space. Its curvature is the Kähler form. Now it is easy to check that one may replace the Bergman manifold $`\{B,g_{ab}\}`$ with any other $`2m`$-dimensional Einstein-Kähler manifold with negative scalar curvature and obtain a $`(2m+1)`$-dimensional Lorentzian Einstein manifold admitting Killing spinors in this way. According to Cheng and Yau and Mok and Yau, there is a rich supply of complete Einstein Kähler metrics on complex domains. Their investigation might well prove to be fruitful.
The conformal boundary of these spacetimes is clearly related to the boundary of the complex domain, but the precise relationship is not at all clear. Apart from their possible applications to the Maldacena conjecture, these spacetimes may provide a useful arena, albeit in higher dimensions, for investigating the effects of closed timelike curves in general relativity. ## 7 Adapted Complex Structures We have seen that the space $`P`$ of timelike geodesics in $`AdS_n`$ carries an Einstein Kähler structure. In fact the entire cotangent bundle $`T^{*}AdS_n`$ admits a Ricci-flat pseudo-Kähler metric, i.e. one with signature $`(2n-2,2)`$. The existence of this Ricci-flat pseudo-Kähler metric may be obtained by analytically continuing Stenzel’s positive definite Ricci-flat Kähler metric on the cotangent bundle of the standard $`n`$-sphere, $`T^{*}S^n`$. The simplest case is when $`n=2`$. Stenzel’s metric is then the Eguchi-Hanson metric, which may be analytically continued to give a “Kleinian” metric of signature $`(2,2)`$ on $`T^{*}AdS_2`$. In fact the cotangent bundle of any Riemannian manifold may be endowed with a canonical complex structure and a variety of Kähler metrics. We will describe this construction shortly and formulate a version for Lorentzian manifolds. Before doing so we describe Stenzel’s construction. The cotangent bundle of the $`n`$-sphere, $`T^{*}S^n`$, may be identified with an affine quadric in $`𝐂^{n+1}`$. This may be seen as follows: $`T^{*}S^n`$ consists of pairs of real $`(n+1)`$-vectors $`X^A`$ and $`P^A`$ such that $$X^1X^1+X^2X^2+\mathrm{}+X^{n+1}X^{n+1}=1,$$ (22) $$X^1P^1+X^2P^2+\mathrm{}+X^{n+1}P^{n+1}=0.$$ (23) If $`P=\sqrt{P^1P^1+P^2P^2+\mathrm{}+P^{n+1}P^{n+1}}`$, one may map $`T^{*}S^n`$ into the affine quadric $$(Z^1)^2+(Z^2)^2+\mathrm{}+(Z^{n+1})^2=1$$ (24) by setting $$Z^A=A^A+iB^A=\mathrm{cosh}(P)X^A+i\frac{\mathrm{sinh}(P)}{P}P^A.$$ (25) Stenzel then seeks a Kähler potential depending only on the restriction to the quadric (24) of the function $$\tau =|Z^1|^2+|Z^2|^2+\mathrm{}+|Z^{n+1}|^2.$$ (26) The Monge-Ampère equation now reduces to an ordinary differential equation. In the case of $`AdS_{p+2}`$ we may proceed as follows. The bundle of future directed timelike vectors in $`AdS_{p+2}`$, $`T^+AdS_{p+2}`$, consists of pairs of timelike vectors $`X^A`$, $`P^A`$ in $`𝐄^{p+1,2}`$ such that $$X^AX^B\eta _{AB}=-1$$ (27) and $$X^AP^B\eta _{AB}=0,$$ (28) with $`P^A`$ future directed and $`\eta _{AB}=\mathrm{diag}(-1,-1,+1,\mathrm{},+1)`$ the metric. We define $`P=\sqrt{-P^AP^B\eta _{AB}}`$ and $$Z^A=\mathrm{cosh}(P)X^A+i\frac{\mathrm{sinh}(P)}{P}P^A,$$ (29) which maps $`T^+AdS_{p+2}`$ to the affine quadric $$Z^AZ^B\eta _{AB}=-1.$$ (30) One then seeks a Kähler potential depending only on the restriction to the quadric (30) of the function $$\tau =|Z^0|^2+|Z^{p+2}|^2-|Z^1|^2-\mathrm{}-|Z^{p+1}|^2.$$ (31) The Monge-Ampère equation again reduces to an ordinary differential equation. The canonical complex structure on $`T^{*}M`$ is defined as follows. Let $`\sigma \mapsto \left(x^\mu (\sigma ),p_\mu =g_{\mu \nu }\frac{dx^\nu }{d\sigma }(\sigma )\right)`$ be a solution of Hamilton’s equations for a geodesic in $`T^{*}M`$. Then in any complex chart $`w^i`$, $`i=1,\mathrm{},n`$, one demands that for all geodesics the map $`𝐂\to T^{*}M`$ given by $`\sigma +i\tau \mapsto (x^\mu (\sigma ),\tau p_\mu )`$ is holomorphic. The “energy” $`E=\frac{1}{2}g^{\mu \nu }p_\mu p_\nu `$ is a real valued positive function on $`T^{*}M`$ which vanishes only on $`M`$. It is plurisubharmonic with respect to the complex structure, i.e. the hermitian metric $$\frac{\partial ^2E}{\partial w^i\partial \overline{w}^j}$$ (32) is positive definite, and $`\sqrt{E}`$ satisfies the homogeneous Monge-Ampère equation $$\mathrm{det}\left(\frac{\partial ^2\sqrt{E}}{\partial w^i\partial \overline{w}^j}\right)=0.$$ (33) In the case of Stenzel’s construction one has $`E=\frac{1}{2}P^2=\frac{1}{8}(\mathrm{cosh}^{-1}(\tau ))^2`$.
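A small numerical illustration of the sphere case (an added sketch, not in the original talk): random $`(X,P)`$ with $`XX=1`$ and $`XP=0`$ land on the quadric (24) under the map (25), since $`ZZ=\mathrm{cosh}^2P-\mathrm{sinh}^2P=1`$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.normal(size=n + 1)
X /= np.linalg.norm(X)            # a point of S^n, Eq. (22)
P = rng.normal(size=n + 1)
P -= (P @ X) * X                  # project so that X.P = 0, Eq. (23)
Pn = np.linalg.norm(P)

Z = np.cosh(Pn) * X + 1j * np.sinh(Pn) / Pn * P   # the map (25)
print(np.sum(Z**2))               # 1 + 0j up to rounding: the quadric (24)
```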
the hermitian metric $$\frac{^2}{w^i\overline{w}^j}$$ (32) is positive definite, and $`\sqrt{}`$ satisfies the homogeneous Monge-Ampère equation $$\mathrm{det}\left(\frac{^2\sqrt{}}{w^i\overline{w}^j}\right)=0.$$ (33) In the case of Stenzel’s construction one has $`=\frac{1}{2}P^2=\frac{1}{8}(\mathrm{cosh}^1(\tau ))^2`$. ## 8 The Physical Dimension The analysis above works for general dimension. However the case $`n=4`$ is special since $`SO(4,2)SU(2,2)/𝐙_2`$. One may identify points in real four-dimensional Minkowski spacetime $`𝐄^{3,1}`$ with two by two Hermitian matrices $`z=x^0+𝐱\sigma `$. The future tube $`T_4^+`$ then corresponds to complex matrices $`x=z^0+𝐳\sigma `$ whose imaginary part is positive definite. The Cayley map $$zw=(zi)(z+i)^1$$ (34) maps this into the bounded holomorphic domain in $`𝐂^4`$ consisting of the space of two by two complex matrices $`w`$ satisfying $$1ww^{}>0.$$ (35) For more details the reader is directed to . For this approach to the compactification of Minkowski spacetime see also ## 9 de-Sitter The aim of the talk has been to describe some intriguing relations between the future tube and the covariant phase space of Anti-de-Sitter spacetime which seem to lie at the heart of holography. One may ask: what about de-Sitter spacetime? As one might expect, everything goes wrong. One problem is that one does not get a positive definite metric on covariant phase space. This makes for difficulties in the usual approach to geometric quantisation. Thie lack of a positive metric is more or less clear because the timelike geodesics are orbits of the non-compact group $`SO(1,1)`$. This last fact is also closely related to thermal radiation seen by geodesics observers in de-Sitter spacetime. A closely related problem concerns the lack of a positive energy generator of $`SO(n,1)`$ and the consequent impossibility of de-Sitter supersymmetry. ## 10 Final Remarks After the talk I became aware of where a geometric quantisation approach to Anti-de-Sitter spacetime is described. I was also told of some work in axiomatic quantum field theory which appears to have some relation to what I have desribed above. ## 11 Acknowledgement I should like to thank the organizers for the opportunity to speak at such a stimuating conference.
# Pulsations and stability of stars with phase transition ## I Introduction A first-order phase transition (PT1) is characterized by a density jump from $`\rho _1`$ to $`\rho _2`$, with $`q=\rho _2/\rho _1>1`$, at some pressure $`P_0`$. PT1 leads to a density discontinuity inside the star, while the pressure is continuous across the boundary between the phases. Also continuous are the gravity- and pressure-induced forces inside the star. Here we review some results on such stars, obtained mainly (but not only) by the author. ## II Three methods There are three main methods of analysing a star’s equilibrium and stability: * Static Criterion (Mass - Central Pressure dependence) * Dynamical Principle (pulsation frequency) * Variational (Energetic) Principle (variation of total energy). Stability loss/restoration (critical equilibrium states) according to these methods is identified as follows: * Mass extremum, $`dM/dP_c=0`$ * Zero frequency of pulsation (of the lowest mode), $`\omega ^2=0`$ * Equilibrium energy extremum, $`\delta ^2E=0.`$ In fig. 1 the first two principles are shown qualitatively. Unstable equilibrium states are shown by the broken line. ## III Newton Theory of Gravitation Here are some general results in NTG for stars with PT1: * At $`q>1.5`$, stability loss occurs at $`P_c=P_0`$ for any EoS, while the recovery of stability at larger $`P_c`$ depends on the EoS * At $`q_{min}(\gamma )<q<1.5`$, stability loss occurs only at $`P_c>P_0`$; here $`\gamma =1+1/n`$, with $`\gamma `$ and $`n`$ the adiabatic and polytropic indices. E.g., $`q_{min}=1.46,1.33,1.20`$ and $`1.09`$ for $`n=1,1.5,2`$ and $`2.5`$. The critical value of the (relative) mass of the new-phase core is larger for smaller $`q`$’s and for smaller $`\gamma `$’s (softer EoS’s). E.g., the critical value of the new-phase core is $`x_{crit}=\frac{8}{9}(q-\frac{3}{2})`$ for $`\gamma =2`$ ($`n=1`$) * At slow rotation with angular velocity $`\mathrm{\Omega }`$, in the spherical approximation, $`q_{crit}=3/2-\mathrm{\Omega }^2/4\pi G\rho _1`$ for stability loss at $`P_c=P_0`$ for any EoS. * For a PT1 starting at finite radius (“neutral core”), the larger the size of the neutral core, the larger the value of $`q_{crit}`$. E.g., $`q_{crit}\to \mathrm{\infty }`$ at $`x\to 3/4`$ for $`n=0`$ and at $`x\to 0.6824`$ for $`n=1`$. ### A Classical example Inverse $`\beta `$-decay reactions in the dense degenerate matter of white dwarf stars lead to nuclear transformations $`(A,Z)\to (A,Z-2)`$ (“neutronization”), and to a density jump with $`q=Z/(Z-2)<1.5`$. The $`M`$–$`\rho _c`$ curves for equilibrium cold white dwarfs in the classical paper by T. Hamada, E. Salpeter, Ap.J. 134 (1961) 669 are incorrect, as: a) the central density $`\rho _c`$ is not a continuous variable, and b) the mass maximum is not at the point $`P_c=P_0`$, but at some $`P_c>P_0`$. ### B General Relativity In GR, the critical value of the energy density jump $`q=\epsilon _2/\epsilon _1`$ is equal to $`3/2(1+P_0/\epsilon _1)`$ for stability loss at $`P_c=P_0`$ for any EoS.
## IV Two-Incompressible-Phase Star Here are some important features of this model: * At $`q\le 1.5`$ there are no unstable equilibrium states * At $`q>1.5`$ recovery of stability occurs (at the point $`P_c=P_2`$) at a relative radius of the new-phase core, $`x=r_{core}/R_{star}`$, defined by the relation: $$f(q,x)=(q-1)^2x^4+4(q-1)x+3-2q=0$$ (1) A star with $`x>\sqrt{2}-1`$ is stable for arbitrarily large $`q`$ * The frequency squared of the small adiabatic radial pulsations of the lowest mode, for a star in slow rotation with angular velocity $`\mathrm{\Omega }`$, is $`\omega _\mathrm{\Omega }^2=\omega _0^2+\mathrm{\Delta }_\mathrm{\Omega }(q,x);`$ $$\omega _0^2=\frac{4\pi G\rho _1f(q,x)}{3(q-1)(1-x)},$$ (2) $$\mathrm{\Delta }_\mathrm{\Omega }(q,x)=\frac{2}{3}\mathrm{\Omega }^2\left(\frac{5x(1-x)(1+x)^2}{1+(q-1)x^5}-\frac{1+(q-1)x}{(q-1)(1-x)}\right).$$ (3) At small cores ($`x\to 0`$), $`\mathrm{\Delta }_\mathrm{\Omega }`$ is negative – rotation reduces the stability of a star with PT1. In general, $`\mathrm{\Delta }_\mathrm{\Omega }`$ may be of either sign; e.g., at $`q>2.11`$, rotation leads to a decrease of the value of $`x_{crit}`$ from Eq. (1). ### A Non-linear pulsations In the next approximation, the pulsations are non-harmonic. Writing $`R=R_{eq}+z`$, $`|z|R_{eq}`$, with $`R_{eq}`$ the radius of the equilibrium model with the same mass, we get in the next-to-zeroth approximation the following equation of motion: $$\ddot{z}+\omega _0^2z+Cz^2+D\dot{z}^2=0,$$ (4) where $`\omega _0^2`$ is as in Eq. (2), while the constants $`C`$ and $`D`$ are some functions of $`q`$ and of the equilibrium value of $`x`$. The solution of Eq. (4), with accuracy up to $`a^2`$ ($`a`$ being the amplitude of $`z`$), is as follows: $$z(t)=-\frac{1}{2}\left(\frac{C}{\omega _0^2}+D\right)a^2+a\mathrm{cos}\omega _0t+\frac{1}{6}\left(\frac{C}{\omega _0^2}-D\right)a^2\mathrm{cos}2\omega _0t.$$ (5) In this approximation the frequency does not differ from the one in the zeroth approximation, $`\omega _0`$; that is, the period is $`T=T_0=2\pi /\omega _0`$, while the pulsations are non-sinusoidal. The amplitude of the star’s expansion is larger than the amplitude of its compression, and the star spends more time in states with $`R>R_{eq}`$ than in states with $`R<R_{eq}`$. The character of the non-harmonicity - a slow large-amplitude “expansion” and a rapid “contraction” with smaller amplitude - should be the same for all stable equilibrium models with $`q>3/2`$. Damping of the pulsations of a star with PT1 depends largely on the relation between the velocity of motion $`v_p`$ and the sound velocity $`v_s`$ in the region of the phase transition. In general, the larger $`v_p/v_s`$, the larger the damping effect due to the PT. ### B General Relativity A full analytical investigation of the equilibrium and stability of the two-incompressible-phase model with PT1 is possible by the static and variational methods. In the first post-Newtonian approximation ($`P_0/\epsilon _11`$), the critical value of the relative “radius” of the core, at which stability recovery occurs, is equal to: $$x_{crit}(q)=x+\mathrm{\Delta }_{PN}(P_0/\epsilon _1),$$ (6) with $`x\equiv x_N`$ defined in Eq. (1) and: $$\mathrm{\Delta }_{PN}=\frac{9-7q+27(q-1)x+(4q^2-33q+27)x^2+(q-1)(9-4q)x^3}{2(q-1)(1+(q-1)x^3)^3}$$ (7) At $`q\to 3/2`$, $`x_{PN}\to q-3/2-\frac{3}{2}P_0/\epsilon _1`$, so that $`x_{PN}=0`$ at $`q=3/2(1+P_0/\epsilon _1)`$, which coincides with the result valid for any EoS and for any, not only small, values of $`P_0/\epsilon _1`$.
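Eq. (1) is easy to check numerically: the recovery radius $`x_{crit}(q)`$ vanishes at $`q=3/2`$, grows to a maximum of $`\sqrt{2}-1`$ at $`q=4+\sqrt{8}`$, and then decreases again, which is why stars with $`x>\sqrt{2}-1`$ are stable for arbitrarily large $`q`$. A short script (an illustrative sketch added here, not from the original paper):

```python
import numpy as np

def x_crit(q):
    """The single root of Eq. (1) in 0 < x < 1 for q > 3/2."""
    roots = np.roots([(q - 1)**2, 0.0, 0.0, 4*(q - 1), 3 - 2*q])
    real = roots[np.isreal(roots)].real
    return real[(real > 0) & (real < 1)][0]

qs = np.linspace(1.52, 80.0, 8000)
xs = np.array([x_crit(q) for q in qs])
i = xs.argmax()
print(qs[i], xs[i])   # ~ 6.828 = 4 + sqrt(8)  and  ~ 0.4142 = sqrt(2) - 1
```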
In general, the first post-Newtonian correction to $`x_{crit}`$ may be of either sign: negative at $`q<1.89`$ and positive for larger $`q`$'s. At $`q=4+\sqrt{8}`$, $`x=\sqrt{2}-1+((59-41\sqrt{2})/2)\,P_0/\epsilon _1`$. In fact, the dependence of both corrections, due to GR and to rotation, on $`x`$ and $`q`$ is rather complicated, and Fig. 2 presents only the part of the $`(q,x)`$ plane with the lines on which $`\mathrm{\Delta }_\mathrm{\Omega }(q,x)=0`$ and $`\mathrm{\Delta }_{PN}(q,x)=0`$. Also shown is the curve $`f(q,x)=0`$ from Eq. (1), which marks the boundary between stable equilibrium states (at right) and unstable ones (at left).

References
1. W.H. Ramsey, MNRAS 110 (1950) 325; 113 (1951) 427; M.J. Lighthill, MNRAS 110 (1950) 339; W.C. De Markus, Astron. J. 59 (1954) 116.
2. T. Hamada, E.E. Salpeter, Ap.J. 134 (1961) 669; E. Schatzman, Bull. Acad. Roy. Belgique 37 (1951) 599; E. Schatzman, White Dwarfs, North-Holland Publ. Co., Amsterdam, 1958.
3. Z.F. Seidov, Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 5 (1968) 93; Soobshch. Shemakha Astrophys. Observ. 5 (1970) 58; Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 1-2 (1970) 128; Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 6 (1969) 79.
4. Ya.B. Zeldovich, Z.F. Seidov (1966) unpublished; Z.F. Seidov, Astrofizika 3 (1967) 189.
5. Ya.B. Zeldovich, I.D. Novikov, Relativistic Astrophysics, Univ. Chicago Press, Chicago, 1971.
6. Z.F. Seidov, Astron. Zh. 48 (1971) 443; B. Kämpfer, Phys. Lett. 101B (1981) 366.
7. Z.F. Seidov, Astrofizika 6 (1970) 521.
8. Z.F. Seidov, Space Research Institute Preprint Pr-889 (1984).
9. M.A. Grinfeld, Doklady Akad. Nauk SSSR 262 (1982) 134; G.S. Bisnovatyi-Kogan, Z.F. Seidov, Astrofizika 21 (1984) 570.
# EUVE Observations of the Magnetic Cataclysmic Variable QQ Vulpeculae
## 1 Introduction
Polars, or AM Her systems, a subset of magnetic cataclysmic variables (CVs), are a unique class of CVs. As with the typical CV, polars contain a white dwarf which accretes matter from a Roche lobe-filling, low-mass secondary star. However, unlike a typical CV, the white dwarf in a polar has a magnetic field strength on the order of a few tens of megagauss (MG). These strong magnetic fields cause the disruption of the accreted material that would otherwise form an accretion disk. Instead, material transfer is routed onto the white dwarf in the form of an accretion stream which follows the path of one or more of the magnetic field lines to impact the surface at one or both of the magnetic poles. As the material approaches the white dwarf, a shock front is created where impact energy is released in the form of extreme ultraviolet (EUV) and X-ray photons. Surface heating, of up to a few hundred thousand degrees Kelvin, and possible penetration of the surface by material blobs complete the accretion region's production of high-energy flux. Cropper (1990) presents a detailed review of polars. Since the accretion regions in polars are the site of the majority of the high-energy emission, we would expect that EUV and X-ray observations would provide a wealth of information about their accretion geometry. This is indeed the case as shown, for example, by Sirk & Howell (1998). Differentiation and study of eclipses of the accretion region by the secondary, by the far and near accretion stream, and by the white dwarf (a self-eclipse: when the rotation of the CV system causes the accretion pole to pass behind the limb of the white dwarf) allow a number of system parameters to be determined. These include the inclination of the system, the position of the accretion pole on the white dwarf, and whether the white dwarf is accreting at one or both of its magnetic poles. With this type of study in mind, one of us (SBH) placed QQ Vul on the target list of the EUVE satellite Right Angle Program (McDonald et al., 1994).
## 2 Previous Observations of QQ Vul
Serendipitously discovered in a survey of soft X-ray sources (Nugent et al., 1983), QQ Vulpeculae was confirmed as an AM Her binary through detection of circular and linear polarization (Nousek et al., 1982, 1984). QQ Vul has a relatively long orbital period for a polar, $`P_{orb}=222.5`$ min (Nousek et al., 1984), and although the strength of the magnetic field has not been directly measured, it is assumed to be fairly typical, 20–30 MG (Liebert & Stockman, 1985). Blackbody fits to the spectrum of QQ Vul yield a temperature of $`T\sim 2\times 10^5`$ K for the X-ray heated accretion regions (Nousek et al., 1984). Mass estimates of M<sub>1</sub>=0.58 M<sub>⊙</sub> and M<sub>2</sub>=0.35 M<sub>⊙</sub> have been determined for QQ Vul by Mukai & Charles (1987). The initial multi-wavelength observations of QQ Vul (Nousek et al., 1984) were able to place some constraints on the system geometry. These observations suggested a system with a magnetic pole tilted 75°–85° from our line of sight during the linear polarization pulse peak, an orbital inclination of $`46^o<i<74^o`$, and a stellar latitude of the accreting magnetic pole in the range $`63^o<\mathrm{\Delta }<80^o`$.
Circular and linear polarization observations (Nousek et al., 1984) have revealed a weak and diffuse linear polarization pulse centered on maximum light, indicating that the near-field accretion column is always in sight, although the pole does graze the limb during a self-eclipse (Cropper, 1998). The polarization data also suggest that there is non-radial accretion flow (McCarthy, Bowyer, & Clarke, 1986; Cropper, 1998), implying a "kink" in the accretion stream which flows to the magnetic pole. Throughout its history of observations, QQ Vul has repeatedly shown a complex and varying X-ray light curve. Studies undertaken with Einstein (Nousek et al., 1984), ROSAT (Beardmore et al., 1995), and EXOSAT (Osborne et al., 1986, 1987) have all shown the complexities apparent in the X-ray component of QQ Vul. Osborne et al. (1987) found that the soft X-ray count rate had doubled within a period of two years and that the shape of the light curve they observed was indeed quite different from the initial X-ray light curve of QQ Vul (Nousek et al., 1984). Figure 1 in Osborne et al. (1987) provides a comparison of the different X-ray light curves previously obtained for QQ Vul. There has been conflicting evidence for whether or not QQ Vul possesses two accreting poles. While the double-peaked nature of the soft X-ray light curves of Osborne et al. (1986, 1987) might lead one to interpret it as two-pole accretion, it is noted there that the second pole is not evident in the optical light curve. Other observations (Beardmore et al., 1995) detected soft X-ray spectral variations as a function of orbital phase which could be modeled by an extended multi-temperature accreting region or by two accreting poles with slightly different temperatures. However, all of these previous observations do agree that if two-pole accretion is taking place, the primary accreting pole is a weaker source in soft X-rays. Recent polarization data seem to strongly suggest that QQ Vul is undergoing two-pole accretion. Optical polarization data from Schwope (1991) cannot be explained by one-pole accretion, and a second linear polarization peak, seen in the data of Cropper (1998), requires a second accreting region to be in view at certain binary phases.
## 3 EUVE Observations and Data Analysis
The EUV photometric data were obtained with the EUVE satellite using the right-angle pointing Scanner Telescopes A and B. Scanner A imaged QQ Vul through a Lexan/Boron filter ($`\lambda _{\mathrm{peak}}=89\mathrm{\AA }`$), sensitive in the bandpass 50–180 Å, while Scanner B data were obtained with an Al/Ti/C filter ($`\lambda _{\mathrm{peak}}=171\mathrm{\AA }`$), sensitive in the bandpass 160–240 Å. Details of the photometric properties of the imaging telescopes on board the EUVE may be found in Sirk et al. (1997). Our observations of QQ Vul began on 1996 Aug 11 (GMT) and continued through 1996 Aug 16 (GMT), spanning $`1.5\times 10^5`$ s, or a total of about 11 $`P_{orb}`$. The data were passed through EUVE standard processing and delivered to us in compressed format on CD-ROM. We then extracted the scanner observations using the standard EUVE data analysis software packages within IRAF. Photometry was performed using an aperture with a seven pixel radius centered on the coordinates of the source and a background annulus having a radius of twenty pixels, also centered on the object.
Due to the large difference in signal-to-noise obtained in the two data sets, every 100 data points (photon events) were binned together for the raw data from Scanner A, and every 30 data points were binned together for the raw data from Scanner B. An IDL program (written by M. Sirk) was then used to produce light curve data files that were phased according to the ephemeris of the inferior conjunction of the secondary star in QQ Vul, HJD 2448446.4710(5) + 0.15452011(11)E (Schwope et al., 1998a). Finally, our resultant light curves, in both Scanner A (Lexan/B) and Scanner B (Al/Ti/C), were rebinned to 0.005 in phase. Figures 1 and 2 present our EUVE light curves phased on the Schwope et al. (1998a) ephemeris. Convolving the mean EUV count rate of 0.01 counts/second with the effective area as a function of wavelength for the Lexan/B filter (Sirk, 1999), we find that the observed EUV flux is $`4.74\times 10^{-14}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. It is interesting to note that Scanner B, viewing QQ Vul through the Al/Ti/C filter, detected anything at all. At a wavelength of $`171\mathrm{\AA }`$, and a hydrogen column density of $`N_H\sim 10^{20}\mathrm{cm}^{-2}`$ (Osborne et al., 1986) for a distance to QQ Vul of 215 pc (Mukai & Charles, 1986), the optical depth is $`\tau \approx 14`$. At such a large optical depth, the ISM transmission of photons from QQ Vul at this wavelength is essentially zero. We would therefore not expect to detect photons through this filter, and indeed no other polar has been detected by EUVE in Scanner B at these wavelengths. However, the Al/Ti/C filter is known to have an X-ray leak peaking near 44 Å (Finley et al., 1988; Vallerga & Sirk, 2000), and we conclude that the data collected here with Scanner B are an X-ray light curve for QQ Vul with a mean effective wavelength of ∼44 Å. We note that this is not the first detection of an X-ray leak with the Al/Ti/C filter; X-ray leaks were also reported in EUVE observations of the nova V1974 Cygni (MacDonald, 1996; Stringfellow & Bowyer, 1996). One flaw in our conclusion concerning the X-ray light curve would arise if QQ Vul were actually quite close by in space, say less than 50 pc. We therefore independently re-determined the distance to QQ Vul using Bailey's method (Bailey, 1982) and newly obtained infrared observations. Bailey's method relies on the relationship between certain physical parameters of the secondary star in the CV system and the distance to that system. The relation is:
$$\mathrm{log}\,d=\frac{K}{5}+1-\frac{S_K}{5}+\mathrm{log}\frac{R_2}{R_{\odot }}$$ (1)
where $`d`$ is the distance, $`K`$ is the $`K`$-band magnitude of the secondary star, $`S_K`$ is the $`K`$-band surface brightness of the secondary, and $`R_2`$ is the radius of the secondary star. Using data kindly obtained by M. Huber with the Wyoming Infrared Observatory on 1998 Aug 30 UT (9:30 hours), we find that QQ Vul had a $`K`$ magnitude of 14.0$`\pm `$0.1 mag. Taking $`S_K=4.5`$ (Bailey, 1982) and $`R_2=0.43R_{\odot }`$ (Nousek et al., 1984), the distance to QQ Vul is determined to be ≈342 pc, a value which is in agreement with earlier measurements, which suggest a lower limit of ≈215 pc (Mukai & Charles, 1986). $`V,R,`$ and $`I`$ observations of QQ Vul by Mukai, Charles, & Smale (1988) detected a visual "companion star" to QQ Vul with $`K\approx 14.5`$ (obtained using $`V-K`$ colors derived for K spectral type stars).
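As a quick sanity check of Eq. (1) and the numbers quoted above, the distance estimate can be reproduced in a few lines (a sketch using only values given in the text):

```python
# Bailey (1982) distance, Eq. (1): log d = K/5 + 1 - S_K/5 + log(R2/Rsun)
from math import log10

def bailey_distance_pc(K, S_K, R2_over_Rsun):
    return 10.0 ** (K / 5.0 + 1.0 - S_K / 5.0 + log10(R2_over_Rsun))

print(bailey_distance_pc(14.0, 4.5, 0.43))   # ~342 pc, as quoted in the text
```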
If possible contamination from this star in the infrared (i.e., $`K`$ band) is taken into consideration, QQ Vul would be even farther away. Therefore, it seems highly unlikely that the detected signal in Scanner B is due to 160–240 Å photons; it is instead the aforementioned X-ray leak. The effective bandpass of the X-ray leak in the Al/Ti/C filter is roughly triangular in shape and covers the range $`15\mathrm{\AA }<\lambda <68\mathrm{\AA }`$. The peak throughput, at 68 Å, is ∼2% of the normal filter transmission near 171 Å, and there is zero sensitivity to photons with a wavelength below 15 Å (Sirk, 1999). Using the effective area ratio of the Lexan/B filter to the Al/Ti/C filter (Sirk, 1999), and the fact discussed above concerning the total absence of long wavelength photons, we can determine that the X-rays observed for QQ Vul have wavelengths from 15–68 Å, with a mean effective central wavelength near 44 Å. Performing an approximate integration under the triangular bandpass and convolving it with the effective area as a function of wavelength, we find the observed X-ray flux to be $`3.60\times 10^{-10}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. X-ray fluxes ranging from $`1.5\times 10^{-12}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> (Osborne et al., 1987) to $`2.25\times 10^{-11}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> (Beardmore et al., 1995) have been reported in previous studies, which, along with the value determined here, reflect the variability of the source. We have thus obtained simultaneous time-resolved photometric data in the X-ray ($`\lambda \sim 44\mathrm{\AA }`$) (Figure 2) and EUV ($`\lambda _{\mathrm{peak}}=89\mathrm{\AA }`$) (Figure 1) wavelength regions which allow us to make a direct comparison of the emitting character of this system in these two wavelength regimes.
## 4 Discussion and Conclusions
The EUV and X-ray light curves, Figures 1 and 2 respectively, both show a double-peaked shape with minima occurring near phase 0.2 and phase 0.85 and maxima at phases 0.0 and 0.45. While each light curve reveals similar gross features, we note that there is far less change and detail in the EUV data. This could be due to the fact that QQ Vul has a broader, more diffuse EUV emitting region but a smaller, better defined X-ray emitting region. The modulations of both light curves are uneven in their minima and maxima. The minima alternate between a deep, essentially complete eclipse at phase 0.85 and a less deep, less well-defined dip near phase 0.2, while the maxima shift between a narrow, peaked one near phase 0.0 and a brighter, broader one covering about 0.4 in phase, centered at 0.45. The two maxima, while showing that the secondary pole is stronger in intensity, are probably nearly equal in phase extent and overall shape, the narrower one being "cut off" around phase 0.9 by an eclipse of the magnetic pole accretion region by the near-field accretion stream (Sirk & Howell, 1998). Interpreted as a two-pole accretor, the locations of the two poles would have centers near binary phases 0.1 and 0.55, that is, almost directly along the line of centers of the binary. The eclipse by the near-field stream of the accretion region facing the secondary star occurs before phase 0, as is the case in most polars (Sirk & Howell, 1998).
The magnetic pole on the far side of the white dwarf suffers no eclipse; thus, it is visible for approximately one-half of the orbital period, and its shape is consistent with a spot latitude of 55°–75° (Sirk & Howell, 1998) given a binary inclination of 60°–90° (see below). Figure 3 is a close-up of the X-ray light curve minimum near phase 0.8. It appears that this minimum has two components. The first half of the broad dip, starting at phase 0.7, has a slow decline up to phase 0.85 and is likely to be the result of an eclipse by the near-field accretion stream. The remaining part of this dip shows an abrupt drop to near zero counts and appears to be flat bottomed from phase 0.85 to 0.92, with the most likely cause being a stellar eclipse of the X-ray emitting region by the secondary star. If true, this constrains the system inclination to be greater than 60°. An interesting feature appearing in the X-ray light curve (Figures 2 and 3), but not seen in the EUV light curve, is the narrow dip which occurs at phase 0.96. Using other polar light curves as a guide, this narrow dip feature is likely to correspond to an eclipse of the accretion region by the far-field accretion stream. It may also be present in the EUV data, but the noise level precludes its discovery. While unresolved, the short duration of this narrow feature (0.015 in phase, or 3 min) provides strong evidence for the compactness of the hard X-ray emitting region in QQ Vul. Translating this time into a size on the white dwarf surface (without correction for latitude and assuming R<sub>WD</sub> = 7000 km), we find an emitting region diameter of 660 km or, if circular, $`f`$=0.002. Taking the gross ratio of the low EUV count rate to the higher X-ray count rate (even as a leaked signal), one could conclude that the magnetic field strength in QQ Vul is relatively low, possibly less than 10–30 MG (Ramsay et al., 1994). However, while in general a large X-ray to EUV ratio indicates a lower magnetic field strength in polars, this is not always the case, with the difference attributed to the structure and size of the accretion region (Sirk & Howell, 1998). Figure 4 presents the hardness ratio (X-ray/EUV) for QQ Vul. Due to the low value of the flux in the EUV light curve, both the X-ray and EUV light curves were re-binned to 0.05 in phase, thereby allowing the hardness ratio not to be dominated by noise spikes due to the low EUV flux values. Figure 4 exhibits an increased hardness near phase 0.25, with a sharp rise near phase 0.3. This peak might indicate the emergence of the second accreting pole. A comparison of our QQ Vul X-ray light curve with previous high-energy observations shows that the continuous orbital variations and temporal changes noted by Osborne et al. (1987) appear to be persistent. Some similarities, however, do exist between our X-ray light curve and the 1983 Oct and 1985 Jun EXOSAT light curves discussed in Osborne et al. (1986, 1987): for example, the unequal minima and maxima and even the presence of a narrow dip, probably due to an eclipse of the accretion region by the far-field accretion stream. This narrow dip feature occurs near phase 0.03 in the 1983 Oct EXOSAT light curve and 0.08 in the 1985 Jun EXOSAT light curve, compared with phase 0.96 seen in our light curve [according to the ephemeris of Schwope et al. (1998a)].
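The spot-size estimate above is simple arithmetic; a sketch of it (with R<sub>WD</sub> = 7000 km taken from the text as an assumed radius) is:

```python
# Narrow-dip duration -> emitting-region size on the white dwarf surface.
from math import pi

P_orb_min = 222.5     # orbital period (Nousek et al., 1984)
dphi      = 0.015     # dip width in phase
R_wd_km   = 7000.0    # assumed white-dwarf radius, as in the text

print(dphi * P_orb_min)                 # ~3.3 min dip duration
d_km = dphi * 2.0 * pi * R_wd_km        # arc length on the WD circumference
print(d_km)                             # ~660 km emitting-region diameter
print((d_km / (2.0 * R_wd_km))**2)      # ~0.002 fractional area f
```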
The 1985 Sep EXOSAT light curve (Osborne et al., 1987) is very different from our present data, as it exhibits a much different shape with nearly equal maxima and yet again a narrow dip, but one which appears at yet another phase (phase 0.71) within the light curve. The fact that the narrow dip is always present but changes phase indicates a movement, within the binary, of the far-field accretion stream, similar to that observed in HU Aqr (Schwope et al., 1998b). KEB is supported by a graduate assistantship from the University of Wyoming. SBH acknowledges partial support for this work from NASA cooperative agreement NCC5-138 through an EUVE Guest Observer mini-grant and from NASA ADP grant NAG5-4233. The authors wish to thank Jennifer Cash and David Ciardi for their assistance with data reduction, Martin Sirk for his assistance with data reduction and for extremely useful discussions concerning the performance of the EUVE filters, Mark Huber for providing $`K`$-band photometry for QQ Vul, and Axel Schwope for supplying us with his QQ Vul ephemeris prior to publication.
# Gluino-Mediated Rare B Decays
Based on an invited talk given at the International Euroconference on Quantum Chromodynamics, July 1999, Montpellier, France, to appear in Nucl. Phys. B Suppl. CERN-TH/99-342, MPI/PhT-99-43
## 1 Introduction
Apart from the low-energy regime of the strong interaction, flavour physics is the least tested part of the SM. This is reflected in the rather large error bars of several flavour parameters, such as the mixing parameters, at the twenty percent level. However, the experimental situation concerning $`B`$ physics will drastically change in the near future. There are several $`B`$ physics experiments successfully running at the moment. In the upcoming years new facilities will start to explore $`B`$ physics with increasing sensitivity and within different experimental settings. The $`b`$ quark system is an ideal laboratory for studying flavour physics. Hadrons containing a $`b`$ quark are the heaviest hadrons experimentally accessible. Since the mass of the $`b`$ quark is much larger than the QCD scale, the long-range strong interactions are expected to be comparatively small and are well under control thanks to the heavy quark expansion. Of particular interest are the so-called rare $`B`$ decays, which are flavour changing neutral current (FCNC) processes that vanish at the tree level of the SM. Thus, they are rather sensitive probes for physics beyond the SM. One of the main difficulties in analysing rare $`B`$ decays is the calculation of short-distance QCD effects. These radiative corrections lead to a tremendous rate enhancement. The QCD radiative corrections bring in large logarithms of the form $`\alpha _s^n(m_b)\mathrm{log}^m(m_b/M)`$, where $`M`$ is the top quark or the W mass and $`m\le n`$ (with $`n=0,1,2,\mathrm{}`$). They have to be resummed at least to leading-log (LL) precision ($`m=n`$). Within the SM the accuracy of the dominating perturbative contribution to $`B\to X_s\gamma `$ was recently improved to NLL precision. This was a joint effort of many different groups. The theoretical error of the previous leading-log (LL) result was substantially reduced to $`\pm 10\%`$, and the central value of the partonic decay rate increased by about $`20\%`$. Supersymmetric extensions of the SM have become the most popular framework of new theoretical structures at higher scales, much below the Planck scale. The precise mechanism of the necessary supersymmetry breaking is unknown. A reasonable approach to this problem is the inclusion of the most general soft breaking terms consistent with the SM gauge symmetries in the so-called unconstrained minimal supersymmetric standard model (MSSM). This leads to a proliferation of free parameters in the theory. A global fit to electroweak precision measurements within supersymmetric models shows that if the superpartner spectrum becomes light, the fit to the data results in typically larger values of $`\chi ^2`$ compared with the SM. Supersymmetric models, however, can always avoid serious constraints from data because the supersymmetric contributions decouple. In the MSSM there are two kinds of new contributions to FCNC processes. The first class results from flavour mixing in the sfermion mass matrices. Moreover, one has CKM-induced contributions from charged Higgs boson and chargino exchanges (see ). This leads to the well-known supersymmetric flavour problem: the severe experimental constraints on flavour violations have no direct explanation in the structure of the unconstrained MSSM.
Clearly, the origin of flavour violation is a model-dependent issue and is based on the relation between the dynamics of flavour and the mechanism of supersymmetry breaking. Keeping in mind our current phenomenological knowledge about supersymmetry, it is suggestive to perform a model-independent analysis of flavour changing phenomena. Such an analysis provides important hints on the more fundamental theory of soft supersymmetry breaking.
## 2 Gluino Contribution to $`B\to X_s\gamma `$
Among inclusive rare $`B`$ decays, the $`B\to X_s\gamma `$ mode is the most prominent because it is the only decay mode in this class that has already been measured. Many papers are devoted to studying the $`B\to X_s\gamma `$ decay and similar decays within the MSSM. However, in most of these analyses, the contributions of supersymmetry were not investigated with the systematics of the SM calculations. In it was shown that in a specific supersymmetric scenario NLL contributions are important and lead to a significant reduction of the stop-chargino mass region where the supersymmetric contribution has a large destructive interference with the charged-Higgs boson contribution. It is expected that the complete NLL calculation drastically decreases the scale dependence and, thus, the theoretical error. The NLL analysis is also a necessary check of the validity of the perturbative ansatz (see ). The NLL calculations in and also in are worked out in the heavy gluino case. In the analysis reported here, the gluino-mediated decay $`B\to X_s\gamma `$ is discussed, where the gluino is not assumed to be decoupled. Previous work on the gluino contribution did not include LL or NLL QCD corrections, and gluino exchanges were treated in the so-called mass insertion approximation (MIA) only, where the off-diagonal squark mass matrix elements are taken to be small and their higher powers neglected. In our analysis we explore the limits of the MIA. Furthermore, we analyse the sensitivity of the bounds on the sfermion mass matrices to radiative QCD corrections. Within the SM, there is one coupling constant, $`G_F`$, relevant to the $`b\to s\gamma `$ decay. There is also only one flavour violation parameter, namely the product of two CKM matrix elements. All the loops giving the logarithms are due to gluons, which imply a factor of $`\alpha _s`$. The corrections can then be classified according to:
* (LL), $`G_F(\alpha _s\mathrm{Log})^N`$,
* (NLL), $`G_F\alpha _s(\alpha _s\mathrm{Log})^N`$.
Thus, the above ordering also reflects the actual size of the contributions to $`b\to s\gamma `$. The corresponding analysis of QCD corrections in the MSSM is much more complicated. The MSSM has several couplings relevant to this decay, and there are several flavour changing parameters. Thus, a formal LL term might have a small coupling, while a NLL contribution is multiplied by a large one. Moreover, the couplings generally depend on the parameters, and the results should be applicable over large domains of the parameters. Another complication in supersymmetric theories is the occurrence of flavour violations such as gluino exchanges (through the gluino-quark-squark coupling), where additional factors of $`\alpha _s`$ are induced. They lead to magnetic penguin operators whose Wilson coefficients naturally contain factors of $`\alpha _s`$. Moreover, these contributions induce magnetic operators where the (small) factor $`m_b`$ is replaced by the gluino mass. Clearly this contribution is expected to be dominant.
The gluino-induced contributions to the decay amplitude for $`b\to s\gamma `$ are of the following form:
* (LL), $`\alpha _s(\alpha _s\mathrm{Log})^N`$
* (NLL), $`\alpha _s\,\alpha _s(\alpha _s\mathrm{Log})^N`$
In the matching calculation, all factors of $`\alpha _s`$, regardless of their source, should be expressed in terms of the $`\alpha _s`$ running with five flavours. However, non-decoupling effects through violations of the supersymmetric equivalence between gauge boson and corresponding gaugino couplings have to be taken into account at the NLL level. Furthermore, one finds that gluino-squark boxes induce new scalar and tensorial four-Fermi operators, which are shown to mix into the magnetic operators without gluons. On the other hand, the vectorial four-Fermi operators mix into magnetic ones only with an additional gluon. Thus, they will contribute at the next-to-leading order only. However, from the numerical point of view the contributions of the vectorial operators (although NLL) are not necessarily suppressed w.r.t. the new four-Fermi contributions; this is due to the expectation that the flavour-violation parameters present in the Wilson coefficients of the new operators are much smaller (or much more stringently constrained) than the corresponding ones in the coefficients of the vectorial operators. This is one of the most important reasons why a complete NLL order calculation should be performed. The mixed graphs, containing a $`W`$, a gluino and a squark, are proportional to $`G_F\alpha _s`$. They give rise only to corrections of the SM operators at the NLL level. There are also penguin contributions with two gluino lines in the NLL matching. The current discussion is restricted to the $`W`$- or gluino-mediated flavour changes and does not consider contributions from other Susy particles such as the chargino, charged Higgs or neutralino. Clearly, analogous phenomena occur in those contributions. To understand the sources of flavour violation that may be present in supersymmetric models in addition to those enclosed in the CKM matrix, one has to consider the contributions to the mass matrix of a squark of flavour $`f`$:
$$\mathcal{M}_f^2=\left(\begin{array}{cc}m_{f,LL}^2& m_{f,LR}^2\\ m_{f,RL}^2& m_{f,RR}^2\end{array}\right)+\left(\begin{array}{cc}F_{f,LL}+D_{f,LL}& F_{f,LR}\\ F_{f,RL}& F_{f,RR}+D_{f,RR}\end{array}\right)$$
In the super CKM basis, where the quark mass matrix is diagonal and the squarks are rotated in parallel to their superpartners, the $`F`$ terms from the superpotential and the $`D`$ terms in the $`6\times 6`$ mass matrices $`\mathcal{M}_f^2`$ turn out to be diagonal $`3\times 3`$ submatrices. This is in general not true for the additional terms (2) from the soft supersymmetry breaking potential. Because all neutral gaugino couplings are flavour diagonal in the super CKM basis, the gluino contributions to the decay width of $`b\to s\gamma `$ are induced by the off-diagonal elements of the soft terms $`m_{f,LL}^2`$, $`m_{f,RR}^2`$, $`m_{f,RL}^2`$.
## 3 Numerical Results
We show a few features of our numerical results based on a complete LL calculation. More details of the analysis can be found in . The size of the gluino contribution crucially depends on the soft terms in the squark mass matrix $`\mathcal{M}_{\stackrel{~}{D}}^2`$ and to a lesser extent on those in $`\mathcal{M}_{\stackrel{~}{U}}^2`$.
In the following, we take all the diagonal entries in the soft matrices $`m_{\stackrel{~}{Q},LL}^2`$, $`m_{\stackrel{~}{D},RR}^2`$, $`m_{\stackrel{~}{U},RR}^2`$ to be equal; their common mass is denoted by $`m_{\stackrel{~}{q}}`$ and set to the value 500 GeV. First, the matrix element $`m_{\stackrel{~}{D},LR;23}^2`$ is varied. All other entries in the soft mass terms are put to zero. Following the notation of , we define
$$\delta _{LR;23}=m_{\stackrel{~}{D},LR;23}^2/m_{\stackrel{~}{q}}^2\quad \text{and}\quad x=m_{\stackrel{~}{g}}^2/m_{\stackrel{~}{q}}^2,$$ (1)
where $`m_{\stackrel{~}{g}}`$ is the gluino mass. In Fig. 1, the QCD-corrected branching ratio is shown as a function of $`x`$ (solid lines), obtained when only $`\delta _{LR,23}`$ is non-vanishing ($`\delta _{LR,23}=0.01`$). Also shown is the range of variation of the branching ratio, delimited by dotted lines, obtained when the low-energy scale $`\mu _b`$ spans the interval 2.4–9.6 GeV. The matching scale $`\mu _W`$ is here fixed to $`m_W`$. As can be seen, the theoretical estimate of $`BR(B\to X_s\gamma )`$ is still largely uncertain ($`\pm 25\%`$). An extraction of bounds on the $`\delta `$ quantities more precise than just an order of magnitude would therefore require the inclusion of next-to-leading logarithmic QCD corrections. It should be noticed, however, that the inclusion of the LL QCD corrections has already removed the large ambiguity in the value to be assigned to the factor $`\alpha _s(\mu )`$ in the gluino-induced operators. Before adding QCD corrections, it is not clear whether the explicit $`\alpha _s`$ factor should be taken at some high scale $`\mu _W`$ or at some low scale $`\mu _b`$; the difference is a LL effect. The corresponding values of $`BR(B\to X_s\gamma )`$ for the two extreme choices of $`\mu `$ are indicated in Fig. 1 by the dot-dashed lines ($`\mu =m_W`$) and the dashed lines ($`\mu =4.8`$ GeV). The branching ratio is then virtually unknown. In spite of the large uncertainties which the branching ratio $`BR(B\to X_s\gamma )`$ still has at the LL in QCD, it is possible to extract indications of the maximum size that the $`\delta `$-quantities may acquire without inducing conflicts with the experimental measurements. As already noted in Ref. , the element $`\delta _{LR,23}`$ is certainly the flavour-violating parameter most efficiently constrained. In Fig. 2, the dependence of $`BR(B\to X_s\gamma )`$ on this parameter is shown when this is the only flavour-violating source. The branching ratio is obtained by adding the SM and the gluino contributions obtained for different choices of $`x`$, for the fixed values $`\mu _b=4.8`$ GeV and $`\mu _W=m_W`$. The gluino contribution interferes constructively with the SM one for negative values of $`\delta _{LR,23}`$, which are then more sharply constrained than the positive values. Overall, this parameter cannot exceed the per cent level. Much weaker is the dependence on $`\delta _{LL,23}`$ if this is the only off-diagonal element in the down squark mass matrix. This dependence is illustrated in Fig. 2 for different choices of $`x`$. The induced gluino contribution interferes constructively with the SM contribution for positive $`\delta _{LL,23}`$. Notice that, given the large values of $`\delta _{LL,23}`$ allowed by the experimental measurement, the MIA cannot be used in this case to obtain a reliable estimate of $`BR(B\to X_s\gamma )`$, whereas it is an excellent approximation to the complete calculation in the case of $`\delta _{LR,23}`$.
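To make the normalization of Eq. (1) concrete, the following toy sketch builds a 2×2 slice of the down-squark mass matrix in the 23 sector with a single $`\delta `$ insertion and checks the positivity of its eigenvalues. The numbers are illustrative assumptions; in this oversimplified slice positivity only fails for $`|\delta |\ge 1`$, whereas the full $`6\times 6`$ matrix with LR terms can develop negative eigenvalues much earlier (see the $`x=4`$ example below).

```python
# Toy 2x2 slice of the 6x6 down-squark mass matrix; illustrative only.
import numpy as np

m_sq = 500.0              # common diagonal soft mass in GeV, as in the text
delta_LL_23 = 0.12        # off-diagonal insertion, Eq. (1) normalization

M2 = m_sq**2 * np.array([[1.0, delta_LL_23],
                         [delta_LL_23, 1.0]])

eig = np.linalg.eigvalsh(M2)
print(eig)                    # m_sq^2 * (1 -+ delta): positive for |delta| < 1
print(bool(np.all(eig > 0)))  # a physical spectrum requires positive eigenvalues
```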
In the upper frame of Fig. 3, we vary $`\delta _{LL;23}`$ for $`x=0.3`$. The MIA and the exact result start to deviate considerably for $`\delta _{LL;23}>0.4`$, i.e. well within the experimental error band; the exact result leads to more stringent bounds. An even more drastic example is shown in the lower frame of Fig. 3, where we increase $`m_{\stackrel{~}{g}}`$ in such a way that $`x=4`$, leaving all the other parameters unchanged: for $`\delta _{LL;23}>0.1`$ the mass matrix $`\mathcal{M}_{\stackrel{~}{D}}^2`$ has at least one negative eigenvalue. This feature is of course completely missed in the MIA. One also has to consider interference effects. In Fig. 4 we show that the additional contribution through the chain $`\delta _{LL;23}\,\delta _{LR;33}`$ weakens the bound on the parameter $`\delta _{LR;23}`$ significantly. For the solid curve we put $`\delta _{LL;23}=\delta _{LR;33}=\sqrt{\delta _{LR;23}}`$, while for the dashed curve $`\delta _{LL;23}=\delta _{LR;33}=0`$. We have again chosen $`x=0.3`$. Finally, we stress that a consistent precision analysis of the bounds on the sfermion mass matrices should include a NLL calculation and also interference effects with the chargino contribution. The work reported here has been done in collaboration with F. Borzumati, C. Greub and D. Wyler, which is gratefully acknowledged.
# Equidimensionality of complex Lagrangian fibrations
## 1. Introduction
First we define the notion of a Lagrangian subvariety.
###### Definition 1.
Let $`X`$ be a manifold with a holomorphic symplectic form $`\omega `$. A subvariety $`Y`$ is said to be a Lagrangian subvariety if $`dimY=(1/2)dimX`$ and there exists a resolution $`\nu :\stackrel{~}{Y}\to Y`$ such that $`\nu ^{}\omega `$ is identically zero on $`\stackrel{~}{Y}`$. Note that this notion does not depend on the choice of $`\nu `$. We prove the following theorem.
###### Theorem 1.
Let $`f:X\to B`$ be a proper surjective morphism over a normal base $`B`$. Assume that $`X`$ is a Kähler manifold with a holomorphic symplectic form $`\omega `$ and that a general fiber of $`f`$ is a Lagrangian submanifold with respect to $`\omega `$. Then every irreducible component of a fibre of $`f`$ is a Lagrangian subvariety. From Theorem 1, we obtain the following corollary.
###### Corollary 1.
Under the situation above, $`f`$ is equidimensional. In particular, $`f`$ is flat if $`B`$ is smooth. Combining Corollary 1 with [2, Theorem 2] and [3, Theorem 1], we obtain the following result.
###### Corollary 2.
Let $`f:X\to B`$ be a fibre space of an irreducible symplectic manifold $`X`$ over a normal Kähler base $`B`$. Then $`f`$ is equidimensional. Acknowledgment. The author expresses his thanks to Professors A. Beauville and A. Fujiki for their advice and encouragement.
## 2. Proof of Theorem 1
We recall the following theorem due to Kollár [1, Theorem 2.2] and Saito [4, Theorem 2.3, Remark 2.9].
###### Theorem 2.
Let $`f:X\to B`$ be a proper surjective morphism from a smooth Kähler manifold $`X`$ to a normal variety $`B`$. Then $`\mathrm{R}^if_{*}\omega _X`$ is torsion free, where $`\omega _X`$ is the dualizing sheaf of $`X`$. Let $`\overline{\omega }`$ be the complex conjugate of $`\omega `$. By the Leray spectral sequence, there exists a morphism $$H^2(X,𝒪_X)\to H^0(B,\mathrm{R}^2f_{*}𝒪_X).$$ Then $`\overline{\omega }`$ is a torsion element in $`H^0(B,\mathrm{R}^2f_{*}𝒪_X)`$ since a general fibre of $`f`$ is a Lagrangian manifold. In addition, $`\omega _X\cong 𝒪_X`$. Hence $`\overline{\omega }`$ is zero in $`H^0(B,\mathrm{R}^2f_{*}𝒪_X)`$ by Theorem 2. We derive a contradiction by assuming that there exists an irreducible component of a fibre of $`f`$ which is not a Lagrangian subvariety. The letter $`V`$ denotes a non-Lagrangian component. We take an embedded resolution $`\pi :\stackrel{~}{X}\to X`$ of $`V`$. Let $`\stackrel{~}{V}`$ be the proper transform of $`V`$. We will show that $`\pi ^{}\omega `$ is not zero in $`H^0(\stackrel{~}{V},\mathrm{\Omega }_{\stackrel{~}{V}}^2)`$. If $`dimV=(1/2)dimX`$, this is obvious by the definition. If $`dimV>(1/2)dimX`$, we take a smooth point $`q\in V`$ such that $`\pi `$ is an isomorphism in a neighborhood of $`q`$. Since $`dimV>(1/2)dimX`$ and $`\omega `$ is nondegenerate, the restriction of $`\omega `$ to the tangent space of $`V`$ at $`q`$ is nonzero. Because $`\pi `$ is an isomorphism in a neighborhood of $`q`$, $`\pi ^{}\omega `$ is not zero in $`H^0(\stackrel{~}{V},\mathrm{\Omega }_{\stackrel{~}{V}}^2)`$. Taking the complex conjugate, $`\pi ^{}\overline{\omega }`$ is not zero in $`H^2(\stackrel{~}{V},𝒪_{\stackrel{~}{V}})`$. Therefore $`\overline{\omega }`$ is not zero in $`H^2(V,𝒪_V)`$. Let $`p:=f(V)`$ and $`X_p:=f^{-1}(p)`$. We consider the following morphisms: $$\mathrm{R}^2f_{*}𝒪_X\otimes k(p)\to H^2(X_p,𝒪_{X_p})\to H^2(V,𝒪_V).$$ Then $`\overline{\omega }`$ is zero in $`\mathrm{R}^2f_{*}𝒪_X\otimes k(p)`$ and not zero in $`H^2(V,𝒪_V)`$. That is a contradiction. ∎
# HI observations of nearby galaxies I. The first list of the Karachentsev catalog
## 1 Introduction
The only way to study the smallest galaxies is to search for them in our cosmic neighborhood. The first systematic catalog of nearby galaxies was prepared by Kraan-Korteweg & Tammann (1979), who collected all known galaxies with corrected radial velocities v<sub>0</sub> ≤ 500 km s<sup>-1</sup>, a total of 179 objects (hereafter called the KKT sample). Since that time the number of known galaxies within the Local Volume (i.e. within a distance of 10 Mpc) has increased to 303 objects (Karachentsev et al. 1999). Over the past decade the initial KKT sample has almost doubled in number due to mass redshift surveys of galaxies from the known catalogues, revealing new nearby galaxies in the Milky Way "Zone of Avoidance", as well as special searches for dwarf galaxies in nearby groups. The increase in the number of galaxies in the Local Volume is mainly due to many new dwarf galaxies. This fact demonstrates how incomplete our knowledge about the galaxy population of even the Local Volume is. A couple of years ago Karachentseva & Karachentsev (1998; hereafter KK98) initiated an all-sky search for candidates for new nearby dwarf galaxies using the second Palomar Sky Survey and the ESO/SERC plates of the southern sky. The results of the first two segments of the survey have been published; they cover large areas around the known galaxy groups in the Local Volume (KK98) and the area of the Local Void (Karachentseva et al. 1999). As a next step towards deriving distances we will measure radial velocities. Later on we will aim for more exact photometric distances. In this paper we present the first follow-up observations, the HI search for the galaxies in KK98. An HI search for dwarf irregular galaxies seems quite efficient, as these galaxies are in general rich in HI and, with adequate velocity resolution, say 5 km s<sup>-1</sup>, all the HI of a given galaxy will be within a few velocity channels. The characteristic signature of a dwarf galaxy profile, a nearly Gaussian structure, is different from radio interference and will easily lead to a good signal-to-noise ratio.
## 2 Observations
Observations were performed with three different radio telescopes for different declination ranges. The 100-m radiotelescope at Effelsberg was used for declinations greater than -31°, the Nançay radio telescope was selected for galaxies in the declination range -38° to -31°, and the compact array of the Australia Telescope was used for galaxies south of -38°.
### 2.1 Effelsberg observations
The radio telescope at Effelsberg has been used in the total power mode (ON – OFF), combining a reference field 5 minutes earlier in R.A. with the on-source position. A dual channel HEMT receiver had a system noise of 30 K. The 1024 channel autocorrelator was split into 4 bands (bandwidth 6.25 MHz) of 256 channels, each shifted in frequency by 5 MHz with respect to its neighbor in order to cover a velocity range from -470 to 3970 km s<sup>-1</sup>, overlapping 1.5 MHz between bands. The resulting channel separation was 5.1 km s<sup>-1</sup>, yielding a resolution of 6.2 km s<sup>-1</sup> (10.2 km s<sup>-1</sup> after Hanning smoothing). The HI profiles observed with the 100-m radiotelescope are presented in Fig. 1 in order of increasing R.A., as in Table 1. The half power beam width (HPBW) of the Effelsberg telescope at this wavelength is 9.3 arcmin.
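The quoted channel separation follows directly from the correlator figures above (a quick cross-check; the HI rest frequency is a standard value):

```python
# 6.25 MHz / 256 channels at the 21 cm line -> velocity channel separation.
c_kms  = 299792.458      # speed of light, km/s
f0_MHz = 1420.40575      # HI rest frequency, MHz

delta_v = c_kms * (6.25 / 256.0) / f0_MHz
print(delta_v)           # ~5.15 km/s, matching the quoted 5.1 km/s
```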
### 2.2 Nançay observations
For 15 galaxies in the declination range -38° to -31° the Nançay radio telescope was used with the same velocity resolution and coverage. Major differences from the description given for the Effelsberg observations were a different system noise (45 K), a different antenna beam (3.6 arcmin × 22 arcmin in R.A. and Dec. for this declination range), and shorter integration phases with a cycle of 2 minutes for the ON and the OFF positions. Nine galaxies have been detected (Fig. 2).
### 2.3 Compact Array of the Australia Telescope
40 of the 57 galaxies south of declination -38° have been observed with the Compact Array of the Australia Telescope. For this HI search we have chosen the 750A antenna array configuration in order to yield an antenna beam comparable to the optical size of the smallest galaxies (i.e. ∼1 arcmin). The frequency setup and correlator configuration were such that we obtained a velocity coverage from -450 to +2900 km s<sup>-1</sup> and a channel separation of 6.6 km s<sup>-1</sup> (i.e. a resolution of 7.9 km s<sup>-1</sup>). Each galaxy was observed for 10 min every few hours. With five to six observations per target position we achieved a regular coverage of the uv plane for these 'snapshot mode' observations. The resulting integrated HI profiles are given in Fig. 3 (for a more detailed discussion of these data see Huchtmeier et al., in preparation). We may miss some flux with the interferometer (missing flux), as the observed HI emission extends over more than 2 arcmin per channel for over 60% of the galaxies. Galaxies from the KK98 sample not observed are: kk 11, kk 63, kk 179, kk 184, kk 189, kk 190, kk 197, kk 203, kk 211, kk 213, kk 214, kk 217, kk 221, kk 222, kk 235, kk 244, kk 248.
## 3 The data
Our search list was an early version of the list of KK98 containing a few additional galaxies which did not make it into the final version because of their morphology and/or size (i.e. they were too small). In particular, we took into account the results of HI searches for nearby dwarf galaxies made by Kraan-Korteweg et al. (1994), Huchtmeier et al. (1995), Burton et al. (1996), Huchtmeier & van Driel (1997), Huchtmeier et al. (1997) and Cote et al. (1997). The optical data of our galaxies are given in Table 1. The kk-number (or other identification if there is no kk-number) is given in column 1; R.A. and Dec. (1950) follow in columns 2 and 3. The optical diameters a and b in the de Vaucouleurs ($`D_{25}`$) system follow in columns 4 and 5, and the morphological type in column 6, where we use the following coding: Im - irregular blue object with bright knot(s); Ir - irregular without knots or with amorphous condensations, the colour is neutral or bluish; Sm - disturbed spiral or irregular with signs of spiral structure; Sph - spheroidal, with a very low brightness gradient or without any, the colour is neutral or reddish. The optical surface brightness (SB) has been coded (see KK98) as high (H), low (L), very low (VL), or extremely low (EL) in column 7. The total blue magnitude B<sub>t</sub> and its reference follow in columns 8 and 9: 'NED' - data from the NASA Extragalactic Database; 'IK' - visual estimates from POSS (typical error about 0.4 mag) by I. Karachentsev; '6m' - accurate photometric data from the 6-m telescope CCD frames obtained by Karachentsev and coworkers (unpublished); 'UH' - photometric data from U. Hopp (Calar Alto), unpublished.
The Galactic extinction follows in column 10. Other names (identifications) are listed in column 11. The results of the HI observations are summarized in Table 2. The kk-number is given in column 1, the HI flux [Jy km s<sup>-1</sup>] follows in column 2, the maximum emission and/or the r.m.s. noise [mJy] in column 3, the heliocentric radial velocity plus error in column 4, and the line widths at the 50%, 25%, and 20% levels of the peak emission in column 5. Distances (column 6) have been derived with different methods: there are photometric distances in some cases; in other cases the group membership yields a distance. If no other distance estimate is available, we assumed a Hubble constant of 75 km s<sup>-1</sup> Mpc<sup>-1</sup> to derive a 'kinematic' distance. The absolute magnitude is given in column 7; the integrated HI mass (column 8) was calculated as (e.g. Roberts 1969)
$$(M_{HI}/M_{\odot })=2.355\times 10^5\times D^2\times \int S_v\,dv$$
where $`D`$ is the distance of the galaxy in Mpc and $`\int S_v\,dv`$ is the integrated HI flux in Jy km s<sup>-1</sup>. The relative HI content $`M_{HI}/L_B`$ follows in column 9. Finally, column 10 contains comments on the telescope used for the observation: unless otherwise noted, observations have been performed with the 100-m radiotelescope at Effelsberg; N marks the Nançay radio telescope, ATCA the Australia Telescope Compact Array at Culgoora, NSW. In a number of cases emission at negative radial velocities has been observed (kk 20, kk 236, kk 237; only kk 236 has been plotted as an example). The Dwingeloo HI survey (Hartmann & Burton 1997) has been consulted: in all cases of negative radial velocities extended HI emission was found, suggesting that we observed high velocity clouds in our Galaxy.
## 4 Discussion
A great majority (73%) of our galaxies are of type Im (26) and Ir (162), about 20% are of type Sph/Ir (12) and Sph (39), while the rest (8%) is a collection of different types from spiral to Im/Sm and BCD. The detection rate of our sample galaxies depends on the morphological type: 75% of the spirals (types S0 to Sm/Im and BCD) were detected; the detection rate for types Im and Ir is very similar, close to 60%, whereas the detection rate for types Sph/Ir and Sph is considerably lower, at 33% and 23%, respectively. The detection rate depends on the optical surface brightness (SB) class, too. From high SB to low, very low, and extremely low SB the detection rate decreases from 70% to 58%, 49%, and 43%, respectively. This trend reflects the type dependence and the fact that we deal with fainter galaxies as we descend from high SB to very low SB; the median absolute magnitude for the detected galaxies changes from -15.43 (H) to -13.92 (VL) across our brightness classes. A number of the galaxies within the present sample are associated with nearby groups of galaxies (e.g. Tully 1988) according to their position, radial velocity and relative resolution: NGC 672 group: kk 13, kk 14, kk 15; NGC 784 group: kk 16, kk 17; Maffei group: kk 19, kk 21, kk 22, kk 23, kk 35, kk 44; Orion group: kk 49; M 81 group: kk 81, kk 83, kk 85, kk 89, kk 91; Leo group: kk 94; CVn cloud: kk 109, UGC 7298, kk 137, kk 141, kk 144, kk 148, kk 149, kk 151, kk 154, kk 158, kk 160, kk 191, kk 206, kk 220, kk 230; Centaurus group: kk 170, kk 179, kk 182, kk 190, kk 191, kk 195, kk 197, kk 200, kk 211, kk 217, kk 218; NGC 6946 group: kk 250, kk 251, kk 252; Virgo cluster: kk 111, kk 127, kk 128, kk 140, NGC 4523, IC 3517, kk 164, kk 168, kk 169, kk 172, kk 173, U 8091.
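The HI-mass relation above reduces to a one-line function (the example numbers are made up, chosen only to land near the sample's mean HI mass quoted in the discussion below):

```python
# Integrated HI mass from distance and integrated flux (Roberts 1969).
def hi_mass_msun(D_mpc, flux_jy_kms):
    return 2.355e5 * D_mpc**2 * flux_jy_kms

print(hi_mass_msun(5.0, 8.0))   # ~4.7e7 M_sun for a made-up Local Volume dwarf
```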
There are a few cases of high $`M_{HI}/L_B`$ values in Table 2. Four of the five galaxies with $`M_{HI}/L_B\ge 5`$ are actually found to be confused by emission from nearby galaxies (see footnotes to Table 2). The present sample of galaxies as presented in Tables 1 and 2 will now be discussed in some detail with the help of global parameters. The distribution of radial velocity ($`v_0`$, corrected for the rotation of our Galaxy) is given in Fig. 4. Apart from a few background objects most of the galaxies belong to the Local Supercluster; about 25% are within the Local Volume. From this situation it is clear that the great majority of the galaxies in the present sample are dwarfish in nature. This will be shown more convincingly below when we compare several other global parameters of these objects. Next we look at the optical linear diameter $`A_0`$ (in kpc). The histogram in Fig. 5 presents the number of galaxies binned in intervals of 0.5 kpc width. The distribution of the optical linear diameters of our galaxies extends from 0.2 kpc to 26 kpc, yet the great majority are smaller than 8 kpc in diameter (in the de Vaucouleurs $`D_{25}`$ system). Galaxies in the Local Volume (indicated by shaded areas) are even smaller, with a median value of 1.4±0.2 kpc. Now we use the correlation of two global parameters to compare the present sample of galaxies with the previously known galaxies in the Local Volume. In Fig. 6 the total mass of neutral hydrogen $`M_{HI}`$ of the galaxies is plotted versus their linear extent $`A_0`$ for this sample of galaxies. The full line is the regression line for the KKT sample (Huchtmeier & Richter 1988). This regression line seems to be an excellent fit for the present sample, too. The average HI mass of the galaxies in the Local Volume is 4.6×10<sup>7</sup> M<sub>⊙</sub>. The HI masses in Fig. 6 cover a range from 10<sup>6</sup> to 10<sup>10</sup> solar masses. The HI luminosity function for galaxies has so far been studied with galaxies of 10<sup>7</sup> and more solar masses in HI. With the data on the new dwarf galaxies within the Local Volume we will eventually be able to discuss the HI luminosity function starting from 10<sup>6</sup> solar masses. The galaxies in our sample have small line widths on average. In Fig. 7 we present the distribution of observed line widths in the upper panel and the line widths corrected for inclination in the lower panel. The optical axial ratio has been used here to derive the inclination. Galaxies within the Local Volume are indicated by the shaded areas. The peak of the line width distribution of the galaxies within the Local Volume is 39 km s<sup>-1</sup> for the uncorrected and 47 km s<sup>-1</sup> for the corrected line widths. The three global parameters we have considered so far all point toward the dwarfish character of the Local Volume objects in our sample: the average linear diameter of 1.4±0.2 kpc (Fig. 5), the mean total HI mass of 4.6×10<sup>7</sup> M<sub>⊙</sub>, and the small line widths of less than 50 km s<sup>-1</sup>. Two more global parameters are shown in Fig. 8: the pseudo HI surface density $`\mathrm{\Sigma }_{HI}`$ and the relative HI content $`M_{HI}/M_T`$. The pseudo HI surface density is obtained by dividing the total HI mass $`M_{HI}`$ of the galaxy by the disk area of the galaxy as defined by its optical diameter $`A_0`$. This quantity is given in units of solar mass per square parsec as well as in the usual HI column density $`N_{HI}`$ in atoms cm<sup>-2</sup>.
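For reference, the conversion between the two units used for the pseudo HI surface density in Fig. 8 is a fixed factor (a sketch using standard physical constants):

```python
# 1 M_sun/pc^2 expressed as an HI column density in atoms/cm^2.
M_SUN_G = 1.989e33      # solar mass in g
PC_CM   = 3.086e18      # parsec in cm
M_H_G   = 1.6726e-24    # hydrogen atom mass in g

print(M_SUN_G / (PC_CM**2 * M_H_G))   # ~1.25e20 atoms cm^-2 per M_sun pc^-2
```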
This quantity is plotted versus the relative HI content $`M_{HI}/M_T`$. Our galaxies fill the usual range in HI surface density as well as in relative HI content as observed for normal galaxies (e.g. HR). The present sample of galaxies is relatively rich in HI. Some of the scatter in the diagram is due to uncertainties in the observed quantities, especially the inclination, which is used to correct the line width, which in turn enters the total mass calculation quadratically. The optical diameters are uncertain for galaxies at low galactic latitudes due to the high foreground extinction, e.g. Cas 2, ESO 137-G27, BK12, ESO 558-11. If we exclude the confused galaxies and those with heavy galactic extinction, all entries in Fig. 8 with $`\mathrm{\Sigma }_{HI}\ge 100`$ M<sub>⊙</sub> pc<sup>-2</sup> are gone. Low values of the HI surface density are not only due to the uncertainties of the observational data; the gas content of dwarf galaxies is very sensitive to outside influences (tidal interactions) due to their shallow gravitational potential. Finally we plot the HI surface brightness versus the optical surface brightness (Fig. 9). The surface brightness class (Table 1, column 7) has been coded from 4 to 1, from high SB to extremely low SB, in steps of 1. The different errors of the mean values of each class essentially depend on the different population sizes of the SB classes. However, there is a definite trend of the HI surface density to grow with increasing optical SB by a factor of 2 to 4 (e.g. van der Hulst et al. 1993, de Blok 1997).
## 5 Conclusion
In this paper we presented an HI search for 257 candidates for nearby dwarf galaxies. A detection rate of 60% on average is quite high, keeping in mind the limited velocity band, the fact that single-dish telescopes are literally 'blind' for weak emission in the velocity range of the local HI emission (i.e. within -140 to +20 km s<sup>-1</sup>), and the 20% of HI-poor (spheroidal and Sph/Ir) objects in the sample. Most of the detected galaxies are located within the Local Supercluster, and about 25% are members of the Local Volume. The dwarfs within the Local Volume have a mean linear diameter of 1.4±0.2 kpc, a mean observed line width of 39 km s<sup>-1</sup>, and a mean total HI mass of 4.6×10<sup>7</sup> M<sub>⊙</sub>. The smallest galaxies have HI masses of just over 10<sup>6</sup> solar masses. Once this full-sky survey is finished we will be able to discuss the luminosity function of the Local Volume including these tiny dwarf galaxies. This investigation is especially needed as recent determinations of the galaxy luminosity function exhibit an increase for low-mass objects. The exact value of this increase will be important for deriving the mass density in the local universe.
###### Acknowledgements.
The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The Nançay Radio Astronomy Observatory is the Unité Scientifique de Nançay of the Observatoire de Paris, associated as Unité de Service et de Recherche (USR) No. B704 to the French Centre National de la Recherche Scientifique (CNRS). The Observatory also gratefully acknowledges the financial support of the Conseil Régional of the Région Centre in France. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This work has been partially supported by the Deutsche Forschungsgemeinschaft (DFG) under project no. 436 RUS 113/470/0 and Eh 154/1-1.
# Local structure of In<sub>0.5</sub>Ga<sub>0.5</sub>As from joint high-resolution and differential pair distribution function analysis
## I Introduction
Ternary semiconductor alloys, such as In<sub>x</sub>Ga<sub>1-x</sub>As, are technologically important because they allow the semiconductor band-gap to be varied continuously between the band-gap values of the end members, GaAs and InAs, by varying the composition $`x`$. This has made the technological characteristics, physical properties and structure of In<sub>x</sub>Ga<sub>1-x</sub>As alloys a subject of numerous experimental and theoretical investigations. It has been found that the lattice parameters of the alloys interpolate well between those of the end members, which is consistent with the so-called Vegard's law. According to Vegard's law, the structure of the alloys adjusts itself so that the individual bond lengths take equal, compositionally averaged values and the bond angles remain unperturbed from their ideal values for any alloy composition. For this reason, all structure-dependent properties of In<sub>x</sub>Ga<sub>1-x</sub>As alloys, including the electronic band structure, are often calculated within the virtual crystal approximation (VCA). In this approximation the alloy is assumed to be a perfect crystal with all atoms sitting on ideal lattice sites and site occupancies given by the average alloy composition. Both GaAs and InAs have the zinc-blende structure ($`F\overline{4}3m`$), where the In(Ga) and As atoms occupy two interpenetrating face-centered-cubic (fcc) lattices and are tetrahedrally coordinated to each other. Accordingly, the VCA assumes that In<sub>x</sub>Ga<sub>1-x</sub>As alloys have the same zinc-blende structure and, furthermore, that the first-neighbor interatomic distances (i.e. the In-As and Ga-As bond lengths), bond ionicity, atomic potential, etc. take average values for any composition $`x`$. However, both extended x-ray absorption fine structure (XAFS) experiments and theory have shown that the Ga-As and In-As bonds do not take some average value but remain close to their natural lengths L<sup>o</sup><sub>Ga-As</sub> = 2.437 Å and L<sup>o</sup><sub>In-As</sub> = 2.610 Å in the alloy. This behavior is close to the so-called Pauling model, which assumes that the bond length between a given atomic pair in an alloy is more or less a constant, independent of the composition $`x`$. This finding shows that the zinc-blende lattice of In<sub>x</sub>Ga<sub>1-x</sub>As is significantly deformed to accommodate the two distinct Ga-As and In-As bond lengths present. Also, the deformation seems to be confined locally, since the average crystal symmetry and structure is still of the cubic zinc-blende type, as manifested by the Bragg diffraction patterns. It is well recognized that local structural distortions and associated fluctuations in atomic potentials significantly affect the properties of materials and, therefore, should be accounted for in more realistic theoretical calculations. Thus there is a clear need for a more detailed determination of the local structure of In<sub>x</sub>Ga<sub>1-x</sub>As semiconductor alloys, which may then be used as an improved-quality structure input to theoretical calculations. A number of authors have already proposed model structures for these alloys, but there has been limited experimental evidence to date. This prompted us to undertake an extensive experimental study of the local atomic structure of In-Ga-As semiconductor alloys.
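The contrast between the two limiting pictures can be made quantitative in a few lines; the bond lengths are those quoted above, while the zinc-blende lattice constants of GaAs and InAs (5.6533 Å and 6.0583 Å) are standard literature values assumed here, not taken from the text.

```python
# Vegard/VCA average bond versus the two persistent Pauling-limit bonds.
from math import sqrt

a_GaAs, a_InAs = 5.6533, 6.0583   # lattice constants, Angstrom (assumed values)
L_GaAs, L_InAs = 2.437, 2.610     # natural bond lengths quoted in the text

def vca_bond(x):
    # VCA: one 'virtual' first-neighbor bond, sqrt(3)/4 of the mean lattice constant
    return sqrt(3.0) / 4.0 * (x * a_InAs + (1.0 - x) * a_GaAs)

print(vca_bond(0.5))    # ~2.536 A: the single average bond the VCA would predict
print(L_GaAs, L_InAs)   # XAFS/PDF picture: both bonds survive nearly unchanged
```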
The technique of choice for studying the local structure of semiconductor alloys has been XAFS . However, XAFS provides information only about the immediate atomic ordering (first and sometimes second coordination shells), and all longer-ranged structural features remain hidden. To remedy this shortcoming we have taken the alternative approach of obtaining atomic pair distribution functions from x-ray diffraction data. The atomic pair distribution function (PDF) is the instantaneous atomic number density-density correlation function which describes the atomic arrangement in materials . It is the sine Fourier transform of the experimentally observable total structure factor obtained from a powder diffraction experiment. Since the total structure factor, as defined in Ref. 14, includes both the Bragg scattered intensities and the diffuse scattering part of the diffraction spectrum, its Fourier associate, the PDF, yields both the local and the average atomic structure of materials. By contrast, an analysis of the Bragg scattered intensities alone, for instance by a Rietveld-type analysis, yields the average crystal structure only. Determining the PDF has long been the approach of choice for characterizing glasses, liquids and amorphous materials . However, its widespread application to crystalline materials, where some deviation from the average structure is expected to take place, has been relatively recent . The present study is a further step along this line. Very high real-space resolution is required to differentiate the distinct Ga-As and In-As bond lengths present in In<sub>x</sub>Ga<sub>1-x</sub>As alloys. High resolution is attained by obtaining the total structure factor S(Q), where $`Q=4\pi \mathrm{sin}\theta /\lambda `$ is the magnitude of the wave vector, to a very high value of $`Q`$ ($`Q>40`$ Å<sup>-1</sup>). Here, $`2\theta `$ is the angle between the directions of the incoming and outgoing radiation beams and $`\lambda `$ is the wavelength of the radiation used. Recently, we carried out a high energy (60 keV; $`\lambda =0.206`$ Å) x-ray diffraction experiment and succeeded in obtaining PDFs for In<sub>x</sub>Ga<sub>1-x</sub>As crystalline materials ($`x=0`$, 0.13, 0.33, 0.5, 0.83, 1) of resolution high enough to differentiate the Ga-As and In-As first atomic neighbor distances present . An analysis of the experimental data (see Fig. 4 in Ref. ) showed that the local disorder in In<sub>x</sub>Ga<sub>1-x</sub>As materials peaks at the composition $`x=0.5`$. This observation suggested In<sub>0.5</sub>Ga<sub>0.5</sub>As as the most appropriate candidate for studying the effect of bond-length mismatch on the local structure of the In-Ga-As family of semiconductor alloys. An important detail of the high energy experiments carried out is that a low temperature (10 K) was used to minimize the thermal vibration in the samples, and hence increase the sensitivity to intrinsic atomic displacements. This left open the question of the impact of temperature on the local structure of In<sub>x</sub>Ga<sub>1-x</sub>As alloys and necessitated a complementary experiment at temperatures considerably higher than 10 K. To partially compensate for the inevitable loss of resolution from the thermal broadening of the atomic pair distributions, we carried out an anomalous scattering experiment and determined the In differential PDF at room temperature.
In the present paper we report the high energy, low temperature and the anomalous scattering (In edge), room temperature experiments on In<sub>0.5</sub>Ga<sub>0.5</sub>As. The experimental total and In differential PDFs have been fit with structure models, and the way in which the zinc-blende lattice locally distorts to accommodate the two distinct Ga-As and In-As bonds present has been quantified.

## II EXPERIMENTAL DETAILS

### A Sample preparation

The In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy was prepared by mixing reagent grade GaAs and InAs powders in the proper amounts. These were sealed under vacuum in quartz tubes. The powders were heated above the liquidus and held there for 3 hours to melt them, followed by quenching into cold water. The resulting inhomogeneous alloys were ground, resealed in quartz tubes under vacuum, and annealed at a temperature just below the solidus of the alloy for 72-96 hours. This procedure was repeated until the samples were homogeneous as determined from an x-ray diffraction measurement. The sample for the high-energy x-ray diffraction measurements was a thin layer of the powder held between Kapton foils. The thickness of the layer was optimized to achieve a sample absorption $`\mu t\approx 1`$ for the 60 keV x-rays. A standard sample holder with a cavity of rectangular shape (2 cm x 4 cm) and a depth of 0.5 mm was used for the anomalous x-ray diffraction experiments. The powder was loaded into the cavity so as to avoid any texture formation, and its extended surface was left exposed to the x-ray beam.

### B High-energy x-ray diffraction experiments

The high resolution total-PDF measurements and data analysis have been reported elsewhere . Here, the experimental procedures and data analyses employed are described in some more detail. The experiments were carried out at the A2 24 pole wiggler beam line at the Cornell High Energy Synchrotron Source (CHESS). All measurements were done in a symmetrical transmission geometry at 10 K. The polychromatic incident beam was dispersed using a double crystal Si(111) monochromator, and x-rays of energy 60 keV ($`\lambda =0.206`$ Å) were employed. An intrinsic Ge detector coupled to a multi-channel analyzer (MCA) was used to detect the scattered radiation. By setting proper energy windows we were able to extract the coherent component of the scattered x-ray intensities during data collection. The diffraction data were collected by scanning at constant $`\mathrm{\Delta }Q`$ steps of 0.02 Å<sup>-1</sup>. Several runs were conducted and the resulting spectra averaged to improve the statistical accuracy and reduce any systematic error due to instability in the experimental set-up. The diffraction data were smoothed using the Savitzky-Golay procedure . The procedure was tuned in such a way that each data point gained or lost only one Poisson counting standard deviation in the smoothing process. The data were normalized for flux, and corrected for background scattering and experimental effects such as detector deadtime and absorption. The part of the Compton scattering at low values of $`Q`$ not eliminated by the preset energy window was removed analytically, applying a procedure suggested by Ruland .
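The smoothing step can be illustrated with a short Python sketch. This is not the code used by the authors; the window length and polynomial order below are hypothetical, chosen only to show how the one-Poisson-standard-deviation criterion described above might be enforced.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_within_poisson(counts, window=11, polyorder=3):
    """Savitzky-Golay smoothing that keeps every point within one
    Poisson standard deviation (sqrt(N)) of the raw datum."""
    smoothed = savgol_filter(counts, window_length=window, polyorder=polyorder)
    sigma = np.sqrt(np.maximum(counts, 1.0))           # Poisson error bars
    delta = np.abs(smoothed - counts)
    return np.where(delta <= sigma, smoothed, counts)  # keep raw point otherwise
```

In practice the window and order would be tuned until the criterion is met everywhere, rather than enforced pointwise as in this simplified version.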
The resultant intensities were divided by the square of the average atomic form factor for the sample to obtain the total structure factor $`S(Q)`$,

$$S(Q)=1+\frac{\left[I^{coh}(Q)-\mathrm{\Sigma }c_if_i^2(Q)\right]}{\left[\mathrm{\Sigma }c_if_i(Q)\right]^2}$$ (1)

where I<sup>coh</sup> is the coherent part of the total diffraction spectrum, and $`c_i`$ and $`f_i(Q)`$ are the atomic concentration and scattering factor of the atomic species of type $`i`$ ($`i=`$ In, Ga, As), respectively . All data processing procedures were done with the help of the program RAD . The reduced structure factor $`F(Q)=Q[S(Q)-1]`$ is shown in Fig. 1. As can be seen in the figure, the F(Q) data are terminated at Q<sub>max</sub> = 45 Å<sup>-1</sup>, beyond which, despite the high intensity synchrotron source employed and the extra experimental data averaging applied, the signal to noise ratio became unfavorable. It should be noted, however, that this is a very high value of the wavevector for an x-ray diffraction measurement; for comparison, the Q<sub>max</sub> achieved with a conventional source such as a Cu anode tube is less than 8 Å<sup>-1</sup>. The corresponding reduced atomic distribution function, $`G(r)`$, obtained through the Fourier transform

$$G(r)=4\pi r[\rho (r)-\rho _o]$$ (2)
$$=(2/\pi )\int _{Q=0}^{Q_{max}}Q[S(Q)-1]\mathrm{sin}(Qr)dQ,$$ (3)

is shown in Fig. 2. Here $`\rho (r)`$ and $`\rho _o`$ are the local and average atomic number densities, respectively, and $`r`$ is the radial distance. It should be noted that no modification function, i.e. no additional damping of the S(Q) data at high values of $`Q`$, was applied prior to the Fourier transformation. This ensures that the data have the highest resolution possible, but it results in some spurious ringing in G(r). If Q<sub>max</sub> is high enough the ringing is small (on the level of the random noise), and in any case it is properly modeled by convoluting G(r) with a Sinc function, which we do in all our modeling.

### C Anomalous x-ray diffraction experiments at the In Edge

It is well known that a single diffraction experiment on an $`n`$-component system yields a total structure factor which is a weighted average of $`n(n+1)/2`$ distinct partial structure factors, i.e.

$$S(Q)=\underset{i,j}{\overset{n}{\sum }}w_{ij}(Q)S_{ij}(Q),$$ (4)

where $`w_{ij}(Q)`$ is a weighting factor and $`S_{ij}(Q)`$ the partial structure factor for the atomic pair ($`i,j`$), respectively . The corresponding total PDF, too, is a weighted average of $`n(n+1)/2`$ partial pair correlation functions. For a multi-component system like In<sub>0.5</sub>Ga<sub>0.5</sub>As it is therefore difficult to extract information about a particular atomic pair from a single experiment. The combination of a few conventional experiments (say a combination of neutron, x-ray and electron diffraction experiments) or the application of anomalous x-ray scattering allows the determination of chemical-specific atomic pair distributions. We briefly describe the use of anomalous scattering to obtain chemical-specific PDFs . If the incident x-ray photon energy is close to the energy of an absorption edge of a specific atom in the material, the atomic scattering factor must be considered a complex quantity dependent on both the wavevector $`Q`$ and the energy $`E`$,

$$f(Q,E)=f_o(Q)+f^{\prime }(E)+if^{\prime \prime }(E),$$ (5)

where $`f_o(Q)`$ is the usual atomic scattering factor and $`f^{\prime }`$ and $`f^{\prime \prime }`$ are the anomalous scattering terms, which depend on the x-ray photon energy $`E`$.
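As a concrete illustration of the transform in Eq. (3) above, the following minimal Python sketch computes G(r) from S(Q) on a uniform grid by a trapezoidal sine Fourier transform, with no modification function, mirroring the procedure described in the text (grid values are illustrative, not the actual data):

```python
import numpy as np

def reduced_pdf(q, s_of_q, r):
    """G(r) = (2/pi) * integral_0^Qmax of Q[S(Q)-1] sin(Qr) dQ, Eq. (3)."""
    fq = q * (s_of_q - 1.0)                  # reduced structure factor F(Q)
    return (2.0 / np.pi) * np.trapz(fq[None, :] * np.sin(np.outer(r, q)), q, axis=1)

q = np.arange(0.0, 45.0, 0.02)               # Qmax = 45 1/angstrom, as in Fig. 1
r = np.linspace(0.5, 10.0, 951)
# g_of_r = reduced_pdf(q, s_experimental, r)
```

The hard cutoff at Q<sub>max</sub> is what produces the termination ripples mentioned above; in the fits below they are modeled, not removed.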
The imaginary term, $`f^{\prime \prime }`$, is directly related to the photoelectric absorption coefficient; it is small and slowly varying for $`E`$ below the edge, rises sharply at the edge, and then gradually falls off. $`f^{\prime }`$ has a sharp negative peak at the edge, with a width of typically 40-80 eV at half maximum, and is small elsewhere . This behavior can be clearly seen in Fig. 3. The anomalous scattering technique takes advantage of the fact that $`f^{\prime }`$ and $`f^{\prime \prime }`$ for a particular atomic species change rapidly only within about 100 eV of the respective absorption edge, and that the characteristic absorption edges of different atomic species are separated by several hundreds of eV. A difference of two sets of diffraction data taken at two slightly different energies below an absorption edge of a particular element will then contain only the contribution of atomic pairs involving that element. Accordingly, one can define a differential structure factor, DSF, as follows:

$$DSF=\frac{\left[I(E_1)-I(E_2)\right]-\left[\mathrm{\Sigma }c_if^2(E_1)-\mathrm{\Sigma }c_if^2(E_2)\right]}{\left[\mathrm{\Sigma }c_if(E_1)\right]^2-\left[\mathrm{\Sigma }c_if(E_2)\right]^2},$$ (6)

where $`E_1`$ and $`E_2`$ are the two photon energies used . The differential PDF, which gives information about the atomic distribution around the anomalous scattering atoms, is calculated analogously to Eq. (3), with S(Q) replaced by the DSF. Several experiments have already demonstrated the usefulness of anomalous scattering techniques in studying the local atomic ordering in both disordered and crystalline materials . A precise knowledge of the anomalous scattering terms is, however, a prerequisite for the interpretation of anomalous scattering experiments. Unfortunately, theoretical models are not capable of providing precise enough values for $`f^{\prime }`$ and $`f^{\prime \prime }`$ in the vicinity of absorption edges. That is why anomalous scattering experiments usually involve a complementary determination of $`f^{\prime }`$ and $`f^{\prime \prime }`$. It is most frequently done by measuring the energy dependence of $`f^{\prime \prime }`$ in the vicinity of the absorption edge and subsequently determining $`f^{\prime }`$ through the so-called dispersion relation :

$$f^{\prime }(E_o)=(2/\pi )\int _0^{\infty }\frac{Ef^{\prime \prime }(E)}{E_o^2-E^2}dE$$ (7)

The same strategy was adopted in the present anomalous diffraction experiments. These were carried out at the In edge, which is the highest energy edge (27.929 keV) accessible in the In-Ga-As system. The experiments were carried out at the X7A beam line of the National Synchrotron Light Source at Brookhaven National Laboratory. Two energies, one just below the In edge (27.889 keV) and the other a few hundred eV further below it (27.629 keV), were used. The raw data are shown in Fig. 4. A Si channel-cut monochromator was used to disperse the white beam. The monochromator was calibrated by measuring the absorption edge of indium of high purity. The scattered x-rays were detected by a Ge solid state detector coupled to a multi-channel analyzer. A few energy windows, each covering several neighboring channels, were set up to obtain counts integrated over specific energy ranges during data collection. These energy windows covered the coherent intensity; the coherent, incoherent and In $`K\beta `$ fluorescence intensities all together; the In $`K\alpha `$ fluorescence; the As $`K\alpha `$ fluorescence; and a window covering the entire energy range, which integrates the total scattered intensity in the detector.
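For concreteness, the two working formulas above, Eq. (6) and Eq. (7), can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the analysis code actually used; the Q dependence of the scattering factors is suppressed, and the principal-value integral is handled crudely by excluding a narrow window around the singular point.

```python
import numpy as np

def differential_sf(i_e1, i_e2, c, f_e1, f_e2):
    """In differential structure factor, Eq. (6); intensities are assumed
    already normalized and corrected. c: concentrations of In, Ga, As;
    f_e1, f_e2: complex scattering factors f0 + f' + i*f'' at E1 and E2."""
    num = (i_e1 - i_e2) - (np.sum(c * np.abs(f_e1)**2) - np.sum(c * np.abs(f_e2)**2))
    den = np.abs(np.sum(c * f_e1))**2 - np.abs(np.sum(c * f_e2))**2
    return num / den

def f_prime(e0, e, f2):
    """Dispersion relation, Eq. (7): f'(E0) from measured f''(E)."""
    mask = np.abs(e - e0) > 1.0              # crude principal-value cut (eV)
    return (2.0 / np.pi) * np.trapz(e[mask] * f2[mask] / (e0**2 - e[mask]**2), e[mask])
```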
Integrated counts for these ranges were collected several times and then averaged to improve the statistical accuracy. The data were corrected for detector dead time, Compton and background scattering, attenuation in the sample, and the residual In $`K\beta `$ fluorescence, which cannot be well separated from the coherent component of the scattered intensities. The In $`K\beta `$ contribution was determined by monitoring the In $`K\alpha `$ signal and multiplying it by the ratio of the In $`K\beta `$ to In $`K\alpha `$ output, which was determined experimentally in a complementary experiment carried out well above the In edge (29 keV). By taking the difference between the two data sets, as shown in Fig. 4, all terms that do not involve In were eliminated, because only the In scattering factor changed appreciably in the energy region explored while the scattering factors of Ga and As remained virtually the same, and the In DSF was obtained. The unknown anomalous scattering terms of In, involved in Eq. (5), were determined in the following way: the In fluorescence yield was measured by scanning over a wide range across the In edge. The curve was matched to the theoretical estimates of Chantler for $`f^{\prime \prime }`$, and so the fluorescence yield was converted to $`f^{\prime \prime }`$ data. $`f^{\prime }`$ was calculated from these $`f^{\prime \prime }`$ data via the dispersion relation as given in Eq. (7). The anomalous scattering terms of In resulting from the present experiments are given in Fig. 3. As one can see in the figure, in the vicinity of the In edge $`f^{\prime }`$ and $`f^{\prime \prime }`$ change more sharply than theory predicts. We determined the following values of $`f^{\prime }`$ and $`f^{\prime \prime }`$ at the two energies employed: $`f^{\prime }=-3.89`$ and $`f^{\prime \prime }=0.637`$ at $`E=27.629`$ keV; $`f^{\prime }=-6.148`$ and $`f^{\prime \prime }=0.826`$ at $`E=27.889`$ keV. The use of the experimentally determined rather than the theoretically predicted values of $`f^{\prime }`$ and $`f^{\prime \prime }`$ turned out to be rather important for obtaining differential structure data of good quality. The In-DSF and differential PDF for the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy are shown in Figs. 1 and 2, respectively. The first peak of the In differential PDF appears broader primarily because $`Q_{max}`$, and therefore the resolution of the measurement, is much lower than is the case for the total-PDF measurement (as is obvious in Fig. 1). There is an additional broadening of this peak because the In-DSF data were collected at room temperature instead of 10 K, but this effect is expected to be small.

## III Results

As can be seen in Fig. 1, significant Bragg scattering (well defined peaks) is present up to approximately 15 Å<sup>-1</sup> in the In difference and total structure factors of the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy. At higher wavevectors only an oscillating diffuse scattering is evident. This implies that although the sample still has a periodic structure, it contains considerable local displacive disorder. The disorder is due to the mismatch of the Ga-As and In-As bond lengths, clearly seen as a split first peak in the total PDF of Fig. 2 . Also shown in Fig. 2 is the In difference PDF, which has a single first peak lining up well with the higher-$`r`$ component of the first peak in the total PDF. Since the In differential PDF contains only atomic pairs involving In, its first peak can be unambiguously attributed to In-As atomic pairs. This allows us to identify the two components of the first peak in the total PDF as being due to Ga-As and In-As atomic pairs, respectively.
According to the present high resolution x-ray diffraction experiments, the Ga-As and In-As bond lengths in the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy are 2.455(5) Å and 2.595(5) Å at 10 K, respectively. In the present anomalous diffraction experiments the In-As bond length is 2.615(5) Å at room temperature. The observed elongation of the In-As bond with temperature is due to the usual thermal expansion observed in materials. We note that the present PDF-based results are in rather good agreement with the XAFS results of Mikkelsen and Boyce for the Ga-As and In-As bond lengths in In<sub>0.5</sub>Ga<sub>0.5</sub>As. An inspection of the experimental PDFs in Fig. 2 (see also Figs. 5 and 6) shows that the nearest atomic neighbor peak is the only one which is relatively sharp. From the second-neighbor peak onwards, all atomic-pair distributions (PDF peaks) show significant broadening. This observation shows that the bond-length mismatch gives rise to a considerable deformation of the underlying zinc-blende lattice of the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy. To quantify this deformation we explored a few structure models, as follows:

### A Supercell model based on the Kirkwood potential

It was previously shown that a 512 atom supercell for the alloy structure, based on the zinc-blende unit cell but with In and Ga randomly arranged on the metal sublattice and atomic positions relaxed using the Kirkwood potential , explains the high-resolution total-PDFs well . In addition to this high spatial resolution PDF we now have a PDF which is chemically resolved. We are interested in whether this supercell model is still sufficient for describing these new data. The model has been described in detail elsewhere . In the present modeling we used the same force constants $`\alpha _{ij}`$ and $`\beta _{ij}`$ that were selected to fit the end members GaAs and InAs . Using the relaxed atomic configuration, a dynamical matrix was constructed and its eigenvalues and eigenvectors were found numerically. From these, the Debye-Waller factors for all the individual atoms in the supercell were determined. The PDF of the model was then calculated using a Gaussian broadening of the atomic-pair correlations to account for the purely thermal and zero-point motion. The width of the Gaussians was determined from the theoretical Debye-Waller factors . In addition, the calculated PDF was convoluted with a Sinc function to account for the truncation of the data at $`Q_{max}`$. A comparison between the model and the experimental results is shown in Fig. 5. The agreement with both the high spatial resolution data (Fig. 5(a)) and the differential PDF (Fig. 5(b)) is clearly very good. It has already been demonstrated that the Kirkwood-based model reproduces well the displacements of the As and metal atoms in In-Ga-As alloys extracted from a model-independent analysis of the PDF peak widths. Thus the present and previous results suggest that the Kirkwood-based model is a good starting point for any further calculations requiring good knowledge of the local structure of the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy.
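The construction of the model PDF just described (Gaussians at the pair separations of the relaxed supercell, widths from the calculated Debye-Waller factors, then a Sinc convolution for the Q<sub>max</sub> cutoff) can be sketched as follows. This is a schematic reconstruction under simplifying assumptions, not the actual modeling code; the pair lists, widths and weights are assumed to come from the relaxed supercell and the dynamical-matrix calculation.

```python
import numpy as np

def model_gofr(r, pair_dists, pair_sigmas, pair_weights, rho0, qmax=45.0):
    """Gaussian-broadened model G(r) convoluted with the Qmax Sinc kernel.
    r must be a uniform grid starting above zero."""
    rdf = np.zeros_like(r)
    for d, s, w in zip(pair_dists, pair_sigmas, pair_weights):
        rdf += w * np.exp(-0.5 * ((r - d) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    g = rdf / r - 4.0 * np.pi * r * rho0      # G(r) = R(r)/r - 4*pi*r*rho0
    dr = r[1] - r[0]
    u = np.arange(-2.0, 2.0, dr)              # kernel support (angstrom)
    kernel = np.sinc(qmax * u / np.pi) * qmax / np.pi   # sin(qmax*u)/(pi*u)
    return np.convolve(g, kernel * dr, mode="same")     # termination ripples
```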
### B Refinement of chemically resolved differential and spatially resolved total PDFs

In this paper we present, for the first time, both high-resolution total and chemically resolved In-partial PDFs for In<sub>0.5</sub>Ga<sub>0.5</sub>As. In the previous Section we showed that the data are consistent with a supercell model based on the Kirkwood potential. However, we would also like to extract structural information from the data without recourse to potential-based models, which can then be compared with other experimental results and theoretical predictions . To do this we have constructed the simplest possible model that is still consistent with the data, and we have refined it using the PDF profile-fitting program PDFFIT . We have fit both the high-spatial-resolution total-PDF and the chemically resolved differential PDF data at the same time, which resulted in equivalent atomic displacement parameters being refined. The model is based on the 8-atom cubic unit cell of the zinc-blende structure. The split nearest-neighbor peak in the total PDF, and the shifted nearest-neighbor peak in the In-differential PDF, both require that definite static displacements of fixed length be incorporated in the model. In addition, the shifted position of the nearest neighbor peak in the In-differential PDF requires that the model have a definite chemical species on specific sites, i.e., that it go beyond the virtual crystal approximation. In a simple 8-atom cubic unit cell this de facto leads to a chemically ordered model, which is not observed in the real alloy and which, furthermore, does not sample all of the possible chemical environments for As in the random alloy . Nonetheless, this is the minimal model which can be successfully refined against the experimental data to extract information about local atomic displacement amplitudes. In this model the four metal sites are populated with two In and two Ga ions. Static displacements of the As and metal ions were then allowed. The model was constrained so that all four As sites had the same displacement amplitude. The metal sites were likewise constrained to be displaced by the same amount as each other, but the metal site displacement was independent of the As sublattice displacement. The directions of the displacements were also constrained to lie along $`\langle 111\rangle `$-type directions. The choice among the 8 possible $`\langle 111\rangle `$ directions was determined by the chemical environment. A model with $`\langle 100\rangle `$-type displacements was less successful at reconciling the sharp first peak and broad later peaks in the PDF. This is discussed in more detail later. We call these the “discrete” displacements. In addition to the discrete atomic displacements, atomic displacement parameters (thermal factors) and lattice parameters were refined. The thermal factors contain both static and dynamic disorder. These atomic displacement distributions we refer to as “continuous”, to differentiate them from the discrete displacements described above. Finally, the nearest neighbor peak was sharpened with respect to the rest of the PDF using a sharpening factor. This accounts for the highly correlated nature of the displacements of near-neighbor atoms. The resulting fit to the data is shown in Fig. 6. The values we refine are as follows: the discrete displacements on the As and metal sublattices are 0.133(5) Å and 0.033(8) Å, respectively. These values are independent of temperature. The continuous-displacement amplitudes are $`u_{As}^2=0.00814(12)`$ Å<sup>2</sup> and $`u_M^2=0.00373`$ Å<sup>2</sup> for the As and metal sublattices at $`T=10`$ K, and $`u_{As}^2=0.0135(15)`$ Å<sup>2</sup> and $`u_M^2=0.010(2)`$ Å<sup>2</sup>, respectively, at room temperature.
These compare with literature values of $`u_{As}^2=0.0015(8)`$ Å<sup>2</sup> and $`u_M^2=0.0017(9)`$ Å<sup>2</sup> for the end-member compounds at 10 K , and $`u_{As}^2=0.00716(5)`$ Å<sup>2</sup> and $`u_M^2=0.009`$ Å<sup>2</sup> at room temperature for As and the metal site, respectively. The discrete displacements obtained from the fits are illustrated schematically, by a projection of a fragment of the structure, in Fig. 7. This shows that both the discrete and continuous displacements (indicated schematically by the size of the circles representing the atoms) are larger on the As than on the metal sublattice. The size of the circles representing the continuous displacements has been exaggerated.

## IV Discussion

Existing experimental results that characterize the structure of semiconductor alloys beyond the average structure include XAFS , ion-channeling , x-ray diffuse scattering , and Raman scattering . The XAFS results clearly show that short and long near-neighbor bonds exist, which come from Ga-As and In-As neighbors respectively. The bond-length distribution is not recovered with great accuracy and there is limited information available on higher neighbor pairs; however, the indication is that the atom-pair separations return quickly to the virtual crystal values with increasing pair separation, $`r`$. This implies significant distortions of the crystal structure. Indeed, a correct analysis of the phonons in Raman spectra from Ga<sub>1-x</sub>In<sub>x</sub>As required significant structural distortions . Our current and earlier PDF results bear out all these observations. The discrete displacements refined in our $`\langle 111\rangle `$-displaced model are primarily determined by the splitting, and displacement, of the first peak in the total and In-differential PDFs, respectively: this sharp feature in the PDF is very sensitive to the amplitude of the discrete displacement. The bond-length difference, $`\mathrm{\Delta }r`$, between the end-members is 0.173 Å and is approximately 0.14 Å in the alloy . If we add the discrete displacements on the arsenic and metal sites we get $`\mathrm{\Delta }r=0.16(2)`$ Å. The bond-length difference can also be obtained directly by fitting Gaussians to the first PDF peak in a model-independent way . What this modeling shows is that, within the structure, most of the relaxation of local bonds occurs by arsenic moving off its site, but displacements of the metal atoms are also important. Ion channeling results give a precise determination of the mean-square displacement amplitude, $`u^2`$ (including static and dynamic components), perpendicular to the channeling direction. The results of Haga et al. give a $`u^2`$ for Ga<sub>0.47</sub>In<sub>0.53</sub>As of 0.017 Å<sup>2</sup> at room temperature. Based on the theoretical thermal amplitude of the end-member compounds being $`u^2=0.0121`$ Å<sup>2</sup> (in good agreement with the value measured using the PDF ), they determined that the mean-square static displacements perpendicular to the channeling direction were of magnitude $`u^2=0.005`$ Å<sup>2</sup>. They saw a similar value in directions perpendicular to $`\langle 110\rangle `$. This corresponds to a static root-mean-square displacement amplitude of 0.07 Å. This is much smaller than the discrete displacement of 0.133 Å that we observe. However, if we find the average displacement by adding the discrete displacements on the As and metal sublattices in quadrature and dividing by 2, we get 0.068 Å, in good agreement with the ion channeling.
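The two numerical checks above are trivial to restate (all inputs are the values quoted in the text):

```python
import numpy as np

d_as, d_metal = 0.133, 0.033          # discrete displacements (angstrom)
print(d_as + d_metal)                 # ~0.166 -> Delta r = 0.16(2) angstrom
print(np.hypot(d_as, d_metal) / 2)    # ~0.069 angstrom, cf. 0.07 from channeling
```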
Their work did not report which sublattice contributed most of the disorder; however, there is a suggestion from electron diffraction , in agreement with theory , that the As sublattice is more disordered, as we show directly from our measurement. Finally, we note that the actual displacement pattern on the arsenic site is expected to have $`\langle 111\rangle `$-type displacements (as in our model) but also significant $`\langle 100\rangle `$-type displacements . In fact, recent calculations indicate that the $`\langle 100\rangle `$ displacements should be significantly more pronounced than the $`\langle 111\rangle `$ displacements, especially at room temperature . We are undertaking a more sophisticated modeling approach to explore this prediction. We tried a simple $`\langle 100\rangle `$-displaced model, analogous to the one described here, but found it to explain the data less successfully than the $`\langle 111\rangle `$ model we described. We feel that this is a deficiency of the simple single-displacement-direction modeling, rather than a sign that the displacement directions are actually $`\langle 111\rangle `$ in the real alloy. The likely reason is that larger displacement amplitudes are required along the $`\langle 100\rangle `$ directions to satisfy the bond-length difference seen in the first PDF peak (actually $`\sqrt{2}`$ times larger). With these large discrete displacements it is harder for the model to account for additional disorder in the data using enlarged thermal factors. A better fit in these imperfect models is obtained with smaller discrete displacements coupled with larger continuous displacements. However, an improved fit should result from a more sophisticated model which includes both $`\langle 100\rangle `$ and $`\langle 111\rangle `$-type displacements.

## V Conclusions

From the high real-space resolution total and In differential PDFs of the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy we conclude the following: In good agreement with earlier XAFS results, the Ga-As and In-As bonds do not take some compositionally averaged length but remain close to their natural lengths in the In<sub>0.5</sub>Ga<sub>0.5</sub>As alloy. This bond-length mismatch brings about considerable local disorder, seen as a significant broadening of the next-nearest atomic-pair distributions. The positions and widths of the low- and high-$`r`$ peaks in both the total and indium differential PDFs are very well reproduced using a relaxed supercell model based on the Kirkwood potential, with parameters taken from fits to the end-members of the alloy series. This suggests that this is a reasonable approach for generating the local structure of these alloys. A co-refinement of both the high-resolution total PDF and the chemically resolved indium differential PDF using a simplified structural model was carried out. The arsenic sublattice contains most of the disorder in the structure, as evidenced by both the discrete atomic displacements in the model and the enlarged thermal parameters. However, small but significant displacements are evident on the metal sites, and these have been quantified.

###### Acknowledgements.

We would like to acknowledge M. F. Thorpe and J. Chung for discussions and for making the results of their supercell calculations available for comparison with the experimental data. We would like to acknowledge S. Kycia and A. Perez for help in collecting the CHESS data. We are very grateful to T. Egami for making x-ray beam time available to us and to D. E. Cox for help with the NSLS experiments. This work was supported by DOE grant DE-FG02-97ER45651. S.J.L.B. also acknowledges support from the Alfred P. Sloan Foundation as a Sloan Fellow.
High-energy x-ray diffraction experiments were carried out at the Cornell High Energy Synchrotron Source (CHESS). CHESS is operated by the NSF through grant DMR97-13424. Anomalous scattering experiments were carried out at the National Synchrotron Light Source, Brookhaven National Laboratory, which is funded under contract DE-AC02-98CH10886.
KIMS-1999-11-12
gr-qc/9911060

# Quantum mechanical time contradicts the uncertainty principle

Hitoshi Kitada
Department of Mathematical Sciences, University of Tokyo
Komaba, Meguro, Tokyo 153-8914, Japan
e-mail: kitada@kims.ms.u-tokyo.ac.jp
http://kims.ms.u-tokyo.ac.jp/

November 17, 1999

Abstract. The a priori time in conventional quantum mechanics is shown to contradict the uncertainty principle. A possible solution is given.

In classical Newtonian mechanics, one can define the mean velocity $`v`$ by $`v=x/t`$ for a particle that starts from the origin at time $`t=0`$ and arrives at position $`x`$ at time $`t`$, if we assume that the coordinates of space and time are given in an a priori sense. This definition of velocity, and hence that of momentum, does not produce any problems, which assures that in the classical regime there is no problem with the notion of space-time. Also in the classical relativistic view this would be valid, insofar as we discuss the motion of a particle in the observer's coordinates.

Let us consider the quantum mechanical case where the space-time coordinates are given a priori. Then the mean velocity of a particle that starts from a point around the origin at time $`0`$ and arrives at a point around $`x`$ at time $`t`$ should be defined as $`v=x/t`$. The longer the time length $`t`$ is, the more exact this value will be, if the errors of the positions at times $`0`$ and $`t`$ are of the same extent, say $`\delta >0`$, for all $`t`$. This is a definition of the velocity, so it must hold in an exact sense if the definition works at all. Thus we have a precise value of the (mean) momentum

$$p=mv\text{ at a large time }t,\qquad (1)$$

with $`m`$ being the mass of the particle. Note that the mean momentum approaches the momentum at time $`t`$ when $`t\to \infty `$, as the interaction of the particle with other particles vanishes as $`t\to \infty `$.

However, in quantum mechanics the uncertainty principle prohibits the position and momentum from taking exact values simultaneously. For illustration we consider a normalized state $`\psi `$, with $`\|\psi \|=1`$, in the one dimensional case. Then the expectation values of the position and momentum operators $`Q=x`$ and $`P=\frac{\hbar }{i}\frac{d}{dx}`$ in the state $`\psi `$ are given by

$$q=(Q\psi ,\psi ),\qquad p=(P\psi ,\psi )$$

respectively, and their variances are

$$\mathrm{\Delta }q=\|(Q-q)\psi \|,\qquad \mathrm{\Delta }p=\|(P-p)\psi \|.$$

Their product then satisfies the inequality

$$\mathrm{\Delta }q\mathrm{\Delta }p=\|(Q-q)\psi \|\|(P-p)\psi \|\ge |((Q-q)\psi ,(P-p)\psi )|=|(Q\psi ,P\psi )-qp|\ge |\mathrm{Im}((Q\psi ,P\psi )-qp)|=|\mathrm{Im}(Q\psi ,P\psi )|=\left|\frac{1}{2}((PQ-QP)\psi ,\psi )\right|=\left|\frac{1}{2}\frac{\hbar }{i}\right|=\frac{\hbar }{2}.$$

Namely,

$$\mathrm{\Delta }q\mathrm{\Delta }p\ge \frac{\hbar }{2}.\qquad (2)$$

This uncertainty principle means that there is a least value $`\hbar /2(>0)`$ for the product of the variances of position and momentum, so that the independence between position and momentum is assured in an absolute sense: there is no way to let position and momentum correlate exactly as in classical views.
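As a numerical illustration of inequality (2), not taken from the paper itself, the following Python sketch evaluates Δq and Δp for a Gaussian wave packet, for which the product saturates the bound ℏ/2 (in units with ℏ = 1):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
sigma = 1.3
psi = (1.0 / (2.0 * np.pi * sigma**2)) ** 0.25 * np.exp(-x**2 / (4.0 * sigma**2))

prob = np.abs(psi) ** 2
q = np.trapz(x * prob, x)                          # <Q>
dq = np.sqrt(np.trapz((x - q) ** 2 * prob, x))     # position spread

dpsi = np.gradient(psi, x)                         # P psi = (hbar/i) d psi/dx
p = np.trapz(psi * (-1j * hbar) * dpsi, x).real    # <P> (zero here)
p2 = np.trapz(psi * (-(hbar**2)) * np.gradient(dpsi, x), x).real
dp = np.sqrt(p2 - p**2)                            # momentum spread

print(dq * dp)                                     # ~0.5 = hbar/2
```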
Applying (2) to the above case of the particle that starts from the origin at time $`t=0`$ and arrives at $`x`$ at time $`t`$, we have at time $`t`$

$$\mathrm{\Delta }p>\frac{\hbar }{2\delta }\qquad (3)$$

because we have assumed that the error $`\mathrm{\Delta }q`$ of the coordinate $`x`$ of the particle at time $`t`$ is less than $`\delta >0`$. But the argument (1) above shows that $`\mathrm{\Delta }p\to 0`$ when $`t\to \infty `$, contradicting (3). This observation shows that, given a pair of a priori space and time coordinates, quantum mechanics becomes contradictory.

A possible solution would be to regard the independent quantities, the space and momentum operators, as the fundamental quantities of quantum mechanics. As time $`t`$ can be introduced as a ratio $`x/v`$ on the basis of the notions of space and momentum in this view (see , for a precise definition), time is a redundant notion that should not be given a role independent of space and momentum. It might be thought that in this view we lose the relation $`v=x/t`$ that is necessary for the notion of time to be valid, since space and momentum operators are completely independent, as we have seen. However, a relation like $`x/t=v`$ can be found as an approximate relation that holds to the extent that it does not contradict the uncertainty principle (, ). The quantum jumps that are assumed as an axiom on observation in usual quantum mechanics may arise from the classical nature of time, which determines the position and momentum in a precise sense simultaneously. This nature of time may urge one to think that jumps must occur and that consequently one has to observe definite eigenstates. In actuality, what one is able to observe is a scattering process, but not the eigenstates as the final states of the process. Namely, jumps and eigenstates are ghosts arising from the outdated classical notion of time. Or, in more exact words, the usual quantum mechanical theory is an overdetermined system that involves too many independent variables: space, momentum, and time; and in that framework time is not free from the classical image that velocity is defined by $`v=x/t`$.
# The Longitudinal and Transverse Response of the $`{}^{4}He(e,e^{\prime }p)`$ Reaction in the Dip Region

A. Kozlov<sup>1</sup>, K.A. Aniol<sup>5</sup>, P. Bartsch<sup>2</sup>, D. Baumann<sup>2</sup>, W. Bertozzi<sup>3</sup>, R. Böhm<sup>2</sup>, K. Bohinc<sup>10</sup>, J.P. Chen<sup>4</sup>, D. Dale<sup>6</sup>, L. Dennis<sup>8</sup>, S. Derber<sup>2</sup>, M. Ding<sup>2</sup>, M.O. Distler<sup>2</sup>, A. Dooley<sup>8</sup>, P. Dragovitsch<sup>8</sup>, M.B. Epstein<sup>5</sup>, I. Ewald<sup>2</sup>, K.G. Fissum<sup>3</sup>, R.E.J. Florizone<sup>3</sup>, J. Friedrich<sup>2</sup>, J.M. Friedrich<sup>2</sup>, R. Geiges<sup>2</sup>, S. Gilad<sup>3</sup>, P. Jennewein<sup>2</sup>, M. Kahrau<sup>2</sup>, M. Kohl<sup>7</sup>, K.W. Krygier<sup>2</sup>, A. Liesenfeld<sup>2</sup>, D.J. Margaziotis<sup>5</sup>, H. Merkel<sup>2</sup>, P. Merle<sup>2</sup>, U. Müller<sup>2</sup>, R. Neuhausen<sup>2</sup>, T. Pospischil<sup>2</sup>, G. Riccardi<sup>8</sup>, R. Roche<sup>8</sup>, G. Rosner<sup>2</sup>, D. Rowntree<sup>3</sup>, A.J. Sarty<sup>8</sup>, H. Schmieden<sup>2</sup>, S. S̆irca<sup>10</sup>, J.A. Templon<sup>9</sup>, M. Thompson<sup>1</sup>, A. Wagner<sup>2</sup>, Th. Walcher<sup>2</sup>, M. Weis<sup>2</sup>, J. Zhao<sup>3</sup>, Z. Zhou<sup>3</sup>

1 – School of Physics, The University of Melbourne, Parkville 3052, VIC, Australia; 2 – Institut für Kernphysik, Universität Mainz, D-55099 Mainz, Germany; 3 – Laboratory for Nuclear Science, MIT, Cambridge, MA 02139, USA; 4 – TJNAF, Newport News, VA, USA; 5 – Department of Physics and Astr., California St. U., Los Angeles, CA 90032, USA; 6 – Department of Physics and Astr., U. of Kentucky, Lexington, KY 40506, USA; 7 – Institut für Kernphysik, Technische U. Darmstadt, D-64289 Darmstadt, Germany; 8 – Department of Physics, Florida State University, Tallahassee, FL 32306, USA; 9 – Department of Physics and Astronomy, U. of Georgia, Athens, GA 30602, USA; 10 – Institute ”Jos̆ef Stefan”, University of Ljubljana, SI-1001 Ljubljana, Slovenija

A high-resolution study of the $`(e,e^{\prime }p)`$ reaction on $`{}^{4}He`$ was carried out at the Institut für Kernphysik in Mainz, Germany. The high quality, 100% duty factor electron beam and the high-resolution three-spectrometer system of the A1 collaboration were used. The measurements were done in parallel kinematics at a central momentum transfer $`|\stackrel{}{q}|`$ = 685 MeV/c and a central energy transfer $`\omega `$ = 334 MeV, corresponding to a value of the $`y`$-scaling variable of +140 MeV/c. In order to enable a Rosenbluth separation of the longitudinal $`\sigma _L`$ and transverse $`\sigma _T`$ response functions (as defined in ), three measurements at different incident beam energies, corresponding to three values of the virtual photon polarization $`ϵ`$, were performed. The absolute $`(e,e^{\prime }p)`$ cross section for $`{}^{4}He`$ was obtained as a function of missing energy and missing momentum. A distorted spectral function, $`S^{dist}(E_m,p_m)`$, was extracted from the data using the $`cc1`$ prescription for the elementary off-shell e-p cross section (see ref. ), where

$$S^{dist}(p_m,E_m)=\frac{1}{p_p^2\sigma _{ep}}\frac{d^6\sigma }{d\mathrm{\Omega }_ed\mathrm{\Omega }_pdp_edp_p}$$ (1)
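As a minimal illustration (not the collaboration's analysis code), Eq. (1) amounts to a pointwise division of the measured six-fold differential cross section by the kinematic factor and the cc1 off-shell cross section:

```python
def distorted_spectral_function(d6sigma, p_proton, sigma_ep_cc1):
    """S_dist(p_m, E_m) per Eq. (1); inputs are assumed to be arrays
    on a common (E_m, p_m) grid, in consistent units."""
    return d6sigma / (p_proton**2 * sigma_ep_cc1)
```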
## 1 The $`p`$-$`t`$ momentum distributions

For the two-body breakup channel, the experimental results were compared to the theoretical calculations performed by Schiavilla et al. and Forest et al. , to the earlier experimental momentum distributions measured at NIKHEF by van den Brand et al. , and to the new results from MAMI by Florizone , as shown in Fig. 1. For the continuum channel, recent calculations of the $`{}^{4}He`$ spectral function by Efros et al. are presented in Fig. 2B for comparison with the experimental results. A Rosenbluth separation was performed for both the two-body breakup and continuum channels, and preliminary results were obtained. The ratio $`\sigma _L/\sigma _T`$ was determined and compared with predictions (see Fig. 2). The measurements show no significant strength corresponding to the $`(e,e^{\prime }p)`$ reaction channel for missing energy values $`E_m\gtrsim 50`$ MeV, as shown in Fig. 2.

References

1. S. Boffi et al., Nucl. Phys. A435 (1985) 697
2. T. de Forest, Nucl. Phys. A392 (1983) 232
3. R. Schiavilla et al., Nucl. Phys. A449 (1986) 219
4. J.L. Forest et al., Phys. Rev. C54 (1996) 646
5. J.F.J. van den Brand et al., Nucl. Phys. A534 (1991) 637
6. R.E.J. Florizone et al., to be published in Phys. Rev. C
7. V.D. Efros, W. Leidemann, G. Orlandini, Phys. Rev. C58 (1998) 582
# Field-induced segregation of ferromagnetic nano-domains in Pr0.5Sr0.5MnO3, detected by 55Mn NMR

## Abstract

The antiferromagnetic manganite Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> was investigated at low temperature by means of magnetometry and <sup>55</sup>Mn NMR. A field-induced transition to a ferromagnetic state is detected by magnetization measurements at a threshold field of a few tesla. NMR shows that the ferromagnetic phase develops from zero field by the nucleation of microscopic ferromagnetic domains, consisting of an inhomogeneous mixture of tilted and fully aligned parts. At the threshold the NMR spectrum changes discontinuously into that of a homogeneous, fully aligned, ferromagnetic state, suggesting a percolative origin for the ferromagnetic transition.

Manganites R<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> (R = rare earth, A = alkali-earth metal) display correlated magnetic and transport properties, which include a colossal magnetoresistance (CMR) near $`T_C`$ for the metallic ferromagnetic compositions (around $`x=1/3`$) . The complexity of the physics of manganites is witnessed by a very rich phase diagram, which comprises various magnetic structures and regions of phase coexistence at $`x<0.1`$ and at $`x\simeq 0.5`$ . The magneto-transport properties of these materials are generally understood in terms of the double exchange interaction , arising from spin-polarized carriers coupled to localized electronic moments by a strong intra-atomic exchange. The underlying physics, however, is probably more complex, and other competing interactions are relevant. Among these, the narrow bands, nesting effects of the peculiar Fermi surfaces, and the electron-lattice coupling through the Jahn-Teller (JT) active Mn<sup>3+</sup> ion perhaps play a major role . Recently the focus of studies has moved to non-CMR compositions, in particular to the 50% substituted compounds, where the itinerant ferromagnetic (F) state becomes unstable and electronic localization with antiferromagnetic (AF) order takes over at low temperature. Manganites at half band filling display in fact two magnetically ordered states: a F metallic state at $`T_C>T>T_N\simeq 150`$ K, and an AF insulating phase at lower temperature. The AF phase can be accompanied by the ordering of Mn<sup>3+</sup> and Mn<sup>4+</sup> on two distinct sublattices, as in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> and Nd<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub>. We have recently shown that in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> the charge ordered state sets in at $`T_N`$ by the nucleation of mesoscopic AF domains from the ferromagnetic bulk in a first order transition . However, in Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> charge ordering (CO) does not take place and the magnetic structure is of the layered A-type . Both AF ground states can be destroyed by suitably strong applied fields, which restore the metallic F phase: this can be viewed as another regime of CMR. In this paper we present an investigation of the AF-F transition in Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub>, carried out by means of <sup>55</sup>Mn NMR, a.c. susceptibility, and magnetization measurements. The sample is a random assembly of small single crystals obtained by crushing a floating zone specimen . Magnetization and a.c. susceptibility were measured by means of an Oxford Instruments Maglab<sup>2000</sup> system ($`\mu _0H_{dc}=0`$-7 T, $`T=1.5`$-400 K), equipped with a d.c. extraction magnetometer and an a.c.
induction susceptometer. NMR was performed in liquid He at 1.3 K with a home built spectrometer and a variable field superconducting solenoid. The a.c. susceptibility ($`H_{ac}=1`$ Oe, $`\nu =1`$ kHz) in zero d.c. field was measured as a function of temperature. The $`\chi ^{\prime }(T)`$ and $`\chi ^{\prime \prime }(T)`$ curves, reproducible on cooling and warming, are shown in figure 1a. The magnetization curve in a d.c. field of 500 Oe was measured as well, and it reproduces closely the features of $`\chi ^{\prime }(T)`$. The curves clearly show the two magnetic transitions of the sample: the Curie point $`T_C=270`$ K, observed as a steep rise of $`\chi ^{\prime }`$ and a sharp peak in $`\chi ^{\prime \prime }`$, and the F-AF transition at $`T_N\simeq 150`$ K, where the susceptibility drops by two orders of magnitude. This behavior is qualitatively similar to that encountered in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> where, however, in all reported works, a comparatively high remanent susceptibility (approximately 5-20% of the maximum, depending on the author) was found in the CO-AF phase. In the present case the susceptibility saturates below $`T_N`$ at the value $`\chi ^{\prime }=1.5\times 10^{-4}`$ emu/g Oe, only a factor of 10 larger than expected for a simple AF state from the Curie-Weiss law, suggesting a very weak ferromagnetic term. Moreover, no appreciable thermal hysteresis was observed here, again in contrast with La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> . Magnetization at constant temperature as a function of the applied field is shown in fig. 1b for several temperatures below $`T_C`$. At $`T_N<T<T_C`$ an applied field of a few kOe fully saturates the magnetization $`M(H)`$. Below $`T_N`$, the initial slope of $`M(H)`$ drops abruptly, corresponding to the onset of AF order. In both cases the initial d.c. susceptibility $`\chi _{dc}=dM/dH`$ is in good quantitative agreement with $`\chi _{ac}^{\prime }(T)`$. In addition, at $`T<T_N`$ a first order metamagnetic transition takes place at larger fields: $`M(H)`$ deviates from the linear behavior, with a steep rise at a threshold field $`H_\theta (T)`$ (marked by arrows in the figure). This agrees with the results of Tomioka et al. . The threshold field, determined as the knee of the curve, increases with decreasing temperature, as shown in the inset of fig. 1b. The magnetic moment right above $`H_\theta (T)`$ is roughly 2/3 of the apparent saturation value $`\mu _s(T)\simeq 3\mu _B`$/formula unit, which is approached at higher fields. Such a large moment demonstrates that the magnetic order is close to that of a full ferromagnet above $`H_\theta `$. In order to obtain microscopic information on the field induced transition, we employed <sup>55</sup>Mn NMR at 1.3 K in variable magnetic fields as a local probe of the magnetization and of the magnetic structure. The local field experienced by <sup>55</sup>Mn arises from dipolar, transferred, and Fermi contact fields, and is proportional to the electronic moment $`g\mu _B\text{S}`$:

$$\text{B}=\frac{2\pi }{\gamma }g\mu _B𝒜\text{S}+\mu _0\text{H}.$$ (1)

Here $`\mu _0\text{H}`$ is the external field, and $`\gamma /2\pi =10.501`$ MHz/T for <sup>55</sup>Mn. The hyperfine coupling tensor $`𝒜`$ is found to be negative and isotropic within the experimental resolution . With this equation, the resonance frequency determines only the product $`𝒜gS`$. Below, we use the resonance frequencies in homogeneous Mn compounds as a reference to assign local moments and a valence to the different sites in our spectra.
The nuclei of the 3$`\mu _B`$ Mn<sup>4+</sup> ions resonate at low temperatures around 300 MHz in several single valence insulating Mn compounds. Similar frequencies have also been observed in CO manganites . In the conducting CMR compositions $`0.2\le x\le 0.5`$, on the other hand, the higher electronic spin $`gS=4-x`$ yields nuclear resonances at 1.3 K ranging from 400 MHz down to 370 MHz . <sup>55</sup>Mn NMR is also sensitive to the local magnetic structure. The superposition of the external and the internal (hyperfine) field is different in F and AF domains, giving rise to distinct shifts and broadenings of the corresponding resonance lines . In particular, in a F region, where the Mn electronic spins align parallel to the external field above saturation, the NMR resonance frequency shifts according to $`\nu =g\mu _B\left|𝒜\right|S-\gamma \mu _0H/2\pi `$, by eq. (1). Further information is provided by the radio frequency (rf) enhancement $`\eta `$, consisting of an amplification of both the effective driving rf field, $`H_1^{\prime }=\eta H_1`$, and the NMR signal induced in the coil, due to the hyperfine coupling of the electronic magnetization to the nucleus. The enhancement can be estimated from the rf power required for an optimized spin echo excitation . A large $`\eta `$ is typical of ferromagnetically ordered systems. The spin-echo spectra, measured at different applied fields (always after zero field cooling), are plotted in fig. 2, corrected for the NMR sensitivity (proportional to $`\omega ^2`$) and rescaled for clarity by arbitrary factors. The zero field spectrum (bottom of fig. 2) consists of a broad inhomogeneous distribution of hyperfine fields over a range of approximately 10 T, in which two peaks, centered at 370 MHz and 290 MHz, are clearly resolved. In an applied field not exceeding 7 T, the whole spectrum shifts to lower frequencies, while the two peaks get closer. This is evident in fig. 3a, where the mean resonance frequencies, from a fit with two Gaussian components, are plotted as a function of $`H`$. Note that this spectrum differs from that of the F fraction of the CO manganite La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, which consists of a single, much narrower peak (FWHM $`\simeq 2`$ T) centered at 380 MHz. Although the sample is antiferromagnetic at $`H=0`$, the high frequency NMR signal originates entirely from a ferromagnetic fraction, as demonstrated by the sizeable enhancement $`\eta \simeq 100`$ and by the field dependent frequency shifts. The slope of the full line in fig. 3a shows that the high frequency peak shifts with field according to the full nuclear gyromagnetic ratio ($`\mu _0^{-1}|d\nu /dH|=\gamma /2\pi `$, from eq. (1)). This implies that the electronic moments on the Mn sites are constant and fully aligned with the external field, as expected in a saturated ferromagnet. The low frequency peak exhibits only a fractional shift (fig. 3a), which implies a partial alignment of the Mn moments giving rise to this signal. Assuming for the sake of simplicity a constant angle between the external field and the Mn moments, one finds for this angle $`\theta \simeq 65`$ degrees from the slope $`|d\nu /\mu _0dH|=\gamma \mathrm{cos}\theta /2\pi \simeq 4.5`$ MHz/T. We shall refer to this contribution as a tilted F (tF) component and to the former as fully F (fF).
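The tilt angle quoted above follows directly from the measured slope; a back-of-envelope check (numbers are those given in the text):

```python
import numpy as np

gamma_over_2pi = 10.501          # MHz/T for 55Mn
slope_tf = 4.5                   # MHz/T, magnitude of the tF-line shift
theta = np.degrees(np.arccos(slope_tf / gamma_over_2pi))
print(theta)                     # ~64.6 degrees, i.e. theta ~ 65 deg
```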
Its presence may account for the enhanced macroscopic d.c. susceptibility in the AF state discussed above. However, $`I(H)`$ increases rapidly with field, and the intensity ratio of the two peaks remains constant, of order one, independent of the field. The rapid increase of $`I`$ with field rules out an impurity phase. No signal from the majority AF phase is observed, probably due to extremely fast relaxation, as it is suggested by comparison with the related compound La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, where <sup>55</sup>Mn relaxes two orders of magnitude faster in the AF phase than in the F phase . The field induced magnetic transition was easily located at 1.3 K and 7.7(1) T by an abrupt change in $`\chi `$’ and $`\chi `$”which induces a severe detuning and rf-mismatch of the probe head. The transition also shows up in the NMR spectrum, which at 8 T (the topmost in fig. 2) consists of a narrower single high frequency peak, whereas the low frequency peak, still well resolved at 7 T, has completely disappeared. No additional peak was found between 210 and 420 MHz. The mean frequency of the 8 T spectrum lies on the same line of the fF peak in fig. 3a. The tF peak is not recovered by setting the field back to 7 T: the hysteresis demonstrates that this is a first order transition, clearly corresponding to that detected by $`M(H)`$ at higher temperatures. Although instrumental limitations prevented direct verification by magnetometry, the value of 7.7 T at 1.3 K is in good agreement with the $`H_\theta (T)`$ curve extrapolated from magnetization data (see inset to fig.1b). The identification is also supported by the steep rise of the NMR amplitude $`I(H)`$ near 7 T, in qualitative agreement with the $`M(H)`$ curve at the lowest temperature (cfr. fig. 3b and fig. 1b). From our NMR data we can conclude, therefore, that on a microscopic scale the increase of $`M(H)`$ below $`H_\theta `$ is not due to homogeneously increasing induced moments or a field induced homogeneous canting of the AF structure, since both are incompatible with the slope of the fF-line in fig. 3a. Instead, the simultaneous increase in the tF- and fF-line intensities shows that $`M(H)`$ develops by inhomogeneous nucleation of fF- and tF-phases from the AF matrix. The strong correlation between the intensity of the fF- and the tF-line while both change with field by more than an order of magnitude strongly suggests a growth of both phases in spatially connected volumes. It is tempting to associate the two lines with the inner core and with the outer surface layers of ferromagnetic clusters within the AF matrix respectively. At the threshold field the tilted component vanishes, indicating that the ferromagnetic volume fraction becomes homogeneous (AF-regions may still exist). If we follow this idea we may discuss some further consequences of our data for the properties of these clusters. First, the fact that the intensity ratio is constant means that the volume fraction of the mixed phase (tF + fF) increases by growth in the number of clusters rather than in their size. Second, the ratio $`I_{fF}/I_{tf}1`$ implies a very large surface to volume ratio, corresponding to a very small size of the clusters. Assuming for simplicity a cubic shape, the core contains $`(N2)^3`$ unit cells, covered by a layer of $`6(N2)^2`$ unit cells (not counting the edges). The NMR intensity ratio then implies nearly equal volumes or $`N=8`$, that is a size of the core in the order of six lattice constants. 
Finally, comparison with the zero field frequencies of the reference materials described above provides information on the local valence: 370 MHz for the fF-line corresponds to Mn<sup>+3.3≤v≤+3.5</sup> in a metallic ferromagnet, while 290 MHz for the tF-line is close to the value of Mn<sup>+4</sup> in antiferromagnetic insulators. The peculiar nature of these clusters brings to mind a static version of the magnetic polarons often invoked by theories as the excitations of either magnetic JT or magnetic semiconductor systems. Unfortunately, we cannot distinguish from our NMR data between the two cases of a tF core surrounded by a fF layer or, vice versa, the ferromagnet being surrounded by a tilted structure. From a magnetic point of view the second possibility is more intuitive, but it implies some electrostatic overshielding of the core hole state ($`v\simeq +3.5`$) in the surface layer ($`v\simeq +4`$), followed by the surrounding AF ($`v\simeq +3.5`$). In the other case the valence decreases nearly monotonically from the center of the cluster, where Mn<sup>+4</sup> forms an AF structure, canted due to the field and frustrated magnetic bonds, to the fully aligned ferromagnetic surface of the cluster. An interface layer between the fF surface and the surrounding AF matrix might well be unobservable in NMR. In both cases the metamagnetic transition at $`H_\theta `$ indicates a change of topology in this phase. Its coincidence with a large mean magnetic moment strongly suggests the crossing of a percolation threshold by F domains at $`H_\theta `$. This view is also supported by the abrupt increase of the electrical conductivity accompanying the transition . A similar intrinsic phase separation was encountered in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, where a minority F fraction coexists with the majority AF phase at all temperatures below 150 K . In that sample, however, the large thermal and magnetic hysteresis and the single fF peak in the <sup>55</sup>Mn NMR spectrum indicate a bulk F phase. Recent TEM imaging actually showed that the size of the F domains in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> is mesoscopic rather than nanoscopic . In this respect Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> is more similar to low doped La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> ($`x<0.1`$), where the nanoscopic dimension of spontaneously segregated hole-rich F droplets was demonstrated by small angle neutron scattering . It is worth noting that both Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> and under-doped La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> present the same A-type AF structure , whereas the AF phase of La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> is charge ordered CE-type. Such a difference might be relevant: since the CO phase is far more insulating than the A-type phase , the latter may provide a screening mechanism sufficiently effective to cut the long range tails of the Coulomb interactions and to accommodate charged clusters, whereas such a mechanism is ruled out in the insulating CO state. In conclusion, <sup>55</sup>Mn NMR in the AF state of Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> demonstrates the segregation of nanoscopic ferromagnetic clusters, dressed by a modulation of the local spin and charge density at the interface with the host AF matrix. Evidence is provided that the field-induced transition to a ferromagnetic state, detected also by magnetization measurements, is percolative in nature. This work was partially supported by MURST-Cofin 1997 and EPSRC GR/K95802 grants.
Support by Prof. J. Kötzler (IAP, Universität Hamburg) is gratefully acknowledged.